2012-09-01
boxes) using a third-party commercial software component. When creating version 1, it was necessary to enter raw Hypertext Markup Language (HTML) tags ... Markup Language (HTML) web page. Figure 12: Authors create procedures using the Procedure Editor; users run procedures using the ... step presents instructions to the user using formatted text and graphics specified using the Hypertext Markup Language (HTML). Instructions can ...
Designing Multimedia for the Hypertext Markup Language.
ERIC Educational Resources Information Center
Schwier, Richard A.; Misanchuk, Earl R.
Dynamic discussions have begun to emerge concerning the style of presentation on world wide web sites. Some hypertext markup language (HTML) designers seek an intimate and chatty ambience, while others want to project a more professional image. Evaluators see many sites as overdecorated and indecipherable. This paper offers suggestions on selecting…
A Leaner, Meaner Markup Language.
ERIC Educational Resources Information Center
Online & CD-ROM Review, 1997
1997-01-01
In 1996 a working group of the World Wide Web Consortium developed and released a simpler form of markup language, Extensible Markup Language (XML), combining the flexibility of Standard Generalized Markup Language (SGML) and the Web suitability of HyperText Markup Language (HTML). Reviews SGML and discusses XML's suitability for journal…
ERIC Educational Resources Information Center
Lewis, John D.
1998-01-01
Describes XML (extensible markup language), a new language classification submitted to the World Wide Web Consortium that is defined in terms of both SGML (Standard Generalized Markup Language) and HTML (Hypertext Markup Language), specifically designed for the Internet. Limitations of PDF (Portable Document Format) files for electronic journals…
XML: A Language To Manage the World Wide Web. ERIC Digest.
ERIC Educational Resources Information Center
Davis-Tanous, Jennifer R.
This digest provides an overview of XML (Extensible Markup Language), a markup language used to construct World Wide Web pages. Topics addressed include: (1) definition of a markup language, including comparison of XML with SGML (Standard Generalized Markup Language) and HTML (HyperText Markup Language); (2) how XML works, including sample tags,…
XML Content Finally Arrives on the Web!
ERIC Educational Resources Information Center
Funke, Susan
1998-01-01
Explains extensible markup language (XML) and how it differs from hypertext markup language (HTML) and standard generalized markup language (SGML). Highlights include features of XML, including better formatting of documents, better searching capabilities, multiple uses for hyperlinking, and an increase in Web applications; Web browsers; and what…
ERIC Educational Resources Information Center
Buchanan, Larry
1996-01-01
Defines HyperText Markup Language (HTML) as it relates to the World Wide Web (WWW). Describes steps needed to create HTML files on a UNIX system and to make them accessible via the WWW. Presents a list of basic HTML formatting codes and explains the coding used in the author's personal HTML file. (JMV)
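A minimal sketch of the kind of basic HTML formatting codes such a digest lists (the page content here is invented for illustration):

    <html>
    <head><title>My Home Page</title></head>
    <body>
    <h1>Welcome</h1>
    <p>A paragraph with one <b>bold</b> word, an <i>italic</i> word,
    and a link to the <a href="http://www.w3.org/">W3C</a>.</p>
    <ul>
    <li>An unordered list item</li>
    <li>Another list item</li>
    </ul>
    </body>
    </html>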
The World-Wide Web and Mosaic: An Overview for Librarians.
ERIC Educational Resources Information Center
Morgan, Eric Lease
1994-01-01
Provides an overview of the Internet's World-Wide Web (Web), a hypertext system. Highlights include the client/server model; Uniform Resource Locator; examples of software; Web servers versus Gopher servers; HyperText Markup Language (HTML); converting files; Common Gateway Interface; organizing Web information; and the role of librarians in…
106-17 Telemetry Standards Metadata Configuration Chapter 23
2017-07-01
23.2 Metadata Description Language ... Chapter 23, July 2017. Acronyms: HTML, Hypertext Markup Language; MDL, Metadata Description Language; PCM, pulse code modulation; TMATS, Telemetry Attributes Transfer Standard; W3C, World Wide Web Consortium; XML, eXtensible Markup Language; XSD, XML schema document. Telemetry Network Standard
XML: A Publisher's Perspective.
ERIC Educational Resources Information Center
Andrews, Timothy M.
1999-01-01
Explains eXtensible Markup Language (XML) and describes how Dow Jones Interactive is using it to improve the news-gathering and dissemination process through intranets and the World Wide Web. Discusses benefits of using XML, the relationship to HyperText Markup Language (HTML), lack of available software tools and industry support, and future…
Benefits and Pitfalls of Using HTML as a CD-ROM Development Tool.
ERIC Educational Resources Information Center
Misanchuk, Earl R.; Schwier, Richard A.
The hypertext markup language (HTML) used to develop pages for the world wide web also has potential for use in creating some types of multimedia instruction destined for CD-ROMs. After providing a brief overview of HTML, this document presents pros and cons relevant to CD-ROM production. HTML can offer compatibility to both Windows and Macintosh…
Fingerprinting Reverse Proxies Using Timing Analysis of TCP Flows
2013-09-01
Acronyms: FSM, Finite State Machine; HTML, Hypertext Markup Language; HTTP, Hypertext Transfer Protocol; HTTPS, Hypertext Transfer Protocol Secure; ICMP, Internet Control ... This hidden traffic concept supports network access control, security protection through obfuscation, and performance boosts at the Internet facing
How to use the WWW to distribute STI
NASA Technical Reports Server (NTRS)
Roper, Donna G.
1994-01-01
This presentation explains how to use the World Wide Web (WWW) to distribute scientific and technical information as hypermedia. WWW clients and servers use the HyperText Transfer Protocol (HTTP) to transfer documents containing links to other text, graphics, video, and sound. The standard language for these documents is the HyperText Markup Language (HTML). These are simply text files with formatting codes that contain layout information and hyperlinks. HTML documents can be created with any text editor or with one of the publicly available HTML editors or converters. HTML can also include links to available image formats. This presentation is available online at http://sti.larc.nasa.gov/demos/workshop/introtext.html.
Overview of the World Wide Web Consortium (W3C) (SIGs IA, USE).
ERIC Educational Resources Information Center
Daly, Janet
2000-01-01
Provides an overview of a planned session to describe the work of the World Wide Web Consortium, including technical specifications for HTML (Hypertext Markup Language), XML (Extensible Markup Language), CSS (Cascading Style Sheets), and over 20 other Web standards that address graphics, multimedia, privacy, metadata, and other technologies. (LRW)
Making the World Wide Web Accessible to All Students.
ERIC Educational Resources Information Center
Guthrie, Sally A.
2000-01-01
Examines the accessibility of Web sites belonging to 80 colleges of communications and schools of journalism by examining the hypertext markup language (HTML) used to format the pages. Suggests ways to revise the markup of pages to make them more accessible to students with vision, hearing, and mobility problems. Lists resources of the latest…
Internet Resources: Using Web Pages in Social Studies.
ERIC Educational Resources Information Center
Dale, Jack
1999-01-01
Contends that students in social studies classes can utilize Hypertext Markup Language (HTML) as a presentation and collaborative tool by developing websites. Presents two activities where students submitted webpages for country case studies and created a timeline for the French Revolution. Describes how to use HTML by discussing the various tags.…
Web-Writing in One Minute--and Beyond.
ERIC Educational Resources Information Center
Hughes, Kenneth
This paper describes how librarians can teach patrons the basics of hypertext markup language (HTML) so that patrons can publish their own homepages on the World Wide Web. With proper use of handouts and practice time afterwards, the three basics of HTML can be conveyed in only 60 seconds. The three basics are: the basic template of Web tags, used…
Automating testbed documentation and database access using World Wide Web (WWW) tools
NASA Technical Reports Server (NTRS)
Ames, Charles; Auernheimer, Brent; Lee, Young H.
1994-01-01
A method for providing uniform transparent access to disparate distributed information systems was demonstrated. A prototype testing interface was developed to access documentation and information using publicly available hypermedia tools. The prototype gives testers a uniform, platform-independent user interface to on-line documentation, user manuals, and mission-specific test and operations data. Mosaic was the common user interface, and HTML (Hypertext Markup Language) provided hypertext capability.
2017-02-01
Acronyms: entity relationship (diagram); EwID, Enterprise-wide Identifier; FMID, Force Management Identifier; GFM, Global Force Management; HTML, Hypertext Markup Language ... Schema in the Global Force Management Data Initiative, by Frederick S Brundick, Computing and Information Sciences Directorate, ARL. Approved for public release; distribution is unlimited.
Web GIS in practice VIII: HTML5 and the canvas element for interactive online mapping.
Boulos, Maged N Kamel; Warren, Jeffrey; Gong, Jianya; Yue, Peng
2010-03-03
HTML5 is being developed as the next major revision of HTML (Hypertext Markup Language), the core markup language of the World Wide Web. It aims at reducing the need for proprietary, plug-in-based rich Internet application (RIA) technologies such as Adobe Flash. The canvas element is part of HTML5 and is used to draw graphics using scripting (e.g., JavaScript). This paper introduces Cartagen, an open-source, vector-based, client-side framework for rendering plug-in-free, offline-capable, interactive maps in native HTML5 on a wide range of Web browsers and mobile phones. Cartagen was developed at MIT Media Lab's Design Ecology group. Potential applications of the technology as an enabler for participatory online mapping include mapping real-time air pollution, citizen reporting, and disaster response, among many other possibilities. PMID:20199681
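A minimal sketch of the canvas element the abstract describes, drawing one vector shape with JavaScript; the element ID and coordinates are invented, not taken from Cartagen:

    <!DOCTYPE html>
    <html>
    <body>
    <canvas id="map" width="300" height="150"></canvas>
    <script>
    // Obtain the 2-D drawing context and render a filled background
    // plus a stroked polygon -- the kind of client-side vector drawing
    // a plug-in-free map renderer performs for each feature.
    var ctx = document.getElementById('map').getContext('2d');
    ctx.fillStyle = '#d0e0f0';
    ctx.fillRect(0, 0, 300, 150);
    ctx.beginPath();
    ctx.moveTo(40, 110);
    ctx.lineTo(120, 30);
    ctx.lineTo(220, 70);
    ctx.closePath();
    ctx.stroke();
    </script>
    </body>
    </html>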
Data Archival and Retrieval Enhancement (DARE) Metadata Modeling and Its User Interface
NASA Technical Reports Server (NTRS)
Hyon, Jason J.; Borgen, Rosana B.
1996-01-01
The Defense Nuclear Agency (DNA) has acquired terabytes of valuable data which need to be archived and effectively distributed to the entire nuclear weapons effects community and others ... This paper describes the DARE (Data Archival and Retrieval Enhancement) metadata model and explains how it is used as a source for generating HyperText Markup Language (HTML) or Standard Generalized Markup Language (SGML) documents for access through web browsers such as Netscape.
Computer Literacy and Non-IS Majors
ERIC Educational Resources Information Center
Thomas, Jennifer D. E.; Blackwood, Martina
2010-01-01
This paper presents an investigation of non-Information Systems (IS) majors' perceptions and performance when enrolled in a required introductory Computer Information Systems course. Students of various academic backgrounds were taught Excel, Hypertext Markup Language (HTML), JavaScript and computer literacy in a 14-week introductory course, in…
Home Page, Sweet Home Page: Creating a Web Presence.
ERIC Educational Resources Information Center
Falcigno, Kathleen; Green, Tim
1995-01-01
Focuses primarily on design issues and practical concerns involved in creating World Wide Web documents for use within an organization. Concerns for those developing Web home pages are: learning HyperText Markup Language (HTML); defining customer group; allocating staff resources for maintenance of documents; providing feedback mechanism for…
The place of SGML and HTML in building electronic patient records.
Pitty, D; Gordon, C; Reeves, P; Capey, A; Vieyra, P; Rickards, T
1997-01-01
The authors are concerned that, although popular, SGML (Standard Generalized Markup Language) is only one approach to capturing, storing, viewing and exchanging healthcare information and does not provide a suitable paradigm for solving most of the problems associated with paper-based patient record systems. Although a discussion of the relative merits of SGML and HTML (HyperText Markup Language) may be interesting, we feel such a discussion avoids the real issues associated with the most appropriate way to model, represent, and store electronic patient information in order to solve healthcare problems, and therefore the medical informatics community should first concern itself with these issues. The paper substantiates this viewpoint and concludes with some suggestions of how progress can be made.
Guide to the Internet. The world wide web.
Pallen, M.
1995-01-01
The world wide web provides a uniform, user friendly interface to the Internet. Web pages can contain text and pictures and are interconnected by hypertext links. The addresses of web pages are recorded as uniform resource locators (URLs), transmitted by hypertext transfer protocol (HTTP), and written in hypertext markup language (HTML). Programs that allow you to use the web are available for most operating systems. Powerful on line search engines make it relatively easy to find information on the web. Browsing through the web--"net surfing"--is both easy and enjoyable. Contributing to the web is not difficult, and the web opens up new possibilities for electronic publishing and electronic journals. PMID:8520402
Designing a Virtual Classroom for Distance Learning Students through the Internet.
ERIC Educational Resources Information Center
Bradshaw, Allen
Advantages to using the Internet to deliver instruction include the fact that Hypertext Markup Language (HTML) can be accessed on any computer, broadening the student base to anyone with an Internet browser and a PPP (Point-to-Point Protocol) account. In addition, instructions, lectures, and examples can be linked together for use as students need…
Development and evaluation of a dynamic web-based application.
Hsieh, Yichuan; Brennan, Patricia Flatley
2007-10-11
Traditional consumer health informatics (CHI) applications developed for the lay public on the Web were commonly written in Hypertext Markup Language (HTML). As genetics knowledge advances rapidly and information must be updated in a timely fashion, a different content structure is needed to facilitate information delivery. This poster will present the process of developing a dynamic database-driven Web CHI application.
[Radiology information system using HTML, JavaScript, and Web server].
Sone, M; Sasaki, M; Oikawa, H; Yoshioka, K; Ehara, S; Tamakawa, Y
1997-12-01
We have developed a radiology information system using intranet techniques, including hypertext markup language, JavaScript, and a Web server. JavaScript made it possible to develop an easy-to-use application, as well as to reduce network traffic and the load on the server. The system we have developed is inexpensive and flexible, and its development and maintenance are much easier than with the previous system.
ADASS Web Database XML Project
NASA Astrophysics Data System (ADS)
Barg, M. I.; Stobie, E. B.; Ferro, A. J.; O'Neil, E. J.
In the spring of 2000, at the request of the ADASS Program Organizing Committee (POC), we began organizing information from previous ADASS conferences in an effort to create a centralized database. The beginnings of this database originated from data (invited speakers, participants, papers, etc.) extracted from HyperText Markup Language (HTML) documents from past ADASS host sites. Unfortunately, not all HTML documents are well formed and parsing them proved to be an iterative process. It was evident at the beginning that if these Web documents were organized in a standardized way, such as XML (Extensible Markup Language), the processing of this information across the Web could be automated, more efficient, and less error prone. This paper will briefly review the many programming tools available for processing XML, including Java, Perl and Python, and will explore the mapping of relational data from our MySQL database to XML.
NASA Technical Reports Server (NTRS)
Ullman, Richard; Bane, Bob; Yang, Jingli
2008-01-01
A shell script has been written as a means of automatically making HDF-EOS-formatted data sets available via the World Wide Web. ("HDF-EOS" and variants thereof are defined in the first of the two immediately preceding articles.) The shell script chains together some software tools developed by the Data Usability Group at Goddard Space Flight Center to perform the following actions: extract metadata in Object Definition Language (ODL) from an HDF-EOS file; convert the metadata from ODL to Extensible Markup Language (XML); reformat the XML metadata into human-readable Hypertext Markup Language (HTML); publish the HTML metadata and the original HDF-EOS file to a Web server and an Open-source Project for a Network Data Access Protocol (OPeNDAP) server computer; and reformat the XML metadata and submit the resulting file to the EOS Clearinghouse, which is a Web-based metadata clearinghouse that facilitates searching for, and exchange of, Earth-science data.
2017-11-01
Fig. 10: Build executable code. Fig. 11: 3DWF GUI's main web ... can be designed in any Windows operating system with internet access via Microsoft's Internet Explorer (IE) web browser. For this particular project ... Therefore, it is advised to have network security safeguards in place and to operate only on a trusted PC. The GUI's Hypertext Markup Language (HTML) web
2014-03-01
Humanitarian Assistance and Disaster Relief; HTML, HyperText Markup Language; IA, Information Assurance; IAI, Israel Aerospace Industries; IASA, Information ... decision maker at the Command and Control "mini cloud" was of utmost interest. This discussion not only confirmed the need to have information ... (2) monitoring for specific cyber attacks on a specified system, (3) alerting information of interest to an operator, and finally (4) allowing the
Mercury Shopping Cart Interface
NASA Technical Reports Server (NTRS)
Pfister, Robin; McMahon, Joe
2006-01-01
Mercury Shopping Cart Interface (MSCI) is a reusable component of the Power User Interface 5.0 (PUI) program described in another article. MSCI is a means of encapsulating the logic and information needed to describe an orderable item consistent with Mercury Shopping Cart service protocol. Designed to be used with Web-browser software, MSCI generates Hypertext Markup Language (HTML) pages on which ordering information can be entered. MSCI comprises two types of Practical Extraction and Report Language (PERL) modules: template modules and shopping-cart logic modules. Template modules generate HTML pages for entering the required ordering details and enable submission of the order via a Hypertext Transfer Protocol (HTTP) post. Shopping cart modules encapsulate the logic and data needed to describe an individual orderable item to the Mercury Shopping Cart service. These modules evaluate information entered by the user to determine whether it is sufficient for the Shopping Cart service to process the order. Once an order has been passed from MSCI to a deployed Mercury Shopping Cart server, there is no further interaction with the user.
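The abstract does not reproduce MSCI's generated markup; a hypothetical fragment of the kind of HTML page a template module might emit, submitting ordering details via an HTTP post, is sketched below (the field names and action URL are invented):

    <form method="post" action="/cgi-bin/mercury-cart">
    <!-- Ordering details entered by the user; the shopping-cart
         logic module would check these before accepting the order. -->
    Data set: <input name="dataset" type="text">
    Output format: <select name="format">
    <option>HDF</option>
    <option>GeoTIFF</option>
    </select>
    <input type="submit" value="Add to cart">
    </form>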
TOPS On-Line: Automating the Construction and Maintenance of HTML Pages
NASA Technical Reports Server (NTRS)
Jones, Kennie H.
1994-01-01
After the Technology Opportunities Showcase (TOPS) in October 1993, Langley Research Center's (LaRC) Information Systems Division (ISD) accepted the challenge to preserve the investment in information assembled in the TOPS exhibits by establishing a data base. Following the lead of several people at LaRC and others around the world, the HyperText Transfer Protocol (HTTP) server and Mosaic were the obvious tools of choice for implementation. Initially, some TOPS exhibitors began the conventional approach of constructing HyperText Markup Language (HTML) pages of their exhibits as input to Mosaic. Considering the number of pages to construct, a better approach was conceived that would automate the construction of pages. This approach allowed completion of the data base construction in a shorter period of time using fewer resources than would have been possible with the conventional approach. It also provided flexibility for the maintenance and enhancement of the data base. Since that time, this approach has been used to automate construction of other HTML data bases. Through these experiences, it is concluded that the most effective use of the HTTP/Mosaic technology will require better tools and techniques for creating, maintaining, and managing the HTML pages. The development and use of these tools and techniques are the subject of this document.
A comprehensive strategy for designing a Web-based medical curriculum.
Zucker, J.; Chase, H.; Molholt, P.; Bean, C.; Kahn, R. M.
1996-01-01
In preparing for a full featured online curriculum, it is necessary to develop scalable strategies for software design that will support the pedagogical goals of the curriculum and which will address the issues of acquisition and updating of materials, of robust content-based linking, and of integration of the online materials into other methods of learning. A complete online curriculum, as distinct from an individual computerized module, must provide dynamic updating of both content and structure and an easy pathway from the professor's notes to the finished online product. At the College of Physicians and Surgeons, we are developing such strategies including a scripted text conversion process that uses the Hypertext Markup Language (HTML) as structural markup rather than as display markup, automated linking by the use of relational databases and the Unified Medical Language System (UMLS), integration of text, images, and multimedia along with interface designs which promote multiple contexts and collaborative study. PMID:8947624
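The structural-versus-display distinction the authors draw can be sketched as follows (the fragment is illustrative, not taken from the curriculum):

    <!-- Structural markup: the tags say what the text is. -->
    <h2>Glycolysis</h2>
    <p>Occurs in the <em>cytoplasm</em> of the cell.</p>

    <!-- Display markup: the tags say only how the text should look. -->
    <font size="5"><b>Glycolysis</b></font>
    <p>Occurs in the <i>cytoplasm</i> of the cell.</p>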
A radiology department intranet: development and applications.
Willing, S J; Berland, L L
1999-01-01
An intranet is a "private Internet" that uses the protocols of the World Wide Web to share information resources within a company or with the company's business partners and clients. The hardware requirements for an intranet begin with a dedicated Web server permanently connected to the departmental network. The heart of a Web server is the hypertext transfer protocol (HTTP) service, which receives a page request from a client's browser and transmits the page back to the client. Although knowledge of hypertext markup language (HTML) is not essential for authoring a Web page, a working familiarity with HTML is useful, as is knowledge of programming and database management. Security can be ensured by using scripts to write information in hidden fields or by means of "cookies." Interfacing databases and database management systems with the Web server and conforming the user interface to HTML syntax can be achieved by means of the common gateway interface (CGI), Active Server Pages (ASP), or other methods. An intranet in a radiology department could include the following types of content: on-call schedules, work schedules and a calendar, a personnel directory, resident resources, memorandums and discussion groups, software for a radiology information system, and databases.
Katzman, G L
2001-03-01
The goal of the project was to create a method by which an in-house digital teaching file could be constructed that was simple, inexpensive, independent of hypertext markup language (HTML) restrictions, and appears identical on multiple platforms. To accomplish this, Microsoft PowerPoint and Adobe Acrobat were used in succession to assemble digital teaching files in the Acrobat portable document file format. They were then verified to appear identical on computers running Windows, Macintosh operating systems (OS), and the Silicon Graphics Unix-based OS, either as a free-standing file using Acrobat Reader software or from within a browser window using the Acrobat browser plug-in. This latter display method yields a file viewed through a browser window, yet remains independent of underlying HTML restrictions, which may confer an advantage over simple HTML teaching file construction. Thus, HTML-distributed, Adobe Acrobat-generated WWW documents may be a viable alternative for digital teaching file construction and distribution.
[Development of quality assurance/quality control web system in radiotherapy].
Okamoto, Hiroyuki; Mochizuki, Toshihiko; Yokoyama, Kazutoshi; Wakita, Akihisa; Nakamura, Satoshi; Ueki, Heihachi; Shiozawa, Keiko; Sasaki, Koji; Fuse, Masashi; Abe, Yoshihisa; Itami, Jun
2013-12-01
Our purpose is to develop a QA/QC (quality assurance/quality control) web system using HTML (HyperText Markup Language) and a server-side scripting language, PHP (Hypertext Preprocessor), which can be useful as a tool to share information about QA/QC in radiotherapy. The system proposed in this study can be easily built in one's own institute, because HTML can be easily handled. There are two desired functions in a QA/QC web system: (i) to review the results of QA/QC for a radiotherapy machine, manuals, and reports necessary for routinely performing radiotherapy through this system; by disclosing the results, transparency can be maintained; and (ii) to reveal a protocol for QA/QC in one's own institute using pictures and movies relating to QA/QC for simplicity's sake, which can also be used as an educational tool for junior radiation technologists and medical physicists. By using this system, not only administrators, but also all staff involved in radiotherapy, can obtain information about the conditions and accuracy of treatment machines through the QA/QC web system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
North, Michael J.
SchemaOnRead provides tools for implementing schema-on-read including a single function call (e.g., schemaOnRead("filename")) that reads text (TXT), comma separated value (CSV), raster image (BMP, PNG, GIF, TIFF, and JPG), R data (RDS), HDF5, NetCDF, spreadsheet (XLS, XLSX, ODS, and DIF), Weka Attribute-Relation File Format (ARFF), Epi Info (REC), Pajek network (PAJ), R network (NET), Hypertext Markup Language (HTML), SPSS (SAV), Systat (SYS), and Stata (DTA) files. It also recursively reads folders (e.g., schemaOnRead("folder")), returning a nested list of the contained elements.
ERIC Educational Resources Information Center
Wall, C. Edward; And Others
1995-01-01
Discusses the integration of Standard Generalized Markup Language, Hypertext Markup Language, and MARC format to parse classified analytical bibliographies. Use of the resulting electronic knowledge constructs in local library systems as maps of a specified subset of resources is discussed, and an example is included. (LRW)
Catalogue of HI PArameters (CHIPA)
NASA Astrophysics Data System (ADS)
Saponara, J.; Benaglia, P.; Koribalski, B.; Andruchow, I.
2015-08-01
The catalogue of HI parameters of galaxies (CHIPA) is the natural continuation of the compilation by M.C. Martin in 1998. CHIPA provides the most important parameters of nearby galaxies derived from observations of the neutral hydrogen line. The catalogue contains information on 1400 galaxies across the sky and of different morphological types. Parameters such as the optical diameter of the galaxy, the blue magnitude, the distance, the morphological type, and the HI extension are listed, among others. Maps of the HI distribution, velocity, and velocity dispersion can also be displayed in some cases. The main objective of this catalogue is to facilitate bibliographic queries through a database accessible from the internet that will be available in 2015 (the website is under construction). The database was built using the open-source MySQL relational database management system (SQL, Structured Query Language), while the website was built with HTML (Hypertext Markup Language) and PHP (Hypertext Preprocessor).
A Platform-Independent Plugin for Navigating Online Radiology Cases.
Balkman, Jason D; Awan, Omer A
2016-06-01
Software methods that enable navigation of radiology cases on various digital platforms differ between handheld devices and desktop computers. This has resulted in poor compatibility of online radiology teaching files across mobile smartphones, tablets, and desktop computers. A standardized, platform-independent, or "agnostic" approach for presenting online radiology content was produced in this work by leveraging modern hypertext markup language (HTML) and JavaScript web software technology. We describe the design and evaluation of this software, demonstrate its use across multiple viewing platforms, and make it publicly available as a model for future development efforts.
iBIOMES Lite: Summarizing Biomolecular Simulation Data in Limited Settings
2015-01-01
As the amount of data generated by biomolecular simulations dramatically increases, new tools need to be developed to help manage this data at the individual investigator or small research group level. In this paper, we introduce iBIOMES Lite, a lightweight tool for biomolecular simulation data indexing and summarization. The main goal of iBIOMES Lite is to provide a simple interface to summarize computational experiments in a setting where the user might have limited privileges and limited access to IT resources. A command-line interface allows the user to summarize, publish, and search local simulation data sets. Published data sets are accessible via static hypertext markup language (HTML) pages that summarize the simulation protocols and also display data analysis graphically. The publication process is customized via extensible markup language (XML) descriptors while the HTML summary template is customized through extensible stylesheet language (XSL). iBIOMES Lite was tested on different platforms and at several national computing centers using various data sets generated through classical and quantum molecular dynamics, quantum chemistry, and QM/MM. The associated parsers currently support AMBER, GROMACS, Gaussian, and NWChem data set publication. The code is available at https://github.com/jcvthibault/ibiomes. PMID:24830957
Home Page: The Mode of Transport through the Information Superhighway
NASA Technical Reports Server (NTRS)
Lujan, Michelle R.
1995-01-01
The purpose of the project with the Aeroacoustics Branch was to create and submit a home page for the internet containing branch information. In order to do this, one must also become familiar with the way that the internet operates. Learning HyperText Markup Language (HTML) and gaining the ability to create a document using this language were the final objectives in order to place a home page on the internet (World Wide Web). A manual of instructions regarding maintenance of the home page, and how to keep it up to date, was also necessary in order to provide branch members with the opportunity to make any pertinent changes.
Just tell me what you want!: the promise and perils of rapid prototyping with the World Wide Web.
Cimino, J J; Socratous, S A
1996-01-01
Construction of applications using the World Wide Web architecture and Hypertext Markup Language (HTML) documents is relatively simple. We are exploring this approach with an application, called PolyMed, now in use by surgical residents for one year. We monitored use and obtained user feedback to develop new features and eliminate undesirable ones. The system has been used to keep track of over 4,200 patients. We predicted several advantages and disadvantages to this approach to prototyping clinical applications. Our experience confirms some advantages (ease of development and customization, ability to exploit non-Web system components, and simplified user interface design) and disadvantages (lack of database management services). Some predicted disadvantages failed to materialize (difficulty modeling a clinical application with hypertext and inconveniences associated with the "connectionless" nature of the Web). We were disappointed to find that while integration of external Web applications (such as Medline) into our application was easy, our users did not find it useful. PMID:8947759
Incorporating intelligence into structured radiology reports
NASA Astrophysics Data System (ADS)
Kahn, Charles E.
2014-03-01
The new standard for radiology reporting templates being developed through the Integrating the Healthcare Enterprise (IHE) and DICOM organizations defines the storage and exchange of reporting templates as Hypertext Markup Language version 5 (HTML5) documents. The use of HTML5 enables the incorporation of "dynamic HTML," in which documents can be altered in response to their content. HTML5 documents can employ JavaScript, the HTML Document Object Model (DOM), and external web services to create intelligent reporting templates. Several reporting templates were created to demonstrate the use of scripts to perform in-template calculations and decision support. For example, a template for adrenal CT was created to compute contrast washout percentage from input values of precontrast, dynamic postcontrast, and delayed adrenal nodule attenuation values; the washout value can be used to classify an adrenal nodule as a benign cortical adenoma. Dynamic templates were developed to compute volumes and apply diagnostic criteria, such as those for determination of internal carotid artery stenosis. Although reporting systems need not use a web browser to render the templates or their contents, the use of JavaScript creates innumerable opportunities to construct highly sophisticated HTML5 reporting templates. This report demonstrates the ability to incorporate dynamic content to enhance the use of radiology reporting templates.
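A minimal sketch of such an in-template calculation, computing absolute washout as (dynamic - delayed) / (dynamic - unenhanced) x 100 with JavaScript; the element names are invented, and the 60% adenoma threshold noted in the comment follows common practice rather than the paper's text:

    <!DOCTYPE html>
    <html>
    <body>
    Unenhanced (HU): <input id="pre" type="number"><br>
    Dynamic (HU): <input id="dyn" type="number"><br>
    Delayed (HU): <input id="del" type="number"><br>
    Washout: <span id="out"></span>
    <script>
    // Recompute absolute percentage washout whenever an input changes;
    // a value of 60% or more favors a benign cortical adenoma.
    function washout() {
      var pre = +document.getElementById('pre').value;
      var dyn = +document.getElementById('dyn').value;
      var del = +document.getElementById('del').value;
      var apw = 100 * (dyn - del) / (dyn - pre);
      document.getElementById('out').textContent = apw.toFixed(1) + '%';
    }
    ['pre', 'dyn', 'del'].forEach(function (id) {
      document.getElementById(id).oninput = washout;
    });
    </script>
    </body>
    </html>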
Adding Hierarchical Objects to Relational Database General-Purpose XML-Based Information Managements
NASA Technical Reports Server (NTRS)
Lin, Shu-Chun; Knight, Chris; La, Tracy; Maluf, David; Bell, David; Tran, Khai Peter; Gawdiak, Yuri
2006-01-01
NETMARK is a flexible, high-throughput software system for managing, storing, and rapidly searching unstructured and semi-structured documents. NETMARK transforms such documents from their original highly complex, constantly changing, heterogeneous data formats into well-structured, common data formats using Hypertext Markup Language (HTML) and/or Extensible Markup Language (XML). The software implements an object-relational database system that combines the best practices of the relational model, utilizing Structured Query Language (SQL), with those of the object-oriented, semantic database model for creating complex data. In particular, NETMARK takes advantage of the Oracle 8i object-relational database model, using physical-address data types for very efficient keyword searches of records across both context and content. NETMARK also supports multiple international standards, such as WebDAV for drag-and-drop file management and SOAP for integrated information management using Web services. The document-organization and -searching capabilities afforded by NETMARK are likely to make this software attractive for use in disciplines as diverse as science, auditing, and law enforcement.
Web-Based Collaborative Publications System: R&Tserve
NASA Technical Reports Server (NTRS)
Abrams, Steve
1997-01-01
R&Tserve is a publications system based on 'commercial, off-the-shelf' (COTS) software that provides a persistent, collaborative workspace for authors and editors to support the entire publication development process from initial submission, through iterative editing in a hierarchical approval structure, and on to 'publication' on the WWW. It requires no specific knowledge of the WWW (beyond basic use) or HyperText Markup Language (HTML). Graphics and URLs are automatically supported. The system includes a transaction archive, a comments utility, help functionality, automated graphics conversion, automated table generation, and an email-based notification system. It may be configured and administered via the WWW and can support publications ranging from single page documents to multiple-volume 'tomes'.
Test Generator for MATLAB Simulations
NASA Technical Reports Server (NTRS)
Henry, Joel
2011-01-01
MATLAB Automated Test Tool, version 3.0 (MATT 3.0) is a software package that provides automated tools that reduce the time needed for extensive testing of simulation models that have been constructed in the MATLAB programming language by use of the Simulink and Real-Time Workshop programs. MATT 3.0 runs on top of the MATLAB engine application-program interface to communicate with the Simulink engine. MATT 3.0 automatically generates source code from the models, generates custom input data for testing both the models and the source code, and generates graphs and other presentations that facilitate comparison of the outputs of the models and the source code for the same input data. Context-sensitive and fully searchable help is provided in HyperText Markup Language (HTML) format.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carnes, E.T.; Truett, D.F.; Truett, L.F.
In the handful of years since the World Wide Web (WWW or Web) came into being, Web sites have developed at an astonishing rate. With the influx of Web pages comes a disparity of site types, including personal homepages, commercial sales sites, and educational data. The variety of sites and the deluge of information contained on the Web exemplify the individual nature of the WWW. Whereas some people argue that it is this eclecticism which gives the Web its charm, we propose that sites which are repositories of technical data would benefit from standardization. This paper proffers a methodology for publishing ecological research on the Web. The template we describe uses capabilities of HTML (the HyperText Markup Language) to enhance the value of the traditional scientific paper.
Kiuchi, T; Kaihara, S
1997-02-01
The World Wide Web-based form is a promising method for the construction of an on-line data collection system for clinical and epidemiological research. It is, however, laborious to prepare a common gateway interface (CGI) program for each project, which the World Wide Web server needs in order to handle the submitted data. In medicine, it is even more laborious because the CGI program must check deficits, type, ranges, and logical errors (bad combinations of data) of entered data for quality assurance, as well as data length and meta-characters of the entered data to enhance the security of the server. We have extended the specification of the hypertext markup language (HTML) form to accommodate information necessary for such data checking, and we have developed software named AUTOFORM for this purpose. The software automatically analyzes the extended HTML form and generates the corresponding ordinary HTML form, 'Makefile', and C source of CGI programs. The resultant CGI program checks the data entered through the HTML form, records it in a computer, and returns it to the end-user. AUTOFORM drastically reduces the burden of developing a World Wide Web-based data entry system and allows CGI programs to be prepared more securely and reliably than had they been written from scratch.
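AUTOFORM's extended attributes are not reproduced in the abstract; the idea of declaring type, range, and requiredness in the markup itself can be sketched with standard HTML5 validation attributes (a later, analogous mechanism, not AUTOFORM's own syntax):

    <form method="post" action="/cgi-bin/entry">
    <!-- Constraints declared in the markup, so entries can be
         checked automatically before the server accepts them. -->
    Age: <input name="age" type="number" min="0" max="120" required>
    Weight (kg): <input name="weight" type="number" min="1" max="300" required>
    <input type="submit" value="Submit">
    </form>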
17 CFR 232.105 - Limitation on use of HTML documents and hypertext links.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Requirements § 232.105 Limitation on use of HTML documents and hypertext links. (a) Electronic filers must ... EDGAR database on the Commission's public web site (www.sec.gov). Electronic filers also may include ...
XML — an opportunity for
NASA Astrophysics Data System (ADS)
Houlding, Simon W.
2001-08-01
Extensible markup language (XML) is a recently introduced meta-language standard on the Web. It provides the rules for development of metadata (markup) standards for information transfer in specific fields. XML allows development of markup languages that describe what information is rather than how it should be presented. This allows computer applications to process the information in intelligent ways. In contrast, hypertext markup language (HTML), which fuelled the initial growth of the Web, is a metadata standard concerned exclusively with presentation of information. Besides its potential for revolutionizing Web activities, XML provides an opportunity for development of meaningful data standards in specific application fields. The rapid endorsement of XML by science, industry and e-commerce has already spawned new metadata standards in such fields as mathematics, chemistry, astronomy, multi-media and Web micro-payments. Development of XML-based data standards in the geosciences would significantly reduce the effort currently wasted on manipulating and reformatting data between different computer platforms and applications and would ensure compatibility with the new generation of Web browsers. This paper explores the evolution, benefits and status of XML and related standards in the more general context of Web activities and uses this as a platform for discussion of its potential for development of data standards in the geosciences. Some of the advantages of XML are illustrated by a simple, browser-compatible demonstration of XML functionality applied to a borehole log dataset. The XML dataset and the associated stylesheet and schema declarations are available for FTP download.
17 CFR 232.105 - Limitation on use of HTML documents and hypertext links.
Code of Federal Regulations, 2014 CFR
2014-04-01
... submit the following documents in ASCII: Form N-SAR (§ 274.101 of this chapter) and Form 13F (§ 249.325... exhibits to Form N-SAR in HTML. (b) Electronic filers may not include in any HTML document hypertext links... documents within the current submission and to documents previously filed electronically and located in the...
17 CFR 232.105 - Limitation on use of HTML documents and hypertext links.
Code of Federal Regulations, 2013 CFR
2013-04-01
... submit the following documents in ASCII: Form N-SAR (§ 274.101 of this chapter) and Form 13F (§ 249.325... exhibits to Form N-SAR in HTML. (b) Electronic filers may not include in any HTML document hypertext links... documents within the current submission and to documents previously filed electronically and located in the...
17 CFR 232.105 - Limitation on use of HTML documents and hypertext links.
Code of Federal Regulations, 2011 CFR
2011-04-01
... submit the following documents in ASCII: Form N-SAR (§ 274.101 of this chapter) and Form 13F (§ 249.325... exhibits to Form N-SAR in HTML. (b) Electronic filers may not include in any HTML document hypertext links... documents within the current submission and to documents previously filed electronically and located in the...
17 CFR 232.105 - Limitation on use of HTML documents and hypertext links.
Code of Federal Regulations, 2012 CFR
2012-04-01
... submit the following documents in ASCII: Form N-SAR (§ 274.101 of this chapter) and Form 13F (§ 249.325... exhibits to Form N-SAR in HTML. (b) Electronic filers may not include in any HTML document hypertext links... documents within the current submission and to documents previously filed electronically and located in the...
Informatics in radiology (infoRAD): HTML and Web site design for the radiologist: a primer.
Ryan, Anthony G; Louis, Luck J; Yee, William C
2005-01-01
A Web site has enormous potential as a medium for the radiologist to store, present, and share information in the form of text, images, and video clips. With a modest amount of tutoring and effort, designing a site can be as painless as preparing a Microsoft PowerPoint presentation. The site can then be used as a hub for the development of further offshoots (eg, Web-based tutorials, storage for a teaching library, publication of information about one's practice, and information gathering from a wide variety of sources). By learning the basics of hypertext markup language (HTML), the reader will be able to produce a simple and effective Web page that permits display of text, images, and multimedia files. The process of constructing a Web page can be divided into five steps: (a) creating a basic template with formatted text, (b) adding color, (c) importing images and multimedia files, (d) creating hyperlinks, and (e) uploading one's page to the Internet. This Web page may be used as the basis for a Web-based tutorial comprising text documents and image files already in one's possession. Finally, there are many commercially available packages for Web page design that require no knowledge of HTML.
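A minimal sketch of steps (a) through (d) in a single page (the case title, colors, file names, and link target are invented):

    <html>
    <head><title>Teaching File</title></head>
    <body bgcolor="#ffffff" text="#000080">  <!-- (b) adding color -->
    <h1>Case 12: Pneumothorax</h1>  <!-- (a) formatted text -->
    <img src="case12_cxr.jpg" alt="Chest radiograph">  <!-- (c) an imported image -->
    <p><a href="case13.html">Next case</a></p>  <!-- (d) a hyperlink -->
    </body>
    </html>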
Asia-Pacific POPIN workshop on Internet.
1996-01-01
This brief article announces the accomplishments of the ESCAP Population Division of the Department of Economic and Social Information and Policy Analysis (DESIPA), in conjunction with the Asia-Pacific POPIN Internet (Information Superhighway) Training Workshop, in popularizing useful new computer information technologies. A successful workshop was held in Bangkok in November 1996 for 18 people from 8 countries in the Asian and Pacific region, many of whom were from population information centers. Participants were taught techniques for disseminating population data and information through use of the Internet computer facility. Participants learned 1) how to use Windows software in the ESCAP local area network (LAN), 2) about concepts such as HTML (hypertext mark-up language), and 3) detailed information about computer languages. Computer practice involved "surfing the Net (Internet)" and linking with the global POPIN site on the Internet. Participants learned about computer programs for information handling and learned how to prepare documents using HTML, how to mount information on the World Wide Web (WWW) of the Internet, how to convert existing documents into "HTML-style" files, and how to scan graphics, such as logos, photographs, and maps, for visual display on the Internet. The Workshop and the three training modules were funded by the UN Population Fund (UNFPA). The POPIN Coordinator was pleased that competency was achieved in such a short period of time.
Geospatial Visualization of Scientific Data Through Keyhole Markup Language
NASA Astrophysics Data System (ADS)
Wernecke, J.; Bailey, J. E.
2008-12-01
The development of virtual globes has provided a fun and innovative tool for exploring the surface of the Earth. However, it has been the paralleling maturation of Keyhole Markup Language (KML) that has created a new medium and perspective through which to visualize scientific datasets. Originally created by Keyhole Inc., and then acquired by Google in 2004, in 2007 KML was given over to the Open Geospatial Consortium (OGC). It became an OGC international standard on 14 April 2008, and has subsequently been adopted by all major geobrowser developers (e.g., Google, Microsoft, ESRI, NASA) and many smaller ones (e.g., Earthbrowser). By making KML a standard at a relatively young stage in its evolution, developers of the language are seeking to avoid the issues that plagued the early World Wide Web and development of Hypertext Markup Language (HTML). The popularity and utility of Google Earth, in particular, has been enhanced by KML features such as the Smithsonian volcano layer and the dynamic weather layers. Through KML, users can view real-time earthquake locations (USGS), view animations of polar sea-ice coverage (NSIDC), or read about the daily activities of chimpanzees (Jane Goodall Institute). Perhaps even more powerful is the fact that any users can create, edit, and share their own KML, with no or relatively little knowledge of manipulating computer code. We present an overview of the best current scientific uses of KML and a guide to how scientists can learn to use KML themselves.
Client-side Medical Image Colorization in a Collaborative Environment.
Virag, Ioan; Stoicu-Tivadar, Lăcrămioara; Crişan-Vida, Mihaela
2015-01-01
The paper presents an application related to collaborative medicine using a browser-based medical visualization system, with focus on the medical image colorization process and the underlying open source web development technologies involved. Browser-based systems allow physicians to share medical data with their remotely located counterparts or medical students, assisting them during patient diagnosis, treatment monitoring, surgery planning, or for educational purposes. This approach brings forth the advantage of ubiquity. The system can be accessed from any device in order to process the images, ensuring independence from any specific proprietary operating system. The current work starts with processing of DICOM (Digital Imaging and Communications in Medicine) files and ends with the rendering of the resulting bitmap images on an HTML5 (fifth revision of the HyperText Markup Language) canvas element. The application improves image visualization by emphasizing different tissue densities.
Calderon, Karynna; Dadisman, S.V.; Kindinger, J.L.; Flocks, J.G.; Wiese, D.S.; Kulp, Mark; Penland, Shea; Britsch, L.D.; Brooks, G.R.
2003-01-01
This archive consists of two-dimensional marine seismic reflection profile data collected in the Barataria Basin of southern Louisiana. These data were acquired in May, June, and July of 2000 aboard the R/V G.K. Gilbert. Included here are data in a variety of formats including binary, American Standard Code for Information Interchange (ASCII), Hyper-Text Markup Language (HTML), shapefiles, and Graphics Interchange Format (GIF) and Joint Photographic Experts Group (JPEG) images. Binary data are in Society of Exploration Geophysicists (SEG) SEG-Y format and may be downloaded for further processing or display. Reference maps and GIF images of the profiles may be viewed with a web browser. The Geographic Information Systems (GIS) information provided here is compatible with Environmental Systems Research Institute (ESRI) GIS software.
Davis, Philip M
2013-07-01
Does PubMed Central--a government-run digital archive of biomedical articles--compete with scientific society journals? A longitudinal, retrospective cohort analysis of 13,223 articles (5999 treatment, 7224 control) published in 14 society-run biomedical research journals in nutrition, experimental biology, physiology, and radiology between February 2008 and January 2011 reveals a 21.4% reduction in full-text hypertext markup language (HTML) article downloads and a 13.8% reduction in portable document format (PDF) article downloads from the journals' websites when U.S. National Institutes of Health-sponsored articles (treatment) become freely available from the PubMed Central repository. In addition, the effect of PubMed Central on reducing PDF article downloads is increasing over time, growing at a rate of 1.6% per year. There was no longitudinal effect for full-text HTML downloads. While PubMed Central may be providing complementary access to readers traditionally underserved by scientific journals, the loss of article readership from the journal website may weaken the ability of the journal to build communities of interest around research papers, impede the communication of news and events to scientific society members and journal readers, and reduce the perceived value of the journal to institutional subscribers.
Enhancing Web applications in radiology with Java: estimating MR imaging relaxation times.
Dagher, A P; Fitzpatrick, M; Flanders, A E; Eng, J
1998-01-01
Java is a relatively new programming language that has been used to develop a World Wide Web-based tool for estimating magnetic resonance (MR) imaging relaxation times, thereby demonstrating how Java may be used for Web-based radiology applications beyond improving the user interface of teaching files. A standard processing algorithm coded with Java is downloaded along with the hypertext markup language (HTML) document. The user (client) selects the desired pulse sequence and inputs data obtained from a region of interest on the MR images. The algorithm is used to modify selected MR imaging parameters in an equation that models the phenomenon being evaluated. MR imaging relaxation times are estimated, and confidence intervals and a P value expressing the accuracy of the final results are calculated. Design features such as simplicity, object-oriented programming, and security restrictions allow Java to expand the capabilities of HTML by offering a more versatile user interface that includes dynamic annotations and graphics. Java also allows the client to perform more sophisticated information processing and computation than is usually associated with Web applications. Java is likely to become a standard programming option, and the development of stand-alone Java applications may become more common as Java is integrated into future versions of computer operating systems.
Deng, Chen-Hui; Zhang, Guan-Min; Bi, Shan-Shan; Zhou, Tian-Yan; Lu, Wei
2011-07-01
This study aims to develop a therapeutic drug monitoring (TDM) network server of tacrolimus for Chinese renal transplant patients, which can facilitate doctors' management of patient information and provide three levels of predictions. The database management system MySQL was employed to build and manage the database of patient and doctor information, and hypertext mark-up language (HTML) and Java server pages (JSP) technology were employed to construct the network server for database management. Based on the population pharmacokinetic model of tacrolimus for Chinese renal transplant patients, the above programming languages were used to construct the population prediction and subpopulation prediction modules. Based on the Bayesian principle and maximization of the posterior probability function, an objective function was established and minimized by an optimization algorithm to estimate a patient's individual pharmacokinetic parameters. The network server thus provides the basic functions for database management and three levels of prediction to help doctors optimize the regimen of tacrolimus for Chinese renal transplant patients.
Setti, E; Musumeci, R
2001-06-01
The world wide web is an exciting service that allows one to publish electronic documents made of text and images on the internet. Client software called a web browser can access these documents, and display and print them. The most popular browsers are currently Microsoft Internet Explorer (Microsoft, Redmond, WA) and Netscape Communicator (Netscape Communications, Mountain View, CA). These browsers can display text in hypertext markup language (HTML) format and images in Joint Photographic Experts Group (JPEG) and Graphics Interchange Format (GIF) formats. Currently, neither browser can display radiologic images in native Digital Imaging and Communications in Medicine (DICOM) format. With the aim of publishing radiologic images on the internet, we wrote a dedicated Java applet. Our software can display radiologic and histologic images in DICOM, JPEG, and GIF formats, and provides a number of functions such as windowing and a magnification lens. The applet is compatible with some web browsers, even the older versions. The software is free and available from the author.
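Windowing of the sort the applet provides maps raw pixel intensities onto display values. A small Python sketch of the standard window/level transform (the applet itself was written in Java, and the sample values below are hypothetical):

```python
import numpy as np

def window_level(pixels, center, width):
    """Map raw image values to 8-bit display values using a window/level transform."""
    low = center - width / 2.0
    scaled = (pixels.astype(np.float64) - low) / width
    return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)

# Hypothetical signed 16-bit values; a soft-tissue window of center 40, width 400
raw = np.array([[-200, 0, 40], [100, 240, 1000]])
print(window_level(raw, center=40, width=400))
```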
ERIC Educational Resources Information Center
Bremser, Wayne
1998-01-01
Discusses how to choose from the available interactive graphic-design possibilities for the World Wide Web. Compatibility and appropriateness are discussed; and DHTML (Dynamic Hypertext Markup Language), Java, CSS (Cascading Style Sheets), plug-ins, ActiveX, and Push and channel technologies are described. (LRW)
WebWatcher: Machine Learning and Hypertext
1995-05-29
Thorsten Joachims, Tom Mitchell, Dayne Freitag, and Robert Armstrong, School of Computer Science, Carnegie Mellon University. The report describes an HTML page about machine learning into which the authors inserted a hyperlink to WebWatcher; a user who follows this hyperlink reaches a page from which WebWatcher assists further browsing.
Developing Intranets: Practical Issues for Implementation and Design.
ERIC Educational Resources Information Center
Trowbridge, Dave
1996-01-01
An intranet is a system which has "domesticated" the technologies of the Internet for specific organizational settings and goals. Although the adaptability of Hypertext Markup Language to intranets is sometimes limited, implementing various protocols and technologies enables organizations to share files among heterogeneous computers,…
GIS based application tool -- history of East India Company
NASA Astrophysics Data System (ADS)
Phophaliya, Sudhir
The emphasis of the thesis is to build an intuitive and robust GIS (Geographic Information Systems) tool that gives in-depth information on the history of the East India Company. The GIS tool also incorporates various achievements of the East India Company that helped establish its business all over the world, especially in India. The user has the option to select these movements and acts by clicking on any of the marked states on the world map. The world map also incorporates key features for the East India Company, such as the landing of the East India Company in India, the Darjeeling tea establishment, and the East India Company Stock Redemption Act. The user can learn more about these features simply by clicking on each of them. The primary focus of the tool is to give the user a unique insight into the East India Company; for this, the tool has several HTML (Hypertext Markup Language) pages the user can select. These HTML pages give information on various topics such as the first voyage, trade with China, and the 1857 revolt. The tool has been developed in Java. For the Indian map, MOJO (Map Objects Java Objects), developed by ESRI, is used. The major features shown on the world map were designed using MOJO, which made it easy to incorporate the statistical data with these features. The user interface was intentionally kept simple and easy to use. To keep the user engaged, key aspects are explained using HTML pages, with the idea that pictures will help the user develop an interest in the history of the East India Company.
Conversion of Radiology Reporting Templates to the MRRT Standard.
Kahn, Charles E; Genereaux, Brad; Langlotz, Curtis P
2015-10-01
In 2013, the Integrating the Healthcare Enterprise (IHE) Radiology workgroup developed the Management of Radiology Report Templates (MRRT) profile, which defines both the format of radiology reporting templates using an extension of Hypertext Markup Language version 5 (HTML5), and the transportation mechanism to query, retrieve, and store these templates. Of 200 English-language report templates published by the Radiological Society of North America (RSNA), initially encoded as text and in an XML schema language, 168 have been converted successfully into MRRT using a combination of automated processes and manual editing; conversion of the remaining 32 templates is in progress. The automated conversion process applied Extensible Stylesheet Language Transformation (XSLT) scripts, an XML parsing engine, and a Java servlet. The templates were validated for proper HTML5 and MRRT syntax using web-based services. The MRRT templates allow radiologists to share best-practice templates across organizations and have been uploaded to the template library to supersede the prior XML-format templates. By using MRRT transactions and MRRT-format templates, radiologists will be able to directly import and apply templates from the RSNA Report Template Library in their own MRRT-compatible vendor systems. The availability of MRRT-format reporting templates will stimulate adoption of the MRRT standard and is expected to advance the sharing and use of templates to improve the quality of radiology reports.
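The automated portion of such a conversion pipeline can be illustrated compactly. The original work used XSLT scripts, an XML parsing engine, and a Java servlet; the Python/lxml rendition and the file names below are illustrative assumptions only:

```python
from lxml import etree

# Hypothetical file names; the actual RSNA stylesheets are not shown here.
xml_doc = etree.parse("template_old.xml")     # legacy XML-schema template
xslt_doc = etree.parse("xml_to_mrrt.xsl")     # transformation rules
transform = etree.XSLT(xslt_doc)

html5_template = transform(xml_doc)           # MRRT output is HTML5-based
with open("template_mrrt.html", "wb") as f:
    f.write(etree.tostring(html5_template, pretty_print=True, method="html"))
```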
Thin client (web browser)-based collaboration for medical imaging and web-enabled data.
Le, Tuong Huu; Malhi, Nadeem
2002-01-01
Utilizing thin client software and open source server technology, a collaborative architecture was implemented allowing for sharing of Digital Imaging and Communications in Medicine (DICOM) and non-DICOM images with real-time markup. Using the Web browser as a thin client integrated with standards-based components, such as DHTML (dynamic hypertext markup language), JavaScript, and Java, collaboration was achieved through a Web server/proxy server combination utilizing Java Servlets and Java Server Pages. A typical collaborative session involved the driver, who directed the navigation of the other collaborators, the passengers, and provided collaborative markups of medical and nonmedical images. The majority of processing was performed on the server side, allowing for the client to remain thin and more accessible.
The New Frontier: Conquering the World Wide Web by Mule.
ERIC Educational Resources Information Center
Gresham, Morgan
1999-01-01
Examines effects of teaching hypertext markup language on students' perceptions of class goals in a networked composition classroom. Suggests sending documents via file transfer protocol by command line and viewing the Web with a textual browser shifted emphasis from writing to coding. Argues that helping students identify a balance between…
Development and Evaluation of a Thai Learning System on the Web Using Natural Language Processing.
ERIC Educational Resources Information Center
Dansuwan, Suyada; Nishina, Kikuko; Akahori, Kanji; Shimizu, Yasutaka
2001-01-01
Describes the Thai Learning System, which is designed to help learners acquire the Thai word order system. The system facilitates the lessons on the Web using HyperText Markup Language and Perl programming, which interfaces with natural language processing by means of Prolog. (Author/VWL)
2017-05-01
The ship simulator bridge is generic in that its layout is similar to that found in a variety of ships. The Hyper-Text Markup Language (HTML) capability built into ArcMap permits a planner to click on a vessel track and retrieve information stored in the geodatabases.
World Wide Web and Internet: applications for radiologists.
Wunderbaldinger, P; Schima, W; Turetschek, K; Helbich, T H; Bankier, A A; Herold, C J
1999-01-01
Global exchange of information is one of the major sources of scientific progress in medicine. For management of the rapidly growing body of medical information, computers and their applications have become an indispensable scientific tool. Approximately 36 million computer users are part of a worldwide network called the Internet or "information highway" and have created a new infrastructure to promote rapid and efficient access to medical, and thus also to radiological, information. With the establishment of the World Wide Web (WWW) by a consortium of computer users who used a standardized, nonproprietary syntax termed HyperText Markup Language (HTML) for composing documents, it has become possible to provide interactive multimedia presentations to a wide audience. The extensive use of images in radiology makes education, worldwide consultation (review), and scientific presentation via the Internet major beneficiaries of this technical development. This is possible because both text and medical images can be transmitted via the Internet. Presently, the Internet offers an extensive database for radiologists. Since many radiologists and physicians have to be considered "Internet novices" and, hence, cannot yet avail themselves of the broad spectrum of the Internet, the aim of this article is to present a general introduction to the WWW/Internet and its applications for radiologists. All Internet sites mentioned in this article can be found at the following Internet address: http://www.univie.ac.at/radio/radio.html (Department of Radiology, University of Vienna)
Development of a Google-based search engine for data mining radiology reports.
Erinjeri, Joseph P; Picus, Daniel; Prior, Fred W; Rubin, David A; Koppel, Paul
2009-08-01
The aim of this study is to develop a secure, Google-based data-mining tool for radiology reports using free and open-source technologies and to explore its use within an academic radiology department. A Health Insurance Portability and Accountability Act (HIPAA)-compliant data repository, search engine, and user interface were created to facilitate treatment, operations, and reviews preparatory to research. The Institutional Review Board waived review of the project, and informed consent was not required. A total of 2.9 million text reports, comprising 7.9 GB of disk space, were downloaded from our radiology information system to a file server. Extensible markup language (XML) representations of the reports were indexed using Google Desktop Enterprise search engine software. A hypertext markup language (HTML) form allowed users to submit queries to Google Desktop, and Google's XML response was interpreted by a practical extraction and report language (PERL) script, presenting ranked results in a web browser window. The query, reason for search, results, and documents visited were logged to maintain HIPAA compliance. Indexing averaged approximately 25,000 reports per hour. A keyword search of a common term like "pneumothorax" yielded the first ten most relevant of 705,550 total results in 1.36 s. A keyword search of a rare term like "hemangioendothelioma" yielded the first ten most relevant of 167 total results in 0.23 s; retrieval of all 167 results took 0.26 s. Data-mining tools for radiology reports will improve the productivity of academic radiologists in clinical, educational, research, and administrative tasks. By leveraging existing knowledge of Google's interface, radiologists can quickly perform useful searches.
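The front end's job of interpreting the engine's XML response and rendering ranked results can be sketched briefly. This Python version uses an invented response format standing in for the real Google Desktop schema, and the original used a PERL script:

```python
import xml.etree.ElementTree as ET

# Hypothetical response format; the real Google Desktop XML schema differed.
SAMPLE = """<results>
  <result rank="2"><title>CT chest 2008-03-02</title><snippet>small pneumothorax</snippet></result>
  <result rank="1"><title>CXR 2008-03-01</title><snippet>large pneumothorax</snippet></result>
</results>"""

def render_results(xml_text):
    # Parse the engine's XML, order hits by rank, and emit a simple HTML list
    root = ET.fromstring(xml_text)
    hits = sorted(root.iter("result"), key=lambda r: int(r.get("rank")))
    rows = "".join(
        f"<li>{hit.findtext('title')}: {hit.findtext('snippet')}</li>" for hit in hits
    )
    return f"<html><body><ol>{rows}</ol></body></html>"

print(render_results(SAMPLE))
```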
Online Survey, Enrollment, and Examination: Special Internet Applications in Teacher Education.
ERIC Educational Resources Information Center
Tu, Jho-Ju; Babione, Carolyn; Chen, Hsin-Chu
The Teachers College at Emporia State University in Kansas is now utilizing World Wide Web technology for automating the application procedure for student teaching. The general concepts and some of the key terms that are important for understanding the process involved in this project include: a client-server model, HyperText Markup Language,…
D'Souza, Malcolm J; Barile, Benjamin; Givens, Aaron F
2015-05-01
Synthetic pesticides are widely used in the modern world for human benefit. They are usually classified according to their intended pest target. In Delaware (DE), approximately 42 percent of the arable land is used for agriculture. In order to manage pests such as insects, weeds, nematodes, and rodents, pesticides are used profusely to biologically control the pest's normal life stages. In this undergraduate project, we first created a usable relational database containing 62 agricultural pesticides that are common in Delaware. Chemically pertinent quantitative and qualitative information was first stored in Bio-Rad's KnowItAll® Informatics System. Next, we extracted the data out of the KnowItAll® system and created additional sections in a Microsoft® Excel spreadsheet detailing pesticide use(s) and safety and handling information. Finally, in an effort to promote good agricultural practices, to increase efficiency in business decisions, and to make pesticide data globally accessible, we developed a mobile application for smartphones that displays the pesticide database, using Appery.io™, a cloud-based HyperText Markup Language (HTML5), jQuery Mobile, and hybrid mobile app builder.
Piccolo, Brian D; Wankhade, Umesh D; Chintapalli, Sree V; Bhattacharyya, Sudeepa; Chunqiao, Luo; Shankar, Kartik
2018-03-15
Dynamic assessment of microbial ecology (DAME) is a Shiny-based web application for interactive analysis and visualization of microbial sequencing data. DAME provides researchers not familiar with R programming the ability to access the most current R functions utilized for ecology and gene sequencing data analyses. Currently, DAME supports group comparisons of several ecological estimates of α-diversity and β-diversity, along with differential abundance analysis of individual taxa. Using the Shiny framework, the user has complete control of all aspects of the data analysis, including sample/experimental group selection and filtering, estimate selection, statistical methods and visualization parameters. Furthermore, graphical and tabular outputs are supported by R packages using D3.js and are fully interactive. DAME was implemented in R but can be modified by Hypertext Markup Language (HTML), Cascading Style Sheets (CSS), and JavaScript. It is freely available on the web at https://acnc-shinyapps.shinyapps.io/DAME/. Local installation and source code are available through Github (https://github.com/bdpiccolo/ACNC-DAME). Any system with R can launch DAME locally provided the shiny package is installed. bdpiccolo@uams.edu.
Teaching with a GIS using existing grade 7-12 curricula
NASA Astrophysics Data System (ADS)
Brown, Stephen Castlebury
As Geographic Information Systems (GIS) become less expensive and easier to use, the demand for individuals knowledgeable about this technology increases. Associated with this is the current and future necessity of a public that understands the wide range of technical proficiencies needed for accurate GIS mapping. On a nationwide basis, GIS education in K-12 schools is rare. In the few instances where a school teaches students about these technologies, instruction is usually led by a single teacher and is not offered on a school-wide basis. This situation exists despite some research indicating that a classroom GIS might enhance student learning. Two primary barriers to teacher use and acceptance of a classroom GIS have been identified. First, most teachers lack any training in the use of a GIS. Second, there is conflict over whether to focus upon teaching about the use of a GIS or teaching with a GIS. Beginning in August 1996 and concluding in August 1998, nine separate GIS education programs were conducted for a variety of youth and adult educator audiences. Observations of participants' interactions with the GIS program ArcView led to the development of a demonstration curriculum and GIS application. To overcome institutional and educational barriers to youth GIS education, a curriculum partly adapted from existing materials and partly created from original materials was developed in Hypertext Markup Language (HTML). A corresponding GIS application was developed to teach about a GIS while instructing with a GIS. The curriculum was distributed for use on CD-ROM and called Georom. The hypertext curriculum provided lessons and exercises that addressed National Science Education Standards and was accessed using an Internet web browser. The curriculum included World Wide Web links to Internet sites with more information about specific topics. Modifications were made to ArcView's Graphical User Interface (GUI) that maintained the general appearance of its standard GUI but increased its functionality for classroom use. It was observed that the availability and premise of the hypertext curriculum and GIS application increased school administrator acceptance of classroom GIS education. However, the curriculum and GIS application are still not a completely acceptable alternative to quality in-service education on GIS for many teachers.
PORIDGE: Postmodern Rhizomatics in Digitally Generated Environments--Do We Need a Metatheory for W3?
ERIC Educational Resources Information Center
Wallmannsberger, Josef
1994-01-01
Discusses the World Wide Web (W3) and its relevance to a philosophy of science. Topics include PORIDGE, an electronically mediated encyclopedia of postmodern knowledge; hypertext mark-up language; W3 as a medium for information ecologies; the relationship between W3 and the user; social manufacture of knowledge; and W3 as a model. (29 references)…
HTML5 May Provide Vital Link for Friendly Future Mobile Applications
2012-01-01
Army Communicator, by LTC Gregory Motes. The article reviews the history of HyperText Markup Language and argues that delivering applications to a growing number of devices, in both on-line and off-line states, could be accomplished with the maturation of HTML5.
FastScript3D - A Companion to Java 3D
NASA Technical Reports Server (NTRS)
Koenig, Patti
2005-01-01
FastScript3D is a computer program, written in the Java 3D(TM) programming language, that establishes an alternative language that helps users who lack expertise in Java 3D to use Java 3D for constructing three-dimensional (3D)-appearing graphics. The FastScript3D language provides a set of simple, intuitive, one-line text-string commands for creating, controlling, and animating 3D models. The first word in a string is the name of a command; the rest of the string contains the data arguments for the command. The commands can also be used as an aid to learning Java 3D. Developers can extend the language by adding custom text-string commands. The commands can define new 3D objects or load representations of 3D objects from files in formats compatible with such other software systems as X3D. The text strings can be easily integrated into other languages. FastScript3D facilitates communication between scripting languages [which enable programming of hyper-text markup language (HTML) documents to interact with users] and Java 3D. The FastScript3D language can be extended and customized on both the scripting side and the Java 3D side.
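The command convention described (the first word names the command, the remainder supplies its arguments) is easy to picture with a tiny dispatcher. This Python sketch uses invented command names, since the actual FastScript3D command set is not listed here:

```python
# Dispatch table: first word selects the command, the rest are its arguments.
def make_sphere(name, radius):
    print(f"created sphere {name!r} with radius {float(radius)}")

def rotate(name, axis, degrees):
    print(f"rotating {name!r} {float(degrees)} deg about the {axis}-axis")

COMMANDS = {"sphere": make_sphere, "rotate": rotate}

def run(line):
    # Split the one-line text string into a command word plus data arguments
    command, *args = line.split()
    COMMANDS[command](*args)   # unknown commands raise KeyError

run("sphere ball 2.5")
run("rotate ball y 45")
```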
HGML: a hypertext guideline markup language.
Hagerty, C. G.; Pickens, D.; Kulikowski, C.; Sonnenberg, F.
2000-01-01
Existing text-based clinical practice guidelines can be difficult to put into practice. While a growing number of such documents have gained acceptance in the medical community and contain a wealth of valuable information, the time required to digest them is substantial. Yet the expressive power, subtlety and flexibility of natural language pose challenges when designing computer tools that will help in their application. At the same time, formal computer languages typically lack such expressiveness and the effort required to translate existing documents into these languages may be costly. We propose a method based on the mark-up concept for converting text-based clinical guidelines into a machine-operable form. This allows existing guidelines to be manipulated by machine, and viewed in different formats at various levels of detail according to the needs of the practitioner, while preserving their originally published form. PMID:11079898
Automated radiotherapy treatment plan integrity verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang Deshan; Moore, Kevin L.
2012-03-15
Purpose: In our clinic, physicists spend from 15 to 60 min to verify the physical and dosimetric integrity of radiotherapy plans before presentation to radiation oncology physicians for approval. The purpose of this study was to design and implement a framework to automate as many elements of this quality control (QC) step as possible. Methods: A comprehensive computer application was developed to carry out a majority of these verification tasks in the Philips PINNACLE treatment planning system (TPS). This QC tool functions based on both PINNACLE scripting elements and PERL subroutines. The core of this technique is the method of dynamic scripting, which involves a PERL programming module that is flexible and powerful for treatment plan data handling. Run-time plan data are collected, saved into temporary files, and analyzed against standard values and predefined logical rules. The results are summarized in a hypertext markup language (HTML) report that is displayed to the user. Results: This tool has been in clinical use for over a year. The occurrence frequency of technical problems, which would cause delays and suboptimal plans, has been reduced since clinical implementation. Conclusions: In addition to drastically reducing the set of human-driven logical comparisons, this QC tool also accomplished some tasks that are otherwise either quite laborious or impractical for humans to verify, e.g., identifying conflicts amongst IMRT optimization objectives.
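The rule-checking pattern is straightforward to sketch. The plan fields, tolerances, and Python rendition below are hypothetical stand-ins for the PINNACLE-script-plus-PERL implementation the authors describe:

```python
# Hypothetical plan fields and tolerance rules; the clinical tool ran inside
# PINNACLE via scripting plus PERL, not Python.
plan = {"prescription_dose_cGy": 6000, "fractions": 30, "couch_angle": 0.0,
        "dose_grid_mm": 4.0}

RULES = [
    ("Dose per fraction 180-220 cGy",
     lambda p: 180 <= p["prescription_dose_cGy"] / p["fractions"] <= 220),
    ("Dose grid resolution <= 3 mm",
     lambda p: p["dose_grid_mm"] <= 3.0),
    ("Couch angle is zero",
     lambda p: p["couch_angle"] == 0.0),
]

# Evaluate every rule against the run-time plan data and emit an HTML summary
rows = "".join(
    f"<tr><td>{name}</td><td>{'PASS' if check(plan) else 'FAIL'}</td></tr>"
    for name, check in RULES
)
report = f"<html><body><table border='1'>{rows}</table></body></html>"
print(report)
```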
Phased development of a web-based PACS viewer
NASA Astrophysics Data System (ADS)
Gidron, Yoad; Shani, Uri; Shifrin, Mark
2000-05-01
The Web browser is an excellent environment for the rapid development of an effective and inexpensive PACS viewer. In this paper we share our experience in developing a browser-based viewer, from the inception and prototype stages to its current state of maturity. There are many operational advantages to a browser-based viewer, even when native viewers already exist in the system (with multiple and/or high-resolution screens): (1) It can be used on existing personal workstations throughout the hospital. (2) It is easy to make the service available from physicians' homes. (3) The viewer is extremely portable and platform independent. There is a wide variety of means available for implementing the browser-based viewer. Each file sent to the client by the server can perform some end-user or client/server interaction. These means range from HTML (HyperText Markup Language) files, through JavaScript, to Java applets. Some data types may also invoke plug-in code in the client; although this would reduce the portability of the viewer, it would provide the needed efficiency in critical places. On the server side the range of means is also very rich: (1) a set of files: HTML, JavaScript, Java applets, etc.; (2) extensions of the server via cgi-bin programs; (3) extensions of the server via servlets; (4) any other helper application residing and working with the server to access the DICOM archive. The viewer architecture consists of two basic parts: the first part performs query and navigation through the DICOM archive image folders; the second part does the image access and display. While the first part deals with low data traffic, it involves many database transactions. The second part is simple as far as access transactions are concerned, but requires much more data traffic and display functions. Our web-based viewer has gone through three development stages characterized by the complexity of the means and tools employed on both client and server sides.
An interactive HTML ocean nowcast GUI based on Perl and JavaScript
NASA Astrophysics Data System (ADS)
Sakalaukus, Peter J.; Fox, Daniel N.; Louise Perkins, A.; Smedstad, Lucy F.
1999-02-01
We describe the use of Hyper Text Markup Language (HTML), JavaScript code, and Perl I/O to create and validate forms in an Internet-based graphical user interface (GUI) for the Naval Research Laboratory (NRL) Ocean Modeling and Assimilation Demonstration System (NOMADS). The resulting nowcast system can be operated from any compatible browser across the Internet: although the GUI was prepared in a Netscape browser, it uses no Netscape extensions. Code available at: http://www.iamg.org/CGEditor/index.htm
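Server-side validation of such a form is simple to sketch. The field names below are invented, and the original system did this work in Perl rather than Python:

```python
# Minimal server-side check of a hypothetical forecast-request form.
REQUIRED = {"region", "start_date", "depth_m"}

def validate(form):
    # Collect every problem rather than stopping at the first one
    errors = [f"missing field: {k}" for k in REQUIRED if not form.get(k)]
    try:
        if float(form.get("depth_m", "0")) < 0:
            errors.append("depth_m must be non-negative")
    except ValueError:
        errors.append("depth_m must be numeric")
    return errors

print(validate({"region": "gulf_stream", "start_date": "1999-02-01",
                "depth_m": "-5"}))
```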
Designing and Managing Your Digital Library.
ERIC Educational Resources Information Center
Guenther, Kim
2000-01-01
Discusses digital libraries and Web site design issues. Highlights include accessibility issues, including standards, markup languages like HTML and XML, and metadata; building virtual communities; the use of Web portals for customized delivery of information; quality assurance tools, including data mining; and determining user needs, including…
ERIC Educational Resources Information Center
Scharf, David
2002-01-01
Discusses XML (extensible markup language), particularly as it relates to libraries. Topics include organizing information; cataloging; metadata; similarities to HTML; organizations dealing with XML; making XML useful; a history of XML; the semantic Web; related technologies; XML at the Library of Congress; and its role in improving the…
The carbohydrate sequence markup language (CabosML): an XML description of carbohydrate structures.
Kikuchi, Norihiro; Kameyama, Akihiko; Nakaya, Shuuichi; Ito, Hiromi; Sato, Takashi; Shikanai, Toshihide; Takahashi, Yoriko; Narimatsu, Hisashi
2005-04-15
Bioinformatics resources for glycomics are very poor as compared with those for genomics and proteomics. The complexity of carbohydrate sequences makes it difficult to define a common language to represent them, and the development of bioinformatics tools for glycomics has not progressed. In this study, we developed a carbohydrate sequence markup language (CabosML), an XML description of carbohydrate structures. The language definition (XML Schema) and an experimental database of carbohydrate structures using an XML database management system are available at http://www.phoenix.hydra.mki.co.jp/CabosDemo.html kikuchi@hydra.mki.co.jp.
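To make the idea of an XML carbohydrate description concrete, here is a Python sketch that assembles a two-residue fragment. The element and attribute names are illustrative only, since the actual CabosML schema is defined by the XML Schema at the address above:

```python
import xml.etree.ElementTree as ET

# Element and attribute names here are illustrative, not the published schema.
root = ET.Element("carbohydrate")
gal = ET.SubElement(root, "residue", name="Gal", anomer="b")
glcnac = ET.SubElement(gal, "residue", name="GlcNAc", anomer="b")
glcnac.set("linkage", "1-4")   # Gal(b1-4)GlcNAc, the lactosamine unit

# Nesting of <residue> elements mirrors the branching of the sugar chain
print(ET.tostring(root, encoding="unicode"))
```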
Hypertext-based computer vision teaching packages
NASA Astrophysics Data System (ADS)
Marshall, A. David
1994-10-01
The World Wide Web Initiative has provided a means for providing hypertext- and multimedia-based information across the whole INTERNET. Many applications have been developed on such HTTP servers. At Cardiff we have developed an HTTP hypertext-based multimedia server, the Cardiff Information Server, using the widely available Mosaic system. The server provides a variety of information, ranging from the provision of teaching modules, on-line documentation, and timetables for departmental activities to more light-hearted hobby interests. One important and novel development of the server has been the development of courseware facilities. These range from the provision of on-line lecture notes, exercises, and their solutions to more interactive teaching packages. A variety of disciplines have benefited, notably Computer Vision and Image Processing, but also C programming, X Windows, Computer Graphics, and Parallel Computing. This paper addresses the issues of the implementation of the Computer Vision and Image Processing packages and the advantages gained from using a hypertext-based system, and relates practical experiences of using the packages in a class environment. The paper addresses issues of how best to provide information in such a hypertext-based system and how interactive image processing packages can be developed and integrated into courseware. The suite of tools developed provides a flexible and powerful courseware package that has proved popular in the classroom and over the Internet. The paper also details many future developments we see as possible. One of the key points raised in the paper is that Mosaic's hypertext language (HTML) is extremely powerful and yet relatively straightforward to use. It is also possible to link in Unix calls so that programs and shells can be executed. This provides a powerful suite of utilities that can be exploited to develop many packages.
Migrating an Online Service to WAP - A Case Study.
ERIC Educational Resources Information Center
Klasen, Lars
2002-01-01
Discusses mobile access via wireless application protocol (WAP) to online services that is offered in Sweden through InfoTorg. Topics include the Swedish online market; filtering HTML data from an Internet/Web server into WML (wireless markup language); mobile phone technology; microbrowsers; WAP protocol; and future possibilities. (LRW)
Report of Official Foreign Travel to Spain, April 17-29, 1999
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mason, J. D.
The Department of Energy (DOE) has moved rapidly toward electronic production, management, and dissemination of scientific and technical information. The World-Wide Web (WWW) has become a primary means of information dissemination. Electronic commerce (EC) is becoming the preferred means of procurement. DOE, like other government agencies, depends on and encourages the use of international standards in data communications. Like most government agencies, DOE has expressed a preference for openly developed standards over proprietary designs promoted as "standards" by vendors. In particular, there is a preference for standards developed by organizations such as the International Organization for Standardization (ISO) and the American National Standards Institute (ANSI) that use open, public processes to develop their standards. Among the most widely adopted international standards is the Standard Generalized Markup Language (SGML, ISO 8879:1986, FIPS 152), which DOE has selected as the basis of its electronic management of documents. Besides the official commitment, which has resulted in several specialized projects, DOE makes heavy use of coding derived from SGML, and its use is likely to increase in the future. Most documents on the WWW are coded in HTML ("Hypertext Markup Language"), which is an application of SGML. The World-Wide Web Consortium (W3C), with the backing of major software houses like Microsoft, Adobe, and Netscape, is promoting XML ("eXtensible Markup Language"), a class of SGML applications, for the future of the WWW and the basis for EC. W3C has announced its intention of discontinuing future development of HTML and replacing it with XHTML, an application of XML. In support of DOE's use of these standards, I have served since 1985 as Chairman of the international committee responsible for SGML and related standards, ISO/IEC JTC1/SC34 (SC34) and its predecessor organizations. During my April 1999 trip, I convened the spring 1999 meeting of SC34 in Granada, Spain. I also attended a major conference on the use of SGML and XML. SC34 maintains and continues to enhance several standards. In addition to SGML, which is the basis of HTML and XML, SC34 also works on the Document Style Semantics and Specification Language (DSSSL), which is the basis for W3C's XSL ("eXtensible Style Language," to be used with XML), and the Hypermedia/Time-based Document Structuring Language (HyTime), which is a major influence on W3C's XLink ("XML Linking Language"). SC34 is also involved in work with ISO's TC184, Industrial Data, on the linking of STEP (the standard for the interchange of product model data) with SGML. In addition to the widespread use of the WWW among DOE's plants and facilities in Oak Ridge and among DOE sites across the nation, there are several SGML-based projects at the Y-12 Plant. My project team in Information Technology Services developed an SGML-based publications system that has been used for several major reports at the Y-12 Plant and Oak Ridge National Laboratory (ORNL). SGML is a component of the Weapons Records Archiving and Preservation (WRAP) project at the Y-12 Plant and is the format for catalog metadata chosen for weapons records by the Nuclear Weapons Information Group (NWIG). Supporting standards development allows DOE and the Y-12 Plant both input into the process and the opportunity to benefit from contact with some of the leading experts in the subject matter. Oak Ridge has been for some years the location to which other DOE sites turn for expertise in SGML and related topics.
Dmitrieva, Olga; Michalakidis, Georgios; Mason, Aaron; Jones, Simon; Chan, Tom; de Lusignan, Simon
2012-01-01
A new distributed model of health care management is being introduced in England. Family practitioners have new responsibilities for the management of health care budgets and commissioning of services. There are national datasets available about health care providers and the geographical areas they serve. These data could be better used to assist family practitioners turned health service commissioners. Unfortunately, these data are not in a form that is readily usable by these fledgling family commissioning groups. We therefore web-enabled all the national hospital dermatology treatment data in England, combining it with locality data to provide a smart commissioning tool for local communities. We used open-source software, including the Ruby on Rails Web framework and MySQL. The system has a Web front end, which uses hypertext markup language and cascading style sheets (HTML/CSS) and JavaScript to deliver and present data provided by the database. A combination of advanced caching and schema structures allows for faster data retrieval on every execution. The system provides an intuitive environment for data analysis and processing across a large health system dataset. Web enablement has allowed data about inpatients, day cases, and outpatients to be readily grouped, viewed, and linked to other data. The combination of web enablement, consistent data collection from all providers, readily available locality data, and a registration-based primary care system enables the creation of data that can be used to commission dermatology services in small areas. Standardized datasets collected across large health enterprises, when web enabled, can readily benchmark local services and inform commissioning decisions.
Report of Official Foreign Travel to France May 8-27, 1998
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mason, J. D.
1998-06-11
The Department of Energy (DOE) has moved ever more rapidly towards electronic production, management, and dissemination of scientific and technical information. The World-Wide Web (WWW) has become a primary means of information dissemination. Electronic commerce (EC) is becoming the preferred means of procurement. DOE, like other government agencies, depends on and encourages the use of international standards in data communications. Among the most widely adopted standards is the Standard Generalized Markup Language (SGML, ISO 8879:1986, FIPS 152), which DOE has selected as the basis of its electronic management of documents. Besides the official commitment, which has resulted in several specialized projects, DOE makes heavy use of coding derived from SGML, and its use is likely to increase in the future. Most documents on the WWW are coded in HTML (Hypertext Markup Language), which is an application of SGML. The World-Wide Web Consortium (W3C), with the backing of major software houses like Microsoft, Adobe, and Netscape, is promoting XML (eXtensible Markup Language), a class of SGML applications, for the future of the WWW and the basis for EC. In support of DOE's use of these standards, I have served since 1985 as Convenor of the international committee responsible for SGML and related standards, ISO/IEC JTC1/WG4 (WG4). During this trip I convened the spring 1998 meeting of WG4 in Paris, France. I also attended a major conference on the use of SGML and XML. At the close of the conference, I chaired a workshop of standards developers looking at ways of improving online searching of electronic documents. Note: Since the end of the meetings in France, JTC1 has raised the level of WG4 to a full Subcommittee; its designator is now ISO/IEC JTC1/SC34. WG4 maintains and continues to enhance several standards. In addition to SGML, which is the basis of HTML and XML, WG4 also works on the Document Style Semantics and Specification Language (DSSSL), which is the basis for the W3C's XSL (eXtensible Style Language, to be used with XML), and the Hypermedia/Time-based Document Structuring Language (HyTime), which is a major influence on the W3C's XLink (XML Linking Language). WG4 is also involved in work with the ISO's TC184, Industrial Data, on the linking of STEP (the standard for the interchange of product model data) with SGML. In addition to the widespread use of the WWW among DOE's plants and facilities in Oak Ridge and among DOE sites across the nation, there are several SGML-based projects at the Y-12 Plant. My project team in Information Technology Services has developed an SGML-based publications system that has been used for several major reports at the Y-12 Plant and Oak Ridge National Laboratory (ORNL). SGML is a component of the Weapons Records Archiving and Preservation (WRAP) project at Y-12 and is the format for catalog metadata chosen for weapons records by the Nuclear Weapons Information Group (NWIG). Supporting standards development allows DOE and Y-12 both input into the process and the opportunity to benefit from contact with some of the leading experts in the subject matter. Oak Ridge has been for some years the location to which other DOE sites turn for expertise in SGML and related topics.
Nassi-Schneiderman Diagram in HTML Based on AML
ERIC Educational Resources Information Center
Menyhárt, László
2013-01-01
In an earlier work I defined an extension of XML called Algorithm Markup Language (AML) for easy and understandable coding in an IDE which supports XML editing (e.g. NetBeans). The AML extension contains annotations and native language (English or Hungarian) tag names used when coding our algorithm. This paper presents a drawing tool with which…
Pinciroli, Francesco; Masseroli, Marco; Acerbo, Livio A; Bonacina, Stefano; Ferrari, Roberto; Marchente, Mario
2004-01-01
This paper presents a low-cost software platform prototype supporting health care personnel in retrieving patient referral multimedia data. This information is centralized on a server machine and structured using a flexible eXtensible Markup Language (XML) Bio-Image Referral Database (BIRD). Data are distributed on demand to requesting clients in an Intranet network and transformed via eXtensible Stylesheet Language (XSL) to be visualized in a uniform way on commercial browsers. The core server operation software has been developed in the PHP Hypertext Preprocessor scripting language, which is very versatile and useful for crafting a dynamic Web environment.
NASA Astrophysics Data System (ADS)
Firestone, Richard B.; Chu, S. Y. Frank; Ekstrom, L. Peter; Wu, Shiu-Chin; Singh, Balraj
1997-10-01
The Isotopes Project is developing Internet home pages to provide data for radioactive decay, nuclear structure, nuclear astrophysics, spontaneous fission, thermal neutron capture, and atomic masses. These home pages can be accessed from the Table of Isotopes home page at http://isotopes.lbl.gov/isotopes/toi.html. Data from the Evaluated Nuclear Structure Data File (ENSDF) are now available on the WWW in Nuclear Data Sheets-style tables, complete with comments and hypertext-linked footnotes. Bibliographic information from the Nuclear Science Reference (NSR) file can be searched on the WWW by combinations of author, A, Z, reaction, and various keywords. Decay gamma-ray data from several databases can be searched by energy. The Table of Superdeformed Nuclear Bands and Fission Isomers is continuously updated. Reaction rates from Hoffman and Woosley and from Thielemann, fission yields from England and Rider, thermal neutron cross-sections from BNL-325, atomic masses from Audi, and skeleton scheme drawings and nuclear charts from the Table of Isotopes are among the information available through these websites. The nuclear data home pages are accessed by over 3500 different users each month.
Spectroscopic data for an astronomy database
NASA Technical Reports Server (NTRS)
Parkinson, W. H.; Smith, Peter L.
1995-01-01
Very few of the atomic and molecular data used in analyses of astronomical spectra are currently available in World Wide Web (WWW) databases that are searchable with hypertext browsers. We have begun to rectify this situation by making extensive atomic data files available with simple search procedures. We have also established links to other on-line atomic and molecular databases. All can be accessed from our database homepage with URL: http://cfa-www.harvard.edu/amp/data/amdata.html.
SBML-PET-MPI: a parallel parameter estimation tool for Systems Biology Markup Language based models.
Zi, Zhike
2011-04-01
Parameter estimation is crucial for the modeling and dynamic analysis of biological systems. However, implementing parameter estimation is time consuming and computationally demanding. Here, we introduced a parallel parameter estimation tool for Systems Biology Markup Language (SBML)-based models (SBML-PET-MPI). SBML-PET-MPI allows the user to perform parameter estimation and parameter uncertainty analysis by collectively fitting multiple experimental datasets. The tool is developed and parallelized using the message passing interface (MPI) protocol, which provides good scalability with the number of processors. SBML-PET-MPI is freely available for non-commercial use at http://www.bioss.uni-freiburg.de/cms/sbml-pet-mpi.html or http://sites.google.com/site/sbmlpetmpi/.
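The core idea of collectively fitting several datasets can be sketched without the MPI or SBML machinery. The toy model, the datasets, and the Python rendition below are hypothetical; the real tool parallelizes this work via the message passing interface:

```python
import numpy as np
from scipy.optimize import least_squares

def model(t, k):
    # Simple first-order decay standing in for an SBML reaction model
    return np.exp(-k * t)

# Two hypothetical experimental datasets fitted with one shared parameter
datasets = [
    (np.array([0.0, 1.0, 2.0]), np.array([1.00, 0.62, 0.36])),
    (np.array([0.5, 1.5, 3.0]), np.array([0.80, 0.47, 0.22])),
]

def residuals(params):
    # Concatenate residuals from every dataset so all are fitted collectively
    k = params[0]
    return np.concatenate([model(t, k) - y for t, y in datasets])

fit = least_squares(residuals, x0=[1.0])
print("shared k estimate:", fit.x[0])
```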
WebGIS Platform Addressed to Forest Fire Management Methodologies
NASA Astrophysics Data System (ADS)
André Ramos-Simões, Nuno; Neto Paixão, Helena Maria; Granja Martins, Fernando Miguel; Pedras, Celestina; Lança, Rui; Silva, Elisa; Jordán, António; Zavala, Lorena; Soares, Cristina
2015-04-01
Forest fires are among the natural disasters that cause the most damage to nature, as well as high material costs and, sometimes, significant losses of human life. In the summer season, when high temperatures are attained, fire may progress rapidly and destroy vast areas of forest as well as rural and urban areas. Forest fires affect forest species, forest composition and structure, soil properties, and soil capacity for nutrient retention. In order to minimize the negative impact of forest fires on the environment, many studies have been developed, e.g. Jordán et al (2009), Cerdà & Jordán (2010), and Gonçalves & Vieira (2013). Nowadays, Remote Sensing (RS) and Geographic Information System (GIS) technologies are used as support tools in fire management decisions, namely during the fire, but also before and after. This study presents the development of a user-friendly WebGIS dedicated to sharing data and maps and providing updated information on forest fire management for stakeholders in the Iberian Peninsula. The WebGIS platform was developed with ArcGIS Online, ArcGIS for Desktop, HyperText Markup Language (HTML), and JavaScript. The platform has a database that includes spatial and alphanumeric information, such as fire origins, burned areas, vegetation change over time, natural terrain slope, land use, soil erosion, and fire-related hazards. The same database also contains the following relevant information: water sources, forest tracks and traffic ways, lookout posts, and urban areas. The aim of this study is to provide the authorities with a tool to assess risk areas and manage forest fire hazards more efficiently, giving more support to their decisions and helping the populations when facing this kind of phenomenon.
HBVPathDB: a database of HBV infection-related molecular interaction network.
Zhang, Yi; Bo, Xiao-Chen; Yang, Jing; Wang, Sheng-Qi
2005-03-21
This study describes the interactions of molecules and genes between hepatitis B virus (HBV) and its host, with the goals of understanding how viral and host genes and molecules are networked to form a biological system and of elucidating the mechanism of HBV infection. The knowledge of HBV infection-related reactions was organized into various kinds of pathways with carefully drawn graphs in HBVPathDB. Pathway information is stored with a relational database management system (DBMS), which is currently the most efficient way to manage large amounts of data, and queries are implemented with the powerful Structured Query Language (SQL). The search engine is written in Personal Home Page (PHP) with embedded SQL, and a web retrieval interface was developed for searching with Hypertext Markup Language (HTML). We present the first version of HBVPathDB, an HBV infection-related molecular interaction network database composed of 306 pathways involving 1,050 molecules. With carefully drawn graphs, pathway information stored in HBVPathDB can be browsed in an intuitive way. We developed an easy-to-use interface for flexible access to the details of the database, and convenient software is implemented to query and browse the pathway information. Four search page layout options (category search, gene search, description search, and unitized search) are supported by the search engine of the database. The database is freely available at http://www.bio-inf.net/HBVPathDB/HBV/. HBVPathDB already contains a considerable amount of HBV infection-related pathway information, which is suitable for in-depth analysis of the molecular interaction network of virus and host. HBVPathDB integrates pathway datasets with convenient software for query, browsing, and visualization, giving users more opportunity to identify regulatory key molecules as potential drug targets and to explore the possible mechanism of HBV infection based on gene expression datasets.
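The "gene search" option described above amounts to a join between pathway and molecule tables. A self-contained Python/sqlite3 sketch with an invented two-table schema (the production database used MySQL behind a PHP front end):

```python
import sqlite3

# Hypothetical schema; HBVPathDB itself used MySQL behind a PHP front end.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE pathway (id INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE molecule (id INTEGER PRIMARY KEY, symbol TEXT, pathway_id INTEGER);
INSERT INTO pathway VALUES (1, 'NF-kB activation by HBx', 'signal transduction');
INSERT INTO molecule VALUES (1, 'HBx', 1), (2, 'NFKB1', 1);
""")

# "Gene search": find pathways that involve a given molecule symbol
rows = db.execute("""
SELECT p.name, p.category FROM pathway p
JOIN molecule m ON m.pathway_id = p.id
WHERE m.symbol = ?""", ("HBx",)).fetchall()
print(rows)
```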
WaterML: an XML Language for Communicating Water Observations Data
NASA Astrophysics Data System (ADS)
Maidment, D. R.; Zaslavsky, I.; Valentine, D.
2007-12-01
One of the great impediments to the synthesis of water information is the plethora of formats used to publish such data; each water agency uses its own approach. XML (eXtensible Markup Language) languages are generalizations of Hypertext Markup Language used to communicate specific kinds of information via the internet. WaterML is an XML language for water observations data: streamflow, water quality, groundwater levels, climate, precipitation, and aquatic biology data, recorded at fixed point locations as a function of time. The Hydrologic Information System project of the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) has defined WaterML and prepared a set of web service functions called WaterOneFlow that use WaterML to provide information about observation sites, the variables measured there, and the values of those measurements. WaterML has been submitted to the Open GIS Consortium for harmonization with its standards for XML languages. Academic investigators at a number of testbed locations in the WATERS network are providing data in WaterML format using WaterOneFlow web services. The USGS and other federal agencies are also working with CUAHSI to similarly provide access to their data in WaterML through WaterOneFlow services.
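A short Python sketch of consuming a time series of this kind follows. The fragment below is a simplified invention; real WaterML responses carry much richer site, variable, and unit metadata in their own namespace:

```python
import xml.etree.ElementTree as ET

# A simplified, hypothetical WaterML-like fragment, not the published schema.
DOC = """<timeSeries>
  <variable>discharge_cfs</variable>
  <values>
    <value dateTime="2007-10-01T00:00">142.0</value>
    <value dateTime="2007-10-01T01:00">138.5</value>
  </values>
</timeSeries>"""

root = ET.fromstring(DOC)
variable = root.findtext("variable")
# Each <value> element pairs a timestamp attribute with a measured value
series = [(v.get("dateTime"), float(v.text)) for v in root.iter("value")]
print(variable, series)
```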
Intelligent retrieval of medical images from the Internet
NASA Astrophysics Data System (ADS)
Tang, Yau-Kuo; Chiang, Ted T.
1996-05-01
The objective of this study is to use Internet resources to provide a cost-effective, user-friendly method of accessing the medical image archive system and an easy method for the user to identify the images required. This paper describes the prototype system architecture, the implementation, and results. In the study, we prototype the Intelligent Medical Image Retrieval (IMIR) system as a Hypertext Transport Protocol (HTTP) server and provide Hypertext Markup Language forms so that the user, as an Internet client, can use a browser to enter image retrieval criteria for review. We are developing the intelligent retrieval engine, with the capability to map free-text search criteria to the standard terminology used for medical image identification. We evaluate retrieved records based on the number of free-text entries matched and their level of relevance to the standard terminology. We are in the integration and testing phase. We have collected only a few different types of images for testing and have trained a few phrases to map the free text to standard medical terminology. Nevertheless, we are able to demonstrate IMIR's ability to search, retrieve, and review medical images from the archives using a general Internet browser. The prototype also uncovered potential problems in performance, security, and accuracy.
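The free-text-to-terminology mapping the authors describe can be pictured with a small sketch. The synonym table, records, and scoring rule in this Python version are hypothetical:

```python
# Hypothetical synonym table mapping free-text words to controlled terms.
SYNONYMS = {
    "chest": "thorax", "xray": "radiograph", "x-ray": "radiograph",
    "head": "skull", "mri": "magnetic resonance imaging",
}

def normalize(query):
    # Replace each free-text word with its standard term where one exists
    return [SYNONYMS.get(w, w) for w in query.lower().split()]

def score(record_terms, query_terms):
    # Rank by the number of standard terms matched
    return len(set(record_terms) & set(query_terms))

archive = [("img1", ["thorax", "radiograph"]), ("img2", ["skull", "radiograph"])]
q = normalize("chest xray")
ranked = sorted(archive, key=lambda rec: score(rec[1], q), reverse=True)
print(ranked)
```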
Report of official foreign travel to France, June 7-20, 2000
DOE Office of Scientific and Technical Information (OSTI.GOV)
J.D. Mason
2000-07-11
The Department of Energy (DOE) has moved rapidly toward electronic production, management, and dissemination of scientific and technical information. The World-Wide Web (WWW) has become a primary means of information dissemination. Electronic commerce (EC) is becoming the preferred means of procurement. DOE, like other government agencies, depends on and encourages the use of international standards in data communications. Like most government agencies, DOE has expressed a preference for openly developed standards over proprietary designs promoted as "standards" by vendors. In particular, there is a preference for standards developed by organizations such as the International Organization for Standardization (ISO) and the American National Standards Institute (ANSI) that use open, public processes to develop their standards. Among the most widely adopted international standards is the Standard Generalized Markup Language (SGML, ISO 8879:1986, FIPS 152), to which DOE long ago made a commitment. Besides the official commitment, which has resulted in several specialized projects, DOE makes heavy use of coding derived from SGML: most documents on the WWW are coded in HTML (Hypertext Markup Language), which is an application of SGML. The World-Wide Web Consortium (W3C), with the backing of major software houses like Adobe, IBM, Microsoft, Netscape, Oracle, and Sun, is promoting XML (eXtensible Markup Language), a class of SGML applications, for the future of the WWW and the basis for EC. In support of DOE's use of these standards, the author has served since 1985 as Chairman of the international committee responsible for SGML and related standards, ISO/IEC JTC1/SC34 (SC34) and its predecessor organizations. During his June 2000 trip, he chaired the spring 2000 meeting of SC34 in Paris, France. He also attended a major conference on the use of SGML and XML and led a meeting of the International SGML/XML Users' Group (ISUG). In addition to the widespread use of the WWW among DOE's plants and facilities in Oak Ridge and among DOE sites across the nation, there are several SGML-based projects at the Oak Ridge Y-12 Plant. The local project team developed an SGML-based publications system that has been used for several major reports at the Y-12 Plant and Oak Ridge National Laboratory (ORNL). SGML is a component of the Weapons Records Archiving and Preservation (WRAP) project at the Y-12 Plant and is the format for catalog metadata chosen for weapons records by the Nuclear Weapons Information Group (NWIG). The Ferret system for automated classification analysis will use XML to structure its knowledge base. Supporting standards development allows DOE and the Y-12 Plant the opportunity both to provide input into the process and to benefit from contact with some of the leading experts in the subject matter. Oak Ridge has been for some years the location to which other DOE sites turn for expertise in SGML and related topics.
MYCIN II: design and implementation of a therapy reference with complex content-based indexing.
Kim, D. K.; Fagan, L. M.; Jones, K. T.; Berrios, D. C.; Yu, V. L.
1998-01-01
We describe the construction of MYCIN II, a prototype system that provides for content-based markup and search of a forthcoming clinical therapeutics textbook, Antimicrobial Therapy and Vaccines. Existing commercial search technology for digital references utilizes generic tools such as textword-based searches with geographical or statistical refinements. We suggest that the drawbacks of such systems significantly restrict their use in everyday clinical practice, despite the great need for the information contained within these same references. The system we describe is intended to supplement keyword searching so that certain important questions can be asked easily and can be answered reliably (in terms of precision and recall). Our method attacks this problem in a restricted domain of knowledge: clinical infectious disease. For example, we would like to be able to answer the class of questions exemplified by the following query: "What antimicrobial agents can be used to treat endocarditis caused by Eikenella corrodens?" We have compiled and analyzed a list of such questions to develop a concept-based markup scheme. This scheme was then applied within an HTML markup to electronically "highlight" passages from three textbook chapters. We constructed a functioning web-based search interface. Our system also provides semi-automated querying of PubMed using our concept markup and the user's actions as a guide. PMID:9929205
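A concept-based index of this kind can be pictured with a tiny sketch. The index keys, passages, and listed agents in this Python version are placeholder illustrations only, not clinical guidance and not the authors' actual markup scheme:

```python
# Hypothetical concept index: (condition, organism) -> marked-up passages.
INDEX = {
    ("endocarditis", "Eikenella corrodens"): [
        {"chapter": 12, "agents": ["agent-A", "agent-B"]},  # placeholders
    ],
}

def answer(condition, organism):
    # Look up passages tagged with both concepts and collect the agents cited
    passages = INDEX.get((condition, organism), [])
    agents = sorted({a for p in passages for a in p["agents"]})
    return agents or ["no indexed passage found"]

print(answer("endocarditis", "Eikenella corrodens"))
```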
Frank, M S; Dreyer, K
2001-06-01
We describe a virtual web site hosting technology that enables educators in radiology to emblazon and make available for delivery on the world wide web their own interactive educational content, free from dependencies on in-house resources and policies. This suite of technologies includes a graphically oriented software application, designed for the computer novice, to facilitate the input, storage, and management of domain expertise within a database system. The database stores this expertise as choreographed and interlinked multimedia entities including text, imagery, interactive questions, and audio. Case-based presentations or thematic lectures can be authored locally, previewed locally within a web browser, then uploaded at will as packaged knowledge objects to an educator's (or department's) personal web site housed within a virtual server architecture. This architecture can host an unlimited number of unique educational web sites for individuals or departments in need of such service. Each virtual site's content is stored within that site's protected back-end database connected to Internet Information Server (Microsoft Corp, Redmond WA) using a suite of Active Server Page (ASP) modules that incorporate Microsoft's Active Data Objects (ADO) technology. Each person's or department's electronic teaching material appears as an independent web site with different levels of access--controlled by a username-password strategy--for teachers and students. There is essentially no static hypertext markup language (HTML). Rather, all pages displayed for a given site are rendered dynamically from case-based or thematic content that is fetched from that virtual site's database. The dynamically rendered HTML is displayed within a web browser in a Socratic fashion that can assess the recipient's current fund of knowledge while providing instantaneous user-specific feedback. Each site is emblazoned with the logo and identification of the participating institution. Individuals with teacher-level access can use a web browser to upload new content as well as manage content already stored on their virtual site. Each virtual site stores, collates, and scores participants' responses to the interactive questions posed on line. This virtual web site strategy empowers the educator with an end-to-end solution for creating interactive educational content and hosting that content within the educator's personalized and protected educational site on the world wide web, thus providing a valuable outlet that can magnify the impact of his or her talents and contributions.
XML Based Scientific Data Management Facility
NASA Technical Reports Server (NTRS)
Mehrotra, Piyush; Zubair, M.; Ziebartt, John (Technical Monitor)
2001-01-01
The World Wide Web Consortium has developed the Extensible Markup Language (XML) to support the building of better information management infrastructures. The scientific computing community, realizing the benefits of such markup, has designed markup languages for scientific data. In this paper, we propose an XML-based scientific data management facility, XDMF. The project is motivated by the fact that even though a great deal of scientific data is being generated, it is not being shared because of a lack of standards and infrastructure support for discovering and transforming the data. The proposed data management facility can be used to discover the scientific data itself and the transformation functions, and also to apply the required transformations. We have built a prototype of the proposed data management facility that can work on different platforms. We implemented the system using Java and the Apache XSLT engine Xalan. To support remote data and transformation functions, we had to extend the XSLT specification and the Xalan package.
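As a sketch of the transformation step such a facility performs, the following XSLT stylesheet renders a catalog of datasets as an HTML list. The element names (catalog, dataset, title, uri) are hypothetical stand-ins, not XDMF's actual schema.

  <?xml version="1.0"?>
  <!-- Sketch only: element names (catalog, dataset, title, uri) are hypothetical. -->
  <xsl:stylesheet version="1.0"
                  xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:template match="/catalog">
      <html>
        <body>
          <h1>Discovered datasets</h1>
          <ul>
            <!-- One linked list item per dataset record -->
            <xsl:for-each select="dataset">
              <li><a href="{uri}"><xsl:value-of select="title"/></a></li>
            </xsl:for-each>
          </ul>
        </body>
      </html>
    </xsl:template>
  </xsl:stylesheet>

Any XSLT 1.0 processor, Xalan included, would accept a stylesheet of this shape.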
STS Case Study Development Support
NASA Technical Reports Server (NTRS)
Rosa de Jesus, Dan A.; Johnson, Grace K.
2013-01-01
The Shuttle Case Study Collection (SCSC) has been developed using lessons learned documented by NASA engineers, analysts, and contractors. The SCSC provides educators with a new tool to teach real-world engineering processes, with the goal of providing unique educational materials that enhance critical thinking, decision-making, and problem-solving skills. During this third phase of the project, responsibilities included revision of the Hyper Text Markup Language (HTML) source code to ensure all pages follow World Wide Web Consortium (W3C) standards, and the addition and editing of website content, including text, documents, and images. Basic HTML knowledge was required, as was basic knowledge of photo-editing software and training in NASA's Content Management System for website design. The outcome of this project was its release to the public.
2014-06-22
GIG Global Information Grid GOTS Government Off-the-Shelf HTML Hyper Text Markup Language ICT Information and Communication Technology IEC...maintenance, retrieval, and preservation of vital information created in public and private organizations in all sectors of the economy. It is also the...constructed in the 1940s, as part of a government effort to provide employment during the Depression, and boost the economy. This road is set in
Report of Official Foreign Travel to Germany, May 16-June 1, 2001
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. D. Mason
2001-06-18
The Department of Energy (DOE) and associated agencies have moved rapidly toward electronic production, management, and dissemination of scientific and technical information. The World-Wide Web (WWW) has become a primary means of information dissemination. Electronic commerce (EC) is becoming the preferred means of procurement. DOE, like other government agencies, depends on and encourages the use of international standards in data communications. Like most government agencies, DOE has expressed a preference for openly developed standards over proprietary designs promoted as "standards" by vendors. In particular, there is a preference for standards developed by organizations such as the International Organization for Standardization (ISO) and the American National Standards Institute (ANSI) that use open, public processes to develop their standards. Among the most widely adopted international standards is the Standard Generalized Markup Language (SGML, ISO 8879:1986, FIPS 152), to which DOE long ago made a commitment. Besides the official commitment, which has resulted in several specialized projects, DOE makes heavy use of coding derived from SGML: most documents on the WWW are coded in HTML (Hypertext Markup Language), which is an application of SGML. The World-Wide Web Consortium (W3C), with the backing of major software houses like Adobe, IBM, Microsoft, Netscape, Oracle, and Sun, is promoting XML (eXtensible Markup Language), a class of SGML applications, for the future of the WWW and the basis for EC. In support of DOE's use of these standards, I have served since 1985 as Chairman of the international committee responsible for SGML and related standards, ISO/IEC JTC1/SC34 (SC34), and its predecessor organizations. During my May 2001 trip, I chaired the spring 2001 meeting of SC34 in Berlin, Germany. I also attended XML Europe 2001, a major conference on the use of SGML and XML sponsored by the Graphic Communications Association (GCA), and chaired a meeting of the International SGML/XML Users' Group (ISUG). In addition to the widespread use of the WWW among DOE's plants and facilities in Oak Ridge and among DOE sites across the nation, there have been several past and present SGML- and XML-based projects at the Y-12 National Security Complex (Y-12). Our local project team has done SGML and XML development at Y-12 and Oak Ridge National Laboratory (ORNL) since the late 1980s. SGML is a component of the Weapons Records Archiving and Preservation (WRAP) project at Y-12 and is the format for catalog metadata chosen for weapons records by the Nuclear Weapons Information Group (NWIG). The "Ferret" system for automated classification analysis uses XML to structure its knowledge base. The Ferret team also provides XML consulting to OSTI and DOE Headquarters, particularly the National Nuclear Security Administration (NNSA). Supporting standards development allows DOE and Y-12 the opportunity both to provide input into the process and to benefit from contact with some of the leading experts in the subject matter. Oak Ridge has been for some years the location to which other DOE sites turn for expertise in SGML, XML, and related topics.
NASA Astrophysics Data System (ADS)
Burton, Paul
1998-05-01
Thirty useful physics-related sites are listed to help get you started. I hope you will find some of the following sites of use in your teaching or good for pointing your pupils in the right direction when doing research. I have not attempted to rank or sort them in any order. However, by the time you read this issue of Physics Education some of the sites may not be available; this is the nature of the net. Those not wishing to retype each address can access them from my school's physics page (http://www.bootham.demon.co.uk/physics/links.html) or e-mail me at pkb@bootham.demon.co.uk and I can send you a document with the hypertext live links in. The new IOP sponsored 16-19 Physics project is promising great things with its own Internet site. You will be able to download information, updates, worksheets etc. Any queries about the development of this project at present can be sent to Evelyn van Dyk at: 16-19project@iop.org
Engineering and Physical Sciences Research Council: http://www.epsrc.ac.uk
Particle Physics and Astronomy Research Council: http://www.pparc.ac.uk
American Institute of Physics: http://www.aip.org
Usenet Physics FAQ (frequently asked questions): http://www.weburbia.demon.co.uk/physics/faq.html
CERN: http://www.cern.ch/
BBC Education: http://www.bbc.co.uk/education/
Useful data on the Periodic Table: http://www.shef.ac.uk/chemistry/web-elements/
JET WWW index page: http://www.jet.uk
NERC satellite station, Dundee University: http://www.sat.dundee.ac.uk/
The Meteorological Office: http://www.meto.govt.uk/
The Smithsonian Institute, Washington, DC: http://www.si.edu/newstart.htm
Frequently asked questions on time and frequency: http://www.boulder.nist.gov/timefreq/faq/faq.htm
Physics news: http://www.het.brown.edu/news/index.html
TIPTOP: The Internet Pilot to Physics: http://www.tp.umu.se/TIPTOP/
A Dictionary of Scientific Quotations: http://naturalscience.com/dsqhome.html
ScI-Journal: an on-line publication for science students: http://www.soton.ac.uk/~plf/ScI-Journal/
Science On-line: http://www.shu.ac.uk/schools/sci/sol/contents.htm
Physics humour: http://quark.physics.uwo.ca/~harwood/humor12.htm
Searching for someone's e-mail address?: http://www.four11.com
SKY publications: http://www.skypub.com
Planet Science: http://www.keysites.com
New Scientist: http://www.newscientist.com
NASA links to the American space program: http://www.nasa.gov
NASA Jet Propulsion Laboratory: http://www.jpl.nasa.gov
Hewlett-Packard: http://www.hp.com
The Bradford Schools Telescope Project: http://www.telescope.org/rti/nuffield/
To contact a professional society: http://www.lib.uwaterloo.ca/society/overview.html
The Schools' Physics Group: post-16 issues: http://diana.ecs.soton.ac.uk/~pm/Physics/post16.html
Sleuth search for physics and chemistry: http://www.isleuth.com/index.shtml
The Particle Adventure: http://pdg.lbl.gov/cpep/adventure_home.html
Acknowledgments: I thank colleagues David Robinson and Robin Peach for their help in selecting and validating these sites and William Try, pupil at Bootham School, for preparing and maintaining the department's homepage with hypertext links. Received 21 January 1998
Education and Training Module in Alertness Management
NASA Technical Reports Server (NTRS)
Mallis, M. M.; Brandt, S. L.; Oyung, R. L.; Reduta, D. D.; Rosekind, M. R.
2006-01-01
The education and training module (ETM) in alertness management has now been integrated as part of the training regimen of the Pilot Proficiency Awards Program ("WINGS") of the Federal Aviation Administration. Originated and now maintained current by the Fatigue Countermeasures Group at NASA Ames Research Center, the ETM in Alertness Management is designed to give pilots the benefit of the best and most recent research on the basics of sleep physiology, the causes of fatigue, and strategies for managing alertness during flight operations. The WINGS program is an incentive program that encourages pilots at all licensing levels to participate in recurrent training, upon completion of which distinctive lapel or tie pins (wings) and certificates of completion are awarded. In addition to flight training, all WINGS applicants must attend at least one FAA-sponsored safety seminar, FAA-sanctioned safety seminar, or industry recurrent training program. The Fatigue Countermeasures Group provides an FAA-approved industry recurrent training program through an on-line General Aviation (GA) WINGS ETM in alertness management to satisfy this requirement. Since 1993, the Fatigue Countermeasures Group has translated fatigue and alertness information to operational environments by conducting two-day ETM workshops oriented primarily toward air-carrier operations subject to Part 121 of the Federal Aviation Regulations pertaining to such operations. On the basis of the information presented in the two-day ETM workshops, an ETM was created for GA pilots and was transferred to a Web-based version. To comply with the requirements of the WINGS Program, the original Web-based version has been modified to include hypertext markup language (HTML) content that makes information easily accessible, in-depth testing of alertness-management knowledge, new interactive features, and increased informational resources for GA pilots. Upon successful completion of this training module, a participant receives a computer-screen display of a certificate of completion. The certificate, which includes the pilot's name and an identifying number, can be printed out and submitted, for ground training credit, with the pilot's WINGS application.
NASA Astrophysics Data System (ADS)
Lahti, Paul M.; Motyka, Eric J.; Lancashire, Robert J.
2000-05-01
A straightforward procedure is described to combine computation of molecular vibrational modes using commonly available molecular modeling programs with visualization of the modes using advanced features of the MDL Information Systems Inc. Chime World Wide Web browser plug-in. Minor editing of experimental spectra that are stored in the JCAMP-DX format allows linkage of IR spectral frequency ranges to Chime molecular display windows. The spectra and animation files can be combined by Hypertext Markup Language programming to allow interactive linkage between experimental spectra and computationally generated vibrational displays. Both the spectra and the molecular displays can be interactively manipulated to allow the user maximum control of the objects being viewed. This procedure should be very valuable not only for aiding students through visual linkage of spectra and various vibrational animations, but also by assisting them in learning the advantages and limitations of computational chemistry by comparison to experiment.
Radar Unix: a complete package for GPR data processing
NASA Astrophysics Data System (ADS)
Grandjean, Gilles; Durand, Herve
1999-03-01
A complete package for ground penetrating radar data interpretation, including data processing, forward modeling, and a case-history database, is presented. Running on a Unix operating system, its architecture consists of a graphical user interface generating batch files transmitted to a library of processing routines. This design allows better software maintenance and gives users the possibility of running processing or modeling batch files by themselves, deferred in time. A case-history database is available; it consists of a hypertext document that can be consulted using a standard HTML browser. All the software specifications are presented through a realistic example.
Records and history of the United States Geological Survey
Nelson, Clifford M.
2000-01-01
This publication contains two presentations in Portable Document Format (PDF). The first is Renee M. Jaussaud's inventory of the documents accessioned by the end of 1997 into Record Group 57 (Geological Survey) at the National Archives and Records Administration's (NARA) Archives II facility in College Park, Md., but not the materials in NARA's regional archives. The second is Mary C. Rabbitt's 'The United States Geological Survey 1879-1989,' which appeared in 1989 as USGS Circular 1050. USGS Circular 1050 is also presented in Hyper Text Markup Language (HTML) format.
jsNMR: an embedded platform-independent NMR spectrum viewer.
Vosegaard, Thomas
2015-04-01
jsNMR is a lightweight NMR spectrum viewer written in JavaScript/HyperText Markup Language (HTML), which provides a cross-platform spectrum visualizer that runs on all computer architectures including mobile devices. Experimental (and simulated) datasets are easily opened in jsNMR by (i) drag and drop on a jsNMR browser window, (ii) by preparing a jsNMR file from the jsNMR web site, or (iii) by mailing the raw data to the jsNMR web portal. jsNMR embeds the original data in the HTML file, so a jsNMR file is a self-transforming dataset that may be exported to various formats, e.g. comma-separated values. The main applications of jsNMR are to provide easy access to NMR data without the need for dedicated software installed and to provide the possibility to visualize NMR spectra on web sites. Copyright © 2015 John Wiley & Sons, Ltd.
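The self-transforming-file idea described above can be illustrated with a minimal HTML page that carries its own data. This is only a sketch of the pattern, not jsNMR's actual file format; the data block and rendering code are invented for illustration.

  <!DOCTYPE html>
  <!-- Sketch of a self-contained data file: raw values travel inside the page. -->
  <html>
  <head><meta charset="utf-8"><title>Embedded spectrum (sketch)</title></head>
  <body>
    <canvas id="plot" width="400" height="200"></canvas>
    <!-- Hypothetical embedded dataset; jsNMR's real format differs -->
    <script type="application/json" id="spectrum">
      {"ppm": [8.0, 7.5, 7.0, 6.5], "intensity": [0.1, 1.0, 0.3, 0.05]}
    </script>
    <script>
      var data = JSON.parse(document.getElementById("spectrum").textContent);
      var ctx = document.getElementById("plot").getContext("2d");
      ctx.beginPath();
      ctx.moveTo(20, 180 - 160 * data.intensity[0]);
      for (var i = 1; i < data.ppm.length; i++) {
        // Map each (ppm, intensity) pair onto canvas coordinates
        ctx.lineTo(20 + 40 * i, 180 - 160 * data.intensity[i]);
      }
      ctx.stroke();
    </script>
  </body>
  </html>

Because the data and the viewer live in one file, the document remains usable on any platform with a browser, which is the portability property the abstract emphasizes.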
Webmail: an Automated Web Publishing System
NASA Astrophysics Data System (ADS)
Bell, David
A system for publishing frequently updated information to the World Wide Web will be described. Many documents now hosted by the NOAO Web server require timely posting and frequent updates, but need only minor changes in markup or are in a standard format requiring only conversion to HTML. These include information from outside the organization, such as electronic bulletins, and a number of internal reports, both human and machine generated. Webmail uses procmail and Perl scripts to process incoming email messages in a variety of ways. This processing may include wrapping or conversion to HTML, posting to the Web or internal newsgroups, updating search indices or links on related pages, and sending email notification of the new pages to interested parties. The Webmail system has been in use at NOAO since early 1997 and has steadily grown to include fourteen recipes that together handle about fifty messages per week.
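The simplest of the processing recipes described, wrapping a plain-text message in HTML, might produce output along the following lines. This is a hypothetical bulletin, not an actual NOAO recipe or document.

  <!-- Hypothetical output of a "wrap" recipe: the body is kept verbatim in <pre>,
       and only a title is added before posting to the web. -->
  <html>
  <head><title>Electronic Bulletin, 12 March 1997</title></head>
  <body>
    <h1>Electronic Bulletin, 12 March 1997</h1>
    <pre>
    Text of the incoming mail message, reproduced unchanged.
    </pre>
  </body>
  </html>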
Facilitating NCAR Data Discovery by Connecting Related Resources
NASA Astrophysics Data System (ADS)
Rosati, A.
2012-12-01
Linking datasets, creators, and users by employing the proper standards helps to increase the impact of funded research. In order for users to find a dataset, it must first be named. Data citations play the important role of giving datasets a persistent presence by assigning a formal "name" and location. This project focuses on the next step of the "name-find-use" sequence: enhancing discoverability of NCAR data by connecting related resources on the web. By studying metadata schemas that document datasets, I examined how Semantic Web approaches can help to reach the widest possible range of data users. The focus was to move from search engine optimization (SEO) to information connectivity. Two main markup types are very visible in the Semantic Web and applicable to scientific dataset discovery: the Open Archives Initiative-Object Reuse and Exchange (OAI-ORE - www.openarchives.org) and Microdata (HTML5 and www.schema.org). My project creates pilot aggregations of related resources using both markup types for three case studies: the North American Regional Climate Change Assessment Program (NARCCAP) dataset and related publications, the Palmer Drought Severity Index (PDSI) animation and image files from NCAR's Visualization Lab (VisLab), and the multidisciplinary data types and formats from the Advanced Cooperative Arctic Data and Information Service (ACADIS). This project documents the differences between these markups and how each creates connectedness on the web. My recommendations point toward the most efficient and effective markup schema for aggregating resources within the three case studies based on the following assessment criteria: ease of use, current state of support and adoption of technology, integration with typical web tools, available vocabularies and geoinformatic standards, interoperability with current repositories and access portals (e.g. ESG, Java), and relation to data citation tools and methods.
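A Microdata aggregation of the kind piloted here can be sketched with schema.org's Dataset vocabulary. The names and URLs below are placeholders, not the actual NARCCAP records.

  <!-- Sketch of schema.org Microdata for a dataset landing page; names and
       URLs are placeholders, not the project's real records. -->
  <div itemscope itemtype="https://schema.org/Dataset">
    <h1 itemprop="name">Example regional climate dataset</h1>
    <p itemprop="description">Model output used in a regional assessment.</p>
    <link itemprop="url" href="https://example.org/data/dataset-landing-page"/>
    <!-- A related publication, connected to the dataset in the same markup -->
    <div itemprop="citation" itemscope itemtype="https://schema.org/ScholarlyArticle">
      <span itemprop="name">Companion methods paper</span>
    </div>
  </div>

Nesting the related publication inside the dataset's item is what lets a crawler see the two resources as connected rather than merely co-located on a page.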
Hypermedia 1990 structured Hypertext tutorial
NASA Technical Reports Server (NTRS)
Johnson, J. Scott
1990-01-01
Hypermedia 1990 structured Hypertext tutorial is presented in the form of view-graphs. The following subject areas are covered: structured hypertext; analyzing hypertext documents for structure; designing structured hypertext documents; creating structured hypertext applications; structuring service and repair documents; maintaining structured hypertext documents; and structured hypertext conclusion.
Feral Hypertext: When Hypertext Literature Escapes Control
NASA Astrophysics Data System (ADS)
Rettberg, Jill Walker
This article explores the historical development of hypertext, arguing that we have seen a transition from early visions and implementations of hypertext that primarily dealt with using hypertext to gain greater control over knowledge and ideas, to today's messy web. Pre-web hypertext can be seen as a domesticated species bred in captivity. On the web, however, some breeds of hypertext have gone feral. Feral hypertext is no longer tame and domesticated, but is fundamentally out of our control. In order to understand and work with feral hypertext, we need to accept this and think more as hunter-gatherers than as the farmers we were for domesticated hypertext. The article discusses hypertext in general with an emphasis on literary and creative hypertext practice.
An electronic laboratory notebook based on HTML forms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marstaller, J.E.; Zorn, M.D.
The electronic notebook records information that has traditionally been kept in handwritten laboratory notebooks. It keeps detailed information about the progress of the research, such as the optimization of primers, the screening of the primers and, finally, the mapping of the probes. The notebook provides two areas of service: data entry, and review of data at all stages. World Wide Web browsers with HTML-based forms provide a fast and easy mechanism for creating forms-based user interfaces. The computer scientist can sit down with the biologist and rapidly make changes in response to the user's comments. Furthermore, the HTML forms work equally well on a number of different hardware platforms; thus the biologists may continue using their Macintosh computers and find a familiar interface if they have to work on a Unix workstation. The web browser can be run from any machine connected to the Internet; thus the users are free to enter or view information even away from their labs, at home or while on travel. Access can be restricted by password and other means to secure the confidentiality of the data. A bonus that is hard to implement otherwise is the facile connection to outside resources. Linking local information to data in public databases is only a hypertext link away, with little or no additional programming effort.
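A data-entry page of the kind described reduces to ordinary HTML form markup posting to a server-side script. The action URL and field names below are hypothetical, not the system's real interface.

  <!-- Sketch of a notebook data-entry form; the action URL and field names
       are hypothetical, not the system's actual interface. -->
  <form method="post" action="/cgi-bin/notebook-entry">
    <p>Primer name: <input type="text" name="primer"></p>
    <p>Screening result:
      <select name="result">
        <option>pass</option>
        <option>fail</option>
      </select>
    </p>
    <p>Notes:<br><textarea name="notes" rows="4" cols="40"></textarea></p>
    <p><input type="submit" value="Record entry"></p>
  </form>

Because this is plain HTML, the same form renders on a Macintosh or a Unix workstation, which is the cross-platform benefit the abstract notes.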
Distance education through the Internet: the GNA-VSNS biocomputing course.
de la Vega, F M; Giegerich, R; Fuellen, G
1996-01-01
A prototype course on biocomputing was delivered via international computer networks in early summer 1995. The course lasted 11 weeks and was offered free of charge. It was organized by the BioComputing Division of the Virtual School of Natural Sciences, which is a member school of the Globewide Network Academy. It brought together 34 students and 7 instructors from all over the world, and covered the basics of sequence analysis. Five authors from Germany and the USA prepared a hypertext book which was discussed in weekly study sessions that took place in a virtual classroom at the BioMOO electronic conferencing system. The course aimed at students with backgrounds in molecular biology, biomedicine or computer science, complementing and extending their skills with an interdisciplinary curriculum. Special emphasis was placed on the use of Internet resources, and the development of new teaching tools. The hypertext book includes direct links to sequence analysis and databank search services on the Internet. A tool for the interactive visualization of unit-cost pairwise sequence alignment was developed for the course. All course material will stay accessible at the World Wide Web address (Uniform Resource Locator) http://www.techfak.uni-bielefeld.de/bcd/welcome.html. This paper describes the aims and organization of the course, and gives a preliminary account of this novel experience in distance education.
Zare-Farashbandi, Firoozeh; Ramezan-Shirazi, Mahtab; Ashrafi-Rizi, Hasan; Nouri, Rasool
2014-01-01
Recent progress in providing innovative solutions in the organization of electronic resources and research in this area shows a global trend in the use of new strategies, such as metadata, to facilitate the description, placement, organization and retrieval of resources in the web environment. In this context, library metadata standards have a special place; therefore, the purpose of the present study was a comparative study of the Central Libraries' Websites of Iran State Universities regarding Hyper Text Mark-up Language (HTML) and Dublin Core metadata element usage in 2011. The method of this study is applied-descriptive and the data collection tool is a checklist created by the researchers. The statistical community includes 98 websites of the Iranian State Universities of the Ministry of Health and Medical Education and the Ministry of Science, Research and Technology, and the method of sampling is the census. Information was collected through observation and direct visits to websites, and data analysis was prepared with Microsoft Excel software, 2011. The results of this study indicate that none of the websites use Dublin Core (DC) metadata and that only a few of them have used overlapping elements between HTML meta tags and Dublin Core (DC) elements. The percentage of overlap with DC elements in the Ministry of Health was 56% for both description and keywords and, in the Ministry of Science, was 45% for keywords and 39% for description. HTML meta tags, however, have a moderate presence in both Ministries, as the most-used elements were keywords and description (56%) and the least-used elements were date and format (0%). It was observed that the Ministry of Health and the Ministry of Science follow the same path for using the Dublin Core standard on their websites in the future. Because Central Library Websites are an example of scientific web pages, special attention in designing them can help researchers to achieve faster and more accurate information resources. Therefore, the influence of librarians' ideas on the awareness of web designers and developers will be important for using metadata elements in general, and specifically for applying such standards.
Zare-Farashbandi, Firoozeh; Ramezan-Shirazi, Mahtab; Ashrafi-Rizi, Hasan; Nouri, Rasool
2014-01-01
Introduction: Recent progress in providing innovative solutions in the organization of electronic resources and research in this area shows a global trend in the use of new strategies, such as metadata, to facilitate the description, placement, organization and retrieval of resources in the web environment. In this context, library metadata standards have a special place; therefore, the purpose of the present study was a comparative study of the Central Libraries' Websites of Iran State Universities regarding Hyper Text Mark-up Language (HTML) and Dublin Core metadata element usage in 2011. Materials and Methods: The method of this study is applied-descriptive and the data collection tool is a checklist created by the researchers. The statistical community includes 98 websites of the Iranian State Universities of the Ministry of Health and Medical Education and the Ministry of Science, Research and Technology, and the method of sampling is the census. Information was collected through observation and direct visits to websites, and data analysis was prepared with Microsoft Excel software, 2011. Results: The results of this study indicate that none of the websites use Dublin Core (DC) metadata and that only a few of them have used overlapping elements between HTML meta tags and Dublin Core (DC) elements. The percentage of overlap with DC elements in the Ministry of Health was 56% for both description and keywords and, in the Ministry of Science, was 45% for keywords and 39% for description. HTML meta tags, however, have a moderate presence in both Ministries, as the most-used elements were keywords and description (56%) and the least-used elements were date and format (0%). Conclusion: It was observed that the Ministry of Health and the Ministry of Science follow the same path for using the Dublin Core standard on their websites in the future. Because Central Library Websites are an example of scientific web pages, special attention in designing them can help researchers to achieve faster and more accurate information resources. Therefore, the influence of librarians' ideas on the awareness of web designers and developers will be important for using metadata elements in general, and specifically for applying such standards. PMID:24741646
Addressing hypertext design and conversion issues
NASA Technical Reports Server (NTRS)
Glushko, Robert J.
1990-01-01
Hypertext is a network of information units connected by relational links. A hypertext system is a configuration of hardware and software that presents a hypertext to users and allows them to manage and access the information that it contains. Hypertext is also a user interface concept that closely supports the ways that people use printed information. Hypertext concepts encourage modularity and the elimination of redundancy in databases because information can be stored only once but viewed in any appropriate context. Hypertext is such a hot idea because it is an enabling technology: workstations and personal computers finally provide enough local processing power for hypertext user interfaces.
Hypertext: Link to the Future.
ERIC Educational Resources Information Center
Marmion, Dan
1990-01-01
Describes the origins of hypertext and reviews the history of the concept of nonsequential access to information that led to hypertext. Technological developments that have been combined with hypertext are discussed, including workstations, video and laser disk technology, and microcomputers; and library applications of hypertext and hypermedia…
Hypertext and Information Retrieval.
ERIC Educational Resources Information Center
Smith, Karen E.; And Others
1988-01-01
An overview of hypertext and hypermedia is followed by a description of the Intermedia system, and possibilities for using hypertext in the information industry are explored. A sidebar discusses information retrieval in the humanities using hypertext, and a 58-item annotated bibliography on hypertext is presented. (7 references) (MES)
Automating hypertext for decision support
NASA Technical Reports Server (NTRS)
Bieber, Michael
1990-01-01
A decision support system (DSS) shell is being constructed that can support applications in a variety of fields, e.g., engineering, manufacturing, finance. The shell provides a hypertext-style interface for 'navigating' among DSS application models, data, and reports. The traditional notion of hypertext had to be enhanced: hypertext normally requires manually pre-defined links, whereas a DSS shell requires that hypertext connections be built 'on the fly'. The role of hypertext in augmenting DSS applications and the decision-making process is discussed, along with how hypertext nodes, links, and link markers tailored to an arbitrary DSS application are automatically generated.
BioJS DAGViewer: A reusable JavaScript component for displaying directed graphs
Micklem, Gos
2014-01-01
Summary: The DAGViewer BioJS component is a reusable JavaScript component made available as part of the BioJS project and intended to be used to display graphs of structured data, with a particular emphasis on Directed Acyclic Graphs (DAGs). It enables users to embed representations of graphs of data, such as ontologies or phylogenetic trees, in hyper-text documents (HTML). This component is generic, since it is capable (given the appropriate configuration) of displaying any kind of data that is organised as a graph. The features of this component which are useful for examining and filtering large and complex graphs are described. Availability: http://github.com/alexkalderimis/dag-viewer-biojs; http://github.com/biojs/biojs; http://dx.doi.org/10.5281/zenodo.8303. PMID:24627804
Implications of the Java language on computer-based patient records.
Pollard, D; Kucharz, E; Hammond, W E
1996-01-01
The growth of the utilization of the World Wide Web (WWW) as a medium for the delivery of computer-based patient records (CBPR) has created a new paradigm in which clinical information may be delivered. Until recently the authoring tools and environment for application development on the WWW have been limited to Hyper Text Markup Language (HTML) utilizing common gateway interface scripts. While, at times, this provides an effective medium for the delivery of CBPR, it is a less than optimal solution. The server-centric dynamics and low levels of interactivity do not provide for a robust application which is required in a clinical environment. The emergence of Sun Microsystems' Java language is a solution to the problem. In this paper we examine the Java language and its implications to the CBPR. A quantitative and qualitative assessment was performed. The Java environment is compared to HTML and Telnet CBPR environments. Qualitative comparisons include level of interactivity, server load, client load, ease of use, and application capabilities. Quantitative comparisons include data transfer time delays. The Java language has demonstrated promise for delivering CBPRs.
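The contrast drawn in this comparison can be sketched in period HTML: a server-centric CGI form versus an embedded Java applet that keeps interactivity on the client. Both fragments are generic illustrations with hypothetical names, not the authors' system.

  <!-- Generic sketches, not the authors' system. -->
  <!-- Server-centric approach: every interaction is a round trip to a CGI script -->
  <form method="post" action="/cgi-bin/record-lookup">
    Patient ID: <input type="text" name="patientId">
    <input type="submit" value="Fetch record">
  </form>
  <!-- Client-side approach: an embedded applet keeps state and interactivity local -->
  <applet code="RecordBrowser.class" codebase="/applets/" width="600" height="400">
    Java is required to view this record browser.
  </applet>

The second fragment illustrates why the paper sees Java as reducing server load: once the applet is downloaded, user interaction no longer requires a request per action.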
Decision Facilitator for Launch Operations using Intelligent Agents
NASA Technical Reports Server (NTRS)
Thirumalainambi, Rajkumar; Bardina, Jorge
2005-01-01
Launch operations require millions of micro-decisions which contribute to the macro decision of 'Go/No-Go' for a launch. Knowledge workers (such as managers and technical professionals) need information in a timely, precise manner, as it can greatly affect mission success. The intelligent agent (web search agent) uses the words of a hypertext markup language document reached through the internet. The intelligent agent's actions are to determine whether its goal of seeking a website containing a specified target (e.g., keyword or phrase) has been met. A few parameters must be defined for the keyword search, such as "Go" and "No-Go". Instead of visiting launch and range decision-making servers individually, the decision facilitator constantly connects to all servers, accumulating decisions so the final decision can be reached in a timely manner. The facilitator agent uses the singleton design pattern, which ensures that only a single instance of the facilitator agent exists at one time. Negotiations can proceed between many agents, resulting in a final decision. This paper describes details of the intelligent agents and their interaction to derive a unified decision support system.
Control and the Cyborg: Writing and Being Written in Hypertext.
ERIC Educational Resources Information Center
Johnson-Eilola, Johndan
1993-01-01
Describes the computer technology called hypertext, especially as it relates to teaching composition. Argues that the ability to redistribute textual control hold both empowerment and danger for hypertext writer/readers, who can be compared to cyborgs. Discusses the implications of hypertext for composition pedagogy. (HB)
Empirical Evaluation of Adaptive Annotation in Hypermedia.
ERIC Educational Resources Information Center
Specht, Marcus
Empirical evaluations of learning with hypertext have shown contradictory results. Adaptive hypertext was introduced to solve some problems when learning with hypertext. This paper reports on two empirical studies comparing different forms of adaptive hypertext. In the first experiment, four treatments were realized by a combination of adaptive…
Visual-Spatial Thinking in Hypertexts.
ERIC Educational Resources Information Center
Johnson-Sheehan, Richard; Baehr, Craig
2001-01-01
Explores what it means to think visually and spatially in hypertexts and how users react and maneuver in real and virtual three-dimensional spaces. Offers four principles of visual thinking that can be applied when developing hypertexts. Applies these principles to actual hypertexts, demonstrating how selectivity, fixation, depth discernment, and…
Seven ways to make a hypertext project fail
NASA Technical Reports Server (NTRS)
Glushko, Robert J.
1990-01-01
Hypertext is an exciting concept, but designing and developing hypertext applications of practical scale is hard. To make a project feasible and successful 'hypertext engineers' must overcome the following problems: (1) developing realistic expectations in the face of hypertext hype; (2) assembling a multidisciplinary project team; (3) establishing and following design guidelines; (4) dealing with installed base constraints; (5) obtaining usable source files; (6) finding appropriate software technology and methods; and (7) overcoming legal uncertainties about intellectual property concerns.
A review of hypertext in a NASA project management context
NASA Technical Reports Server (NTRS)
Bell, Christopher J.
1987-01-01
The principles of data storage, the comparative strengths of data bases, and the evolution of hypertext within this context are discussed. A classification schema of indexing and of hypertext document structures is provided. Issues associated with hypertext implementation are also discussed and potential areas for further research are indicated.
ERIC Educational Resources Information Center
Barhoumi, Chokri; Rossi, Pier Giuseppe
2013-01-01
The use of hypertext systems for learning and teaching complex and ill-structured domains of knowledge has been attracting attention in the design of instruction. In this context, experimental research has been conducted to explore the effectiveness of instructional design oriented hypertext systems. Cognitive flexibility hypertext theory is…
Learner Variables Associated with Reading and Learning in a Hypertext Environment.
ERIC Educational Resources Information Center
Niederhauser, Dale S.; Shapiro, Amy
While many elements like character decoding, word recognition, comprehension, and others remain the same as in learning from traditional text, when learning from hypertext, a number of features that are unique to reading hypertext produce added complexity. It is these features that drive research on hypertext in education. There is a greater…
OntologyWidget – a reusable, embeddable widget for easily locating ontology terms
Beauheim, Catherine C; Wymore, Farrell; Nitzberg, Michael; Zachariah, Zachariah K; Jin, Heng; Skene, JH Pate; Ball, Catherine A; Sherlock, Gavin
2007-01-01
Background Biomedical ontologies are being widely used to annotate biological data in a computer-accessible, consistent and well-defined manner. However, due to their size and complexity, annotating data with appropriate terms from an ontology is often challenging for experts and non-experts alike, because there exist few tools that allow one to quickly find relevant ontology terms to easily populate a web form. Results We have produced a tool, OntologyWidget, which allows users to rapidly search for and browse ontology terms. OntologyWidget can easily be embedded in other web-based applications. OntologyWidget is written using AJAX (Asynchronous JavaScript and XML) and has two related elements. The first is a dynamic auto-complete ontology search feature. As a user enters characters into the search box, the appropriate ontology is queried remotely for terms that match the typed-in text, and the query results populate a drop-down list with all potential matches. Upon selection of a term from the list, the user can locate this term within a generic and dynamic ontology browser, which comprises the second element of the tool. The ontology browser shows the paths from a selected term to the root as well as parent/child tree hierarchies. We have implemented web services at the Stanford Microarray Database (SMD), which provide the OntologyWidget with access to over 40 ontologies from the Open Biological Ontology (OBO) website [1]. Each ontology is updated weekly. Adopters of the OntologyWidget can either use SMD's web services, or elect to rely on their own. Deploying the OntologyWidget can be accomplished in three simple steps: (1) install Apache Tomcat [2] on one's web server, (2) download and install the OntologyWidget servlet stub that provides access to the SMD ontology web services, and (3) create an html (HyperText Markup Language) file that refers to the OntologyWidget using a simple, well-defined format. Conclusion We have developed OntologyWidget, an easy-to-use ontology search and display tool that can be used on any web page by creating a simple html description. OntologyWidget provides a rapid auto-complete search function paired with an interactive tree display. We have developed a web service layer that communicates between the web page interface and a database of ontology terms. We currently store 40 of the ontologies from the OBO website [1], as well as several others. These ontologies are automatically updated on a weekly basis. OntologyWidget can be used in any web-based application to take advantage of the ontologies we provide via web services or any other ontology that is provided elsewhere in the correct format. The full source code for the JavaScript and description of the OntologyWidget is available from http://smd.stanford.edu/ontologyWidget/. PMID:17854506
OntologyWidget - a reusable, embeddable widget for easily locating ontology terms.
Beauheim, Catherine C; Wymore, Farrell; Nitzberg, Michael; Zachariah, Zachariah K; Jin, Heng; Skene, J H Pate; Ball, Catherine A; Sherlock, Gavin
2007-09-13
Biomedical ontologies are being widely used to annotate biological data in a computer-accessible, consistent and well-defined manner. However, due to their size and complexity, annotating data with appropriate terms from an ontology is often challenging for experts and non-experts alike, because there exist few tools that allow one to quickly find relevant ontology terms to easily populate a web form. We have produced a tool, OntologyWidget, which allows users to rapidly search for and browse ontology terms. OntologyWidget can easily be embedded in other web-based applications. OntologyWidget is written using AJAX (Asynchronous JavaScript and XML) and has two related elements. The first is a dynamic auto-complete ontology search feature. As a user enters characters into the search box, the appropriate ontology is queried remotely for terms that match the typed-in text, and the query results populate a drop-down list with all potential matches. Upon selection of a term from the list, the user can locate this term within a generic and dynamic ontology browser, which comprises the second element of the tool. The ontology browser shows the paths from a selected term to the root as well as parent/child tree hierarchies. We have implemented web services at the Stanford Microarray Database (SMD), which provide the OntologyWidget with access to over 40 ontologies from the Open Biological Ontology (OBO) website. Each ontology is updated weekly. Adopters of the OntologyWidget can either use SMD's web services, or elect to rely on their own. Deploying the OntologyWidget can be accomplished in three simple steps: (1) install Apache Tomcat on one's web server, (2) download and install the OntologyWidget servlet stub that provides access to the SMD ontology web services, and (3) create an html (HyperText Markup Language) file that refers to the OntologyWidget using a simple, well-defined format. We have developed OntologyWidget, an easy-to-use ontology search and display tool that can be used on any web page by creating a simple html description. OntologyWidget provides a rapid auto-complete search function paired with an interactive tree display. We have developed a web service layer that communicates between the web page interface and a database of ontology terms. We currently store 40 of the ontologies from the OBO website, as well as several others. These ontologies are automatically updated on a weekly basis. OntologyWidget can be used in any web-based application to take advantage of the ontologies we provide via web services or any other ontology that is provided elsewhere in the correct format. The full source code for the JavaScript and description of the OntologyWidget is available from http://smd.stanford.edu/ontologyWidget/.
NASA Astrophysics Data System (ADS)
Roganov, E. A.; Roganova, N. A.; Aleksandrov, A. I.; Ukolova, A. V.
2017-01-01
We implement a web portal which dynamically creates documents in more than 30 different formats, including html, pdf and docx, from a single original source. It is built using a number of free software tools, such as Markdown (markup language), Pandoc (document converter), MathJax (a library to display mathematical notation in web browsers), and the framework Ruby on Rails. The portal enables the creation of documents with high-quality visualization of mathematical formulas, is compatible with mobile devices, and allows one to search documents by text or formula fragments. Moreover, it gives professors the ability to develop educational materials with the latest technology, without qualified technicians' assistance, thus improving the quality of the whole educational process.
Effects of Different Metaphor Usage on Hypertext Learning
ERIC Educational Resources Information Center
Merdivan, Ece; Ozdener, Nesrin
2011-01-01
There are many studies that offer different opinions on the effects of hypertext usage as an educational tool. Given the differences of opinion, it is useful to research the effects of metaphor usage in hypertext education and the use of hypertext as an educational tool. In this study, the effects of metaphors' uses in constructing the…
The tissue micro-array data exchange specification: a web based experience browsing imported data
Nohle, David G; Hackman, Barbara A; Ayers, Leona W
2005-01-01
Background The AIDS and Cancer Specimen Resource (ACSR) is an HIV/AIDS tissue bank consortium sponsored by the National Cancer Institute (NCI) Division of Cancer Treatment and Diagnosis (DCTD). The ACSR offers to approved researchers HIV infected biologic samples and uninfected control tissues including tissue cores in micro-arrays (TMA) accompanied by de-identified clinical data. Researchers interested in the type and quality of TMA tissue cores and the associated clinical data need an efficient method for viewing available TMA materials. Because each of the tissue samples within a TMA has separate data including a core tissue digital image and clinical data, an organized, standard approach to producing, navigating and publishing such data is necessary. The Association for Pathology Informatics (API) extensible mark-up language (XML) TMA data exchange specification (TMA DES) proposed in April 2003 provides a common format for TMA data. Exporting TMA data into the proposed format offers an opportunity to implement the API TMA DES. Using our public BrowseTMA tool, we created a web site that organizes and cross references TMA lists, digital "virtual slide" images, TMA DES export data, linked legends and clinical details for researchers. Microsoft Excel® and Microsoft Word® are used to convert tabular clinical data and produce an XML file in the TMA DES format. The BrowseTMA tool contains Extensible Stylesheet Language Transformation (XSLT) scripts that convert XML data into Hyper-Text Mark-up Language (HTML) web pages with hyperlinks automatically added to allow rapid navigation. Results Block lists, virtual slide images, legends, clinical details and exports have been placed on the ACSR web site for 14 blocks with 1623 cores of 2.0, 1.0 and 0.6 mm sizes. Our virtual microscope can be used to view and annotate these TMA images. Researchers can readily navigate from TMA block lists to TMA legends and to clinical details for a selected tissue core. Exports for 11 blocks with 3812 cores from three other institutions were processed with the BrowseTMA tool. Fifty common data elements (CDE) from the TMA DES were used and 42 more created for site-specific data. Researchers can download TMA clinical data in the TMA DES format. Conclusion Virtual TMAs with clinical data can be viewed on the Internet by interested researchers using the BrowseTMA tool. We have organized our approach to producing, sorting, navigating and publishing TMA information to facilitate such review. We have converted Excel TMA data into TMA DES XML, and imported it and TMA DES XML from another institution into BrowseTMA to produce web pages that allow us to browse through the merged data. We proposed enhancements to the TMA DES as a result of this experience. We implemented improvements to the API TMA DES as a result of using exported data from several institutions. A document type definition was written for the API TMA DES (that optionally includes proposed enhancements). Independent validators can be used to check exports against the DTD (with or without the proposed enhancements). Linking tissue core images to readily navigable clinical data greatly improves the value of the TMA. PMID:16086837
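The XSLT step that turns exported XML into hyperlinked HTML pages can be sketched as follows. The element names are simplified stand-ins, not the actual TMA DES common data elements.

  <?xml version="1.0"?>
  <!-- Sketch only: element names are simplified, not actual TMA DES CDEs. -->
  <xsl:stylesheet version="1.0"
                  xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:template match="/tma_block">
      <html>
        <body>
          <table border="1">
            <tr><th>Core</th><th>Image</th></tr>
            <xsl:for-each select="core">
              <tr>
                <td><xsl:value-of select="@id"/></td>
                <!-- The hyperlink is added automatically during conversion -->
                <td><a href="{image_url}">virtual slide</a></td>
              </tr>
            </xsl:for-each>
          </table>
        </body>
      </html>
    </xsl:template>
  </xsl:stylesheet>

Generating the navigation links in the stylesheet rather than by hand is what makes the published pages cheap to regenerate whenever new blocks are exported.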
Mueller, Martina; Wagner, Carol L; Annibale, David J; Knapp, Rebecca G; Hulsey, Thomas C; Almeida, Jonas S
2006-03-01
Approximately 30% of intubated preterm infants with respiratory distress syndrome (RDS) will fail attempted extubation, requiring reintubation and mechanical ventilation. Although ventilator technology and monitoring of premature infants have improved over time, optimal extubation remains challenging. Furthermore, extubation decisions for premature infants require complex informational processing, techniques implicitly learned through clinical practice. Computer-aided decision-support tools would benefit inexperienced clinicians, especially during peak neonatal intensive care unit (NICU) census. A five-step procedure was developed to identify predictive variables. Clinical expert (CE) thought processes comprised one model. Variables from that model were used to develop two mathematical models for the decision-support tool: an artificial neural network (ANN) and a multivariate logistic regression model (MLR). The ranking of the variables in the three models was compared using the Wilcoxon Signed Rank Test. The best performing model was used in a web-based decision-support tool with a user interface implemented in Hypertext Markup Language (HTML) and the mathematical model employing the ANN. CEs identified 51 potentially predictive variables for extubation decisions for an infant on mechanical ventilation. Comparisons of the three models showed a significant difference between the ANN and the CE (p = 0.0006). Of the original 51 potentially predictive variables, the 13 most predictive variables were used to develop an ANN as a web-based decision-tool. The ANN processes user-provided data and returns the prediction 0-1 score and a novelty index. The user then selects the most appropriate threshold for categorizing the prediction as a success or failure. Furthermore, the novelty index, indicating the similarity of the test case to the training case, allows the user to assess the confidence level of the prediction with regard to how much the new data differ from the data originally used for the development of the prediction tool. State-of-the-art, machine-learning methods can be employed for the development of sophisticated tools to aid clinicians' decisions. We identified numerous variables considered relevant for extubation decisions for mechanically ventilated premature infants with RDS. We then developed a web-based decision-support tool for clinicians which can be made widely available and potentially improve patient care world wide.
Web-based X-ray quality control documentation.
David, George; Burnett, Lou Ann; Schenkel, Robert
2003-01-01
The department of radiology at the Medical College of Georgia Hospital and Clinics has developed an equipment quality control web site. Our goal is to provide immediate access to virtually all medical physics survey data. The web site is designed to assist equipment engineers, department management and technologists. By improving communications and access to equipment documentation, we believe productivity is enhanced. The creation of the quality control web site was accomplished in three distinct steps. First, survey data had to be placed in a computer format. The second step was to convert these various computer files to a format supported by commercial web browsers. Third, a comprehensive home page had to be designed to provide convenient access to the multitude of surveys done in the various x-ray rooms. Because we had spent years previously fine-tuning the computerization of the medical physics quality control program, most survey documentation was already in spreadsheet or database format. A major technical decision was the method of conversion of survey spreadsheet and database files into documentation appropriate for the web. After an unsatisfactory experience with a HyperText Markup Language (HTML) converter (packaged with spreadsheet and database software), we tried creating Portable Document Format (PDF) files using Adobe Acrobat software. This process preserves the original formatting of the document and takes no longer than conventional printing; therefore, it has been very successful. Although the PDF file generated by Adobe Acrobat is a proprietary format, it can be displayed through a conventional web browser using the freely distributed Adobe Acrobat Reader program that is available for virtually all platforms. Once a user installs the software, it is automatically invoked by the web browser whenever the user follows a link to a file with a PDF extension. Although no confidential patient information is available on the web site, our legal department recommended that we secure the site in order to keep out those wishing to make mischief. Our interim solution has not been to password protect the page, which we feared would hinder access for occasional legitimate users, but also not to provide links to it from other hospital and department pages. Utility and productivity were improved and time and money were saved by making radiological equipment quality control documentation instantly available on-line.
System to monitor data analyses and results of physics data validation between pulses at DIII-D
NASA Astrophysics Data System (ADS)
Flanagan, S.; Schachter, J. M.; Schissel, D. P.
2004-06-01
A data analysis monitoring (DAM) system has been developed to monitor between-pulse physics analysis at the DIII-D National Fusion Facility (http://nssrv1.gat.com:8000/dam). The system allows for rapid detection of discrepancies in diagnostic measurements or the results from physics analysis codes. This enables problems to be detected and possibly fixed between pulses, as opposed to after the experimental run has concluded, thus increasing the efficiency of experimental time. An example of a consistency check is comparing the experimentally measured neutron rate with the expected neutron emission, RDD0D. A significant difference between these two values could indicate a problem with one or more diagnostics, or the presence of unanticipated phenomena in the plasma. The system also tracks the progress of MDSplus-dispatched data analysis software and the loading of analyzed data into MDSplus. DAM uses a Java Servlet to receive messages and the C Language Integrated Production System (CLIPS) to implement expert-system logic, and displays its results to multiple web clients via Hypertext Markup Language. If an error is detected by DAM, users can view more detailed information so that steps can be taken to eliminate the error for the next pulse.
Microprocessor-controlled, wide-range streak camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amy E. Lewis, Craig Hollabaugh
Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous Javascript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple-simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.
Microprocessor-controlled wide-range streak camera
NASA Astrophysics Data System (ADS)
Lewis, Amy E.; Hollabaugh, Craig
2006-08-01
Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous Javascript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple-simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.
Query by Browsing: An Alternative Hypertext Information Retrieval Method
Frisse, Mark E.; Cousins, Steve B.
1989-01-01
In this paper we discuss our efforts to develop programs which enhance the ability to navigate through large medical hypertext systems. Our approach organizes hypertext index terms into a belief network and uses reader feedback to update the degree of belief in the index terms' utility to a query. We begin by describing various possible configurations for indexes to hypertext. We then describe how belief network calculations can be applied to these indexes. After a brief discussion of early results using manuscripts from a medical handbook, we close with an analysis of our approach's applicability to a wider range of hypertext information retrieval problems.
Managing the computational chemistry big data problem: the ioChem-BD platform.
Álvarez-Moreno, M; de Graaf, C; López, N; Maseras, F; Poblet, J M; Bo, C
2015-01-26
We present the ioChem-BD platform (www.iochem-bd.org) as a multiheaded tool aimed at managing large volumes of quantum chemistry results from a diverse group of already common simulation packages. The platform has an extensible structure. The key modules manage the main tasks: (i) uploading output files from common computational chemistry packages, (ii) extracting meaningful data from the results, and (iii) generating output summaries in user-friendly formats. Heavy use of the Chemical Markup Language (CML) is made in the intermediate files used by ioChem-BD. From them, and using XSL techniques, we manipulate and transform such chemical data sets to fulfill researchers' needs in the form of HTML5 reports, supporting information, and other research media.
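The CML-to-report step can be pictured with the standard Java XSLT API (JAXP); the file names below are placeholders, and the actual ioChem-BD style sheets are not shown:

    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    public class CmlToHtml {
        public static void main(String[] args) throws Exception {
            TransformerFactory factory = TransformerFactory.newInstance();
            // Placeholder names: an XSL report stylesheet and a CML file
            // extracted from a quantum chemistry output.
            Transformer t = factory.newTransformer(new StreamSource("report.xsl"));
            t.transform(new StreamSource("calculation.cml"), new StreamResult("report.html"));
        }
    }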
Model annotation for synthetic biology: automating model to nucleotide sequence conversion
Misirli, Goksel; Hallinan, Jennifer S.; Yu, Tommy; Lawson, James R.; Wimalaratne, Sarala M.; Cooling, Michael T.; Wipat, Anil
2011-01-01
Motivation: The need for the automated computational design of genetic circuits is becoming increasingly apparent with the advent of ever more complex and ambitious synthetic biology projects. Currently, most circuits are designed through the assembly of models of individual parts such as promoters, ribosome binding sites and coding sequences. These low level models are combined to produce a dynamic model of a larger device that exhibits a desired behaviour. The larger model then acts as a blueprint for physical implementation at the DNA level. However, the conversion of models of complex genetic circuits into DNA sequences is a non-trivial undertaking due to the complexity of mapping the model parts to their physical manifestation. Automating this process is further hampered by the lack of computationally tractable information in most models. Results: We describe a method for automatically generating DNA sequences from dynamic models implemented in CellML and Systems Biology Markup Language (SBML). We also identify the metadata needed to annotate models to facilitate automated conversion, and propose and demonstrate a method for the markup of these models using RDF. Our algorithm has been implemented in a software tool called MoSeC. Availability: The software is available from the authors' web site http://research.ncl.ac.uk/synthetic_biology/downloads.html. Contact: anil.wipat@ncl.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21296753
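The RDF markup step the authors propose can be pictured with a generic RDF library; the sketch below uses Apache Jena (not named in the abstract), and the model URI, part URI, and qualifier property are illustrative assumptions rather than the paper's actual vocabulary:

    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.rdf.model.Property;
    import org.apache.jena.rdf.model.Resource;

    // Hedged sketch of RDF model annotation: a model element is linked to the
    // physical DNA part it represents so a converter can map model to sequence.
    public class AnnotateModel {
        public static void main(String[] args) {
            Model m = ModelFactory.createDefaultModel();
            Resource promoter = m.createResource("http://example.org/model#promoter_P1"); // hypothetical
            Property bqbiolIs = m.createProperty("http://biomodels.net/biology-qualifiers/", "is");
            promoter.addProperty(bqbiolIs,
                m.createResource("http://example.org/parts/BBa_J23100")); // hypothetical part URI
            m.write(System.out, "TURTLE");
        }
    }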
Blom, Helen; Segers, Eliane; Hermans, Daan; Knoors, Harry; Verhoeven, Ludo
2017-02-01
This paper provides insight into the reading comprehension of hierarchically structured hypertexts by deaf and hard-of-hearing (D/HH) students and students with specific language impairment (SLI). To our knowledge, it is the first study of hypertext comprehension in D/HH students and students with SLI, and it also considers the role of working memory. We compared hypertext versus linear text comprehension in D/HH students and students with SLI versus younger students without language problems who had a similar level of decoding and vocabulary. The results demonstrated no difference in text comprehension between the hierarchically structured hypertext and the linear text. Text comprehension of D/HH students and students with SLI was comparable to that of the students without language problems. In addition, visuospatial, but not verbal, working memory had a similar positive predictive value for hypertext comprehension in all three groups. The findings imply that educational settings can make use of hierarchically structured hypertexts as well as linear texts, and that children can navigate the digital world from a young age, even if language or working memory problems are present. Copyright © 2016 Elsevier Ltd. All rights reserved.
Misleading Theoretical Assumptions in Hypertext/Hypermedia Research.
ERIC Educational Resources Information Center
Tergan, Sigmar-Olaf
1997-01-01
Reviews basic theoretical assumptions of research on learning with hypertext/hypermedia. Focuses on whether the results of research on hypertext/hypermedia-based learning support these assumptions. Results of empirical studies and theoretical analysis reveal that many research approaches have been misled by inappropriate theoretical assumptions on…
Hypertext Interchange Using ICA.
ERIC Educational Resources Information Center
Rada, Roy; And Others
1995-01-01
Discusses extended ICA (Integrated Chameleon Architecture), a public domain toolset for generating text-to-hypertext translators. A system called SGML-MUCH has been developed using E-ICA (Extended Integrated Chameleon Architecture) and is presented as a case study with converters for the hypertext systems MUCH, Guide, Hyperties, and Toolbook.…
What's New in Software? Hot New Tool: The Hypertext.
ERIC Educational Resources Information Center
Hedley, Carolyn N.
1989-01-01
This article surveys recent developments in hypertext software, a highly interactive nonsequential reading/writing/database approach to research and teaching that allows paths to be created through related materials including text, graphics, video, and animation sources. Described are uses, advantages, and problems of hypertext. (PB)
Cognitive Overhead in Hypertext Learning Reexamined: Overcoming the Myths
ERIC Educational Resources Information Center
Zumbach, Joerg
2006-01-01
In hypertext learning, comparative research is mostly dedicated to differences in text-hypertext information retrieval and processing and to optimization of nonlinear information retrieval. Most of these investigations are conducted within the context of applied research. The theoretical background of information acquisition from linear and…
Conceptual and Methodological Shortcomings in Hypertext/Hypermedia Design and Research.
ERIC Educational Resources Information Center
Tergan, Sigmar-Olaf
1997-01-01
Some studies of hypertext/hypermedia systems have concluded that there is little evidence supporting its educational efficacy. After examining conceptual and methodological shortcomings of research, this article suggests that the educational potential of hypertext/hypermedia has been underestimated and argues that overcoming these shortcomings…
TELLTALE: Experiments in a Dynamic Hypertext Environment for Degraded and Multilingual Data.
ERIC Educational Resources Information Center
Pearce, Claudia; Nicholas, Charles
1996-01-01
Presents experimentation results for the TELLTALE system, a dynamic hypertext environment that provides full-text search from a hypertext-style user interface for text corpora that may be garbled by OCR (optical character recognition) or transmission errors, and that may contain languages other than English. (Author/LRW)
The User Interface: A Hypertext Model Linking Art Objects and Related Information.
ERIC Educational Resources Information Center
Moline, Judi
This report presents a model combining the emerging technologies of hypertext and expert systems. Hypertext is relatively unexplored but promises an innovative approach to information retrieval. In contrast, expert systems have been used experimentally in many different application areas ranging from medical diagnosis to oil exploration. The…
Hypertext Publishing and the Revitalization of Knowledge.
ERIC Educational Resources Information Center
Louie, Steven; Rubeck, Robert F.
1989-01-01
Discusses the use of hypertext for publishing and other document control activities in higher education. Topics discussed include a model of hypertext, called GUIDE, that is used at the University of Arizona Medical School; the increase in the number of scholarly publications; courseware development by faculty; and artificial intelligence. (LRW)
Hypertext: Behind the Hype. ERIC Digest.
ERIC Educational Resources Information Center
Bevilacqua, Ann F.
This digest begins by defining the concept of hypertext and describing the two types of hypertext--static and dynamic. Three prototype applications are then discussed: (1) Intermedia, a large-scale multimedia system at Brown University; (2) the Perseus Project at Harvard University, which is developing interactive courseware on classical Greek…
Trends, Fashions, Patterns, Norms, Conventions...and Hypertext Too.
ERIC Educational Resources Information Center
Amitay, Einat
2001-01-01
Outlines the theory behind the formation of language conventions, then reveals conventions evolving in the community of people writing hypertext on the Web. Demonstrates how these conventions can be used to augment and shift the meaning of already published hypertexts. Describes the system called InCommonSense, which reuses particular hypertext…
Meditations upon Hypertext: A Rhetorethics for Cyborgs.
ERIC Educational Resources Information Center
Gilbert, Pamela K.
1997-01-01
Suggests that the ability to actualize the potential of hypertext is limited by the lack of an adequate theory of hypertext reading which accounts for ethical and political issues of identity or subjectivity. Identifies examples of this problem and speculates on some responses; considers what sort of reader and/or reading practices hypertext…
ERIC Educational Resources Information Center
Burin, Debora I.; Barreyro, Juan P.; Saux, Gastón; Irrazábal, Natalia C.
2015-01-01
Introduction: In contemporary information societies, reading digital text has become pervasive. One of the most distinctive features of digital texts is their internal connections via hyperlinks, resulting in non-linear hypertexts. Hypertext structure and previous knowledge affect navigation and comprehension of digital expository texts. From the…
Using an Architectural Metaphor for Information Design in Hypertext.
ERIC Educational Resources Information Center
Deboard, Donn R.; Lee, Doris
2001-01-01
Uses Frank Lloyd Wright's (1867-1959) organic architecture as a metaphor to define the relationship between a part and a whole, whether the focus is on a building and its surroundings or information delivered via hypertext. Reviews effective strategies for designing text information via hypertext and incorporates three levels of information…
Debugging expert systems using a dynamically created hypertext network
NASA Technical Reports Server (NTRS)
Boyle, Craig D. B.; Schuette, John F.
1991-01-01
The labor-intensive nature of expert system writing and debugging motivated this study. The hypothesis is that a hypertext-based debugging tool is easier and faster to use than one traditional tool, the graphical execution trace. HESDE (Hypertext Expert System Debugging Environment) uses hypertext nodes and links to represent the objects, and their relationships, created during the execution of a rule-based expert system. HESDE operates transparently on top of the CLIPS (C Language Integrated Production System) rule-based system environment and is used during the knowledge-base debugging process. During execution, HESDE builds an execution trace: the facts and rules used, and their values, are automatically stored in a hypertext network for each execution cycle. After execution, the knowledge engineer may access and browse the hypertext network, viewing it in terms of rules, facts, and values. An experiment was conducted to compare HESDE with a graphical debugging environment, in which subjects were given representative tasks. For speed and accuracy, HESDE was significantly better in eight of the eleven tasks.
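A minimal sketch of how such an execution trace might be stored as a navigable node-and-link network; the type and method names are hypothetical, since HESDE's internal design is not given in this abstract:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical trace store: each execution cycle becomes a set of linked
    // nodes (rule fired, facts used, values derived) a browser can traverse.
    public class ExecutionTrace {
        record Node(String kind, String label) {}  // kind: "rule", "fact", or "value"

        private final Map<Node, List<Node>> links = new HashMap<>();

        public void recordFiring(String rule, List<String> factsUsed, List<String> valuesDerived) {
            Node r = new Node("rule", rule);
            links.putIfAbsent(r, new ArrayList<>());
            for (String f : factsUsed) link(new Node("fact", f), r);      // fact -> rule that used it
            for (String v : valuesDerived) link(r, new Node("value", v)); // rule -> value it produced
        }

        private void link(Node from, Node to) {
            links.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
            links.putIfAbsent(to, new ArrayList<>());
        }

        public List<Node> neighbors(Node n) { return links.getOrDefault(n, List.of()); }
    }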
Techniques for capturing expert knowledge - An expert systems/hypertext approach
NASA Technical Reports Server (NTRS)
Lafferty, Larry; Taylor, Greg; Schumann, Robin; Evans, Randy; Koller, Albert M., Jr.
1990-01-01
The knowledge-acquisition strategy developed for the Explosive Hazards Classification (EHC) Expert System is described, in which expert systems and hypertext are combined, and broad applications are proposed. The EHC expert system is based on rapid prototyping, in which primary knowledge acquisition from experts is not emphasized; the explosive hazards technical bulletin, technical guidance, and minimal interviewing are used to develop the knowledge-based system. Hypertext is used to capture the technical information with respect to four issues: procedural, materials, test, and classification issues. The hypertext display allows the integration of multiple knowledge representations, such as clarifications or opinions, and thereby allows a broad range of tasks to be performed on a single machine. Among other recommendations, it is suggested that the integration of hypertext and expert systems makes the resulting synergistic system highly efficient.
Linking Information to Objects: A Hypertext Prototype for Numismatists.
ERIC Educational Resources Information Center
Moline, Judi
1991-01-01
This report focuses on the use of a prototype hypertext application designed to help coin collectors link ancient coins with relevant numismatic information. It is noted that hypertext systems promote the collection of information that may be multimedia in nature and may be linked so that information can be accessed in a non-linear manner. The…
ERIC Educational Resources Information Center
Girill, T. R.; And Others
1991-01-01
Describes enhancements made to a hypertext information retrieval system at the National Energy Research Supercomputer Center (NERSC) called DFT (Document, Find, and Theseus). The enrichment of DFT's entry vocabulary is described, DFT and other hypertext systems are compared, and problems that occur due to the need for frequent updates are…
An Investigation of Scaffolded Reading on EFL Hypertext Comprehension
ERIC Educational Resources Information Center
Shang, Hui-Fang
2015-01-01
With the rapid growth of computer technology, some printed texts are designed as hypertexts to help EFL (English as a foreign language) learners search for and process multiple resources in a timely manner for autonomous learning. The purpose of this study was to design a hypertext system and examine if a 14-week teacher-guided print-based and…
ERIC Educational Resources Information Center
White, Charles E., Jr.
The purpose of this study was to develop and implement a hypertext documentation system in an industrial laboratory and to evaluate its usefulness by participative observation and a questionnaire. Existing word-processing test method documentation was converted directly into a hypertext format or "hyperdocument." The hyperdocument was designed and…
Proceedings of the Hypertext Standardization Workshop (Gaithersburg, Maryland, January 16-18, 1990).
ERIC Educational Resources Information Center
Moline, Judi, Ed.; And Others
This report constitutes the proceedings of a three day workshop on Hypertext Standardization held at the National Institute of Standards and Technology (NIST) on January 16-18, 1990. Efforts towards standardization of hypertext have already been initiated in various interested organizations. The major purpose of the workshop was to provide a forum…
Hypertext and Hypermedia: Applications for Educational Use. Year 2 Monograph.
ERIC Educational Resources Information Center
Boone, Randall; Higgins, Kyle
This report presents information on the second year of a 3-year project to develop hypertext computer study guides and to study their use by secondary students, including remedial students and those with learning disabilities. The first section provides an introduction to hypertext, what it is, how it is structured, and how it compares with…
Modeling the Arden Syntax for medical decisions in XML.
Kim, Sukil; Haug, Peter J; Rocha, Roberto A; Choi, Inyoung
2008-10-01
A new model expressing Arden Syntax with the eXtensible Markup Language (XML) was developed to increase its portability. Every example was manually parsed and reviewed until the schema and the style sheet were considered optimized. When the first schema was finished, several medical logic modules (MLMs) in Arden Syntax Markup Language (ArdenML) were validated against the schema. They were then transformed to HTML formats with the style sheet, during which they were compared to the original text version of their own MLM. When faults were found in the transformed MLM, the schema and/or style sheet was fixed. This cycle continued until all the examples were encoded into XML documents. The original MLMs were encoded in XML according to the proposed XML schema, and reverse-parsed MLMs in ArdenML were checked using a public-domain Arden Syntax checker. Two hundred seventy-seven example MLMs were successfully transformed into XML documents using the model, and the reverse parse yielded the original text version of the MLMs. Two hundred sixty-five of the 277 MLMs showed the same error patterns before and after transformation, and all 11 errors related to statement structure were resolved in the XML version. The model uses two syntax-checking mechanisms: first, an XML validation process, and second, a syntax check using an XSL style sheet. Now that we have a schema for ArdenML, we can also begin the development of style sheets for transforming ArdenML into other languages.
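The two syntax-checking mechanisms described, schema validation followed by a style-sheet transformation back to text, map naturally onto the standard Java XML APIs; the file names here are placeholders, not the study's artifacts:

    import javax.xml.XMLConstants;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.validation.Schema;
    import javax.xml.validation.SchemaFactory;
    import javax.xml.validation.Validator;

    public class ArdenMlCheck {
        public static void main(String[] args) throws Exception {
            // Stage 1: validate the encoded MLM against an ArdenML schema.
            Schema schema = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI)
                                         .newSchema(new StreamSource("ardenml.xsd"));
            Validator validator = schema.newValidator();
            validator.validate(new StreamSource("mlm.xml")); // throws SAXException on violation

            // Stage 2: transform the XML MLM back to its text form with a style
            // sheet, so it can be compared against the original and run through
            // a conventional Arden Syntax checker.
            Transformer toText = TransformerFactory.newInstance()
                .newTransformer(new StreamSource("ardenml-to-text.xsl"));
            toText.transform(new StreamSource("mlm.xml"), new StreamResult("mlm.txt"));
        }
    }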
Hospital markup and operation outcomes in the United States.
Gani, Faiz; Ejaz, Aslam; Makary, Martin A; Pawlik, Timothy M
2016-07-01
Although the price hospitals charge for operations has broad financial implications, hospital pricing is not subject to regulation. We sought to characterize national variation in hospital price markup for major cardiothoracic and gastrointestinal operations and to evaluate perioperative outcomes of hospitals relative to hospital price markup. All hospitals in which a patient underwent a cardiothoracic or gastrointestinal procedure were identified using the Nationwide Inpatient Sample for 2012. Markup ratios (the ratio of charges to costs) for the total cost of hospitalization were compared across hospitals. Risk-adjusted morbidity, failure-to-rescue, and mortality were calculated using multivariable, hierarchical logistic regression. Among the 3,498 hospitals identified, markup ratios ranged from 0.5 to 12.2, with a median markup ratio of 2.8 (interquartile range 2.7-3.9). For the 888 hospitals with extreme markup (greatest markup ratio quartile: markup ratio >3.9), the median markup ratio was 4.9 (interquartile range 4.3-6.0), with 10% of these hospitals billing more than 7 times the Medicare-allowable costs (markup ratio ≥7.25). Extreme-markup hospitals were more often large (46.3% vs 33.8%, P < .001), urban, nonteaching centers (57.0% vs 37.9%, P < .001), and located in the Southern (46.4% vs 32.8%, P < .001) or Western (27.8% vs 17.6%, P < .001) regions of the United States. Of the 639 investor-owned, for-profit hospitals, 401 (62.8%) had an extreme markup ratio, compared with 19.3% (n = 452) of nonprofit and 6.8% (n = 35) of government hospitals. Perioperative morbidity (32.7% vs 26.4%, P < .001) was greater at extreme-markup hospitals. There is wide variation in hospital markup for cardiothoracic and gastrointestinal procedures, with approximately a quarter of hospitals billing roughly 4 times the actual cost of hospitalization. Hospitals with an extreme markup had greater perioperative morbidity. Copyright © 2016 Elsevier Inc. All rights reserved.
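To make the central quantity concrete, the markup ratio is simply charges divided by costs; the dollar figures below are invented for illustration:

    // Illustrative markup-ratio arithmetic: a hospital billing $58,000 for a
    // hospitalization that cost $14,500 has a markup ratio of 4.0 and would
    // fall in the study's extreme quartile (ratio > 3.9).
    public class MarkupRatio {
        static double ratio(double charges, double costs) { return charges / costs; }

        public static void main(String[] args) {
            double r = ratio(58_000, 14_500);
            System.out.printf("markup ratio = %.1f (extreme: %b)%n", r, r > 3.9);
        }
    }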
Real-time Data Display System of the Korean Neonatal Network
Lee, Byong Sop; Moon, Wi Hwan
2015-01-01
Real-time data reporting in clinical research networks can provide network members with interim analyses of the registered data, which can facilitate further studies and quality-improvement activities. The aim of this report was to describe the building process of the data display system (DDS) of the Korean Neonatal Network (KNN) and its basic structure. After member verification at the KNN member's site, users can choose a variable of interest from those listed in the in-hospital data statistics (90 variables) or the follow-up data statistics (54 variables). The statistical results for the outcome variables are displayed in Hypertext Markup Language 5 (HTML5)-based charts and tables. Participating hospitals can compare their performance to that of the KNN as a whole and identify trends over time. The ranking of each participating hospital is also displayed for key outcome variables, such as mortality and major neonatal morbidities, with the names of other centers blinded. The most powerful function of the DDS is 'conditional filtering', which allows users to review exclusively the records of interest. Further collaboration is needed to upgrade the DDS to a more sophisticated analytical system and to provide a more user-friendly interface. PMID:26566352
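'Conditional filtering' is, in essence, predicate-based record selection; a hypothetical record type and filter in Java might look like the following (the field names and thresholds are illustrative, not the KNN schema):

    import java.util.List;

    // Hypothetical registry records and a filter restricting the view to
    // exactly the records of interest.
    public class ConditionalFilter {
        record Infant(String center, int gestationalWeeks, double birthWeightGrams) {}

        public static void main(String[] args) {
            List<Infant> registry = List.of(
                new Infant("A", 26, 820), new Infant("B", 31, 1540), new Infant("A", 29, 1120));

            List<Infant> view = registry.stream()
                .filter(i -> i.gestationalWeeks() < 30 && i.birthWeightGrams() < 1500)
                .toList();

            view.forEach(System.out::println);
        }
    }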
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Gary E.; Sutherland, G. Bruce
The 2008 Columbia River Estuary Conference was held at the Liberty Theater in Astoria, Oregon, on April 19-20. The conference theme was ecosystem restoration. The purpose of the conference was to exchange data and information among researchers, policy-makers, and the public, i.e., to interrelate science with management. Conference organizers invited presentations synthesizing material on Restoration Planning and Implementation (Session 1), Research to Reduce Restoration Uncertainties (Session 2), Wetlands and Flood Management (Session 3), Action Effectiveness Monitoring (Session 4), and Management Perspectives (Session 5). A series of three plenary talks opened the conference. Facilitated speaker and audience discussion periods were held at the end of each session. Contributed posters conveyed additional data and information. These proceedings include abstracts and notes documenting questions from the audience and clarifying answers from the presenter for each talk. The proceedings also document key points from the discussion periods at the end of each session. The conference program is outlined in the agenda section. Speaker biographies are presented in Appendix A. Poster titles and authors are listed in Appendix B. A list of conference attendees is contained in Appendix C. A compact disk, attached to the back cover, contains material in hypertext markup language from the conference website (http://cerc.labworks.org/) and the individual presentations.
Masseroli, Marco; Marchente, Mario
2008-07-01
We present X-PAT, a platform-independent software prototype that is able to manage patient-referral multimedia data in an intranet scenario according to the specific control procedures of a healthcare institution. It is a self-developed storage framework based on a file system, implemented in the eXtensible Markup Language (XML) and the PHP Hypertext Preprocessor language, and addressed to the requirements of limited-dimension healthcare entities (small hospitals, private medical centers, outpatient clinics, and laboratories). In X-PAT, healthcare data descriptions, stored in a novel Referral Base Management System (RBMS) according to the Health Level 7 Clinical Document Architecture Release 2 (CDA R2) standard, can be easily adapted to the specific data and organizational procedures of a particular healthcare working environment, thanks also to the use of standard clinical terminology. Managed data, centralized on a server, are structured in the RBMS schema using a flexible patient record and CDA healthcare referral document structures based on XML technology. A novel search engine allows users to define and perform queries on stored data, whose rapid execution is ensured by expandable RBMS indexing structures. Healthcare personnel can interact with the X-PAT system, according to applied state-of-the-art privacy and security measures, through friendly and intuitive Web pages that facilitate user acceptance.
Hypertext Theory: Rethinking and Reformulating What We Know, Web 2.0
ERIC Educational Resources Information Center
Baehr, Craig; Lang, Susan M.
2012-01-01
This article traces the influences of hypertext theory throughout the various genres of online publication in technical communication. It begins with a look back at some of the important concepts and theorists writing about hypertext theory from the post-World War II era, to the early years of the World Wide Web 2.0, and the very differing notions…
Development of an intelligent hypertext system for wind tunnel testing
NASA Technical Reports Server (NTRS)
Lo, Ching F.; Shi, George Z.; Steinle, Frank W.; Wu, Y. C. L. Susan; Hoyt, W. Andes
1991-01-01
This paper summarizes the results of a system utilizing artificial intelligence technology to improve the productivity of project engineers who conduct wind tunnel tests. The objective was to create an intelligent hypertext system which integrates a hypertext manual and expert system that stores experts' knowledge and experience. The preliminary (Phase I) effort implemented a prototype IHS module encompassing a portion of the manuals and knowledge used for wind tunnel testing. The effort successfully demonstrated the feasibility of the intelligent hypertext system concept. A module for the internal strain gage balance, implemented on both IBM-PC and Macintosh computers, is presented. A description of the Phase II effort is included.
Toward an intelligent information system
NASA Astrophysics Data System (ADS)
Onodera, Natsuo
"Hypertext" means a concept of a novel computer-assisted tool for storage and retrieval of text information based on human association. Structure of knowledge in our idea processing is generally complicated and networked, but traditional paper documents merely express it in essentially linear and sequential forms. However, recent advances in work-station technology have allowed us to process easily electronic documents containing non-linear structure such as references or hierarchies. This paper describes concept, history and basic organization of hypertext, and shows the outline and features of existing main hypertext systems. Particularly, use of the hypertext database is illustrated by an example of Intermedia developed by Brown University.
ERIC Educational Resources Information Center
Gillingham, Mark G.
A study examined what happened when a group of adult students read a hypertext for the goal of answering specific questions. Subjects, 30 students enrolled in an upper-division psychology course at a state university in the northwestern United States, read a binary tree-structured hypertext to answer three two-part questions on the topic of…
ERIC Educational Resources Information Center
Zammit, Katina
2011-01-01
With the increased use of hypertexts to locate information, students need to make informed decisions about their pathways so that they build knowledge efficiently. The moves they make need to contribute to understanding the topic rather than detract from it. This paper explores the use of Systemic Functional Linguistics (SFL) to describe the construction…
RDNAnalyzer: A tool for DNA secondary structure prediction and sequence analysis.
Afzal, Muhammad; Shahid, Ahmad Ali; Shehzadi, Abida; Nadeem, Shahid; Husnain, Tayyab
2012-01-01
RDNAnalyzer is an innovative computer-based tool designed for DNA secondary structure prediction and sequence analysis. It can randomly generate a DNA sequence, or users can upload sequences of their own interest in RAW format. It uses and extends the Nussinov dynamic programming algorithm and has various applications for sequence analysis. It predicts the DNA secondary structure and base pairings. It also provides tools for sequence analyses routinely performed by biological scientists, such as DNA replication, reverse complement generation, transcription, translation, sequence-specific information (total number of nucleotide bases and ATGC base contents along with their respective percentages), and a sequence cleaner. RDNAnalyzer is a unique tool developed in Microsoft Visual Studio 2008 using Microsoft Visual C# and Windows Presentation Foundation and provides a user-friendly environment for sequence analysis. It is freely available at http://www.cemb.edu.pk/sw.html. Abbreviations: RDNAnalyzer - Random DNA Analyser; GUI - graphical user interface; XAML - Extensible Application Markup Language.
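The Nussinov algorithm that RDNAnalyzer builds on maximizes the number of complementary base pairs by dynamic programming; a plain Java version of the classic recurrence (without the tool's extensions) is sketched below:

    // Classic Nussinov base-pair maximization (not RDNAnalyzer's exact variant):
    // dp[i][j] holds the maximum number of complementary pairs in s[i..j].
    public class Nussinov {
        static boolean pairs(char a, char b) {
            return (a == 'A' && b == 'T') || (a == 'T' && b == 'A')
                || (a == 'G' && b == 'C') || (a == 'C' && b == 'G');
        }

        static int maxPairs(String s) {
            int n = s.length();
            int[][] dp = new int[n][n];
            for (int len = 2; len <= n; len++) {            // interval length
                for (int i = 0; i + len - 1 < n; i++) {
                    int j = i + len - 1;
                    int best = dp[i + 1][j - 1] + (pairs(s.charAt(i), s.charAt(j)) ? 1 : 0);
                    for (int k = i; k < j; k++)             // bifurcation into two subintervals
                        best = Math.max(best, dp[i][k] + dp[k + 1][j]);
                    dp[i][j] = best;
                }
            }
            return n == 0 ? 0 : dp[0][n - 1];
        }

        public static void main(String[] args) {
            System.out.println(maxPairs("GGGAAATCC")); // small DNA example
        }
    }

A full predictor would also trace back through dp to recover which bases pair, which is how the structure itself, not just the pair count, is reported.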
Cell Illustrator 4.0: a computational platform for systems biology.
Nagasaki, Masao; Saito, Ayumu; Jeong, Euna; Li, Chen; Kojima, Kaname; Ikeda, Emi; Miyano, Satoru
2011-01-01
Cell Illustrator is a software platform for systems biology that uses the concept of the Petri net for modeling and simulating biopathways. It is intended for biological scientists working at the bench. The latest version, Cell Illustrator 4.0, uses Java Web Start technology and is enhanced with new capabilities, including: automatic graph grid layout algorithms using ontology information; tools using Cell System Markup Language (CSML) 3.0 and Cell System Ontology 3.0; a parameter search module; a high-performance simulation module; a CSML database management system; conversion from CSML models to programming languages (FORTRAN, C, C++, Java, Python and Perl); import from SBML, CellML, and BioPAX; and export to SVG and HTML. Cell Illustrator employs an extension of the hybrid Petri net in an object-oriented style, so that biopathway models can include objects such as DNA sequence, molecular density, 3D localization information, transcription with frame-shift, translation with codon table, and biochemical reactions.
MOCCASIN: converting MATLAB ODE models to SBML.
Gómez, Harold F; Hucka, Michael; Keating, Sarah M; Nudelman, German; Iber, Dagmar; Sealfon, Stuart C
2016-06-15
MATLAB is popular in biological research for creating and simulating models that use ordinary differential equations (ODEs). However, sharing or using these models outside of MATLAB is often problematic. A community standard such as the Systems Biology Markup Language (SBML) can serve as a neutral exchange format, but translating models from MATLAB to SBML can be challenging, especially for legacy models not written with translation in mind. We developed MOCCASIN (Model ODE Converter for Creating Automated SBML INteroperability) to help. MOCCASIN can convert ODE-based MATLAB models of biochemical reaction networks into the SBML format. MOCCASIN is available under the terms of the LGPL 2.1 license (http://www.gnu.org/licenses/lgpl-2.1.html). Source code, binaries and test cases can be freely obtained from https://github.com/sbmlteam/moccasin. Contact: mhucka@caltech.edu. More information is available at https://github.com/sbmlteam/moccasin. © The Author 2016. Published by Oxford University Press.
Techniques for integrating -omics data.
Akula, Siva Prasad; Miriyala, Raghava Naidu; Thota, Hanuman; Rao, Allam Appa; Gedela, Srinubabu
2009-01-01
The challenge for -omics research is to tackle the problem of fragmentation of knowledge by integrating several sources of heterogeneous information into a coherent entity. It is widely recognized that successful data integration is one of the keys to improving productivity for stored data. Through proper data integration tools and algorithms, researchers may correlate relationships that enable them to make better and faster decisions. Data integration is essential for the present -omics community, because -omics data are currently spread worldwide in a wide variety of formats. These formats can be integrated and migrated across platforms through different techniques, and one of the important techniques often used is XML. XML provides a document markup language that is easier to learn, retrieve, store and transmit, and it is semantically richer than HTML. Here, we describe biowarehousing, database federation, and controlled vocabularies, highlighting the application of XML to store, migrate and validate -omics data.
Database Reports Over the Internet
NASA Technical Reports Server (NTRS)
Smith, Dean Lance
2002-01-01
Most of the summer was spent developing software that would permit existing test report forms to be printed over the web on a printer that is supported by Adobe Acrobat Reader. The data are stored in a DBMS (Database Management System). The client asks for the information from the database using an HTML (Hypertext Markup Language) form in a web browser. JavaScript is used with the forms to assist the user and verify the integrity of the entered data. Queries to a database are made in SQL (Structured Query Language), a widely supported standard for making queries to databases. Java servlets, programs written in the Java programming language running under the control of network server software, interrogate the database and complete a PDF form template kept in a file. The completed report is sent to the browser requesting the report. Some errors are sent to the browser in an HTML web page; others are reported to the server. Access to the databases was restricted since the data were being transported to new DBMS software that will run on new hardware. However, the SQL queries were made to Microsoft Access, a DBMS that is available on most PCs (personal computers). Access does support the SQL commands that were used, and a database was created with Access that contained typical data for the report forms. Some of the problems and features are discussed below.
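A condensed sketch of the described flow, browser form to servlet to SQL query to returned report, is given below; the JDBC URL, table, and column names are placeholders, and the PDF-template step is omitted for brevity:

    import java.io.IOException;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical report servlet: the browser form supplies a report id, the
    // servlet queries the DBMS, and the rows come back as HTML. (The PDF
    // form-template completion described above is omitted here.)
    public class ReportServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            String reportId = req.getParameter("reportId");
            resp.setContentType("text/html");
            try (Connection con = DriverManager.getConnection("jdbc:placeholder:testReports");
                 PreparedStatement ps = con.prepareStatement(
                     "SELECT field_name, field_value FROM report_fields WHERE report_id = ?")) {
                ps.setString(1, reportId);
                try (ResultSet rs = ps.executeQuery()) {
                    var out = resp.getWriter();
                    out.println("<html><body><table>");
                    while (rs.next())
                        out.printf("<tr><td>%s</td><td>%s</td></tr>%n",
                                   rs.getString(1), rs.getString(2));
                    out.println("</table></body></html>");
                }
            } catch (SQLException e) {
                resp.sendError(500, "Database error: " + e.getMessage());
            }
        }
    }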
NASA Astrophysics Data System (ADS)
Swamy, Ashwin Balegar
This thesis involves the development of an interactive GIS (Geographic Information System) based application, which gives information about the ancient history of Egypt. The astonishing architecture, the strange burial rituals, and the civilization itself were some of the intriguing questions that motivated me to develop this application. The application is a historical timeline starting from 3100 BC and leading up to 664 BC, focusing on the evolution of the Egyptian dynasties. The tool holds information regarding some of the famous monuments constructed during that era and the civilizations that co-existed; it also provides details about the religions followed by the kings and the languages spoken during those periods. The tool is developed using Java, a programming language, and MOJO (Map Objects Java Objects), a product of ESRI (Environmental Systems Research Institute), to create map objects and provide geographic information. Java Swing is used for designing the user interface. HTML (Hypertext Markup Language) pages are created to provide the user with more information related to the historic period. CSS (Cascading Style Sheets) and JavaScript are used with HTML5 to achieve a creative display of content. The tool is kept simple and easy for the user to interact with, and includes pictures and videos to give the user a feel for the historic period. The application is built to motivate people to learn more about one of the prominent ancient civilizations of the Mediterranean world.
SGML-Based Markup for Literary Texts: Two Problems and Some Solutions.
ERIC Educational Resources Information Center
Barnard, David; And Others
1988-01-01
Identifies the Standard Generalized Markup Language (SGML) as the best basis for a markup standard for encoding literary texts. Outlines solutions to problems using SGML and discusses the problem of maintaining multiple views of a document. Examines several ways of reducing the burden of markups. (GEA)
Galdino, Greg M; Gotway, Michael
2005-02-01
The curriculum vitae (CV) has been the traditional method for radiologists to illustrate their accomplishments in the field of medicine. Despite its presence in medicine as a standard, widely accepted means of describing one's professional career, and its use for decades as an accompaniment to most applications and interviews, relatively little has been written in the medical literature regarding the CV. Misrepresentation on medical students', residents', and fellows' applications has been reported. Using digital technology, CVs have the potential to be much more than printed words on paper and offer a solution to misrepresentation. Digital CVs may incorporate full-length articles, graphics, presentations, clinical images, and video. Common formats for digital CVs include CD-ROMs or DVD-ROMs containing articles (in Adobe Portable Document Format) and presentations (in Microsoft PowerPoint format) accompanying printed CVs; word-processing documents with hyperlinks to articles and presentations either locally (on CD-ROMs or DVD-ROMs) or remotely (via the Internet); or Hypertext Markup Language documents. Digital CVs afford the ability to provide more information that is readily accessible to those receiving and reviewing them. Articles, presentations, videos, images, and Internet links can be included using standard file formats commonly available to all radiologists. They can be easily updated and distributed on inexpensive media, such as a CD-ROM or DVD-ROM. With the availability of electronic articles, presentations, and information via the Internet, traditional paper CVs may soon be superseded by their electronic successors.
Waning, Brenda; Maddix, Jason; Soucy, Lyne
2010-07-13
Numerous not-for-profit pharmacies have been created to improve access to medicines for the poor, but many have failed due to insufficient financial planning and management. These pharmacies are not well described in health services literature despite strong demand from policy makers, implementers, and researchers. Surveys reporting unaffordable medicine prices and high mark-ups have spurred efforts to reduce medicine prices, but price reduction goals are arbitrary in the absence of information on pharmacy costs, revenues, and profit structures. Health services research is needed to develop sustainable and "reasonable" medicine price goals and strategic initiatives to reach them. We utilized cost accounting methods on inventory and financial information obtained from a not-for-profit rural pharmacy network in mountainous Kyrgyzstan to quantify costs, revenues, profits and medicine mark-ups during establishment and maintenance periods (October 2004-December 2007). Twelve pharmacies and one warehouse were established in remote Kyrgyzstan with < US $25,000 due to governmental resource-sharing. The network operated at break-even profit, leaving little room to lower medicine prices and mark-ups. Medicine mark-ups needed for sustainability were greater than originally envisioned by network administration. In 2005, 55%, 35%, and 10% of the network's top 50 products revealed mark-ups of < 50%, 50-99% and > 100%, respectively. Annual mark-ups increased dramatically each year to cover increasing recurrent costs, and by 2007, only 19% and 46% of products revealed mark-ups of < 50% and 50-99%, respectively; while 35% of products revealed mark-ups > 100%. 2007 medicine mark-ups varied substantially across these products, ranging from 32% to 244%. Mark-ups needed to sustain private pharmacies would be even higher in the absence of government subsidies. Pharmacy networks can be established in hard-to-reach regions with little funding using public-private partnership, resource-sharing models. Medicine prices and mark-ups must be interpreted with consideration for regional costs of business. Mark-ups vary dramatically across medicines. Some mark-ups appear "excessive" but are likely necessary for pharmacy viability. Pharmacy financial data is available in remote settings and can be used towards determination of "reasonable" medicine price goals. Health systems researchers must document the positive and negative financial experiences of pharmacy initiatives to inform future projects and advance access to medicines goals.
SoyFN: a knowledge database of soybean functional networks.
Xu, Yungang; Guo, Maozu; Liu, Xiaoyan; Wang, Chunyu; Liu, Yang
2014-01-01
Many databases for soybean genomic analysis have been built and made publicly available, but few of them contain knowledge specifically targeting the omics-level gene-gene, gene-microRNA (miRNA) and miRNA-miRNA interactions. Here, we present SoyFN, a knowledge database of soybean functional gene networks and miRNA functional networks. SoyFN provides user-friendly interfaces to retrieve, visualize, analyze and download the functional networks of soybean genes and miRNAs. In addition, it incorporates much information about KEGG pathways, gene ontology annotations and 3'-UTR sequences as well as many useful tools including SoySearch, ID mapping, Genome Browser, eFP Browser and promoter motif scan. SoyFN is a schema-free database that can be accessed as a Web service from any modern programming language using a simple Hypertext Transfer Protocol call. The Web site is implemented in Java, JavaScript, PHP, HTML and Apache, with all major browsers supported. We anticipate that this database will be useful for members of research communities both in soybean experimental science and bioinformatics. Database URL: http://nclab.hit.edu.cn/SoyFN.
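A "simple Hypertext Transfer Protocol call" of the kind the abstract mentions could look like the following Java sketch; the endpoint path and query parameter are hypothetical, as the abstract does not document the service's URL scheme:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class SoyFnQuery {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint and parameter names; the real URL scheme
            // would come from the SoyFN documentation.
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("http://nclab.hit.edu.cn/SoyFN/search?gene=Glyma01g01010"))
                .header("Accept", "application/json")
                .GET()
                .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }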
Intelligent hypertext systems for aerospace engineering applications
NASA Technical Reports Server (NTRS)
Lo, Ching F.
1989-01-01
This paper is a progress report on the utilization of AI technology for assisting users in locating and understanding technical information in the manuals used for planning and conducting wind tunnel tests. The specific goal is to create an Intelligent Hypertext System (IHS) for wind tunnel testing which combines a computerized manual, in the form of hypertext, with an advisory system that stores experts' knowledge and experience. A prototype IHS for conducting transonic wind tunnel testing has been constructed with a limited knowledge base. The prototype is being evaluated by potential users.
Gagl, Benjamin
2016-01-01
Highlighted text on the Internet (i.e., hypertext) is predominantly blue and underlined. The perceptibility of these hypertext characteristics has been heavily questioned by applied research, and empirical tests have yielded inconclusive results. The ability to recognize blue text in foveal and parafoveal vision may be constrained by the low number of foveally centered blue-light-sensitive retinal cells. The present study investigates whether the foveal and parafoveal perceptibility of blue hypertext is reduced in comparison to normal black text during reading. A silent-sentence reading study with simultaneous eye movement recordings and the invisible boundary paradigm, which allows foveal and parafoveal perceptibility to be investigated separately, was realized (comparing fixation times after degraded vs. un-degraded parafoveal previews). Target words in sentences were presented in either black or blue and either underlined or not. No effect of color or underlining, but a preview benefit, could be detected for first-pass reading measures. Fixation time measures that included re-reading, e.g., total viewing times, showed, in addition to a preview effect, reduced fixation times for non-highlighted (black, not underlined) in contrast to highlighted target words (blue, underlined, or both). The present pattern reflects no detectable perceptual disadvantage for hyperlink stimuli, but an increased attraction of attentional resources by highlighting after first-pass reading. Blue or underlined text allows readers to perceive hypertext easily, and at the same time readers re-visit highlighted words for longer. On the basis of the present evidence, blue hypertext can be safely recommended to web designers for future use.
Literary and Electronic Hypertext: Borges, Criticism, Literary Research, and the Computer.
ERIC Educational Resources Information Center
Davison, Ned J.
1991-01-01
Examines what "hypertext" means to literary criticism on the one hand (i.e., intertextuality) and computing on the other, to determine how the two concepts may serve each other in a mutually productive way. (GLR)
A Hypertext Glossary of Nematology.
ERIC Educational Resources Information Center
Francl, Leonard J.
1993-01-01
Describes NEMATODE GLOSSARY, a hypertext glossary of terminology used in graduate nematology courses. Glossary definitions of anatomical terms are linked to color illustrations. Common names of plant and animal parasites and mnemonic codes for nematode genes are in separate appendices. (Author/MDH)
Real-time WebRTC-based design for a telepresence wheelchair.
Van Kha Ly Ha; Rifai Chai; Nguyen, Hung T
2017-07-01
This paper presents a novel approach to a telepresence wheelchair system capable of real-time video communication and remote interaction. The investigation of this emerging technology aims at providing a low-cost and efficient way to support assisted living for people with disabilities. The proposed system was designed and developed using JavaScript with Hypertext Markup Language 5 (HTML5) and Web Real-Time Communication (WebRTC), in which an adaptive rate control algorithm for video transmission is invoked. We conducted experiments in real-world environments, controlling the wheelchair from a distance using an Internet browser, to compare with existing methods. The results show that the adaptively encoded video streaming rate matches the available bandwidth. The video streaming is high quality, at approximately 30 frames per second (fps) with a round-trip time of less than 20 milliseconds (ms). These performance results confirm that the WebRTC approach is a promising method for developing a telepresence wheelchair system.
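The abstract invokes an adaptive rate control algorithm without detailing it; a generic additive-increase/multiplicative-decrease controller, a common shape for such algorithms and not the authors' actual design, is sketched below:

    // Generic AIMD-style rate controller sketch (assumed behavior, not the
    // paper's algorithm). Rates are in kilobits per second.
    public class RateController {
        private double targetKbps = 500;                 // illustrative starting bitrate

        public double update(double estimatedBandwidthKbps, double packetLossFraction) {
            if (packetLossFraction > 0.02) {
                targetKbps *= 0.85;                      // back off under loss
            } else if (targetKbps < estimatedBandwidthKbps * 0.9) {
                targetKbps += 50;                        // probe upward while headroom remains
            }
            targetKbps = Math.max(100, Math.min(targetKbps, estimatedBandwidthKbps));
            return targetKbps;                           // feed to the video encoder
        }
    }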
Resource Discovery for Extreme Scale Collaboration (RDESC) Final Report - RPI/TWC - Year 3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, Peter
The amount of data produced in the practice of science is growing rapidly. Despite the accumulation and demand for scientific data, relatively little is actually made available for the broader scientific community. We surmise that the root of the problem is the perceived difficulty of electronically publishing scientific data and associated metadata in a way that makes it discoverable. We propose to exploit Semantic Web technologies and practices to make (meta)data discoverable and easy to publish. We share our experiences in curating metadata to illustrate both the flexibility of our approach and the pain of discovering data in the current research environment. We also make recommendations, by concrete example, of how data publishers can provide their (meta)data by adding some limited, additional markup to HTML pages on the Web. With little additional effort from data publishers, the difficulty of data discovery/access/sharing can be greatly reduced and the impact of research data greatly enhanced.
Calderon, Karynna; Dadisman, Shawn V.; Kindinger, Jack G.; Flocks, James G.; Wiese, Dana S.
2003-01-01
This archive consists of marine seismic reflection profile data collected in four survey areas from southeast of Charleston Harbor to the mouth of the North Edisto River of South Carolina. These data were acquired June 26 - July 1, 1996, aboard the R/V G.K. Gilbert. Included here are data in a variety of formats including binary, American Standard Code for Information Interchange (ASCII), Hyper Text Markup Language (HTML), Portable Document Format (PDF), Rich Text Format (RTF), Graphics Interchange Format (GIF) and Joint Photographic Experts Group (JPEG) images, and shapefiles. Binary data are in Society of Exploration Geophysicists (SEG) SEG-Y format and may be downloaded for further processing or display. Reference maps and GIF images of the profiles may be viewed with a web browser. The Geographic Information Systems (GIS) map documents provided were created with Environmental Systems Research Institute (ESRI) GIS software ArcView 3.2 and 8.1.
Enhancing the Impact of Science Data: Toward Data Discovery and Reuse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chappell, Alan R.; Weaver, Jesse R.; Purohit, Sumit
The amount of data produced in support of scientific research continues to grow rapidly. Despite the accumulation and demand for scientific data, relatively little data are actually made available for the broader scientific community. We surmise that one root of this problem is the perceived difficulty of electronically publishing scientific data and associated metadata in a way that makes it discoverable. We propose exploiting Semantic Web technologies and best practices to make metadata both discoverable and easy to publish. We share experiences in curating metadata to illustrate the cumbersome nature of data reuse in the current research environment. We also make recommendations, with a real-world example, of how data publishers can provide their metadata by adding limited additional markup to HTML pages on the Web. With little additional effort from data publishers, the difficulty of data discovery, access, and sharing can be greatly reduced and the impact of research data greatly enhanced.
Linked Data: what does it offer Earth Sciences?
NASA Astrophysics Data System (ADS)
Cox, Simon; Schade, Sven
2010-05-01
'Linked Data' is a current buzz-phrase promoting access to various forms of data on the internet. It starts from the two principles that have underpinned the architecture and scalability of the World Wide Web:
1. Universal Resource Identifiers - using the http protocol, which is supported by the DNS system.
2. Hypertext - in which URIs of related resources are embedded within a document. Browsing is the key mode of interaction, with traversal of links between resources under control of the client.
Linked Data also adds, or re-emphasizes:
• Content negotiation - whereby the client uses http headers to tell the service what representation of a resource is acceptable;
• Semantic Web principles - formal semantics for links, following the RDF data model and encoding; and
• The 'mashup' effect - in which original and unexpected value may emerge from reuse of data, even if published in raw or unpolished form.
Linked Data promotes typed links to all kinds of data, so it is where the semantic web meets the 'deep web', i.e. resources which may be accessed using web protocols but are in representations not indexed by search engines. Earth sciences are data rich, but with a strong legacy of specialized formats managed and processed by disconnected applications. However, most contemporary research problems require a cross-disciplinary approach, in which the heterogeneity resulting from that legacy is a significant challenge. In this context, Linked Data clearly has much to offer the earth sciences. But there are some important questions to answer.
What is a resource? Most earth science data is organized in arrays and databases. A subset useful for a particular study is usually identified by a parameterized query. The Linked Data paradigm emerged from the world of documents, and will often only resolve data-sets. It is impractical to create even nested navigation resources containing links to all potentially useful objects or subsets. From the viewpoint of human user interfaces, the browse metaphor, which has been such an important part of the success of the web, must be augmented with other interaction mechanisms, including query.
What are the impacts on search and metadata? Hypertext provides links selected by the page provider. However, science should endeavor to be exhaustive in its use of data. Resource discovery through links must be supplemented by more systematic data discovery through search. Conversely, the crawlers that generate search indexes must be fed by resource providers (a) serving navigation pages with links to every dataset and (b) adding enough 'metadata' (semantics) on each link to effectively populate the indexes. Linked Data makes this easier due to its integration with semantic web technologies, including structured vocabularies.
What is the relation between structured data and Linked Data? Linked Data has focused on web pages (primarily HTML) for human browsing, and RDF for semantics, assuming that other representations are opaque. However, this overlooks the wealth of XML data on the web, some of which is structured according to XML Schemas that provide semantics. Technical applications can use content negotiation to get a structured representation and exploit its semantics. Particularly relevant for earth sciences are data representations based on OGC Geography Markup Language (GML), such as GeoSciML, O&M and MOLES. GML was strongly influenced by RDF, and typed links are intrinsic: xlink:href plays the role that rdf:resource does in RDF representations. Services which expose GML-formatted resources (such as OGC Web Feature Service) are a prototype of Linked Data.
Giving credit where it is due. Organizations investing in data collection may be reluctant to publish the raw data prior to completing an initial analysis. To encourage early data publication the system must provide suitable incentives, and citation analysis must recognize the increasing diversity of publication routes and forms. Linked Data makes it easier to include rich citation information when data is both published and used.
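Since this abstract leans on content negotiation as the bridge between human browsing and machine-readable representations, here is a minimal sketch of that step in Python using the requests library; the URI and the GML media type are assumptions for illustration, not taken from the abstract.

```python
import requests

# Hypothetical URI for an earth-science resource; any Linked Data URI
# is dereferenced the same way.
uri = "http://example.org/id/borehole/GSM-123"

# A machine client negotiates for RDF; a browser would send
# Accept: text/html and receive a page for human reading instead.
rdf = requests.get(uri, headers={"Accept": "application/rdf+xml"})

# A technical application could instead request a GML-encoded representation
# and follow its typed xlink:href links, much as an RDF client follows
# rdf:resource references.
gml = requests.get(uri, headers={"Accept": "application/gml+xml"})

print(rdf.headers.get("Content-Type"))
print(gml.headers.get("Content-Type"))
```

The same URI thus serves browsing, semantic, and structured-data clients, which is the property the abstract argues GML-based services such as Web Feature Service already approximate.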
Variation in markup of general surgical procedures by hospital market concentration.
Cerullo, Marcelo; Chen, Sophia Y; Dillhoff, Mary; Schmidt, Carl R; Canner, Joseph K; Pawlik, Timothy M
2018-04-01
Increasing hospital market concentration (with concomitantly decreasing hospital market competition) may be associated with rising hospital prices. Hospital markup - the relative increase in price over costs - has been associated with greater hospital market concentration. Patients undergoing a cardiothoracic or gastrointestinal procedure in the 2008-2011 Nationwide Inpatient Sample (NIS) were identified and linked to Hospital Market Structure Files. The association between market concentration, hospital markup, and hospital for-profit status was assessed using mixed-effects log-linear models. A weighted total of 1,181,936 patients were identified. In highly concentrated markets, private for-profit status was associated with an 80.8% higher markup compared to public/private not-for-profit status (95%CI: +69.5% to +96.9%; p < 0.001). However, private for-profit status in highly concentrated markets was associated with only a 62.9% higher markup compared to public/private not-for-profit status in unconcentrated markets (95%CI: +45.4% to +81.1%; p < 0.001). Hospital for-profit status modified the association between hospitals' market concentration and markup. Government and private not-for-profit hospitals employed lower markups in more concentrated markets, whereas private for-profit hospitals employed higher markups in more concentrated markets. Copyright © 2017 Elsevier Inc. All rights reserved.
Applying Hypertext Structures to Software Documentation.
ERIC Educational Resources Information Center
French, James C.; And Others
1997-01-01
Describes a prototype system for software documentation management called SLEUTH (Software Literacy Enhancing Usefulness to Humans) being developed at the University of Virginia. Highlights include information retrieval techniques, hypertext links that are installed automatically, a WAIS (Wide Area Information Server) search engine, user…
Oak Regeneration: A Knowledge Synthesis
H. Michael Rauscher; David L. Loftis; Charles E. McGee; Christopher V. Worth
1997-01-01
This scientific literature is presented as hypertext; to view it, you must download and install the hypertext software. Abstract: The scientific literature concerning oak regeneration problems is lengthy, complex, paradoxical, and often perplexing. Despite a large scientific literature and numerous conference...
Smith, R F; Wiese, B A; Wojzynski, M K; Davison, D B; Worley, K C
1996-05-01
The BCM Search Launcher is an integrated set of World Wide Web (WWW) pages that organize molecular biology-related search and analysis services available on the WWW by function, and provide a single point of entry for related searches. The Protein Sequence Search Page, for example, provides a single sequence entry form for submitting sequences to WWW servers that offer remote access to a variety of different protein sequence search tools, including BLAST, FASTA, Smith-Waterman, BEAUTY, PROSITE, and BLOCKS searches. Other Launch pages provide access to (1) nucleic acid sequence searches, (2) multiple and pair-wise sequence alignments, (3) gene feature searches, (4) protein secondary structure prediction, and (5) miscellaneous sequence utilities (e.g., six-frame translation). The BCM Search Launcher also provides a mechanism to extend the utility of other WWW services by adding supplementary hypertext links to results returned by remote servers. For example, links to the NCBI's Entrez data base and to the Sequence Retrieval System (SRS) are added to search results returned by the NCBI's WWW BLAST server. These links provide easy access to auxiliary information, such as Medline abstracts, that can be extremely helpful when analyzing BLAST data base hits. For new or infrequent users of sequence data base search tools, we have preset the default search parameters to provide the most informative first-pass sequence analysis possible. We have also developed a batch client interface for Unix and Macintosh computers that allows multiple input sequences to be searched automatically as a background task, with the results returned as individual HTML documents directly to the user's system. The BCM Search Launcher and batch client are available on the WWW at URL http://gc.bcm.tmc.edu:8088/search-launcher.html.
2016-01-01
Background: Highlighted text on the Internet (i.e., hypertext) is predominantly blue and underlined. The perceptibility of these hypertext characteristics has been questioned in applied research, and empirical tests have yielded inconclusive results. The ability to recognize blue text in foveal and parafoveal vision was identified as potentially constrained by the low number of foveally centered blue-light-sensitive retinal cells. The present study investigates whether foveal and parafoveal perceptibility of blue hypertext is reduced in comparison to normal black text during reading.
Methods: A silent-sentence reading study with simultaneous eye movement recordings and the invisible boundary paradigm, which allows the investigation of foveal and parafoveal perceptibility separately, was conducted (comparing fixation times after degraded vs. un-degraded parafoveal previews). Target words in sentences were presented in either black or blue and either underlined or normal.
Results: No effect of color or underlining was detected for first-pass reading measures, but a preview benefit was. Fixation time measures that included re-reading, e.g., total viewing times, showed, in addition to a preview effect, reduced fixation times for non-highlighted (black, not underlined) target words in contrast to highlighted target words (blue, underlined, or both).
Discussion: The present pattern reflects no detectable perceptual disadvantage of hyperlink stimuli, but increased attraction of attention resources to highlighting after first-pass reading. Blue or underlined text allows readers to easily perceive hypertext, while at the same time readers re-visited highlighted words for longer. On the basis of the present evidence, blue hypertext can be safely recommended to web designers for future use. PMID:27688970
Saadawi, Gilan M; Harrison, James H
2006-10-01
Clinical laboratory procedure manuals are typically maintained as word processor files and are inefficient to store and search, require substantial effort for review and updating, and integrate poorly with other laboratory information. Electronic document management systems could improve procedure management and utility. As a first step toward building such systems, we have developed a prototype electronic format for laboratory procedures using Extensible Markup Language (XML). Representative laboratory procedures were analyzed to identify document structure and data elements. This information was used to create a markup vocabulary, CLP-ML, expressed as an XML Document Type Definition (DTD). To determine whether this markup provided advantages over generic markup, we compared procedures structured with CLP-ML or with the vocabulary of the Health Level Seven, Inc. (HL7) Clinical Document Architecture (CDA) narrative block. CLP-ML includes 124 XML tags and supports a variety of procedure types across different laboratory sections. When compared with a general-purpose markup vocabulary (CDA narrative block), CLP-ML documents were easier to edit and read, less complex structurally, and simpler to traverse for searching and retrieval. In combination with appropriate software, CLP-ML is designed to support electronic authoring, reviewing, distributing, and searching of clinical laboratory procedures from a central repository, decreasing procedure maintenance effort and increasing the utility of procedure information. A standard electronic procedure format could also allow laboratories and vendors to share procedures and procedure layouts, minimizing duplicative word processor editing. Our results suggest that laboratory-specific markup such as CLP-ML will provide greater benefit for such systems than generic markup.
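The abstract does not reproduce the CLP-ML tag set, so the element names in the following sketch are invented stand-ins; the point is only to illustrate the kind of structure-aware retrieval a laboratory-specific XML vocabulary enables and a word processor file does not. Python's standard library suffices.

```python
import xml.etree.ElementTree as ET

# Hypothetical procedure document; the real CLP-ML DTD defines 124 tags,
# which are not listed in the abstract.
doc = """
<procedure>
  <title>Serum Glucose Assay</title>
  <section name="reagents">
    <reagent>Glucose oxidase reagent</reagent>
  </section>
  <section name="steps">
    <step order="1">Calibrate the analyzer.</step>
    <step order="2">Run controls before patient samples.</step>
  </section>
</procedure>
"""

root = ET.fromstring(doc)
# Structure-aware search: pull only the reagents, something that could be
# repeated across every procedure in a central repository.
for reagent in root.findall("./section[@name='reagents']/reagent"):
    print(reagent.text)
```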
Comprehension and Navigation of Networked Hypertexts
ERIC Educational Resources Information Center
Blom, Helen; Segers, Eliane; Knoors, Harry; Hermans, Daan; Verhoeven, Ludo
2018-01-01
This study aims to investigate secondary school students' reading comprehension and navigation of networked hypertexts with and without a graphic overview compared to linear digital texts. Additionally, it was studied whether prior knowledge, vocabulary, verbal, and visual working memory moderated the relation between text design and…
Learning with Hypertext Learning Environments: Theory, Design, and Research.
ERIC Educational Resources Information Center
Jacobson, Michael J.; And Others
1996-01-01
Studied 69 undergraduates who used conceptually-indexed hypertext learning environments with differently structured thematic criss-crossing (TCC) treatments: guided and learner selected. Found that students need explicit modeling and scaffolding support to learn complex knowledge from these learning environments, and considers implications for…
ERIC Educational Resources Information Center
Balajthy, Ernest
1990-01-01
The article examines the potential impact of computer-based text technologies, called hypermedia, on disabled readers. Discussed are hypertext, the hypercard, and implications of metacognitive research (such as author versus user control over text manipulations), instructional implications, and instructional text engineering. (DB)
A Bibliography on Hypertext and Hypermedia with Selected Annotations.
ERIC Educational Resources Information Center
Franklin, Carl
1990-01-01
The first of 2 parts, this bibliography contains 233 references to materials dealing with hypertext and hypermedia. Entries are presented in the following categories: alternatives to HyperCard; bibliographies; biographies; books and book reviews; dictionaries; hardware; interviews; library applications; optical disk-related; theoretical and…
Rehabilitation R@D Progress Reports, 1992-1993. Volume 30-31
1993-01-01
Transcripts of the videotape are being analyzed on a hypertext database and also by qualitative data analysis software (NUDIST) to determine elements...number of videotapes have been transcribed and are being analyzed by the hypertext and NUDIST software. The first cycle is in progress, reflecting
The Liberating Teaching Methods of the Brazilian Paulo Freire and Hypertext.
ERIC Educational Resources Information Center
Gomez-Martinez, Jose Luis
2003-01-01
Exemplifies, through the pedagogical theories put forth by Paulo Freire in his book "Pedagogia del oprimido" (Pedagogy of the Oppressed), along with the potentials of hypertext, the intimate relationship between socio-cultural forces and the technical responses emerging from the dialectic process between them. (AS)
Down the Yellow Chip Road: Hypertext Portfolios in Oz.
ERIC Educational Resources Information Center
Fischer, Katherine M.
1996-01-01
Describes a creative writing class in which students used hypertext to develop their writing portfolios. Suggests that, much like "Kansas Dorothy" who ventured into Oz, a "tornado" carried these students and their teacher from the safe Paperland to the yellow chip road of electronic portfolios. Notes that students' portfolios…
Elaborated Resources: An Instructional Design Strategy for Hypermedia.
ERIC Educational Resources Information Center
Rezabek, Randall H.; Ragan, Tillman J.
The concept of hypertext was introduced by Ted Nelson in 1965, but only recently has the widely available technology caught up with the idea. The new generation of microcomputers featuring large internal memories, graphic interfaces, and large data storage capacities have made the commercial development of hypertext/hypermedia software possible. A…
Effects of External Learning Aids on Learning with Ill-Structured Hypertext.
ERIC Educational Resources Information Center
Astleitner, Hermann
1997-01-01
Describes three experiments with high school and college students concerning learning with ill-structured hypertext; in each study, one different kind of external learning aid (memo pads, learning time, and teaching objectives) was manipulated and examined for its effect on intentional and incidental knowledge acquisition. Findings are discussed…
Seamless Merging of Hypertext and Algorithm Animation
ERIC Educational Resources Information Center
Karavirta, Ville
2009-01-01
Online learning material that students use by themselves is one of the typical usages of algorithm animation (AA). Thus, the integration of algorithm animations into hypertext is seen as an important topic today to promote the usage of algorithm animation in teaching. This article presents an algorithm animation viewer implemented purely using…
Landmarks in the World Wide Web: A Preliminary Study.
ERIC Educational Resources Information Center
Heffron, Jennifer K.; Dillon, Andrew; Mostafa, Javed
1996-01-01
Outlines results of a pilot study examining what constitutes a landmark in hypertext. Seven subjects began a search task from the same Indiana University School of Library and Information Science Homepage; searches had to be conducted without the use of search engines, and strictly following hypertext links. (Author/AEF)
Online Metacognitive Strategies, Hypermedia Annotations, and Motivation on Hypertext Comprehension
ERIC Educational Resources Information Center
Shang, Hui-Fang
2016-01-01
This study examined the effect of online metacognitive strategies, hypermedia annotations, and motivation on reading comprehension in a Taiwanese hypertext environment. A path analysis model was proposed based on the assumption that if English as a foreign language learners frequently use online metacognitive strategies and hypermedia annotations,…
WorldWide Web: Hypertext from CERN.
ERIC Educational Resources Information Center
Nickerson, Gord
1992-01-01
Discussion of software tools for accessing information on the Internet focuses on the WorldWideWeb (WWW) system, which was developed at the European Particle Physics Laboratory (CERN) in Switzerland to build a worldwide network of hypertext links using available networking technology. Its potential for use with multimedia documents is also…
Telemetry Attributes Transfer Standard (TMATS) Handbook
2015-07-01
Indexing fragments from the handbook: the table of contents lists Appendix A, "Extensible Markup Language TMATS Differences", and Appendix B; the acronym list includes TG (Telemetry Group), TM (telemetry), TMATS (Telemetry Attributes Transfer Standard), and XML (eXtensible Markup Language); and a snippet refers to supplying attributes in XML (eXtensible Markup Language) format, citing the Range Commanders Council for the initial version of the standard.
48 CFR 552.243-71 - Equitable Adjustments.
Code of Federal Regulations, 2010 CFR
2010-10-01
...) Markups. (3) Change to the time for completion specified in the contract. (e) Direct costs. The Contractor... contract regarding the Contractor's project schedule. (h) Markups. For each firm whose direct costs are... applicable, a bond rate and insurance rate. Markups shall be determined and applied as follows: (1) Overhead...
Competition in the economic crisis: Analysis of procurement auctions.
Gugler, Klaus; Weichselbaumer, Michael; Zulehner, Christine
2015-01-01
We study the effects of the recent economic crisis on firms' bidding behavior and markups in sealed bid auctions. Using data from Austrian construction procurements, we estimate bidders' construction costs within a private value auction model. We find that markups of all bids submitted decrease by 1.5 percentage points in the recent economic crisis, markups of winning bids decrease by 3.3 percentage points. We also find that without the government stimulus package this decrease would have been larger. These two pieces of evidence point to pro-cyclical markups.
The caBIG annotation and image Markup project.
Channin, David S; Mongkolwat, Pattanasak; Kleper, Vladimir; Sepukar, Kastubh; Rubin, Daniel L
2010-04-01
Image annotation and markup are at the core of medical interpretation in both the clinical and the research setting. Digital medical images are managed with the DICOM standard format. While DICOM contains a large amount of meta-data about whom, where, and how the image was acquired, DICOM says little about the content or meaning of the pixel data. An image annotation is the explanatory or descriptive information about the pixel data of an image that is generated by a human or machine observer. An image markup is the graphical symbols placed over the image to depict an annotation. While DICOM is the standard for medical image acquisition, manipulation, transmission, storage, and display, there are no standards for image annotation and markup. Many systems expect annotation to be reported verbally, while markups are stored in graphical overlays or proprietary formats. This makes it difficult to extract and compute with both of them. The goal of the Annotation and Image Markup (AIM) project is to develop a mechanism for modeling, capturing, and serializing image annotation and markup data that can be adopted as a standard by the medical imaging community. The AIM project produces both human- and machine-readable artifacts. This paper describes the AIM information model, schemas, software libraries, and tools so as to prepare researchers and developers for their use of AIM.
ERIC Educational Resources Information Center
Chen, I-Jung; Yen, Jung-Chuan
2013-01-01
This study extends current knowledge by exploring the effect of different annotation formats, namely in-text annotation, glossary annotation, and pop-up annotation, on hypertext reading comprehension in a foreign language and vocabulary acquisition across student proficiencies. User attitudes toward the annotation presentation were also…
Rethinking Joseph Janangelo's "Joseph Cornell and the Artistry of Composing Persuasive Hypertexts"
ERIC Educational Resources Information Center
College Composition and Communication, 2007
2007-01-01
This article presents several excerpts from an article written by Joseph Janangelo titled "Joseph Cornell and the Artistry of Composing Persuasive Hypertexts." In his article, Janangelo suggested that Cornell's work and ideas about composing model intelligent ways of composing persuasive nonsequential text. Janangelo also wondered if the use of…
E-Learning Today: A Review of Research on Hypertext Comprehension
ERIC Educational Resources Information Center
Hinesley, Gail A.
2007-01-01
Use of hypertext is pervasive in education today--it is used for all online course delivery as well as many stand-alone delivery methods such as educational computer software and compact discs (CDs). This article will review Kintsch's Construction-Integration and Anderson's Adaptive Control of Thought-Rational (ACT-R) cognitive architectures and…
ERIC Educational Resources Information Center
Girill, T. R.
1991-01-01
This article continues the description of DFT (Document, Find, Theseus), an online documentation system that provides computer-managed on-demand printing of software manuals as well as the interactive retrieval of reference passages. Document boundaries in the hypertext database are discussed, search vocabulary complexities are described, and text…
ERIC Educational Resources Information Center
Naumann, Johannes; Richter, Tobias; Christmann, Ursula; Groeben, Norbert
2008-01-01
Cognitive and metacognitive strategies are particularly important for learning with hypertext. The effectiveness of strategy training, however, depends on available working memory resources. Thus, especially learners high on working memory capacity can profit from strategy training, while learners low on working memory capacity might easily be…
Working Memory Capacity and L2 University Students' Comprehension of Linear Texts and Hypertexts
ERIC Educational Resources Information Center
Fontanini, Ingrid; Tomitch, Leda Maria Braga
2009-01-01
The aim of this study was to investigate the relationship between working memory capacity and L2 reading comprehension of both linear texts and hypertexts. Three different instruments were used to measure comprehension (recall, comprehension questions and perception of contradictions) and the Reading Span Test (Daneman & Carpenter, 1980) was…
Navigation Maps in a Computer-Networked Hypertext Learning System.
ERIC Educational Resources Information Center
Chou, Chien; Lin, Hua
A study of first-year college students (n=121) in Taiwan investigated the effects of navigation maps and learner cognitive styles on performance in searches for information, estimation of course scope, and the development of cognitive maps within a hypertext learning course. Students were tested to determine level of perceptual field dependence…
Recent Literature Shows Accelerated Growth in Hypermedia Tools: An Annotated Bibliography.
ERIC Educational Resources Information Center
Gabbard, Ralph
1994-01-01
Presents an annotated bibliography of materials on hypertext/hypermedia. Information available on the World Wide Web is described; journals that cover hypermedia are listed; and the main bibliography is divided into 3 sections on general hypertext applications (17 titles), DOS/Windows applications (17 titles), and HyperCard applications (18…
ERIC Educational Resources Information Center
Tergan, Sigmar-Olaf
1997-01-01
Reviews research on the effectiveness of hypertext/hypermedia-based learning and concludes that presenting subject matter from different perspectives, in multiple contexts, and in multiple codes does not automatically contribute to higher performance but may when instructional scaffolding is provided. The additional cognitive load may actually…
Data Display Markup Language (DDML) Handbook
2017-01-31
Moreover, the tendency of T&E is towards a plug-and-play-like data acquisition system that requires standard languages and modules for data displays... (Fragment from Document 127-17, Data Display Markup Language (DDML) Handbook, prepared by the Telemetry Group, January 2017; Distribution A: approved for public release.)
A Conversion Tool for Mathematical Expressions in Web XML Files.
ERIC Educational Resources Information Center
Ohtake, Nobuyuki; Kanahori, Toshihiro
2003-01-01
This article discusses the conversion of mathematical equations into Extensible Markup Language (XML) on the World Wide Web for individuals with visual impairments. A program is described that converts the presentation markup style to the content markup style in MathML to allow browsers to render mathematical expressions without other programs.…
Answer Markup Algorithms for Southeast Asian Languages.
ERIC Educational Resources Information Center
Henry, George M.
1991-01-01
Typical markup methods for providing feedback to foreign language learners are not applicable to languages not written in a strictly linear fashion. A modification of Hart's edit markup software is described, along with a second variation based on a simple edit distance algorithm adapted to a general Southeast Asian font system. (10 references)…
An Introduction to the Extensible Markup Language (XML).
ERIC Educational Resources Information Center
Bryan, Martin
1998-01-01
Describes Extensible Markup Language (XML), a subset of the Standard Generalized Markup Language (SGML) that is designed to make it easy to interchange structured documents over the Internet. Topics include Document Type Definition (DTD), components of XML, the use of XML, text and non-text elements, and uses for XML-coded files. (LRW)
Chemical Markup, XML and the World-Wide Web. 8. Polymer Markup Language.
Adams, Nico; Winter, Jerry; Murray-Rust, Peter; Rzepa, Henry S
2008-11-01
Polymers are among the most important classes of materials but are only inadequately supported by modern informatics. The paper discusses the reasons why polymer informatics is considerably more challenging than small molecule informatics and develops a vision for the computer-aided design of polymers, based on modern semantic web technologies. The paper then discusses the development of Polymer Markup Language (PML). PML is an extensible language, designed to support the (structural) representation of polymers and polymer-related information. PML closely interoperates with Chemical Markup Language (CML) and overcomes a number of the previously identified challenges.
Data on the interexaminer variation of minutia markup on latent fingerprints.
Ulery, Bradford T; Hicklin, R Austin; Roberts, Maria Antonia; Buscaglia, JoAnn
2016-09-01
The data in this article supports the research paper entitled "Interexaminer variation of minutia markup on latent fingerprints" [1]. The data in this article describes the variability in minutia markup during both analysis of the latents and comparison between latents and exemplars. The data was collected in the "White Box Latent Print Examiner Study," in which each of 170 volunteer latent print examiners provided detailed markup documenting their examinations of latent-exemplar pairs of prints randomly assigned from a pool of 320 pairs. Each examiner examined 22 latent-exemplar pairs; an average of 12 examiners marked each latent.
ERIC Educational Resources Information Center
Zandieh, Zeinab; Jafarigohar, Manoochehr
2012-01-01
The present study investigated comprehension, immediate and delayed vocabulary retention under incidental and intentional learning conditions via computer mediated hypertext gloss. One hundred and eighty four (N = 184) intermediate students of English as a foreign language at an English school participated in the study. They were randomly assigned…
ERIC Educational Resources Information Center
Farjardo, Inmaculada; Arfe, Barbara; Benedetti, Patrizia; Altoe, Gianmarco
2008-01-01
Sixty deaf and hearing students were asked to search for goods in a Hypertext Supermarket with either graphical or textual links of high typicality, frequency, and familiarity. Additionally, they performed a picture and word categorization task and two working memory span tasks (spatial and verbal). Results showed that deaf students were faster in…
Reconceptualising Pedagogy: Students' Hypertext Stories with Pictures and Words.
ERIC Educational Resources Information Center
Russell, Glenn
Hypertext software permits students to write non-linear stories which include pictures and words. The characteristics of these stories may be affected by student and teacher understandings of how pictures and words may be combined to produce meanings for the reader. The use of images and words in comic books and children's picture-books contribute…
ERIC Educational Resources Information Center
Yao, Yuanming; Gill, Michele
2009-01-01
The impact of hypertext presentation formats on learner control and cognitive load was examined in this study using Campbell and Stanley's (1963) Posttest Only Control Group design. One hundred eighty-six undergraduate students were randomly assigned to read a web-based text with no annotations, online glossary annotations, embedded annotations,…
ERIC Educational Resources Information Center
Bernacki, Matthew
2010-01-01
This study examined how learners construct textbase and situation model knowledge in hypertext computer-based learning environments (CBLEs) and documented the influence of specific self-regulated learning (SRL) tactics, prior knowledge, and characteristics of the learner on posttest knowledge scores from exposure to a hypertext. A sample of 160…
ERIC Educational Resources Information Center
Tebbutt, John
1999-01-01
Discusses efforts at National Institute of Standards and Technology (NIST) to construct an information discovery tool through the fusion of hypertext and information retrieval that works by parsing a contiguous document base into smaller documents and inserting semantic links between them. Also presents a case study that evaluated user reactions.…
Hypertext-based design of a user interface for scheduling
NASA Technical Reports Server (NTRS)
Woerner, Irene W.; Biefeld, Eric
1993-01-01
Operations Mission Planner (OMP) is an ongoing research project at JPL that utilizes AI techniques to create an intelligent, automated planning and scheduling system. The information space reflects the complexity and diversity of tasks necessary in most real-world scheduling problems. Thus the problem of the user interface is to present as much information as possible at a given moment and allow the user to quickly navigate through the various types of displays. This paper describes a design which applies the hypertext model to solve these user interface problems. The general paradigm is to provide maps and search queries to allow the user to quickly find an interesting conflict or problem, and then allow the user to navigate through the displays in a hypertext fashion.
ERIC Educational Resources Information Center
Campbell, D. Grant
2002-01-01
Describes a qualitative study which investigated the attitudes of literary scholars towards the features of semantic markup for primary texts in XML format. Suggests that layout is a vital part of the reading process which implies that the standardization of DTDs (Document Type Definitions) should extend to styling as well. (Author/LRW)
2015-07-01
Fragments from the XML Style Guide, RCC 125-15, July 2015. The acronym list includes ASCII (American Standard Code for Information Interchange), DAU (data acquisition unit), DDML (data display markup language), IHAL, URI (uniform resource identifier), W3C (World Wide Web Consortium), XML (extensible markup language), and XSD (XML schema definition). The introduction begins: "The next generation of telemetry systems will rely heavily on extensible markup language (XML..."
Samwald, Matthias; Lim, Ernest; Masiar, Peter; Marenco, Luis; Chen, Huajun; Morse, Thomas; Mutalik, Pradeep; Shepherd, Gordon; Miller, Perry; Cheung, Kei-Hoi
2009-01-01
The amount of biomedical data available in Semantic Web formats has been rapidly growing in recent years. While these formats are machine-friendly, user-friendly web interfaces allowing easy querying of these data are typically lacking. We present "Entrez Neuron", a pilot neuron-centric interface that allows for keyword-based queries against a coherent repository of OWL ontologies. These ontologies describe neuronal structures, physiology, mathematical models and microscopy images. The returned query results are organized hierarchically according to brain architecture. Where possible, the application makes use of entities from the Open Biomedical Ontologies (OBO) and the 'HCLS knowledgebase' developed by the W3C Interest Group for Health Care and Life Science. It makes use of the emerging RDFa standard to embed ontology fragments and semantic annotations within its HTML-based user interface. The application and underlying ontologies demonstrate how Semantic Web technologies can be used for information integration within a curated information repository and between curated information repositories. It also demonstrates how information integration can be accomplished on the client side, through simple copying and pasting of portions of documents that contain RDFa markup.
Effects of small particle numbers on long-term behaviour in discrete biochemical systems.
Kreyssig, Peter; Wozar, Christian; Peter, Stephan; Veloz, Tomás; Ibrahim, Bashar; Dittrich, Peter
2014-09-01
The functioning of many biological processes depends on the appearance of only a small number of a single molecular species. Additionally, the observation of molecular crowding leads to the insight that even a high number of copies of species does not guarantee their interaction. How single particles contribute to stabilizing biological systems is not well understood yet. Hence, we aim at determining the influence of single molecules on the long-term behaviour of biological systems, i.e. whether they can reach a steady state. We provide theoretical considerations and a tool to analyse Systems Biology Markup Language models for the possibility of stabilization due to the described effects. The theory is an extension of chemical organization theory, which we called discrete chemical organization theory. Furthermore we scanned the BioModels Database for the occurrence of discrete chemical organizations. To exemplify our method, we describe an application to the Template model of the mitotic spindle assembly checkpoint mechanism. http://www.biosys.uni-jena.de/Services.html. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
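As a sketch of the kind of input such a tool consumes, the following loads a Systems Biology Markup Language model with the python-libsbml bindings and enumerates its species; the file name is hypothetical, and the discrete-organization analysis itself is not shown.

```python
import libsbml  # python-libsbml bindings

# Hypothetical file name, e.g. a model retrieved from the BioModels Database.
doc = libsbml.readSBML("template_model.xml")
if doc.getNumErrors() > 0:
    doc.printErrors()

model = doc.getModel()
print("species:", model.getNumSpecies())
print("reactions:", model.getNumReactions())

# The species and reaction lists are what an organization-theoretic analysis
# would enumerate when asking which sets of species can persist long-term.
for i in range(model.getNumSpecies()):
    print(model.getSpecies(i).getId())
```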
An XML-based interchange format for genotype-phenotype data.
Whirl-Carrillo, M; Woon, M; Thorn, C F; Klein, T E; Altman, R B
2008-02-01
Recent advances in high-throughput genotyping and phenotyping have accelerated the creation of pharmacogenomic data. Consequently, the community requires standard formats to exchange large amounts of diverse information. To facilitate the transfer of pharmacogenomics data between databases and analysis packages, we have created a standard XML (eXtensible Markup Language) schema that describes both genotype and phenotype data as well as associated metadata. The schema accommodates information regarding genes, drugs, diseases, experimental methods, genomic/RNA/protein sequences, subjects, subject groups, and literature. The Pharmacogenetics and Pharmacogenomics Knowledge Base (PharmGKB; www.pharmgkb.org) has used this XML schema for more than 5 years to accept and process submissions containing more than 1,814,139 SNPs on 20,797 subjects using 8,975 assays. Although developed in the context of pharmacogenomics, the schema is of general utility for exchange of genotype and phenotype data. We have written syntactic and semantic validators to check documents using this format. The schema and code for validation is available to the community at http://www.pharmgkb.org/schema/index.html (last accessed: 8 October 2007). (c) 2007 Wiley-Liss, Inc.
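The abstract mentions syntactic validators for documents in this format; a minimal sketch of XSD validation with Python's lxml is shown below. The file names are placeholders, since the actual schema is the one published on the PharmGKB site.

```python
from lxml import etree

# Placeholder file names; the real schema is distributed by PharmGKB.
schema = etree.XMLSchema(etree.parse("pharmgkb-exchange.xsd"))
doc = etree.parse("genotype_submission.xml")

if schema.validate(doc):
    print("submission is schema-valid")
else:
    # Report each violation with its line number for correction.
    for error in schema.error_log:
        print(error.line, error.message)
```

Semantic validation (e.g., that a referenced gene or assay actually exists) would sit on top of this syntactic check.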
2016-02-08
Fragments from DDML Schema Validation, RCC 126-16, February 2016. The acronym list includes DDML (Data Display Markup Language), HUD (heads-up display), IRIG (Inter-Range Instrumentation Group), RCC (Range Commanders Council), SVG (Scalable Vector Graphics), T&E (test and evaluation), TMATS (Telemetry Attributes Transfer Standard), and XML (eXtensible Markup Language). The introduction begins: "This Data Display Markup..."
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-10
... that it is appropriate to charge a markup with respect to directed orders to reflect the costs of offering routing services and the value of such services. Notably, in all instances NASDAQ charges a markup... that it does not currently charge a markup with respect to non-directed orders that are routed to PSX...
Application of whole slide image markup and annotation for pathologist knowledge capture.
Campbell, Walter S; Foster, Kirk W; Hinrichs, Steven H
2013-01-01
The ability to transfer image markup and annotation data from one scanned image of a slide to a newly acquired image of the same slide within a single vendor platform was investigated. The goal was to study the ability to use image markup and annotation data files as a mechanism to capture and retain pathologist knowledge without retaining the entire whole slide image (WSI) file. Accepted mathematical principles were investigated as a method to overcome variations in scans of the same glass slide and to accurately associate image markup and annotation data across different WSI of the same glass slide. Trilateration was used to link fixed points within the image and slide to the placement of markups and annotations of the image in a metadata file. Variation in markup and annotation placement between WSI of the same glass slide was reduced from over 80 μ to less than 4 μ in the x-axis and from 17 μ to 6 μ in the y-axis (P < 0.025). This methodology allows for the creation of a highly reproducible image library of histopathology images and interpretations for educational and research use.
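The abstract names trilateration as the mechanism for re-anchoring markup across scans of the same slide. A minimal 2-D sketch under that formulation follows: three fixed reference points with known distances to a markup locate it in a new scan. The coordinates are invented; subtracting the circle equations pairwise yields a 2x2 linear system solved by Cramer's rule.

```python
def trilaterate(p1, p2, p3, d1, d2, d3):
    """Locate a point given three fixed reference points and its distance to each."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting circle 1 from circles 2 and 3 linearizes the problem.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("reference points are collinear")
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Invented landmarks on the slide and distances recorded against the first
# scan; the same distances re-place the annotation in a second scan.
print(trilaterate((0, 0), (100, 0), (0, 100), 70.71, 70.71, 70.71))
# -> (50.0, 50.0)
```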
NASA Astrophysics Data System (ADS)
Stylianou, Agni
2003-06-01
Digital texts which are based on hypertext and hypermedia technologies are now being used to support science learning. Hypertext offers certain opportunities for learning as well as difficulties that challenge readers to become metacognitively aware of their navigation decisions in order to trade both meaning and structure while reading. The goal of this study was to investigate whether supporting sixth grade students to monitor and regulate their navigation behavior while reading from hypertext would lead to better navigation and learning. Metanavigation support in the form of prompts was provided to groups of students who used a hypertext system called CoMPASS to complete a design challenge. The metanavigation prompts aimed at encouraging students to understand the affordances of the navigational aids in CoMPASS and use them to guide their navigation. The study was conducted in a real classroom setting during the implementation of CoMPASS in sixth grade science classes. Multiple sources of group and individual data were collected and analyzed. Measures included student's individual performance in a pre-science knowledge test, the Metacognitive Awareness of Reading Strategies Inventory (MARSI), a reading comprehension test and a concept map test. Process measures included log file information that captured group navigation paths during the use of CoMPASS. The results suggested that providing metanavigation support enabled the groups to make coherent transitions among the text units. Findings also revealed that reading comprehension, presence of metanavigation support and prior domain knowledge significantly predicted students' individual understanding of science. Implications for hypertext design and literacy research fields are discussed.
ERIC Educational Resources Information Center
Manurung, Sondang R.; Mihardi, Satria
2016-01-01
The purpose of this study was to determine the effectiveness of hypertext-based kinematics learning media and formal thinking ability in improving the conceptual understanding of prospective physics students. The research design used was a one-group pretest-posttest experimental design, carried out by taking 36 students from…
On the Creation of Hypertext Links in Full-Text Documents: Measurement of Inter-Linker Consistency.
ERIC Educational Resources Information Center
Ellis, David; And Others
1994-01-01
Describes a study in which several different sets of hypertext links are inserted by different people in full-text documents. The degree of similarity between the sets is measured using coefficients and topological indices. As in comparable studies of inter-indexer consistency, the sets of links used by different people showed little similarity.…
Computer integrated documentation
NASA Technical Reports Server (NTRS)
Boy, Guy
1991-01-01
The main technical issues of the Computer Integrated Documentation (CID) project are presented. The problem of automation of documents management and maintenance is analyzed both from an artificial intelligence viewpoint and from a human factors viewpoint. Possible technologies for CID are reviewed: conventional approaches to indexing and information retrieval; hypertext; and knowledge based systems. A particular effort was made to provide an appropriate representation for contextual knowledge. This representation is used to generate context on hypertext links. Thus, indexing in CID is context sensitive. The implementation of the current version of CID is described. It includes a hypertext data base, a knowledge based management and maintenance system, and a user interface. A series of theoretical considerations is also presented, such as navigation in hyperspace, acquisition of indexing knowledge, generation and maintenance of large documentation, and the relation to other work.
TMATS/ IHAL/ DDML Schema Validation
2017-02-01
task was to create a method for performing IRIG eXtensible Markup Language (XML) schema validation. As opposed to XML instance document validation... (Fragments from TMATS/IHAL/DDML Schema Validation, RCC 126-17, February 2017; the acronym list includes DDML (Data Display Markup Language), HUD (heads-up display), iNET..., and XML (eXtensible Markup Language).)
An object-oriented approach for harmonization of multimedia markup languages
NASA Astrophysics Data System (ADS)
Chen, Yih-Feng; Kuo, May-Chen; Sun, Xiaoming; Kuo, C.-C. Jay
2003-12-01
An object-oriented methodology is proposed to harmonize several different markup languages in this research. First, we adopt the Unified Modelling Language (UML) as the data model to formalize the concept and the process of the harmonization process between the eXtensible Markup Language (XML) applications. Then, we design the Harmonization eXtensible Markup Language (HXML) based on the data model and formalize the transformation between the Document Type Definitions (DTDs) of the original XML applications and HXML. The transformation between instances is also discussed. We use the harmonization of SMIL and X3D as an example to demonstrate the proposed methodology. This methodology can be generalized to various application domains.
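The instance-level transformation described here (an original XML application into HXML) is the sort of step XSLT handles; the following sketch uses lxml's XSLT support with an invented one-rule stylesheet, since the paper's actual DTDs are not reproduced in the abstract.

```python
from lxml import etree

# Invented rule: map a SMIL-like <par> element to a neutral <parallel>
# element, copying everything else through unchanged.
xslt = etree.XML("""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="par">
    <parallel><xsl:apply-templates select="@*|node()"/></parallel>
  </xsl:template>
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(xslt)
source = etree.XML("<par><audio/><video/></par>")
# Prints a <parallel> element wrapping the copied <audio/> and <video/>.
print(etree.tostring(transform(source), pretty_print=True).decode())
```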
Managing and Querying Image Annotation and Markup in XML.
Wang, Fusheng; Pan, Tony; Sharma, Ashish; Saltz, Joel
2010-01-01
Proprietary approaches for representing annotations and image markup are serious barriers for researchers to share image data and knowledge. The Annotation and Image Markup (AIM) project is developing a standards-based information model for image annotation and markup in health care and clinical trial environments. The complex hierarchical structures of the AIM data model pose new challenges for managing such data in terms of performance and support of complex queries. In this paper, we present our work on managing AIM data through a native XML approach, and supporting complex image and annotation queries through a native extension of the XQuery language. Through integration with xService, AIM databases can now be conveniently shared through caGrid.
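As a sketch of the sort of structured query a native XML approach supports, the following uses XPath via lxml (rather than XQuery, for which standard Python has no engine); the element names are simplified stand-ins, not the real AIM model.

```python
from lxml import etree

# Simplified stand-in for an AIM annotation document; the real model is a
# deeper hierarchy defined by the project's schemas.
doc = etree.XML("""
<imageAnnotations>
  <annotation codeMeaning="mass" confidence="0.92">
    <geometricShape type="circle" x="120" y="340" radius="17"/>
  </annotation>
  <annotation codeMeaning="calcification" confidence="0.55">
    <geometricShape type="point" x="98" y="400"/>
  </annotation>
</imageAnnotations>
""")

# Predicate queries like this are hard to evaluate when markup is locked
# inside proprietary graphical overlays.
for ann in doc.xpath("//annotation[@confidence > 0.8]"):
    print(ann.get("codeMeaning"))  # -> mass
```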
Development of clinical contents model markup language for electronic health records.
Yun, Ji-Hyun; Ahn, Sun-Ju; Kim, Yoon
2012-09-01
To develop a dedicated markup language for clinical contents models (CCM) to facilitate the active use of CCM in electronic health record systems. Based on analysis of the structure and characteristics of CCM in the clinical domain, we designed an extensible markup language (XML) based CCM markup language (CCML) schema manually. CCML faithfully reflects CCM in both the syntactic and semantic aspects. As this language is based on XML, it can be expressed and processed in computer systems and can be used in a technology-neutral way. CCML has the following strengths: it is machine-readable and highly human-readable, it does not require a dedicated parser, and it can be applied to existing electronic health record systems.
1994-12-01
complex Internet addresses. Hypertext and hypermedia documents have logical and physical structure (Shneiderman, 1993). The logical structure delineates...Rubra, Miliaria Profunda, Anhidrotic Heat Exhaustion, Heat Syncope, Heat Edema, Sunburn, and Heat Tetany. The user may return to the main document...military or scientific organizations via digital communications networks such as the Internet. Access clearance would first be obtained from the USARIEM
Utilizing Internet Technologies in Observatory Control Systems
NASA Astrophysics Data System (ADS)
Cording, Dean
2002-12-01
The 'Internet boom' of the past few years has spurred the development of a number of technologies to provide services such as secure communications, reliable messaging, information publishing, and application distribution for commercial applications. Over the same period, a new generation of computer languages has also developed to provide object-oriented design and development, improved reliability, and cross-platform compatibility. Whilst the business models of the 'dot.com' era proved to be largely unviable, the technologies that they were based upon have survived and have matured to the point where they can now be utilized to build secure, robust, and complete observatory control systems. This paper will describe how Electro Optic Systems has utilized these technologies in the development of its third generation Robotic Observatory Control System (ROCS). ROCS provides an extremely flexible configuration capability within a control system structure to provide truly autonomous robotic observatory operation, including observation scheduling. ROCS was built using Internet technologies such as Java, Java Messaging Service (JMS), Lightweight Directory Access Protocol (LDAP), Secure Sockets Layer (SSL), eXtensible Markup Language (XML), Hypertext Transport Protocol (HTTP), and Java WebStart. ROCS was designed to be capable of controlling all aspects of an observatory and be able to be reconfigured to handle changing equipment configurations or user requirements without the need for an expert computer programmer. ROCS consists of many small components, each designed to perform a specific task, with the configuration of the system specified using a simple meta language. The use of small components facilitates testing and makes it possible to prove that the system is correct.
Interexaminer variation of minutia markup on latent fingerprints.
Ulery, Bradford T; Hicklin, R Austin; Roberts, Maria Antonia; Buscaglia, JoAnn
2016-07-01
Latent print examiners often differ in the number of minutiae they mark during analysis of a latent, and also during comparison of a latent with an exemplar. Differences in minutia counts understate interexaminer variability: examiners' markups may have similar minutia counts but differ greatly in which specific minutiae were marked. We assessed variability in minutia markup among 170 volunteer latent print examiners. Each provided detailed markup documenting their examinations of 22 latent-exemplar pairs of prints randomly assigned from a pool of 320 pairs. An average of 12 examiners marked each latent. The primary factors associated with minutia reproducibility were clarity, which regions of the prints examiners chose to mark, and agreement on value or comparison determinations. In clear areas (where the examiner was "certain of the location, presence, and absence of all minutiae"), median reproducibility was 82%; in unclear areas, median reproducibility was 46%. Differing interpretations regarding which regions should be marked (e.g., when there is ambiguity in the continuity of a print) contributed to variability in minutia markup: especially in unclear areas, marked minutiae were often far from the nearest minutia marked by a majority of examiners. Low reproducibility was also associated with differences in value or comparison determinations. Lack of standardization in minutia markup and unfamiliarity with test procedures presumably contribute to the variability we observed. We have identified factors accounting for interexaminer variability; implementing standards for detailed markup as part of documentation and focusing future training efforts on these factors may help to facilitate transparency and reduce subjectivity in the examination process. Published by Elsevier Ireland Ltd.
Yoshida, Yutaka; Miyazaki, Kenji; Kamiie, Junichi; Sato, Masao; Okuizumi, Seiji; Kenmochi, Akihisa; Kamijo, Ken'ichi; Nabetani, Takuji; Tsugita, Akira; Xu, Bo; Zhang, Ying; Yaoita, Eishin; Osawa, Tetsuo; Yamamoto, Tadashi
2005-03-01
To contribute to physiology and pathophysiology of the glomerulus of human kidney, we have launched a proteomic study of human glomerulus, and compiled a profile of proteins expressed in the glomerulus of normal human kidney by two-dimensional gel electrophoresis (2-DE) and identification with matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI-TOF MS) and/or liquid chromatography-tandem mass spectrometry (LC-MS/MS). Kidney cortices with normal appearance were obtained from patients under surgical nephrectomy due to renal tumor, and glomeruli were highly purified by a standard sieving method followed by picking up under a phase-contrast microscope. The glomerular proteins were separated by 2-DE with 24 cm immobilized pH gradient strips in the 3-10 range in the first dimension and 26 x 20 cm sodium dodecyl sulfate polyacrylamide electrophoresis gels of 12.5% in the second dimension. Gels were silver-stained, and valid spots were processed for identification through an integrated robotic system that consisted of a spot picker, an in-gel digester, and a MALDI-TOF MS and/or an LC-MS/MS. From 2-DE gel images of glomeruli of four subjects with no apparent pathologic manifestations, a synthetic gel image of normal glomerular proteins was created. The synthetic gel image contained 1713 valid spots, of which 1559 spots were commonly observed in the respective 2-DE gels. Among the 1559 spots, 347 protein spots, representing 212 proteins, have so far been identified and used for the construction of an extensible markup language (XML)-based database. The database is deposited on a web site (http://www.sw.nec.co.jp/bio/rd/hgldb/index.html) in a form accessible to researchers to contribute to proteomic studies of human glomerulus in health and disease.
Application of XML to Journal Table Archiving
NASA Astrophysics Data System (ADS)
Shaya, E. J.; Blackwell, J. H.; Gass, J. E.; Kargatis, V. E.; Schneider, G. L.; Weiland, J. L.; Borne, K. D.; White, R. A.; Cheung, C. Y.
1998-12-01
The Astronomical Data Center (ADC) at the NASA Goddard Space Flight Center is a major archive for machine-readable astronomical data tables. Many ADC tables are derived from published journal articles. Article tables are reformatted to be machine-readable and documentation is crafted to facilitate proper reuse by researchers. The recent switch of journals to web based electronic format has resulted in the generation of large amounts of tabular data that could be captured into machine-readable archive format at fairly low cost. The large data flow of the tables from all major North American astronomical journals (a factor of 100 greater than the present rate at the ADC) necessitates the development of rigorous standards for the exchange of data between researchers, publishers, and the archives. We have selected a suitable markup language that can fully describe the large variety of astronomical information contained in ADC tables. The eXtensible Markup Language (XML) is a powerful internet-ready documentation format for data. It provides a precise and clear data description language that is both machine- and human-readable. It is rapidly becoming the standard format for business and information transactions on the internet and it is an ideal common metadata exchange format. By labelling, or "marking up", all elements of the information content, documents are created that computers can easily parse. An XML archive can easily and automatically be maintained, ingested into standard databases or custom software, and even totally restructured whenever necessary. Structuring astronomical data into XML format will enable efficient and focused search capabilities via off-the-shelf software. The ADC is investigating XML's expanded hyperlinking power to enhance connectivity within the ADC data/metadata and developing XSL display scripts to enhance display of astronomical data. The ADC XML Document Type Definition can be viewed at http://messier.gsfc.nasa.gov/dtdhtml/DTD-TREE.html
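As a sketch of "marking up all elements of the information content" for a journal table, the following builds a small XML table with Python's standard library; the element names are invented and do not follow the ADC's actual DTD.

```python
import xml.etree.ElementTree as ET

# Invented structure; the ADC's real DTD differs.
rows = [("NGC 224", "0.7", "Sb"), ("NGC 598", "0.9", "Sc")]

table = ET.Element("table")
ET.SubElement(table, "caption").text = "Nearby spiral galaxies"
fields = ET.SubElement(table, "fields")
ET.SubElement(fields, "field", name="name")
ET.SubElement(fields, "field", name="distance", unit="Mpc")
ET.SubElement(fields, "field", name="type")

data = ET.SubElement(table, "data")
for name, dist, morph in rows:
    row = ET.SubElement(data, "row")
    ET.SubElement(row, "name").text = name
    ET.SubElement(row, "distance").text = dist
    ET.SubElement(row, "type").text = morph

# With every element labelled, off-the-shelf software can parse, search,
# restructure, or re-ingest the table automatically.
print(ET.tostring(table, encoding="unicode"))
```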
Gates, Allison; Dolovich, Lisa; Slavcev, Roderick; Drimmie, Rob; Aghaei, Behzad; Poon, Calvin; Khan, Shamrozé; Leat, Susan J
2014-01-01
Background: In order to take medications safely and effectively, individuals need to be able to see, read, and understand the medication labels. However, one-half of medication labels are currently misunderstood, often because of low literacy, low vision, and cognitive impairment. We sought to design a mobile tool termed ClereMed that could rapidly screen for adults who have difficulty reading or understanding their medication labels.
Objective: The aim of this study was to build the ClereMed prototype; to determine the usability of the prototype with adults 55 and over; to assess its accuracy for identifying adults with low functional reading ability, poor ability on a real-life pill-sorting task, and low cognition; and to assess the acceptability of a touchscreen device with older adults with age-related changes to vision and cognition.
Methods: This pilot study enrolled adults (≥55 years) who were recruited through pharmacies, retirement residences, and a low-vision optometry clinic. ClereMed is an HTML5 (hypertext markup language) prototype app that simulates medication taking using an iPad, and also provides information on how to improve the accessibility of prescription labels. A paper-based questionnaire included questions on participant demographics, computer literacy, and the Systems Usability Scale (SUS). Cognition was assessed using the Montreal Cognitive Assessment tool, and functional reading ability was measured using the MNRead Acuity Chart. Simulation results were compared with a real-life medication-taking exercise using prescription vials, tablets, and pillboxes.
Results: The 47 participants had a mean age of 76 (SD 11) years, and 60% (28/47) were female. Of the participants, 32% (15/47) did not own a computer or touchscreen device. The mean SUS score was 76/100. ClereMed correctly identified 72% (5/7) of participants with functional reading difficulty and 63% (5/8) who failed a real-life pill-sorting task, but only 21% (6/28) of participants with cognitive impairment. Participants who owned a computer or touchscreen completed ClereMed in a mean time of 26 (SD 16) seconds, compared with 52 (SD 34) seconds for those who did not own a device (P<.001). Those who had difficulty struggled with screen glare, button activation, and the "drag and drop" function.
Conclusions: ClereMed was well accepted by older participants, but it was only moderately accurate for reading ability and not for mild cognitive impairment. Future versions may be most useful as part of a larger medication assessment or as a tool to help family members and caregivers identify individuals with impaired functional reading ability. Future research is needed to improve the sensitivity for measuring cognitive impairment and on the feasibility of implementing a mobile app into pharmacy workflow. PMID:25131813
[Use of hypertext as information and training tools in the prevention of occupational risk].
Franco, G
1998-01-01
Modern medical education is based on a variety of teaching techniques, by means of which individuals learn most effectively. The availability of new technologies, together with the diffusion of personal computers, is favouring the spread of hypertexts through the World Wide Web. This contribution describes two hypertexts ("Human Activities and Health Risk"; "Occupation, Risk and Disease. A Problem-Oriented Hypertext-Tool to Learn Occupational Medicine") and the prototype "Virtual Hospital". Assuming that prevention of health risks is based upon knowledge of those risks, they have been created with the aim of providing users with problem-oriented tools, whose rhetorical aspects (content, information organization, user interface) are analysed. "Human Activities and Health Risk" describes working activities and allows users to recognize health risks. "Occupation, Risk and Disease. A Problem-Oriented Hypertext-Tool to Learn Occupational Medicine" embodies a case report containing the clustered information about the patient and a library of educational material (risk factors, symptoms and signs, organ system diseases, jobs, occupational risk factors, environment-related diseases). The "Virtual Hospital" has been conceived on the assumption that appropriate information can change workers' behaviour in hospital, where health risks are often underestimated. It consists of a variety of structured and unstructured information that can be browsed by users, allowing the discovery of links and providing awareness of the semantic relationships between related information elements (including environment, instruments, drugs, job analysis, situations at risk for health, and preventive means). The "Virtual Hospital" aims to make the understanding of working situations at risk easier and more interesting, stimulating awareness of the relationship between jobs and risks.
Facilitating access to information in large documents with an intelligent hypertext system
NASA Technical Reports Server (NTRS)
Mathe, Nathalie
1993-01-01
Retrieving specific information from large amounts of documentation is not an easy task. It could be facilitated if information relevant in the current problem solving context could be automatically supplied to the user. As a first step towards this goal, we have developed an intelligent hypertext system called CID (Computer Integrated Documentation) and tested it on the Space Station Freedom requirement documents. The CID system enables integration of various technical documents in a hypertext framework and includes an intelligent context-sensitive indexing and retrieval mechanism. This mechanism utilizes on-line user information requirements and relevance feedback either to reinforce current indexing in case of success or to generate new knowledge in case of failure. This allows the CID system to provide helpful responses, based on previous usage of the documentation, and to improve its performance over time.
Development of Clinical Contents Model Markup Language for Electronic Health Records
Yun, Ji-Hyun; Kim, Yoon
2012-01-01
Objectives To develop a dedicated markup language for clinical contents models (CCM) to facilitate the active use of CCM in electronic health record systems. Methods Based on an analysis of the structure and characteristics of CCM in the clinical domain, we manually designed an extensible markup language (XML)-based CCM markup language (CCML) schema. Results CCML faithfully reflects CCM in both syntactic and semantic aspects. As the language is based on XML, it can be expressed and processed by computer systems and can be used in a technology-neutral way. Conclusions CCML has the following strengths: it is machine-readable and highly human-readable, it does not require a dedicated parser, and it can be applied to existing electronic health record systems. PMID:23115739
Intelligent search and retrieval of a large multimedia knowledgebase for the Hubble Space Telescope
NASA Technical Reports Server (NTRS)
Clapis, Paul J.; Byers, William S.
1990-01-01
A document-retrieval assistant (DRA) in a microcomputer format is described which incorporates hypertext and natural language capabilities. Hypertext is used to introduce an intelligent search capability, and the natural-language interface permits access to specific data without the use of keywords. The DRA can be used to access and 'browse' the large multimedia database that is composed of project documentation from the HST.
Variation in Emergency Department vs Internal Medicine Excess Charges in the United States.
Xu, Tim; Park, Angela; Bai, Ge; Joo, Sarah; Hutfless, Susan M; Mehta, Ambar; Anderson, Gerard F; Makary, Martin A
2017-08-01
Uninsured and insured but out-of-network emergency department (ED) patients are often billed hospital chargemaster prices, which exceed amounts typically paid by insurers. To examine the variation in excess charges for services provided by emergency medicine and internal medicine physicians. Retrospective analysis was conducted of professional fee payment claims made by the Centers for Medicare & Medicaid Services for all services provided to Medicare Part B fee-for-service beneficiaries in calendar year 2013. Data analysis was conducted from January 1 to July 31, 2016. Markup ratios for ED and internal medicine professional services, defined as the charges submitted by the hospital divided by the Medicare allowable amount. Our analysis included 12 337 emergency medicine physicians from 2707 hospitals and 57 607 internal medicine physicians from 3669 hospitals in all 50 states. Services provided by emergency medicine physicians had an overall markup ratio of 4.4 (340% excess charges), which was greater than the markup ratio of 2.1 (110% excess charges) for all services performed by internal medicine physicians. Markup ratios for all ED services ranged by hospital from 1.0 to 12.6 (median, 4.2; interquartile range [IQR], 3.3-5.8); markup ratios for all internal medicine services ranged by hospital from 1.0 to 14.1 (median, 2.0; IQR, 1.7-2.5). The median markup ratio by hospital for ED evaluation and management procedure codes varied between 4.0 and 5.0. Among the most common ED services, laceration repair had the highest median markup ratio (7.0); emergency medicine physician review of a head computed tomographic scan had the greatest interhospital variation (range, 1.6-27.7). Across hospitals, markups in the ED were often substantially higher than those in the internal medicine department for the same services. Higher ED markup ratios were associated with hospital for-profit ownership (median, 5.7; IQR, 4.0-7.1), a greater percentage of uninsured patients seen (median, 5.0; IQR, 3.5-6.7 for ≥20% uninsured), and location (median, 5.3; IQR, 3.8-6.8 for the southeastern United States). Across hospitals, there is wide variation in excess charges on ED services, which are often priced higher than internal medicine services. Our results inform policy efforts to protect uninsured and out-of-network patients from highly variable pricing.
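As a worked example of the markup-ratio definition used in this study (the dollar amounts below are invented to reproduce the reported overall ED ratio):

```python
# Markup ratio = charge submitted / Medicare allowable amount.
# Hypothetical figures chosen to match the reported 4.4 ratio.
charge = 440.0
medicare_allowable = 100.0

markup_ratio = charge / medicare_allowable      # 4.4
excess_charge_pct = (markup_ratio - 1) * 100    # 340% excess charges
print(markup_ratio, excess_charge_pct)
```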
ArdenML: The Arden Syntax Markup Language (or Arden Syntax: It's Not Just Text Any More!)
Sailors, R. Matthew
2001-01-01
It is no longer necessary to think of Arden Syntax as simply a text-based knowledge base format. The development of ArdenML (the Arden Syntax Markup Language), an XML-based markup language, allows structured access to most of the maintenance and library categories without the need to write or buy a compiler, and may lead to the development of simple commercial and freeware tools for processing Arden Syntax Medical Logic Modules (MLMs).
A quality assessment tool for markup-based clinical guidelines.
Shalom, Erez; Shahar, Yuval; Taieb-Maimon, Meirav; Lunenfeld, Eitan
2008-11-06
We introduce a tool for quality assessment of procedural and declarative knowledge. We developed this tool for evaluating the specification of markup-based clinical guidelines (GLs). Using this graphical tool, the expert physician and knowledge engineer collaborate to score each of the knowledge roles of the markups against a gold standard, using a predefined scoring scale. The tool enables different users to score the markups simultaneously at different locations.
Changes in latent fingerprint examiners' markup between analysis and comparison.
Ulery, Bradford T; Hicklin, R Austin; Roberts, Maria Antonia; Buscaglia, JoAnn
2015-02-01
After the initial analysis of a latent print, an examiner will sometimes revise the assessment during comparison with an exemplar. Changes between analysis and comparison may indicate that the initial analysis of the latent was inadequate, or that confirmation bias may have affected the comparison. 170 volunteer latent print examiners, each randomly assigned 22 pairs of prints from a pool of 320 total pairs, provided detailed markup documenting their interpretations of the prints and the bases for their comparison conclusions. We describe changes in value assessments and markup of features and clarity. When examiners individualized, they almost always added or deleted minutiae (90.3% of individualizations); every examiner revised at least some markups. For inconclusive and exclusion determinations, changes were less common, and features were added more frequently when the image pair was mated (same source). Even when individualizations were based on eight or fewer corresponding minutiae, in most cases some of those minutiae had been added during comparison. One erroneous individualization was observed: the markup changes were notably extreme, and almost all of the corresponding minutiae had been added during comparison. Latents assessed to be of value for exclusion only (VEO) during analysis were often individualized when compared to a mated exemplar (26%); in our previous work, where examiners were not required to provide markup of features, VEO individualizations were much less common (1.8%). Published by Elsevier Ireland Ltd.
Documenting AUTOGEN and APGEN Model Files
NASA Technical Reports Server (NTRS)
Gladden, Roy E.; Khanampompan, Teerapat; Fisher, Forest W.; DelGuericio, Chris C.
2008-01-01
A computer program called "autogen hypertext map generator" satisfies a need for documenting, and assisting in visualization of and navigation through, model files used in the AUTOGEN and APGEN software mentioned in the two immediately preceding articles. This program parses autogen script files, autogen model files, PERL scripts, and apgen activity-definition files and produces a hypertext map of the files to aid in navigation of the model. The program also provides a facility for adding notes and descriptions beyond what is in the source model represented by the hypertext map. Further, it provides access to a summary of the model through variable, function, subroutine, activity, and resource declarations, as well as full access to the source model and source code. The tool enables easy access to the declarations and the ability to traverse routines and calls while analyzing the model.
Mental Representations Formed From Educational Website Formats
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elizabeth T. Cady; Kimberly R. Raddatz; Tuan Q. Tran
2006-10-01
The increasing popularity of web-based distance education places high demand on distance educators to format web pages to facilitate learning. However, limited guidelines exist regarding appropriate writing styles for web-based distance education. This study investigated the effect of four different writing styles on readers' mental representation of hypertext. Participants studied hypertext written in one of four web-writing styles (concise, scannable, objective, or combined) and were then administered a cued association task intended to measure their mental representations of the hypertext. It is hypothesized that the scannable and combined styles will bias readers to scan rather than elaborately read, which may result in less dense mental representations (as identified through Pathfinder analysis) relative to the objective and concise writing styles. Further, the use of more descriptors in the objective writing style will lead to better integration of ideas and more dense mental representations than the concise writing style.
17 CFR 240.15c2-7 - Identification of quotations.
Code of Federal Regulations, 2010 CFR
2010-04-01
... guarantee of profit, guarantee against loss, commission, markup, markdown, indication of interest ...
XML Based Markup Languages for Specific Domains
NASA Astrophysics Data System (ADS)
Varde, Aparna; Rundensteiner, Elke; Fahrenholz, Sally
A challenging area in web-based support systems is the study of human activities in connection with the web, especially with reference to certain domains. This includes capturing human reasoning in information retrieval, facilitating the exchange of domain-specific knowledge through a common platform, and developing tools for the analysis of data on the web from a domain expert's angle. Among the techniques and standards related to such work is XML, the eXtensible Markup Language. This serves as a medium of communication for storing and publishing textual, numeric and other forms of data seamlessly. XML tag sets are such that they preserve semantics and simplify the understanding of stored information by users. Often domain-specific markup languages are designed using XML, with a user-centric perspective. Standardization bodies and research communities may extend these to include additional semantics of areas within and related to the domain. This chapter outlines the issues to be considered in developing domain-specific markup languages: the motivation for development, the semantic considerations, the syntactic constraints, and other relevant aspects, especially taking into account human factors. Illustrative examples are provided from domains such as Medicine, Finance and Materials Science. Particular emphasis in these examples is on the Materials Markup Language MatML and the semantics of one of its areas, namely, the Heat Treating of Materials. The focus of this chapter, however, is not the design of one particular language but rather the generic issues concerning the development of domain-specific markup languages.
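A minimal sketch of the kind of domain-specific fragment the chapter discusses, in the spirit of MatML's heat-treating semantics (the tag names are simplified stand-ins, not the normative MatML schema):

```python
# Illustrative MatML-like fragment; tags are simplified stand-ins.
import xml.etree.ElementTree as ET

material = ET.Element("Material")
ET.SubElement(material, "Name").text = "AISI 4140 steel"
treatment = ET.SubElement(material, "HeatTreatment")
ET.SubElement(treatment, "Quench", medium="oil").text = "845 C"
ET.SubElement(treatment, "Temper").text = "540 C, 2 h"

print(ET.tostring(material, encoding="unicode"))
```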
Yang, Caijun; Shen, Qian; Cai, Wenfang; Zhu, Wenwen; Li, Zongjie; Wu, Lina; Fang, Yu
2017-02-01
To assess the long-term effects of the introduction of China's zero-markup drug policy on hospitalisation expenditure and hospitalisation expenditure after reimbursement. An interrupted time series was used to evaluate the impact of the zero-markup drug policy on hospitalisation expenditure and hospitalisation expenditure after reimbursement at primary health institutions in Fufeng County of Shaanxi Province, western China. Two regression models were developed. Monthly average hospitalisation expenditure and monthly average hospitalisation expenditure after reimbursement in primary health institutions were analysed, covering the period 2009 through 2013. For monthly average hospitalisation expenditure, the increasing trend slowed after the introduction of the zero-markup drug policy (coefficient = -16.49, P = 0.009). For monthly average hospitalisation expenditure after reimbursement, the increasing trend slowed after the introduction of the zero-markup drug policy (coefficient = -10.84, P = 0.064), and a significant decrease in the intercept was noted after the second intervention, a change in the reimbursement schemes of the new rural cooperative medical insurance (coefficient = -220.64, P < 0.001). A statistically significant absolute decrease in the level or trend of monthly average hospitalisation expenditure and monthly average hospitalisation expenditure after reimbursement was detected after the introduction of the zero-markup drug policy in western China. However, hospitalisation expenditure and hospitalisation expenditure after reimbursement were still increasing. More effective policies are needed to prevent these costs from continuing to rise. © 2016 John Wiley & Sons Ltd.
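A sketch of the segmented-regression form typically used for such an interrupted time series; the data, change point, and coefficients below are invented, and the study's actual models may differ:

```python
# Interrupted time series: level and slope change at the policy start.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
months = np.arange(60)                       # monthly series, 2009-2013
policy = (months >= 24).astype(float)        # zero-markup policy begins
time_since = np.clip(months - 24, 0, None)   # post-policy slope term

y = 900 + 15 * months - 16.5 * time_since + rng.normal(0, 20, 60)
X = sm.add_constant(np.column_stack([months, policy, time_since]))

fit = sm.OLS(y, X).fit()
print(fit.params)  # intercept, baseline trend, level change, trend change
```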
Samwald, Matthias; Lim, Ernest; Masiar, Peter; Marenco, Luis; Chen, Huajun; Morse, Thomas; Mutalik, Pradeep; Shepherd, Gordon; Miller, Perry; Cheung, Kei-Hoi
2013-01-01
The amount of biomedical data available in Semantic Web formats has been rapidly growing in recent years. While these formats are machine-friendly, user-friendly web interfaces allowing easy querying of these data are typically lacking. We present “Entrez Neuron”, a pilot neuron-centric interface that allows for keyword-based queries against a coherent repository of OWL ontologies. These ontologies describe neuronal structures, physiology, mathematical models and microscopy images. The returned query results are organized hierarchically according to brain architecture. Where possible, the application makes use of entities from the Open Biomedical Ontologies (OBO) and the ‘HCLS knowledgebase’ developed by the W3C Interest Group for Health Care and Life Science. It makes use of the emerging RDFa standard to embed ontology fragments and semantic annotations within its HTML-based user interface. The application and its underlying ontologies demonstrate how Semantic Web technologies can be used for information integration within a curated information repository and between curated information repositories. They also demonstrate how information integration can be accomplished on the client side, through simple copying and pasting of portions of documents that contain RDFa markup. PMID:19745321
Nuclear data made easily accessible through the Notre Dame Nuclear Database
NASA Astrophysics Data System (ADS)
Khouw, Timothy; Lee, Kevin; Fasano, Patrick; Mumpower, Matthew; Aprahamian, Ani
2014-09-01
In 1994, the National Nuclear Data Center (NNDC) revolutionized nuclear research by providing a colorful, clickable, searchable database over the internet. Over the last twenty years, web technology has evolved dramatically. Our project, the Notre Dame Nuclear Database, aims to provide a more comprehensive and broadly searchable interactive body of data. The database can be searched through an array of filters, including metadata such as the facility where a measurement was made, the author(s), and the date of publication for the datum of interest. The user interface takes full advantage of HTML, a web markup language; CSS (cascading style sheets), which defines the aesthetics of the website; and JavaScript, a language that can process complex data. A command-line interface is also supported that interacts with the database directly on a user's local machine, providing single-command access to data. This is possible through the use of a standardized API (application programming interface) that relies upon well-defined filtering variables to produce customized search results. We offer an innovative chart of nuclides utilizing scalable vector graphics (SVG) to deliver users an unsurpassed level of interactivity supported on all computers and mobile devices. We will present a functional demo of our database at the conference.
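A hypothetical illustration of the single-command, filter-based access the abstract describes; the endpoint URL and parameter names are invented:

```python
# Query the database's REST API with metadata filters (names invented).
import requests

resp = requests.get(
    "https://example.edu/ndnd/api/data",
    params={"nuclide": "26Al", "author": "Aprahamian", "year_min": 2010},
    timeout=10,
)
for record in resp.json():
    print(record)
```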
Effects of small particle numbers on long-term behaviour in discrete biochemical systems
Ibrahim, Bashar; Dittrich, Peter
2014-01-01
Motivation: The functioning of many biological processes depends on the appearance of only a small number of a single molecular species. Additionally, the observation of molecular crowding leads to the insight that even a high number of copies of species do not guarantee their interaction. How single particles contribute to stabilizing biological systems is not well understood yet. Hence, we aim at determining the influence of single molecules on the long-term behaviour of biological systems, i.e. whether they can reach a steady state. Results: We provide theoretical considerations and a tool to analyse Systems Biology Markup Language models for the possibility to stabilize because of the described effects. The theory is an extension of chemical organization theory, which we called discrete chemical organization theory. Furthermore we scanned the BioModels Database for the occurrence of discrete chemical organizations. To exemplify our method, we describe an application to the Template model of the mitotic spindle assembly checkpoint mechanism. Availability and implementation: http://www.biosys.uni-jena.de/Services.html. Contact: bashar.ibrahim@uni-jena.de or dittrich@minet.uni-jena.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25161236
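For orientation, a minimal sketch of reading a Systems Biology Markup Language model of the kind scanned here, assuming the python-libsbml package and a local model file:

```python
# Load an SBML model and list its species (requires python-libsbml).
import libsbml

doc = libsbml.readSBML("model.xml")   # e.g. a BioModels Database entry
model = doc.getModel()
for i in range(model.getNumSpecies()):
    s = model.getSpecies(i)
    print(s.getId(), s.getInitialAmount())
```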
SuML: A Survey Markup Language for Generalized Survey Encoding
Barclay, MW; Lober, WB; Karras, BT
2002-01-01
There is a need in clinical and research settings for a sophisticated, generalized, web-based survey tool that supports complex logic, separation of content and presentation, and computable guidelines. There are many commercial and open-source survey packages available that provide simple logic; few provide sophistication beyond “goto” statements; none support the use of guidelines. These tools are driven by databases, static web pages, and structured documents using markup languages such as the eXtensible Markup Language (XML). We propose a generalized, guideline-aware language and an implementation architecture using open-source standards.
Palafox, Benjamin; Patouillard, Edith; Tougher, Sarah; Goodman, Catherine; Hanson, Kara; Kleinschmidt, Immo; Torres Rueda, Sergio; Kiefer, Sabine; O’Connell, Kate; Zinsou, Cyprien; Phok, Sochea; Akulayi, Louis; Arogundade, Ekundayo; Buyungo, Peter; Mpasela, Felton; Poyer, Stephen; Chavasse, Desmond
2016-01-01
The private for-profit sector is an important source of treatment for malaria. However, private patients face high prices for the recommended treatment for uncomplicated malaria, artemisinin combination therapies (ACTs), which makes them more likely to receive cheaper, less effective non-artemisinin therapies (nATs). This study seeks to better understand consumer antimalarial prices by documenting and exploring the pricing behaviour of retailers and wholesalers. Using data collected in 2009–10, we present survey estimates of antimalarial retail prices, and wholesale- and retail-level price mark-ups from six countries (Benin, Cambodia, the Democratic Republic of Congo, Nigeria, Uganda and Zambia), along with qualitative findings on factors affecting pricing decisions. Retail prices were lowest for nATs, followed by ACTs and artemisinin monotherapies (AMTs). Retailers applied the highest percentage mark-ups on nATs (range: 40% in Nigeria to 100% in Cambodia and Zambia), whereas mark-ups on ACTs (range: 22% in Nigeria to 71% in Zambia) and AMTs (range: 22% in Nigeria to 50% in Uganda) were similar in magnitude, but lower than those applied to nATs. Wholesale mark-ups were generally lower than those at retail level, and were similar across antimalarial categories in most countries. When setting prices wholesalers and retailers commonly considered supplier prices, prevailing market prices, product availability, product characteristics and the costs related to transporting goods, staff salaries and maintaining a property. Price discounts were regularly used to encourage sales and were sometimes used by wholesalers to reward long-term customers. Pricing constraints existed only in Benin where wholesaler and retailer mark-ups are regulated; however, unlicensed drug vendors based in open-air markets did not adhere to the pricing regime. These findings indicate that mark-ups on antimalarials are reasonable. Therefore, improving ACT affordability would be most readily achieved by interventions that reduce commodity prices for retailers, such as ACT subsidies, pooled purchasing mechanisms and cost-effective strategies to increase the distribution coverage area of wholesalers. PMID:25944705
Descriptive Metadata: Emerging Standards.
ERIC Educational Resources Information Center
Ahronheim, Judith R.
1998-01-01
Discusses metadata, digital resources, cross-disciplinary activity, and standards. Highlights include Standard Generalized Markup Language (SGML); Extensible Markup Language (XML); Dublin Core; Resource Description Framework (RDF); Text Encoding Initiative (TEI); Encoded Archival Description (EAD); art and cultural-heritage metadata initiatives;…
Looking Tasks Online: Utilizing Webcams to Collect Video Data from Home
Semmelmann, Kilian; Hönekopp, Astrid; Weigelt, Sarah
2017-01-01
Online experimentation is emerging as a new methodology within classical data acquisition in psychology. It allows for easy, fast, broad, and cheap data conduction from the comfort of people’s homes. To add another method to the array of available tools, here we used recent developments in web technology to investigate the technical feasibility of online HyperText Markup Language-5/JavaScript-based video data recording. We employed a preferential looking task with children between 4 and 24 months. Parents and their children participated from home through a three-stage process: First, interested adults registered and took pictures through a webcam-based photo application. In the second step, we edited the pictures and integrated them into the design. Lastly, participants returned to the website and the video data acquisition took place through their webcam. In sum, we were able to create and employ the video recording application with participants as young as 4 months old. Quality-wise, no participant had to be removed due to the framerate or quality of videos and only 7% of data was excluded due to behavioral factors (lack of concentration). Results-wise, interrater reliability of rated looking side (left/right) showed a high agreement between raters, Fleiss’ Kappa, κ = 0.97, which can be translated to sufficient data quality for further analyses. With regard to on-/off-screen attention attribution, we found that children lost interest after about 10 s after trial onset using a static image presentation or 60 s total experimental time. Taken together, we were able to show that online video data recording is possible and viable for developmental psychology and beyond. PMID:28955284
Teaching physiology and the World Wide Web: electrochemistry and electrophysiology on the Internet.
Dwyer, T M; Fleming, J; Randall, J E; Coleman, T G
1997-12-01
Students seek active learning experiences that can rapidly impart relevant information in the most convenient way possible. Computer-assisted education can now use the resources of the World Wide Web to convey the important characteristics of events as elemental as the physical properties of osmotically active particles in the cell and as complex as the nerve action potential or the integrative behavior of the intact organism. We have designed laboratory exercises that introduce first-year medical students to membrane and action potentials, as well as the more complex example of integrative physiology, using the dynamic properties of computer simulations. Two specific examples are presented. The first presents the physical laws that apply to osmotic, chemical, and electrical gradients, leading to the development of the concept of membrane potentials; this module concludes with a simulation of the ability of the sodium-potassium pump to establish chemical gradients and maintain cell volume. The second module simulates the action potential according to the Hodgkin-Huxley model, illustrating the concepts of threshold, inactivation, refractory period, and accommodation. Students can access these resources during the scheduled laboratories or on their own time via our Web site on the Internet (http://phys-main.umsmed.edu) by using the World Wide Web protocol. Accurate version control is possible because one valid, but easily edited, copy of the labs exists at the Web site. A common graphical interface is possible through the use of the Hypertext Markup Language. Platform independence is possible through the logical and arithmetic calculations inherent to graphical browsers and the JavaScript computer language. The initial success of this program indicates that medical education can be very effective both by the use of accurate simulations and by the existence of a universally accessible Internet resource.
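As a worked example of the membrane-potential material these modules cover, here is the Nernst equilibrium potential for potassium, computed with typical textbook concentrations:

```python
# Nernst potential: E = (RT/zF) * ln([K+]out / [K+]in), at 37 C.
import math

R, T, F, z = 8.314, 310.0, 96485.0, 1
K_out, K_in = 5.0, 140.0   # mmol/L, typical textbook values

E_K = (R * T) / (z * F) * math.log(K_out / K_in)
print(f"E_K = {E_K * 1000:.1f} mV")   # about -89 mV
```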
Improving Interoperability by Incorporating UnitsML Into Markup Languages
Celebi, Ismet; Dragoset, Robert A.; Olsen, Karen J.; Schaefer, Reinhold; Kramer, Gary W.
2010-01-01
Maintaining the integrity of analytical data over time is a challenge. Years ago, data were recorded on paper that was pasted directly into a laboratory notebook. The digital age has made maintaining the integrity of data harder. Nowadays, digitized analytical data are often separated from information about how the sample was collected and prepared for analysis and how the data were acquired. The data are stored on digital media, while the related information about the data may be written in a paper notebook or stored separately in other digital files. Sometimes the connection between this “scientific meta-data” and the analytical data is lost, rendering the spectrum or chromatogram useless. We have been working with ASTM Subcommittee E13.15 on Analytical Data to create the Analytical Information Markup Language or AnIML—a new way to interchange and store spectroscopy and chromatography data based on XML (Extensible Markup Language). XML is a language for describing what data are by enclosing them in computer-useable tags. Recording the units associated with the analytical data and metadata is an essential issue for any data representation scheme that must be addressed by all domain-specific markup languages. As scientific markup languages proliferate, it is very desirable to have a single scheme for handling units to facilitate moving information between different data domains. At NIST, we have been developing a general markup language just for units that we call UnitsML. This presentation will describe how UnitsML is used and how it is being incorporated into AnIML. PMID:27134778
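A minimal sketch of the idea of embedding a shared unit vocabulary inside another markup language; the tag names are simplified stand-ins, not the normative UnitsML or AnIML schemas:

```python
# Embed a UnitsML-style unit inside an AnIML-like result element.
import xml.etree.ElementTree as ET

result = ET.Element("Result", name="absorbance_peak")
ET.SubElement(result, "Value").text = "1.27"
unit = ET.SubElement(result, "Unit", symbol="AU")
ET.SubElement(unit, "Name").text = "absorbance unit"

print(ET.tostring(result, encoding="unicode"))
```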
The National Cancer Informatics Program (NCIP) Annotation and Image Markup (AIM) Foundation model.
Mongkolwat, Pattanasak; Kleper, Vladimir; Talbot, Skip; Rubin, Daniel
2014-12-01
Knowledge contained within in vivo imaging annotated by human experts or computer programs is typically stored as unstructured text and separated from other associated information. The National Cancer Informatics Program (NCIP) Annotation and Image Markup (AIM) Foundation information model is an evolution of the National Institute of Health's (NIH) National Cancer Institute's (NCI) Cancer Bioinformatics Grid (caBIG®) AIM model. The model applies to various image types created by various techniques and disciplines. It has evolved in response to the feedback and changing demands from the imaging community at NCI. The foundation model serves as a base for other imaging disciplines that want to extend the type of information the model collects. The model captures physical entities and their characteristics, imaging observation entities and their characteristics, markups (two- and three-dimensional), AIM statements, calculations, image source, inferences, annotation role, task context or workflow, audit trail, AIM creator details, equipment used to create AIM instances, subject demographics, and adjudication observations. An AIM instance can be stored as a Digital Imaging and Communications in Medicine (DICOM) structured reporting (SR) object or Extensible Markup Language (XML) document for further processing and analysis. An AIM instance consists of one or more annotations and associated markups of a single finding along with other ancillary information in the AIM model. An annotation describes information about the meaning of pixel data in an image. A markup is a graphical drawing placed on the image that depicts a region of interest. This paper describes fundamental AIM concepts and how to use and extend AIM for various imaging disciplines.
NAVAIR Portable Source Initiative (NPSI) Data Preparation Standard V2.2: NPSI DPS V2.2
2012-05-22
Keyhole Markup Language (file format); KMZ ... required for the geo-specific texture may differ within the database depending on the mission parameters. When operating close to the ground (e.g
TumorML: Concept and requirements of an in silico cancer modelling markup language.
Johnson, David; Cooper, Jonathan; McKeever, Steve
2011-01-01
This paper describes the initial groundwork carried out as part of the European Commission-funded Transatlantic Tumor Model Repositories project to develop a new markup language for computational cancer modelling, TumorML. We describe the motivations for such a language, arguing that current state-of-the-art biomodelling languages are not suited to the cancer modelling domain. We go on to describe the work required to develop TumorML: the conceptual design and the existing markup languages that will be used to compose the language specification.
SGML Authoring Tools for Technical Communication.
ERIC Educational Resources Information Center
Davidson, W. J.
1993-01-01
Explains that structured authoring systems designed for the creation of generically encoded, reusable information offer context-sensitive application of markup, markup suppression, cueing and automated formatting, structural navigation, and self-validation features. Maintains that they are a real alternative to conventional publishing systems. (SR)
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-10
... rates. In effect, the Exchange is obtaining wholesale rates from the carriers and then charging a markup... a markup to allow the Exchange to cover its administrative costs and to earn a profit on its...
ERIC Educational Resources Information Center
Ensign, Chet
1993-01-01
Describes how the change to Standard Generalized Markup Language at Information Builders began with the use of SGML-like markup in text because it solved a specific problem. Notes that many additional unexpected benefits led to an investigation of converting to formal SGML-based electronic publishing. (SR)
Hypertext and hypermedia systems in information retrieval
NASA Technical Reports Server (NTRS)
Kaye, K. M.; Kuhn, A. D.
1992-01-01
This paper opens with a brief history of hypertext and hypermedia in the context of information management during the 'information age.' Relevant terms are defined and the approach of the paper is explained. Linear and hypermedia information access methods are contrasted. A discussion of hyperprogramming in the handling of complex scientific and technical information follows. A selection of innovative hypermedia systems is discussed. An analysis of the Clinical Practice Library of Medicine NASA STI Program hypermedia application is presented. The paper concludes with a discussion of the NASA STI Program's future hypermedia project plans.
Intelligent hypertext manual development for the Space Shuttle hazardous gas detection system
NASA Technical Reports Server (NTRS)
Lo, Ching F.; Hoyt, W. Andes
1989-01-01
This research is designed to utilize artificial intelligence (AI) technology to increase the efficiency of personnel involved with monitoring the Space Shuttle hazardous gas detection systems at the Marshall Space Flight Center. The objective is to create a computerized service manual in the form of a hypertext and expert system that stores experts' knowledge and experience. The resulting Intelligent Manual will assist the user in interpreting data in a timely manner, identifying possible faults, locating the applicable documentation efficiently, training inexperienced personnel effectively, and updating the manual as required.
Using Hypertext to Facilitate Information Sharing in Biomedical Research Groups
Chaney, R. Jesse; Shipman, Frank M.; Gorry, G. Anthony
1989-01-01
As part of our effort to create an Integrated Academic Information Management System at Baylor College of Medicine, we are developing information technology to support the efforts of scientific work groups. Many of our ideas in this regard are embodied in a system called the Virtual Notebook, which is intended to facilitate information sharing and management in such groups. Here we discuss the foundations of that system: a hypertext system that we have developed using a relational database and the distributable interface that we have written in the X Window System.
Dugan, J M; Berrios, D C; Liu, X; Kim, D K; Kaizer, H; Fagan, L M
1999-01-01
Our group has built an information retrieval system based on a complex semantic markup of medical textbooks. We describe the construction of a set of web-based knowledge-acquisition tools that expedites the collection and maintenance of the concepts required for text markup and the search interface required for information retrieval from the marked text. In the text markup system, domain experts (DEs) identify sections of text that contain one or more elements from a finite set of concepts. End users can then query the text using a predefined set of questions, each of which identifies a subset of complementary concepts. The search process matches that subset of concepts to relevant points in the text. The current process requires that the DE invest significant time to generate the required concepts and questions. We propose a new system--called ACQUIRE (Acquisition of Concepts and Queries in an Integrated Retrieval Environment)--that assists a DE in two essential tasks in the text-markup process. First, it helps her to develop, edit, and maintain the concept model: the set of concepts with which she marks the text. Second, ACQUIRE helps her to develop a query model: the set of specific questions that end users can later use to search the marked text. The DE incorporates concepts from the concept model when she creates the questions in the query model. The major benefit of the ACQUIRE system is a reduction in the time and effort required for the text-markup process. We compared the process of concept- and query-model creation using ACQUIRE to the process used in previous work by rebuilding two existing models that we previously constructed manually. We observed a significant decrease in the time required to build and maintain the concept and query models.
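A minimal sketch of the concept-model/query-model retrieval scheme described above; the concepts, questions, and passage identifiers are invented:

```python
# Passages are marked with concepts; a question maps to a concept subset,
# and retrieval returns passages carrying every concept in that subset.
concept_index = {
    "fever":       {"passage-12", "passage-40"},
    "antibiotics": {"passage-40", "passage-77"},
}
query_model = {"How is febrile illness treated?": ["fever", "antibiotics"]}

question = "How is febrile illness treated?"
hits = set.intersection(
    *(concept_index[c] for c in query_model[question])
)
print(hits)   # {'passage-40'}
```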
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-15
..., thereby maintaining the $0.0002 markup that exists in the current fee schedule. \\4\\ SR-PHLX-2011-11... recent pricing changes by that venue, and allows NASDAQ to maintain the current markup of $0.0002 per...
XML and E-Journals: The State of Play.
ERIC Educational Resources Information Center
Wusteman, Judith
2003-01-01
Discusses the introduction of the use of XML (Extensible Markup Language) in publishing electronic journals. Topics include standards, including DTDs (Document Type Definition), or document type definitions; aggregator requirements; SGML (Standard Generalized Markup Language); benefits of XML for e-journals; XML metadata; the possibility of…
Mongkolwat, Pattanasak; Channin, David S; Kleper, Vladimir; Rubin, Daniel L
2012-01-01
In a routine clinical environment or clinical trial, a case report form or structured reporting template can be used to quickly generate uniform and consistent reports. Annotation and image markup (AIM), a project supported by the National Cancer Institute's cancer biomedical informatics grid, can be used to collect information for a case report form or structured reporting template. AIM is designed to store, in a single information source, (a) the description of pixel data with use of markups or graphical drawings placed on the image, (b) calculation results (which may or may not be directly related to the markups), and (c) supplemental information. To facilitate the creation of AIM annotations with data entry templates, an AIM template schema and an open-source template creation application were developed to assist clinicians, image researchers, and designers of clinical trials to quickly create a set of data collection items, thereby ultimately making image information more readily accessible.
Semi-automated XML markup of biosystematic legacy literature with the GoldenGATE editor.
Sautter, Guido; Böhm, Klemens; Agosti, Donat
2007-01-01
Today, digitization of legacy literature is a big issue. This also applies to the domain of biosystematics, where the process has just started. Digitized biosystematics literature requires very precise and fine-grained markup in order to be useful for detailed search, data linkage and mining. However, manual markup on the sentence level and below is cumbersome and time-consuming. In this paper, we present and evaluate the GoldenGATE editor, which is designed for the special needs of marking up OCR output with XML. It is built to support the user in this process as far as possible: its functionality ranges from easy, intuitive tagging through markup conversion to dynamic binding of configurable plug-ins provided by third parties. Our evaluation shows that marking up an OCR document using GoldenGATE is three to four times faster than with an off-the-shelf XML editor like XML-Spy. Using domain-specific NLP-based plug-ins, these gains are even higher.
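As an illustration, a toy version of such fine-grained markup of OCR text; the gazetteer and the <taxonName> tag are invented, and GoldenGATE itself uses configurable NLP plug-ins rather than this naive rule:

```python
# Wrap known genus + species pairs in an XML tag (toy gazetteer approach).
import re

genera = {"Formica", "Lasius"}   # invented mini-gazetteer
ocr_text = "We collected workers of Formica fusca in July."

def tag(m):
    full, genus = m.group(0), m.group(1)
    return f"<taxonName>{full}</taxonName>" if genus in genera else full

print(re.sub(r"\b([A-Z][a-z]+) ([a-z]+)\b", tag, ocr_text))
```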
Symmetric Key Services Markup Language (SKSML)
NASA Astrophysics Data System (ADS)
Noor, Arshad
Symmetric Key Services Markup Language (SKSML) is the eXtensible Markup Language (XML) being standardized by the OASIS Enterprise Key Management Infrastructure Technical Committee for requesting and receiving symmetric encryption cryptographic keys within a Symmetric Key Management System (SKMS). This protocol is designed to be used between clients and servers within an Enterprise Key Management Infrastructure (EKMI) to secure data, independent of the application and platform. Building on many security standards such as XML Signature, XML Encryption, Web Services Security and PKI, SKSML provides standards-based capability to allow any application to use symmetric encryption keys, while maintaining centralized control. This article describes the SKSML protocol and its capabilities.
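A hypothetical sketch of what a symmetric-key request might look like; the element names are simplified and not the normative OASIS SKSML schema:

```python
# Build a simplified SKSML-style key request (element names invented).
import xml.etree.ElementTree as ET

req = ET.Element("SymkeyRequest")
ET.SubElement(req, "GlobalKeyID").text = "10514-0-0"   # invented ID
ET.SubElement(req, "KeyClass").text = "HR-DATA"        # invented class

print(ET.tostring(req, encoding="unicode"))
```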
The "New Oxford English Dictionary" Project.
ERIC Educational Resources Information Center
Fawcett, Heather
1993-01-01
Describes the conversion of the 22,000-page Oxford English Dictionary to an electronic version incorporating a modified Standard Generalized Markup Language (SGML) syntax. Explains that the database designers chose structured markup because it supports users' data searching needs, allows textual components to be extracted or modified, and allows…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-10
... believes that it is reasonable to charge a $0.0001 per share markup on such routed orders as a means of..., and accordingly, it is equitable for NASDAQ to charge members a markup for making use of NASDAQ's...
A hypertext system that learns from user feedback
NASA Technical Reports Server (NTRS)
Mathe, Nathalie
1994-01-01
Retrieving specific information from large amounts of documentation is not an easy task. It could be facilitated if information relevant in the current problem solving context could be automatically supplied to the user. As a first step towards this goal, we have developed an intelligent hypertext system called CID (Computer Integrated Documentation). Besides providing a hypertext interface for browsing large documents, the CID system automatically acquires and reuses the context in which previous searches were appropriate. This mechanism utilizes on-line user information requirements and relevance feedback either to reinforce current indexing in case of success or to generate new knowledge in case of failure. Thus, the user continually augments and refines the intelligence of the retrieval system. This allows the CID system to provide helpful responses, based on previous usage of the documentation, and to improve its performance over time. We successfully tested the CID system with users of the Space Station Freedom requirements documents. We are currently extending CID to other application domains (Space Shuttle operations documents, airplane maintenance manuals, and on-line training). We are also exploring the potential commercialization of this technique.
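A minimal sketch of the reinforcement idea described above (the data structures are invented): each success strengthens the link between a query context and a document, and each failure weakens it:

```python
# Relevance feedback: nudge a query-to-document link weight toward 1 on
# success and toward 0 on failure (simple exponential update).
index = {("power subsystem", "doc-14"): 0.5}

def feedback(query, doc, relevant, rate=0.2):
    w = index.get((query, doc), 0.5)
    target = 1.0 if relevant else 0.0
    index[(query, doc)] = w + rate * (target - w)

feedback("power subsystem", "doc-14", relevant=True)
print(index)   # weight moves from 0.50 to 0.60
```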
A general UNIX interface for biocomputing and network information retrieval software.
Kiong, B K; Tan, T W
1993-10-01
We describe a UNIX program, HYBROW, which can integrate without modification a wide range of UNIX biocomputing and network information retrieval software. HYBROW works in conjunction with a separate set of ASCII files containing embedded hypertext-like links. The program operates like a hypertext browser featuring five basic links: file link, execute-only link, execute-display link, directory-browse link and field-filling link. Useful features of the interface may be developed using combinations of these links with simple shell scripts and examples of these are briefly described. The system manager who supports biocomputing users should find the program easy to maintain, and useful in assisting new and infrequent users; it is also simple to incorporate new programs. Moreover, the individual user can customize the interface, create dynamic menus, hypertext a document, invoke shell scripts and new programs simply with a basic understanding of the UNIX operating system and any text editor. This program was written in C language and uses the UNIX curses and termcap libraries. It is freely available as a tar compressed file (by anonymous FTP from nuscc.nus.sg).
NASA Technical Reports Server (NTRS)
Jackson, Bruce
2006-01-01
DAVEtools is a set of Java archives that embodies tools for manipulating flight-dynamics models that have been encoded in the dynamic aerospace vehicle exchange markup language (DAVE-ML). [DAVE-ML is an application of the Extensible Markup Language (XML) for encoding complete computational models of the dynamics of aircraft and spacecraft.]
An Introduction to the Resource Description Framework.
ERIC Educational Resources Information Center
Miller, Eric
1998-01-01
Explains the Resource Description Framework (RDF), an infrastructure developed under the World Wide Web Consortium that enables the encoding, exchange, and reuse of structured metadata. It is an application of Extended Markup Language (XML), which is a subset of Standard Generalized Markup Language (SGML), and helps with expressing semantics.…
Ong, Edison; Xiang, Zuoshuang; Zhao, Bin; Liu, Yue; Lin, Yu; Zheng, Jie; Mungall, Chris; Courtot, Mélanie; Ruttenberg, Alan; He, Yongqun
2017-01-01
Linked Data (LD) aims to achieve interconnected data by representing entities using Unified Resource Identifiers (URIs), and sharing information using Resource Description Frameworks (RDFs) and HTTP. Ontologies, which logically represent entities and relations in specific domains, are the basis of LD. Ontobee (http://www.ontobee.org/) is a linked ontology data server that stores ontology information using RDF triple store technology and supports query, visualization and linkage of ontology terms. Ontobee is also the default linked data server for publishing and browsing biomedical ontologies in the Open Biological Ontology (OBO) Foundry (http://obofoundry.org) library. Ontobee currently hosts more than 180 ontologies (including 131 OBO Foundry Library ontologies) with over four million terms. Ontobee provides a user-friendly web interface for querying and visualizing the details and hierarchy of a specific ontology term. Using the eXtensible Stylesheet Language Transformation (XSLT) technology, Ontobee is able to dereference a single ontology term URI, and then output RDF/eXtensible Markup Language (XML) for computer processing or display the HTML information on a web browser for human users. Statistics and detailed information are generated and displayed for each ontology listed in Ontobee. In addition, a SPARQL web interface is provided for custom advanced SPARQL queries of one or multiple ontologies. PMID:27733503
Resolving Controlled Vocabulary in DITA Markup: A Case Example in Agroforestry
ERIC Educational Resources Information Center
Zschocke, Thomas
2012-01-01
Purpose: This paper aims to address the issue of matching controlled vocabulary on agroforestry from knowledge organization systems (KOS) and incorporating these terms in DITA markup. The paper has been selected for an extended version from MTSR'11. Design/methodology/approach: After a general description of the steps taken to harmonize controlled…
Developing a Markup Language for Encoding Graphic Content in Plan Documents
ERIC Educational Resources Information Center
Li, Jinghuan
2009-01-01
While deliberating and making decisions, participants in urban development processes need easy access to the pertinent content scattered among different plans. A Planning Markup Language (PML) has been proposed to represent the underlying structure of plans in an XML-compliant way. However, PML currently covers only textual information and lacks…
ERIC Educational Resources Information Center
Chang, May
2000-01-01
Describes the development of electronic finding aids for archives at the University of Illinois, Urbana-Champaign that used XML (extensible markup language) and EAD (encoded archival description) to enable more flexible information management and retrieval than using MARC or a relational database management system. EAD template is appended.…
Automated Text Markup for Information Retrieval from an Electronic Textbook of Infectious Disease
Berrios, Daniel C.; Kehler, Andrew; Kim, David K.; Yu, Victor L.; Fagan, Lawrence M.
1998-01-01
The information needs of practicing clinicians frequently require textbook or journal searches. Making these sources available in electronic form improves the speed of these searches, but precision (i.e., the fraction of relevant to total documents retrieved) remains low. Improving the traditional keyword search by transforming search terms into canonical concepts does not improve search precision greatly. Kim et al. have designed and built a prototype system (MYCIN II) for computer-based information retrieval from a forthcoming electronic textbook of infectious disease. The system requires manual indexing by experts in the form of complex text markup. However, this mark-up process is time consuming (about 3 person-hours to generate, review, and transcribe the index for each of 218 chapters). We have designed and implemented a system to semiautomate the markup process. The system, information extraction for semiautomated indexing of documents (ISAID), uses query models and existing information-extraction tools to provide support for any user, including the author of the source material, to mark up tertiary information sources quickly and accurately.
Dugan, J. M.; Berrios, D. C.; Liu, X.; Kim, D. K.; Kaizer, H.; Fagan, L. M.
1999-01-01
Our group has built an information retrieval system based on a complex semantic markup of medical textbooks. We describe the construction of a set of web-based knowledge-acquisition tools that expedites the collection and maintenance of the concepts required for text markup and the search interface required for information retrieval from the marked text. In the text markup system, domain experts (DEs) identify sections of text that contain one or more elements from a finite set of concepts. End users can then query the text using a predefined set of questions, each of which identifies a subset of complementary concepts. The search process matches that subset of concepts to relevant points in the text. The current process requires that the DE invest significant time to generate the required concepts and questions. We propose a new system--called ACQUIRE (Acquisition of Concepts and Queries in an Integrated Retrieval Environment)--that assists a DE in two essential tasks in the text-markup process. First, it helps her to develop, edit, and maintain the concept model: the set of concepts with which she marks the text. Second, ACQUIRE helps her to develop a query model: the set of specific questions that end users can later use to search the marked text. The DE incorporates concepts from the concept model when she creates the questions in the query model. The major benefit of the ACQUIRE system is a reduction in the time and effort required for the text-markup process. We compared the process of concept- and query-model creation using ACQUIRE to the process used in previous work by rebuilding two existing models that we previously constructed manually. We observed a significant decrease in the time required to build and maintain the concept and query models. Images Figure 1 Figure 2 Figure 4 Figure 5 PMID:10566457
Modularization and Structured Markup for Learning Content in an Academic Environment
ERIC Educational Resources Information Center
Schluep, Samuel; Bettoni, Marco; Schar, Sissel Guttormsen
2006-01-01
This article aims to present a flexible component model for modular, web-based learning content, and a simple structured markup schema for the separation of content and presentation. The article will also contain an overview of the dynamic Learning Content Management System (dLCMS) project, which implements these concepts. Content authors are a…
The Adoption of Mark-Up Tools in an Interactive e-Textbook Reader
ERIC Educational Resources Information Center
Van Horne, Sam; Russell, Jae-eun; Schuh, Kathy L.
2016-01-01
Researchers have more often examined whether students prefer using an e-textbook over a paper textbook or whether e-textbooks provide a better resource for learning than paper textbooks, but students' adoption of mark-up tools has remained relatively unexamined. Drawing on the concept of Innovation Diffusion Theory, we used educational data mining…
A methodology for evaluation of a markup-based specification of clinical guidelines.
Shalom, Erez; Shahar, Yuval; Taieb-Maimon, Meirav; Lunenfeld, Eitan
2008-11-06
We introduce a three-phase, nine-step methodology for specification of clinical guidelines (GLs) by expert physicians, clinical editors, and knowledge engineers, and for quantitative evaluation of the specification's quality. We applied this methodology to a particular framework for incremental GL structuring (mark-up) and to GLs in three clinical domains with encouraging results.
ERIC Educational Resources Information Center
Battalio, John T.
2002-01-01
Describes the influence that Extensible Markup Language (XML) will have on the software documentation process and subsequently on the curricula of advanced undergraduate and master's programs in technical communication. Recommends how curricula of advanced undergraduate and master's programs in technical communication ought to change in order to…
Astronomical Instrumentation System Markup Language
NASA Astrophysics Data System (ADS)
Goldbaum, Jesse M.
2016-05-01
The Astronomical Instrumentation System Markup Language (AISML) is an Extensible Markup Language (XML) based file format for maintaining and exchanging information about astronomical instrumentation. The factors behind the need for an AISML are first discussed followed by the reasons why XML was chosen as the format. Next it's shown how XML also provides the framework for a more precise definition of an astronomical instrument and how these instruments can be combined to form an Astronomical Instrumentation System (AIS). AISML files for several instruments as well as one for a sample AIS are provided. The files demonstrate how AISML can be utilized for various tasks from web page generation and programming interface to instrument maintenance and quality management. The advantages of widespread adoption of AISML are discussed.
Navigation in large information spaces represented as hypertext: A review of the literature
NASA Technical Reports Server (NTRS)
Brown, Marcus
1990-01-01
The problem addressed is the failure of information-space navigation tools when the space grows to large. The basic goal is to provide the power of the hypertext interface in such a way as to be most easily comprehensible to the user. It was determined that the optimal structure for information is an overlapping, simplified hierarchy. The hierarchical structure should be made obvious to the user, and many of the non-hierarchical links in the information space should either by eliminated, or should be de-emphasized so that the novice user is not confused by them. Only one of the hierarchies should be very simple.
NASA Astrophysics Data System (ADS)
Alturki, Uthman T.
The goal of this research was to research, design, and develop a hypertext program for students who study biology. The Ecology Hypertext Program was developed using Research and Development (R&D) methodology. The purpose of this study was to place the final "product", a CD-ROM for learning biology concepts, in the hands of teachers and students to help them in learning and teaching process. The product was created through a cycle of literature review, needs assessment, development, and a cycle of field tests and revisions. I applied the ten steps of R&D process suggested by Borg and Gall (1989) which, consisted of: (1) Literature review, (2) Needs assessment, (3) Planning, (4) Develop preliminary product, (5) Preliminary field-testing, (6) Preliminary revision, (7) Main field-testing, (8) Main revision, (9) Final field-testing, and (10) Final product revision. The literature review and needs assessment provided a support and foundation for designing the preliminary product---the Ecology Hypertext Program. Participants in the needs assessment joined a focus group discussion. They were a group of graduate students in education who suggested the importance for designing this product. For the preliminary field test, the participants were a group of high school students studying biology. They were the potential user of the product. They reviewed the preliminary product and then filled out a questionnaire. Their feedback and suggestions were used to develop and improve the product in a step called preliminary revision. The second round of field tasting was the main field test in which the participants joined a focus group discussion. They were the same group who participated in needs assessment task. They reviewed the revised product and then provided ideas and suggestions to improve the product. Their feedback were categorized and implemented to develop the product as in the main revision task. Finally, a group of science teachers participated in this study by reviewing the product and then filling out the questionnaire. Their suggestions were used to conduct the final step in R&D methodology, the final product revision. The primary result of this study was the Ecology Hypertext Program. It considered a small attempt to give students an opportunity to learn through an interactive hypertext program. In addition, using the R&D methodology was an ideal procedure for designing and developing new educational products and material.
ERIC Educational Resources Information Center
Walsh, Lucas
2007-01-01
This article seeks to provide an introduction to Extensible Markup Language (XML) by looking at its use in a single source publishing approach to the provision of teaching resources in both hardcopy and online. Using the development of the International Baccalaureate Organisation's online Economics Subject Guide as a practical example, this…
A Practical Introduction to the XML, Extensible Markup Language, by Way of Some Useful Examples
ERIC Educational Resources Information Center
Snyder, Robin
2004-01-01
XML, Extensible Markup Language, is important as a way to represent and encapsulate the structure of underlying data in a portable way that supports data exchange regardless of the physical storage of the data. This paper (and session) introduces some useful and practical aspects of XML technology for sharing information in a educational setting…
Creating a course-based web site in a university environment
NASA Astrophysics Data System (ADS)
Robin, Bernard R.; Mcneil, Sara G.
1997-06-01
The delivery of educational materials is undergoing a remarkable change from the traditional lecture method to dissemination of courses via the World Wide Web. This paradigm shift from a paper-based structure to an electronic one has profound implications for university faculty. Students are enrolling in classes with the expectation of using technology and logging on to the Internet, and professors are realizing that the potential of the Web can have a significant impact on classroom activities. An effective method of integrating electronic technologies into teaching and learning is to publish classroom materials on the World Wide Web. Already, many faculty members are creating their own home pages and Web sites for courses that include syllabi, handouts, and student work. Additionally, educators are finding value in adding hypertext links to a wide variety of related Web resources from online research and electronic journals to government and commercial sites. A number of issues must be considered when developing course-based Web sites. These include meeting the needs of a target audience, designing effective instructional materials, and integrating graphics and other multimedia components. There are also numerous technical issues that must be addressed in developing, uploading and maintaining HTML documents. This article presents a model for a university faculty who want to begin using the Web in their teaching and is based on the experiences of two College of Education professors who are using the Web as an integral part of their graduate courses.
Knowledge Provenance in Semantic Wikis
NASA Astrophysics Data System (ADS)
Ding, L.; Bao, J.; McGuinness, D. L.
2008-12-01
Collaborative online environments with a technical Wiki infrastructure are becoming more widespread. One of the strengths of a Wiki environment is that it is relatively easy for numerous users to contribute original content and modify existing content (potentially originally generated by others). As more users begin to depend on informational content that is evolving by Wiki communities, it becomes more important to track the provenance of the information. Semantic Wikis expand upon traditional Wiki environments by adding some computationally understandable encodings of some of the terms and relationships in Wikis. We have developed a semantic Wiki environment that expands a semantic Wiki with provenance markup. Provenance of original contributions as well as modifications is encoded using the provenance markup component of the Proof Markup Language. The Wiki environment provides the provenance markup automatically, thus users are not required to make specific encodings of author, contribution date, and modification trail. Further, our Wiki environment includes a search component that understands the provenance primitives and thus can be used to provide a provenance-aware search facility. We will describe the knowledge provenance infrastructure of our Semantic Wiki and show how it is being used as the foundation of our group web site as well as a number of project web sites.
Determinants of price setting decisions on anti-malarial drugs at retail shops in Cambodia.
Patouillard, Edith; Hanson, Kara; Kleinschmidt, Immo; Palafox, Benjamin; Tougher, Sarah; Pok, Sochea; O'Connell, Kate; Goodman, Catherine
2015-05-30
In many low-income countries, the private commercial sector plays an important role in the provision of malaria treatment. However, the quality of care it provides is often poor, with artemisinin combination therapy (ACT) generally being too costly for consumers. Decreasing ACT prices is critical for improving private sector treatment outcomes and reducing the spread of artemisinin resistance. Yet limited evidence exists on the factors influencing retailers' pricing decisions. This study investigates the determinants of price mark-ups on anti-malarial drugs in retail outlets in Cambodia. Taking an economics perspective, the study tests the hypothesis that the structure of the anti-malarial market determines the way providers set their prices. Providers facing weak competition are hypothesized to apply high mark-ups and set prices above the competitive level. To analyse the relationship between market competition and provider pricing, the study used cross-sectional data from retail outlets selling anti-malarial drugs, including outlet characteristics data (e.g. outlet type, anti-malarial sales volumes), range of anti-malarial drugs stocked (e.g. dosage form, brand status) and purchase and selling prices. Market concentration, a measure of the level of market competition, was estimated using sales volume data. Market accessibility was defined based on travel time to the closest main commercial area. Percent mark-ups were calculated using price data. The relationship between mark-ups and market concentration was explored using regression analysis. The anti-malarial market was on average highly concentrated, suggesting weak competition. Higher concentration was positively associated with higher mark-ups in moderately accessible markets only, with no significant relationship or a negative relationship in other markets. Other determinants of pricing included anti-malarial brand status and generic type, with higher mark-ups on cheaper products. The results indicate that provider pricing as well as other key elements of anti-malarial supply and demand may have played an important role in the limited access to appropriate malaria treatment in Cambodia. The potential for an ACT price subsidy at manufacturer level combined with effective communications directed at consumers and supportive private sector regulation should be explored to improve access to quality malaria treatment in Cambodia.
Hypermedia and intelligent tutoring applications in a mission operations environment
NASA Technical Reports Server (NTRS)
Ames, Troy; Baker, Clifford
1990-01-01
Hypermedia, hypertext and Intelligent Tutoring System (ITS) applications to support all phases of mission operations are investigated. The application of hypermedia and ITS technology to improve system performance and safety in supervisory control is described - with an emphasis on modeling operator's intentions in the form of goals, plans, tasks, and actions. Review of hypermedia and ITS technology is presented as may be applied to the tutoring of command and control languages. Hypertext based ITS is developed to train flight operation teams and System Test and Operation Language (STOL). Specific hypermedia and ITS application areas are highlighted, including: computer aided instruction of flight operation teams (STOL ITS) and control center software development tools (CHIMES and STOL Certification Tool).
Integrating and visualizing primary data from prospective and legacy taxonomic literature
Agosti, Donat; Penev, Lyubomir; Sautter, Guido; Georgiev, Teodor; Catapano, Terry; Patterson, David; King, David; Pereira, Serrano; Vos, Rutger Aldo; Sierra, Soraya
2015-01-01
Abstract Specimen data in taxonomic literature are among the highest quality primary biodiversity data. Innovative cybertaxonomic journals are using workflows that maintain data structure and disseminate electronic content to aggregators and other users; such structure is lost in traditional taxonomic publishing. Legacy taxonomic literature is a vast repository of knowledge about biodiversity. Currently, access to that resource is cumbersome, especially for non-specialist data consumers. Markup is a mechanism that makes this content more accessible, and is especially suited to machine analysis. Fine-grained XML (Extensible Markup Language) markup was applied to all (37) open-access articles published in the journal Zootaxa containing treatments on spiders (Order: Araneae). The markup approach was optimized to extract primary specimen data from legacy publications. These data were combined with data from articles containing treatments on spiders published in Biodiversity Data Journal where XML structure is part of the routine publication process. A series of charts was developed to visualize the content of specimen data in XML-tagged taxonomic treatments, either singly or in aggregate. The data can be filtered by several fields (including journal, taxon, institutional collection, collecting country, collector, author, article and treatment) to query particular aspects of the data. We demonstrate here that XML markup using GoldenGATE can address the challenge presented by unstructured legacy data, can extract structured primary biodiversity data which can be aggregated with and jointly queried with data from other Darwin Core-compatible sources, and show how visualization of these data can communicate key information contained in biodiversity literature. We complement recent studies on aspects of biodiversity knowledge using XML structured data to explore 1) the time lag between species discovry and description, and 2) the prevelence of rarity in species descriptions. PMID:26023286
NASA Technical Reports Server (NTRS)
Bauman, William H., III
2010-01-01
The 12-km resolution North American Mesoscale (NAM) model (MesoNAM) is used by the 45th Weather Squadron (45 WS) Launch Weather Officers at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) to support space launch weather operations. The 45 WS tasked the Applied Meteorology Unit to conduct an objective statistics-based analysis of MesoNAM output compared to wind tower mesonet observations and then develop a an operational tool to display the results. The National Centers for Environmental Prediction began running the current version of the MesoNAM in mid-August 2006. The period of record for the dataset was 1 September 2006 - 31 January 2010. The AMU evaluated MesoNAM hourly forecasts from 0 to 84 hours based on model initialization times of 00, 06, 12 and 18 UTC. The MesoNAM forecast winds, temperature and dew point were compared to the observed values of these parameters from the sensors in the KSC/CCAFS wind tower network. The data sets were stratified by model initialization time, month and onshore/offshore flow for each wind tower. Statistics computed included bias (mean difference), standard deviation of the bias, root mean square error (RMSE) and a hypothesis test for bias = O. Twelve wind towers located in close proximity to key launch complexes were used for the statistical analysis with the sensors on the towers positioned at varying heights to include 6 ft, 30 ft, 54 ft, 60 ft, 90 ft, 162 ft, 204 ft and 230 ft depending on the launch vehicle and associated weather launch commit criteria being evaluated. These twelve wind towers support activities for the Space Shuttle (launch and landing), Delta IV, Atlas V and Falcon 9 launch vehicles. For all twelve towers, the results indicate a diurnal signal in the bias of temperature (T) and weaker but discernable diurnal signal in the bias of dewpoint temperature (T(sub d)) in the MesoNAM forecasts. Also, the standard deviation of the bias and RMSE of T, T(sub d), wind speed and wind direction indicated the model error increased with the forecast period all four parameters. The hypothesis testing uses statistics to determine the probability that a given hypothesis is true. The goal of using the hypothesis test was to determine if the model bias of any of the parameters assessed throughout the model forecast period was statistically zero. For th is dataset, if this test produced a value >= -1 .96 or <= 1.96 for a data point, then the bias at that point was effectively zero and the model forecast for that point was considered to have no error. A graphical user interface (GUI) was developed so the 45 WS would have an operational tool at their disposal that would be easy to navigate among the multiple stratifications of information to include tower locations, month, model initialization times, sensor heights and onshore/offshore flow. The AMU developed the GUI using HyperText Markup Language (HTML) so the tool could be used in most popular web browsers with computers running different operating systems such as Microsoft Windows and Linux.
A New Method of Viewing Attachment Document of eMail on Various Mobile Devices
NASA Astrophysics Data System (ADS)
Ko, Heeae; Seo, Changwoo; Lim, Yonghwan
As the computing power of the mobile devices is improving rapidly, many kinds of web services are also available in mobile devices just as Email service. Mobile Mail Service began early, but this service is mostly limited in some specified mobile devices such as Smart Phone. That is a limitation that users have to purchase specified phone to be benefited from Mobile Mail Service. In this paper, it uses DIDL (digital item declaration language) markup type defined in MPEG-21 and MobileGate Server, and solved this problem. DIDL could be converted to other markup types which are displayed by mobile devices. By transforming PC Web Mail contents including attachment document to DIDL markup through MobileGate Server, the Mobile Mail Service could be available for all kinds of mobile devices.
Sung, Yao-Ting; Wu, Ming-Da; Chen, Chun-Kuang; Chang, Kuo-En
2015-01-01
Online reading is developing at an increasingly rapid rate, but the debate concerning whether learning is more effective when using hypertexts than when using traditional linear texts is still persistent. In addition, several researchers stated that online reading comprehension always starts with a question, but little empirical evidence has been gathered to investigate this claim. This study used eye-tracking technology and retrospective think aloud technique to examine online reading behaviors of fifth-graders (N = 50). The participants were asked to read four texts on the website. The present study employed a three-way mixed design: 2 (reading ability: high vs. low) × 2 (reading goals: with vs. without) × 2 (text types: hypertext vs. linear text). The dependent variables were eye-movement indices and the frequencies of using online reading strategy. The results show that fifth-graders, irrespective of their reading ability, found it difficult to navigate the non-linear structure of hypertexts when searching for and integrating information. When they read with goals, they adjusted their reading speed and the focus of their attention. Their offline reading ability also influenced their online reading performance. These results suggest that online reading skills and strategies have to be taught in order to enhance the online reading abilities of elementary-school students. PMID:26074837
Sung, Yao-Ting; Wu, Ming-Da; Chen, Chun-Kuang; Chang, Kuo-En
2015-01-01
Online reading is developing at an increasingly rapid rate, but the debate concerning whether learning is more effective when using hypertexts than when using traditional linear texts is still persistent. In addition, several researchers stated that online reading comprehension always starts with a question, but little empirical evidence has been gathered to investigate this claim. This study used eye-tracking technology and retrospective think aloud technique to examine online reading behaviors of fifth-graders (N = 50). The participants were asked to read four texts on the website. The present study employed a three-way mixed design: 2 (reading ability: high vs. low) × 2 (reading goals: with vs. without) × 2 (text types: hypertext vs. linear text). The dependent variables were eye-movement indices and the frequencies of using online reading strategy. The results show that fifth-graders, irrespective of their reading ability, found it difficult to navigate the non-linear structure of hypertexts when searching for and integrating information. When they read with goals, they adjusted their reading speed and the focus of their attention. Their offline reading ability also influenced their online reading performance. These results suggest that online reading skills and strategies have to be taught in order to enhance the online reading abilities of elementary-school students.
Development of Human Face Literature Database Using Text Mining Approach: Phase I.
Kaur, Paramjit; Krishan, Kewal; Sharma, Suresh K
2018-06-01
The face is an important part of the human body by which an individual communicates in the society. Its importance can be highlighted by the fact that a person deprived of face cannot sustain in the living world. The amount of experiments being performed and the number of research papers being published under the domain of human face have surged in the past few decades. Several scientific disciplines, which are conducting research on human face include: Medical Science, Anthropology, Information Technology (Biometrics, Robotics, and Artificial Intelligence, etc.), Psychology, Forensic Science, Neuroscience, etc. This alarms the need of collecting and managing the data concerning human face so that the public and free access of it can be provided to the scientific community. This can be attained by developing databases and tools on human face using bioinformatics approach. The current research emphasizes on creating a database concerning literature data of human face. The database can be accessed on the basis of specific keywords, journal name, date of publication, author's name, etc. The collected research papers will be stored in the form of a database. Hence, the database will be beneficial to the research community as the comprehensive information dedicated to the human face could be found at one place. The information related to facial morphologic features, facial disorders, facial asymmetry, facial abnormalities, and many other parameters can be extracted from this database. The front end has been developed using Hyper Text Mark-up Language and Cascading Style Sheets. The back end has been developed using hypertext preprocessor (PHP). The JAVA Script has used as scripting language. MySQL (Structured Query Language) is used for database development as it is most widely used Relational Database Management System. XAMPP (X (cross platform), Apache, MySQL, PHP, Perl) open source web application software has been used as the server.The database is still under the developmental phase and discusses the initial steps of its creation. The current paper throws light on the work done till date.
RTML: remote telescope markup language and you
NASA Astrophysics Data System (ADS)
Hessman, F. V.
2001-12-01
In order to coordinate the use of robotic and remotely operated telescopes in networks -- like Göttingen's MOnitoring NEtwork of Telescopes (MONET) -- a standard format for the exchange of observing requests and reports is needed. I describe the benefits of Remote Telescope Markup Language (RTML), an XML-based protocol originally developed by the Hands-On Universe Project, which is being used and further developed by several robotic telescope projects and firms.
Development of Markup Language for Medical Record Charting: A Charting Language.
Jung, Won-Mo; Chae, Younbyoung; Jang, Bo-Hyoung
2015-01-01
Nowadays a lot of trials for collecting electronic medical records (EMRs) exist. However, structuring data format for EMR is an especially labour-intensive task for practitioners. Here we propose a new mark-up language for medical record charting (called Charting Language), which borrows useful properties from programming languages. Thus, with Charting Language, the text data described in dynamic situation can be easily used to extract information.
Visualization Development of the Ballistic Threat Geospatial Optimization
2015-07-01
topographic globes, Keyhole Markup Language (KML), and Collada files. World Wind gives the user the ability to import 3-D models and navigate...present. After the first person view window is closed , the images stored in memory are then converted to a QuickTime movie (.MOV). The video will be...processing unit HPC high-performance computing JOGL Java implementation of OpenGL KML Keyhole Markup Language NASA National Aeronautics and Space
Ong, Edison; Xiang, Zuoshuang; Zhao, Bin; Liu, Yue; Lin, Yu; Zheng, Jie; Mungall, Chris; Courtot, Mélanie; Ruttenberg, Alan; He, Yongqun
2017-01-04
Linked Data (LD) aims to achieve interconnected data by representing entities using Unified Resource Identifiers (URIs), and sharing information using Resource Description Frameworks (RDFs) and HTTP. Ontologies, which logically represent entities and relations in specific domains, are the basis of LD. Ontobee (http://www.ontobee.org/) is a linked ontology data server that stores ontology information using RDF triple store technology and supports query, visualization and linkage of ontology terms. Ontobee is also the default linked data server for publishing and browsing biomedical ontologies in the Open Biological Ontology (OBO) Foundry (http://obofoundry.org) library. Ontobee currently hosts more than 180 ontologies (including 131 OBO Foundry Library ontologies) with over four million terms. Ontobee provides a user-friendly web interface for querying and visualizing the details and hierarchy of a specific ontology term. Using the eXtensible Stylesheet Language Transformation (XSLT) technology, Ontobee is able to dereference a single ontology term URI, and then output RDF/eXtensible Markup Language (XML) for computer processing or display the HTML information on a web browser for human users. Statistics and detailed information are generated and displayed for each ontology listed in Ontobee. In addition, a SPARQL web interface is provided for custom advanced SPARQL queries of one or multiple ontologies. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
An object model and database for functional genomics.
Jones, Andrew; Hunt, Ela; Wastling, Jonathan M; Pizarro, Angel; Stoeckert, Christian J
2004-07-10
Large-scale functional genomics analysis is now feasible and presents significant challenges in data analysis, storage and querying. Data standards are required to enable the development of public data repositories and to improve data sharing. There is an established data format for microarrays (microarray gene expression markup language, MAGE-ML) and a draft standard for proteomics (PEDRo). We believe that all types of functional genomics experiments should be annotated in a consistent manner, and we hope to open up new ways of comparing multiple datasets used in functional genomics. We have created a functional genomics experiment object model (FGE-OM), developed from the microarray model, MAGE-OM and two models for proteomics, PEDRo and our own model (Gla-PSI-Glasgow Proposal for the Proteomics Standards Initiative). FGE-OM comprises three namespaces representing (i) the parts of the model common to all functional genomics experiments; (ii) microarray-specific components; and (iii) proteomics-specific components. We believe that FGE-OM should initiate discussion about the contents and structure of the next version of MAGE and the future of proteomics standards. A prototype database called RNA And Protein Abundance Database (RAPAD), based on FGE-OM, has been implemented and populated with data from microbial pathogenesis. FGE-OM and the RAPAD schema are available from http://www.gusdb.org/fge.html, along with a set of more detailed diagrams. RAPAD can be accessed by registration at the site.
2014-01-01
Background Logos are commonly used in molecular biology to provide a compact graphical representation of the conservation pattern of a set of sequences. They render the information contained in sequence alignments or profile hidden Markov models by drawing a stack of letters for each position, where the height of the stack corresponds to the conservation at that position, and the height of each letter within a stack depends on the frequency of that letter at that position. Results We present a new tool and web server, called Skylign, which provides a unified framework for creating logos for both sequence alignments and profile hidden Markov models. In addition to static image files, Skylign creates a novel interactive logo plot for inclusion in web pages. These interactive logos enable scrolling, zooming, and inspection of underlying values. Skylign can avoid sampling bias in sequence alignments by down-weighting redundant sequences and by combining observed counts with informed priors. It also simplifies the representation of gap parameters, and can optionally scale letter heights based on alternate calculations of the conservation of a position. Conclusion Skylign is available as a website, a scriptable web service with a RESTful interface, and as a software package for download. Skylign’s interactive logos are easily incorporated into a web page with just a few lines of HTML markup. Skylign may be found at http://skylign.org. PMID:24410852
Chemical Markup, XML, and the World Wide Web. 7. CMLSpect, an XML vocabulary for spectral data.
Kuhn, Stefan; Helmus, Tobias; Lancashire, Robert J; Murray-Rust, Peter; Rzepa, Henry S; Steinbeck, Christoph; Willighagen, Egon L
2007-01-01
CMLSpect is an extension of Chemical Markup Language (CML) for managing spectral and other analytical data. It is designed to be flexible enough to contain a wide variety of spectral data. The paper describes the CMLElements used and gives practical examples for common types of spectra. In addition it demonstrates how different views of the data can be expressed and what problems still exist.
Computer support for physiological cell modelling using an ontology on cell physiology.
Takao, Shimayoshi; Kazuhiro, Komurasaki; Akira, Amano; Takeshi, Iwashita; Masanori, Kanazawa; Tetsuya, Matsuda
2006-01-01
The development of electrophysiological whole cell models to support the understanding of biological mechanisms is increasing rapidly. Due to the complexity of biological systems, comprehensive cell models, which are composed of many imported sub-models of functional elements, can get quite complicated as well, making computer modification difficult. Here, we propose a computer support to enhance structural changes of cell models, employing the markup languages CellML and our original PMSML (physiological model structure markup language), in addition to a new ontology for cell physiological modelling. In particular, a method to make references from CellML files to the ontology and a method to assist manipulation of model structures using markup languages together with the ontology are reported. Using these methods three software utilities, including a graphical model editor, are implemented. Experimental results proved that these methods are effective for the modification of electrophysiological models.
An integrated knowledge system for the Space Shuttle hazardous gas detection system
NASA Technical Reports Server (NTRS)
Lo, Ching F.; Shi, George Z.; Bangasser, Carl; Fensky, Connie; Cegielski, Eric; Overbey, Glenn
1993-01-01
A computer-based integrated Knowledge-Based System, the Intelligent Hypertext Manual (IHM), was developed for the Space Shuttle Hazardous Gas Detection System (HGDS) at NASA Marshall Space Flight Center (MSFC). The IHM stores HGDS related knowledge and presents it in an interactive and intuitive manner. This manual is a combination of hypertext and an expert system which store experts' knowledge and experience in hazardous gas detection and analysis. The IHM's purpose is to provide HGDS personnel with the capabilities of: locating applicable documentation related to procedures, constraints, and previous fault histories; assisting in the training of personnel; enhancing the interpretation of real time data; and recognizing and identifying possible faults in the Space Shuttle sub-systems related to hazardous gas detection.
Development of the Plate Tectonics and Seismology markup languages with XML
NASA Astrophysics Data System (ADS)
Babaie, H.; Babaei, A.
2003-04-01
The Extensible Markup Language (XML) and its specifications such as the XSD Schema, allow geologists to design discipline-specific vocabularies such as Seismology Markup Language (SeismML) or Plate Tectonics Markup Language (TectML). These languages make it possible to store and interchange structured geological information over the Web. Development of a geological markup language requires mapping geological concepts, such as "Earthquake" or "Plate" into a UML object model, applying a modeling and design environment. We have selected four inter-related geological concepts: earthquake, fault, plate, and orogeny, and developed four XML Schema Definitions (XSD), that define the relationships, cardinalities, hierarchies, and semantics of these concepts. In such a geological concept model, the UML object "Earthquake" is related to one or more "Wave" objects, each arriving to a seismic station at a specific "DateTime", and relating to a specific "Epicenter" object that lies at a unique "Location". The "Earthquake" object occurs along a "Segment" of a "Fault" object, which is related to a specific "Plate" object. The "Fault" has its own associations with such things as "Bend", "Step", and "Segment", and could be of any kind (e.g., "Thrust", "Transform'). The "Plate" is related to many other objects such as "MOR", "Subduction", and "Forearc", and is associated with an "Orogeny" object that relates to "Deformation" and "Strain" and several other objects. These UML objects were mapped into XML Metadata Interchange (XMI) formats, which were then converted into four XSD Schemas. The schemas were used to create and validate the XML instance documents, and to create a relational database hosting the plate tectonics and seismological data in the Microsoft Access format. The SeismML and TectML allow seismologists and structural geologists, among others, to submit and retrieve structured geological data on the Internet. A seismologist, for example, can submit peer-reviewed and reliable data about a specific earthquake to a Java Server Page on our web site hosting the XML application. Other geologists can readily retrieve the submitted data, saved in files or special tables of the designed database, through a search engine designed with J2EE (JSP, servlet, Java Bean) and XML specifications such as XPath, XPointer, and XSLT. When extended to include all the important concepts of seismology and plate tectonics, the two markup languages will make global interchange of geological data a reality.
An Overview of Genomic Sequence Variation Markup Language (GSVML)
Nakaya, Jun; Hiroi, Kaei; Ido, Keisuke; Yang, Woosung; Kimura, Michio
2006-01-01
Internationally accumulated genomic sequence variation data on human requires the interoperable data exchanging format. We developed the GSVML as the data exchanging format. The GSVML is human health oriented and has three categories. Analyses on the use case in human health domain and the investigation on the databases and markup languages were conducted. An interface ability to Health Level Seven Genotype Model was examined. GSVML provides a sharable platform for both clinical and research applications.
Instrument Remote Control via the Astronomical Instrument Markup Language
NASA Technical Reports Server (NTRS)
Sall, Ken; Ames, Troy; Warsaw, Craig; Koons, Lisa; Shafer, Richard
1998-01-01
The Instrument Remote Control (IRC) project ongoing at NASA's Goddard Space Flight Center's (GSFC) Information Systems Center (ISC) supports NASA's mission by defining an adaptive intranet-based framework that provides robust interactive and distributed control and monitoring of remote instruments. An astronomical IRC architecture that combines the platform-independent processing capabilities of Java with the power of Extensible Markup Language (XML) to express hierarchical data in an equally platform-independent, as well as human readable manner, has been developed. This architecture is implemented using a variety of XML support tools and Application Programming Interfaces (API) written in Java. IRC will enable trusted astronomers from around the world to easily access infrared instruments (e.g., telescopes, cameras, and spectrometers) located in remote, inhospitable environments, such as the South Pole, a high Chilean mountaintop, or an airborne observatory aboard a Boeing 747. Using IRC's frameworks, an astronomer or other scientist can easily define the type of onboard instrument, control the instrument remotely, and return monitoring data all through the intranet. The Astronomical Instrument Markup Language (AIML) is the first implementation of the more general Instrument Markup Language (IML). The key aspects of our approach to instrument description and control applies to many domains, from medical instruments to machine assembly lines. The concepts behind AIML apply equally well to the description and control of instruments in general. IRC enables us to apply our techniques to several instruments, preferably from different observatories.
Tang, Wenxi; Xie, Jing; Lu, Yijuan; Liu, Qizhi; Malone, Daniel; Ma, Aixia
2018-04-01
The State Council of China requires that all urban public hospitals must eliminate drug markups by September 2017, and that hospital drugs must be sold at the purchase price. Nanjing-one of the first provincial capital cities to implement the reform-is studied to evaluate the effects of the comprehensive reform on drug prices in public hospitals, and to explore differential compensation plans. Sixteen hospitals were selected, and financial data were collected over the 48-month period before the reform and for 12 months after the reform. An analysis was carried out using a simple linear interrupted time series model. The average difference ratio of drug surplus fell 13.39% after the reform, and the drug markups were basically eliminated. Revenue from medical services showed a net growth of 28.25%. The overall compensation received from government financial budget and medical service revenue growth was 103.69% for the loss from policy-permitted 15% markup sales, and 116.48% for the net loss. However, there were large differences in compensation levels at different hospitals, ranging from -21.92% to 413.74% by medical services revenue growth, causing the combined rate of both financial and service compensation to vary from 28.87-413.74%, There was a significant positive correlation between the services compensation rate and the proportion of medical service revenue (p < .001), and the compensation rate increased by 8% for every 1% increase in the proportion of services revenue. Nanjing's pricing and compensation reform has basically achieved the policy targets of eliminating the drug markups, promoting the growth of medical services revenue, and adjusting the structure of medical revenue. However, the growth rate of service revenue of hospitals varied significantly from one another. Nanjing's reform represents successful pricing and compensation reform in Chinese urban public hospitals. It is recommended that a differentiated and dynamic compensation plan should be established in accordance with the revenue structure of different hospitals.
Babar, Zaheer Ud Din; Ibrahim, Mohamed Izham Mohamed; Singh, Harpal; Bukahri, Nadeem Irfan; Creese, Andrew
2007-01-01
Background Malaysia's stable health care system is facing challenges with increasing medicine costs. To investigate these issues a survey was carried out to evaluate medicine prices, availability, affordability, and the structure of price components. Methods and Findings The methodology developed by the World Health Organization (WHO) and Health Action International (HAI) was used. Price and availability data for 48 medicines was collected from 20 public sector facilities, 32 private sector retail pharmacies and 20 dispensing doctors in four geographical regions of West Malaysia. Medicine prices were compared with international reference prices (IRPs) to obtain a median price ratio. The daily wage of the lowest paid unskilled government worker was used to gauge the affordability of medicines. Price component data were collected throughout the supply chain, and markups, taxes, and other distribution costs were identified. In private pharmacies, innovator brand (IB) prices were 16 times higher than the IRPs, while generics were 6.6 times higher. In dispensing doctor clinics, the figures were 15 times higher for innovator brands and 7.5 for generics. Dispensing doctors applied high markups of 50%–76% for IBs, and up to 316% for generics. Retail pharmacy markups were also high—25%–38% and 100%–140% for IBs and generics, respectively. In the public sector, where medicines are free, availability was low even for medicines on the National Essential Drugs List. For a month's treatment for peptic ulcer disease and hypertension people have to pay about a week's wages in the private sector. Conclusions The free market by definition does not control medicine prices, necessitating price monitoring and control mechanisms. Markups for generic products are greater than for IBs. Reducing the base price without controlling markups may increase profits for retailers and dispensing doctors without reducing the price paid by end users. To increase access and affordability, promotion of generic medicines and improved availability of medicines in the public sector are required. PMID:17388660
Babar, Zaheer Ud Din; Ibrahim, Mohamed Izham Mohamed; Singh, Harpal; Bukahri, Nadeem Irfan; Creese, Andrew
2007-03-27
Malaysia's stable health care system is facing challenges with increasing medicine costs. To investigate these issues a survey was carried out to evaluate medicine prices, availability, affordability, and the structure of price components. The methodology developed by the World Health Organization (WHO) and Health Action International (HAI) was used. Price and availability data for 48 medicines was collected from 20 public sector facilities, 32 private sector retail pharmacies and 20 dispensing doctors in four geographical regions of West Malaysia. Medicine prices were compared with international reference prices (IRPs) to obtain a median price ratio. The daily wage of the lowest paid unskilled government worker was used to gauge the affordability of medicines. Price component data were collected throughout the supply chain, and markups, taxes, and other distribution costs were identified. In private pharmacies, innovator brand (IB) prices were 16 times higher than the IRPs, while generics were 6.6 times higher. In dispensing doctor clinics, the figures were 15 times higher for innovator brands and 7.5 for generics. Dispensing doctors applied high markups of 50%-76% for IBs, and up to 316% for generics. Retail pharmacy markups were also high-25%-38% and 100%-140% for IBs and generics, respectively. In the public sector, where medicines are free, availability was low even for medicines on the National Essential Drugs List. For a month's treatment for peptic ulcer disease and hypertension people have to pay about a week's wages in the private sector. The free market by definition does not control medicine prices, necessitating price monitoring and control mechanisms. Markups for generic products are greater than for IBs. Reducing the base price without controlling markups may increase profits for retailers and dispensing doctors without reducing the price paid by end users. To increase access and affordability, promotion of generic medicines and improved availability of medicines in the public sector are required.
Importing MAGE-ML format microarray data into BioConductor.
Durinck, Steffen; Allemeersch, Joke; Carey, Vincent J; Moreau, Yves; De Moor, Bart
2004-12-12
The microarray gene expression markup language (MAGE-ML) is a widely used XML (eXtensible Markup Language) standard for describing and exchanging information about microarray experiments. It can describe microarray designs, microarray experiment designs, gene expression data and data analysis results. We describe RMAGEML, a new Bioconductor package that provides a link between cDNA microarray data stored in MAGE-ML format and the Bioconductor framework for preprocessing, visualization and analysis of microarray experiments. http://www.bioconductor.org. Open Source.
Murray-Rust, Peter; Rzepa, Henry S; Williamson, Mark J; Willighagen, Egon L
2004-01-01
Examples of the use of the RSS 1.0 (RDF Site Summary) specification together with CML (Chemical Markup Language) to create a metadata based alerting service termed CMLRSS for molecular content are presented. CMLRSS can be viewed either using generic software or with modular opensource chemical viewers and editors enhanced with CMLRSS modules. We discuss the more automated use of CMLRSS as a component of a World Wide Molecular Matrix of semantically rich chemical information.
Sankar, Punnaivanam; Aghila, Gnanasekaran
2007-01-01
The mechanism models for primary organic reactions encoding the structural fragments undergoing substitution, addition, elimination, and rearrangements are developed. In the proposed models, each and every structural component of mechanistic pathways is represented with flexible and fragment based markup technique in XML syntax. A significant feature of the system is the encoding of the electron movements along with the other components like charges, partial charges, half bonded species, lone pair electrons, free radicals, reaction arrows, etc. needed for a complete representation of reaction mechanism. The rendering of reaction schemes described with the proposed methodology is achieved with a concise XML extension language interoperating with the structure markup. The reaction scheme is visualized as 2D graphics in a browser by converting them into SVG documents enabling the desired layouts normally perceived by the chemists conventionally. An automatic representation of the complex patterns of the reaction mechanism is achieved by reusing the knowledge in chemical ontologies and developing artificial intelligence components in terms of axioms.
NASA Technical Reports Server (NTRS)
Rinker, Nancy A.
1994-01-01
The role of librarians today is drastically influenced by the changing nature of information and library services. The museum-like libraries of yesterday are a thing of the past: today's libraries are bustling with life, activity, and the sounds of new technologies. Libraries are replacing their paper card catalogs with state-of-the-art online systems, which provide faster and more comprehensive search capabilities. Even the resources themselves are changing. New formats for information, such as CD-ROM's, are becoming popular for all types of publications, from bibliographic tools to encyclopedias to electronic journals, even replacing print materials completely in some cases. Today it is almost impossible to walk into a library and find the information you need without coming into contact with at least one computer system. Librarians are not only struggling to keep up with the technological advancements of the day, but they are becoming information intermediaries: they must teach library users how to use all of the new systems and electronic resources. Not surprisingly, bibliographic instruction itself has taken on a new look and feel in these electronically advanced libraries. Many libraries are experimenting with the development of expert systems and other computer aided instruction interfaces for teaching patrons how to use the library and its resources. One popular type of interface in library instruction programs is hypertext, which utilizes 'stacks' or linked pages of information. Hypertext stacks can incorporate color graphics along with text to provide a more interesting interface and entice users into trying out the system. Another advantage of hypertext is that it is generally easy to use, even for those unfamiliar with computers. As such, it lends itself well to application in libraries, which often serve a broad range of clientele. This paper will discuss the design, development, and implementation of a hypertext library tour in a special library setting. The library featured in the electronic library tour is the National Aeronautics and Space Administration's Technical Library at Langley Research Center in Hampton, Virginia.
Trimmel, Michael; Atzlsdorfer, Jürgen; Tupy, Nina; Trimmel, Karin
2012-11-01
The effects of low-intensity noise on cognitive learning and autonomous physiological processes are of high practical relevance but are rarely addressed in empirical investigations. This study investigated the impact of neighbourhood noise (45 dB(A), n=20) and of noise from passing aircraft (48 dB(A) peak amplitude presented once per minute; n=19) during computer-based learning of different texts (with three types of text structure, i.e. linear text, hierarchic hypertext, and network hypertext) in relation to a control group (35 dB(A), n=20). Using a between-subjects design, reproduction scores, heart rate, and spontaneous skin conductance fluctuations were compared. Results showed impairments of reproduction in both noise conditions. Additionally, whereas in the control group and the neighbourhood noise group scores were better for the network hypertext structure than for hierarchic hypertext, no effect of text structure on reproduction appeared in the aircraft noise group. Compared to the control group, the number of spontaneous skin conductance fluctuations was higher in the aircraft noise group for most of the learning period. For the neighbourhood noise group, fluctuations were higher during the pre- and post-task periods, when noise stimulation was still present. Additionally, during the last 5 min of the 15 min learning period, an increased heart rate was found in the aircraft noise group. The data indicate remarkable cognitive and physiological effects of low-intensity background noise. Some aspects of reproduction were impaired in the two noise groups. Cognitive learning, as indicated by reproduction scores, was changed structurally in the aircraft noise group and was accompanied by higher sympathetic activity. An additional cardiovascular load, indicated by the elevated heart rate during the announced last 5 min of the learning period, appeared when aircraft noise was combined with time pressure, even at a peak SPL of only 48 dB(A). Attentional mechanisms (attentional control) such as feeling threatened by passing aircraft approaching the airport, higher demands on selective filtering, and difficulties in changing cognitive strategies during noise are discussed as underlying mechanisms. Copyright © 2011 Elsevier GmbH. All rights reserved.
Do state minimum markup/price laws work? Evidence from retail scanner data and TUS-CPS
Huang, Jidong; Chriqui, Jamie F; DeLong, Hillary; Mirza, Maryam; Diaz, Megan C; Chaloupka, Frank J
2016-01-01
Background Minimum markup/price laws (MPLs) have been proposed as an alternative non-tax pricing strategy to reduce tobacco use and access. However, the empirical evidence on the effectiveness of MPLs in increasing cigarette prices is very limited. This study aims to fill this critical gap by examining the association between MPLs and cigarette prices. Methods State MPLs were compiled from primary legal research databases and were linked to cigarette prices constructed from the Nielsen retail scanner data and the self-reported cigarette prices from the Tobacco Use Supplement to the Current Population Survey. Multivariate regression analyses were conducted to examine the association of MPLs, and their major components, with cigarette prices. Results The presence of MPLs was associated with higher cigarette prices. In addition, cigarette prices were higher, above and beyond the higher prices resulting from MPLs, in states that prohibit below-cost combination sales; do not allow any distributing party to use trade discounts to reduce the base cost of cigarettes; prohibit distributing parties from meeting the price of a competitor; and prohibit distributing below-cost coupons to the consumer. Moreover, states with total markup rates >24% had significantly higher cigarette prices. Conclusions MPLs are an effective way to increase cigarette prices. The impact of MPLs can be further strengthened by imposing greater markup rates and by prohibiting coupon distribution, competitor price matching, and the use of below-cost combination sales and trade discounts. PMID:27697948
Sutliff, Jacqueline Page; Olton, Robert; Omarzu, Christopher H.
1990-01-01
This demonstration shows a hypertext-linked integrated database consisting of a variety of sources of consumer health information that enables the user to retrieve and understand information more easily than consulting independent sources in the traditional fashion.
HTML5: a new standard for the Web.
Hoy, Matthew B
2011-01-01
HTML5 is the newest revision of the HTML standard developed by the World Wide Web Consortium (W3C). This new standard adds several exciting new features and capabilities to HTML. This article briefly discusses the history of HTML standards, explores what changes are in the new HTML5 standard, and considers the implications for information professionals. A list of HTML5 resources and examples is also provided.
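As a small taste of what the revision adds, the page below uses HTML5's semantic sectioning elements and the plug-in-free video element (written here in XML-compatible, XHTML-style syntax; the content and file name are placeholders):

    <!DOCTYPE html>
    <html lang="en">
      <head>
        <meta charset="utf-8"/>
        <title>Library news</title>
      </head>
      <body>
        <!-- header, article, and footer are new semantic elements in HTML5 -->
        <header><h1>This month at the library</h1></header>
        <article>
          <p>Native video playback requires no browser plug-in:</p>
          <video src="tour.webm" controls="controls"></video>
        </article>
        <footer><p>Contact the reference desk for help.</p></footer>
      </body>
    </html>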
XML in an Adaptive Framework for Instrument Control
NASA Technical Reports Server (NTRS)
Ames, Troy J.
2004-01-01
NASA Goddard Space Flight Center is developing an extensible framework for instrument command and control, known as Instrument Remote Control (IRC), that combines the platform-independent processing capabilities of Java with the power of the Extensible Markup Language (XML). A key aspect of the architecture is software that is driven by an instrument description, written using the Instrument Markup Language (IML). IML is an XML dialect used to describe interfaces to control and monitor the instrument, command sets and command formats, data streams, communication mechanisms, and data processing algorithms.
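IML's real schema is not reproduced in the abstract; a hypothetical instrument description in the same spirit, pairing a command definition with a data-stream description, might read as follows (all names below are invented for illustration):

    <instrument name="far-infrared-spectrometer">
      <commandSet>
        <command name="SET_GAIN">
          <argument name="level" type="int" min="0" max="7"/>
        </command>
      </commandSet>
      <dataStream name="housekeeping" rate="1Hz">
        <field name="detectorTemperature" type="float" units="K"/>
      </dataStream>
      <!-- Communication mechanism described declaratively, not hard-coded -->
      <communication protocol="tcp" host="instrument.example" port="5000"/>
    </instrument>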
Experimental Applications of Automatic Test Markup Language (ATML)
NASA Technical Reports Server (NTRS)
Lansdowne, Chatwin A.; McCartney, Patrick; Gorringe, Chris
2012-01-01
The authors describe challenging use cases for Automatic Test Markup Language (ATML) and evaluate solutions. The first case uses ATML Test Results to deliver active features that support test procedure development and test flow, bridging mixed software development environments. The second case examines adding attributes to the Systems Modeling Language (SysML) to create a linkage for deriving information from a model to fill in an ATML document set. Both cases are outside the original concept of operations for ATML but are typical when integrating large heterogeneous systems with modular contributions from multiple disciplines.
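For a feel of the first use case, an ATML-style test result carries both an outcome and the measurement behind it; the sketch below is illustrative only (the real ATML schemas, standardized in the IEEE 1671 family, define their own namespaces and element names):

    <TestResults xmlns="urn:example:atml-testresults">
      <TestGroup name="PowerSupplyCheck">
        <Test name="Rail5V">
          <Outcome value="Passed"/>
          <!-- The measured value and its limits travel with the verdict -->
          <Measurement value="5.02" units="V" lowLimit="4.75" highLimit="5.25"/>
        </Test>
      </TestGroup>
    </TestResults>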
Swat, M J; Moodie, S; Wimalaratne, S M; Kristensen, N R; Lavielle, M; Mari, A; Magni, P; Smith, M K; Bizzotto, R; Pasotti, L; Mezzalana, E; Comets, E; Sarr, C; Terranova, N; Blaudez, E; Chan, P; Chard, J; Chatel, K; Chenel, M; Edwards, D; Franklin, C; Giorgino, T; Glont, M; Girard, P; Grenon, P; Harling, K; Hooker, A C; Kaye, R; Keizer, R; Kloft, C; Kok, J N; Kokash, N; Laibe, C; Laveille, C; Lestini, G; Mentré, F; Munafo, A; Nordgren, R; Nyberg, H B; Parra-Guillen, Z P; Plan, E; Ribba, B; Smith, G; Trocóniz, I F; Yvon, F; Milligan, P A; Harnisch, L; Karlsson, M; Hermjakob, H; Le Novère, N
2015-06-01
The lack of a common exchange format for mathematical models in pharmacometrics has been a long-standing problem. Such a format has the potential to increase productivity and analysis quality, simplify the handling of complex workflows, ensure reproducibility of research, and facilitate the reuse of existing model resources. Pharmacometrics Markup Language (PharmML), currently under development by the Drug Disease Model Resources (DDMoRe) consortium, is intended to become an exchange standard in pharmacometrics by providing means to encode models, trial designs, and modeling steps.
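To make the three ingredients concrete, a PharmML file would couple a model definition with a trial design and the modeling steps applied to them; the outline below is a loose sketch under that assumption, not an excerpt of the actual DDMoRe schema:

    <PharmML xmlns="urn:example:pharmml">
      <ModelDefinition>
        <!-- e.g. a one-compartment PK model with clearance CL and volume V -->
        <StructuralModel id="oneCompartment"/>
      </ModelDefinition>
      <TrialDesign>
        <Dosing amount="100" units="mg" route="oral"/>
      </TrialDesign>
      <ModellingSteps>
        <EstimationStep algorithm="SAEM"/>
      </ModellingSteps>
    </PharmML>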
ERIC Educational Resources Information Center
Lancaster, F. W.
1989-01-01
Describes various stages involved in the applications of electronic media to the publishing industry. Highlights include computer typesetting, or photocomposition; machine-readable databases; the distribution of publications in electronic form; computer conferencing and electronic mail; collaborative authorship; hypertext; hypermedia publications;…
OneGeology: Making the World’s Geological Map Data Accessible Online
NASA Astrophysics Data System (ADS)
Broome, H.; Jackson, I.; Robida, F.; Thorleifson, H.
2009-12-01
OneGeology (http://onegeology.org) is a successful international initiative of the geological surveys of the world and the flagship project of the ‘International Year of Planet Earth’. Its aim is to provide dynamic web access to geological map data covering the world, creating a focus for accessing geological information for everyone. Thanks to the enthusiasm and support of participating nations, the initiative has progressed rapidly, and geological surveys and the many users of their data are excited about this ground-breaking project. Currently, 10 international geoscience organizations have endorsed the initiative and more than 109 countries have agreed to participate. OneGeology works with whatever digital format is available in each country. The target scale is 1:1 million, but the project is pragmatic and accepts a range of scales and the best available data. The initiative recognizes that different nations have differing abilities to participate, and the transfer of know-how to those who need it is a key aspect of the approach. A key contributor to the success of OneGeology has been its utilization of the latest web technology and an emerging data exchange standard for geological map data called GeoSciML. GeoSciML (GeoScience Markup Language) is a schema written in GML (Geography Markup Language) for geological data. GeoSciML can represent both the geography (geometries, e.g. polygons, lines and points) and the geological attribution in a clear and structured format. OneGeology was launched in March 2007 at the inaugural workshop in Brighton, England. At that workshop the 43 participating nations developed a declaration of common objectives and principles called the “Brighton Accord” (http://onegeology.org/what_is/accord.html). Work was initiated immediately, and the resulting OneGeology Portal was launched at the International Geological Congress in Oslo in August 2008 by Simon Winchester, author of “The Map that Changed the World”. Since the successful launch, OneGeology participants have continued working both to increase national participation and content, and to put in place a more formal governance structure to oversee the long-term evolution of the initiative. OneGeology is an example of collaboration in action and is both multilateral and multinational. In 2007, a group of motivated geoscientists and data managers identified an opportunity and took the initiative to engage their peers to work in concert to achieve a shared objective. OneGeology has facilitated the collaborative development of an Internet site that provides unprecedented online access to global geological map data.
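The following simplified fragment conveys the idea: a geological unit (the attribution) tied to a GML polygon (the geometry). The GML namespace is real; the GeoSciML namespace and the exact element names are simplified assumptions here:

    <gsml:MappedFeature xmlns:gsml="urn:example:geosciml"
                        xmlns:gml="http://www.opengis.net/gml">
      <gsml:specification>
        <gsml:GeologicUnit>
          <gml:name>Devonian sandstone</gml:name>
        </gsml:GeologicUnit>
      </gsml:specification>
      <gsml:shape>
        <!-- The mapped extent as an ordinary GML polygon (lat/lon ring) -->
        <gml:Polygon srsName="EPSG:4326">
          <gml:exterior>
            <gml:LinearRing>
              <gml:posList>54.0 -2.0 54.1 -2.0 54.1 -1.9 54.0 -2.0</gml:posList>
            </gml:LinearRing>
          </gml:exterior>
        </gml:Polygon>
      </gsml:shape>
    </gsml:MappedFeature>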
Impact of cigarette minimum price laws on the retail price of cigarettes in the USA.
Tynan, Michael A; Ribisl, Kurt M; Loomis, Brett R
2013-05-01
Cigarette price increases prevent youth initiation, reduce cigarette consumption and increase the number of smokers who quit. Cigarette minimum price laws (MPLs), which typically require cigarette wholesalers and retailers to charge a minimum percentage mark-up for cigarette sales, have been identified as an intervention that can potentially increase cigarette prices. 24 states and the District of Columbia have cigarette MPLs. Using data extracted from SCANTRACK retail scanner data from the Nielsen company, average cigarette prices were calculated for designated market areas in states with and without MPLs in three retail channels: grocery stores, drug stores and convenience stores. Regression models were estimated using the average cigarette pack price in each designated market area and calendar quarter in 2009 as the outcome variable. The average differences in cigarette pack prices are 46 cents in the grocery channel, 29 cents in the drug channel and 13 cents in the convenience channel, with prices being lower in states with MPLs in all three channels. The finding that MPLs do not raise cigarette prices could be the result of a lack of compliance and enforcement by the state, or could be attributed to the minimum state mark-up being lower than the free-market mark-up for cigarettes. Rather than require a minimum mark-up, which can be nullified by promotional incentives and discounts, states and countries could strengthen MPLs by setting a simple 'floor price' that is the true minimum price for all cigarettes, or could prohibit discounts to consumers and retailers.
Hucka, Michael; Bergmann, Frank T.; Dräger, Andreas; Hoops, Stefan; Keating, Sarah M.; Le Novère, Nicolas; Myers, Chris J.; Olivier, Brett G.; Sahle, Sven; Schaff, James C.; Smith, Lucian P.; Waltemath, Dagmar; Wilkinson, Darren J.
2017-01-01
Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 5 of SBML Level 2. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/. PMID:26528569
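For concreteness, a minimal Level 2 model in the spirit of this specification (one compartment, one reaction converting species S1 to S2) looks roughly like the following; it is abridged, and the namespace URI simply follows the level2/versionN pattern of earlier SBML releases, so it should be checked against the specification itself:

    <?xml version="1.0" encoding="UTF-8"?>
    <sbml xmlns="http://www.sbml.org/sbml/level2/version5" level="2" version="5">
      <model id="minimal">
        <listOfCompartments>
          <compartment id="cell" size="1"/>
        </listOfCompartments>
        <listOfSpecies>
          <species id="S1" compartment="cell" initialAmount="10"/>
          <species id="S2" compartment="cell" initialAmount="0"/>
        </listOfSpecies>
        <listOfReactions>
          <!-- A complete model would typically also attach a kineticLaw here -->
          <reaction id="conversion">
            <listOfReactants><speciesReference species="S1"/></listOfReactants>
            <listOfProducts><speciesReference species="S2"/></listOfProducts>
          </reaction>
        </listOfReactions>
      </model>
    </sbml>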
Kyoda, Koji; Tohsato, Yukako; Ho, Kenneth H. L.; Onami, Shuichi
2015-01-01
Motivation: Recent progress in live-cell imaging and modeling techniques has resulted in generation of a large amount of quantitative data (from experimental measurements and computer simulations) on spatiotemporal dynamics of biological objects such as molecules, cells and organisms. Although many research groups have independently dedicated their efforts to developing software tools for visualizing and analyzing these data, these tools are often not compatible with each other because of different data formats. Results: We developed an open unified format, Biological Dynamics Markup Language (BDML; current version: 0.2), which provides a basic framework for representing quantitative biological dynamics data for objects ranging from molecules to cells to organisms. BDML is based on Extensible Markup Language (XML). Its advantages are machine and human readability and extensibility. BDML will improve the efficiency of development and evaluation of software tools for data visualization and analysis. Availability and implementation: A specification and a schema file for BDML are freely available online at http://ssbd.qbic.riken.jp/bdml/. Contact: sonami@riken.jp Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:25414366
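As a very rough sketch of what such a unified format might hold (element names here are invented; the authoritative schema is the one published at http://ssbd.qbic.riken.jp/bdml/), a BDML-style file would tie object identities to time-stamped spatial measurements:

    <bdml version="0.2">
      <object id="nucleus1" type="cell nucleus"/>
      <!-- One measurement of the object's position at t = 0 -->
      <measurement objectRef="nucleus1" time="0.0" timeUnits="s">
        <coordinates x="12.3" y="45.0" z="7.8" units="micrometer"/>
      </measurement>
    </bdml>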
The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 1 Core
Hucka, Michael; Bergmann, Frank T.; Hoops, Stefan; Keating, Sarah M.; Sahle, Sven; Schaff, James C.; Smith, Lucian P.; Wilkinson, Darren J.
2017-01-01
Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 1 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/. PMID:26528564
Collaborative Planning of Robotic Exploration
NASA Technical Reports Server (NTRS)
Norris, Jeffrey; Backes, Paul; Powell, Mark; Vona, Marsette; Steinke, Robert
2004-01-01
The Science Activity Planner (SAP) software system includes an uplink-planning component, which enables collaborative planning of activities to be undertaken by an exploratory robot on a remote planet or on Earth. Included in the uplink-planning component is the SAP-Uplink Browser, which enables users to load multiple spacecraft activity plans into a single window, compare them, and merge them. The uplink-planning component includes a subcomponent that implements the Rover Markup Language Activity Planning format (RML-AP), based on the Extensible Markup Language (XML) format, which enables the representation, within a single document, of planned spacecraft and robotic activities together with the scientific reasons for the activities. Each such document is highly parseable and can be validated easily. Another subcomponent of the uplink-planning component is the Activity Dictionary Markup Language (ADML), which eliminates the need for two mission activity dictionaries - one in a human-readable format and one in a machine-readable format. Style sheets that have been developed along with the ADML format enable users to edit one dictionary in a user-friendly environment without compromising machine readability.
Ben Ayed, Rayda; Ben Hassen, Hanen; Ennouri, Karim; Ben Marzoug, Riadh; Rebai, Ahmed
2016-01-01
Olive (Olea europaea), whose importance is mainly due to its nutritional and health features, is one of the most economically significant oil-producing trees in the Mediterranean region. Unfortunately, the increasing market demand for virgin olive oil can often result in its adulteration with less expensive oils, which is a serious problem for the public and for quality-control evaluators of virgin olive oil. Therefore, to avoid fraud, olive cultivar identification and virgin olive oil authentication have become a major issue for producers and consumers of quality control in the olive chain. Presently, genetic traceability using SSR markers is a cost-effective and powerful technique that can be employed to resolve such problems. However, to identify an unknown monovarietal virgin olive oil cultivar, a reference system is necessary. Thus, an Olive Genetic Diversity Database (OGDD) (http://www.bioinfo-cbs.org/ogdd/) is presented in this work. It is a genetic, morphologic and chemical database of worldwide olive trees and oils having a double function. Besides being a reference system generated for the identification of unknown olive or virgin olive oil cultivars based on their microsatellite allele size(s), it provides users with additional morphological and chemical information for each identified cultivar. Currently, OGDD is designed to enable users to easily retrieve and visualize biologically important information (SSR markers, and olive tree and oil characteristics of about 200 cultivars worldwide) using a set of efficient query interfaces and analysis tools. It can be accessed through a web service from any modern programming language using a simple Hypertext Transfer Protocol (HTTP) call. The web site is implemented in Java, JavaScript, PHP, HTML and Apache, with all major browsers supported. Database URL: http://www.bioinfo-cbs.org/ogdd/ PMID:26827236
A World Wide Web (WWW) server database engine for an organelle database, MitoDat.
Lemkin, P F; Chipperfield, M; Merril, C; Zullo, S
1996-03-01
We describe a simple database search engine, "dbEngine", which may be used to quickly create a searchable database on a World Wide Web (WWW) server. Data may be prepared from spreadsheet programs (such as Excel) or from tables exported from relational database systems. This Common Gateway Interface (CGI-BIN) program is used with a WWW server such as those available commercially, or from the National Center for Supercomputing Applications (NCSA) or CERN. Its capabilities include: (i) searching records by combinations of terms connected with ANDs or ORs; (ii) returning search results as hypertext links to other WWW database servers; (iii) mapping lists of literature reference identifiers to the full references; (iv) creating bidirectional hypertext links between pictures and the database. DbEngine has been used to support the MitoDat database (Mendelian and non-Mendelian inheritance associated with the Mitochondrion) on the WWW.
Computer technologies and institutional memory
NASA Technical Reports Server (NTRS)
Bell, Christopher; Lachman, Roy
1989-01-01
NASA programs for manned space flight are in their 27th year. Scientists and engineers who worked continuously on the development of aerospace technology during that period are approaching retirement. The resulting loss to the organization will be considerable. Although this problem is general to the NASA community, it was explored here in terms of the institutional memory and technical expertise of a single individual in the Man-Systems division. The main domain of the expert was spacecraft lighting, which became the subject area for analysis in these studies. The report starts with an analysis of the cumulative expertise and institutional memory of technical employees of organizations such as NASA. A set of solutions to this problem is examined and found inadequate. Two solutions were investigated at length: hypertext and expert systems. Illustrative examples are provided of hypertext and expert-system representations of spacecraft lighting. These computer technologies can be used to ameliorate the problem of the loss of invaluable personnel.
PubMed-EX: a web browser extension to enhance PubMed search with text mining features.
Tsai, Richard Tzong-Han; Dai, Hong-Jie; Lai, Po-Ting; Huang, Chi-Hsin
2009-11-15
PubMed-EX is a browser extension that marks up PubMed search results with additional text-mining information. PubMed-EX's page mark-up, which includes section categorization and gene/disease and relation mark-up, can help researchers to quickly focus on key terms and provide additional information on them. All text processing is performed server-side, freeing up user resources. PubMed-EX is freely available at http://bws.iis.sinica.edu.tw/PubMed-EX and http://iisr.cse.yzu.edu.tw:8000/PubMed-EX/.
cluML: A markup language for clustering and cluster validity assessment of microarray data.
Bolshakova, Nadia; Cunningham, Pádraig
2005-01-01
cluML is a new markup language for microarray data clustering and cluster validity assessment. The XML-based format has been designed to address some of the limitations observed in traditional formats, such as the inability to store multiple clusterings (including biclusterings) and validation results within a dataset. cluML is an effective tool to support biomedical knowledge representation in gene expression data analysis. Although cluML was developed for DNA microarray analysis applications, it can also be used effectively to represent clustering and validation results for other biomedical and physical data.
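A hypothetical fragment (element names invented for illustration) shows how one dataset could carry several clusterings plus their validity indices side by side, which is exactly the capability missing from older flat formats:

    <cluML>
      <dataset id="expr1" source="microarray_experiment_1"/>
      <clustering id="kmeans3" algorithm="k-means" k="3" dataset="expr1">
        <cluster id="c1" members="gene1 gene7 gene12"/>
        <cluster id="c2" members="gene3 gene5"/>
        <cluster id="c3" members="gene2 gene9"/>
      </clustering>
      <!-- Validity indices stored alongside the clustering they score -->
      <validation clustering="kmeans3">
        <index name="silhouette" value="0.62"/>
        <index name="Dunn" value="1.41"/>
      </validation>
    </cluML>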
Generating Systems Biology Markup Language Models from the Synthetic Biology Open Language.
Roehner, Nicholas; Zhang, Zhen; Nguyen, Tramy; Myers, Chris J
2015-08-21
In the context of synthetic biology, model generation is the automated process of constructing biochemical models based on genetic designs. This paper discusses the use cases for model generation in genetic design automation (GDA) software tools and introduces the foundational concepts of standards and model annotation that make this process useful. Finally, this paper presents an implementation of model generation in the GDA software tool iBioSim and provides an example of generating a Systems Biology Markup Language (SBML) model from a design of a 4-input AND sensor written in the Synthetic Biology Open Language (SBOL).
Document Delivery: An Annotated Selective Bibliography.
ERIC Educational Resources Information Center
Khalil, Mounir A.; Katz, Suzanne R.
1992-01-01
Presents a selective annotated bibliography of 61 items that deal with topics related to document delivery, including networks; hypertext; interlibrary loan; computer security; electronic publishing; copyright; online catalogs; resource sharing; electronic mail; electronic libraries; optical character recognition; microcomputers; liability issues;…
How to Assess Data Availability, Accessibility and Format for Risk Analysis?
Humblet, M-F; Vandeputte, S; Mignot, C; Bellet, C; De Koeijer, A; Swanenburg, M; Afonso, A; Sanaa, M; Saegerman, C
2016-12-01
Risk assessments are mostly carried out based on available data, which do not reflect all the data theoretically required by experts to answer them. This study aimed at developing a methodology to assess data availability, accessibility and format, based on a scoring system and focusing on two diseases: Venezuelan equine encephalomyelitis (VEE), still exotic to Europe, and alveolar echinococcosis, caused by Echinococcus multilocularis (EM), endemic in several Member States (MSs). After reviewing 36 opinions of the EFSA-AHAW Panel on risk assessment of animal health questions, a generic list of needed data was elaborated. The methodology consisted, first, of a direct and an indirect survey to collect the data needed for both case studies: the direct survey consisted of a questionnaire sent to contact points of three European MSs (Belgium, France and the Netherlands) and the organization of a workshop gathering experts on both diseases. The indirect survey, focusing on the three MSs involved in the direct survey plus Spain, relied on web searches. Second, a scoring system covering data availability, accessibility and format was elaborated and, finally, used to compare both diseases and data between MSs. The accessibility of data was generally related to their availability. Web searches yielded more available data for VEE than for EM, despite VEE's current exotic status in the European Union. Hypertext Markup Language (HTML) and Portable Document Format (PDF) files were the main formats of available data. Data availability, accessibility and format should be improved for research scientists/assessors. The format of data plays a key role in the feasibility and speed of data management and analysis, through prompt compilation, combination and aggregation in working databases. Harmonization of the data collection process is encouraged, according to standardized procedures, to provide useful and reliable data at both the national and international levels for both animal and human health; it would allow data gaps to be assessed through comparative studies. The present methodology is a good way of assessing the relevance of data for risk assessment, as it allows integration of the uncertainty linked to the quality of the data used. Such an approach can be described as transparent and traceable and should be performed systematically. © 2015 Blackwell Verlag GmbH.
Automatic Text Structuring and Summarization.
ERIC Educational Resources Information Center
Salton, Gerard; And Others
1997-01-01
Discussion of the use of information retrieval techniques for automatic generation of semantic hypertext links focuses on automatic text summarization. Topics include World Wide Web links, text segmentation, and evaluation of text summarization by comparing automatically generated abstracts with manually prepared abstracts. (Author/LRW)
Educational Systems Design Implications of Electronic Publishing.
ERIC Educational Resources Information Center
Romiszowski, Alexander J.
1994-01-01
Discussion of electronic publishing focuses on the four main purposes of media in general: communication, entertainment, motivation, and education. Highlights include electronic journals and books; hypertext; user control; computer graphics and animation; electronic games; virtual reality; multimedia; electronic performance support;…
Crossroads 2000 proceedings : table of contents
DOT National Transportation Integrated Search
1998-08-01
This compilation of papers from the Crossroads 2000 Proceedings were presented from August 19-20, 1998 at Iowa State University at Ames, Iowa. From the main conference web page, link to the table of contents, which contains hypertext links to each pa...
PC-based web authoring: How to learn as little unix as possible while getting on the Web
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gennari, L.T.; Breaux, M.; Minton, S.
1996-09-01
This document is a general guide for creating Web pages, using commonly available word processing and file transfer applications. It is not a full guide to HTML, nor does it provide an introduction to the many WYSIWYG HTML editors available. The viability of the authoring method it describes will not be affected by changes in the HTML specification or the rapid release-and-obsolescence cycles of commercial WYSIWYG HTML editors. This document provides a gentle introduction to HTML for the beginner, and as the user gains confidence and experience, encourages greater familiarity with HTML through continued exposure to and hands-on usage of HTML code.
Mac-based Web authoring: How to learn as little Unix as possible while getting on the Web.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gennari, L.T.
1996-06-01
This document is a general guide for creating Web pages, using commonly available word processing and file transfer applications. It is not a full guide to HTML, nor does it provide an introduction to the many WYSIWYG HTML editors available. The viability of the authoring method it describes will not be affected by changes in the HTML specification or the rapid release-and-obsolescence cycles of commercial WYSIWYG HTML editors. This document provides a gentle introduction to HTML for the beginner and, as the user gains confidence and experience, encourages greater familiarity with HTML through continued exposure to and hands-on usage of HTML code.
NASA Astrophysics Data System (ADS)
Scianna, A.; La Guardia, M.
2018-05-01
Recently, the diffusion of knowledge on Cultural Heritage (CH) has become an element of primary importance for its valorization. At the same time, the diffusion of surveys based on Unmanned Aerial Vehicle (UAV) technologies and new methods of photogrammetric reconstruction have opened new possibilities for 3D CH representation. Furthermore, the recent development of faster and more stable internet connections has led people to increase their use of mobile devices. In the light of all this, the development of Virtual Reality (VR) environments applied to CH is strategic for the diffusion of knowledge in a smart solution. In particular, the present work shows how, starting from a basic survey and the subsequent photogrammetric reconstruction of a cultural good, it is possible to build a 3D CH interactive information system useful for desktop and mobile devices. For this experimentation, the Arab-Norman church of the Trinity of Delia (in Castelvetrano, Sicily, Italy) was adopted as the case study. The survey operations were carried out considering different rapid methods of acquisition (UAV camera, SLR camera and smartphone camera). The web platform publishing the 3D information was built using the HTML5 markup language and the WebGL JavaScript library Three.js. This work presents the construction of a 3D navigation system for web browsing of a virtual CH environment, with the integration of first-person controls and 3D popup links. This contribution adds a further step to enriching the possibilities of open-source technologies applied to the world of CH valorization on the web.
Making journals accessible to the visually impaired: the future is near
GARDNER, John; BULATOV, Vladimir; KELLY, Robert
2010-01-01
The American Physical Society (APS) has been a leader in using markup languages for publishing. ViewPlus has led development of innovative technologies for graphical information accessibility by people with print disabilities. APS, ViewPlus, and other collaborators in the Enhanced Reading Project are working together to develop the necessary technology and infrastructure for APS to publish its journals in the DAISY (Digital Accessible Information SYstem) eXtensible Markup Language (XML) format, in which all text, math, and figures would be accessible to people who are blind or have other print disabilities. The first APS DAISY XML publications are targeted for late 2010. PMID:20676358
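A DAISY book's text layer is XML; the fragment below gives the flavor using the DTBook vocabulary from the Z39.86-2005 standard (simplified and shown only as an approximation; a full publication also bundles navigation and audio-synchronization files):

    <dtbook xmlns="http://www.daisy.org/z3986/2005/dtbook/">
      <book>
        <bodymatter>
          <level1>
            <h1>Results</h1>
            <p>The measured energy gap is given by the equation below.</p>
            <!-- Math would be embedded as MathML so screen readers and
                 braille displays can render it, rather than as an image -->
          </level1>
        </bodymatter>
      </book>
    </dtbook>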
A Converter from the Systems Biology Markup Language to the Synthetic Biology Open Language.
Nguyen, Tramy; Roehner, Nicholas; Zundel, Zach; Myers, Chris J
2016-06-17
Standards are important to synthetic biology because they enable the exchange and reproducibility of genetic designs. This paper describes a procedure for converting between two standards: the Systems Biology Markup Language (SBML) and the Synthetic Biology Open Language (SBOL). SBML is a standard for behavioral models of biological systems at the molecular level. SBOL describes structural and basic qualitative behavioral aspects of a biological design. Converting SBML to SBOL enables a consistent connection between behavioral and structural information for a biological design. The conversion process described in this paper leverages Systems Biology Ontology (SBO) annotations to enable inference of a design's qualitative function.
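In SBML, an SBO term rides along on the sboTerm attribute of an element, which a converter can read to recover qualitative intent; a minimal illustration follows (the specific terms the converter relies on are not listed in the abstract, and the reaction id is a placeholder):

    <!-- SBO:0000169 identifies this reaction as an inhibition,
         letting the converter infer its qualitative role in the design -->
    <reaction id="tetR_repression" sboTerm="SBO:0000169"/>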
Bacon, James; Tardella, Neil; Pratt, Janey; Hu, John; English, James
2006-01-01
Under contract with the Telemedicine & Advanced Technology Research Center (TATRC), Energid Technologies is developing a new XML-based language for describing surgical training exercises, the Surgical Simulation and Training Markup Language (SSTML). SSTML must represent everything from organ models (including tissue properties) to surgical procedures. SSTML is an open language (i.e., freely downloadable) that defines surgical training data through an XML schema. This article focuses on the data representation of the surgical procedures and organ modeling, as they highlight the need for a standard language and illustrate the features of SSTML. Integration of SSTML with software is also discussed.
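SSTML's schema is freely downloadable rather than reproduced in the article's abstract; as a purely hypothetical fragment of the two data kinds the article highlights, organ properties and procedure steps might be encoded along these lines (all names invented):

    <sstml>
      <organ id="liver">
        <!-- Tissue parameters would drive haptic and deformation models -->
        <tissueProperty name="elasticModulus" value="5.0" units="kPa"/>
      </organ>
      <procedure name="cholecystectomy">
        <step order="1" instruction="Retract the gallbladder fundus."/>
      </procedure>
    </sstml>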
Field Markup Language: biological field representation in XML.
Chang, David; Lovell, Nigel H; Dokos, Socrates
2007-01-01
With an ever increasing number of biological models available on the internet, a standardized modeling framework is required to allow information to be accessed or visualized. Based on the Physiome Modeling Framework, the Field Markup Language (FML) is being developed to describe and exchange field information for biological models. In this paper, we describe the basic features of FML, its supporting application framework and its ability to incorporate CellML models to construct tissue-scale biological models. As a typical application example, we present a spatially-heterogeneous cardiac pacemaker model which utilizes both FML and CellML to describe and solve the underlying equations of electrical activation and propagation.
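As a loose, invented sketch of the idea (the real FML schema belongs to the Physiome Modeling Framework and is not reproduced here), a field description might bind a spatial mesh to cell-level dynamics delegated to a CellML model:

    <fieldml xmlns="urn:example:fml">
      <field name="membranePotential" valueType="scalar" units="mV">
        <mesh ref="atrialTissueMesh"/>
        <!-- Pointwise dynamics supplied by an embedded CellML model -->
        <cellModel source="pacemaker_cell.cellml" variable="V"/>
      </field>
    </fieldml>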
The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 2 Core.
Hucka, Michael; Bergmann, Frank T; Dräger, Andreas; Hoops, Stefan; Keating, Sarah M; Le Novère, Nicolas; Myers, Chris J; Olivier, Brett G; Sahle, Sven; Schaff, James C; Smith, Lucian P; Waltemath, Dagmar; Wilkinson, Darren J
2018-03-09
Computational models can help researchers to interpret data, understand biological functions, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that different software systems can exchange. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 2 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML, their encoding in XML (the eXtensible Markup Language), validation rules that determine the validity of an SBML document, and examples of models in SBML form. The design of Version 2 differs from Version 1 principally in allowing new MathML constructs, making more child elements optional, and adding identifiers to all SBML elements instead of only selected elements. Other materials and software are available from the SBML project website at http://sbml.org/.
Gaussian Process Regression (GPR) Representation in Predictive Model Markup Language (PMML)
Lechevalier, D.; Ak, R.; Ferguson, M.; Law, K. H.; Lee, Y.-T. T.; Rachuri, S.
2017-01-01
This paper describes Gaussian process regression (GPR) models presented in the Predictive Model Markup Language (PMML). PMML is an Extensible Markup Language (XML)-based standard language used to represent data-mining and predictive analytic models, as well as pre- and post-processed data. The previous PMML version, PMML 4.2, did not provide capabilities for representing probabilistic (stochastic) machine-learning algorithms that are widely used for constructing predictive models taking the associated uncertainties into consideration. The newly released PMML version 4.3, which includes the GPR model, provides new features: confidence bounds and distributions for the predictive estimations. Both features are needed to establish the foundation for uncertainty quantification analysis. Among various probabilistic machine-learning algorithms, GPR has been widely used for approximating a target function because of its capability of representing complex input and output relationships without predefining a set of basis functions, and of predicting a target output with uncertainty quantification. GPR is being employed in various manufacturing data-analytics applications, which necessitates representing this model in a standardized form for easy and rapid deployment. In this paper, we present a GPR model and its representation in PMML. Furthermore, we demonstrate a prototype using a real data set in the manufacturing domain. PMID:29202125
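Schematically, a PMML 4.3 document wraps the trained GPR model in the usual data dictionary and mining schema; the kernel element and its attribute names below are an approximation and should be checked against the DMG schema, and the field names are invented for a manufacturing-style example:

    <PMML xmlns="http://www.dmg.org/PMML-4_3" version="4.3">
      <Header description="GPR example"/>
      <DataDictionary numberOfFields="2">
        <DataField name="toolWear" optype="continuous" dataType="double"/>
        <DataField name="surfaceRoughness" optype="continuous" dataType="double"/>
      </DataDictionary>
      <GaussianProcessModel functionName="regression">
        <MiningSchema>
          <MiningField name="toolWear"/>
          <MiningField name="surfaceRoughness" usageType="target"/>
        </MiningSchema>
        <!-- Learned kernel hyperparameters; predictions then carry variances,
             supporting the confidence bounds new in PMML 4.3 -->
        <RadialBasisKernel gamma="1.2" noiseVariance="0.05" lambda="0.8"/>
      </GaussianProcessModel>
    </PMML>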
Extreme Markup: The Fifty US Hospitals With The Highest Charge-To-Cost Ratios.
Bai, Ge; Anderson, Gerard F
2015-06-01
Using Medicare cost reports, we examined the fifty US hospitals with the highest charge-to-cost ratios in 2012. These hospitals have markups (ratios of charges over Medicare-allowable costs) of approximately ten, compared to a national average of 3.4 and a mode of 2.4. Analysis of the fifty hospitals showed that forty-nine are for profit (98 percent), forty-six are owned by for-profit hospital systems (92 percent), and twenty (40 percent) operate in Florida. One for-profit hospital system owns half of these fifty hospitals. While most public and private health insurers do not use hospital charges to set their payment rates, uninsured patients are commonly asked to pay the full charges, and out-of-network patients and casualty and workers' compensation insurers are often expected to pay a large portion of the full charges. Because it is difficult for patients to compare prices, market forces fail to constrain hospital charges. Federal and state governments may want to consider limitations on the charge-to-cost ratio, some form of all-payer rate setting, or mandated price disclosure to regulate hospital markups. Project HOPE—The People-to-People Health Foundation, Inc.
Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 2.
Bergmann, Frank T; Cooper, Jonathan; Le Novère, Nicolas; Nickerson, David; Waltemath, Dagmar
2015-09-04
The number, size and complexity of computational models of biological systems are growing at an ever increasing pace. It is imperative to build on existing studies by reusing and adapting existing models and parts thereof. The description of the structure of models is not sufficient to enable the reproduction of simulation results. One also needs to describe the procedures the models are subjected to, as recommended by the Minimum Information About a Simulation Experiment (MIASE) guidelines. This document presents Level 1 Version 2 of the Simulation Experiment Description Markup Language (SED-ML), a computer-readable format for encoding simulation and analysis experiments to apply to computational models. SED-ML files are encoded in the Extensible Markup Language (XML) and can be used in conjunction with any XML-based model encoding format, such as CellML or SBML. A SED-ML file includes details of which models to use, how to modify them prior to executing a simulation, which simulation and analysis procedures to apply, which results to extract and how to present them. Level 1 Version 2 extends the format by allowing the encoding of repeated and chained procedures.
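An abridged SED-ML Level 1 Version 2 file shows the shape of such a description: which model, which simulation, and the task binding them. Data generators and outputs, which a complete file must also declare, are omitted here; the model file name is a placeholder, and the KiSAO identifier names the CVODE integrator:

    <sedML xmlns="http://sed-ml.org/sed-ml/level1/version2" level="1" version="2">
      <listOfModels>
        <model id="m1" language="urn:sedml:language:sbml" source="oscillator.xml"/>
      </listOfModels>
      <listOfSimulations>
        <uniformTimeCourse id="sim1" initialTime="0" outputStartTime="0"
                           outputEndTime="100" numberOfPoints="1000">
          <algorithm kisaoID="KISAO:0000019"/>
        </uniformTimeCourse>
      </listOfSimulations>
      <listOfTasks>
        <!-- A task pairs one model with one simulation setup -->
        <task id="t1" modelReference="m1" simulationReference="sim1"/>
      </listOfTasks>
    </sedML>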
Pricing and components analysis of some key essential pediatric medicine in Odisha state.
Samal, Satyajit; Swain, Trupti Rekha
2017-01-01
A study highlighting the prices patients actually pay at the ground level is important for interventions such as alternate procurement schemes, or for expediting the regulatory assessment of essential medicines for children. The present study was undertaken to analyze the pricing and price components of a few key essential medicines in Odisha state. Six child-specific medicines of different formulations were selected based on their use in different disease conditions and their wide price variation. Data were collected, entered, and analyzed in the price components data collection form of the World Health Organization-Health Action International (WHO-HAI) 2007 Workbook version 5 - Part II, provided as part of the WHO/HAI methodology. The analysis includes the cumulative percent markup, the total cumulative percent markup, and the percent contribution of individual components to the final medicine price in both the public and private sectors of Odisha state. Add-on costs such as taxes and wholesale and retail markups contribute substantially to the final price of medicines in the private sector, particularly for branded-generic products. The largest contributor to add-on costs is the retail markup. Policy should be framed to achieve greater transparency and uniformity in the pricing of medicines across the different health sectors of Odisha.
Authoritative Authoring: Software That Makes Multimedia Happen.
ERIC Educational Resources Information Center
Florio, Chris; Murie, Michael
1996-01-01
Compares seven mid- to high-end multimedia authoring software systems that combine graphics, sound, animation, video, and text for Windows and Macintosh platforms. A run-time project was created with each program using video, animation, graphics, sound, formatted text, hypertext, and buttons. (LRW)
ERIC Educational Resources Information Center
Microcomputers for Information Management, 1995
1995-01-01
Provides definitions for 71 terms related to the Internet, including Archie, bulletin board system, cyberspace, e-mail (electronic mail), file transfer protocol, gopher, hypertext, integrated services digital network, local area network, listserv, modem, packet switching, server, telnet, UNIX, WAIS (wide area information servers), and World Wide…
Streamlining the Process of Acquiring Secure Open Architecture Software Systems
2013-10-08
Microsoft.NET, Enterprise Java Beans, GNU Lesser General Public License (LGPL) libraries, and data communication protocols like the Hypertext Transfer...NetBeans development environments), customer relationship management (SugarCRM), database management systems (PostgreSQL, MySQL), operating
Interfaces for End-User Information Seeking.
ERIC Educational Resources Information Center
Marchionini, Gary
1992-01-01
Discusses essential features of interfaces to support end-user information seeking. Highlights include cognitive engineering; task models and task analysis; the problem-solving nature of information seeking; examples of systems for end-users, including online public access catalogs (OPACs), hypertext, and help systems; and suggested research…
Hypermedia Concepts and Research: An Overview.
ERIC Educational Resources Information Center
Burton, John K.; And Others
1995-01-01
Provides an overview of hypermedia, including a history of hypertext and multimedia, and discusses how they have been combined into the term hypermedia; a cognitive overview; dual coding and cue summation; and theories related to learners, including field dependence-independence, memory, and metacognition. Contains 156 references. (LRW)
A Manifesto for Instructional Technology: Hyperpedagogy.
ERIC Educational Resources Information Center
Dwight, Jim; Garrison, Jim
2003-01-01
Calls for digital technology in education to embrace forms of pedagogy appropriate for hypertext, challenging western metaphysics and relying on the philosophy of John Dewey to propose an alternative. The paper reviews dominant models of curriculum, especially Ralph Tyler's, revealing their concealed metaphysical assumptions; shows that the…
Multimedia as Rhizome: Design Issues in a Network Environment.
ERIC Educational Resources Information Center
Burnett, Kathleen
1992-01-01
Defines the concepts of hypertext, hypermedia, multimedia, and multimedia networks. Using the rhizome as a metaphor for electronically mediated exchange, a theory of hypermedia design that incorporates principles of connection and heterogeneity, multiplicity, asignifying rupture, and cartography and decalomania is explored. (four references) (MES)
Immigration Policies in Europe: Impact on Crime -- A Case Study of Germany
2008-06-01
2180188,00.html?maca=en-bulletin-433-html. Accessed 24 Feb 2008. 51 448,000 illegal immigrants up until September 2006 without EU assistance.168 The...asylum. 168 Your Link to Germany. http://www.dw-world.de/dw/article/0,,2180188,00.html?maca=en...for a European Response to Illegal Immigration" 21 September 2006. http://www.dw-world.de/dw/article/0,,2180188,00.html?maca=en-bulletin-433-html
Hydrologic Data for Deep Creek Lake and Selected Tributaries, Garrett County, Maryland, 2007-08
Banks, William S.L.; Davies, William J.; Gellis, Allen C.; LaMotte, Andrew E.; McPherson, Wendy S.; Soeder, Daniel J.
2010-01-01
Introduction Recent and ongoing efforts to develop the land in the area around Deep Creek Lake, Garrett County, Maryland, are expected to change the volume of sediment moving toward and into the lake, as well as impact the water quality of the lake and its many tributaries. With increased development, there is an associated increased demand for groundwater and surface-water withdrawals, as well as boat access. Proposed dredging of the lake bottom to improve boat access has raised concerns about the adverse environmental effects such activities would have on the lake. The Maryland Department of Natural Resources (MDDNR) and the U.S. Geological Survey (USGS) entered into a cooperative study during 2007 and 2008 to address these issues. This study was designed to address several objectives to support MDDNR's management strategy for Deep Creek Lake. The objectives of this study were to: Determine the current physical shape of the lake through bathymetric surveys; Initiate flow and sediment monitoring of selected tributaries to characterize the stream discharge and sediment load of lake inflows; Determine sedimentation rates using isotope analysis of sediment cores; Characterize the degree of hydraulic connection between the lake and adjacent aquifer systems; and Develop an estimate of water use around Deep Creek Lake. Summary of Activities Data were collected in Deep Creek Lake and in selected tributaries from September 2007 through September 2008. The methods of investigation are presented here and all data have been archived according to USGS policy for future use. The material presented in this report is intended to provide resource managers and policy makers with a broad understanding of the bathymetry, surface water, sedimentation rates, groundwater, and water use in the study area. The report is structured so that the reader can access each topic separately using any Hypertext Markup Language (HTML) reader. In order to establish a baseline water-depth map of Deep Creek Lake, a bathymetric survey of the lake bottom was conducted in 2007. The data collected were used to generate a bathymetric map depicting depth to the lake bottom from a full pool elevation of 2,462 feet (National Geodetic Vertical Datum of 1929). Data were collected along about 90 linear miles across the lake using a fathometer and a differentially corrected global positioning system. As part of a long-term monitoring plan for all surface-water inputs to the lake, streamflow data were collected continuously at two stations constructed on Poland Run and Cherry Creek. The sites were selected to represent areas of the watershed under active development and areas that are relatively stable with respect to development. Twelve months of discharge data are provided for both streams. In addition, five water-quality parameters were collected continuously at the Poland Run station including pH, specific conductance, temperature, dissolved oxygen, and turbidity. Water samples collected at Poland Run were analyzed for sediment concentration, and the results of this analysis were used to estimate the annual sediment load into Deep Creek Lake from Poland Run. To determine sedimentation rates, cores of lake-bottom sediments were collected at 23 locations. Five of the cores were analyzed using a radiometric-dating method, allowing average rates of sedimentation to be estimated for the time periods 1925 to 2008, 1925 to 1963, and 1963 to 2008.
Particle-size data from seven cores collected at locations throughout the study area were analyzed to provide information on the amount of fine material in lake-bed sediments. Groundwater levels were monitored continuously in four wells and weekly in nine additional wells during October, November, and December of 2008. Water levels were compared to recorded lake levels and precipitation during the same period to determine the effect of lake-level drawdown and recovery on the adjacent aquifer systems. Water use in the Deep Creek Lake wa
Optimality of profit-including prices under ideal planning.
Samuelson, P A
1973-07-01
Although prices calculated by a constant percentage markup on all costs (nonlabor as well as direct-labor) are usually admitted to be more realistic for a competitive capitalistic model, the view is often expressed that, for optimal planning purposes, the "values" model of Marx's Capital, Volume I, is to be preferred. It is shown here that an optimal-control model that maximizes discounted social utility of consumption per capita and that ultimately approaches a steady state must ultimately have optimal pricing that involves equal rates of steady-state profit in all industries; and such optimal pricing will necessarily deviate from Marx's model of equal rates of surplus value (markups on direct-labor only) in all industries.
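The contrast can be written compactly. A sketch in standard linear production-price notation, which is assumed here rather than taken from the paper: with input-coefficient matrix A, direct-labor vector l, and wage w, equal profit rates in all industries mean

    \[ p = (1+r)\,(pA + w\,l) \qquad \text{(equal profit rate on all costs)} \]

whereas Marx's equal rates of surplus value mark up direct labor only:

    \[ p = pA + (1+s)\,w\,l \qquad \text{(equal surplus-value rate on direct labor)} \]

The abstract's claim is that optimal steady-state prices satisfy the first system, not the second.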
NASA Astrophysics Data System (ADS)
The following hearings and markups have been tentatively scheduled for the coming weeks by the Senate and House of Representatives. Dates and times should be verified with the committee or subcommittee holding the hearing or markup; all offices on Capitol Hill may be reached by telephoning 202-224-3121. For guidelines on contacting a member of Congress, see AGU's Guide to Legislative Information and Contacts (Eos, August 28, 1984, p. 669).October 8: A joint hearing by the Energy Research & Development Subcommittee of the Senate Energy and Natural Resources Committee and the Nuclear Regulation Subcommittee of the Senate Environment and Public Works Committee on low-level radioactive waste (S. 1517 and S. 1578). Room SD-366, Dirksen Building, 9:30 A.M.
Earth Science Markup Language: Transitioning From Design to Application
NASA Technical Reports Server (NTRS)
Moe, Karen; Graves, Sara; Ramachandran, Rahul
2002-01-01
The primary objective of the proposed Earth Science Markup Language (ESML) research is to transition from design to application. The resulting schema and prototype software will foster community acceptance for the "define once, use anywhere" concept central to ESML. Supporting goals include: 1. Refinement of the ESML schema and software libraries in cooperation with the user community. 2. Application of the ESML schema and software libraries to a variety of Earth science data sets and analysis tools. 3. Development of supporting prototype software for enhanced ease of use. 4. Cooperation with standards bodies in order to assure ESML is aligned with related metadata standards as appropriate. 5. Widespread publication of the ESML approach, schema, and software.
Hoelzer, Simon; Schweiger, Ralf K; Liu, Raymond; Rudolf, Dirk; Rieger, Joerg; Dudeck, Joachim
2005-01-01
With the introduction of the ICD-10 as the standard for diagnosis, the development of an electronic representation of its complete content, inherent semantics and coding rules is necessary. Our concept refers to current efforts of the CEN/TC 251 to establish a European standard for hierarchical classification systems in healthcare. We have developed an electronic representation of the ICD-10 with the extensible Markup Language (XML) that facilitates the integration in current information systems or coding software taking into account different languages and versions. In this context, XML offers a complete framework of related technologies and standard tools for processing that helps to develop interoperable applications.
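A hedged sketch of what such a representation can look like, using Python's standard library; the element and attribute names here are invented for illustration and are not the CEN/TC 251 standard or the authors' actual schema.

    import xml.etree.ElementTree as ET

    # Toy ICD-10-like hierarchy with per-language labels. Invented names.
    chapter = ET.Element("class", {"code": "J00-J99", "kind": "chapter"})
    ET.SubElement(chapter, "label", {"lang": "en"}).text = \
        "Diseases of the respiratory system"
    category = ET.SubElement(chapter, "class", {"code": "J45", "kind": "category"})
    ET.SubElement(category, "label", {"lang": "en"}).text = "Asthma"
    ET.SubElement(category, "label", {"lang": "de"}).text = "Asthma bronchiale"
    print(ET.tostring(chapter, encoding="unicode"))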
Hypertext or Textbook: Effects on Motivation and Gain in Knowledge
ERIC Educational Resources Information Center
Conradty, Cathérine; Bogner, Franz X.
2016-01-01
Computers are considered innovative in classrooms, raising expectations of increased cognitive learning outcomes or motivation with effects on Deeper Learning (DL). The "new medium", however, may cause cognitive overloads. Combined with gender-related variations in ability, self-efficacy or self-confidence, computers may even diminish…
Non-Print Social Studies Materials--Elementary School Level.
ERIC Educational Resources Information Center
Lynn, Karen
Types of non-print social studies materials developed for presentation to, and use by, elementary school students are identified. "Non-print" materials include films, filmstrips, video cassettes, audio recordings, computer databases, telecommunications, and hypertext. An explanation of why elementary school students can benefit from the use of…
The Classroom Manager. Hands-on Multimedia.
ERIC Educational Resources Information Center
Kaplan, Nancy; And Others
1992-01-01
Four teachers discuss how they help students create hands-on, multimedia reports and presentations. Ideas include using hypertext programs on classroom computers to make computerized notecards of data on study topics, using CD-ROM disks for research, creating storyboards of videotaped reports, and setting up schedules for videotaping. (SM)
School and Situated Knowledge: Travel or Tourism?
ERIC Educational Resources Information Center
Damarin, Suzanne K.
1993-01-01
Examines issues related to situated cognition and learning, both in the classroom and in the world. Topics discussed include educational theories; the situated nature of knowledge; the perception of experts; and the role of technology in situated learning, including virtual reality, hypertext, and telecommunications. (26 references) (LRW)
Pulling the Internet Together with Mosaic.
ERIC Educational Resources Information Center
Sheehan, Mark
1995-01-01
Presents the history of the Internet with specific emphasis on Mosaic; discusses hypertext and hypermedia information; and describes software and hardware requirements. Sidebars include information on the National Center for Super Computing Applications (NCSA); World Wide Web browsers for use in Windows, Macintosh, and X-Windows (UNIX); and…
Teaching Hypertext and Hypermedia through the Web.
ERIC Educational Resources Information Center
de Bra, Paul M. E.
This paper describes a World Wide Web-based introductory course titled "Hypermedia Structures and Systems," offered as an optional part of the curriculum in computing science at the Eindhoven University of Technology (Netherlands). The technical environment for the current (1996) edition of the course is presented, which features…
Automatic Text Decomposition and Structuring.
ERIC Educational Resources Information Center
Salton, Gerard; And Others
1996-01-01
Text similarity measurements are used to determine relationships between natural-language texts and text excerpts. The resulting linked hypertext maps can be broken down into text segments and themes used to identify different text types and structures, leading to improved information access and utilization. Examples are provided for text…
Reading and Writing in Multimodal Contexts: Exploring the Deictic Nature of Literacy
ERIC Educational Resources Information Center
Bailey, Margaret Denice
2012-01-01
This study examined the reading and writing processes that seventh-graders used in hypertext versus traditional print environments. Additionally, it considered the impact of incorporating technology and collaboration into pedagogical practice. Three separate literacy activities involved students in finding information, creating presentations, and…
ERIC Educational Resources Information Center
Neal, James G.
1999-01-01
Examines the changes that are affecting academic library collection development. Highlights include computer technology; digital information; networking; virtual reality; hypertext; fair use and copyrights; technological infrastructure; digital libraries; information policy; academic and scholarly publishing; and experiences at the Johns Hopkins…
Hypertext and the Art of Memory.
ERIC Educational Resources Information Center
Storkerson, Peter; Wong, Janine
1997-01-01
Posits that intelligibility is a persistent problem in interactive multimedia and hypermedia. Describes the Art of Memory, a visual and symbolic mnemonic method used to map new information onto familiar and symbolically different structures. Presents the Art of Memory as a way to offer insight into intelligibility. (PA)
Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 3 (L1V3).
Bergmann, Frank T; Cooper, Jonathan; König, Matthias; Moraru, Ion; Nickerson, David; Le Novère, Nicolas; Olivier, Brett G; Sahle, Sven; Smith, Lucian; Waltemath, Dagmar
2018-03-19
The creation of computational simulation experiments to inform modern biological research poses challenges to reproduce, annotate, archive, and share such experiments. Efforts such as SBML or CellML standardize the formal representation of computational models in various areas of biology. The Simulation Experiment Description Markup Language (SED-ML) describes what procedures the models are subjected to, and the details of those procedures. These standards, together with further COMBINE standards, describe models sufficiently well for the reproduction of simulation studies among users and software tools. The Simulation Experiment Description Markup Language (SED-ML) is an XML-based format that encodes, for a given simulation experiment, (i) which models to use; (ii) which modifications to apply to models before simulation; (iii) which simulation procedures to run on each model; (iv) how to post-process the data; and (v) how these results should be plotted and reported. SED-ML Level 1 Version 1 (L1V1) implemented support for the encoding of basic time course simulations. SED-ML L1V2 added support for more complex types of simulations, specifically repeated tasks and chained simulation procedures. SED-ML L1V3 extends L1V2 by means to describe which datasets and subsets thereof to use within a simulation experiment.
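A sketch of pulling those five parts back out of a SED-ML file with the standard library. The namespace URI matches SED-ML L1V3; the file name is a placeholder, and real tooling would normally use a dedicated library such as libSEDML instead.

    import xml.etree.ElementTree as ET

    NS = "{http://sed-ml.org/sed-ml/level1/version3}"  # SED-ML L1V3 namespace
    root = ET.parse("experiment.sedml").getroot()      # placeholder file name
    # The five concerns listed above map onto five top-level lists.
    for part in ("listOfModels", "listOfSimulations", "listOfTasks",
                 "listOfDataGenerators", "listOfOutputs"):
        ids = [child.get("id") for child in root.findall(NS + part + "/*")]
        print(part, "->", ids)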
SBRML: a markup language for associating systems biology data with models.
Dada, Joseph O; Spasić, Irena; Paton, Norman W; Mendes, Pedro
2010-04-01
Research in systems biology is carried out through a combination of experiments and models. Several data standards have been adopted for representing models (Systems Biology Markup Language) and various types of relevant experimental data (such as FuGE and those of the Proteomics Standards Initiative). However, until now, there has been no standard way to associate a model and its entities to the corresponding datasets, or vice versa. Such a standard would provide a means to represent computational simulation results as well as to frame experimental data in the context of a particular model. Target applications include model-driven data analysis, parameter estimation, and sharing and archiving model simulations. We propose the Systems Biology Results Markup Language (SBRML), an XML-based language that associates a model with several datasets. Each dataset is represented as a series of values associated with model variables, and their corresponding parameter values. SBRML provides a flexible way of indexing the results to model parameter values, which supports both spreadsheet-like data and multidimensional data cubes. We present and discuss several examples of SBRML usage in applications such as enzyme kinetics, microarray gene expression and various types of simulation results. The XML Schema file for SBRML is available at http://www.comp-sys-bio.org/SBRML under the Academic Free License (AFL) v3.0.
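A minimal sketch of the association SBRML makes, with invented element names shaped by the abstract's description (values indexed by model variable and by parameter value); consult the published schema for the real vocabulary.

    import xml.etree.ElementTree as ET

    # One result series tied to model variable S1 at parameter k1 = 0.5.
    sbrml = ET.Element("sbrml", {"modelReference": "model1"})
    series = ET.SubElement(sbrml, "valueSeries",
                           {"variable": "S1", "parameter": "k1",
                            "parameterValue": "0.5"})
    for t, v in [(0, 1.00), (1, 0.61), (2, 0.37)]:
        ET.SubElement(series, "value", {"time": str(t)}).text = str(v)
    print(ET.tostring(sbrml, encoding="unicode"))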
Pricing and components analysis of some key essential pediatric medicine in Odisha state
Samal, Satyajit; Swain, Trupti Rekha
2017-01-01
Objective: A study highlighting the prices patients actually pay at the ground level is important for interventions such as alternate procurement schemes or to expedite regulatory assessment of essential medicines for children. The present study was undertaken to analyze the pricing and price components of a few key essential medicines in Odisha state. Methodology: Six child-specific medicines of different formulations were selected based on use in different disease conditions and having the widest pricing variation. Data were collected, entered, and analyzed in the price components data collection form of the World Health Organization-Health Action International (WHO-HAI) 2007 Workbook version 5 – Part II, provided as part of the WHO/HAI methodology. The analysis includes the cumulative percent markup, total cumulative percent markup, and percent contribution of individual components to the final medicine price in both the public and private sectors of Odisha state. Results: Add-on costs such as taxes, wholesale markups, and retail markups contribute substantially to the final price of medicines in the private sector, particularly for branded-generic products. The largest contributor to add-on costs is the retail level. Conclusion: Policy should be framed to achieve greater transparency and uniformity of medicine pricing across the different health sectors of Odisha. PMID:28458429
Boivin, Rémi
2014-03-01
Illegal drug prices are extremely high, compared to similar goods. There is, however, considerable variation in value depending on place, market level and type of drugs. A prominent framework for the study of illegal drugs is the "risks and prices" model (Reuter & Kleiman, 1986). Enforcement is seen as a "tax" added to the regular price. In this paper, it is argued that such economic models are not sufficient to explain price variations at country-level. Drug markets are analysed as global trade networks in which a country's position has an impact on various features, including illegal drug prices. This paper uses social network analysis (SNA) to explain price markups between pairs of countries involved in the trafficking of illegal drugs between 1998 and 2007. It aims to explore a simple question: why do prices increase between two countries? Using relational data from various international organizations, separate trade networks were built for cocaine, heroin and cannabis. Wholesale price markups are predicted with measures of supply, demand, risks of seizures, geographic distance and global positioning within the networks. Reported prices (in $US) and purchasing power parity-adjusted values are analysed. Drug prices increase more sharply when drugs are headed to countries where law enforcement imposes higher costs on traffickers. The position and role of a country in global drug markets are also closely associated with the value of drugs. Price markups are lower if the destination country is a transit to large potential markets. Furthermore, price markups for cocaine and heroin are more pronounced when drugs are exported to countries that are better positioned in the legitimate world-economy, suggesting that relations in legal and illegal markets are directed in opposite directions. Consistent with the world-system perspective, evidence is found of coherent world drug markets driven by both local realities and international relations.
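The quantity being predicted is a relative price difference between trading partners. A one-function sketch with invented prices:

    def price_markup(origin_price: float, destination_price: float) -> float:
        """Percent wholesale price markup between an origin-destination pair."""
        return (destination_price - origin_price) / origin_price * 100

    # Hypothetical pair: 2,000 per unit at origin, 10,000 at destination.
    print(price_markup(2_000, 10_000))  # -> 400.0 percent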
NASA Astrophysics Data System (ADS)
Tucker, Russell Jay
2002-09-01
Today the electric industry in the U.S. is transitioning to competitive markets for wholesale electricity. Independent system operators (ISOs) now manage broad regional markets for electrical energy in several areas of the U.S. A recent rulemaking by the Federal Energy Regulatory Commission (FERC) encourages the development of regional transmission organizations (RTOs) and restructured competitive wholesale electricity markets nationwide. To date, the transition to competitive wholesale markets has not been easy. The increased reliance on market forces coupled with unusually high electricity demand for some periods have created conditions amenable to market power abuse in many regions throughout the U.S. In the summer of 1999, hot and humid summer conditions in Pennsylvania, New Jersey, Maryland, Delaware, and the District of Columbia pushed peak demand in the PJM Interconnection to record levels. These demand conditions coincided with the introduction of market-based pricing in the wholesale electricity market. Prices for electricity increased on average by 55 percent, and reached the $1,000/MWh range. This study examines the extent to which generator market power raised prices above competitive levels in the PJM Interconnection during the summer of 1999. It simulates hourly market-clearing prices assuming competitive market behavior and compares these prices with observed market prices in computing price markups over the April 1-August 31, 1999 period. The results of the simulation analysis are supported with an examination of actual generator bid data of incumbent generators. Price markups averaged 14.7 percent above expected marginal cost over the 5-month period for all non-transmission-constrained hours. The evidence presented suggests that the June and July monthly markups were strongly influenced by generator market power as price inelastic peak demand approached the electricity generation capacity constraint of the market. While this analysis of the performance of the PJM market finds evidence of market power, the measured markups are markedly less than estimates from prior analysis of the PJM market.
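In formula form, using a standard definition of the price-cost markup (the study's exact estimator is assumed to be of this shape):

    \[ m_h = \frac{P_h - \widehat{MC}_h}{\widehat{MC}_h} \times 100\% \]

where P_h is the observed market price in hour h and \widehat{MC}_h is the simulated competitive (marginal-cost) price; averaging m_h over the non-transmission-constrained hours yields the reported 14.7 percent.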
Dealing Your Own Hands with Hypercard.
ERIC Educational Resources Information Center
Larsen, Mark D.
1988-01-01
Extensively reviews HyperCard, a multifaceted software package for the Macintosh. HyperCard uses a scripting language, HyperTalk, patterned after everyday language and designed to allow flexibility in the linking and manipulation of text, graphics, and sounds. Describes one use for HyperCard in an advanced course on Latin American…
Using Technology To Enhance Literacy in Elementary School Children.
ERIC Educational Resources Information Center
Christie, Alice
The electronic information age is here, and adults as well as children are using new ways to gather and generate information. Electronics users are writing in hypertext; exploring cyberspace; living in virtual communities; scooping interactively with CD-ROMs and laserdiscs; using File Transfer Protocols to upload and download information from…
Individual Variation in Children's Reading Comprehension across Digital Text Types
ERIC Educational Resources Information Center
Fesel, Sabine S.; Segers, Eliane; Verhoeven, Ludo
2018-01-01
The present study examined children's digital text comprehension of digital text types linear digital text vs hypertext, with or without graphical navigable overviews. We investigated to what extent individual variation in children's comprehension could be explained by lexical quality (word reading efficiency and vocabulary knowledge), cognitive…
A Prospectus for the Future Development of a Speech Lab: Hypertext Applications.
ERIC Educational Resources Information Center
Berube, David M.
This paper presents a plan for the next generation of speech laboratories which integrates technologies of modern communication in order to improve and modernize the instructional process. The paper first examines the application of intermediate technologies including audio-video recording and playback, computer assisted instruction and testing…
The Impact of Hypermedia Instructional Materials on Study Self-Regulation in College Students.
ERIC Educational Resources Information Center
Nelms, Keith R.
The metacognition "calibration of comprehension" research paradigm is used to investigate the question of whether the introduction of hypertext and hypermedia into college instruction impacts students' ability to regulate their own learning processes. Presentation technology (paper or computer) and content structure (linear or nonlinear) were…
ERIC Educational Resources Information Center
Raney, Mardell
1998-01-01
Discussion with Vinton G. Cerf, widely known as father of the Internet and creator of the original email system, focuses on societal implications of the Internet; filtering; hypertext; email; the need for a global legal framework; e-commerce and potential for Web-based businesses; and implications of the Internet for education. (LRW)
Expanding Academic Vocabulary with an Interactive On-Line Database
ERIC Educational Resources Information Center
Horst, Marlise; Cobb, Tom; Nicolae, Ioana
2005-01-01
University students used a set of existing and purpose-built on-line tools for vocabulary learning in an experimental ESL course. The resources included concordance, dictionary, cloze-builder, hypertext, and a database with interactive self-quizzing feature (all freely available at www.lextutor.ca). The vocabulary targeted for learning consisted…
1992-06-01
Paper, Version 2.0, December 1989. [Woodcock90] Gary Woodcock, Automated Generation of Hypertext Documents, CIVC Technical Report (working paper...environment setup, performance testing, assessor testing, and analysis) of the ACEC. A captive scenario example could be developed that would guide the
A Hypertext Tutor for Teaching Principles and Techniques of GIS.
ERIC Educational Resources Information Center
Keller, C. Peter; And Others
1996-01-01
Outlines the teaching environment that led to the conception of a digital tutor for teaching the concepts and techniques of geographic information systems (GIS). Explains the design and prototyping, introduces the tutor's capabilities, and shares insights gained from using this teaching aid. Includes teachers' and students' responses. (MJP)
Screen Miniatures as Icons for Backward Navigation in Content-Based Software.
ERIC Educational Resources Information Center
Boling, Elizabeth; Ma, Guoping; Tao, Chia-Wen; Askun, Cengiz; Green, Tim; Frick, Theodore; Schaumburg, Heike
Users of content-based software programs, including hypertexts and instructional multimedia, rely on the navigation functions provided by the designers of those programs. Typical navigation schemes use abstract symbols (arrows) to label basic navigational functions like moving forward or backward through screen displays. In a previous study, the…
Artificial Intelligence and School Library Media Centers.
ERIC Educational Resources Information Center
Young, Robert J.
1990-01-01
Discusses developments in artificial intelligence in terms of their impact on school library media centers and the role of media specialists. Possible uses of expert systems, hypertext, and CD-ROM technologies in school media centers are examined and the challenges presented by these technologies are discussed. Fourteen sources of additional…
Constructing Knowledge from an Ill-Structured Domain: Testing a Multimedia Hamlet.
ERIC Educational Resources Information Center
Barnes, William G. W.
How a multimedia program that employs concept maps and hypertext for teaching "Hamlet" facilitated comprehension in an undergraduate course is described. Results suggest factors that instructional designers should take into account to improve learning. Thirty-six upper-division college students were enrolled in a course on Shakespeare at…
Emerging Pedagogy: Teaching Digital Hypertexts in Social Contexts.
ERIC Educational Resources Information Center
Strasma, Kip
2001-01-01
Considers how teachers, particularly in the classroom, should take advantage of the multiple aspects of narrative time constructed through hypertextual duration, frequency, and order. Uses an ethnographic study of two college courses to illustrate several of these opportunities as they subvert the dominant orders of textuality totalized by…
Books Online: Visions, Plans, and Perspectives for Electronic Text.
ERIC Educational Resources Information Center
Basch, Reva
1991-01-01
Discussion of current applications of and future possibilities for electronic text, or e-text, focuses on activities in the area of higher education. Topics covered are input technology, including optical scanners and keyboarding; standardization; copyright issues; access to e-text through networks; user interface; hypertext; software; shareware;…
ERIC Educational Resources Information Center
Stammen, Ronald M.
This paper explores how educators are using multimedia for distance learning, beginning with definitions of the concepts of multimedia, hypermedia, hypertext, distance education and distance learning. Three types of telecommunications technologies are described: multimedia with broadcast television, multimedia with interactive video (television),…
Generating a Professional Portfolio in the Writing Center: A Hypertext Tutor.
ERIC Educational Resources Information Center
Cullen, Roxanne; Balkema, Sandra
1995-01-01
Notes that Ferris State University's writing center uses HyperCard software in the Macintosh environment to assist students in technical/professional programs to develop professional portfolios. Suggests that this approach offers consistent instruction and equal access to content information as approved by faculty in specified disciplines in a…
The Impact of Text Browsing on Text Retrieval Performance.
ERIC Educational Resources Information Center
Bodner, Richard C.; Chignell, Mark H.; Charoenkitkarn, Nipon; Golovchinsky, Gene; Kopak, Richard W.
2001-01-01
Compares empirical results from three experiments using Text Retrieval Conference (TREC) data and search topics that involved three different user interfaces. Results show that marking Boolean queries on text, which encourages browsing, and hypertext interfaces to text retrieval systems can benefit recall and can also benefit novice users.…
Effect of Hypertextual Reading on Academic Success and Comprehension Skills
ERIC Educational Resources Information Center
Durukan, Erhan
2014-01-01
As computer technology has developed, hypertexts have emerged as an influential environment for developing language skills. This study aims to evaluate a text prepared in a hypertextual environment and its effects on academic success and comprehension skills. In this study, a pretest-posttest control-group experimental design was used…
2015-12-01
FOV Field of view GEO Geosynchronous, or geostationary, earth orbit HEO Highly elliptical earth orbit HTTP Hypertext transfer protocol HTTPS...orbit (MEO), geosynchronous or geostationary earth orbit (GEO), and highly elliptical earth orbit (HEO) [38]. Furthermore, if we consider the actual
Hermeneutics, Accreting Receptions, Hypermedia: A Tool for Reference Versus a Tool for Instruction.
ERIC Educational Resources Information Center
Nissan, Ephraim; Rossler, Isaac; Weiss, Hillel
1997-01-01
Provides a select overview of hypertext and information retrieval tools that support traditional Jewish learning and discusses a project in instructional hypermedia that is applied to teaching, teacher training, and self-instruction in given Bible passages. Highlights include accretion of receptions, hermeneutics, literary appropriations, and…
Hypertext Image Retrieval: The Evolution of an Application.
ERIC Educational Resources Information Center
Roberts, G. Louis; Kenney, Carol E.
1991-01-01
Describes the development and implementation of a full-text image retrieval system at the Boeing Commercial Airplane Group. The conversion of card formats to a microcomputer-based system using HyperCard is described; the online system architecture is explained; and future plans are discussed, including conversion to digital images. (LRW)
Database Management Systems: New Homes for Migrating Bibliographic Records.
ERIC Educational Resources Information Center
Brooks, Terrence A.; Bierbaum, Esther G.
1987-01-01
Assesses bibliographic databases as part of visionary text systems such as hypertext and scholars' workstations. Downloading is discussed in terms of the capability to search records and to maintain unique bibliographic descriptions, and relational database management systems, file managers, and text databases are reviewed as possible hosts for…
Incorporating Digital E-Books into Educational Curriculum
ERIC Educational Resources Information Center
Turner, Freda
2005-01-01
The first books were probably the Egyptian scrolls of papyrus that provided lineal content to readers. Today (2005) the Internet technology presents the Internet lifestyle that has introduced electronic or e-books that can enrich learning experiences. E-books have an advantage over traditional books in that they offer hypertext linking, search…
28 CFR 75.8 - Location of the statement.
Code of Federal Regulations, 2012 CFR
2012-07-01
... that opens upon the viewer's clicking or mousing-over a hypertext link that states, “18 U.S.C. 2257... this section, a digital video disc (DVD) containing multiple depictions is a single matter for which... 29619, May 24, 2005, as amended at 73 FR 77471, Dec. 18, 2008] ...
28 CFR 75.8 - Location of the statement.
Code of Federal Regulations, 2014 CFR
2014-07-01
... that opens upon the viewer's clicking or mousing-over a hypertext link that states, “18 U.S.C. 2257... this section, a digital video disc (DVD) containing multiple depictions is a single matter for which... 29619, May 24, 2005, as amended at 73 FR 77471, Dec. 18, 2008] ...
28 CFR 75.8 - Location of the statement.
Code of Federal Regulations, 2013 CFR
2013-07-01
... that opens upon the viewer's clicking or mousing-over a hypertext link that states, “18 U.S.C. 2257... this section, a digital video disc (DVD) containing multiple depictions is a single matter for which... 29619, May 24, 2005, as amended at 73 FR 77471, Dec. 18, 2008] ...
28 CFR 75.8 - Location of the statement.
Code of Federal Regulations, 2011 CFR
2011-07-01
... that opens upon the viewer's clicking or mousing-over a hypertext link that states, “18 U.S.C. 2257... this section, a digital video disc (DVD) containing multiple depictions is a single matter for which... 29619, May 24, 2005, as amended at 73 FR 77471, Dec. 18, 2008] ...
28 CFR 75.8 - Location of the statement.
Code of Federal Regulations, 2010 CFR
2010-07-01
... that opens upon the viewer's clicking or mousing-over a hypertext link that states, “18 U.S.C. 2257... this section, a digital video disc (DVD) containing multiple depictions is a single matter for which... 29619, May 24, 2005, as amended at 73 FR 77471, Dec. 18, 2008] ...
Knowledge Acquisition by Hypervideo Design: An Instructional Program for University Courses
ERIC Educational Resources Information Center
Stahl, Elmar; Finke, Matthias; Zahn, Carmen
2006-01-01
This article presents an instructional program for collaborative construction of hypervideos. The instructional program integrates (a) hypervideo technology development, (b) assumptions on learning with hypervideo systems, and (c) the application of research on knowledge acquisition by writing texts or hypertexts to hypervideos. The aim of the…
Deductive Error Diagnosis and Inductive Error Generalization for Intelligent Tutoring Systems.
ERIC Educational Resources Information Center
Hoppe, H. Ulrich
1994-01-01
Examines the deductive approach to error diagnosis for intelligent tutoring systems. Topics covered include the principles of the deductive approach to diagnosis; domain-specific heuristics to solve the problem of generalizing error patterns; and deductive diagnosis and the hypertext-based learning environment. (Contains 26 references.) (JLB)
Impact of Market Behavior, Fleet Composition, and Ancillary Services on Revenue Sufficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frew, Bethany; Gallo, Giulia; Brinkman, Gregory
Revenue insufficiency, or the missing money problem, occurs when the revenues that generators earn from the market are not sufficient to cover both fixed and variable costs to remain in the market and/or justify investments in new capacity, which may be needed for reliability. The near-zero marginal cost of variable renewable generators further exacerbates these revenue challenges. Estimating the extent of the missing money problem in current electricity markets is an important, nontrivial task that requires representing both how the power system operates and how market participants behave. This paper explores the missing money problem using a production cost model that represented a simplified version of the Electric Reliability Council of Texas (ERCOT) energy-only market for the years 2012-2014. We evaluate how various market structures -- including market behavior, ancillary services, and changing fleet compositions -- affect net revenues in this ERCOT-like system. In most production cost modeling exercises, resources are assumed to offer their marginal capabilities at marginal costs. Although this assumption is reasonable for feasibility studies and long-term planning, it does not adequately consider the market behaviors that impact revenue sufficiency. In this work, we simulate a limited set of market participant strategic bidding behaviors by means of different sets of markups; these markups are applied to the true production costs of all gas generators, which are the most prominent generators in ERCOT. Results show that markups can help generators increase their net revenues overall, although net revenues may increase or decrease depending on the technology and the year under study. Results also confirm that conventional, variable-cost-based production cost simulations do not capture prices accurately, and this particular feature calls for proxies for strategic behaviors (e.g., markups) and more accurate representations of how electricity markets work. The analysis also shows that generators face revenue sufficiency challenges in this ERCOT-like energy-only market model; net revenues provided by the market in all base markup cases and sensitivity scenarios (except when a large fraction of the existing coal fleet is retired) are not sufficient to justify investments in new capacity for thermal and nuclear power units. Overall, the work described in this paper points to the need for improved behavioral models of electricity markets to more accurately study current and potential market design issues that could arise in systems with high penetrations of renewable generation.
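A minimal sketch of the markup proxy described above, with invented unit names and costs: the true marginal costs of the gas fleet are scaled before market clearing, while other technologies still offer at cost.

    # Invented $/MWh costs; only gas units receive a strategic markup.
    true_cost = {"gas_cc_1": 28.0, "gas_ct_1": 55.0, "coal_1": 22.0}
    markup = {"gas_cc_1": 1.10, "gas_ct_1": 1.25}   # markup multipliers

    offers = {unit: cost * markup.get(unit, 1.0)    # default: offer at cost
              for unit, cost in true_cost.items()}
    print(offers)   # gas offers rise to ~30.8 and 68.75; coal stays at 22.0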
Fast access to the CMS detector condition data employing HTML5 technologies
NASA Astrophysics Data System (ADS)
Pierro, Giuseppe Antonio; Cavallari, Francesca; Di Guida, Salvatore; Innocente, Vincenzo
2011-12-01
This paper focuses on using HTML version 5 (HTML5) for accessing condition data for the CMS experiment, evaluating the benefits and risks posed by the use of this technology. According to the authors of HTML5, this technology attempts to solve issues found in previous iterations of HTML and addresses the needs of web applications, an area previously not adequately covered by HTML. We demonstrate that employing HTML5 brings important benefits in terms of access performance to the CMS condition data. The combined use of web storage and web sockets increases performance and reduces costs in terms of computation power, memory usage, and network bandwidth for both client and server. Above all, web workers allow scripts to run in separate threads, exploiting multi-core microprocessors; they were employed to substantially decrease the page-rendering time for displaying the condition data stored in the CMS condition database.
Reducing tobacco use and access through strengthened minimum price laws.
McLaughlin, Ian; Pearson, Anne; Laird-Metke, Elisa; Ribisl, Kurt
2014-10-01
Higher prices reduce consumption and initiation of tobacco products. A minimum price law that establishes a high statutory minimum price and prohibits the industry's discounting tactics for tobacco products is a promising pricing strategy as an alternative to excise tax increases. Although some states have adopted minimum price laws on the basis of statutorily defined price "markups" over the invoice price, existing state laws have been largely ineffective at increasing the retail price. We analyzed 3 new variations of minimum price laws that hold great potential for raising tobacco prices and reducing consumption: (1) a flat rate minimum price law similar to a recent enactment in New York City, (2) an enhanced markup law, and (3) a law that incorporates both elements.
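The advantage of a flat floor over an invoice-based markup is simple arithmetic: industry discounting lowers the invoice price and drags a percentage floor down with it, while a flat statutory floor is immune. A sketch with invented numbers:

    # Two minimum-price rules for one pack; all figures invented.
    invoice = 4.00                   # invoice price after trade discounts
    markup_floor = invoice * 1.25    # markup law: 25% over invoice
    flat_floor = 10.50               # flat statutory minimum price

    print(f"markup-law floor: {markup_floor:.2f}")                 # 5.00
    print(f"combined floor: {max(markup_floor, flat_floor):.2f}")  # 10.50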
Networking observers and observatories with remote telescope markup language
NASA Astrophysics Data System (ADS)
Hessman, Frederic V.; Tuparev, Georg; Allan, Alasdair
2006-06-01
Remote Telescope Markup Language (RTML) is an XML-based protocol for the transport of the high-level description of a set of observations to be carried out on a remote, robotic, or service telescope. We describe how RTML is being used in a wide variety of contexts: the transport of service and robotic observing requests in the Hands-On Universe™, ACP, eSTAR, and MONET networks; how RTML is easily combined with other XML protocols for more localized control of telescopes; RTML as a secondary observation report format for the IVOA's VOEvent protocol; the input format for a general-purpose observation simulator; and the observatory-independent means for carrying out request transactions for the international Heterogeneous Telescope Network (HTN).
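A sketch of what such a request can look like, assembled with Python's standard library. The element names follow the general shape of RTML but are abridged; the schema itself defines the authoritative structure.

    import xml.etree.ElementTree as ET

    # Minimal RTML-like observing request: one target, one exposure series.
    req = ET.Element("RTML", {"version": "3.3", "mode": "request"})
    target = ET.SubElement(req, "Target", {"name": "M51"})
    coords = ET.SubElement(target, "Coordinates")
    ET.SubElement(coords, "RightAscension").text = "13:29:52.7"
    ET.SubElement(coords, "Declination").text = "+47:11:43"
    schedule = ET.SubElement(req, "Schedule")
    ET.SubElement(schedule, "Exposure", {"count": "3"}).text = "120"  # seconds
    print(ET.tostring(req, encoding="unicode"))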
The semantics of Chemical Markup Language (CML) for computational chemistry : CompChem.
Phadungsukanan, Weerapong; Kraft, Markus; Townsend, Joe A; Murray-Rust, Peter
2012-08-07
This paper introduces a subdomain chemistry format for storing computational chemistry data called CompChem. It has been developed based on the design, concepts and methodologies of Chemical Markup Language (CML) by adding computational chemistry semantics on top of the CML Schema. The format allows a wide range of ab initio quantum chemistry calculations of individual molecules to be stored. These calculations include, for example, single point energy calculation, molecular geometry optimization, and vibrational frequency analysis. The paper also describes the supporting infrastructure, such as processing software, dictionaries, validation tools and database repositories. In addition, some of the challenges and difficulties in developing common computational chemistry dictionaries are discussed. The uses of CompChem are illustrated by two practical applications.
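A sketch of the layering the abstract describes: CML modules carrying computational-chemistry semantics through dictRef pointers. The CML namespace below is the real one; the dictionary references and values are simplified illustrations, not the normative CompChem dictionary.

    import xml.etree.ElementTree as ET

    CML = "{http://www.xml-cml.org/schema}"   # the CML schema namespace
    # A job module wrapping an initialization (input) module and a
    # finalization (result) module; dictRef values are illustrative.
    job = ET.Element(CML + "module", {"dictRef": "compchem:job"})
    init = ET.SubElement(job, CML + "module",
                         {"dictRef": "compchem:initialization"})
    ET.SubElement(init, CML + "parameter", {"dictRef": "cc:method"}).text = "B3LYP"
    final = ET.SubElement(job, CML + "module",
                          {"dictRef": "compchem:finalization"})
    ET.SubElement(final, CML + "property",
                  {"dictRef": "cc:totalEnergy"}).text = "-76.4"  # hartree
    print(ET.tostring(job, encoding="unicode"))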
Pathology data integration with eXtensible Markup Language.
Berman, Jules J
2005-02-01
It is impossible to overstate the importance of XML (eXtensible Markup Language) as a data organization tool. With XML, pathologists can annotate all of their data (clinical and anatomic) in a format that can transform every pathology report into a database, without compromising narrative structure. The purpose of this manuscript is to provide an overview of XML for pathologists. Examples will demonstrate how pathologists can use XML to annotate individual data elements and to structure reports in a common format that can be merged with other XML files or queried using standard XML tools. This manuscript gives pathologists a glimpse into how XML allows pathology data to be linked to other types of biomedical data and reduces our dependence on centralized proprietary databases.
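A sketch of the payoff: once data elements are annotated, every report answers the same query. Tag names below are invented for illustration.

    import xml.etree.ElementTree as ET

    # A toy annotated pathology fragment; a real report would keep the
    # full narrative alongside the tagged elements.
    report = ET.fromstring(
        "<report>"
        "<finding><site>colon</site><diagnosis>adenocarcinoma</diagnosis></finding>"
        "<finding><site>margin</site><diagnosis>negative</diagnosis></finding>"
        "</report>")
    # Standard XML tooling (here ElementTree's XPath subset) queries it
    # like a database, without disturbing the narrative structure.
    for dx in report.findall("./finding/diagnosis"):
        print(dx.text)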
The semantics of Chemical Markup Language (CML) for computational chemistry : CompChem
2012-01-01
This paper introduces a subdomain chemistry format for storing computational chemistry data called CompChem. It has been developed based on the design, concepts and methodologies of Chemical Markup Language (CML) by adding computational chemistry semantics on top of the CML Schema. The format allows a wide range of ab initio quantum chemistry calculations of individual molecules to be stored. These calculations include, for example, single point energy calculation, molecular geometry optimization, and vibrational frequency analysis. The paper also describes the supporting infrastructure, such as processing software, dictionaries, validation tools and database repositories. In addition, some of the challenges and difficulties in developing common computational chemistry dictionaries are discussed. The uses of CompChem are illustrated by two practical applications. PMID:22870956
Calderon, Karynna; Dadisman, Shawn V.; Flocks, James G.; Kindinger, Jack G.; Wiese, Dana S.
2003-01-01
In June, July, and August of 2001, the U.S. Geological Survey (USGS), in cooperation with the University of New Orleans (UNO), the U.S. Army Corps of Engineers, and the Louisiana Department of Natural Resources, conducted a shallow geophysical and sediment core survey of Timbalier Bay and the Gulf of Mexico offshore East Timbalier Island, Louisiana. This report serves as an archive of unprocessed digital seismic reflection data, trackline navigation files, trackline navigation maps, observers' logbooks, Geographic Information Systems (GIS) information, and formal Federal Geographic Data Committee (FGDC) metadata. In addition, a filtered and gained digital Graphics Interchange Format (GIF) image of each seismic profile is provided. Please see Kulp and others (2002), Flocks and others (2003), and Kulp and others (in prep.) for further information about the sediment cores collected and the geophysical results. For convenience, a list of acronyms and abbreviations frequently used in this report is also included. This Digital Versatile Disc (DVD) document is readable on any computing platform that has standard DVD driver software installed. Documentation on this DVD was produced using Hypertext Markup Language (HTML) utilized by the World Wide Web (WWW) and allows the user to access the information using a web browser (e.g., Netscape, Internet Explorer). To access the information contained on this disc, open the file 'index.htm' located at the top level of the disc using a web browser. This report also contains WWW links to USGS collaborators and other agencies. These links are only accessible if access to the Internet is available while viewing this DVD. The archived boomer seismic reflection data are in standard Society of Exploration Geophysicists (SEG) SEG-Y format (Barry et al., 1975) and may be downloaded for processing with public domain software such as Seismic Unix (SU), currently located at http://www.cwp.mines.edu/cwpcodes/index.html. Examples of SU processing scripts are provided in the BOOM.tar file located in the SU subfolder of the SOFTWARE folder located at the top level of this disc. In-house (USGS) DOS and Microsoft Windows compatible software for viewing SEG-Y headers - DUMPSEGY.EXE (Zihlman, 1992) - is provided in the USGS subfolder of the SOFTWARE folder. Processed profile images, trackline navigation maps, logbooks, and formal metadata may be viewed with a web browser.
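For orientation, the SEG-Y format the archive uses opens with a 3,200-byte textual header of forty 80-character "card images", traditionally EBCDIC-encoded; a few lines of Python suffice to inspect it (the file name is a placeholder):

    # Print the textual header of a SEG-Y file (40 cards x 80 chars).
    with open("line01.sgy", "rb") as f:
        header = f.read(3200)
    for i in range(0, 3200, 80):
        print(header[i:i + 80].decode("cp037", errors="replace"))  # EBCDIC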
Calderon, Karynna; Dadisman, Shawn V.; Flocks, James G.; Wiese, Dana S.; Kindinger, Jack G.
2003-01-01
In June, July, and August of 2001, the U.S. Geological Survey (USGS), in cooperation with the University of New Orleans, the U.S. Army Corps of Engineers, and the Louisiana Department of Natural Resources, conducted a shallow geophysical and sediment core survey of Timbalier Bay and the Gulf of Mexico offshore East Timbalier Island, Louisiana. This report serves as an archive of unprocessed digital seismic reflection data, trackline navigation files, trackline navigation maps, observers' logbooks, Geographic Information Systems (GIS) information, and formal Federal Geographic Data Committee (FGDC) metadata. In addition, a gained digital Graphics Interchange Format (GIF) image of each seismic profile is provided. Please see Kulp and others (2002), Flocks and others (2003), and Kulp and others (in prep.) for further information about the sediment cores collected and the geophysical results. For convenience, a list of acronyms and abbreviations frequently used in this report is also included. This Digital Versatile Disc (DVD) document is readable on any computing platform that has standard DVD driver software installed. Documentation on this DVD was produced using Hypertext Markup Language (HTML) utilized by the World Wide Web (WWW) and allows the user to access the information using a web browser (e.g., Netscape, Internet Explorer). To access the information contained on these discs, open the file 'index.htm' located at the top level of each disc using a web browser. This report also contains WWW links to USGS collaborators and other agencies. These links are only accessible if access to the Internet is available while viewing these DVDs. The archived chirp seismic reflection data are in standard Society of Exploration Geophysicists (SEG) SEG-Y format (Barry et al., 1975) and may be downloaded for processing with public domain software such as Seismic Unix (SU), currently located at http://www.cwp.mines.edu/cwpcodes/index.html. Examples of SU processing scripts are provided in the CHIRP.tar file located in the SU subfolder of the SOFTWARE folder located at the top level of each disc. In-house (USGS) DOS and Microsoft Windows compatible software for viewing SEG-Y headers - DUMPSEGY.EXE (Zihlman, 1992) - is provided in the USGS subfolder of the SOFTWARE folder. Processed profile images, trackline navigation maps, logbooks, and formal metadata may be viewed with a web browser.
Bowsher, Clive G
2011-02-15
Understanding the encoding and propagation of information by biochemical reaction networks and the relationship of such information processing properties to modular network structure is of fundamental importance in the study of cell signalling and regulation. However, a rigorous, automated approach for general biochemical networks has not been available, and high-throughput analysis has therefore been out of reach. Modularization Identification by Dynamic Independence Algorithms (MIDIA) is a user-friendly, extensible R package that performs automated analysis of how information is processed by biochemical networks. An important component is the algorithm's ability to identify exact network decompositions based on both the mass action kinetics and informational properties of the network. These modularizations are visualized using a tree structure from which important dynamic conditional independence properties can be directly read. Only partial stoichiometric information needs to be used as input to MIDIA, and neither simulations nor knowledge of rate parameters are required. When applied to a signalling network, for example, the method identifies the routes and species involved in the sequential propagation of information between its multiple inputs and outputs. These routes correspond to the relevant paths in the tree structure and may be further visualized using the Input-Output Path Matrix tool. MIDIA remains computationally feasible for the largest network reconstructions currently available and is straightforward to use with models written in Systems Biology Markup Language (SBML). The package is distributed under the GNU General Public License and is available, together with a link to browsable Supplementary Material, at http://code.google.com/p/midia. Further information is at www.maths.bris.ac.uk/~macgb/Software.html.
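MIDIA itself is an R package, but the SBML input it consumes can be illustrated in Python with the libSBML bindings (pip install python-libsbml); the file name is a placeholder.

    import libsbml

    # List the partial stoichiometric information a decomposition starts
    # from: which species each reaction consumes and produces.
    doc = libsbml.readSBML("network.xml")
    model = doc.getModel()
    for i in range(model.getNumReactions()):
        rxn = model.getReaction(i)
        reactants = [ref.getSpecies() for ref in rxn.getListOfReactants()]
        products = [ref.getSpecies() for ref in rxn.getListOfProducts()]
        print(rxn.getId(), reactants, "->", products)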
An object-oriented approach to deploying highly configurable Web interfaces for the ATLAS experiment
NASA Astrophysics Data System (ADS)
Lange, Bruno; Maidantchik, Carmen; Pommes, Kathy; Pavani, Varlen; Arosa, Breno; Abreu, Igor
2015-12-01
The ATLAS Technical Coordination maintains 17 Web systems to support its operation. These applications, whilst ranging from managing the process of publishing scientific papers to monitoring radiation levels in the equipment in the experimental cavern, are constantly prone to changes in requirements due to the collaborative nature of the experiment and its management. In this context, a Web framework is proposed to unify the generation of the supporting interfaces. FENCE assembles classes to build applications by making extensive use of JSON configuration files. It relies heavily on Glance, a technology that was set forth in 2003 to create an abstraction layer on top of the heterogeneous sources that store the technical coordination data. Once Glance maps out the database modeling, records can be referenced in the configuration files by wrapping unique identifiers around double enclosing brackets. The deployed content can be individually secured by attaching clearance attributes to their description, thus ensuring that view/edit privileges are granted to eligible users only. The framework also provides tools for securely writing into a database. Fully HTML5-compliant multi-step forms can be generated from their JSON description to ensure that the submitted data comply with a series of constraints. Input validation is carried out primarily on the server-side but, following progressive enhancement guidelines, verification might also be performed on the client-side by enabling specific markup data attributes which are then handed over to the jQuery validation plug-in. User monitoring is accomplished by thoroughly logging user requests along with any POST data. Documentation is built from the source code using the phpDocumentor tool and made readily available for developers online. FENCE, therefore, speeds up the implementation of Web interfaces and reduces the response time to requirement changes by minimizing maintenance overhead.
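A sketch of the double-bracket convention described above: identifiers wrapped in {{...}} inside a JSON configuration are resolved against records from the data layer. The configuration keys and lookup table here are stand-ins, not FENCE's actual schema.

    import json
    import re

    config = json.loads(
        '{"title": "Paper {{paper_id}}", "editor": "{{editor_id}}"}')
    records = {"paper_id": "ATL-COM-0042", "editor_id": "jdoe"}  # stand-ins

    def resolve(value: str) -> str:
        # Replace each {{identifier}} with the matching record field.
        return re.sub(r"\{\{(\w+)\}\}", lambda m: records[m.group(1)], value)

    print({key: resolve(val) for key, val in config.items()})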
GeoSciML and EarthResourceML Update, 2012
NASA Astrophysics Data System (ADS)
Richard, S. M.; Commission for the Management and Application of Geoscience Information (CGI)
2012-12-01
CGI Interoperability Working Group activities during 2012 include deployment of services using the GeoSciML-Portrayal schema, addition of new vocabularies to support properties added in version 3.0, improvements to server software for deploying services, introduction of EarthResourceML v.2 for mineral resources, and collaboration with the IUSS on a markup language for soils information. GeoSciML and EarthResourceML have been used as the basis for the INSPIRE Geology and Mineral Resources specifications respectively. GeoSciML-Portrayal is an OGC GML simple-feature application schema for presentation of geologic map unit, contact, and shear displacement structure (fault and ductile shear zone) descriptions in web map services. Use of standard vocabularies for geologic age and lithology enables map services using shared legends to achieve visual harmonization of maps provided by different services. New vocabularies have been added to the collection of CGI vocabularies provided to support interoperable GeoSciML services, and can be accessed through http://resource.geosciml.org. Concept URIs can be dereferenced to obtain SKOS rdf or html representations using the SISSVoc vocabulary service. New releases of the FOSS GeoServer application greatly improve support for complex XML feature schemas like GeoSciML, and the ArcGIS for INSPIRE extension implements similar complex feature support for ArcGIS Server. These improved server implementations greatly facilitate deploying GeoSciML services. EarthResourceML v2 adds features for information related to mining activities. SoilML provides an interchange format for soil material, soil profile, and terrain information. Work is underway to add GeoSciML to the portfolio of Open Geospatial Consortium (OGC) specifications.
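The dereferencing behaviour described above can be sketched as follows; the concept URI follows the resource.geosciml.org pattern but is given for illustration only, and the sketch assumes the service honours standard HTTP content negotiation:

    # Hedged sketch: dereferencing a CGI vocabulary concept URI to obtain
    # its SKOS RDF representation via content negotiation. The example URI
    # is illustrative of the resource.geosciml.org pattern, not verified.
    import requests

    concept_uri = "http://resource.geosciml.org/classifier/cgi/lithology/granite"

    # Asking for RDF/XML; omitting the Accept header (or sending text/html)
    # would return the human-readable representation instead.
    response = requests.get(concept_uri, headers={"Accept": "application/rdf+xml"})
    response.raise_for_status()
    print(response.headers.get("Content-Type"))
    print(response.text[:400])  # beginning of the SKOS RDF document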
A Cloud Architecture for Teleradiology-as-a-Service.
Melício Monteiro, Eriksson J; Costa, Carlos; Oliveira, José L
2016-05-17
Telemedicine has been promoted by healthcare professionals as an efficient way to obtain remote assistance from specialised centres, to get a second opinion about a complex diagnosis, or even to share knowledge among practitioners. The current economic restrictions in many countries are further increasing the demand for these solutions, in order to optimize processes and reduce costs. However, despite some technological solutions already being in place, their adoption has been hindered by a lack of usability, especially in the set-up process. In this article we propose a telemedicine platform that relies on a cloud computing infrastructure and social media principles to simplify the creation of dynamic user-based groups, opening up opportunities for the establishment of teleradiology trust domains. The collaborative platform is provided as a Software-as-a-Service solution, supporting real-time and asynchronous collaboration between users. To evaluate the solution, we deployed the platform on a private cloud infrastructure. The system is made up of three main components: the collaborative framework, the Medical Management Information System (MMIS) and the HTML5 (Hypertext Markup Language) Web client application, all connected by a message-oriented middleware. The solution allows physicians to easily create dynamic network groups for synchronous or asynchronous cooperation. The resulting network improves dataflow between colleagues, as well as knowledge sharing and cooperation through social media tools. The platform was implemented and has already been used in two distinct scenarios: radiology teaching and tele-reporting. Collaborative systems can simplify the establishment of telemedicine expert groups with tools that enable physicians to improve their clinical practice. Streamlining the use of such systems through the adoption of Web technologies that are common in social media will increase the quality of current solutions, facilitating the sharing of clinical information, medical imaging studies and patient diagnostics among collaborators.
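The article does not name the message-oriented middleware, so the following is only a hedged sketch of the hand-off between the platform's components, assuming an AMQP broker such as RabbitMQ reached through the pika library; the queue and message fields are invented:

    # Hedged sketch of the message-oriented hand-off between the HTML5
    # client tier and the MMIS. The broker is assumed (AMQP via pika);
    # queue names and message fields are invented for illustration.
    import json
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="telereport.requests", durable=True)

    # A tele-reporting request: a physician asks a remote group for a report.
    message = {
        "study_uid": "1.2.840.0000.example",   # placeholder DICOM study UID
        "group": "neuro-radiology-circle",     # hypothetical trust-domain group
        "mode": "asynchronous",
    }
    channel.basic_publish(
        exchange="",
        routing_key="telereport.requests",
        body=json.dumps(message),
        properties=pika.BasicProperties(delivery_mode=2),  # persist the message
    )
    connection.close()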
Linking Different Cultures by Computers: A Study of Computer-Assisted Music Notation Instruction.
ERIC Educational Resources Information Center
Chen, Steve Shihong; Dennis, J. Richard
1993-01-01
Describes a study that investigated the feasibility of using computers to teach music notation systems to Chinese students, as well as to help Western educators study Chinese music and its number notation system. Topics discussed include students' learning sequences; HyperCard software; hypermedia and graphic hypertext indexing; and the…
HyperCard K-12: Classroom Computer Learning Special Supplement Sponsored by Apple Computer.
ERIC Educational Resources Information Center
Classroom Computer Learning, 1989
1989-01-01
Follows the development of hypertext, which is the electronic movement of large amounts of text. Probes the use of the Macintosh HyperCard and its applications in education. Notes that programs are stackable in the computer. Provides a tool, resource, and stack directory along with tips for using HyperCard. (MVL)
An Empirical Comparison of Visualization Tools To Assist Information Retrieval on the Web.
ERIC Educational Resources Information Center
Heo, Misook; Hirtle, Stephen C.
2001-01-01
Discusses problems with navigation in hypertext systems, including cognitive overload, and describes a study that tested information visualization techniques to see which best represented the underlying structure of Web space. Considers the effects of visualization techniques on user performance on information searching tasks and the effects of…
Using Hypermedia: Effects of Prior Knowledge and Goal Strength.
ERIC Educational Resources Information Center
Last, David A.; O'Donnell, Angela M.; Kelly, Anthony E.
The influences of a student's prior knowledge and desired goal on the difficulties and benefits associated with using hypertext were examined in this study. Participants, 12 students from an undergraduate course in educational psychology, were assigned to either the low or high prior knowledge category. Within these two groups, subjects were…
Criminal Justice Research in Libraries and on the Internet.
ERIC Educational Resources Information Center
Nelson, Bonnie R.
In addition to covering the enduring elements of traditional research on criminal justice, this new edition provides full coverage on research using the World Wide Web, hypertext documents, computer indexes, and other online resources. It gives an in-depth explanation of such concepts as databases, networks, and full text, and covers the Internet…
Does Interface Matter? A Study of Web Authoring and Editing by Inexperienced Web Writers
ERIC Educational Resources Information Center
Dick, Rodney F.
2006-01-01
This study explores the complicated nature of the interface as a mediational tool for inexperienced writers as they composed hypertext documents. Because technology can become so quickly and inextricably connected to people's everyday lives, it is essential to explore the effects of these technologies before they become invisible. Because…
Use of Hypertext for the Development of an Office Reference System on Economic Analysis
1990-09-01
that were provided to assist the beginning user received mixed reviews. The table of contents function was the most popular control icon (85 percent...quizzing yourself with flashcards and having someone quiz you with flashcards. A variant on this application can be constructed that displayed several
Using Hypercard and Interactive Video in Education: An Application in Cell Biology.
ERIC Educational Resources Information Center
Hall, Wendy; And Others
1989-01-01
Describes the design and implementation of an interactive video system using existing videodiscs and Apple's HyperCard for use in the teaching of cell biology to undergraduate biology students. Hypertext and hypermedia are discussed, the hardware configuration is described, and a preliminary evaluation of the completed system is reported. (five…