Sample records for machine readable format

  1. Documentation for the machine-readable version of the Morphological Catalogue of Galaxies (MCG) of Vorontsov-Velyaminov et al., 1962-1968

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1982-01-01

    Modifications, corrections, and the record format are provided for the machine-readable version of the "Morphological Catalogue of Galaxies." In addition to hundreds of individual corrections, a detailed comparison of the machine-readable version with the published catalogue resulted in the addition of 116 missing objects, the deletion of 10 duplicate records, and a format modification to increase storage efficiency.

  2. Reference Manual for Machine-Readable Descriptions of Research Projects and Institutions.

    ERIC Educational Resources Information Center

    Dierickx, Harold; Hopkinson, Alan

    This reference manual presents a standardized communication format for the exchange between databases or other information services of machine-readable information on research in progress. The manual is produced in loose-leaf format to facilitate updating. Its first section defines in broad outline the format and content of applicable records. A…

  3. Documentation for the machine-readable version of the Perth 75: A Catalogue of Positions of 2589 FK4 and FK4S Stars (Nikoloff and Hog 1982)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1984-01-01

    Detailed descriptions of the data and format of the machine-readable astronomical catalog are given. The machine version is identical in data content to the published edition, but minor modifications in the data format were made in order to effect uniformity with machine versions of other astronomical catalogs. Stellar motions and positions at epoch and equinox 1950.0 are reported.

  4. Documentation for the machine-readable version of the Henry Draper Catalogue (edition 1985)

    NASA Technical Reports Server (NTRS)

    Roman, N. G.; Warren, W. H., Jr.

    1985-01-01

    An updated, corrected and extended machine-readable version of the catalog is described. Published and unpublished errors discovered in the previous version were corrected; letters indicating supplemental stars in the BD have been moved to a new byte to distinguish them from double-star components; and the machine-readable portion of The Henry Draper Extension (HDE) (HA 100) was converted to the same format as the main catalog, with additional data added as necessary.

  5. Toolsets for Airborne Data (TAD): Improving Machine Readability for ICARTT Data Files

    NASA Astrophysics Data System (ADS)

    Northup, E. A.; Early, A. B.; Beach, A. L., III; Kusterer, J.; Quam, B.; Wang, D.; Chen, G.

    2015-12-01

    NASA has conducted airborne tropospheric chemistry studies for about three decades. These field campaigns have generated a great wealth of observations, including a wide range of trace gas and aerosol properties. The ASDC Toolsets for Airborne Data (TAD) is designed to meet the user community's needs for manipulating aircraft data for scientific research on climate change and air quality issues. TAD makes use of aircraft data stored in the International Consortium for Atmospheric Research on Transport and Transformation (ICARTT) file format. ICARTT has been the NASA standard since 2010, and is widely used by NOAA, NSF, and international partners (DLR, FAAM). Its level of acceptance is due in part to it being generally self-describing for researchers, i.e., it provides the data descriptions necessary for proper research use. Despite this, there are a number of issues with the current ICARTT format, especially concerning machine readability. To overcome these issues, the TAD team has developed an "idealized" file format. This format is ASCII and is sufficiently machine readable to sustain the TAD system; however, it is not fully compatible with the current ICARTT format. The process of mapping ICARTT metadata to the idealized format, the format specifics, and the actual conversion process will be discussed. The goal of this presentation is to demonstrate an example of how to improve the machine readability of ASCII data format protocols.
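
    The idealized-format mapping itself is not given in this abstract, but the general shape of an ICARTT-style ASCII file (a counted header block followed by delimited data rows) can be illustrated with a short Python sketch. The file name and the assumption that the last header line carries the comma-separated column names are illustrative, not taken from the TAD documentation.

      # Minimal sketch: read an ICARTT-style ASCII file into named columns.
      # Assumes line 1 gives "NLHEAD, FFI" (header length, format index) and
      # that the final header line lists the column names, comma-separated.
      def read_icartt(path):
          with open(path, "r") as f:
              nlhead = int(f.readline().split(",")[0])      # header line count
              header = [f.readline() for _ in range(nlhead - 1)]
              names = [n.strip() for n in header[-1].split(",")]
              rows = [[float(v) for v in line.replace(",", " ").split()]
                      for line in f if line.strip()]
          # transpose data rows into a dict of columns keyed by variable name
          return {name: [row[i] for row in rows] for i, name in enumerate(names)}

      data = read_icartt("EXAMPLE-MERGE_20130910_R0.ict")    # hypothetical file
      print(sorted(data))                                    # available variables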

  6. Documentation for the machine-readable version of the revised new general catalogue of nonstellar astronomical objects

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1982-01-01

    The contents and format of the machine-readable version of the catalogue distributed by the Astronomical Data Center are described. Codes for the various scales and abbreviations used in the catalogue are tabulated, and certain revisions to the machine version made to improve storage efficiency and notation are discussed.

  7. Reference Manual for Machine-Readable Bibliographic Descriptions. Second Revised Edition.

    ERIC Educational Resources Information Center

    Dierickx, H., Ed.; Hopkinson, A., Ed.

    A product of the UNISIST International Centre for Bibliographic Descriptions (UNIBIB), this reference manual presents a standardized communication format for the exchange of machine-readable bibliographic information between bibliographic databases or other types of bibliographic information services, including libraries. The manual is produced in…

  8. BLS Machine-Readable Data and Tabulating Routines.

    ERIC Educational Resources Information Center

    DiFillipo, Tony

    This report describes the machine-readable data and tabulating routines that the Bureau of Labor Statistics (BLS) is prepared to distribute. An introduction discusses the LABSTAT (Labor Statistics) database and the BLS policy on release of unpublished data. Descriptions summarizing data stored in 25 files follow this format: overview, data…

  9. Documentation for the machine-readable version of the Survey of the Astrographic Catalogue From 1 to 31 Degrees of Northern Declination (Fresneau 1983)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1983-01-01

    A description of the machine-readable catalog, including detailed format and tape file characteristics, is given. The machine file comprises mean values of position and magnitude at a mean epoch of observation for each unique star in the Oxford, Paris, Bordeaux, Toulouse, and Algiers zones of the Northern Hemisphere. The format was changed to permit more efficient data searching by position, and additional duplicate entries were removed. The final catalog contains data for 997311 stars.

  10. Documentation for the machine-readable version of the Revised S201 Catalog of Far-Ultraviolet Objects (Page, Carruthers and Heckathorn 1982)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1984-01-01

    A detailed description of the machine-readable revised catalog as it is currently being distributed from the Astronomical Data Center is given. This catalog of star images was compiled from imagery obtained by the Naval Research Laboratory (NRL) Far-Ultraviolet Camera/Spectrograph (Experiment S201) operated from 21 to 23 April 1972 on the lunar surface during the Apollo 16 mission. The documentation includes a detailed data format description, a table of indigenous characteristics of the magnetic tape file, and a sample listing of data records exactly as they are presented in the machine-readable version.

  11. An Investigation into the Economics of Retrospective Conversion Using a CD-ROM System.

    ERIC Educational Resources Information Center

    Co, Francisca K.

    This study compares the cost effectiveness of using a CD-ROM (compact disk read-only memory) system known as Bibliofile and the currently used OCLC (Online Computer Library Center)-based method to convert a university library's shelflist into a machine-readable database in the MARC (Machine-Readable Cataloging) format. The cost of each method of…

  12. Documentation for the machine-readable version of the revised Catalogue of Stellar Rotational Velocities of Uesugi and Fukuda (1982)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1983-01-01

    The machine-readable catalog provides mean data on the old Slettebak system for 6472 stars. The catalog results from the review, analysis and transformation of 11460 data values from 102 sources. Star identification (major catalog number, name if the star has one, or cluster identification, etc.), a mean projected rotational velocity, and a list of source references are included. The references are given in a second file included with the catalog when it is distributed on magnetic tape. The contents and formats of the data and reference files of the machine-readable catalog are described to enable users to read and process the data.

  13. Application of XML to Journal Table Archiving

    NASA Astrophysics Data System (ADS)

    Shaya, E. J.; Blackwell, J. H.; Gass, J. E.; Kargatis, V. E.; Schneider, G. L.; Weiland, J. L.; Borne, K. D.; White, R. A.; Cheung, C. Y.

    1998-12-01

    The Astronomical Data Center (ADC) at the NASA Goddard Space Flight Center is a major archive for machine-readable astronomical data tables. Many ADC tables are derived from published journal articles. Article tables are reformatted to be machine-readable and documentation is crafted to facilitate proper reuse by researchers. The recent switch of journals to web-based electronic format has resulted in the generation of large amounts of tabular data that could be captured into machine-readable archive format at fairly low cost. The large data flow of the tables from all major North American astronomical journals (a factor of 100 greater than the present rate at the ADC) necessitates the development of rigorous standards for the exchange of data between researchers, publishers, and the archives. We have selected a suitable markup language that can fully describe the large variety of astronomical information contained in ADC tables. The eXtensible Markup Language (XML) is a powerful internet-ready documentation format for data. It provides a precise and clear data description language that is both machine- and human-readable. It is rapidly becoming the standard format for business and information transactions on the internet and it is an ideal common metadata exchange format. By labelling, or "marking up", all elements of the information content, documents are created that computers can easily parse. An XML archive can easily and automatically be maintained, ingested into standard databases or custom software, and even totally restructured whenever necessary. Structuring astronomical data into XML format will enable efficient and focused search capabilities via off-the-shelf software. The ADC is investigating XML's expanded hyperlinking power to enhance connectivity within the ADC data/metadata and developing XSL display scripts to enhance display of astronomical data. The ADC XML Document Type Definition can be viewed at http://messier.gsfc.nasa.gov/dtdhtml/DTD-TREE.html
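
    As a rough illustration of the "marking up" idea described above, the Python sketch below encodes a two-row table in XML and parses it back with the standard library; the element names are invented for the example and do not reproduce the ADC Document Type Definition.

      # Illustrative only: hypothetical element names, not the ADC DTD.
      import xml.etree.ElementTree as ET

      doc = """<table name="sample_catalog">
        <field name="Name" unit=""/>
        <field name="Vmag" unit="mag"/>
        <row><cell>HD 12345</cell><cell>7.42</cell></row>
        <row><cell>HD 67890</cell><cell>9.01</cell></row>
      </table>"""

      root = ET.fromstring(doc)
      fields = [f.attrib["name"] for f in root.findall("field")]
      for row in root.findall("row"):
          values = [cell.text for cell in row.findall("cell")]
          print(dict(zip(fields, values)))    # each row becomes a labelled record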

  14. Documentation for the machine-readable version of the ANS Ultraviolet Photometry Catalogue of Point Sources (Wesselius et al 1982)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1984-01-01

    The machine-readable version of the Astronomical Netherlands Satellite ultraviolet photometry catalog is described in detail, with a byte-by-byte format description and characteristics of the data file given. The catalog is a compilation of ultraviolet photometry in five bands, within the wavelength range 155 nm to 330 nm, for 3573 mostly stellar objects. Additional cross reference data (object identification, UBV photometry and MK spectral types) are included in the catalog.
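
    Catalogs documented this way are typically fixed-width ("byte-by-byte") records. A small Python sketch of how such a layout is read is given below; the byte ranges are invented for illustration and are not the actual ANS catalog columns.

      # Hypothetical byte-by-byte layout: field name -> (start, end), 1-based.
      layout = {
          "ident":  (1, 10),
          "ra":     (11, 20),
          "dec":    (21, 30),
          "mag155": (31, 37),
      }

      def parse_record(line):
          # slice each field out of the fixed-width record and strip blanks
          return {name: line[s - 1:e].strip() for name, (s, e) in layout.items()}

      sample = "HD  12345  01 23 45  +12 34.5   7.42 "
      print(parse_record(sample))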

  15. Documentation for the machine readable version of the Yale Catalogue of the Positions and Proper Motions of Stars between Declinations -60 deg and -70 deg (Fallon 1983)

    NASA Technical Reports Server (NTRS)

    Roman, N. G.; Warren, W. H., Jr.

    1984-01-01

    The machine-readable, character-coded version of the catalog, as it is currently being distributed from the Astronomical Data Center (ADC), is described. The format and data provided in the magnetic tape version differ somewhat from those of the published catalog, which was also produced from a tape prepared at the ADC. The primary catalog data are positions and proper motions (equinox 1950.0) for 14597 stars.

  16. Library Information-Processing System

    NASA Technical Reports Server (NTRS)

    1985-01-01

    System works with Library of Congress MARC II format. System composed of subsystems that provide wide range of library information-processing capabilities. Format is American National Standards Institute (ANSI) format for machine-readable bibliographic data. Adaptable to any medium-to-large library.
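
    For readers unfamiliar with the ANSI/MARC structure mentioned here, a compact Python sketch of walking one MARC (ISO 2709) record is shown below; the input file name is hypothetical, and character-set handling is simplified.

      # Each MARC record: 24-byte leader, 12-byte directory entries, data fields.
      FT, RT = b"\x1e", b"\x1d"           # field and record terminators

      def parse_marc(record):
          leader = record[:24].decode("ascii")
          base = int(leader[12:17])        # base address of the data fields
          directory = record[24:base - 1]  # directory entries, minus terminator
          fields = {}
          for i in range(0, len(directory), 12):
              tag = directory[i:i + 3].decode("ascii")
              length = int(directory[i + 3:i + 7])
              start = int(directory[i + 7:i + 12])
              data = record[base + start:base + start + length].rstrip(FT)
              # render subfield delimiters readably as " $"
              fields.setdefault(tag, []).append(
                  data.decode("utf-8", "replace").replace("\x1f", " $"))
          return leader, fields

      with open("records.mrc", "rb") as f:          # hypothetical exchange file
          first = f.read().split(RT)[0] + RT
      leader, fields = parse_marc(first)
      print(fields.get("245"))                      # title field, if present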

  17. National Aspects of Creating and Using MARC/RECON Records.

    ERIC Educational Resources Information Center

    Rather, John C., Ed.; Avram, Henriette D., Ed.

    The Retrospective Conversion (RECON) Working Task Force investigated the problems of converting retrospective catalog records to machine readable form. The major conclusions and recommendations of the Task Force cover five areas: the level of machine-readable records, conversion of other machine-readable data bases, a machine-readable National…

  18. Astronomical Data Center Bulletin, volume 1, number 2

    NASA Technical Reports Server (NTRS)

    Nagy, T. A.; Warren, W. H., Jr.; Mead, J. M.

    1981-01-01

    Work in progress on astronomical catalogs is presented in 16 papers. Topics cover astronomical data center operations; automatic astronomical data retrieval at GSFC; interactive computer reference search of astronomical literature 1950-1976; formatting, checking, and documenting machine-readable catalogs; interactive catalog of UV, optical, and HI data for 201 Virgo cluster galaxies; machine-readable version of the general catalog of variable stars, third edition; galactic latitude and magnitude distribution of two astronomical catalogs; the catalog of open star clusters; infrared astronomical data base and catalog of infrared observations; the Air Force geophysics laboratory; revised magnetic tape of the N30 catalog of 5,268 standard stars; positional correlation of the two-micron sky survey and Smithsonian Astrophysical Observatory catalog sources; search capabilities for the catalog of stellar identifications (CSI) 1979 version; CSI statistics: blue magnitude versus spectral type; catalogs available from the Astronomical Data Center; and status report on machine-readable astronomical catalogs.

  19. The machine-readable Durchmusterungen - Classical catalogs in contemporary form. [for positional astronomy and identification of stars]

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.; Ochsenbein, Francois; Rappaport, Barry N.

    1990-01-01

    The entire series of Durchmusterung (DM) catalogs (Bonner, Southern, Cordoba, Cape Photographic) has been computerized through a collaborative effort among institutions and individuals in France and the United States of America. Complete verification of the data, both manually and by computer, the inclusion of all supplemental stars (represented by lower case letters), complete representation of all numerical data, and a consistent format for all catalogs, should make this collection of machine-readable data a valuable addition to digitized astronomical archives.

  20. Elements of a next generation time-series ASCII data file format for Earth Sciences

    NASA Astrophysics Data System (ADS)

    Webster, C. J.

    2015-12-01

    Data in ASCII comma separated value (CSV) format are recognized as the most simple, straightforward and readable type of data present in the geosciences. Many scientific workflows developed over the years rely on data using this simple format. However, there is a need for a lightweight ASCII header format standard that is easy to create and easy to work with. Current OGC grade XML standards are complex and difficult to implement for researchers with few resources. Ideally, such a format should provide the data in CSV for easy consumption by generic applications such as spreadsheets. The format should use an existing time standard. The header should be easily human readable as well as machine parsable. The metadata format should be extendable to allow vocabularies to be adopted as they are created by external standards bodies. The creation of such a format will increase the productivity of software engineers and scientists because fewer translators and checkers would be required.
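
    A minimal Python sketch of the kind of file the abstract argues for follows: a few commented "key: value" header lines over plain CSV, readable by both a person and a parser. The header keys shown are illustrative, not an adopted standard.

      import csv, io

      text = """# title: Example time series
      # time_standard: ISO 8601, UTC
      # columns: time, temperature_degC
      2015-12-01T00:00:00Z,3.2
      2015-12-01T01:00:00Z,2.9
      """

      meta, rows = {}, []
      for line in io.StringIO(text):
          line = line.strip()
          if line.startswith("#"):
              key, _, value = line[1:].partition(":")
              meta[key.strip()] = value.strip()
          elif line:
              rows.append(next(csv.reader([line])))

      print(meta["time_standard"])    # machine-parsable header metadata
      print(rows[0])                  # ordinary CSV rows for spreadsheets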

  1. Documentation for the machine-readable version of the catalog of 5,268 standard stars, 1950.0 based on the normal system N30

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1981-01-01

    The machine-readable version of the N30 catalog available on magnetic tape from the Astronomical Data Center is described. Numerical representations of some data fields on the original catalog were changed to conform more closely to formats being used for star-catalog data, plus all records having asterisks indicating footnotes in the published catalog now have corresponding remarks entries in a second tape file; i.e. the footnotes in the published catalog were computerized and are contained in a second file of the tape.

  2. E-Assessment Data Compatibility Resolution Methodology with Bidirectional Data Transformation

    ERIC Educational Resources Information Center

    Malik, Kaleem Razzaq; Ahmad, Tauqir

    2017-01-01

    Electronic assessment (e-assessment), also known as computer-aided assessment, is used for diagnostic, formative, or summative examination supported by data analysis. Digital assessments commonly come from social, academic, and adaptive learning in machine-readable forms to deliver the machine-scoring function. To achieve real-time and smart…

  3. 40 CFR 85.1905 - Alternative report formats.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    … § 85.1905 Alternative report formats. (a) Any manufacturer may submit a plan for making either of the reports required by §§ 85.1903 and 85.1904 on computer cards, magnetic tape or other machine readable format. The …

  4. 40 CFR 85.1905 - Alternative report formats.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    … § 85.1905 Alternative report formats. (a) Any manufacturer may submit a plan for making either of the reports required by §§ 85.1903 and 85.1904 on computer cards, magnetic tape or other machine readable format. The …

  5. 40 CFR 85.1905 - Alternative report formats.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    … § 85.1905 Alternative report formats. (a) Any manufacturer may submit a plan for making either of the reports required by §§ 85.1903 and 85.1904 on computer cards, magnetic tape or other machine readable format. The …

  6. 40 CFR 85.1905 - Alternative report formats.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    … § 85.1905 Alternative report formats. (a) Any manufacturer may submit a plan for making either of the reports required by §§ 85.1903 and 85.1904 on computer cards, magnetic tape or other machine readable format. The …

  7. Convergence Toward Common Standards in Machine-Readable Cataloging *

    PubMed Central

    Gull, C. D.

    1969-01-01

    The adoption of the MARC II format for the communication of bibliographic information by the three National Libraries of the U.S.A. makes it possible for those libraries to converge on the remaining necessary common standards for machine-readable cataloging. Three levels of standards are identified: fundamental, the character set; intermediate, MARC II; and detailed, the codes for identifying data elements. The convergence on these standards implies that the National Libraries can create and operate a Joint Bibliographic Data Bank requiring standard book numbers and universal serial numbers for identifying monographs and serials and that the system will thoroughly process contributed catalog entries before adding them to the Data Bank. There is reason to hope that the use of the MARC II format will facilitate catalogers' decision processes. PMID:5782261

  8. Can ASCII data files be standardized for Earth Science?

    NASA Astrophysics Data System (ADS)

    Evans, K. D.; Chen, G.; Wilson, A.; Law, E.; Olding, S. W.; Krotkov, N. A.; Conover, H.

    2015-12-01

    NASA's Earth Science Data Systems Working Groups (ESDSWG) was created over 10 years ago. The role of the ESDSWG is to make recommendations relevant to NASA's Earth science data systems from user experiences. Each group works independently, focusing on a unique topic. Participation in ESDSWG groups comes from a variety of NASA-funded science and technology projects, such as MEaSUREs, NASA information technology experts, affiliated contractors, staff, and other interested community members from academia and industry. Recommendations from the ESDSWG groups will enhance NASA's efforts to develop long-term data products. Each year, the ESDSWG has a face-to-face meeting to discuss recommendations and future efforts. Last year's (2014) ASCII for Science Data Working Group (ASCII WG) completed its goals and made recommendations on a minimum set of information that is needed to make ASCII files at least human readable and usable for the foreseeable future. The 2014 ASCII WG created a table of ASCII files and their components as a means for understanding what kinds of ASCII formats exist and what components they have in common. Using this table and adding information from other ASCII file formats, we will discuss the advantages and disadvantages of a standardized format. For instance, Space Geodesy scientists have been using the same RINEX/SINEX ASCII format for decades. Astronomers mostly archive their data in the FITS format. Yet Earth scientists seem to have a slew of ASCII formats, such as ICARTT, netCDF (an ASCII dump) and the IceBridge ASCII format. The 2015 Working Group is focusing on promoting extendibility and machine readability of ASCII data. Questions have been posed, including: Can we have a standardized ASCII file format? Can it be machine-readable and simultaneously human-readable? We will present a summary of the currently used ASCII formats in terms of advantages and shortcomings, as well as potential improvements.

  9. The feasibility of using natural language processing to extract clinical information from breast pathology reports.

    PubMed

    Buckley, Julliette M; Coopey, Suzanne B; Sharko, John; Polubriaginof, Fernanda; Drohan, Brian; Belli, Ahmet K; Kim, Elizabeth M H; Garber, Judy E; Smith, Barbara L; Gadd, Michele A; Specht, Michelle C; Roche, Constance A; Gudewicz, Thomas M; Hughes, Kevin S

    2012-01-01

    The opportunity to integrate clinical decision support systems into clinical practice is limited due to the lack of structured, machine readable data in the current format of the electronic health record. Natural language processing has been designed to convert free text into machine readable data. The aim of the current study was to ascertain the feasibility of using natural language processing to extract clinical information from >76,000 breast pathology reports. APPROACH AND PROCEDURE: Breast pathology reports from three institutions were analyzed using natural language processing software (Clearforest, Waltham, MA) to extract information on a variety of pathologic diagnoses of interest. Data tables were created from the extracted information according to date of surgery, side of surgery, and medical record number. The variety of ways in which each diagnosis could be represented was recorded, as a means of demonstrating the complexity of machine interpretation of free text. There was widespread variation in how pathologists reported common pathologic diagnoses. We report, for example, 124 ways of saying invasive ductal carcinoma and 95 ways of saying invasive lobular carcinoma. There were >4000 ways of saying invasive ductal carcinoma was not present. Natural language processor sensitivity and specificity were 99.1% and 96.5% when compared to expert human coders. We have demonstrated how a large body of free text medical information, such as that seen in breast pathology reports, can be converted to a machine readable format using natural language processing, and have described the inherent complexities of the task.
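
    The study used commercial NLP software, but the core difficulty it documents (many surface forms, plus negation, for one diagnosis) can be sketched in a few lines of Python. The phrase variants and the negation cue below are invented examples, not the study's lexicon.

      import re

      IDC_VARIANTS = [                      # a tiny, illustrative variant list
          r"invasive ductal carcinoma",
          r"infiltrating ductal carcinoma",
          r"carcinoma,\s+ductal,\s+invasive",
      ]
      NEGATION = re.compile(r"\b(no|not|without|negative for)\b", re.IGNORECASE)

      def mentions_idc(report_text):
          """True if any variant appears without a preceding negation cue."""
          for pattern in IDC_VARIANTS:
              m = re.search(pattern, report_text, re.IGNORECASE)
              if m and not NEGATION.search(report_text[:m.start()]):
                  return True
          return False

      print(mentions_idc("Final diagnosis: Infiltrating ductal carcinoma, grade 2."))  # True
      print(mentions_idc("Negative for invasive ductal carcinoma."))                   # False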

  10. Framework for Building Collaborative Research Environment

    DOE PAGES

    Devarakonda, Ranjeet; Palanisamy, Giriprakash; San Gil, Inigo

    2014-10-25

    A wide range of expertise and technologies is the key to solving some global problems. Semantic web technology can revolutionize the nature of how scientific knowledge is produced and shared. The semantic web is all about enabling machine-to-machine readability instead of routine human-to-human interaction. Carefully structured, machine-readable data are the key to enabling these interactions. Drupal is an example of one such toolset that can render all the functionalities of Semantic Web technology right out of the box. Drupal's content management system automatically stores the data in a structured format, enabling it to be machine readable. Within this paper, we will discuss how Drupal promotes collaboration in a research setting such as Oak Ridge National Laboratory (ORNL) and the Long Term Ecological Research Center (LTER) and how it is effectively using the Semantic Web in achieving this.

  11. Structuring supplemental materials in support of reproducibility.

    PubMed

    Greenbaum, Dov; Rozowsky, Joel; Stodden, Victoria; Gerstein, Mark

    2017-04-05

    Supplements are increasingly important to the scientific record, particularly in genomics. However, they are often underutilized. Optimally, supplements should make results findable, accessible, interoperable, and reusable (i.e., "FAIR"). Moreover, properly off-loading to them the data and detail in a paper could make the main text more readable. We propose a hierarchical organization for supplements, with some parts paralleling and "shadowing" the main text and other elements branching off from it, and we suggest a specific formatting to make this structure explicit. Furthermore, sections of the supplement could be presented in multiple scientific "dialects", including machine-readable and lay-friendly formats.

  12. 32 CFR 701.18 - Agency record.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., photographs, machine readable materials, inclusive of those in electronic form or format, or other documentary...) Hard copy or electronic records, which are subject to FOIA requests under 5 U.S.C. 552(a)(3), and which...

  13. 77 FR 7 - Revisions to Labeling Requirements for Blood and Blood Components, Including Source Plasma

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-03

    ... requirements will facilitate the use of a labeling system using machine-readable information that would be... components. Furthermore, we proposed the use of a labeling system using machine-readable information that...; Facilitates the use of a labeling system using machine-readable information that would be acceptable as a...

  14. 6 CFR 37.19 - Machine readable technology on the driver's license or identification card.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    … States must use the ISO/IEC 15438:2006(E) Information Technology—Automatic identification and data … § 37.19 Machine readable technology on the driver's license or …

  15. 6 CFR 37.19 - Machine readable technology on the driver's license or identification card.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    … States must use the ISO/IEC 15438:2006(E) Information Technology—Automatic identification and data … § 37.19 Machine readable technology on the driver's license or …

  16. Documentation for the machine-readable version of A Library of Stellar Spectra (Jacoby, Hunter and Christian 1984)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1984-01-01

    The machine-readable library as it is currently being distributed from the Astronomical Data Center is described. The library contains digital spectra for 161 stars of spectral classes O through M and luminosity classes 1, 3 and 5 in the wavelength range 3510 A to 7427 A. The resolution is approximately 4.5 A, while the typical photometric uncertainty of each resolution element is approximately 1 percent and broadband variations are 3 percent. The documentation includes a format description, a table of the indigenous characteristics of the magnetic tape file, and a sample listing of logical records exactly as they are recorded on the tape.

  17. Documentation for the machine-readable version of the Stellar Spectrophotometric Atlas, 3130 Å ≤ λ ≤ 10800 Å, of Gunn and Stryker (1983)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1984-01-01

    The machine-readable version of the Atlas as it is currently being distributed from the Astronomical Data Center is described. The data were obtained with the Oke multichannel scanner on the 5-meter Hale reflector for purposes of synthesizing galaxy spectra, and the digitized Atlas contains normalized spectral energy distributions, computed colors, scan line and continuum indices for 175 selected stars covering the complete ranges of spectral type and luminosity class. The documentation includes a byte-by-byte format description, a table of the indigenous characteristics of the magnetic tape file, and a sample listing of logical records exactly as they are recorded on the tape.

  18. Toward Automated Benchmarking of Atomistic Force Fields: Neat Liquid Densities and Static Dielectric Constants from the ThermoML Data Archive.

    PubMed

    Beauchamp, Kyle A; Behr, Julie M; Rustenburg, Ariën S; Bayly, Christopher I; Kroenlein, Kenneth; Chodera, John D

    2015-10-08

    Atomistic molecular simulations are a powerful way to make quantitative predictions, but the accuracy of these predictions depends entirely on the quality of the force field employed. Although experimental measurements of fundamental physical properties offer a straightforward approach for evaluating force field quality, the bulk of this information has been tied up in formats that are not machine-readable. Compiling benchmark data sets of physical properties from non-machine-readable sources requires substantial human effort and is prone to the accumulation of human errors, hindering the development of reproducible benchmarks of force-field accuracy. Here, we examine the feasibility of benchmarking atomistic force fields against the NIST ThermoML data archive of physicochemical measurements, which aggregates thousands of experimental measurements in a portable, machine-readable, self-annotating IUPAC-standard format. As a proof of concept, we present a detailed benchmark of the generalized Amber small-molecule force field (GAFF) using the AM1-BCC charge model against experimental measurements (specifically, bulk liquid densities and static dielectric constants at ambient pressure) automatically extracted from the archive and discuss the extent of data available for use in larger scale (or continuously performed) benchmarks. The results of even this limited initial benchmark highlight a general problem with fixed-charge force fields in the representation of low-dielectric environments, such as those seen in binding cavities or biological membranes.

  19. 76 FR 27048 - Information Collection Being Reviewed by the Federal Communications Commission

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-10

    ... Commission; (8) Ex parte notices must be submitted electronically in machine-readable format. PDF images created by scanning a paper document may not be submitted, except in cases in which a word-processing...

  20. Does Machine-Readable Documentation on Online Hosts and CD-ROMs Have a Role or Future?

    ERIC Educational Resources Information Center

    Harris, Stephen; Oppenheim, Charles

    1996-01-01

    Reports results of a United Kingdom-based mail survey of database users, CD-ROM producers, and hosts to assess trends and views concerning documentation in machine-readable form. Cost, convenience, and ease of use of print manuals are cited as reasons for the reluctance to switch to machine-readable documentation. Sample surveys are included.…

  1. Department of Defense Logistics Roadmap 2008. Volume 1

    DTIC Science & Technology

    2008-07-01

    machine readable identification mark on the Department’s tangible qualifying assets, and establishes the data management protocols needed to...uniquely identify items with a Unique Item Identifier (UII) via machine-readable information (MRI) marking represented by a two-dimensional data...property items with a machine-readable Unique Item Identifier (UII), which is a set of globally unique data elements. The UII is used in functional

  2. 76 FR 45794 - Notice of Public Information Collection(s) Being Reviewed by the Federal Communications...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-01

    ... must be submitted electronically in machine-readable format. PDF images created by scanning a paper document may not be submitted, except in cases in which a word-processing version of the document is not...

  3. CDISC SHARE, a Global, Cloud-based Resource of Machine-Readable CDISC Standards for Clinical and Translational Research

    PubMed Central

    Hume, Samuel; Chow, Anthony; Evans, Julie; Malfait, Frederik; Chason, Julie; Wold, J. Darcy; Kubick, Wayne; Becnel, Lauren B.

    2018-01-01

    The Clinical Data Interchange Standards Consortium (CDISC) is a global non-profit standards development organization that creates consensus-based standards for clinical and translational research. Several of these standards are now required by regulators for electronic submissions of regulated clinical trials’ data and by government funding agencies. These standards are free and open, available for download on the CDISC Website as PDFs. While these documents are human readable, they are not amenable to ready use by electronic systems. CDISC launched the CDISC Shared Health And Research Electronic library (SHARE) to provide the standards metadata in machine-readable formats to facilitate the automated management and implementation of the standards. This paper describes how CDISC SHARE's standards can facilitate collecting, aggregating and analyzing standardized data from early design to end analysis, and its role as a central resource providing information systems with metadata that drives process automation including study setup and data pipelining. PMID:29888049

  4. Comparison of Document Data Bases

    ERIC Educational Resources Information Center

    Schipma, Peter B.; And Others

    This paper presents a detailed analysis of the content and format of seven machine-readable bibliographic data bases: Chemical Abstracts Service Condensates, Chemical and Biological Activities, and Polymer Science and Technology, Biosciences Information Service's BA Previews including Biological Abstracts and BioResearch Index, Institute for…

  5. Survey of Commercially Available Computer-Readable Bibliographic Data Bases.

    ERIC Educational Resources Information Center

    Schneider, John H., Ed.; And Others

    This document contains the results of a survey of 94 U. S. organizations, and 36 organizations in other countries that were thought to prepare machine-readable data bases. Of those surveyed, 55 organizations (40 in U. S., 15 in other countries) provided completed camera-ready forms describing 81 commercially available, machine-readable data bases…

  6. 77 FR 72337 - Apps for Vehicles Challenge

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-05

    ... innovation while rigorously protecting privacy. The primary fuel for the Energy Data Initiative is open data. Open data can take many forms but generally includes information that is machine-readable, freely accessible and in an industry-standard format. In particular, open data from the private sector made...

  7. A search for ultraviolet-excess objects (Kondo, Noguchi, and Maehara 1984): Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.

    1990-01-01

    A list of 1186 ultraviolet-excess objects (designated KUV) was compiled as a result of a search conducted with the 105-cm Schmidt telescope of the Kiso station of the Tokyo Astronomical Observatory. This document describes the machine readable version of the KUV survey list and presents a sample listing showing the logical records as they are recorded in the machine readable catalog. The KUV data include equatorial coordinates, magnitudes, color indices, and identifications for previously cataloged objects.

  8. IFLA General Conference, 1986. Bibliographic Control Division. Section: Cataloguing. Papers.

    ERIC Educational Resources Information Center

    International Federation of Library Associations and Institutions, The Hague (Netherlands).

    Papers on cataloging which were presented at the 1986 International Federation of Library Associations (IFLA) conference include: (1) "Cataloging of Government Documents in the Age of Automation" (Chong Y. Yoon, United States), which discusses the use of MARC (Machine-Readable Cataloging) formats to integrate government documents into…

  9. Common Bibliographic Standards for Baylor University Libraries. Revised.

    ERIC Educational Resources Information Center

    Scott, Sharon; And Others

    Developed by a Baylor University (Texas) Task Force, the revised policies of bibliographic standards for the university libraries provide formats for: (1) archives and manuscript control; (2) audiovisual media; (3) books; (4) machine-readable data files; (5) maps; (6) music scores; (7) serials; and (8) sound recordings. The task force assumptions…

  10. Optical Scanning for Retrospective Conversion of Information.

    ERIC Educational Resources Information Center

    Hein, Morten

    1986-01-01

    This discussion of the use of optical scanning and computer formatting for retrospective conversion focuses on a series of applications known as Optical Scanning for Creation of Information Databases (OSCID). Prior research in this area and the usefulness of OSCID for creating low-priced machine-readable data representing older materials are…

  11. Visualization of Learning Scenarios with UML4LD

    ERIC Educational Resources Information Center

    Laforcade, Pierre

    2007-01-01

    Present Educational Modelling Languages are used to formally specify abstract learning scenarios in a machine-interpretable format. Current tooling does not provide teachers/designers with graphical facilities to help them reuse existing scenarios. They need human-readable representations. This paper discusses the UML4LD experimental…

  12. NASA STI Program Coordinating Council Twelfth Meeting: Standards

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The theme of this NASA Scientific and Technical Information Program Coordinating Council Meeting was standards and their formation and application. Topics covered included scientific and technical information architecture, the Open Systems Interconnection Transmission Control Protocol/Internet Protocol, Machine-Readable Cataloging (MARC) open system environment procurement, and the Government Information Locator Service.

  13. A survey of machine readable data bases

    NASA Technical Reports Server (NTRS)

    Matlock, P.

    1981-01-01

    Forty-two of the machine readable data bases available to the technologist and researcher in the natural sciences and engineering are described and compared with the data bases and data base services offered by NASA.

  14. The critical evaluation of stellar data

    NASA Technical Reports Server (NTRS)

    Underhill, A. B.; Mead, J. M.; Nagy, T. A.

    1977-01-01

    The paper discusses the importance of evaluating a catalog of stellar data, whether it is an old catalog being made available in machine-readable form, or a new catalog written expressly in machine-readable form, and discusses some principles to be followed in the evaluation of such data. A procedure to be followed when checking out an astronomical catalog on magnetic tape is described. A cross index system which relates the different identification numbers of a star or other astronomical object as they appear in different catalogs in machine-readable form is described.

  15. PDB explorer -- a web based algorithm for protein annotation viewer and 3D visualization.

    PubMed

    Nayarisseri, Anuraj; Shardiwal, Rakesh Kumar; Yadav, Mukesh; Kanungo, Neha; Singh, Pooja; Shah, Pratik; Ahmed, Sheaza

    2014-12-01

    The PDB file format is a text format characterizing the three-dimensional structures of macromolecules available in the Protein Data Bank (PDB). Determined protein structures are often found in association with other molecules or ions such as nucleic acids, water, ions and drug molecules, which can therefore be described in the PDB format and have been deposited in the PDB database. A PDB file is machine generated and not in a human-readable format; a computational tool is needed to interpret it. The objective of our present study is to develop free online software for retrieval, visualization and reading of the annotation of a protein 3D structure available in the PDB database. The main aim is to present the PDB file in a human-readable format, i.e., the information in the PDB file is converted into readable sentences. It displays all possible information from a PDB file, including the 3D structure of that file. Programming languages and scripting languages like Perl, CSS, Javascript, Ajax, and HTML have been used for the development of PDB Explorer. The PDB Explorer directly parses the PDB file, calling methods for each parsed element: secondary structure elements, atoms, coordinates, etc. PDB Explorer is freely available at http://www.pdbexplorer.eminentbio.com/home with no requirement of log-in.
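
    In the same spirit as the tool described above, the short Python sketch below turns a few standard PDB records into a readable sentence; it is not the PDB Explorer code, and only the fixed PDB column positions for TITLE/ATOM lines are assumed.

      def describe_pdb(lines):
          title, atoms, chains = [], 0, set()
          for line in lines:
              record = line[:6].strip()               # record name, columns 1-6
              if record == "TITLE":
                  title.append(line[10:80].strip())   # title text, columns 11-80
              elif record in ("ATOM", "HETATM"):
                  atoms += 1
                  chains.add(line[21])                # chain identifier, column 22
          return (f"Structure '{' '.join(title)}' contains {atoms} atom records "
                  f"in {len(chains)} chain(s): {', '.join(sorted(chains))}.")

      sample = [
          "TITLE     EXAMPLE PROTEIN STRUCTURE",
          "ATOM      1  N   MET A   1      11.104  13.207   2.100  1.00 20.00           N",
          "ATOM      2  CA  MET A   1      12.560  13.300   2.300  1.00 20.00           C",
      ]
      print(describe_pdb(sample))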

  16. The Exchange of Bibliographic Data in Non-Roman Scripts.

    ERIC Educational Resources Information Center

    Wellisch, Hans H.

    1980-01-01

    Advocates the use of machine readable codes to accomplish romanization and promote the exchange of bibliographic data. Proposals are presented for transliteration standards, design of machine readable conversion codes, and the establishment of databases. (RAA)

  17. 26 CFR 1.197-2 - Amortization of goodwill and certain other intangibles.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ..., process, design, pattern, know-how, format, package design, computer software (as defined in paragraph (c... agreement that provides one of the parties to the agreement with the right to distribute, sell, or provide... any program or routine (that is, any sequence of machine-readable code) that is designed to cause a...

  18. Criteria for Labelling Prosodic Aspects of English Speech.

    ERIC Educational Resources Information Center

    Bagshaw, Paul C.; Williams, Briony J.

    A study reports a set of labelling criteria which have been developed to label prosodic events in clear, continuous speech, and proposes a scheme whereby this information can be transcribed in a machine-readable format. Prosody was annotated in a syllabic domain synchronized with a phonemic segmentation. A procedural definition of…

  19. Facilitating knowledge discovery and visualization through mining contextual data from published studies: lessons from JournalMap

    USDA-ARS?s Scientific Manuscript database

    Valuable information on the location and context of ecological studies is locked up in publications in myriad formats that are not easily machine readable. This presents significant challenges to building geographic-based tools to search for and visualize sources of ecological knowledge. JournalMap...

  20. Design of Formats and Packs of Catalog Cards.

    ERIC Educational Resources Information Center

    OCLC Online Computer Library Center, Inc., Dublin, OH.

    The three major functions of the Ohio College Library Center's Shared Cataloging System are: 1) provision of union catalog location listing; 2) making available cataloging done by one library to all other users of the system; and 3) production of catalog cards. The system, based on a central machine readable data base, speeds cataloging and…

  1. MARC Data, the OPAC, and Library Professionals

    ERIC Educational Resources Information Center

    Williams, Jo

    2009-01-01

    Purpose: The purpose of this paper is to show that knowledge of the Machine-Readable Cataloguing (MARC) format is useful in all aspects of librarianship, not just for cataloguing, and how MARC knowledge can address indexing limitations of the online catalogue. Design/methodology/approach: The paper employs examples and scenarios to show the…

  2. Assessing Information on the Internet: Toward Providing Library Services for Computer-Mediated Communication. A Final Report.

    ERIC Educational Resources Information Center

    Dillon, Martin; And Others

    The Online Computer Library Center Internet Resource project focused on the nature of electronic textual information available through remote access using the Internet and the problems associated with creating machine-readable cataloging (MARC) records for these objects using current USMARC format for computer files and "Anglo-American…

  3. IFLA General Conference, 1989. Introduction to IFLA's Core Programmes; Contributed Papers; Plenary Session Papers. Booklet 00.

    ERIC Educational Resources Information Center

    International Federation of Library Associations, The Hague (Netherlands).

    This collection contains three papers providing an introduction to the International Federation of Library Associations (IFLA) Core Programs, four contributed papers, and two Plenary Session papers: (1) "The Universal Bibliographic Control and International MARC (Machine Readable Cataloging Formats) Program" (Winston D. Roberts); (2) "Le Programme…

  4. Documentation for the machine-readable version of a table of Redshifts for Abell clusters (Sarazin, Rood and Struble 1982)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1983-01-01

    The machine-readable catalog is described. The machine version contains the same data as the published table and includes a second file with the notes. The computerized data files were prepared at the Astronomical Data Center. Detected discrepancies and cluster identifications based on photometric estimators are included.

  5. Documentation for the machine-readable version of A Finding List of Stars of Spectral Type F2 and Earlier in a North Galactic Pole Region

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1982-01-01

    The machine-readable data set is the result of an objective-prism survey made with an 80 cm/120 cm Schmidt telescope. The F2 and earlier stars were isolated from later type objects by using the MK classification criteria. The catalog contains 601 stars and includes cross identifications to the BD and HD catalogs, coordinates, photographic magnitudes and spectral types. A separate file contains the remarks from the original data tables merged with those following the data. The machine-readable files are described.

  6. A Machine Learning Approach to Measurement of Text Readability for EFL Learners Using Various Linguistic Features

    ERIC Educational Resources Information Center

    Kotani, Katsunori; Yoshimi, Takehiko; Isahara, Hitoshi

    2011-01-01

    The present paper introduces and evaluates a readability measurement method designed for learners of EFL (English as a foreign language). The proposed readability measurement method (a regression model) estimates the text readability based on linguistic features, such as lexical, syntactic and discourse features. Text readability refers to the…

  7. Constructing and validating readability models: the method of integrating multilevel linguistic features with machine learning.

    PubMed

    Sung, Yao-Ting; Chen, Ju-Ling; Cha, Ji-Her; Tseng, Hou-Chiang; Chang, Tao-Hsing; Chang, Kuo-En

    2015-06-01

    Multilevel linguistic features have been proposed for discourse analysis, but there have been few applications of multilevel linguistic features to readability models and also few validations of such models. Most traditional readability formulae are based on generalized linear models (GLMs; e.g., discriminant analysis and multiple regression), but these models have to comply with certain statistical assumptions about data properties and include all of the data in formulae construction without pruning the outliers in advance. The use of such readability formulae tends to produce a low text classification accuracy, while using a support vector machine (SVM) in machine learning can enhance the classification outcome. The present study constructed readability models by integrating multilevel linguistic features with SVM, which is more appropriate for text classification. Taking the Chinese language as an example, this study developed 31 linguistic features as the predicting variables at the word, semantic, syntax, and cohesion levels, with grade levels of texts as the criterion variable. The study compared four types of readability models by integrating unilevel and multilevel linguistic features with GLMs and an SVM. The results indicate that adopting a multilevel approach in readability analysis provides a better representation of the complexities of both texts and the reading comprehension process.
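
    A schematic Python version of the modelling setup described above is sketched below using scikit-learn: a handful of shallow text features feeding an SVM classifier of grade level. The three features and the tiny data set are invented placeholders, not the paper's 31 Chinese-language features.

      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # each row: [mean sentence length, mean word length, type-token ratio]
      X = [[8.2, 4.1, 0.62], [12.5, 4.8, 0.55], [21.3, 5.6, 0.48],
           [9.0, 4.0, 0.60], [14.1, 5.0, 0.52], [23.8, 5.9, 0.45]]
      y = [1, 2, 3, 1, 2, 3]                     # grade level of each text

      model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
      model.fit(X, y)
      print(model.predict([[11.0, 4.5, 0.57]]))  # predicted grade for a new text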

  8. 26 CFR 1.197-2 - Amortization of goodwill and certain other intangibles.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ..., process, design, pattern, know-how, format, package design, computer software (as defined in paragraph (c... section 1253(b)(1) and includes any agreement that provides one of the parties to the agreement with the... any program or routine (that is, any sequence of machine-readable code) that is designed to cause a...

  9. Supporting Open Access to European Academic Courses: The ASK-CDM-ECTS Tool

    ERIC Educational Resources Information Center

    Sampson, Demetrios G.; Zervas, Panagiotis

    2013-01-01

    Purpose: This paper aims to present and evaluate a web-based tool, namely ASK-CDM-ECTS, which facilitates authoring and publishing on the web descriptions of (open) academic courses in machine-readable format using an application profile of the Course Description Metadata (CDM) specification, namely CDM-ECTS. Design/methodology/approach: The paper…

  10. Ownership of Machine-Readable Bibliographic Data. Canadian Network Papers Number 5 = Propriete des Donnees Bibliographique Lisibles par Machine. Documents sur les Resaux Canadiens Numero 5.

    ERIC Educational Resources Information Center

    Duchesne, R. M.; And Others

    Because of data ownership questions raised by the interchange and sharing of machine readable bibliographic data, this paper was prepared for the Bibliographic and Communications Network Committee of the National Library Advisory Board. Background information and definitions are followed by a review of the legal aspects relating to property and…

  11. 48 CFR 252.211-7003 - Item unique identification and valuation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... reader or interrogator, used to retrieve data encoded on machine-readable media. Concatenated unique item... identifier. Item means a single hardware article or a single unit formed by a grouping of subassemblies... manufactured under identical conditions. Machine-readable means an automatic identification technology media...

  12. Documentation for the machine-readable version of the AGK3 Star Catalogue of Positions and Proper Motions North of -2 deg .5 declination (Dieckvoss and Collaborators 1975)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1984-01-01

    A detailed description of the machine-readable astronomical catalog as it is currently being distributed from the Astronomical Data Center is given. Stellar motions and positions are listed herein in tabular form.

  13. Banknotes and unattended cash transactions

    NASA Astrophysics Data System (ADS)

    Bernardini, Ronald R.

    2000-04-01

    There is a 64 billion dollar annual unattended cash transaction business in the US with 10 to 20 million daily transactions. Even small problems with the machine readability of banknotes can quickly become a major problem to the machine manufacturer and consumer. Traditional note designs incorporate overt security features for visual validation by the public. Many of these features such as fine line engraving, microprinting and watermarks are unsuitable as machine readable features in low cost note acceptors. Current machine readable features, mostly covert, were designed and implemented with the central banks in mind. These features are only usable by the banks' large, high speed currency sorting and validation equipment. New note designs should consider and provide for low cost note acceptors, implementing features developed for inexpensive sensing technologies. Machine readable features are only as good as their consistency. Quality of security features as well as that of the overall printing process must be maintained to ensure reliable and secure operation of note readers. Variations in printing and in the components used to make the note are among the major causes of poor performance in low cost note acceptors. The involvement of machine manufacturers in new currency designs will aid note producers in the design of a note that is machine friendly, helping to secure the acceptance of the note by the public as well as acting as a deterrent to fraud.

  14. A Study of Readability of Texts in Bangla through Machine Learning Approaches

    ERIC Educational Resources Information Center

    Sinha, Manjira; Basu, Anupam

    2016-01-01

    In this work, we have investigated text readability in Bangla language. Text readability is an indicator of the suitability of a given document with respect to a target reader group. Therefore, text readability has huge impact on educational content preparation. The advances in the field of natural language processing have enabled the automatic…

  15. Collaborative Planning of Robotic Exploration

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey; Backes, Paul; Powell, Mark; Vona, Marsette; Steinke, Robert

    2004-01-01

    The Science Activity Planner (SAP) software system includes an uplink-planning component, which enables collaborative planning of activities to be undertaken by an exploratory robot on a remote planet or on Earth. Included in the uplink-planning component is the SAP-Uplink Browser, which enables users to load multiple spacecraft activity plans into a single window, compare them, and merge them. The uplink-planning component includes a subcomponent that implements the Rover Markup Language Activity Planning format (RML-AP), based on the Extensible Markup Language (XML) format that enables the representation, within a single document, of planned spacecraft and robotic activities together with the scientific reasons for the activities. Each such document is highly parseable and can be validated easily. Another subcomponent of the uplink-planning component is the Activity Dictionary Markup Language (ADML), which eliminates the need for two mission activity dictionaries - one in a human-readable format and one in a machine-readable format. Style sheets that have been developed along with the ADML format enable users to edit one dictionary in a user-friendly environment without compromising

  16. Financial Statistics. Higher Education General Information Survey (HEGIS) [machine-readable data file].

    ERIC Educational Resources Information Center

    Center for Education Statistics (ED/OERI), Washington, DC.

    The Financial Statistics machine-readable data file (MRDF) is a subfile of the larger Higher Education General Information Survey (HEGIS). It contains basic financial statistics for over 3,000 institutions of higher education in the United States and its territories. The data are arranged sequentially by institution, with institutional…

  17. 45 CFR 205.57 - Maintenance of a machine readable file; requests for income and eligibility information.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Public Welfare OFFICE OF FAMILY ASSISTANCE (ASSISTANCE PROGRAMS), ADMINISTRATION FOR CHILDREN AND FAMILIES, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION-PUBLIC ASSISTANCE PROGRAMS § 205... 45 Public Welfare 2 2012-10-01 2012-10-01 false Maintenance of a machine readable file; requests...

  18. 45 CFR 205.57 - Maintenance of a machine readable file; requests for income and eligibility information.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Public Welfare OFFICE OF FAMILY ASSISTANCE (ASSISTANCE PROGRAMS), ADMINISTRATION FOR CHILDREN AND FAMILIES, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION-PUBLIC ASSISTANCE PROGRAMS § 205... 45 Public Welfare 2 2013-10-01 2012-10-01 true Maintenance of a machine readable file; requests...

  19. 45 CFR 205.57 - Maintenance of a machine readable file; requests for income and eligibility information.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Public Welfare OFFICE OF FAMILY ASSISTANCE (ASSISTANCE PROGRAMS), ADMINISTRATION FOR CHILDREN AND FAMILIES, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION-PUBLIC ASSISTANCE PROGRAMS § 205... 45 Public Welfare 2 2014-10-01 2012-10-01 true Maintenance of a machine readable file; requests...

  20. 45 CFR 205.57 - Maintenance of a machine readable file; requests for income and eligibility information.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Public Welfare OFFICE OF FAMILY ASSISTANCE (ASSISTANCE PROGRAMS), ADMINISTRATION FOR CHILDREN AND FAMILIES, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION-PUBLIC ASSISTANCE PROGRAMS § 205... 45 Public Welfare 2 2011-10-01 2011-10-01 false Maintenance of a machine readable file; requests...

  1. Elementary and Secondary School Civil Rights Survey, 1984 [machine-readable data file].

    ERIC Educational Resources Information Center

    DBS Corp., Arlington, VA.

    The "Elementary and Secondary School Civil Rights Survey" machine-readable data file (MRDF) contains data on the characteristics of student populations enrolled in public schools throughout the United States. The emphasis is on data by race/ethnicity and sex in the following areas: stereotyping in courses, special education, vocational education,…

  2. Choosing the Future: College Students' Projections of Their Personal Life Patterns [machine-readable data file].

    ERIC Educational Resources Information Center

    Thomas, Joan

    "Choosing the Future: College Students' Projections of Their Personal Life Patterns" is a machine-readable data file (MRDF) prepared by the principal investigator in connection with her doctoral program studies and her 1986 unpublished doctoral dissertation prepared in the Department of Psychology at the University of Cincinnati. The…

  3. COM: Decisions and Applications in a Small University Library.

    ERIC Educational Resources Information Center

    Schwarz, Philip J.

    Computer-output microfilm (COM) is used at the University of Wisconsin-Stout Library to generate reports from its major machine readable data bases. Conditions indicating the need to convert to COM include existence of a machine readable data base and high cost of report production. Advantages and disadvantages must also be considered before…

  4. Documentation for the machine-readable version of the Lowell Proper Motion Survey northern hemisphere, the G numbered stars

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1982-01-01

    Observed positions, proper motions, estimated photographic magnitudes and colors, and references to identifications in other catalogs are included. Photoelectric data on the UBV system are included for many stars, but no attempt was made to find all existing photometry. The machine-readable catalog is described.

  5. Student Achievement Study, 1970-1974. The IEA Six-Subject Data Bank [machine-readable data file].

    ERIC Educational Resources Information Center

    International Association for the Evaluation of Educational Achievement, Stockholm (Sweden).

    The "Student Achievement Study" machine-readable data files (MRDF) (also referred to as the "IEA Six-Subject Survey") are the result of an international data collection effort during 1970-1974 by 21 designated National Centers, which had agreed to cooperate. The countries involved were: Australia, Belgium, Chile, England-Wales,…

  6. Migrant Student Record Transfer System (MSRTS) [machine-readable data file].

    ERIC Educational Resources Information Center

    Arkansas State Dept. of Education, Little Rock. General Education Div.

    The Migrant Student Record Transfer System (MSRTS) machine-readable data file (MRDF) is a collection of education and health data on more than 750,000 migrant children in grades K-12 in the United States (except Hawaii), the District of Columbia, and the outlying territories of Puerto Rico and the Mariana and Marshall Islands. The active file…

  7. Machine-Readable Data Files in the Social Sciences: An Anthropologist and a Librarian Look at the Issues.

    ERIC Educational Resources Information Center

    Bernard, H. Russell; Jones, Ray

    1984-01-01

    Focuses on problems in making machine-readable data files (MRDFs) accessible and in using them: quality of data in MRDFs themselves (social scientists' concern) and accessibility--availability of bibliographic control, quality of documentation, level of user skills (librarians' concern). Skills needed by social scientists and librarians are…

  8. 49 CFR 573.4 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... than a tire) that was installed in or on a motor vehicle at the time of its delivery to the first purchaser if the item of equipment was installed on or in the motor vehicle at the time of its delivery to a... readable by machine. If readable by machine, the submitting party must obtain written confirmation from the...

  9. 49 CFR 573.4 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... than a tire) that was installed in or on a motor vehicle at the time of its delivery to the first purchaser if the item of equipment was installed on or in the motor vehicle at the time of its delivery to a... readable by machine. If readable by machine, the submitting party must obtain written confirmation from the...

  10. 49 CFR 573.4 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... than a tire) that was installed in or on a motor vehicle at the time of its delivery to the first purchaser if the item of equipment was installed on or in the motor vehicle at the time of its delivery to a... readable by machine. If readable by machine, the submitting party must obtain written confirmation from the...

  11. Documentation for the machine-readable version of The Revised AFGL Infrared Sky Survey Catalog (Price and Murdock 1983)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1984-01-01

    A detailed description of the machine-readable catalog as it is currently being distributed from the Astronomical Data Center is given. The catalog contains a main data file of 2970 sources and a supplemental file of 3176 sources measured at wavelengths of 4.2, 11, 20 and 27 microns.

  12. Documentation for the machine-readable version of the third Santiago-Pulkovo Fundamental Stars Catalogue (SPF-3)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1982-01-01

    The machine-readable version of a catalog of right ascensions of 671 bright stars is described. The observations were made in a series consisting of 70 stars observed along the meridian from +42 deg to the pole in upper culmination and from the pole to -70 deg in lower culmination.

  13. A catalog of stellar spectrophotometry (Adelman, et al. 1989): Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.; Adelman, Saul J.

    1990-01-01

    The machine-readable version of the catalog, as it is currently being distributed from the astronomical data centers, is described. The catalog is a collection of spectrophotometric observations made using rotating grating scanners and calibrated with the fluxes of Vega. The observations cover various wavelength regions between about 330 and 1080 nm.

  14. Bibliography On Multiprocessors And Distributed Processing

    NASA Technical Reports Server (NTRS)

    Miya, Eugene N.

    1988-01-01

    The Multiprocessor and Distributed Processing Bibliography package consists of a large machine-readable bibliographic data base which, in addition to supporting the usual keyword searches, is used for producing citations, indexes, and cross-references. The data base contains UNIX(R) "refer"-formatted ASCII data and is implemented on any computer running the UNIX(R) operating system. It is easily convertible to other operating systems and requires approximately one megabyte of secondary storage. The bibliography was compiled in 1985.
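
    The UNIX "refer" format mentioned above stores each citation as a block of %-prefixed fields (e.g. %A author, %T title, %D date, %K keywords) separated by blank lines. As a rough sketch of how such a data base supports keyword searches, the snippet below parses refer-style records with plain Python; the sample records are invented for illustration.

      # Sketch: parse UNIX "refer"-formatted bibliographic records and search by keyword.
      # The sample records are invented; real refer files use the same %-field convention.
      sample = """%A E. N. Miya
      %T Multiprocessors and Distributed Processing: A Bibliography
      %D 1985
      %K multiprocessor distributed-processing

      %A A. N. Other
      %T Interconnection Networks for Parallel Machines
      %D 1984
      %K multiprocessor network
      """

      def parse_refer(text):
          records = []
          for block in text.strip().split("\n\n"):        # records are blank-line separated
              rec = {}
              for line in block.splitlines():
                  line = line.strip()
                  if line.startswith("%") and len(line) > 2:
                      rec.setdefault(line[1], []).append(line[3:])  # field letter -> values
              records.append(rec)
          return records

      for rec in parse_refer(sample):
          if any("network" in kw for kw in rec.get("K", [])):
              print(rec["A"][0], "-", rec["T"][0])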

  15. Southern Durchmusterung (Schoenfeld 1886): Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.; Ochsenbein, Francois

    1989-01-01

    The machine-readable version of the catalog, as it is currently being distributed from the Astronomical Data Center, is described. The Southern Durchmusterung (SD) was computerized at the Centre de Donnees Astronomiques de Strasbourg and at the Astronomical Data Center at the National Space Science Data Center, NASA/Goddard Space Flight Center. Corrigenda listed in the original SD volume and published by Kuenster and Sticker were incorporated into the machine file. In addition, one star indicated to be missing in a published list, and later verified, is flagged so that it can be omitted from computer plotted charts if desired. Stars deleted in the various errata lists were similarly flagged, while those with revised data are flagged and listed in a separate table. This catalog covers the zones -02 to -23 degrees; zones +89 to -01 degrees (the Bonner Durchmusterung) are included in a separate catalog available in machine-readable form.

  16. Documentation for the machine-readable version of the Cape Photographic Durchmusterung (CPD)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1984-01-01

    The machine-readable version of the catalog, as it is currently being distributed from the Astronomical Data Center, is described. The complete catalog is contained in the magnetic tape file, and corrections published in all errata have been made to the data. The machine version contains 454877 records, but only 454875 stars (two stars were later deleted, but their logical records are retained in the file so that the zone counts are not different from the published catalog).

  17. Documentation for the machine-readable version of the Cordoba Durchmusterung (CD)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1984-01-01

    The machine-readable version of the catalog, as it is currently being distributed from the Astronomical Data Center, is presented. The complete catalog is contained in the magnetic tape file, and corrections published in all corrigenda were made to the data. The machine version contains 613959 records, but only 613953 stars (six stars were later deleted, but their logical records are retained in the file so that the zone counts are not different from the published catalog).

  18. Proceedings of the Conference on Machine-Readable Catalog Copy (3rd, Library of Congress, February 25, 1966).

    ERIC Educational Resources Information Center

    Library of Congress, Washington, DC.

    A conference was held to permit a discussion between the libraries that will participate in the Library of Congress machine-readable cataloging (MARC) pilot project. The MARC pilot will provide an opportunity for the Library of Congress to assess the effect which data conversion places on the Library's normal processing procedures; the suitability…

  19. 45 CFR 205.57 - Maintenance of a machine readable file; requests for income and eligibility information.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 2 2010-10-01 2010-10-01 false Maintenance of a machine readable file; requests for income and eligibility information. 205.57 Section 205.57 Public Welfare Regulations Relating to Public Welfare OFFICE OF FAMILY ASSISTANCE (ASSISTANCE PROGRAMS), ADMINISTRATION FOR CHILDREN AND FAMILIES, DEPARTMENT OF HEALTH AND HUMAN SERVICES...

  20. CLOSSS: A Machine Readable Data Base of Social Science Serials, Progress Report, 1971-1972. Working Paper No. 8.

    ERIC Educational Resources Information Center

    Roberts, S. A.; Bradshaw, R. G.

    Design of Information Systems in the Social Sciences (DISISS) is a research project conducted to describe the main characteristics of the literature of the social sciences using bibliometric techniques. A comprehensive machine readable file of social science serials was developed which is called CLOSSS (Check List of Social Science Serials). Data…

  1. High-capacity high-speed recording

    NASA Astrophysics Data System (ADS)

    Jamberdino, A. A.

    1981-06-01

    Continuing advances in wideband communications and information handling are leading to extremely large volume digital data systems for which conventional data storage techniques are becoming inadequate. The paper presents an assessment of alternative recording technologies for the extremely wideband, high capacity storage and retrieval systems currently under development. Attention is given to longitudinal and rotary head high density magnetic recording, laser holography in human readable/machine readable devices and a wideband recorder, digital optical disks, and spot recording in microfiche formats. The electro-optical technologies considered are noted to be capable of providing data bandwidths up to 1000 megabits/sec and total data storage capacities in the 10^11 to 10^12 bit range, an order of magnitude improvement over conventional technologies.

  2. Development of OCR system for portable passport and visa reader

    NASA Astrophysics Data System (ADS)

    Visilter, Yury V.; Zheltov, Sergey Y.; Lukin, Anton A.

    1999-01-01

    Modern passport and visa documents include special machine-readable zones that satisfy the ICAO standards, which makes it possible to build automatic passport and visa readers. However, there are some special problems in such OCR systems: low resolution of character images captured by the CCD camera (down to 150 dpi), essential shifts and slopes (up to 10 degrees), rich paper texture under the character symbols, and non-homogeneous illumination. This paper presents the structure and some special aspects of an OCR system for a portable passport and visa reader. In our approach the binarization procedure is performed after the segmentation step, and it is applied to each character site separately. The character recognition procedure uses the structural information of the machine-readable zone. Special algorithms are developed for machine-readable zone extraction and character segmentation.
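
    The ICAO machine-readable zone referred to above protects fields such as the document number, birth date and expiry date with check digits computed from a fixed 7-3-1 weighting scheme. As a small worked example, the sketch below implements that standard calculation in Python; the sample field value is illustrative.

      # ICAO 9303 check digit: map characters to values (digits as-is, A=10..Z=35,
      # '<' filler = 0), multiply by the repeating weights 7, 3, 1, sum, take mod 10.
      def mrz_check_digit(field: str) -> int:
          weights = (7, 3, 1)
          total = 0
          for i, ch in enumerate(field.upper()):
              if ch.isdigit():
                  value = int(ch)
              elif ch.isalpha():
                  value = ord(ch) - ord("A") + 10
              else:                      # '<' and any other filler count as 0
                  value = 0
              total += value * weights[i % 3]
          return total % 10

      print(mrz_check_digit("L898902C3"))   # sample document number; prints 6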

  3. Documentation for the machine-readable version of the catalog of galactic O type stars

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1982-01-01

    The Catalog of Galactic O-Type Stars (Garmany, Conti and Chiosi 1982), a compilation from the literature of all O-type stars for which spectral types, luminosity classes and UBV photometry exist, contains 765 stars, for each of which designation (HD, DM, etc.), spectral type, V, B-V, cluster membership, Galactic coordinates, and source references are given. Derived values of absolute visual and bolometric magnitudes, and distances are included. The source reference should be consulted for additional details concerning the derived quantities. This description of the machine-readable version of the catalog seeks to enable users to read and process the data with a minimum of guesswork. A copy of this document should be distributed with any machine readable version of the catalog.

  4. Salaries, Tenure, and Fringe Benefits of Full-Time Instructional Faculty. Higher Education General Information Survey (HEGIS) [machine-readable data file].

    ERIC Educational Resources Information Center

    VSE Corp., Alexandria, VA.

    The "Faculty Salary Survey" machine-readable data file (MRDF) is one component of the Higher Education General Information Survey (HEGIS). It contains data about salaries, tenure, and fringe benefits for full-time instructional faculty from over 3,000 institutions of higher education located in the United States and its outlying areas.…

  5. Documentation for the machine-readable version of A Catalogue of Extragalactic Radio Source Identifications (Veron-Cetty and Veron 1983)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1983-01-01

    Detailed descriptions of the data and reference files of the updated and final version of the machine-readable catalog are given. The computerized catalog has greatly expanded since the original published version (1974), and additional information is given. A separate reference file contains bibliographical citations ordered simultaneously by numerical reference and alphabetically by author.

  6. Documentation for the machine-readable version of the Uppsala general catalogue of galaxies

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1982-01-01

    The machine-readable version of the catalog containing descriptions of galaxies, their surrounding areas, and position angles for flattened galaxies is described. In addition to the correction of several errors discovered in a previous computerized version, a few duplicate records were removed and the record structure was revised slightly to accommodate a large data value and to remove superfluous blanks.

  7. A multiplet table for Mn I (Adelman, Svatek, Van Winkler, Warren 1989): Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.; Adelman, Saul J.

    1989-01-01

    The machine-readable version of the multiplet table, as it is currently being distributed from the Astronomical Data Center, is described. The computerized version of the table contains data on excitation potentials, J values, multiplet terms, intensities of the transitions, and multiplet numbers. Files ordered by multiplet and by wavelength are included in the distributed version.

  8. Documentation for the machine-readable version of the catalogue of 20457 Star positions obtained by photography in the declination zone -48 deg to -54 deg (1950)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1983-01-01

    The machine readable catalog, as it is distributed from the Astronomical Data Center, is described. Some minor reformatting of the magnetic tape version as received was performed to decrease the record size and conserve space; the data content is identical to the sample shown in Table VI of the source reference.

  9. Documentation for the machine-readable version of the Lick Saturn-Voyager Reference Star Catalogue

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1982-01-01

    The machine-readable version of the catalog is described. The catalog was prepared in order to determine accurate equatorial coordinates for reference stars in a band of sky against which cameras of the Voyager spacecraft were aligned for observations in the region of Saturn during the flyby. Tape contents and characteristics are described and a sample listing presented.

  10. Xeml Lab: a tool that supports the design of experiments at a graphical interface and generates computer-readable metadata files, which capture information about genotypes, growth conditions, environmental perturbations and sampling strategy.

    PubMed

    Hannemann, Jan; Poorter, Hendrik; Usadel, Björn; Bläsing, Oliver E; Finck, Alex; Tardieu, Francois; Atkin, Owen K; Pons, Thijs; Stitt, Mark; Gibon, Yves

    2009-09-01

    Data mining depends on the ability to access machine-readable metadata that describe genotypes, environmental conditions, and sampling times and strategy. This article presents Xeml Lab. The Xeml Interactive Designer provides an interactive graphical interface at which complex experiments can be designed, and concomitantly generates machine-readable metadata files. It uses a new eXtensible Mark-up Language (XML)-derived dialect termed XEML. Xeml Lab includes a new ontology for environmental conditions, called Xeml Environment Ontology. However, to provide versatility, it is designed to be generic and also accepts other commonly used ontology formats, including OBO and OWL. A review summarizing important environmental conditions that need to be controlled, monitored and captured as metadata is posted in a Wiki (http://www.codeplex.com/XeO) to promote community discussion. The usefulness of Xeml Lab is illustrated by two meta-analyses of a large set of experiments that were performed with Arabidopsis thaliana during 5 years. The first reveals sources of noise that affect measurements of metabolite levels and enzyme activities. The second shows that Arabidopsis maintains remarkably stable levels of sugars and amino acids across a wide range of photoperiod treatments, and that adjustment of starch turnover and the leaf protein content contribute to this metabolic homeostasis.

  11. Documentation for the machine-readable version of the Fourth Cambridge Radio Survey Catalogue (4C) (Pilkington, Gower, Scott and Wills 1965, 1967)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1983-01-01

    The machine readable catalogue contains survey data from the papers of Pilkington and Scott and Gower, Scott and Wills. These data result from a survey of radio sources between declinations -07 deg and +80 deg using the large Cambridge interferometer at 178 MHz. The computerized catalog contains for each source the 4C number, 1950 position, measured flux density, and accuracy class. For some sources miscellaneous brief comments such as cross identifications to the 3C catalog or remarks on contamination from nearby sources are given at the ends of the data records. A detailed description of the machine readable catalog as it is currently being distributed by the Astronomical Data Center is given to enable users to read and process the data.

  12. Nanopublications for exposing experimental data in the life-sciences: a Huntington's Disease case study.

    PubMed

    Mina, Eleni; Thompson, Mark; Kaliyaperumal, Rajaram; Zhao, Jun; van der Horst, Eelke; Tatum, Zuotian; Hettne, Kristina M; Schultes, Erik A; Mons, Barend; Roos, Marco

    2015-01-01

    Data from high throughput experiments often produce far more results than can ever appear in the main text or tables of a single research article. In these cases, the majority of new associations are often archived either as supplemental information in an arbitrary format or in publisher-independent databases that can be difficult to find. These data are not only lost from scientific discourse, but are also elusive to automated search, retrieval and processing. Here, we use the nanopublication model to make scientific assertions that were concluded from a workflow analysis of Huntington's Disease data machine-readable, interoperable, and citable. We followed the nanopublication guidelines to semantically model our assertions as well as their provenance metadata and authorship. We demonstrate interoperability by linking nanopublication provenance to the Research Object model. These results indicate that nanopublications can provide an incentive for researchers to expose data that is interoperable and machine-readable for future use and preservation, and for which they can get credit for their effort. Nanopublications can also have a leading role in hypothesis generation, offering opportunities to produce large-scale data integration.
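
    The nanopublication model packages an assertion, its provenance and its publication info as separate named graphs. As a rough sketch of that structure (not the authors' actual workflow or data), the snippet below builds a toy nanopublication with the rdflib library and serializes it as TriG; all URIs and the example assertion are invented.

      # Toy nanopublication: assertion, provenance and publication info as named graphs.
      # Requires rdflib; every URI below is an invented placeholder.
      from rdflib import Dataset, Namespace, URIRef, Literal
      from rdflib.namespace import XSD

      EX = Namespace("http://example.org/")
      NP = "http://example.org/np1"

      ds = Dataset()

      assertion = ds.graph(URIRef(NP + "#assertion"))
      assertion.add((EX.geneX, EX.isAssociatedWith, EX.HuntingtonsDisease))

      provenance = ds.graph(URIRef(NP + "#provenance"))
      provenance.add((URIRef(NP + "#assertion"), EX.wasDerivedFrom, EX.workflowRun42))

      pubinfo = ds.graph(URIRef(NP + "#pubinfo"))
      pubinfo.add((URIRef(NP), EX.created, Literal("2015-01-01", datatype=XSD.date)))

      print(ds.serialize(format="trig"))    # one citable, machine-readable package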

  13. Light at Night Markup Language (LANML): XML Technology for Light at Night Monitoring Data

    NASA Astrophysics Data System (ADS)

    Craine, B. L.; Craine, E. R.; Craine, E. M.; Crawford, D. L.

    2013-05-01

    Light at Night Markup Language (LANML) is a standard, based upon XML, useful in acquiring, validating, transporting, archiving and analyzing multi-dimensional light at night (LAN) datasets of any size. The LANML standard can accommodate a variety of measurement scenarios including single spot measures, static time-series, web based monitoring networks, mobile measurements, and airborne measurements. LANML is human-readable, machine-readable, and does not require a dedicated parser. In addition, LANML is flexible, ensuring that future extensions of the format will remain backward compatible with analysis software. The XML technology is at the heart of communicating over the internet and can be equally useful at the desktop level, making this standard particularly attractive for web based applications, educational outreach and efficient collaboration between research groups.
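
    The LANML schema itself is not reproduced in the abstract above; purely as an illustration of the kind of human- and machine-readable XML record it describes, the sketch below writes a minimal, hypothetical light-at-night measurement with Python's xml.etree. The element and attribute names are invented and are not the LANML standard.

      # Hypothetical light-at-night record loosely inspired by the LANML description;
      # element and attribute names are invented, not the published standard.
      import xml.etree.ElementTree as ET

      record = ET.Element("lan_record", site="TUCSON_01")
      ET.SubElement(record, "timestamp").text = "2013-05-01T04:30:00Z"
      ET.SubElement(record, "sky_brightness", units="mag/arcsec^2").text = "21.3"
      ET.SubElement(record, "instrument").text = "SQM-L"

      ET.indent(record)                                  # pretty-print (Python 3.9+)
      print(ET.tostring(record, encoding="unicode"))     # readable by humans and parsers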

  14. Lowell proper motion survey: Southern Hemisphere (Giclas, Burnham, and Thomas 1978). Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.

    1989-01-01

    The machine-readable version of the catalog, as it is currently being distributed from the Astronomical Data Center, is described. The catalog is a summary compilation of the Lowell Proper Motion Survey for the Southern Hemisphere, as completed to mid-1978 and published in the Lowell Observatory Bulletins. This summary catalog serves as a Southern Hemisphere companion to the Lowell Proper Motion Survey, Northern Hemisphere.

  15. The National Longitudinal Study of the High School Class of 1972 (NLS-72), Fifth Follow-Up (1986) Data File [machine-readable data file].

    ERIC Educational Resources Information Center

    National Center for Education Statistics (ED), Washington, DC.

    This machine-readable data file (MRDF) contains information from the fifth follow-up survey of the National Longitudinal Survey of the High School Class of 1972. The survey was carried out along with the third survey of the High School and Beyond Study. The fifth follow-up data file consists of 12,841 records. The data tape contains information on…

  16. Documentation for the machine-readable version of A Finding List for the Multiplet Tables of NSRDS-NBS 3, Sections 1-10 (Adelman, Adelman, Fischel and Warren 1984)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1984-01-01

    The machine-readable finding list, as it is currently being distributed from the Astronomical Data Center, is described. This version of the list supersedes an earlier one (1977) containing only Sections 1 through 7 of the NSRDS-NBS 3 multiplet tables publications. Additional sections are to be incorporated into this list as they are published.

  17. Towards a semantic web of paleoclimatology

    NASA Astrophysics Data System (ADS)

    Emile-Geay, J.; Eshleman, J. A.

    2012-12-01

    The paleoclimate record is information-rich, yet significant technical barriers currently exist before it can be used to automatically answer scientific questions. Here we make the case for a universal format to structure paleoclimate data. A simple example demonstrates the scientific utility of such a self-contained way of organizing coral data and meta-data in the Matlab language. This example is generalized to a universal ontology that may form the backbone of an open-source, open-access and crowd-sourced paleoclimate database. Its key attributes are: 1. Parsability: the format is self-contained (hence machine-readable), and would therefore enable a semantic web of paleoclimate information. 2. Universality: the format is platform-independent (readable on all computers and operating systems) and language-independent (readable in major programming languages). 3. Extensibility: the format requires a minimum set of fields to appropriately define a paleoclimate record, but allows for the database to grow organically as more records are added, or - equally important - as more metadata are added to existing records. 4. Citability: the format enables the automatic citation of peer-reviewed articles as well as data citations whenever a data record is being used for analysis, making due recognition of scientific work an automatic part and foundational principle of paleoclimate data analysis. 5. Ergonomy: the format will be easy to use, update and manage. This structure is designed to enable semantic searches, and is expected to help accelerate discovery in all workflows where paleoclimate data are being used. Practical steps towards the implementation of such a system at the community level are then discussed. (Figure caption: Preliminary ontology describing relationships between the data and meta-data fields of the Nurhati et al. [2011] climate record. Several fields are viewed as instances of larger classes (ProxyClass, Site, Reference), which would allow computers to perform operations on all records within a specific class (e.g., if the measurement type is δ18O, if the proxy class is 'Tree Ring Width', or if the resolution is less than 3 months). All records in such a database would be bound to each other by similar links, allowing machines to automatically process any form of query involving existing information. Such a design would also allow growth, by adding records and/or additional information about each record.)
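
    The five attributes listed above are easiest to see on a concrete record. The sketch below is a rough illustration rather than the proposed ontology itself: it stores an invented coral record as a self-contained Python dictionary and serializes it to JSON, so that data and metadata travel together and new fields can be added without breaking existing readers. All field names and values are illustrative.

      # Invented, self-contained paleoclimate record: data and metadata in one object.
      # Field names and values are illustrative, not the ontology proposed in the abstract.
      import json

      record = {
          "archiveType": "coral",
          "proxyClass": "d18O",
          "site": {"name": "Example Atoll", "lat": 5.9, "lon": -162.1},
          "reference": "Nurhati et al. [2011]",
          "resolution_months": 1,
          "data": {
              "time": [1998.04, 1998.12, 1998.21],
              "d18O": [-5.21, -5.18, -5.30],
          },
      }

      serialized = json.dumps(record, indent=2)   # platform- and language-independent
      restored = json.loads(serialized)           # parsable with no side information
      print(restored["proxyClass"], len(restored["data"]["time"]), "samples")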

  18. Identification of stars and digital version of the catalogue of 1958 by Brodskaya and Shajn

    NASA Astrophysics Data System (ADS)

    Gorbunov, M. A.; Shlyapnikov, A. A.

    2017-12-01

    The following topics are considered: the identification of objects on search maps, the determination of their coordinates at the epoch of 2000, and the conversion of the published version of the catalogue of 1958 by Brodskaya and Shajn into a machine-readable format. The statistics for photometric and spectral data from the original catalogue are presented. A digital version of the catalogue is described, as well as its presentation in HTML, VOTable and AJS formats and the basic principles of work in the interactive application of the International Virtual Observatory - the Aladin Sky Atlas.

  19. The National Longitudinal Study of the High School Class of 1972 (NLS-72), Fifth Follow-Up (1986). Teaching Supplement Data File [machine-readable data file].

    ERIC Educational Resources Information Center

    National Center for Education Statistics (ED), Washington, DC.

    The National Longitudinal Survey of the High School Class of 1972 (NLS-72) Teaching Supplement Data File (TSDF) is presented. Data for the machine-readable data file (MRDF) were collected via a mail questionnaire that was sent to all respondents (N=1,517) to the fifth follow-up survey who indicated that they had a teaching background or training…

  20. Documentation for the machine-readable version of the general catalogue of trigonometric stellar parallaxes and supplement

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1982-01-01

    The machine-readable version of the General Catalog of Trigonometric Stellar parallaxes as distributed by the Astronomical Data Center is described. It is intended to enable users to read and process the data without problems and guesswork. The source reference should be consulted for details concerning the compilation of the main catalogue and supplement, the probable errors, and the weighting system used to combine determinations from different observatories.

  1. The HEAO A-1 X Ray Source Catalog (Wood Et Al. 1984): Documentation for the Machine-Readable Version

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.

    1990-01-01

    The machine-readable version of the catalog, as it is currently being distributed from the Astronomical Data Center, is described. The catalog is a compilation of data for 842 sources detected with the U.S. Naval Research Laboratory Large Area Sky Survey Experiment flown aboard the HEAO 1 satellite. The data include source identifications, positions, error boxes, mean X-ray intensities, and cross identifications to other source designations.

  2. Documentation for the machine-readable version of the first Santiago-Pulkovo Fundamental Stars Catalogue (SPF1 catalogue)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1982-01-01

    The machine-readable version of the first Santiago-Pulkovo Fundamental Stars catalog is described. It is intended to enable users to read and process the computerized catalog without the problems and guesswork often associated with such a task. The source reference should be consulted for additional details regarding the measurements, instrument characteristics, reductions, construction of the quasi-absolute system of right ascension, and star positions in the catalog.

  3. Index to selected machine-readable geohydrologic data for Precambrian through Cretaceous rocks in Kansas

    USGS Publications Warehouse

    Spinazola, J.M.; Hansen, C.V.; Underwood, E.J.; Kenny, J.F.; Wolf, R.J.

    1987-01-01

    Machine-readable geohydrologic data for Precambrian through Cretaceous rocks in Kansas were compiled as part of the USGS Central Midwest Regional Aquifer System Analysis. The geohydrologic data include log, water quality, water level, hydraulics, and water use information. The log data consist of depths to the top of selected geologic formations determined from about 275 sites with geophysical logs and formation lithologies from about 190 sites with lithologic logs. The water quality data consist of about 10,800 analyses, of which about 1,200 are proprietary. The water level data consist of about 4,480 measured water levels and about 4,175 equivalent freshwater hydraulic heads, of which about 3,745 are proprietary. The hydraulics data consist of results from about 30 specific capacity tests and about 20 aquifer tests, and interpretations of about 285 drill stem tests (of which about 60 are proprietary) and about 75 core-sample analyses. The water use data consist of estimates of freshwater withdrawals from Precambrian through Cretaceous geohydrologic units for each of the 105 counties in Kansas. Average yearly withdrawals were estimated for each decade from 1940 to 1980. All the log and water use data and the nonproprietary parts of the water quality, water level, and hydraulics data are available on magnetic tape from the USGS office in Lawrence, Kansas. (Author's abstract)

  4. VizieR Online Data Catalog: de Houtman, Kepler and Halley star catalogs (Verbunt+ 2011)

    NASA Astrophysics Data System (ADS)

    Verbunt, F.; van Gent, R. H.

    2011-04-01

    We present machine-readable versions of the star catalogues of de Houtman (1602), Kepler (1627: Secunda Classis and Tertia Classis) and Halley (1679). In addition to the data from the historical catalogue, the machine-readable version contains the modern identification with a Hipparcos star and the latter's magnitude and, based on this identification, the positional accuracy. For Kepler's catalogues we also give cross references to the catalogue of Ptolemaios (in the edition by Toomer 1998). (4 data files).

  5. Documentation for the machine-readable version of a supplement to the Bright Star catalogue (Hoffleit, Saladyga and Wlasuk 1983)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1984-01-01

    Detailed descriptions of the three files of the machine-readable catalog are given. The files of the original tape have been restructured and the data records reformatted to produce a uniform data file having a single logical record per star and homogeneous data fields. The characteristics of the tape version as it is presently being distributed from the Astronomical Data Center are given and the changes to the original tape supplied are described.

  6. A compilation of redshifts and velocity dispersions for Abell clusters (Struble and Rood 1987): Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.

    1989-01-01

    The machine readable version of the compilation, as it is currently being distributed from the Astronomical Data Center, is described. The catalog contains redshifts and velocity dispersions for all Abell clusters for which these data had been published up to 1986 July. Also included are 1950 equatorial coordinates for the centers of the listed clusters, numbers of observations used to determine the redshifts, and bibliographical references citing the data sources.

  7. Far infrared supplement: Catalog of infrared observations

    NASA Technical Reports Server (NTRS)

    Gezari, D. Y.; Schmitz, M.; Mead, J. M.

    1982-01-01

    The development of a new generation of orbital, airborne and ground-based infrared astronomical observatory facilities, including the infrared astronomical satellite (IRAS), the cosmic background explorer (COBE), the NASA Kuiper airborne observatory, and the NASA infrared telescope facility, intensified the need for a comprehensive, machine-readable data base and catalog of current infrared astronomical observations. The Infrared Astronomical Data Base and its principal data product, this catalog, comprise a machine-readable library of infrared (1 micrometer to 1000 micrometers) astronomical observations published in the scientific literature since 1965.

  8. A catalog of selected compact radio sources for the construction of an extragalactic radio/optical reference frame (Argue et al. 1984): Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This document describes the machine readable version of the Selected Compact Radio Source Catalog as it is currently being distributed from the international network of astronomical data centers. It is intended to enable users to read and process the computerized catalog. The catalog contains 233 strong, compact extragalactic radio sources having identified optical counterparts. The machine version contains the same data as the published catalog and includes source identifications, equatorial positions at J2000.0 and their mean errors, object classifications, visual magnitudes, redshift, 5-GHz flux densities, and comments.

  9. Documentation for the machine-readable version of the Lowell Proper Motion Survey, Northern Hemisphere, the G numbered stars

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1983-01-01

    This catalog contains a summary of many individual papers published in the Lowell Observatory Bulletins in the years 1958 to 1970. The data in the machine-readable version include observed positions, proper motions, estimated photographic magnitudes and colors, and references to identifications in other catalogs. Photoelectric data on the UBV system are included for many stars, but no attempt was made to find all existing photometry. The machine version contains all data of the published catalog, except the Lowell Bulletin numbers where finding charts can be found. A separate file contains the notes published in the original catalog.

  10. Rosetta: Ensuring the Preservation and Usability of ASCII-based Data into the Future

    NASA Astrophysics Data System (ADS)

    Ramamurthy, M. K.; Arms, S. C.

    2015-12-01

    Field data obtained from dataloggers often take the form of comma separated value (CSV) ASCII text files. While ASCII based data formats have positive aspects, such as the ease of accessing the data from disk and the wide variety of tools available for data analysis, there are some drawbacks, especially when viewing the situation through the lens of data interoperability and stewardship. The Unidata data translation tool, Rosetta, is a web-based service that provides an easy, wizard-based interface for data collectors to transform their datalogger generated ASCII output into Climate and Forecast (CF) compliant netCDF files following the CF-1.6 discrete sampling geometries. These files are complete with metadata describing what data are contained in the file, the instruments used to collect the data, and other critical information that otherwise may be lost in one of many README files. The choice of the machine readable netCDF data format and data model, coupled with the CF conventions, ensures long-term preservation and interoperability, and that future users will have enough information to responsibly use the data. However, with the understanding that the observational community appreciates the ease of use of ASCII files, methods for transforming the netCDF back into a CSV or spreadsheet format are also built-in. One benefit of translating ASCII data into a machine readable format that follows open community-driven standards is that they are instantly able to take advantage of data services provided by the many open-source data server tools, such as the THREDDS Data Server (TDS). While Rosetta is currently a stand-alone service, this talk will also highlight efforts to couple Rosetta with the TDS, thus allowing self-publishing of thoroughly documented datasets by the data producers themselves.
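
    Rosetta itself is a web service, but the core transformation it performs (datalogger CSV in, CF-attributed netCDF out) can be sketched in a few lines of Python with pandas and xarray. The column names, attribute values and file paths below are invented examples, not Rosetta's actual implementation.

      # Sketch of the CSV -> CF-flavoured netCDF step; not the Rosetta code itself.
      # Requires pandas, xarray and a netCDF backend (e.g. netCDF4).
      import pandas as pd
      import xarray as xr

      df = pd.read_csv("datalogger.csv", parse_dates=["time"]).set_index("time")
      ds = xr.Dataset.from_dataframe(df)

      # Attach the metadata that a bare CSV cannot carry.
      ds.attrs["Conventions"] = "CF-1.6"
      ds.attrs["title"] = "Example surface station time series"
      ds["air_temperature"].attrs.update(units="degC", standard_name="air_temperature")

      ds.to_netcdf("datalogger.nc")              # self-describing, machine-readable output
      ds.to_dataframe().to_csv("roundtrip.csv")  # and back to CSV for spreadsheet users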

  11. Systems Design and Pilot Operation of a Regional Center for Technical Processing for the Libraries of the New England State Universities. NELINET, New England Library Information Network. Progress Report, July 1, 1967 - March 30, 1968, Volume II, Appendices.

    ERIC Educational Resources Information Center

    Agenbroad, James E.; And Others

    Included in this volume of appendices to LI 000 979 are acquisitions flow charts; a current operations questionnaire; an algorithm for splitting the Library of Congress call number; analysis of the Machine-Readable Cataloging (MARC II) format; production problems and decisions; operating procedures for information transmittal in the New England…

  12. 48 CFR 252.211-7003 - Item identification and valuation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., used to retrieve data encoded on machine-readable media. Concatenated unique item identifier means— (1... (or controlling) authority for the enterprise identifier. Item means a single hardware article or a...-readable means an automatic identification technology media, such as bar codes, contact memory buttons...

  13. 48 CFR 252.211-7003 - Item identification and valuation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., used to retrieve data encoded on machine-readable media. Concatenated unique item identifier means— (1... (or controlling) authority for the enterprise identifier. Item means a single hardware article or a...-readable means an automatic identification technology media, such as bar codes, contact memory buttons...

  14. 48 CFR 252.211-7003 - Item identification and valuation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., used to retrieve data encoded on machine-readable media. Concatenated unique item identifier means— (1... (or controlling) authority for the enterprise identifier. Item means a single hardware article or a...-readable means an automatic identification technology media, such as bar codes, contact memory buttons...

  15. An Evaluation Of Holograms In Training And As Job Performance Aids

    NASA Astrophysics Data System (ADS)

    Frey, Allan H.

    1986-08-01

    Experimentation was carried out to evaluate holograms for use in training and as job aids. Holograms were compared against line drawings and photographs as methods of presenting visual information needed to accomplish a number of tasks. The dependent variables were assembly speed and assembly errors with people unstressed, assembly speed and assembly errors with people stressed, the percentage of discovered errors in assemblies, the number of correct assemblies misidentified as erroneous, and information extraction. Holograms generally were as good as or better visual aids than either photographs or line drawings. The use of holograms tends to reduce errors rather than speed assembly time in the assembly tasks used in these experiments. They also enhance the discovery of errors when the subject is attempting to locate assembly errors in a construction. The results of this experimentation suggest that serious consideration should be given to the use of holography in the development of job aids and in training. Besides these advantages for job aids, we found that when page-formatted information is stored in man-readable holograms, the holograms remain usable when scratched or damaged, even when similarly damaged microfilm is unusable. Holography can also be used to store man- and machine-readable data simultaneously. Such storage would provide simplified backup in the event of machine failure, and it would permit the development of compatible machine and manual systems for job aid applications.

  16. A possible extension to the RInChI as a means of providing machine readable process data.

    PubMed

    Jacob, Philipp-Maximilian; Lan, Tian; Goodman, Jonathan M; Lapkin, Alexei A

    2017-04-11

    The algorithmic, large-scale use and analysis of reaction databases such as Reaxys is currently hindered by the absence of widely adopted standards for publishing reaction data in machine readable formats. Crucial data such as yields of all products or stoichiometry are frequently not explicitly stated in the published papers and, hence, not reported in the database entry for those reactions, limiting their usefulness for algorithmic analysis. This paper presents a possible extension to the IUPAC RInChI standard via an auxiliary layer, termed ProcAuxInfo, which is a standardised, extensible form in which to report certain key reaction parameters such as declaration of all products and reactants as well as auxiliaries known in the reaction, reaction stoichiometry, amounts of substances used, conversion, yield and operating conditions. The standard is demonstrated via creation of the RInChI including the ProcAuxInfo layer based on three published reactions and demonstrates accurate data recoverability via reverse translation of the created strings. Implementation of this or another method of reporting process data by the publishing community would ensure that databases, such as Reaxys, would be able to abstract crucial data for big data analysis of their contents.

  17. Effects of compression and individual variability on face recognition performance

    NASA Astrophysics Data System (ADS)

    McGarry, Delia P.; Arndt, Craig M.; McCabe, Steven A.; D'Amato, Donald P.

    2004-08-01

    The Enhanced Border Security and Visa Entry Reform Act of 2002 requires that the Visa Waiver Program be available only to countries that have a program to issue to their nationals machine-readable passports incorporating biometric identifiers complying with applicable standards established by the International Civil Aviation Organization (ICAO). In June 2002, the New Technologies Working Group of ICAO unanimously endorsed the use of face recognition (FR) as the globally interoperable biometric for machine-assisted identity confirmation with machine-readable travel documents (MRTDs), although Member States may elect to use fingerprint and/or iris recognition as additional biometric technologies. The means and formats are still being developed through which biometric information might be stored in the constrained space of integrated circuit chips embedded within travel documents. Such information will be stored in an open, yet unalterable and very compact format, probably as digitally signed and efficiently compressed images. The objective of this research is to characterize the many factors that affect FR system performance with respect to the legislated mandates concerning FR. A photograph acquisition environment and a commercial face recognition system have been installed at Mitretek, and over 1,400 images have been collected of volunteers. The image database and FR system are being used to analyze the effects of lossy image compression, individual differences, such as eyeglasses and facial hair, and the acquisition environment on FR system performance. Images are compressed by varying ratios using JPEG2000 to determine the trade-off points between recognition accuracy and compression ratio. The various acquisition factors that contribute to differences in FR system performance among individuals are also being measured. The results of this study will be used to refine and test efficient face image interchange standards that ensure highly accurate recognition, both for automated FR systems and human inspectors. Working within the M1-Biometrics Technical Committee of the InterNational Committee for Information Technology Standards (INCITS) organization, a standard face image format will be tested and submitted to organizations such as ICAO.
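
    The compression side of that trade-off can be reproduced in miniature: the sketch below uses Pillow's JPEG 2000 support to write the same face image at several compression ratios, producing the kind of test set against which a recognition system can then be scored. The file names and the list of ratios are illustrative, not those used in the study.

      # Write one image at several JPEG2000 compression ratios (Pillow "rates" mode),
      # the kind of test set used to probe accuracy-vs-compression trade-offs.
      from PIL import Image

      source = Image.open("face_0001.png")          # illustrative input file name
      for ratio in (10, 20, 40, 80):                # target compression ratios
          source.save(
              f"face_0001_r{ratio}.jp2",
              quality_mode="rates",                 # interpret quality_layers as ratios
              quality_layers=[ratio],
          )
          print(f"wrote face_0001_r{ratio}.jp2 at about {ratio}:1 compression")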

  18. HD-SAO-DM cross index

    NASA Technical Reports Server (NTRS)

    Nagy, T. A.; Mead, J.

    1978-01-01

    A table of correspondence SAO-HD-DM-GC was prepared by Morin (1973). The machine-readable version of this cross identification was obtained from the Centre de Donnees Stellaires (Strasbourg, France). The table was sorted at the Goddard Space Flight Center by HD number and all blank HD number records were removed to produce the HD-SAO-DM table presented. There were 258997 entries in the original table; there are 180411 entries after removing the blank HD records. The Boss General Catalogue (GC) numbers were retained on the machine-readable version after the sort.

  19. Assessing the Readability of Medical Documents: A Ranking Approach.

    PubMed

    Zheng, Jiaping; Yu, Hong

    2018-03-23

    The use of electronic health record (EHR) systems with patient engagement capabilities, including viewing, downloading, and transmitting health information, has recently grown tremendously. However, using these resources to engage patients in managing their own health remains challenging due to the complex and technical nature of the EHR narratives. Our objective was to develop a machine learning-based system to assess readability levels of complex documents such as EHR notes. We collected difficulty ratings of EHR notes and Wikipedia articles using crowdsourcing from 90 readers. We built a supervised model to assess readability based on relative orders of text difficulty using both surface text features and word embeddings. We evaluated system performance using the Kendall coefficient of concordance against human ratings. Our system achieved significantly higher concordance (.734) with human annotators than did a baseline using the Flesch-Kincaid Grade Level, a widely adopted readability formula (.531). The improvement was also consistent across different disease topics. This method's concordance with an individual human user's ratings was also higher than the concordance between different human annotators (.658). We explored methods to automatically assess the readability levels of clinical narratives. Our ranking-based system using simple textual features and easy-to-learn word embeddings outperformed a widely used readability formula. Our ranking-based method can predict relative difficulties of medical documents. It is not constrained to a predefined set of readability levels, a common design in many machine learning-based systems. Furthermore, the feature set does not rely on complex processing of the documents. One potential application of our readability ranking is personalization, allowing patients to better accommodate their own background knowledge. ©Jiaping Zheng, Hong Yu. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 23.03.2018.
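
    The evaluation metric used above, the Kendall coefficient of concordance (Kendall's W), has a short closed form: for m raters ranking n items, W = 12*S / (m^2 * (n^3 - n)), where S is the sum of squared deviations of the items' rank sums from their mean. The sketch below implements that formula with NumPy and SciPy on invented ratings; it is a plain illustration of the metric (without tie correction), not the authors' evaluation code.

      # Kendall's coefficient of concordance (W) for m raters ranking n items,
      # W = 12 * S / (m**2 * (n**3 - n)); assumes no tied scores within a rater.
      import numpy as np
      from scipy.stats import rankdata

      def kendalls_w(ratings: np.ndarray) -> float:
          """ratings: shape (m, n); each row holds one rater's scores for n items."""
          m, n = ratings.shape
          ranks = np.apply_along_axis(rankdata, 1, ratings)  # rank within each rater
          rank_sums = ranks.sum(axis=0)
          s = ((rank_sums - rank_sums.mean()) ** 2).sum()
          return 12.0 * s / (m ** 2 * (n ** 3 - n))

      # Invented difficulty scores from 3 raters over 5 documents (higher = harder).
      scores = np.array([[1, 3, 2, 5, 4],
                         [1, 2, 3, 5, 4],
                         [2, 3, 1, 5, 4]])
      print(round(kendalls_w(scores), 3))   # 0.889; 1.0 would be perfect agreement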

  20. Understanding and Writing G & M Code for CNC Machines

    ERIC Educational Resources Information Center

    Loveland, Thomas

    2012-01-01

    In modern CAD and CAM manufacturing companies, engineers design parts for machines and consumable goods. Many of these parts are cut on CNC machines. Whether using a CNC lathe, milling machine, or router, the ideas and designs of engineers must be translated into a machine-readable form called G & M Code that can be used to cut parts to precise…
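
    As a small illustration of the machine-readable form the article refers to, the sketch below uses Python to emit a few standard G & M code lines (metric units, absolute positioning, rapid and feed moves, spindle on/off) for a toy square path. The coordinates, feed rate and spindle speed are invented; a real program would come from the CAM toolchain rather than a script like this.

      # Emit a toy CNC program: common G-codes (G21 mm units, G90 absolute, G0 rapid,
      # G1 linear feed) and M-codes (M3 spindle on, M5 off, M30 end). Values are invented.
      def square_path(size_mm, depth_mm, feed):
          lines = ["G21", "G90", "M3 S1200",                 # units, absolute mode, spindle on
                   "G0 X0 Y0 Z5",                            # rapid to start, above the part
                   f"G1 Z-{depth_mm} F{feed}"]               # plunge to cutting depth
          for x, y in [(size_mm, 0), (size_mm, size_mm), (0, size_mm), (0, 0)]:
              lines.append(f"G1 X{x} Y{y} F{feed}")          # cut along the square
          lines += ["G0 Z5", "M5", "M30"]                    # retract, spindle off, end
          return lines

      print("\n".join(square_path(size_mm=20, depth_mm=1, feed=300)))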

  1. Generation of open biomedical datasets through ontology-driven transformation and integration processes.

    PubMed

    Carmen Legaz-García, María Del; Miñarro-Giménez, José Antonio; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2016-06-03

    Biomedical research usually requires combining large volumes of data from multiple heterogeneous sources, which makes the integrated exploitation of such data difficult. The Semantic Web paradigm offers a natural technological space for data integration and exploitation by generating content readable by machines. Linked Open Data is a Semantic Web initiative that promotes the publication and sharing of data in machine readable semantic formats. We present an approach for the transformation and integration of heterogeneous biomedical data with the objective of generating open biomedical datasets in Semantic Web formats. The transformation of the data is based on the mappings between the entities of the data schema and the ontological infrastructure that provides the meaning to the content. Our approach permits different types of mappings and includes the possibility of defining complex transformation patterns. Once the mappings are defined, they can be automatically applied to datasets to generate logically consistent content, and the mappings can be reused in further transformation processes. The results of our research are (1) a common transformation and integration process for heterogeneous biomedical data; (2) the application of Linked Open Data principles to generate interoperable, open, biomedical datasets; (3) a software tool, called SWIT, that implements the approach. In this paper we also describe how we have applied SWIT in different biomedical scenarios and some lessons learned. We have presented an approach that is able to generate open biomedical repositories in Semantic Web formats. SWIT is able to apply the Linked Open Data principles in the generation of the datasets, allowing for linking their content to external repositories and creating linked open datasets. SWIT datasets may contain data from multiple sources and schemas, thus becoming integrated datasets.
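
    The core idea above (mapping fields of a source schema onto ontology terms and emitting machine-readable triples) can be shown in a few lines. The sketch below is a generic illustration with rdflib and invented URIs and records; it is not the SWIT tool or its mapping language, which supports far richer transformation patterns.

      # Generic sketch of ontology-driven transformation: tabular records -> RDF triples.
      # Ontology terms and records are invented; SWIT's own mapping language is richer.
      from rdflib import Graph, Namespace, Literal
      from rdflib.namespace import RDF

      EX = Namespace("http://example.org/onto/")
      rows = [
          {"patient_id": "P001", "diagnosis": "HuntingtonsDisease"},
          {"patient_id": "P002", "diagnosis": "Parkinson"},
      ]
      mapping = {"diagnosis": EX.hasDiagnosis}     # source field -> ontology property

      g = Graph()
      for row in rows:
          subject = EX["patient/" + row["patient_id"]]
          g.add((subject, RDF.type, EX.Patient))
          for field, prop in mapping.items():
              g.add((subject, prop, Literal(row[field])))

      print(g.serialize(format="turtle"))          # Linked-Data-ready output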

  2. Smithsonian Astrophysical Observatory (SAO) star catalog (Sao staff 1966, edition ADC 1989): Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    Roman, Nancy G.; Warren, Wayne H., Jr.

    1989-01-01

    An updated, corrected, and extended machine readable version of the catalog is described. Published and unpublished errors discovered in the previous version were corrected, and multiple star and supplemental BD identifications were added to stars where more than one SAO entry has the same Durchmusterung number. Henry Draper Extension (HDE) numbers were added for stars found in both volumes of the extension. Data for duplicate SAO entries (those referring to the same star) were flagged. J2000 positions in usual units and in radians were added.

  3. Documentation for the machine-readable version of the general catalogue of 33342 stars for the epoch 1950 (Boss 1937)

    NASA Technical Reports Server (NTRS)

    Roman, N. G.; Warren, W. H., Jr.

    1983-01-01

    A revised and corrected version of the machine-readable catalog has been prepared. Cross identifications of the GC stars to the HD and DM catalogs have been replaced by data from the new SAO-HD-GC-DM Cross Index (Roman, Warren and Schofield 1983), including component identifications for multiple SAO entries having identical DM numbers in the SAO Catalog, supplemental Bonner Durchmusterung stars (lower case letter designations) and codes for multiple HD stars. Additional individual corrections have been incorporated based upon errors found during analyses of other catalogs.

  4. An Overview of the Challenges With and Proposed Solutions for the Ingest and Distribution Processes for Airborne Data Management

    NASA Technical Reports Server (NTRS)

    Beach, Aubrey; Northup, Emily; Early, Amanda; Wang, Dali; Kusterer, John; Quam, Brandi; Chen, Gao

    2015-01-01

    The current data management practices for NASA airborne field projects have successfully served science team data needs over the past 30 years to achieve project science objectives; however, users have discovered a number of issues in terms of data reporting and format. The ICARTT format, a NASA standard since 2010, is currently the most popular among the airborne measurement community. Although easy for humans to use, the format standard is not sufficiently rigorous to be machine-readable. This makes data use and management tedious and resource intensive, and also creates problems in Distributed Active Archive Center (DAAC) data ingest procedures and distribution. Further, most DAACs use metadata models that concentrate on satellite data observations, making them less prepared to deal with airborne data.

  5. Bonner Durchmusterung (Argelander 1859-1862): Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.; Ochsenbein, Francois

    1989-01-01

    The machine-readable version of the catalog, as it is currently being distributed from the Astronomical Data Center, is described. The entire Bonner Durchmusterung (BD) was computerized through the collaborative efforts of the Centre de Donnees Astronomiques de Strasbourg, l'Observatoire de Nice, and the Astronomical Data Center at the NASA/Goddard Space Flight Center. All corrigenda published in the original BD volumes were incorporated into the machine file, along with changes published following the 1903 edition. In addition, stars indicated to be missing in published lists and verified by various techniques are flagged so that they can be omitted from computer plotted charts if desired. Stars deleted in the various errata lists were similarly flagged, while those with revised data are flagged and listed in a separate table.

  6. U.S. announces one-year delay for visa waiver program change

    NASA Astrophysics Data System (ADS)

    The U.S. State Department has announced that it is delaying by one year a new rule affecting citizens from visa waiver program countries. The new rule, which was scheduled to go into effect on 1 October 2003, requires visitors from these countries to obtain non-immigrant visas to enter the United States if they do not have machine-readable passports. The announced delay means that this rule will now go into effect on 26 October 2004 instead. The delay does not apply to five visa waiver countries—Andorra, Brunei, Liechtenstein, Luxembourg, and Slovenia—because most of the citizens of these nations already carry passports that are machine-readable.

  7. Machine-readable files developed for the High Plains Regional Aquifer-System analysis in parts of Colorado, Kansas, Nebraska, New Mexico, Oklahoma, South Dakota, Texas, and Wyoming

    USGS Publications Warehouse

    Ferrigno, C.F.

    1986-01-01

    Machine-readable files developed for the High Plains Regional Aquifer-System Analysis project are stored on two magnetic tapes available from the U.S. Geological Survey. The first tape contains computer programs that were used to prepare, store, retrieve, organize, and preserve the areal interpretive data collected by the project staff. The second tape contains 134 data files that can be divided into five general classes: (1) Aquifer geometry data, (2) aquifer and water characteristics, (3) water levels, (4) climatological data, and (5) land use and water use data. (Author's abstract)

  8. Document fraud deterrent strategies: four case studies

    NASA Astrophysics Data System (ADS)

    Mercer, John W.

    1998-04-01

    This paper discusses the approaches taken to deter fraud committed against four documents: the machine-readable passport; the machine-readable visa; the Consular Report of Birth Abroad; and the Border Crossing Card. General approaches are discussed first, with an emphasis on the reasons for the document, the conditions of its use and the information systems required for it to function. A cost model of counterfeit deterrence is introduced. Specific approaches to each of the four documents are then discussed, in light of the issuance circumstances and criteria, the intent of the issuing authority, the applicable international standards and the level of protection and fraud resistance appropriate for the document.

  9. Documentation for the machine-readable version of the catalogue of individual UBV and UVBY beta observations in the region of the Orion OB1 association

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1982-01-01

    The machine-readable files of individual UBV observations of 106 stars in the vicinity of the Orion Nebula (the Sword region) and individual uvby beta observations of 508 stars in all regions of the Orion OB 1 association are described. For the UBV data the stars are identified by their Brun numbers, with cross identifications to the chart numbers used in Warren and Hesser; the uvby beta stars are identified by the aforementioned chart numbers and HD, BD or P (= pi) numbers, in that order of preference.

  10. Lick Saturn-Voyager reference star catalogue (Klemola, Taraji, and Ocampo 1979): Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.

    1990-01-01

    The machine readable version of the catalog, as it is currently being distributed from the Astronomical Data Center, is described. The catalog contains accurate equatorial coordinates for 4551 stars in a band of sky against which cameras of the Voyager spacecraft were pointed for observations in the region of Saturn during the flyby. All of the reference stars are in the range 12h 40m to 14h 12m in right ascension (1950) and +02 to -09 degrees in declination. Mean errors of the positions are about 0.25 arcsec.

  11. The US Naval Observatory Zodiacal Zone Catalog (Douglas and Harrington 1990): Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.

    1990-01-01

    The machine readable version of the catalog, as it is currently being distributed from the Astronomical Data Center, is described. The Zodiacal Zone Catalog is a catalog of positions and proper motions for stars in the magnitude range m_v = 4 to 10, lying within 16 deg of the ecliptic and north of declination -30 deg. The catalog contains positions and proper motions, at epoch, for equator and equinox J2000.0, magnitudes and spectral types taken mostly from the Smithsonian Astrophysical Observatory Star Catalog, and reference positions and proper motions for equinox and epoch B1950.0.

  12. Assessing readability formula differences with written health information materials: application, results, and recommendations.

    PubMed

    Wang, Lih-Wern; Miller, Michael J; Schmitt, Michael R; Wen, Frances K

    2013-01-01

    Readability formulas are often used to guide the development and evaluation of literacy-sensitive written health information. However, readability formula results may vary considerably as a result of differences in software processing algorithms and how each formula is applied. These variations complicate interpretations of reading grade level estimates, particularly without a uniform guideline for applying and interpreting readability formulas. This research sought to (1) identify commonly used readability formulas reported in the health care literature, (2) demonstrate the use of the most commonly used readability formulas on written health information, (3) compare and contrast the differences when applying common readability formulas to identical selections of written health information, and (4) provide recommendations for choosing an appropriate readability formula for written health-related materials to optimize their use. A literature search was conducted to identify the most commonly used readability formulas in health care literature. Each of the identified formulas was subsequently applied to word samples from 15 unique examples of written health information about the topic of depression and its treatment. Readability estimates from common readability formulas were compared based on text sample size, selection, formatting, software type, and/or hand calculations. Recommendations for their use were provided. The Flesch-Kincaid formula was most commonly used (57.42%). Readability formulas demonstrated variability up to 5 reading grade levels on the same text. The Simple Measure of Gobbledygook (SMOG) readability formula performed most consistently. Depending on the text sample size, selection, formatting, software, and/or hand calculations, the individual readability formula estimated up to 6 reading grade levels of variability. The SMOG formula appears best suited for health care applications because of its consistency of results, higher level of expected comprehension, use of more recent validation criteria for determining reading grade level estimates, and simplicity of use. To improve interpretation of readability results, reporting reading grade level estimates from any formula should be accompanied with information about word sample size, location of word sampling in the text, formatting, and method of calculation. Copyright © 2013 Elsevier Inc. All rights reserved.
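
    The grade-level spread reported above is easier to see once the formulas themselves are written out: both the Flesch-Kincaid grade level and SMOG reduce to simple arithmetic over counts of sentences, words, and syllables, so differences in how software tokenizes text and counts syllables translate directly into different grades. The sketch below implements the two published equations in Python with a deliberately naive syllable counter; it is meant to show the arithmetic, not to reproduce any particular readability tool.

      import math
      import re

      def count_syllables(word):
          # Naive vowel-group heuristic; real tools use dictionaries or better rules,
          # which is one reason their grade estimates differ.
          return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

      def flesch_kincaid_grade(text):
          # FK grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
          sentences = max(1, len(re.findall(r"[.!?]+", text)))
          words = re.findall(r"[A-Za-z']+", text)
          syllables = sum(count_syllables(w) for w in words)
          return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

      def smog_grade(text):
          # SMOG = 3.1291 + 1.0430*sqrt(polysyllabic words * 30 / sentences)
          sentences = max(1, len(re.findall(r"[.!?]+", text)))
          words = re.findall(r"[A-Za-z']+", text)
          polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
          return 3.1291 + 1.0430 * math.sqrt(polysyllables * 30 / sentences)

      sample = ("Take this medication twice a day with food. "
                "Contact your physician immediately if you notice dizziness.")
      print(round(flesch_kincaid_grade(sample), 1), round(smog_grade(sample), 1))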

  13. Introducing meta-services for biomedical information extraction

    PubMed Central

    Leitner, Florian; Krallinger, Martin; Rodriguez-Penagos, Carlos; Hakenberg, Jörg; Plake, Conrad; Kuo, Cheng-Ju; Hsu, Chun-Nan; Tsai, Richard Tzong-Han; Hung, Hsi-Chuan; Lau, William W; Johnson, Calvin A; Sætre, Rune; Yoshida, Kazuhiro; Chen, Yan Hua; Kim, Sun; Shin, Soo-Yong; Zhang, Byoung-Tak; Baumgartner, William A; Hunter, Lawrence; Haddow, Barry; Matthews, Michael; Wang, Xinglong; Ruch, Patrick; Ehrler, Frédéric; Özgür, Arzucan; Erkan, Güneş; Radev, Dragomir R; Krauthammer, Michael; Luong, ThaiBinh; Hoffmann, Robert; Sander, Chris; Valencia, Alfonso

    2008-01-01

    We introduce the first meta-service for information extraction in molecular biology, the BioCreative MetaServer (BCMS; ). This prototype platform is a joint effort of 13 research groups and provides automatically generated annotations for PubMed/Medline abstracts. Annotation types cover gene names, gene IDs, species, and protein-protein interactions. The annotations are distributed by the meta-server in both human and machine readable formats (HTML/XML). This service is intended to be used by biomedical researchers and database annotators, and in biomedical language processing. The platform allows direct comparison, unified access, and result aggregation of the annotations. PMID:18834497

  14. NASA Thesaurus Data File

    NASA Technical Reports Server (NTRS)

    2012-01-01

    The NASA Thesaurus contains the authorized NASA subject terms used to index and retrieve materials in the NASA Aeronautics and Space Database (NA&SD) and NASA Technical Reports Server (NTRS). The scope of this controlled vocabulary includes not only aerospace engineering, but all supporting areas of engineering and physics, the natural space sciences (astronomy, astrophysics, planetary science), Earth sciences, and the biological sciences. The NASA Thesaurus Data File contains all valid terms and hierarchical relationships, USE references, and related terms in machine-readable form. The Data File is available in the following formats: RDF/SKOS, RDF/OWL, ZThes-1.0, and CSV/TXT.
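
    Because the Data File is distributed in RDF/SKOS, its USE references and hierarchical relationships can be traversed with any standard RDF library. The short sketch below walks the narrower-term links of a concept with rdflib; the local filename and the example term are assumptions for illustration, not part of the Thesaurus distribution itself.

      # Sketch: walking the SKOS hierarchy of a thesaurus file with rdflib.
      # "nasa_thesaurus.rdf" and the example label are illustrative assumptions.
      from rdflib import Graph
      from rdflib.namespace import SKOS

      g = Graph()
      g.parse("nasa_thesaurus.rdf")  # hypothetical local copy of the RDF/SKOS data file

      def narrower_terms(pref_label):
          # Yield preferred labels of concepts narrower than the given concept.
          for concept in g.subjects(SKOS.prefLabel, None):
              if str(g.value(concept, SKOS.prefLabel)).lower() == pref_label.lower():
                  for child in g.objects(concept, SKOS.narrower):
                      label = g.value(child, SKOS.prefLabel)
                      yield str(label) if label is not None else str(child)

      for term in narrower_terms("ASTROPHYSICS"):
          print(term)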

  15. Fifth Fundamental Catalogue (FK5). Part 1: Basic fundamental stars (Fricke, Schwan, and Lederle 1988): Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.

    1990-01-01

    The machine-readable version of the catalog, as it is currently being distributed from the Astronomical Data Center, is described. The Basic FK5 provides improved mean positions and proper motions for the 1535 classical fundamental stars that had been included in the FK3 and FK4 catalogs. The machine version of the catalog contains the positions and proper motions of the Basic FK5 stars for the epochs and equinoxes J2000.0 and B1950.0, the mean epochs of individual observed right ascensions and declinations used to determine the final positions, and the mean errors of the final positions and proper motions for the reported epochs. The cross identifications to other designations used for the FK5 stars that are given in the published catalog were not included in the original machine versions, but the Durchmusterung numbers have been added at the Astronomical Data Center.

  16. Documentation for the machine-readable version of the Bright Star Catalogue

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1982-01-01

    The machine-readable version of The Bright Star Catalogue, 4th edition, is described. In addition to the large number of newly determined fundamental data, such as photoelectric magnitudes, MK spectral types, parallaxes, and radial velocities, the new version contains data and information not included in the third edition, such as the identification of IR sources, U-B and R-I colors, radial velocity comments (indication and identification of spectroscopic and occultation binaries), and projected rotational velocities. The equatorial coordinates for equinoxes 1900 and 2000 are recorded to greater precision; details concerning variability, spectral characteristics, duplicity, and group membership are included. Data compiled through 1979, as well as some information and variable-star designations found through 1981, are considered.

  17. Multiple layer identification label using stacked identification symbols

    NASA Technical Reports Server (NTRS)

    Schramm, Harry F. (Inventor)

    2005-01-01

    An automatic identification system and method are provided which employ a machine readable multiple layer label. The label has a plurality of machine readable marking layers stacked one upon another. Each of the marking layers encodes an identification symbol detectable using one or more sensing technologies. The various marking layers may comprise the same marking material or each marking layer may comprise a different medium having characteristics detectable by a different sensing technology. These sensing technologies include x-ray, radar, capacitance, thermal, magnetic and ultrasonic. A complete symbol may be encoded within each marking layer or a symbol may be segmented into fragments which are then divided within a single marking layer or encoded across multiple marking layers.

  18. Documentation for the machine-readable version of the Smithsonian Astrophysical Observatory Star catalogue (SAO) version 1984

    NASA Technical Reports Server (NTRS)

    Roman, N. G.; Warren, W. H., Jr.

    1984-01-01

    An updated, corrected and extended machine readable version of the Smithsonian Astrophysical Observatory star catalog (SAO) is described. Published and unpublished errors discovered in the previous version have been corrected, and multiple star and supplemental BD identifications added to stars where more than one SAO entry has the same Durchmusterung number. Henry Draper Extension (HDE) numbers have been added for stars found in both volumes of the extension. Data for duplicate SAO entries (those referring to the same star) have been blanked out, but the records themselves have been retained and flagged so that sequencing and record count are identical to the published catalog.

  19. Lick Jupiter-Voyager reference star catalogue (Klemola, Morabito, and Taraji 1978): Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.

    1990-01-01

    The machine-readable version of the catalog, as it is currently being distributed from the Astronomical Data Center, is described. The catalog contains accurate equatorial coordinates for 4983 stars in a band of sky against which cameras of the Voyager spacecraft were pointed for observations in the region of Jupiter during the flyby. All of the reference stars are in the range 6 hr 00 min to 8 hr 04 min in right ascension (1950), declination zones +16 to +23 degrees, and 8 hr 31 min to 8 hr 57 min, zones +08 to +14 degrees. Mean errors of the positions are about 0.4 arcsec.

  20. Documentation for the machine-readable version of the SAO-HD-GC-DM cross index version 1983

    NASA Technical Reports Server (NTRS)

    Roman, N. G.; Warren, W. H., Jr.; Schofield, N., Jr.

    1983-01-01

    An updated and extended machine readable version of the SAO-HD-GC-DM Cross Index is described. Corrections of all errors found since preparation of the original catalog, which resulted from misidentifications and omissions of components in multiple star systems and from missing Durchmusterung numbers (the common identifier) in the SAO Catalog, are included. Component identifications from the Index of Visual Double Stars (IDS) are appended to all multiple SAO entries with the same DM numbers, and lower-case letter identifiers for supplemental BD stars are added. A total of 11,398 individual corrections and data additions is incorporated into the present version of the cross index.

  1. LEMS: a language for expressing complex biological models in concise and hierarchical form and its use in underpinning NeuroML 2.

    PubMed

    Cannon, Robert C; Gleeson, Padraig; Crook, Sharon; Ganapathy, Gautham; Marin, Boris; Piasini, Eugenio; Silver, R Angus

    2014-01-01

    Computational models are increasingly important for studying complex neurophysiological systems. As scientific tools, it is essential that such models can be reproduced and critically evaluated by a range of scientists. However, published models are currently implemented using a diverse set of modeling approaches, simulation tools, and computer languages making them inaccessible and difficult to reproduce. Models also typically contain concepts that are tightly linked to domain-specific simulators, or depend on knowledge that is described exclusively in text-based documentation. To address these issues we have developed a compact, hierarchical, XML-based language called LEMS (Low Entropy Model Specification), that can define the structure and dynamics of a wide range of biological models in a fully machine readable format. We describe how LEMS underpins the latest version of NeuroML and show that this framework can define models of ion channels, synapses, neurons and networks. Unit handling, often a source of error when reusing models, is built into the core of the language by specifying physical quantities in models in terms of the base dimensions. We show how LEMS, together with the open source Java and Python based libraries we have developed, facilitates the generation of scripts for multiple neuronal simulators and provides a route for simulator free code generation. We establish that LEMS can be used to define models from systems biology and map them to neuroscience-domain specific simulators, enabling models to be shared between these traditionally separate disciplines. LEMS and NeuroML 2 provide a new, comprehensive framework for defining computational models of neuronal and other biological systems in a machine readable format, making them more reproducible and increasing the transparency and accessibility of their underlying structure and properties.
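
    The unit-handling idea mentioned above (every quantity carries its base dimensions, and expressions are checked against them) can be sketched independently of LEMS. The Python fragment below represents a dimension as a tuple of exponents over (mass, length, time, current) and refuses to add quantities whose dimensions disagree; it illustrates the principle only and is not LEMS's actual implementation.

      # Illustration of dimension bookkeeping over base dimensions (not LEMS code).
      from dataclasses import dataclass

      @dataclass(frozen=True)
      class Quantity:
          value: float
          dim: tuple  # exponents over (mass, length, time, current)

          def __add__(self, other):
              if self.dim != other.dim:
                  raise ValueError(f"dimension mismatch: {self.dim} vs {other.dim}")
              return Quantity(self.value + other.value, self.dim)

          def __mul__(self, other):
              return Quantity(self.value * other.value,
                              tuple(a + b for a, b in zip(self.dim, other.dim)))

      VOLTAGE = (1, 2, -3, -1)   # kg m^2 s^-3 A^-1
      CURRENT = (0, 0, 0, 1)     # A

      v = Quantity(-0.065, VOLTAGE)   # a membrane potential, in volts
      i = Quantity(1e-9, CURRENT)     # a current, in amperes
      power = v * i                   # dimensions combine to (1, 2, -3, 0), i.e. watts
      print(power.value, power.dim)
      # v + i would raise ValueError: dimension mismatch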

  2. LEMS: a language for expressing complex biological models in concise and hierarchical form and its use in underpinning NeuroML 2

    PubMed Central

    Cannon, Robert C.; Gleeson, Padraig; Crook, Sharon; Ganapathy, Gautham; Marin, Boris; Piasini, Eugenio; Silver, R. Angus

    2014-01-01

    Computational models are increasingly important for studying complex neurophysiological systems. As scientific tools, it is essential that such models can be reproduced and critically evaluated by a range of scientists. However, published models are currently implemented using a diverse set of modeling approaches, simulation tools, and computer languages making them inaccessible and difficult to reproduce. Models also typically contain concepts that are tightly linked to domain-specific simulators, or depend on knowledge that is described exclusively in text-based documentation. To address these issues we have developed a compact, hierarchical, XML-based language called LEMS (Low Entropy Model Specification), that can define the structure and dynamics of a wide range of biological models in a fully machine readable format. We describe how LEMS underpins the latest version of NeuroML and show that this framework can define models of ion channels, synapses, neurons and networks. Unit handling, often a source of error when reusing models, is built into the core of the language by specifying physical quantities in models in terms of the base dimensions. We show how LEMS, together with the open source Java and Python based libraries we have developed, facilitates the generation of scripts for multiple neuronal simulators and provides a route for simulator free code generation. We establish that LEMS can be used to define models from systems biology and map them to neuroscience-domain specific simulators, enabling models to be shared between these traditionally separate disciplines. LEMS and NeuroML 2 provide a new, comprehensive framework for defining computational models of neuronal and other biological systems in a machine readable format, making them more reproducible and increasing the transparency and accessibility of their underlying structure and properties. PMID:25309419

  3. Biotea: RDFizing PubMed Central in support for the paper as an interface to the Web of Data

    PubMed Central

    2013-01-01

    Background: The World Wide Web has become a dissemination platform for scientific and non-scientific publications. However, most of the information remains locked up in discrete documents that are not always interconnected or machine-readable. The connectivity tissue provided by RDF technology has not yet been widely used to support the generation of self-describing, machine-readable documents. Results: In this paper, we present our approach to the generation of self-describing machine-readable scholarly documents. We understand the scientific document as an entry point and interface to the Web of Data. We have semantically processed the full-text, open-access subset of PubMed Central. Our RDF model and resulting dataset make extensive use of existing ontologies and semantic enrichment services. We expose our model, services, prototype, and datasets at http://biotea.idiginfo.org/. Conclusions: The semantic processing of biomedical literature presented in this paper embeds documents within the Web of Data and facilitates the execution of concept-based queries against the entire digital library. Our approach delivers a flexible and adaptable set of tools for metadata enrichment and semantic processing of biomedical documents. Our model delivers a semantically rich and highly interconnected dataset with self-describing content so that software can make effective use of it. PMID:23734622

  4. Gloved Human-Machine Interface

    NASA Technical Reports Server (NTRS)

    Adams, Richard (Inventor); Hannaford, Blake (Inventor); Olowin, Aaron (Inventor)

    2015-01-01

    Certain exemplary embodiments can provide a system, machine, device, manufacture, circuit, composition of matter, and/or user interface adapted for and/or resulting from, and/or a method and/or machine-readable medium comprising machine-implementable instructions for, activities that can comprise and/or relate to: tracking movement of a gloved hand of a human; interpreting a gloved finger movement of the human; and/or in response to interpreting the gloved finger movement, providing feedback to the human.

  5. U.S. Visa Waiver Program Changes

    NASA Astrophysics Data System (ADS)

    The U.S. State Department has just announced a change to a new rule affecting citizens from visa waiver program countries. The rule, scheduled to go into effect on 1 October 2003, requires visitors from these countries to obtain non-immigrant visas to enter the United States if they do not have machine-readable passports. The announced change is that a visa waiver country can petition the U.S. government to delay the rule by one year. The State Department recommends that citizens of visa waiver program countries who are contemplating visiting the United States, and do not have machine-readable passports, contact the nearest U.S. embassy or consulate to find out if implementation of the rule has been temporarily waived for their countries.

  6. Documentation for the machine-readable version of OAO 2 filter photometry of 531 stars of diverse types

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1982-01-01

    A magnetic tape version of the ultraviolet photometry of 531 stars observed with the Wisconsin Experiment Package aboard the Orbiting Astronomical Observatory (OAO 2) is described. The data were obtained with medium band interference filters and were reduced to a uniform magnitude system. They represent a subset of partially reduced data currently on file at the National Space Science Data Center. The document is intended to enable users of the tape file to read and process data without problems or guesswork. For technical details concerning the observations, instrumentation limitations, and interpretation of the data the reference publication should be consulted. This document was designed for distribution with any machine-readable version of the OAO 2 photometric data.

  7. Controlled English to facilitate human/machine analytical processing

    NASA Astrophysics Data System (ADS)

    Braines, Dave; Mott, David; Laws, Simon; de Mel, Geeth; Pham, Tien

    2013-06-01

    Controlled English is a human-readable information representation format that is implemented using a restricted subset of the English language, but which is unambiguous and directly accessible by simple machine processes. We have been researching the capabilities of CE in a number of contexts, and exploring the degree to which a flexible and more human-friendly information representation format could aid the intelligence analyst in a multi-agent collaborative operational environment; especially in cases where the agents are a mixture of other human users and machine processes aimed at assisting the human users. CE itself is built upon a formal logic basis, but allows users to easily specify models for a domain of interest in a human-friendly language. In our research we have been developing an experimental component known as the "CE Store" in which CE information can be quickly and flexibly processed and shared between human and machine agents. The CE Store environment contains a number of specialized machine agents for common processing tasks and also supports execution of logical inference rules that can be defined in the same CE language. This paper outlines the basic architecture of this approach, discusses some of the example machine agents that have been developed, and provides some typical examples of the CE language and the way in which it has been used to support complex analytical tasks on synthetic data sources. We highlight the fusion of human and machine processing supported through the use of the CE language and CE Store environment, and show this environment with examples of highly dynamic extensions to the model(s) and integration between different user-defined models in a collaborative setting.

  8. STELAR: An experiment in the electronic distribution of astronomical literature

    NASA Technical Reports Server (NTRS)

    Warnock, A.; Vansteenburg, M. E.; Brotzman, L. E.; Gass, J.; Kovalsky, D.

    1992-01-01

    STELAR (Study of Electronic Literature for Astronomical Research) is a Goddard-based project designed to test methods of delivering technical literature in machine readable form. To that end, we have scanned a five year span of the ApJ, ApJ Supp, AJ and PASP, and have obtained abstracts for eight leading academic journals from NASA/STI CASI, which also makes these abstracts available through the NASA RECON system. We have also obtained machine readable versions of some journal volumes from the publishers, although in many instances, the final typeset versions are no longer available. The fundamental data object for the STELAR database is the article, a collection of items associated with a scientific paper - abstract, scanned pages (in a variety of formats), figures, OCR extractions, forward and backward references, errata and versions of the paper in various formats (e.g., TEX, SGML, PostScript, DVI). Articles are uniquely referenced in the database by journal name, volume number and page number. The selection and delivery of articles is accomplished through the WAIS (Wide Area Information Server) client/server model, requiring only an Internet connection. Modest modifications to the server code have made it capable of delivering the multiple data types required by STELAR. WAIS is a platform-independent and fully open multi-disciplinary delivery system, originally developed by Thinking Machines Corp. and made available free of charge. It is based on the ISO Z39.50 standard communications protocol. WAIS servers run under both UNIX and VMS. WAIS clients run on a wide variety of machines, from UNIX-based X Windows systems to MS-DOS and Macintosh microcomputers. The WAIS system includes full-text indexing and searching of documents, a network interface and easy access to a variety of document viewers. ASCII versions of the CASI abstracts have been formatted for display and the full text of the abstracts has been indexed. The entire WAIS database of abstracts is now available for use by the astronomical community. Enhancements of the search and retrieval system are under investigation to include specialized searches (by reference, author or keyword, as opposed to full-text searches), improved handling of word stems, improvements in relevancy criteria and other retrieval techniques, such as factor spaces. The STELAR project has been assisted by the full cooperation of the AAS, the ASP, the publishers of the academic journals, librarians from GSFC, NRAO and STScI, the Library of Congress, and the University of North Carolina at Chapel Hill.

  9. 37 CFR 1.824 - Form and format for nucleotide and/or amino acid sequence submissions in computer readable form.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... which the data were recorded on the computer readable form, the operating system used, a reference... in a self-extracting format that will decompress on one of the systems described in paragraph (b) of... these format requirements: (1) Computer Compatibility: IBM PC/XT/AT or Apple Macintosh; (2) Operating...

  10. 18 CFR 125.2 - General instructions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... stored on machine readable media. Internal control procedures must be documented by a responsible... associated companies. Public utilities and licensees must assure the availability of records of services...

  11. 5 CFR 841.1005 - State responsibilities.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ....1005 State responsibilities. The State will, in performance of this agreement: (a) Accept requests and...) Convert these requests on a monthly basis to a machine-readable magnetic tape using specifications...

  12. 5 CFR 841.1005 - State responsibilities.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ....1005 State responsibilities. The State will, in performance of this agreement: (a) Accept requests and...) Convert these requests on a monthly basis to a machine-readable magnetic tape using specifications...

  13. Second catalog of interferometric measurements of binary stars (McAlister and Hartkopf 1988): Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.

    1989-01-01

    The machine-readable version of the catalog, as it is currently being distributed from the Astronomical Data Center, is described. The catalog is a compilation of measurements of binary- and multiple-star systems obtained by speckle interferometric techniques; this version supersedes a previous edition of the catalog published in 1985. Stars that have been examined for multiplicity with negative results are included, in which case upper limits for the separation are given. The second version is expanded from the first in that a file of newly resolved systems and six cross-index files of alternate designations are included. The data file contains alternate identifications for the observed systems, epochs of observation, reported errors in position angles and separation, and bibliographical references.

  14. Online NASTRAN documentation

    NASA Technical Reports Server (NTRS)

    Turner, Horace Q.; Harper, David F.

    1991-01-01

    The distribution of NASTRAN User Manual information has been difficult because of the delay in printing and the difficulty of identifying all the users. As a result, many users have not had the current information for the release of NASTRAN that could be available to them. The User Manual updates have been supplied with the NASTRAN releases, but distribution within organizations was not coordinated with access to the releases. The Executive Control, Case Control, and Bulk Data sections are supplied in machine readable format with the 91 Release of NASTRAN. This information is supplied on the release tapes in ASCII format, along with a FORTRAN program to access it. This will allow each user to have immediate access to User Manual level documentation with the release. The sections on utilities, plotting, and substructures are expected to be prepared for the 92 Release.

  15. Data Distribution System (DDS) and Solar Dynamic Observatory Ground Station (SDOGS) Integration Manager

    NASA Technical Reports Server (NTRS)

    Pham, Kim; Bialas, Thomas

    2012-01-01

    The DDS SDOGS Integration Manager (DSIM) provides translation between native control and status formats for systems within DDS and SDOGS, and the ASIST (Advanced Spacecraft Integration and System Test) control environment in the SDO MOC (Solar Dynamics Observatory Mission Operations Center). This system was created in response to a need to centralize remote monitoring and control of SDO Ground Station equipment using the ASIST control environment in the SDO MOC, and to have configurable table definitions for equipment. It provides translation of status and monitoring information from the native systems into an ASIST-readable format for display on pages in the MOC. The manager is lightweight, user friendly, and efficient. It allows data trending, correlation, and storage. It allows ASIST to be used as a common interface for remote monitoring and control of heterogeneous equipment. It also provides failover capability to backup machines.

  16. 32 CFR 701.41 - FOIA fee terms.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... can take the form of paper copy, microfiche, audiovisual, or machine readable documentation (e.g... duplication of computer tapes and audiovisual, the actual cost, including the operator's time, shall be...

  17. 18 CFR 356.2 - General instructions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... procedures that assure the reliability of and ready access to data stored on machine readable media. Internal... of services performed by associated companies. Oil pipeline companies must assure the availability of...

  18. 5 CFR 831.1904 - State responsibilities.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... performance of this agreement: (a) Accept requests and revocations from annuitants who have designated that... machine-readable magnetic tape using specifications received from OPM, and forward that tape to OPM for...

  19. 5 CFR 831.1904 - State responsibilities.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... performance of this agreement: (a) Accept requests and revocations from annuitants who have designated that... machine-readable magnetic tape using specifications received from OPM, and forward that tape to OPM for...

  20. 41 CFR 51-8.3 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... COMMITTEE FOR PURCHASE FROM PEOPLE WHO ARE BLIND OR SEVERELY DISABLED 8-PUBLIC AVAILABILITY OF AGENCY... request. Such copies can take the form of paper copy, audio-visual materials, or machine readable...

  1. 41 CFR 51-8.3 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... COMMITTEE FOR PURCHASE FROM PEOPLE WHO ARE BLIND OR SEVERELY DISABLED 8-PUBLIC AVAILABILITY OF AGENCY... request. Such copies can take the form of paper copy, audio-visual materials, or machine readable...

  2. 32 CFR 701.41 - FOIA fee terms.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... OFFICIAL RECORDS AVAILABILITY OF DEPARTMENT OF THE NAVY RECORDS AND PUBLICATION OF DEPARTMENT OF THE NAVY... can take the form of paper copy, microfiche, audiovisual, or machine readable documentation (e.g...

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schlossberg, David J.; Bodner, Grant M.; Bongard, Michael W.

    This public data set contains openly-documented, machine readable digital research data corresponding to figures published in D.J. Schlossberg et al., 'Non-Inductively Driven Tokamak Plasmas at Near-Unity Toroidal Beta,' Phys. Rev. Lett. 119, 035001 (2017).

  4. TOLNet Data Format for Lidar Ozone Profile & Surface Observations

    NASA Astrophysics Data System (ADS)

    Chen, G.; Aknan, A. A.; Newchurch, M.; Leblanc, T.

    2015-12-01

    The Tropospheric Ozone Lidar Network (TOLNet) is an interagency initiative started by NASA, NOAA, and EPA in 2011. TOLNet currently has six lidars and one ozonesonde station. TOLNet provides high-resolution spatio-temporal measurements of tropospheric (surface to tropopause) ozone and aerosol vertical profiles to address fundamental air-quality science questions. The TOLNet data format was developed by TOLNet members as a community standard for reporting ozone profile observations. The development of this new format was primarily based on the existing NDACC (Network for the Detection of Atmospheric Composition Change) format and the ICARTT (International Consortium for Atmospheric Research on Transport and Transformation) format. The main goal is to present the lidar observations in self-describing and easy-to-use data files. The TOLNet format is an ASCII format containing a general file header, individual profile headers, and the profile data. The last two components repeat for all profiles recorded in the file. The TOLNet format is both human and machine readable as it adopts standard metadata entries and fixed variable names. In addition, software has been developed to check for format compliance. A detailed description of the TOLNet format protocol and the scanning software will be presented.
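
    The abstract describes the file layout (one general header, then repeating profile-header and profile-data blocks) without giving the exact syntax, so the parser skeleton below is only a sketch of that structure. The block marker, the key = value metadata convention, and the whitespace-separated data columns are hypothetical assumptions, not the published TOLNet specification.

      # Hypothetical skeleton for a file with one general header followed by repeated
      # (profile header, profile data) blocks. Delimiters and fields are invented for
      # illustration; consult the TOLNet format documentation for the real layout.
      from dataclasses import dataclass, field

      @dataclass
      class Profile:
          header: dict = field(default_factory=dict)
          rows: list = field(default_factory=list)

      def parse(path):
          general, profiles, current = {}, [], None
          with open(path) as fh:
              for line in fh:
                  line = line.strip()
                  if not line:
                      continue
                  if line.startswith("#PROFILE"):       # hypothetical block marker
                      current = Profile()
                      profiles.append(current)
                  elif "=" in line:                     # hypothetical "key = value" metadata
                      key, value = (s.strip() for s in line.split("=", 1))
                      (current.header if current else general)[key] = value
                  elif current is not None:             # whitespace-separated data columns
                      current.rows.append([float(x) for x in line.split()])
          return general, profiles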

  5. NASA climate data catalog

    NASA Technical Reports Server (NTRS)

    Reph, M. G.

    1984-01-01

    This document provides a summary of information available in the NASA Climate Data Catalog. The catalog provides scientific users with technical information about selected climate parameter data sets and the associated sensor measurements from which they are derived. It is an integral part of the Pilot Climate Data System (PCDS), an interactive, scientific management system for locating, obtaining, manipulating, and displaying climate research data. The catalog is maintained in a machine readable representation which can easily be accessed via the PCDS. The purposes, format and content of the catalog are discussed. Summarized information is provided about each of the data sets currently described in the catalog. Sample detailed descriptions are included for individual data sets or families of related data sets.

  6. 3/29/2018: Making Data Machine-Readable Webinar | National Agricultural

    Science.gov Websites


  7. International Data Archive and Analysis Center. I. International Relations Archive. II. Voluntary International Coordination. III. Attachments.

    ERIC Educational Resources Information Center

    Miller, Warren; Tanter, Raymond

    The International Relations Archive undertakes as its primary goals the acquisition, management and dissemination of international affairs data. The first document enclosed is a copy of the final machine readable codebook prepared for the data from the Political Events Project, 1948-1965. Also included is a copy of the final machine-readable…

  8. Resource Sharing in Montana: A Study of Interlibrary Loan and Alternatives for a Montana Union Catalog.

    ERIC Educational Resources Information Center

    Matthews, Joseph R.

    This study recommends a variety of actions to create and maintain a Montana union catalog (MONCAT) for more effective usage of in-state resources and library funds. Specifically, it advocates (1) merger of existing COM, machine readable bibliographic records, and OCLC tapes into a single microform catalog; (2) acceptance of only machine readable…

  9. 12 CFR 309.5 - Procedures for requesting records.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., microfilm, audiovisual records, or machine readable records (e.g., magnetic tape or computer disk). (4... processing. A requester may contact the FOIA/PA Group to learn whether a particular request has been assigned...

  10. 12 CFR 309.5 - Procedures for requesting records.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., microfilm, audiovisual records, or machine readable records (e.g., magnetic tape or computer disk). (4... processing. A requester may contact the FOIA/PA Group to learn whether a particular request has been assigned...

  11. 75 FR 54052 - Description of Office, Procedures, and Public Information

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-03

    ..., found at: http://www.ffiec.gov . Requests must reasonably describe the records sought. (ii) Contents of..., microfilm, audiovisual records, or machine readable records (e.g., magnetic tape or computer disk). (D...

  12. 12 CFR 309.5 - Procedures for requesting records.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ..., microfilm, audiovisual records, or machine readable records (e.g., magnetic tape or computer disk). (4... processing. A requester may contact the FOIA/PA Group to learn whether a particular request has been assigned...

  13. 12 CFR 309.5 - Procedures for requesting records.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., microfilm, audiovisual records, or machine readable records (e.g., magnetic tape or computer disk). (4... processing. A requester may contact the FOIA/PA Group to learn whether a particular request has been assigned...

  14. 12 CFR 309.5 - Procedures for requesting records.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., microfilm, audiovisual records, or machine readable records (e.g., magnetic tape or computer disk). (4... processing. A requester may contact the FOIA/PA Group to learn whether a particular request has been assigned...

  15. Documentation for the machine-readable version of a deep objective-prism survey for large Magellanic cloud members

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1982-01-01

    This catalog contains 1273 proven or probable Large Magellanic Cloud (LMC) members, as found on deep objective-prism plates taken with the Curtis Schmidt telescope at Cerro Tololo Inter-American Observatory in Chile. The stars are generally brighter than about photographic magnitude 14. Approximate spectral types were determined by examination of the 580 A/mm objective-prism spectra; approximate 1975 positions were obtained by measuring relative to the 1975 coordinate grids on the Uppsala-Mount Stromlo Atlas of the LMC (Gascoigne and Westerlund 1961), and approximate photographic magnitudes were determined by averaging image density measures from the plates and image-diameter measures on the 'B' charts. The machine-readable version of the LMC survey catalog is described to enable users to read and process the tape file without problems or guesswork.

  16. PIML: the Pathogen Information Markup Language.

    PubMed

    He, Yongqun; Vines, Richard R; Wattam, Alice R; Abramochkin, Georgiy V; Dickerman, Allan W; Eckart, J Dana; Sobral, Bruno W S

    2005-01-01

    A vast amount of information about human, animal and plant pathogens has been acquired, stored and displayed in varied formats through different resources, both electronically and otherwise. However, there is no community standard format for organizing this information or agreement on machine-readable format(s) for data exchange, thereby hampering interoperation efforts across information systems harboring such infectious disease data. The Pathogen Information Markup Language (PIML) is a free, open, XML-based format for representing pathogen information. XSLT-based visual presentations of valid PIML documents were developed and can be accessed through the PathInfo website or as part of the interoperable web services federation known as ToolBus/PathPort. Currently, detailed PIML documents are available for 21 pathogens deemed of high priority with regard to public health and national biological defense. A dynamic query system allows simple queries as well as comparisons among these pathogens. Continuing efforts are being made to enlist support for PIML from other groups and to develop more PIML documents. All the PIML-related information is accessible from http://www.vbi.vt.edu/pathport/pathinfo/

  17. 37 CFR 1.824 - Form and format for nucleotide and/or amino acid sequence submissions in computer readable form.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... which the data were recorded on the computer readable form, the operating system used, a reference...) Operating System Compatibility: MS-DOS, MS-Windows, Unix or Macintosh; (3) Line Terminator: ASCII Carriage... in a self-extracting format that will decompress on one of the systems described in paragraph (b) of...

  18. 37 CFR 1.824 - Form and format for nucleotide and/or amino acid sequence submissions in computer readable form.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... which the data were recorded on the computer readable form, the operating system used, a reference...) Operating System Compatibility: MS-DOS, MS-Windows, Unix or Macintosh; (3) Line Terminator: ASCII Carriage... in a self-extracting format that will decompress on one of the systems described in paragraph (b) of...

  19. 37 CFR 1.824 - Form and format for nucleotide and/or amino acid sequence submissions in computer readable form.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... which the data were recorded on the computer readable form, the operating system used, a reference...) Operating System Compatibility: MS-DOS, MS-Windows, Unix or Macintosh; (3) Line Terminator: ASCII Carriage... in a self-extracting format that will decompress on one of the systems described in paragraph (b) of...

  20. A Web Tool for Generating High Quality Machine-readable Biological Pathways.

    PubMed

    Ramirez-Gaona, Miguel; Marcu, Ana; Pon, Allison; Grant, Jason; Wu, Anthony; Wishart, David S

    2017-02-08

    PathWhiz is a web server built to facilitate the creation of colorful, interactive, visually pleasing pathway diagrams that are rich in biological information. The pathways generated by this online application are machine-readable and fully compatible with essentially all web-browsers and computer operating systems. It uses a specially developed, web-enabled pathway drawing interface that permits the selection and placement of different combinations of pre-drawn biological or biochemical entities to depict reactions, interactions, transport processes and binding events. This palette of entities consists of chemical compounds, proteins, nucleic acids, cellular membranes, subcellular structures, tissues, and organs. All of the visual elements in it can be interactively adjusted and customized. Furthermore, because this tool is a web server, all pathways and pathway elements are publicly accessible. This kind of pathway "crowd sourcing" means that PathWhiz already contains a large and rapidly growing collection of previously drawn pathways and pathway elements. Here we describe a protocol for the quick and easy creation of new pathways and the alteration of existing pathways. To further facilitate pathway editing and creation, the tool contains replication and propagation functions. The replication function allows existing pathways to be used as templates to create or edit new pathways. The propagation function allows one to take an existing pathway and automatically propagate it across different species. Pathways created with this tool can be "re-styled" into different formats (KEGG-like or text-book like), colored with different backgrounds, exported to BioPAX, SBGN-ML, SBML, or PWML data exchange formats, and downloaded as PNG or SVG images. The pathways can easily be incorporated into online databases, integrated into presentations, posters or publications, or used exclusively for online visualization and exploration. This protocol has been successfully applied to generate over 2,000 pathway diagrams, which are now found in many online databases including HMDB, DrugBank, SMPDB, and ECMDB.

  1. Electronic Publishing.

    ERIC Educational Resources Information Center

    Lancaster, F. W.

    1989-01-01

    Describes various stages involved in the applications of electronic media to the publishing industry. Highlights include computer typesetting, or photocomposition; machine-readable databases; the distribution of publications in electronic form; computer conferencing and electronic mail; collaborative authorship; hypertext; hypermedia publications;…

  2. 19 CFR 163.1 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... following: Statements; declarations; documents; electronically generated or machine readable data; electronically stored or transmitted information or data; books; papers; correspondence; accounts; financial accounting data; technical data; computer programs necessary to retrieve information in a usable form; and...

  3. Color pictorial serpentine halftone for secure embedded data

    NASA Astrophysics Data System (ADS)

    Curry, Douglas N.

    1998-04-01

    This paper introduces a new rotatable glyph shape for trusted printing applications that has excellent image rendering, data storage and counterfeit deterrence properties. Referred to as a serpentine because it tiles into a meandering line screen, it can produce high quality images independent of its ability to embed data. The halftone cell is constructed with hyperbolic curves to enhance its dynamic range, and generates low distortion because of rotational tone invariance with its neighbors. An extension to the process allows the data to be formatted into human readable text patterns, viewable with a magnifying glass, and therefore not requiring input scanning. The resultant embedded halftone patterns can be recognized as simple numbers (0 - 9) or alphanumerics (a - z). The pattern intensity can be offset from the surrounding image field intensity, producing a watermarking effect. We have been able to embed words such as 'original' or license numbers into the background halftone pattern of images which can be readily observed in the original image, and which conveniently disappear upon copying. We have also embedded data blocks with self-clocking codes and error correction data which are machine-readable. Finally, we have successfully printed full color images with both the embedded data and text, simulating a trusted printing application.

  4. A Web-Based Information System for Field Data Management

    NASA Astrophysics Data System (ADS)

    Weng, Y. H.; Sun, F. S.

    2014-12-01

    A web-based field data management system has been designed and developed to allow field geologists to store, organize, manage, and share field data online. System requirements were analyzed and clearly defined first regarding what data are to be stored, who the potential users are, and what system functions are needed in order to deliver the right data in the right way to the right user. A 3-tiered architecture was adopted to create this secure, scalable system, which consists of a web browser at the front end, a database at the back end, and a functional logic server in the middle. Specifically, HTML, CSS, and JavaScript were used to implement the user interface in the front-end tier, the Apache web server runs PHP scripts, and MySQL is used for the back-end database. The system accepts various types of field information, including image, audio, video, numeric, and text. It allows users to select data and populate them on either Google Earth or Google Maps for the examination of the spatial relations. It also makes the sharing of field data easy by converting them into XML format that is both human-readable and machine-readable, and thus ready for reuse.
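
    As an illustration of the XML-sharing step mentioned above, a single field observation stored as key/value pairs can be serialized into an XML record whose elements mirror the database columns. The described system does this server-side in PHP; the sketch below uses Python only to show the idea, and the tag names are hypothetical.

      # Sketch of exporting one field observation to XML (illustrative tag names only,
      # not the system's actual PHP implementation or schema).
      import xml.etree.ElementTree as ET

      observation = {
          "site": "Outcrop-12",
          "latitude": "40.1234",
          "longitude": "-83.5678",
          "lithology": "sandstone",
          "notes": "cross-bedding visible",
      }

      record = ET.Element("observation")
      for name, value in observation.items():
          ET.SubElement(record, name).text = value

      print(ET.tostring(record, encoding="unicode"))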

  5. Machines first, humans second: on the importance of algorithmic interpretation of open chemistry data.

    PubMed

    Clark, Alex M; Williams, Antony J; Ekins, Sean

    2015-01-01

    The current rise in the use of open lab notebook techniques means that there are an increasing number of scientists who make chemical information freely and openly available to the entire community as a series of micropublications that are released shortly after the conclusion of each experiment. We propose that this trend be accompanied by a thorough examination of data sharing priorities. We argue that the most significant immediate beneficiary of open data is in fact chemical algorithms, which are capable of absorbing vast quantities of data, and using it to present concise insights to working chemists, on a scale that could not be achieved by traditional publication methods. Making this goal practically achievable will require a paradigm shift in the way individual scientists translate their data into digital form, since most contemporary methods of data entry are designed for presentation to humans rather than consumption by machine learning algorithms. We discuss some of the complex issues involved in fixing current methods, as well as some of the immediate benefits that can be gained when open data is published correctly using unambiguous machine readable formats. Graphical Abstract: Lab notebook entries must target both visualisation by scientists and use by machine learning algorithms.

  6. ClearTK 2.0: Design Patterns for Machine Learning in UIMA

    PubMed Central

    Bethard, Steven; Ogren, Philip; Becker, Lee

    2014-01-01

    ClearTK adds machine learning functionality to the UIMA framework, providing wrappers to popular machine learning libraries, a rich feature extraction library that works across different classifiers, and utilities for applying and evaluating machine learning models. Since its inception in 2008, ClearTK has evolved in response to feedback from developers and the community. This evolution has followed a number of important design principles including: conceptually simple annotator interfaces, readable pipeline descriptions, minimal collection readers, type system agnostic code, modules organized for ease of import, and assisting user comprehension of the complex UIMA framework. PMID:29104966

  7. ClearTK 2.0: Design Patterns for Machine Learning in UIMA.

    PubMed

    Bethard, Steven; Ogren, Philip; Becker, Lee

    2014-05-01

    ClearTK adds machine learning functionality to the UIMA framework, providing wrappers to popular machine learning libraries, a rich feature extraction library that works across different classifiers, and utilities for applying and evaluating machine learning models. Since its inception in 2008, ClearTK has evolved in response to feedback from developers and the community. This evolution has followed a number of important design principles including: conceptually simple annotator interfaces, readable pipeline descriptions, minimal collection readers, type system agnostic code, modules organized for ease of import, and assisting user comprehension of the complex UIMA framework.

  8. 14 CFR 221.500 - Transmission of electronic tariffs to subscribers.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... to any subscriber to the on-line tariff database, including access to the justification required by... machine-readable data (raw tariff data) of all daily transactions made to its on-line tariff database. The...

  9. 14 CFR 221.500 - Transmission of electronic tariffs to subscribers.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... to any subscriber to the on-line tariff database, including access to the justification required by... machine-readable data (raw tariff data) of all daily transactions made to its on-line tariff database. The...

  10. 14 CFR 221.500 - Transmission of electronic tariffs to subscribers.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... to any subscriber to the on-line tariff database, including access to the justification required by... machine-readable data (raw tariff data) of all daily transactions made to its on-line tariff database. The...

  11. 14 CFR 221.500 - Transmission of electronic tariffs to subscribers.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... to any subscriber to the on-line tariff database, including access to the justification required by... machine-readable data (raw tariff data) of all daily transactions made to its on-line tariff database. The...

  12. Documentation for the machine-readable version of the Catalogue of Nearby Stars, edition 1969

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1982-01-01

    The Catalogue of Nearby Stars, Edition 1969 (Gliese 1969) contains a number of modifications and additions to the 1957 catalog. The 1969 edition lists: (1) all 915 stars of the first edition, even though newer parallaxes place some of the stars below the catalog limit; (2) almost all known stars having trigonometric parallaxes ≥ 0.045 arcsec, although in some cases the mean values of the trigonometric and spectral or photometric parallaxes fall below 0.045 arcsec (Pleiades stars and the carbon star X Cnc have been omitted); and (3) all stars with mean (resulting) parallaxes ≥ 0.045 arcsec. The resulting catalog contains 1529 single stars and systems with a total of 1890 components (not including spectroscopic and astrometric companions). The machine-readable version of the catalog is described; the documentation is intended to enable users to read and process the data without problems or guesswork.
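
    Machine-readable catalogs of this era are distributed as fixed-width text records; the Python sketch below illustrates how such a record might be parsed. Both the byte ranges and the sample line are invented placeholders; the authoritative record layout is given in the catalog documentation itself.

        # Illustrative only: hypothetical byte ranges and a made-up sample line.
        # The real record layout is defined in the catalog documentation.
        def parse_record(line: str) -> dict:
            return {
                "name":     line[0:8].strip(),    # star identifier
                "ra_1950":  line[8:18].strip(),   # right ascension, equinox 1950.0
                "dec_1950": line[18:28].strip(),  # declination, equinox 1950.0
                "parallax": float(line[28:35]),   # trigonometric parallax (arcsec)
            }

        sample = "GJ  0001  00 02 26  -37 36.6 0.0770"
        print(parse_record(sample))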

  13. Content, format, gender and grade level differences in elementary students' ability to read science materials as measured by the cloze procedure

    NASA Astrophysics Data System (ADS)

    Williams, Richard L.; Yore, Larry D.

    Present instructional trends in science indicate a need to reexamine a traditional concern in science education: the readability of science textbooks. An area of reading research not well documented is the effect of color, visuals, and page layout on readability of science materials. Using the cloze readability method, the present study explored the relationships between page format, grade level, sex, content, and elementary school students' ability to read science material. Significant relationships were found between cloze scores and both grade level and content, and there was a significant interaction effect between grade and sex in favor of older males. No significant relationships could be attributed to page format and sex. In the area of science content, biological materials were most difficult in terms of readability, followed by earth science and physical science. Grade level data indicated that grade five materials were more difficult for that level than either grade four or grade six materials were for students at each respective level. In eight of nine cases, the science text materials would be classified at or near the frustration level of readability. The implications for textbook writers and publishers are that science reading materials need to be produced with greater attention to readability and known design principles regarding visual supplements. The implication for teachers is that students need direct instruction in using visual materials to increase their learning from text material. Present visual materials appear to neither help nor hinder the student's ability to gain information from text material.

  14. 26 CFR 31.3406(d)-4 - Special rules for readily tradable instruments acquired through a broker.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... means of magnetic media, machine readable document, or any other medium, provided that the notice... her social security number.) (2) You failed to certify, under penalties of perjury, that you are not...

  15. 26 CFR 31.3406(d)-4 - Special rules for readily tradable instruments acquired through a broker.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... means of magnetic media, machine readable document, or any other medium, provided that the notice... her social security number.) (2) You failed to certify, under penalties of perjury, that you are not...

  16. 26 CFR 31.3406(d)-4 - Special rules for readily tradable instruments acquired through a broker.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... means of magnetic media, machine readable document, or any other medium, provided that the notice... her social security number.) (2) You failed to certify, under penalties of perjury, that you are not...

  17. Procedures and Policies Manual

    ERIC Educational Resources Information Center

    Davis, Jane M.

    2006-01-01

    This document was developed by the Middle Tennessee State University James E. Walker Library Collection Management Department to provide policies and procedural guidelines for the cataloging and processing of bibliographic materials. This document includes policies for cataloging monographs, serials, government documents, machine-readable data…

  18. 26 CFR 31.3406(d)-4 - Special rules for readily tradable instruments acquired through a broker.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... means of magnetic media, machine readable document, or any other medium, provided that the notice... her social security number.) (2) You failed to certify, under penalties of perjury, that you are not...

  19. 12 CFR 602.3 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 6 2011-01-01 2011-01-01 false Definitions. 602.3 Section 602.3 Banks and Banking FARM CREDIT ADMINISTRATION ADMINISTRATIVE PROVISIONS RELEASING INFORMATION Availability of Records... means all documentary materials, such as books, papers, maps, photographs, and machine-readable...

  20. A Summary of Proposed Changes to the Current ICARTT Format Standards and their Implications to Future Airborne Studies

    NASA Astrophysics Data System (ADS)

    Northup, E. A.; Kusterer, J.; Quam, B.; Chen, G.; Early, A. B.; Beach, A. L., III

    2015-12-01

    The current ICARTT file format standards were developed to fulfill the data management needs of the International Consortium for Atmospheric Research on Transport and Transformation (ICARTT) campaign in 2004. The goal of the ICARTT file format was to establish a common and simple-to-use data file format to promote data exchange and collaboration among science teams with similar science objectives. ICARTT has been the NASA standard since 2010, and is widely used by NOAA, NSF, and international partners (DLR, FAAM). Despite its level of acceptance, there are a number of issues with the current ICARTT format, especially concerning machine readability. To enhance usability, the ICARTT Refresh Earth Science Data Systems Working Group (ESDSWG) was established to provide a platform for atmospheric science data producers, users (e.g. modelers) and data managers to collaborate on developing criteria for this file format. Ultimately, this is a cross-agency effort to improve and aggregate the metadata records being produced. After conducting a survey to identify deficiencies in the current format, we determined which issues are considered most important to the various communities. Numerous recommendations were made to improve upon the file format while maintaining backward compatibility. The recommendations made to date and their advantages and limitations will be discussed.
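
    To make the machine-readability concern concrete, the Python sketch below reads an ICARTT file under the common convention that the first header line carries the header-line count and file format index (e.g. "45, 1001") and that data records follow as comma-separated values; the file name and any further interpretation are assumptions for illustration.

        # Sketch of reading an ICARTT file, assuming the usual convention that
        # line 1 holds "<number of header lines>, <file format index>" and that
        # data records after the header are comma-separated. File name is made up.
        def read_icartt(path: str):
            with open(path) as f:
                n_header, ffi = (int(x) for x in f.readline().split(","))
                header = [f.readline() for _ in range(n_header - 1)]  # remaining header lines
                data = [line.strip().split(",") for line in f if line.strip()]
            return header, data

        # header, rows = read_icartt("example_flight_data.ict")  # hypothetical file name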

  1. The caBIG annotation and image Markup project.

    PubMed

    Channin, David S; Mongkolwat, Pattanasak; Kleper, Vladimir; Sepukar, Kastubh; Rubin, Daniel L

    2010-04-01

    Image annotation and markup are at the core of medical interpretation in both the clinical and the research setting. Digital medical images are managed with the DICOM standard format. While DICOM contains a large amount of meta-data about whom, where, and how the image was acquired, DICOM says little about the content or meaning of the pixel data. An image annotation is the explanatory or descriptive information about the pixel data of an image that is generated by a human or machine observer. An image markup consists of the graphical symbols placed over the image to depict an annotation. While DICOM is the standard for medical image acquisition, manipulation, transmission, storage, and display, there are no standards for image annotation and markup. Many systems expect annotation to be reported verbally, while markups are stored in graphical overlays or proprietary formats. This makes it difficult to extract and compute with both of them. The goal of the Annotation and Image Markup (AIM) project is to develop a mechanism for modeling, capturing, and serializing image annotation and markup data that can be adopted as a standard by the medical imaging community. The AIM project produces both human- and machine-readable artifacts. This paper describes the AIM information model, schemas, software libraries, and tools so as to prepare researchers and developers for their use of AIM.

  2. MachineProse: an Ontological Framework for Scientific Assertions

    PubMed Central

    Dinakarpandian, Deendayal; Lee, Yugyung; Vishwanath, Kartik; Lingambhotla, Rohini

    2006-01-01

    Objective: The idea of testing a hypothesis is central to the practice of biomedical research. However, the results of testing a hypothesis are published mainly in the form of prose articles. Encoding the results as scientific assertions that are both human- and machine-readable would greatly enhance the synergistic growth and dissemination of knowledge. Design: We have developed MachineProse (MP), an ontological framework for the concise specification of scientific assertions. MP is based on the idea of an assertion constituting a fundamental unit of knowledge. This is in contrast to current approaches that use discrete concept terms from domain ontologies for annotation, with assertions only inferred heuristically. Measurements: We use illustrative examples to highlight the advantages of MP over the use of the Medical Subject Headings (MeSH) system and keywords in indexing scientific articles. Results: We show how MP makes it possible to carry out semantic annotation of publications that is machine readable and allows for precise search capabilities. In addition, when used by itself, MP serves as a knowledge repository for emerging discoveries. A proof-of-concept prototype has been developed that demonstrates the feasibility and novel benefits of MP. As part of the MP framework, we have created an ontology of relationship types with about 100 terms optimized for the representation of scientific assertions. Conclusion: MachineProse is a novel semantic framework that we believe may be used to summarize research findings, annotate biomedical publications, and support sophisticated searches. PMID:16357355

  3. LiPD and CSciBox: A Case Study in Why Data Standards are Important for Paleoscience

    NASA Astrophysics Data System (ADS)

    Weiss, I.; Bradley, E.; McKay, N.; Emile-Geay, J.; de Vesine, L. R.; Anderson, K. A.; White, J. W. C.; Marchitto, T. M., Jr.

    2016-12-01

    CSciBox [1] is an integrated software system that helps geoscientists build and evaluate age models. Its user chooses from a number of built-in analysis tools, composing them into an analysis workflow and applying it to paleoclimate proxy datasets. CSciBox employs modern database technology to store both the data and the analysis results in an easily accessible and searchable form, and offers the user access to the computational toolbox, the data, and the results via a graphical user interface and a sophisticated plotter. Standards are a staple of modern life and underlie any form of automation; without data standards, it is difficult, if not impossible, to construct effective computer tools for paleoscience analysis. The LiPD (Linked Paleo Data) framework [2] enables the storage of both data and metadata in systematic, meaningful, machine-readable ways. LiPD has been a primary enabler of CSciBox's goals of usability, interoperability, and reproducibility. Building LiPD capabilities into CSciBox's importer, for instance, eliminated the need to ask the user about file formats, variable names, relationships between columns in the input file, etc. Building LiPD capabilities into the exporter facilitated the storage of complete details about the input data (provenance, preprocessing steps, etc.) as well as full descriptions of any analyses that were performed using the CSciBox tool, along with citations to appropriate references. This comprehensive collection of data and metadata, all linked together in a semantically meaningful, machine-readable way, not only completely documents the analyses and makes them reproducible, but also enables interoperability with any other software system that employs the LiPD standard. [1] www.cs.colorado.edu/~lizb/cscience.html [2] McKay & Emile-Geay, Climate of the Past 12:1093 (2016)
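
    As a rough illustration of what "machine-readable ways" buys an importer, the Python sketch below opens a LiPD-style bundle, assumed here to be a zip archive holding a JSON-LD metadata file plus CSV data tables. The file-naming conventions are assumptions for illustration; the normative structure is defined by the LiPD specification [2].

        # Hedged sketch: treat a LiPD-style bundle as a zip archive containing a
        # JSON-LD metadata file plus CSV data tables. File-name conventions here
        # are assumptions; see the LiPD specification for the normative layout.
        import csv, io, json, zipfile

        def read_lipd_bundle(path: str):
            with zipfile.ZipFile(path) as z:
                meta_name = next(n for n in z.namelist() if n.endswith(".jsonld"))
                metadata = json.loads(z.read(meta_name).decode("utf-8"))
                tables = {
                    n: list(csv.reader(io.TextIOWrapper(z.open(n), encoding="utf-8")))
                    for n in z.namelist() if n.endswith(".csv")
                }
            return metadata, tables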

  4. Environmental Research: Communication Studies and Information Sources.

    ERIC Educational Resources Information Center

    Ercegovac, Zorana

    1992-01-01

    Reviews literature on environmental information since 1986, with special emphasis on machine-readable sources as reported in the published literature. Highlights include a new model for studying environmental issues; environmental communication studies, including user studies; and environmental information sources, including pollution media and…

  5. 36 CFR 902.82 - Fee schedule.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... operating duplicating machinery. Not included in direct costs are overhead expenses such as costs of space... form of paper copy, microform, audio-visual materials, or machine-readable documentation (e.g... programs of scholarly research. (5) Non-commercial scientific institution means an institution that is not...

  6. 36 CFR 902.82 - Fee schedule.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... operating duplicating machinery. Not included in direct costs are overhead expenses such as costs of space... form of paper copy, microform, audio-visual materials, or machine-readable documentation (e.g... programs of scholarly research. (5) Non-commercial scientific institution means an institution that is not...

  7. A Mechanized Information Services Catalog.

    ERIC Educational Resources Information Center

    Marron, Beatrice; And Others

    The National Bureau of Standards is mechanizing a catalog of currently available information sources and services. Information from recent surveys of machine-readable, commercially-available bibliographic data bases, and the various current awareness, batch retrospective, and interactive retrospective services which can access them have been…

  8. 5 CFR 294.102 - General definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false General definitions. 294.102 Section 294.102 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS AVAILABILITY OF... the forms that such copies can take are paper, microform, audiovisual materials, or machine readable...

  9. 5 CFR 294.102 - General definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false General definitions. 294.102 Section 294.102 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS AVAILABILITY OF... the forms that such copies can take are paper, microform, audiovisual materials, or machine readable...

  10. On the Application of Syntactic Methodologies in Automatic Text Analysis.

    ERIC Educational Resources Information Center

    Salton, Gerard; And Others

    1990-01-01

    Summarizes various linguistic approaches proposed for document analysis in information retrieval environments. Topics discussed include syntactic analysis; use of machine-readable dictionary information; knowledge base construction; the PLNLP English Grammar (PEG) system; phrase normalization; and statistical and syntactic phrase evaluation used…

  11. 36 CFR 1275.16 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... pictures, sound and video recordings, machine-readable media, plats, maps, models, pictures, works of art... retained or appropriate for retention as evidence of or information about these powers or duties. Included... Activities or the Watergate Special Prosecution Force; or (2) Are circumscribed in the Articles of...

  12. 36 CFR 1275.16 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... pictures, sound and video recordings, machine-readable media, plats, maps, models, pictures, works of art... retained or appropriate for retention as evidence of or information about these powers or duties. Included... Activities or the Watergate Special Prosecution Force; or (2) Are circumscribed in the Articles of...

  13. 36 CFR 1275.16 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... pictures, sound and video recordings, machine-readable media, plats, maps, models, pictures, works of art... retained or appropriate for retention as evidence of or information about these powers or duties. Included... Activities or the Watergate Special Prosecution Force; or (2) Are circumscribed in the Articles of...

  14. 36 CFR 1275.16 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... pictures, sound and video recordings, machine-readable media, plats, maps, models, pictures, works of art... retained or appropriate for retention as evidence of or information about these powers or duties. Included... Activities or the Watergate Special Prosecution Force; or (2) Are circumscribed in the Articles of...

  15. Biological Dynamics Markup Language (BDML): an open format for representing quantitative biological dynamics data

    PubMed Central

    Kyoda, Koji; Tohsato, Yukako; Ho, Kenneth H. L.; Onami, Shuichi

    2015-01-01

    Motivation: Recent progress in live-cell imaging and modeling techniques has resulted in generation of a large amount of quantitative data (from experimental measurements and computer simulations) on spatiotemporal dynamics of biological objects such as molecules, cells and organisms. Although many research groups have independently dedicated their efforts to developing software tools for visualizing and analyzing these data, these tools are often not compatible with each other because of different data formats. Results: We developed an open unified format, Biological Dynamics Markup Language (BDML; current version: 0.2), which provides a basic framework for representing quantitative biological dynamics data for objects ranging from molecules to cells to organisms. BDML is based on Extensible Markup Language (XML). Its advantages are machine and human readability and extensibility. BDML will improve the efficiency of development and evaluation of software tools for data visualization and analysis. Availability and implementation: A specification and a schema file for BDML are freely available online at http://ssbd.qbic.riken.jp/bdml/. Contact: sonami@riken.jp Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:25414366
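
    The Python sketch below shows the general pattern of consuming XML-encoded dynamics data with the standard library; the element and attribute names are invented for illustration and are not the BDML schema, which is specified at the URL given in the record.

        # Generic XML-consumption sketch; element/attribute names are invented
        # and are NOT the BDML schema (see the specification URL in the record).
        import xml.etree.ElementTree as ET

        doc = """<dynamics>
          <object id="cell-1">
            <position t="0.0" x="1.2" y="3.4" z="0.0"/>
            <position t="1.0" x="1.3" y="3.5" z="0.1"/>
          </object>
        </dynamics>"""

        root = ET.fromstring(doc)
        for obj in root.iter("object"):
            for pos in obj.iter("position"):
                print(obj.get("id"), pos.get("t"), pos.get("x"), pos.get("y"))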

  16. Biological Dynamics Markup Language (BDML): an open format for representing quantitative biological dynamics data.

    PubMed

    Kyoda, Koji; Tohsato, Yukako; Ho, Kenneth H L; Onami, Shuichi

    2015-04-01

    Recent progress in live-cell imaging and modeling techniques has resulted in generation of a large amount of quantitative data (from experimental measurements and computer simulations) on spatiotemporal dynamics of biological objects such as molecules, cells and organisms. Although many research groups have independently dedicated their efforts to developing software tools for visualizing and analyzing these data, these tools are often not compatible with each other because of different data formats. We developed an open unified format, Biological Dynamics Markup Language (BDML; current version: 0.2), which provides a basic framework for representing quantitative biological dynamics data for objects ranging from molecules to cells to organisms. BDML is based on Extensible Markup Language (XML). Its advantages are machine and human readability and extensibility. BDML will improve the efficiency of development and evaluation of software tools for data visualization and analysis. A specification and a schema file for BDML are freely available online at http://ssbd.qbic.riken.jp/bdml/. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.

  17. 32 CFR 299.2 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 2 2014-07-01 2014-07-01 false Definitions. 299.2 Section 299.2 National Defense Department of Defense (Continued) OFFICE OF THE SECRETARY OF DEFENSE (CONTINUED) FREEDOM OF... compilation, such as all books, papers, maps, and photographs, machine readable materials, including those in...

  18. Machine Readable Bibliographic Records: Criteria and Creation.

    ERIC Educational Resources Information Center

    Bregzis, Ritvars

    The centrality of bibliographic records in library automation, objectives of the bibliographic record file and elemental factors involved in bibliographic record creation are discussed. The practical work of creating bibliographic records involves: (1) data base environment, (2) technical aspects, (3) cost and (4) operational methodology. The…

  19. 36 CFR § 902.82 - Fee schedule.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... operating duplicating machinery. Not included in direct costs are overhead expenses such as costs of space... form of paper copy, microform, audio-visual materials, or machine-readable documentation (e.g... programs of scholarly research. (5) Non-commercial scientific institution means an institution that is not...

  20. 36 CFR § 1275.16 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... pictures, sound and video recordings, machine-readable media, plats, maps, models, pictures, works of art... retained or appropriate for retention as evidence of or information about these powers or duties. Included... Activities or the Watergate Special Prosecution Force; or (2) Are circumscribed in the Articles of...

  1. 8 CFR 217.2 - Eligibility.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Nationality DEPARTMENT OF HOMELAND SECURITY IMMIGRATION REGULATIONS VISA WAIVER PROGRAM § 217.2 Eligibility... must present a machine-readable passport in order to be granted admission under the Visa Waiver Program. Round trip ticket means any return trip transportation ticket in the name of an arriving Visa Waiver...

  2. Management and Technology Division. Papers.

    ERIC Educational Resources Information Center

    International Federation of Library Associations, The Hague (Netherlands).

    Two papers on copyright and privacy considerations of international information transfer were presented at the 1982 International Federation of Library Associations (IFLA) conference. In "Findings of the IFLA International Study on the Copyright of Bibliographic Records in Machine-Readable Form," Dennis D. McDonald, Eleanor Jo Rodger,…

  3. A catalog of stellar spectrophotometry

    NASA Technical Reports Server (NTRS)

    Adelman, S. J.; Pyper, D. M.; Shore, S. N.; White, R. E.; Warren, W. H., Jr.

    1989-01-01

    A machine-readable catalog of stellar spectrophotometric measurements made with a rotating grating scanner is introduced. Consideration is given to the processes by which the stellar data were collected and calibrated against the fluxes of Vega (Hayes and Latham, 1975). A sample page from the spectrophotometric catalog is presented.

  4. Public Data Set: Radially Scanning Magnetic Probes to Study Local Helicity Injection Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richner, Nathan J; Bongard, Michael W; Fonck, Raymond J

    This data set contains openly-documented, machine readable digital research data corresponding to figures published in N.J. Richner et al., 'Radially Scanning Magnetic Probes to Study Local Helicity Injection Dynamics,' accepted for publication in Rev. Sci. Instrum (2018).

  5. GlycoRDF: an ontology to standardize glycomics data in RDF

    PubMed Central

    Ranzinger, Rene; Aoki-Kinoshita, Kiyoko F.; Campbell, Matthew P.; Kawano, Shin; Lütteke, Thomas; Okuda, Shujiro; Shinmachi, Daisuke; Shikanai, Toshihide; Sawaki, Hiromichi; Toukach, Philip; Matsubara, Masaaki; Yamada, Issaku; Narimatsu, Hisashi

    2015-01-01

    Motivation: Over the last decades several glycomics-based bioinformatics resources and databases have been created and released to the public. Unfortunately, there is no common standard in the representation of the stored information or a common machine-readable interface allowing bioinformatics groups to easily extract and cross-reference the stored information. Results: An international group of bioinformatics experts in the field of glycomics have worked together to create a standard Resource Description Framework (RDF) representation for glycomics data, focused on glycan sequences and related biological source, publications and experimental data. This RDF standard is defined by the GlycoRDF ontology and will be used by database providers to generate common machine-readable exports of the data stored in their databases. Availability and implementation: The ontology, supporting documentation and source code used by database providers to generate standardized RDF are available online (http://www.glycoinfo.org/GlycoRDF/). Contact: rene@ccrc.uga.edu or kkiyoko@soka.ac.jp Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25388145

  6. GlycoRDF: an ontology to standardize glycomics data in RDF.

    PubMed

    Ranzinger, Rene; Aoki-Kinoshita, Kiyoko F; Campbell, Matthew P; Kawano, Shin; Lütteke, Thomas; Okuda, Shujiro; Shinmachi, Daisuke; Shikanai, Toshihide; Sawaki, Hiromichi; Toukach, Philip; Matsubara, Masaaki; Yamada, Issaku; Narimatsu, Hisashi

    2015-03-15

    Over the last decades several glycomics-based bioinformatics resources and databases have been created and released to the public. Unfortunately, there is no common standard in the representation of the stored information or a common machine-readable interface allowing bioinformatics groups to easily extract and cross-reference the stored information. An international group of bioinformatics experts in the field of glycomics have worked together to create a standard Resource Description Framework (RDF) representation for glycomics data, focused on glycan sequences and related biological source, publications and experimental data. This RDF standard is defined by the GlycoRDF ontology and will be used by database providers to generate common machine-readable exports of the data stored in their databases. The ontology, supporting documentation and source code used by database providers to generate standardized RDF are available online (http://www.glycoinfo.org/GlycoRDF/). © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
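
    The general export pattern described here, database records serialized as RDF, can be sketched with the rdflib Python library as below; the namespace and predicate names are hypothetical placeholders, not terms from the GlycoRDF ontology.

        # Sketch of exporting a record as RDF with rdflib. The namespace and
        # predicates are placeholders, not actual GlycoRDF ontology terms.
        from rdflib import Graph, Literal, Namespace, RDF

        EX = Namespace("http://example.org/glyco#")  # placeholder namespace
        g = Graph()
        g.bind("ex", EX)

        glycan = EX["G00001"]
        g.add((glycan, RDF.type, EX.Glycan))
        g.add((glycan, EX.hasSequence, Literal("Gal(b1-4)GlcNAc")))
        g.add((glycan, EX.fromSpecies, Literal("Homo sapiens")))

        print(g.serialize(format="turtle"))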

  7. Sharing brain mapping statistical results with the neuroimaging data model

    PubMed Central

    Maumet, Camille; Auer, Tibor; Bowring, Alexander; Chen, Gang; Das, Samir; Flandin, Guillaume; Ghosh, Satrajit; Glatard, Tristan; Gorgolewski, Krzysztof J.; Helmer, Karl G.; Jenkinson, Mark; Keator, David B.; Nichols, B. Nolan; Poline, Jean-Baptiste; Reynolds, Richard; Sochat, Vanessa; Turner, Jessica; Nichols, Thomas E.

    2016-01-01

    Only a tiny fraction of the data and metadata produced by an fMRI study is finally conveyed to the community. This lack of transparency not only hinders the reproducibility of neuroimaging results but also impairs future meta-analyses. In this work we introduce NIDM-Results, a format specification providing a machine-readable description of neuroimaging statistical results along with key image data summarising the experiment. NIDM-Results provides a unified representation of mass univariate analyses including a level of detail consistent with available best practices. This standardized representation allows authors to relay methods and results in a platform-independent regularized format that is not tied to a particular neuroimaging software package. Tools are available to export NIDM-Results graphs and associated files from the widely used SPM and FSL software packages, and the NeuroVault repository can import NIDM-Results archives. The specification is publicly available at: http://nidm.nidash.org/specs/nidm-results.html. PMID:27922621

  8. Government documents and the online catalog.

    PubMed

    Lynch, F H; Lasater, M C

    1990-01-01

    Prior to planning for implementing the NOTIS system, the Vanderbilt Medical Center Library had not fully cataloged its government publications, and records for these materials were not in machine-readable format. A decision was made that patrons should need to look in only one place for all library materials, including the Health and Human Services Department publications received each year from the central library's Government Documents Unit. Beginning in 1985, these publications were added to the library's database, and the entire 7,200-piece collection is now in the online catalog. Working with these publications has taught the library much about the advantages and disadvantages of cataloging government documents in an online environment. It was found that OCLC cataloging copy is eventually available for most titles, although only about 10% of the records have MeSH headings. Staff time is the major expenditure; problems are caused by documents' irregular nature, frequent format changes, and difficult authority work. Since their addition to the online catalog, documents are used more and the library has better control.

  9. Government documents and the online catalog.

    PubMed Central

    Lynch, F H; Lasater, M C

    1990-01-01

    Prior to planning for implementing the NOTIS system, the Vanderbilt Medical Center Library had not fully cataloged its government publications, and records for these materials were not in machine-readable format. A decision was made that patrons should need to look in only one place for all library materials, including the Health and Human Services Department publications received each year from the central library's Government Documents Unit. Beginning in 1985, these publications were added to the library's database, and the entire 7,200-piece collection is now in the online catalog. Working with these publications has taught the library much about the advantages and disadvantages of cataloging government documents in an online environment. It was found that OCLC cataloging copy is eventually available for most titles, although only about 10% of the records have MeSH headings. Staff time is the major expenditure; problems are caused by documents' irregular nature, frequent format changes, and difficult authority work. Since their addition to the online catalog, documents are used more and the library has better control. PMID:2295010

  10. Documentation for the machine-readable character coded version of the SKYMAP catalogue

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1981-01-01

    The SKYMAP catalogue is a compilation of astronomical data prepared primarily for purposes of attitude guidance for satellites. In addition to the SKYMAP Master Catalogue data base, a software package of data base management and utility programs is available. The tape version of the SKYMAP Catalogue, as received by the Astronomical Data Center (ADC), contains logical records consisting of a combination of binary and EBCDIC data. Certain character coded data in each record are redundant in that the same data are present in binary form. In order to facilitate wider use of all SKYMAP data by the astronomical community, a formatted (character) version was prepared by eliminating all redundant character data and converting all binary data to character form. The character version of the catalogue is described. The document is intended to fully describe the formatted tape so that users can process the data without problems or guesswork; it should be distributed with any character version of the catalogue.
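
    The kind of conversion described here, binary fields plus EBCDIC text rewritten as plain characters, can be sketched in Python as below; the 16-byte record layout is an invented placeholder, not the actual SKYMAP record structure.

        # Hedged sketch: unpack a record mixing EBCDIC text with binary floats
        # and re-emit it as characters. The 16-byte layout here is invented and
        # is not the actual SKYMAP record structure.
        import struct

        def record_to_text(raw: bytes) -> str:
            name = raw[0:8].decode("cp037").strip()    # EBCDIC-coded identifier
            ra, dec = struct.unpack(">ff", raw[8:16])  # big-endian binary fields
            return f"{name:<10s} {ra:10.5f} {dec:10.5f}"

        sample = "HD  1234".encode("cp037") + struct.pack(">ff", 0.04061, 1.08901)
        print(record_to_text(sample))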

  11. 32 CFR 518.7 - FOIA terms defined.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... books, papers, maps, photographs, and machine readable materials, inclusive of those in electronic form... create or compile a record to satisfy a FOIA request. (3) Hard copy or electronic records that are... conduct. (h) Electronic record. Records (including e-mail) that are created, stored, and retrievable by...

  12. The Role of the National Agricultural Library.

    ERIC Educational Resources Information Center

    Howard, Joseph H.

    1989-01-01

    Describes the role, users, collections and services of the National Agricultural Library. Some of the services discussed include a machine readable bibliographic database, an international interlibrary loan system, programs to develop library networks and cooperative cataloging, and the development and use of information technologies such as laser…

  13. 46 CFR 503.43 - Fees for services.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Information Act or other request. Such copies can take the form of paper or machine readable documentation (e..., which operates a program or programs of scholarly research. (6) Non-commercial scientific institution... research the results of which are not intended to promote any particular product or industry. (7...

  14. 46 CFR 503.43 - Fees for services.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Information Act or other request. Such copies can take the form of paper or machine readable documentation (e..., which operates a program or programs of scholarly research. (6) Non-commercial scientific institution... research the results of which are not intended to promote any particular product or industry. (7...

  15. Public Data Set: H-mode Plasmas at Very Low Aspect Ratio on the Pegasus Toroidal Experiment

    DOE Data Explorer

    Thome, Kathreen E. [University of Wisconsin-Madison; Oak Ridge Associated Universities] (ORCID:0000000248013922); Bongard, Michael W. [University of Wisconsin-Madison] (ORCID:0000000231609746); Barr, Jayson L. [University of Wisconsin-Madison] (ORCID:0000000177685931); Bodner, Grant M. [University of Wisconsin-Madison] (ORCID:0000000324979172); Burke, Marcus G. [University of Wisconsin-Madison] (ORCID:0000000176193724); Fonck, Raymond J. [University of Wisconsin-Madison] (ORCID:0000000294386762); Kriete, David M. [University of Wisconsin-Madison] (ORCID:0000000236572911); Perry, Justin M. [University of Wisconsin-Madison] (ORCID:0000000171228609); Reusch, Joshua A. [University of Wisconsin-Madison] (ORCID:0000000284249422); Schlossberg, David J. [University of Wisconsin-Madison] (ORCID:0000000287139448)

    2016-09-30

    This data set contains openly-documented, machine readable digital research data corresponding to figures published in K.E. Thome et al., 'H-mode Plasmas at Very Low Aspect Ratio on the Pegasus Toroidal Experiment,' Nucl. Fusion 57, 022018 (2017).

  16. Recent Developments in Social Science Research

    ERIC Educational Resources Information Center

    Jenness, David

    1978-01-01

    In this discussion of recent theoretical and methodological developments in three selected areas of the social sciences--social indicators, social experimentation and evaluation, and longitudinal studies, attention is given to the growing availability of machine-readable data sets and to special archives and resources of interest to librarians.…

  17. 41 CFR 51-8.3 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... request. Such copies can take the form of paper copy, audio-visual materials, or machine readable materials (e.g., magnetic tape or disk), among others. (g) The term search includes all time spent looking... time spent resolving general legal or policy issues regarding the application of exemptions. [54 FR...

  18. 41 CFR 51-8.3 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... request. Such copies can take the form of paper copy, audio-visual materials, or machine readable materials (e.g., magnetic tape or disk), among others. (g) The term search includes all time spent looking... time spent resolving general legal or policy issues regarding the application of exemptions. [54 FR...

  19. Public Data Set: Impedance of an Intense Plasma-Cathode Electron Source for Tokamak Plasma Startup

    DOE Data Explorer

    Hinson, Edward T. [University of Wisconsin-Madison] (ORCID:000000019713140X); Barr, Jayson L. [University of Wisconsin-Madison] (ORCID:0000000177685931); Bongard, Michael W. [University of Wisconsin-Madison] (ORCID:0000000231609746); Burke, Marcus G. [University of Wisconsin-Madison] (ORCID:0000000176193724); Fonck, Raymond J. [University of Wisconsin-Madison] (ORCID:0000000294386762); Perry, Justin M. [University of Wisconsin-Madison] (ORCID:0000000171228609)

    2016-05-31

    This data set contains openly-documented, machine readable digital research data corresponding to figures published in E.T. Hinson et al., 'Impedance of an Intense Plasma-Cathode Electron Source for Tokamak Plasma Startup,' Physics of Plasmas 23, 052515 (2016).

  20. 48 CFR 252.211-7003 - Item identification and valuation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., used to retrieve data encoded on machine-readable media. Concatenated unique item identifier means— (1... Defense Logistics Information System (DLIS) Commercial and Government Entity (CAGE) Code). Issuing agency... identifier. Item means a single hardware article or a single unit formed by a grouping of subassemblies...

  1. Public Data Set: Control and Automation of the Pegasus Multi-point Thomson Scattering System

    DOE Data Explorer

    Bodner, Grant M. [University of Wisconsin-Madison] (ORCID:0000000324979172); Bongard, Michael W. [University of Wisconsin-Madison] (ORCID:0000000231609746); Fonck, Raymond J. [University of Wisconsin-Madison] (ORCID:0000000294386762); Reusch, Joshua A. [University of Wisconsin-Madison] (ORCID:0000000284249422); Rodriguez Sanchez, Cuauhtemoc [University of Wisconsin-Madison] (ORCID:0000000334712586); Schlossberg, David J. [University of Wisconsin-Madison] (ORCID:0000000287139448)

    2016-08-12

    This public data set contains openly-documented, machine readable digital research data corresponding to figures published in G.M. Bodner et al., 'Control and Automation of the Pegasus Multi-point Thomson Scattering System,' Rev. Sci. Instrum. 87, 11E523 (2016).

  2. Documentation for the machine-readable version of the catalog of supplemental stars to the Bonner Durchmusterung

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1980-01-01

    The magnetic tape version of the Bonn catalog is described. The catalog contains a listing of supplemental stars having lower case letter designations following the BD numbers after which they have been inserted. A sample catalog is also presented.

  3. Toward the Development of a Model to Estimate the Readability of Credentialing-Examination Materials

    ERIC Educational Resources Information Center

    Badgett, Barbara A.

    2010-01-01

    The purpose of this study was to develop a set of procedures to establish readability, including an equation, that accommodates the multiple-choice item format and occupational-specific language related to credentialing examinations. The procedures and equation should be appropriate for learning materials, examination materials, and occupational…

  4. Usability and Interoperability Improvements for an EASE-Grid 2.0 Passive Microwave Data Product Using CF Conventions

    NASA Astrophysics Data System (ADS)

    Hardman, M.; Brodzik, M. J.; Long, D. G.

    2017-12-01

    Beginning in 1978, the satellite passive microwave data record has been a mainstay of remote sensing of the cryosphere, providing twice-daily, near-global spatial coverage for monitoring changes in hydrologic and cryospheric parameters that include precipitation, soil moisture, surface water, vegetation, snow water equivalent, sea ice concentration and sea ice motion. Historical versions of the gridded passive microwave data sets were produced as flat binary files described only in human-readable documentation. This approach is error-prone and makes it difficult to reliably include all processing and provenance information. Funded by NASA MEaSUREs, we have completely reprocessed the gridded data record that includes SMMR, SSM/I-SSMIS and AMSR-E. The new Calibrated Enhanced-Resolution Brightness Temperature (CETB) Earth System Data Record (ESDR) files are self-describing. Our approach to the new data set was to create netCDF4 files that use standard metadata conventions and best practices to incorporate file-level, machine- and human-readable contents, geolocation, processing and provenance metadata. We followed the flexible and adaptable Climate and Forecast (CF-1.6) Conventions with respect to their coordinate conventions and map projection parameters. Additionally, we made use of the Attribute Conventions for Dataset Discovery (ACDD-1.3), which provide file-level conventions, including spatio-temporal bounds that enable indexing software to search for coverage. Our CETB files also include temporal coverage and spatial resolution in the file-level metadata for human readability. We made use of the JPL CF/ACDD Compliance Checker to guide this work. We tested our file format with real software: for example, the netCDF Command-line Operators (NCO) give fine-grained control over spatio-temporal subsetting and concatenation of files. The GDAL tools understand the CF metadata and produce fully compliant GeoTIFF files from our data. ArcMap can then reproject the GeoTIFF files on the fly and work with other geolocated data such as coastlines, with no special work required. We expect this combination of standards and well-tested interoperability to significantly improve the usability of this important ESDR for the Earth Science community.
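
    A minimal Python/netCDF4 sketch of attaching the kind of CF-1.6 and ACDD-1.3 file-level metadata described above follows; all attribute values and the variable contents are placeholders, not the actual CETB product metadata.

        # Minimal sketch of CF-1.6 / ACDD-1.3 style metadata in a netCDF4 file.
        # Every value below is a placeholder, not actual CETB product metadata.
        import numpy as np
        from netCDF4 import Dataset

        with Dataset("tb_example.nc", "w", format="NETCDF4") as nc:
            nc.Conventions = "CF-1.6, ACDD-1.3"
            nc.title = "Example gridded brightness temperatures"      # placeholder
            nc.summary = "Illustrative file showing CF/ACDD usage"    # placeholder
            nc.geospatial_lat_min = -90.0
            nc.geospatial_lat_max = 90.0
            nc.time_coverage_start = "2003-01-01T00:00:00Z"
            nc.time_coverage_end = "2003-01-01T23:59:59Z"

            nc.createDimension("y", 2)
            nc.createDimension("x", 2)
            tb = nc.createVariable("TB", "f4", ("y", "x"))
            tb.standard_name = "brightness_temperature"
            tb.units = "K"
            tb[:] = np.array([[250.0, 251.5], [249.8, 252.1]], dtype="f4")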

  5. 23 CFR Appendix A to Part 1313 - Tamper Resistant Driver's License

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...) Block graphics. (15) Security fonts and graphics with known hidden flaws. (16) Card stock, layer with colors. (17) Micro-graphics. (18) Retroflective security logos. (19) Machine readable technologies such... permit that has one or more of the following security features: (1) Ghost image. (2) Ghost graphic. (3...

  6. Developing a Large Lexical Database for Information Retrieval, Parsing, and Text Generation Systems.

    ERIC Educational Resources Information Center

    Conlon, Sumali Pin-Ngern; And Others

    1993-01-01

    Important characteristics of lexical databases and their applications in information retrieval and natural language processing are explained. An ongoing project using various machine-readable sources to build a lexical database is described, and detailed designs of individual entries with examples are included. (Contains 66 references.) (EAM)

  7. Comparing Characteristics of Highly Circulated Titles for Demand-Driven Collection Development.

    ERIC Educational Resources Information Center

    Britten, William A; Webster, Judith D.

    1992-01-01

    Describes methodology for analyzing MARC (machine-readable cataloging) records of highly circulating titles to document common characteristics for collection development purposes. Application of the methodology in a university library is discussed, and data are presented on commonality of subject heading, author, language, and imprint date for…

  8. MONTO: A Machine-Readable Ontology for Teaching Word Problems in Mathematics

    ERIC Educational Resources Information Center

    Lalingkar, Aparna; Ramnathan, Chandrashekar; Ramani, Srinivasan

    2015-01-01

    The Indian National Curriculum Framework has as one of its objectives the development of mathematical thinking and problem solving ability. However, recent studies conducted in Indian metros have expressed concern about students' mathematics learning. Except in some private coaching academies, regular classroom teaching does not include problem…

  9. Use of PL/1 in a Bibliographic Information Retrieval System.

    ERIC Educational Resources Information Center

    Schipma, Peter B.; And Others

    The Information Sciences section of IIT Research Institute (IITRI) has developed a Computer Search Center and is currently conducting a research project to explore computer searching of a variety of machine-readable data bases. The Center provides Selective Dissemination of Information services to academic, industrial and research organizations…

  10. Public Data Set: A Power-Balance Model for Local Helicity Injection Startup in a Spherical Tokamak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barr, Jayson L.; Bongard, Michael W.; Burke, Marcus G.

    This public data set contains openly-documented, machine readable digital research data corresponding to figures published in J.L. Barr et al., 'A Power-Balance Model for Local Helicity Injection Startup in a Spherical Tokamak,' Nuclear Fusion 58, 076011 (2018).

  11. 21 CFR 1401.3 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 9 2010-04-01 2010-04-01 false Definitions. 1401.3 Section 1401.3 Food and Drugs OFFICE OF NATIONAL DRUG CONTROL POLICY PUBLIC AVAILABILITY OF INFORMATION § 1401.3 Definitions. For the... paper, microform, audio-visual materials, or machine-readable documentation. ONDCP will provide a copy...

  12. An Operational System for Subject Switching between Controlled Vocabularies: A Computational Linguistics Approach.

    ERIC Educational Resources Information Center

    Silvester, June P.; And Others

    This report describes a new automated process that pioneers full-scale operational use of subject switching by the NASA (National Aeronautics and Space Administration) Scientific and Technical Information (STI) Facility. The subject switching process routinely translates machine-readable subject terms from one controlled vocabulary into the…

  13. A Model-Driven Approach to e-Course Management

    ERIC Educational Resources Information Center

    Savic, Goran; Segedinac, Milan; Milenkovic, Dušica; Hrin, Tamara; Segedinac, Mirjana

    2018-01-01

    This paper presents research on using a model-driven approach to the development and management of electronic courses. We propose a course management system which stores a course model represented as distinct machine-readable components containing domain knowledge of different course aspects. Based on this formally defined platform-independent…

  14. Method and system for enabling real-time speckle processing using hardware platforms

    NASA Technical Reports Server (NTRS)

    Ortiz, Fernando E. (Inventor); Kelmelis, Eric (Inventor); Durbano, James P. (Inventor); Curt, Peterson F. (Inventor)

    2012-01-01

    An accelerator for the speckle atmospheric compensation algorithm may enable real-time speckle processing of video feeds that may enable the speckle algorithm to be applied in numerous real-time applications. The accelerator may be implemented in various forms, including hardware, software, and/or machine-readable media.

  15. 14 CFR 221.500 - Transmission of electronic tariffs to subscribers.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS TARIFFS Electronically Filed Tariffs § 221.500... to any subscriber to the on-line tariff database, including access to the justification required by... machine-readable data (raw tariff data) of all daily transactions made to its on-line tariff database. The...

  16. 19 CFR 201.20 - Fees.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... requesters, subject to the limitations of paragraph (c) of this section. For a paper photocopy of a record... overhead expenses such as costs of space and heating or lighting of the facility in which the records are... of paper copy, microform, audio-visual materials, or machine-readable documentation (e.g., magnetic...

  17. 19 CFR 201.20 - Fees.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... requesters, subject to the limitations of paragraph (c) of this section. For a paper photocopy of a record... overhead expenses such as costs of space and heating or lighting of the facility in which the records are... of paper copy, microform, audio-visual materials, or machine-readable documentation (e.g., magnetic...

  18. 19 CFR 201.20 - Fees.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... requesters, subject to the limitations of paragraph (c) of this section. For a paper photocopy of a record... overhead expenses such as costs of space and heating or lighting of the facility in which the records are... of paper copy, microform, audio-visual materials, or machine-readable documentation (e.g., magnetic...

  19. 19 CFR 201.20 - Fees.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... requesters, subject to the limitations of paragraph (c) of this section. For a paper photocopy of a record... overhead expenses such as costs of space and heating or lighting of the facility in which the records are... of paper copy, microform, audio-visual materials, or machine-readable documentation (e.g., magnetic...

  20. 32 CFR 1662.6 - Fee schedule; waiver of fees.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... as costs of space, and heating or lighting the facility in which the records are stored. (2) The term... copies may take the form of paper copy, microform, audio-visual materials, or machine readable... institution of vocational education, which operates a program or programs of scholarly research. (7) The term...

  1. 32 CFR 1662.6 - Fee schedule; waiver of fees.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... as costs of space, and heating or lighting the facility in which the records are stored. (2) The term... copies may take the form of paper copy, microform, audio-visual materials, or machine readable... institution of vocational education, which operates a program or programs of scholarly research. (7) The term...

  2. 77 FR 64409 - Designation of Taiwan for the Visa Waiver Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-22

    ... and passport holders from designated Visa Waiver Program countries \\1\\ may apply for admission to the... for nationals of the country; (2) a government certification that it issues machine-readable passports... about the theft or loss of passports; (5) the government acceptance for repatriation any citizen, former...

  3. A Feasibility Study on Data Distribution on Optical Media.

    ERIC Educational Resources Information Center

    Campbell (Bonnie) & Associates, Toronto (Ontario).

    This feasibility study assesses the potential of optical technology in the development of accessible bibliographic and location data networks both in Canada and within the international MARC (Machine-Readable Cataloging) network. The study is divided into four parts: (1) a market survey of cataloging and interlibrary loan librarians to determine…

  4. IFLA General Conference, 1987. IFLA Core Programmes. Open Forum. Papers.

    ERIC Educational Resources Information Center

    International Federation of Library Associations, The Hague (Netherlands).

    The four papers in this compilation report on some of the recent core programs of the International Federation of Library Associations (IFLA): (1) "The IFLA Universal Bibliographic Control and International Machine Readable Cataloging Programme (UBCIM)" (Ross Bourne, IFLA UBCIM Programme Officer); (2) "The IFLA UAP (Universal…

  5. DOBIS and NOTIS: A Contrast in Design.

    ERIC Educational Resources Information Center

    Juergens, Bonnie; Blake, Ruth

    1987-01-01

    Compares and contrasts two systems designed for library automation applications--NOTIS, which was developed in the United States, and DOBIS, which was developed in Europe. The differences in the systems are discussed in terms of the availability or absence of machine readable bibliographic sharing capacities in the countries of origin. (CLB)

  6. 5 CFR 294.102 - General definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... the forms that such copies can take are paper, microform, audiovisual materials, or machine readable... that is necessary to excise them and otherwise prepare them for release. Review does not include time... the time spent looking for material that is responsive to a request, including page-by-page or line-by...

  7. Issues in Retrospective Conversion for a Small Special Collection: A Case Study.

    ERIC Educational Resources Information Center

    Hieb, Fern

    1997-01-01

    Small special collections present unique problems for retrospective conversion of catalogs to machine-readable form. Examines retrospective conversion using the Moravian Music Foundation as a case study. Discusses advantages to automation, options for conversion process, quantifying conversion effort, costs, in-house conversion, national standards…

  8. Planning for the Automation of School Library Media Centers.

    ERIC Educational Resources Information Center

    Caffarella, Edward P.

    1996-01-01

    Geared for school library media specialists whose centers are in the early stages of automation or conversion to a new system, this article focuses on major components of media center automation: circulation control; online public access catalogs; machine readable cataloging; retrospective conversion of print catalog cards; and computer networks…

  9. ARL Statement on Unlimited Use and Exchange of Bibliographic Records.

    ERIC Educational Resources Information Center

    Association of Research Libraries, Washington, DC.

    The Association of Research Libraries is fully committed to the principle of unrestricted access to and dissemination of ideas, i.e., member libraries must have unlimited access to the machine-readable bibliographic records which are created by member libraries and maintained in bibliographic utilities. Coordinated collection development programs…

  10. Retrospective Conversion: A Question of Time, Standards, and Purpose.

    ERIC Educational Resources Information Center

    Valentine, Phyllis A.; McDonald, David R.

    1986-01-01

    Examines the factors that determine the cost of retrospective conversion (definition of conversion, standards of acceptance, method of conversion, hit rate, standards for creation of machine-readable records for nonhits); reports results of cost study at University of Michigan library; and introduces an alternative strategy for discussion. Seven…

  11. Union Listing via OCLC's Serials Control Subsystem.

    ERIC Educational Resources Information Center

    O'Malley, Terrence J.

    1984-01-01

    Describes library use of Conversion of Serials Project's (CONSER) online national machine-readable database for serials to create online union lists of serials via OCLC's Serial Control Subsystem. Problems in selection of appropriate, accurate, and authenticated records and prospects for the future are discussed. Twenty sources and sample records…

  12. COMPENDEX/TEXT-PAC: RETROSPECTIVE SEARCH.

    ERIC Educational Resources Information Center

    Standera, Oldrich

    The Text-Pac System is capable of generating indexes and bulletins to provide a current information service without the selectivity feature. Indexes of the accumulated data base may also be used as a basis for manual retrospective searching. The manual search involves searching computer-prepared indexes from a machine readable data base produced…

  13. Provision of Information to the Research Staff.

    ERIC Educational Resources Information Center

    Williams, Martha E.

    The Information Sciences section at Illinois Institute of Technology Research Institute (IITRI) is now operating a Computer Search Center (CSC) for handling numerous machine-readable data bases. The computer programs are generalized in the sense that they will handle any incoming data base. This is accomplished by means of a preprocessor system…

  14. The Bibliographical Control of Early Books.

    ERIC Educational Resources Information Center

    Cameron, William J.

    Examples are given of the kinds of machine-readable data bases that should be developed in order to extend attempts at universal bibliographical control into neglected areas, the results of which can be used by researchers in the humanities, specifically those using books printed before 1801. The principles of bibliographical description,…

  15. The IHMC CmapTools software in research and education: a multi-level use case in Space Meteorology

    NASA Astrophysics Data System (ADS)

    Messerotti, Mauro

    2010-05-01

    The IHMC (Institute for Human and Machine Cognition, Florida University System, USA) CmapTools software is a powerful multi-platform tool for knowledge modelling in graphical form based on concept maps. In this work we present its application for the high-level development of a set of multi-level concept maps in the framework of Space Meteorology to act as the kernel of a space meteorology domain ontology. This is an example of a research use case, as a domain ontology coded in machine-readable form via e.g. OWL (Web Ontology Language) is suitable to be an active layer of any knowledge management system embedded in a Virtual Observatory (VO). Apart from being manageable at machine level, concept maps developed via CmapTools are intrinsically human-readable and can embed hyperlinks and objects of many kinds. Therefore they are suitable to be published on the web: the coded knowledge can be exploited for educational purposes by students and the public, as the level of information can be naturally organized among linked concept maps in progressively increasing complexity levels. Hence CmapTools and its advanced version COE (Concept-map Ontology Editor) represent effective and user-friendly software tools for high-level knowledge representation in research and education.

  16. Development of clinical contents model markup language for electronic health records.

    PubMed

    Yun, Ji-Hyun; Ahn, Sun-Ju; Kim, Yoon

    2012-09-01

    To develop dedicated markup language for clinical contents models (CCM) to facilitate the active use of CCM in electronic health record systems. Based on analysis of the structure and characteristics of CCM in the clinical domain, we designed extensible markup language (XML) based CCM markup language (CCML) schema manually. CCML faithfully reflects CCM in both the syntactic and semantic aspects. As this language is based on XML, it can be expressed and processed in computer systems and can be used in a technology-neutral way. CCML has the following strengths: it is machine-readable and highly human-readable, it does not require a dedicated parser, and it can be applied for existing electronic health record systems.
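
    Because CCML is plain XML, no dedicated parser is needed; any general-purpose XML library can read it. The following minimal Python sketch illustrates this point with a hypothetical CCML-like fragment; the element and attribute names are invented for illustration and do not reproduce the published CCML schema.

      import xml.etree.ElementTree as ET

      # Hypothetical CCML-like fragment; element and attribute names are
      # illustrative only, not the actual CCML schema.
      ccml_doc = """
      <clinicalContentModel id="ccm-blood-pressure">
        <name>Blood pressure measurement</name>
        <element code="systolic" dataType="quantity" unit="mmHg"/>
        <element code="diastolic" dataType="quantity" unit="mmHg"/>
      </clinicalContentModel>
      """

      root = ET.fromstring(ccml_doc)
      print(root.get("id"), "-", root.findtext("name"))
      for elem in root.findall("element"):
          # Standard XML tooling exposes the model structure directly.
          print(elem.get("code"), elem.get("dataType"), elem.get("unit"))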

  17. BacDive--The Bacterial Diversity Metadatabase in 2016.

    PubMed

    Söhngen, Carola; Podstawka, Adam; Bunk, Boyke; Gleim, Dorothea; Vetcininova, Anna; Reimer, Lorenz Christian; Ebeling, Christian; Pendarovski, Cezar; Overmann, Jörg

    2016-01-04

    BacDive-the Bacterial Diversity Metadatabase (http://bacdive.dsmz.de) provides strain-linked information about bacterial and archaeal biodiversity. The range of data encompasses taxonomy, morphology, physiology, sampling and concomitant environmental conditions as well as molecular biology. The majority of data is manually annotated and curated. Currently (with release 9/2015), BacDive covers 53 978 strains. Newly implemented RESTful web services provide instant access to the content in machine-readable XML and JSON format. Besides an overall increase of data content, BacDive offers new data fields and features, e.g. the search for gene names, plasmids or 16S rRNA in the advanced search, as well as improved linkage of entries to external life science web resources. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
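
    The RESTful access mentioned above can be exercised from any HTTP client. A minimal Python sketch using the requests library is shown below; the endpoint path, identifier, and response handling are assumptions made for illustration and should be checked against the actual BacDive API documentation.

      import requests

      # Hypothetical endpoint layout, for illustration only; consult the
      # BacDive documentation for the real API paths and fields.
      BASE_URL = "http://bacdive.dsmz.de/api/bacdive"

      def fetch_strain(strain_id: int) -> dict:
          """Request one strain record and decode the JSON body."""
          response = requests.get(
              f"{BASE_URL}/bacdive_id/{strain_id}/",
              headers={"Accept": "application/json"},
              timeout=30,
          )
          response.raise_for_status()
          return response.json()

      if __name__ == "__main__":
          record = fetch_strain(12345)   # hypothetical strain identifier
          print(sorted(record.keys()))   # inspect the returned sections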

  18. ELSA: An integrated, semi-automated nebular abundance package

    NASA Astrophysics Data System (ADS)

    Johnson, Matthew D.; Levitt, Jesse S.; Henry, Richard B. C.; Kwitter, Karen B.

    We present ELSA, a new modular software package, written in C, to analyze and manage spectroscopic data from emission-line objects. In addition to calculating plasma diagnostics and abundances from nebular emission lines, the software provides a number of convenient features including the ability to ingest logs produced by IRAF's splot task, to semi-automatically merge spectra in different wavelength ranges, and to automatically generate various data tables in machine-readable or LaTeX format. ELSA features a highly sophisticated interstellar reddening correction scheme that takes into account temperature and density effects as well as He II contamination of the hydrogen Balmer lines. Abundance calculations are performed using a 5-level atom approximation with recent atomic data, based on R. Henry's ABUN program. Downloads and detailed documentation for all aspects of ELSA are available at the following URL:
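
    For readers unfamiliar with the interstellar reddening correction mentioned above, the standard logarithmic form (a general expression, not necessarily ELSA's exact scheme, which additionally folds in temperature, density, and He II effects) relates observed fluxes F to dereddened intensities I through the extinction coefficient c(Hβ) and a reddening function f(λ) normalized so that f(Hβ) = 0:

      \[
        \frac{I(\lambda)}{I(\mathrm{H}\beta)}
          = \frac{F(\lambda)}{F(\mathrm{H}\beta)}\,
            10^{\,c(\mathrm{H}\beta)\,f(\lambda)} ,
        \qquad f(\mathrm{H}\beta) \equiv 0 .
      \]

    In practice c(Hβ) is usually fixed by forcing the observed Balmer decrement toward its theoretical recombination value.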

  19. How semantics can inform the geological mapping process and support intelligent queries

    NASA Astrophysics Data System (ADS)

    Lombardo, Vincenzo; Piana, Fabrizio; Mimmo, Dario

    2017-04-01

    The geologic mapping process requires the organization of data according to the general knowledge about the objects, namely the geologic units, and to the objectives of a graphic representation of such objects in a map, following an established model of geotectonic evolution. Semantics can greatly help such a process in two respects: on the one hand, the provision of a terminological base to name and classify the objects of the map; on the other, the implementation of a machine-readable encoding of the geologic knowledge base, which supports the application of reasoning mechanisms and the derivation of novel properties and relations about the objects of the map. The OntoGeonous initiative has built a terminological base of geological knowledge in a machine-readable format, following the Semantic Web tenets and the Linked Data paradigm. The major knowledge sources of the OntoGeonous initiative are GeoScience Markup Language schemata and vocabularies (through its latest version, GeoSciML 4, 2015, published by the IUGS CGI Commission) and the INSPIRE "Data Specification on Geology" directives (an operative simplification of GeoSciML, published by INSPIRE Thematic Working Group Geology of the European Commission). The Linked Data paradigm has been exploited by linking (without replicating, to avoid inconsistencies) the already existing machine-readable encodings for some specific domains, such as the lithology domain (vocabulary Simple Lithology) and the geochronologic time scale (ontology "gts"). Finally, for the upper level knowledge, shared across several geologic domains, we have resorted to the NASA SWEET ontology. The OntoGeonous initiative has also produced a wiki that explains how the geologic knowledge has been encoded from shared geoscience vocabularies (https://www.di.unito.it/wikigeo/). In particular, the sections dedicated to axiomatization will support the construction of an appropriate data base schema that can then be filled with the objects of the map. This contribution will discuss how the formal encoding of geological knowledge opens new perspectives for the analysis and representation of geological systems. In fact, once the major concepts are defined, the resulting formal conceptual model of the geologic system can hold across different technical and scientific communities. Furthermore, this would allow for a semi-automatic or automatic classification of the cartographic database, where a significant number of properties (attributes) of the recorded instances could be inferred through computational reasoning. So, for example, the system can be queried to show the instances that satisfy some property (e.g., "Retrieve all the lithostratigraphic units composed of clastic sedimentary rock") or to classify a unit according to the properties holding for that unit (e.g., "What is the class of the geologic unit composed of siltstone material?").
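
    The example queries quoted at the end of the abstract map naturally onto SPARQL over a machine-readable knowledge base. A minimal Python sketch with rdflib follows; the graph file, prefix, class names and property names are placeholders rather than the actual OntoGeonous vocabulary, which is documented at https://www.di.unito.it/wikigeo/.

      from rdflib import Graph

      # Placeholder file and IRIs, for illustration only.
      g = Graph()
      g.parse("ontogeonous_sample.ttl", format="turtle")  # hypothetical local copy

      query = """
      PREFIX geo: <http://example.org/ontogeonous#>
      SELECT ?unit WHERE {
        ?unit a geo:LithostratigraphicUnit ;
              geo:composedOf geo:ClasticSedimentaryRock .
      }
      """

      for row in g.query(query):
          # Each row is a geologic unit asserted or inferred to match the query.
          print(row.unit)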

  20. A proposed-standard format to represent and distribute tomographic models and other earth spatial data

    NASA Astrophysics Data System (ADS)

    Postpischl, L.; Morelli, A.; Danecek, P.

    2009-04-01

    Formats used to represent (and distribute) tomographic earth models differ considerably and are rarely self-consistent. In fact, each earth scientist, or research group, uses specific conventions to encode the various parameterizations used to describe, e.g., seismic wave speed or density in three dimensions, and complete information is often found only in related documents or publications (if available at all). As a consequence, use of various tomographic models from different authors requires considerable effort, is more cumbersome than it should be and prevents widespread exchange and circulation within the community. We propose a format, based on modern web standards, able to represent different (grid-based) model parameterizations within the same simple text-based environment, easy to write, to parse, and to visualise. The aim is the creation of self-describing data-structures, both human and machine readable, that are automatically recognised by general-purpose software agents, and easily imported into the scientific programming environment. We think that the adoption of such a representation as a standard for the exchange and distribution of earth models can greatly ease their usage and enhance their circulation, both among fellow seismologists and among a broader non-specialist community. The proposed solution uses semantic web technologies, fully fitting the current trends in data accessibility. It is based on JSON (JavaScript Object Notation), a plain-text, human-readable lightweight computer data interchange format, which adopts a hierarchical name-value model for representing simple data structures and associative arrays (called objects). Our implementation allows integration of large datasets with metadata (authors, affiliations, bibliographic references, units of measure etc.) into a single resource. It is equally suited to represent other geo-referenced volumetric quantities — beyond tomographic models — as well as (structured and unstructured) computational meshes. This approach can exploit the capabilities of the web browser as a computing platform: a series of in-page quick tools for comparative analysis between models will be presented, as well as visualisation techniques for tomographic layers in Google Maps and Google Earth. We are working on tools for conversion into common scientific formats like netCDF, to allow easy visualisation in GEON-IDV or gmt.
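
    As an illustration of the self-describing, name-value structure described above, the short Python sketch below assembles a toy grid-based model together with its metadata into a single JSON resource; the field names and values are invented for illustration and are not the proposed standard's actual keys.

      import json

      # Illustrative structure only; the keys below are not the proposed
      # standard's actual schema.
      model = {
          "metadata": {
              "authors": ["A. Seismologist"],
              "affiliation": "Example Institute",
              "reference": "doi:10.0000/example",
              "units": {"vs": "km/s", "depth": "km"},
          },
          "parameterization": {
              "type": "regular_grid",
              "latitude": [0.0, 1.0, 2.0],
              "longitude": [10.0, 11.0],
              "depth": [50.0, 100.0],
          },
          "values": {
              # vs[i][j][k] follows the latitude/longitude/depth axes above
              "vs": [[[3.2, 3.4], [3.1, 3.5]],
                     [[3.2, 3.4], [3.2, 3.5]],
                     [[3.1, 3.3], [3.2, 3.4]]],
          },
      }

      print(json.dumps(model, indent=2))  # human-readable and machine-parseable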

  1. Columba: an integrated database of proteins, structures, and annotations.

    PubMed

    Trissl, Silke; Rother, Kristian; Müller, Heiko; Steinke, Thomas; Koch, Ina; Preissner, Robert; Frömmel, Cornelius; Leser, Ulf

    2005-03-31

    Structural and functional research often requires the computation of sets of protein structures based on certain properties of the proteins, such as sequence features, fold classification, or functional annotation. Compiling such sets using current web resources is tedious because the necessary data are spread over many different databases. To facilitate this task, we have created COLUMBA, an integrated database of annotations of protein structures. COLUMBA currently integrates twelve different databases, including PDB, KEGG, Swiss-Prot, CATH, SCOP, the Gene Ontology, and ENZYME. The database can be searched using either keyword search or data source-specific web forms. Users can thus quickly select and download PDB entries that, for instance, participate in a particular pathway, are classified as containing a certain CATH architecture, are annotated as having a certain molecular function in the Gene Ontology, and whose structures have a resolution under a defined threshold. The results of queries are provided in both machine-readable extensible markup language (XML) and human-readable format. The structures themselves can be viewed interactively on the web. The COLUMBA database facilitates the creation of protein structure data sets for many structure-based studies. It allows users to combine queries on a number of structure-related databases not covered by other projects at present. Thus, information on both large and small sets of protein structures can be used efficiently. The web interface for COLUMBA is available at http://www.columba-db.de.

  2. Readability of product ingredient labels can be improved by simple means: an experimental study.

    PubMed

    Yazar, Kerem; Seimyr, Gustaf Ö; Novak, Jiri A; White, Ian R; Lidén, Carola

    2014-10-01

    Ingredient labels on products used by consumers and workers every day, such as food, cosmetics, and detergents, can be difficult to read and understand. To assess whether typographical design and ordering of ingredients can improve the readability of product ingredient labels. The study subjects (n = 16) had to search for two target ingredients in 30 cosmetic product labels and three alternative formats of each. Outcome measures were completion time (reading speed), recognition rate, eye movements, task load and subjective rating when the reading of ingredient labels was assessed by video recording, an eye tracking device, and questionnaires. The completion time was significantly lower (p < 0.001) when subjects were reading all alternative formats than when they were reading the original. The recognition rate was generally high, and improved slightly with the alternative formats. The eye movement measures confirmed that the alternative formats were easier to read than the original product labels. Mental and physical demand and effort were significantly lower (p < 0.036) and experience rating was higher (p < 0.042) for the alternative formats. There were also differences between the alternative formats. Simple adjustments in the design of product ingredient labels would significantly improve their readability, benefiting the many allergic individuals and others in their daily struggle to avoid harmful or unwanted exposure. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  3. 26 CFR 301.6721-1 - Failure to file correct information returns.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... file timely includes a failure to file in the required manner, for example, on magnetic media or in... they fall below the 250-threshold requirement) or on magnetic media or other machine-readable form. Filers who are required to file information returns on magnetic media and who file such information...

  4. PASCAL Data Base: File Description and On Line Access on ESA/IRS.

    ERIC Educational Resources Information Center

    Pelissier, Denise

    This report describes the PASCAL database, a machine readable version of the French abstract journal Bulletin Signaletique, which allows use of the file for (1) batch and online retrieval of information, (2) selective dissemination of information, and (3) publishing of the 50 sections of Bulletin Signaletique. The system, which covers nine…

  5. 26 CFR 301.6721-1 - Failure to file correct information returns.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... file timely includes a failure to file in the required manner, for example, on magnetic media or in... they fall below the 250-threshold requirement) or on magnetic media or other machine-readable form. Filers who are required to file information returns on magnetic media and who file such information...

  6. Automatic Selection of Suitable Sentences for Language Learning Exercises

    ERIC Educational Resources Information Center

    Pilán, Ildikó; Volodina, Elena; Johansson, Richard

    2013-01-01

    In our study we investigated second and foreign language (L2) sentence readability, an area little explored so far in the case of several languages, including Swedish. The outcome of our research consists of two methods for sentence selection from native language corpora based on Natural Language Processing (NLP) and machine learning (ML)…

  7. Microcomputer-Based Access to Machine-Readable Numeric Databases.

    ERIC Educational Resources Information Center

    Wenzel, Patrick

    1988-01-01

    Describes the use of microcomputers and relational database management systems to improve access to numeric databases by the Data and Program Library Service at the University of Wisconsin. The internal records management system, in-house reference tools, and plans to extend these tools to the entire campus are discussed. (3 references) (CLB)

  8. 29 CFR 1208.6 - Schedule of fees and methods of payment for services rendered.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... included in direct costs are overhead expenses such as costs of space and heating or lighting the facility... form of paper copy, microfilm, audiovisual materials, or machine readable documentation (e.g., magnetic... scholarly research. (7) Non-commercial scientific institution refers to an institution that is not operated...

  9. 29 CFR 1208.6 - Schedule of fees and methods of payment for services rendered.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... included in direct costs are overhead expenses such as costs of space and heating or lighting the facility... form of paper copy, microfilm, audiovisual materials, or machine readable documentation (e.g., magnetic... scholarly research. (7) Non-commercial scientific institution refers to an institution that is not operated...

  10. 45 CFR 704.1 - Material available pursuant to 5 U.S.C. 552.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    .... Not included in direct costs are overhead expenses such as costs of space and heating or lighting the... of records. Such copies can take the form of paper or machine readable documentation (e.g., magnetic... vocational education that operates a program or programs of scholarly research. (vii) Noncommercial...

  11. 29 CFR 1208.6 - Schedule of fees and methods of payment for services rendered.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... included in direct costs are overhead expenses such as costs of space and heating or lighting the facility... form of paper copy, microfilm, audiovisual materials, or machine readable documentation (e.g., magnetic... scholarly research. (7) Non-commercial scientific institution refers to an institution that is not operated...

  12. 45 CFR 704.1 - Material available pursuant to 5 U.S.C. 552.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    .... Not included in direct costs are overhead expenses such as costs of space and heating or lighting the... of records. Such copies can take the form of paper or machine readable documentation (e.g., magnetic... vocational education that operates a program or programs of scholarly research. (vii) Noncommercial...

  13. 45 CFR 704.1 - Material available pursuant to 5 U.S.C. 552.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    .... Not included in direct costs are overhead expenses such as costs of space and heating or lighting the... of records. Such copies can take the form of paper or machine readable documentation (e.g., magnetic... vocational education that operates a program or programs of scholarly research. (vii) Noncommercial...

  14. 45 CFR 704.1 - Material available pursuant to 5 U.S.C. 552.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    .... Not included in direct costs are overhead expenses such as costs of space and heating or lighting the... of records. Such copies can take the form of paper or machine readable documentation (e.g., magnetic... vocational education that operates a program or programs of scholarly research. (vii) Noncommercial...

  15. Nonbibliographic Machine-Readable Data Bases in ARL Libraries. SPEC Kit 105.

    ERIC Educational Resources Information Center

    Westerman, Mel

    This document is one of ten kits distributed annually by the Systems and Procedures Exchange Center (SPEC), a clearinghouse operated by the Association of Research Libraries, Office of Management Studies (ARL/OMS) that provides a central source of timely information and materials on the management and operations of large academic and research…

  16. Public Data Set: A Novel, Cost-Effective, Multi-Point Thomson Scattering System on the Pegasus Toroidal Experiment

    DOE Data Explorer

    Schlossberg, David J. [University of Wisconsin-Madison] (ORCID:0000000287139448); Bodner, Grant M. [University of Wisconsin-Madison] (ORCID:0000000324979172); Reusch, Joshua A. [University of Wisconsin-Madison] (ORCID:0000000284249422); Bongard, Michael W. [University of Wisconsin-Madison] (ORCID:0000000231609746); Fonck, Raymond J. [University of Wisconsin-Madison] (ORCID:0000000294386762); Rodriguez Sanchez, Cuauhtemoc [University of Wisconsin-Madison] (ORCID:0000000334712586)

    2016-09-16

    This public data set contains openly-documented, machine readable digital research data corresponding to figures published in D.J. Schlossberg et al., 'A Novel, Cost-Effective, Multi-Point Thomson Scattering System on the Pegasus Toroidal Experiment,' Rev. Sci. Instrum. 87, 11E403 (2016).

  17. Public Data Set: High Confinement Mode and Edge Localized Mode Characteristics in a Near-Unity Aspect Ratio Tokamak

    DOE Data Explorer

    Thome, Kathreen E. [University of Wisconsin-Madison] (ORCID:0000000248013922); Bongard, Michael W. [University of Wisconsin-Madison] (ORCID:0000000231609746); Barr, Jayson L. [University of Wisconsin-Madison] (ORCID:0000000177685931); Bodner, Grant M. [University of Wisconsin-Madison] (ORCID:0000000324979172); Burke, Marcus G. [University of Wisconsin-Madison] (ORCID:0000000176193724); Fonck, Raymond J. [University of Wisconsin-Madison] (ORCID:0000000294386762); Kriete, David M. [University of Wisconsin-Madison] (ORCID:0000000236572911); Perry, Justin M. [University of Wisconsin-Madison] (ORCID:0000000171228609); Schlossberg, David J. [University of Wisconsin-Madison] (ORCID:0000000287139448)

    2016-04-27

    This data set contains openly-documented, machine readable digital research data corresponding to figures published in K.E. Thome et al., 'High Confinement Mode and Edge Localized Mode Characteristics in a Near-Unity Aspect Ratio Tokamak,' Phys. Rev. Lett. 116, 175001 (2016).

  18. Functional and Software Considerations for Bibliographic Data Base Utilization.

    ERIC Educational Resources Information Center

    Cadwallader, Gouverneur

    This is the fourth in a series of eight reports of a research study for the National Agricultural Library (NAL) on the effective utilization of bibliographic data bases in machine-readable form. It describes the general functional and software requirements of an NAL system using external sources of bibliographic data. Various system design…

  19. 26 CFR 301.6721-1 - Failure to file correct information returns.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... file timely includes a failure to file in the required manner, for example, on magnetic media or in... they fall below the 250-threshold requirement) or on magnetic media or other machine-readable form. Filers who are required to file information returns on magnetic media and who file such information...

  20. 26 CFR 301.6721-1 - Failure to file correct information returns.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... file timely includes a failure to file in the required manner, for example, on magnetic media or in... they fall below the 250-threshold requirement) or on magnetic media or other machine-readable form. Filers who are required to file information returns on magnetic media and who file such information...

  1. Inventory of U.S. Health Care Data Bases, 1976-1987.

    ERIC Educational Resources Information Center

    Kralovec, Peter D.; Andes, Steven M.

    This inventory contains summary abstracts of 305 current (1976-1987) non-bibliographic machine-readable databases and national health care data that have been created by public and private organizations throughout the United States. Each of the abstracts contains pertinent information on the sponsor or database, a description of the purpose and…

  2. 28 CFR 51.20 - Form of submissions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... set. A separate data dictionary file documenting the fields in the data set, the field separators or... data set. Proprietary or commercial software system data files (e.g., SAS, SPSS, dBase, Lotus 1-2-3... General will accept certain machine readable data in the following electronic media: 3.5 inch 1.4 megabyte...

  3. 28 CFR 51.20 - Form of submissions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... set. A separate data dictionary file documenting the fields in the data set, the field separators or... data set. Proprietary or commercial software system data files (e.g., SAS, SPSS, dBase, Lotus 1-2-3... General will accept certain machine readable data in the following electronic media: 3.5 inch 1.4 megabyte...

  4. 28 CFR 51.20 - Form of submissions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... set. A separate data dictionary file documenting the fields in the data set, the field separators or... data set. Proprietary or commercial software system data files (e.g., SAS, SPSS, dBase, Lotus 1-2-3... General will accept certain machine readable data in the following electronic media: 3.5 inch 1.4 megabyte...

  5. Demonstration of Cataloging Support Services and Marc II Conversion. Final Report.

    ERIC Educational Resources Information Center

    Buckland, Lawrence F.; And Others

    Beginning in December, 1967, the New England Library Information Network (NELINET) was demonstrated in actual operation using Machine-Readable Cataloging (MARC I) bibliographic data. Section 1 of this report is an introduction and summary of the project. Section 2 describes the library processing functions demonstrated, which included catalog card…

  6. Corpus Linguistics for Korean Language Learning and Teaching. NFLRC Technical Report No. 26

    ERIC Educational Resources Information Center

    Bley-Vroman, Robert, Ed.; Ko, Hyunsook, Ed.

    2006-01-01

    Dramatic advances in personal computer technology have given language teachers access to vast quantities of machine-readable text, which can be analyzed with a view toward improving the basis of language instruction. Corpus linguistics provides analytic techniques and practical tools for studying language in use. This volume includes both an…

  7. Retrospective Conversion at a Two-Year College.

    ERIC Educational Resources Information Center

    Krieger, Michael T.

    1982-01-01

    Findings of a project to convert a single LC class from cards to machine readable tapes at a two-year college suggest that an in-house retrospective conversion is feasible for academic libraries. A high conversion hit rate, implying minimal original cataloging, will keep project costs and duration low. There are five references. (RAA)

  8. Public Data Set: Initiation and Sustainment of Tokamak Plasmas with Local Helicity Injection as the Majority Current Drive

    DOE Data Explorer

    Perry, Justin M. [University of Wisconsin-Madison] (ORCID:0000000171228609); Bodner, Grant M. [University of Wisconsin-Madison] (ORCID:0000000324979172); Bongard, Michael W. [University of Wisconsin-Madison] (ORCID:0000000231609746); Burke, Marcus G. [University of Wisconsin-Madison] (ORCID:0000000176193724); Fonck, Raymond J. [University of Wisconsin-Madison] (ORCID:0000000294386762); Pachicano, Jessica L. [University of Wisconsin-Madison] (ORCID:0000000207255693); Pierren, Christopher [University of Wisconsin-Madison] (ORCID:0000000228289825); Reusch, Joshua A. [University of Wisconsin-Madison] (ORCID:0000000284249422); Rhodes, Alexander T. [University of Wisconsin-Madison] (ORCID:0000000280735714); Richner, Nathan J. [University of Wisconsin-Madison] (ORCID:0000000155443915); Rodriguez Sanchez, Cuauhtemoc [University of Wisconsin-Madison] (ORCID:0000000334712586); Schaefer, Carolyn E. [University of Wisconsin-Madison] (ORCID:0000000248848727); Weberski, Justin D. [University of Wisconsin-Madison] (ORCID:0000000256267914)

    2018-05-22

    This public data set contains openly-documented, machine readable digital research data corresponding to figures published in J.M. Perry et al., 'Initiation and Sustainment of Tokamak Plasmas with Local Helicity Injection as the Majority Current Drive,' accepted for publication in Nuclear Fusion.

  9. Vatican Library Automates for the 21st Century.

    ERIC Educational Resources Information Center

    Carter, Thomas L.

    1994-01-01

    Because of space and staff constraints, the Vatican Library can issue only 2,000 reader cards a year. Describes IBM's Vatican Library Project: converting the library catalog records (prior to 1985) into machine readable form and digitally scanning 20,000 manuscript pages, print pages, and art works in gray scale and color, creating a database…

  10. Public Data Set: Non-inductively Driven Tokamak Plasmas at Near-Unity βt in the Pegasus Toroidal Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reusch, Joshua A.; Bodner, Grant M.; Bongard, Michael W.

    This public data set contains openly-documented, machine readable digital research data corresponding to figures published in J.A. Reusch et al., 'Non-inductively Driven Tokamak Plasmas at Near-Unity βt in the Pegasus Toroidal Experiment,' Phys. Plasmas 25, 056101 (2018).

  11. 19 CFR 163.5 - Methods for storage of records.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... standard business practice for storage of records include, but are not limited to, machine readable data... 19 Customs Duties 2 2012-04-01 2012-04-01 false Methods for storage of records. 163.5 Section 163... THE TREASURY (CONTINUED) RECORDKEEPING § 163.5 Methods for storage of records. (a) Original records...

  12. 19 CFR 163.5 - Methods for storage of records.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... standard business practice for storage of records include, but are not limited to, machine readable data... 19 Customs Duties 2 2011-04-01 2011-04-01 false Methods for storage of records. 163.5 Section 163... THE TREASURY (CONTINUED) RECORDKEEPING § 163.5 Methods for storage of records. (a) Original records...

  13. 78 FR 28111 - Making Open and Machine Readable the New Default for Government Information

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-14

    ... warning systems, location-based applications, precision farming tools, and much more, improving Americans... repository of tools and best practices to assist agencies in integrating the Open Data Policy into their... needed to ensure it remains a resource to facilitate the adoption of open data practices. (b) Within 90...

  14. 3 CFR 13642 - Executive Order 13642 of May 9, 2013. Making Open and Machine Readable the New Default for...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., innovation, and scientific discovery that improves Americans' lives and contributes significantly to job... tools, and much more, improving Americans' lives in countless ways and leading to economic growth and... as an asset throughout its life cycle to promote interoperability and openness, and, wherever...

  15. Documentation for the Machine-readable Version of the 0.2-A Resolution Far-ultraviolet Stellar Spectra Measured with COPERNICUS

    NASA Technical Reports Server (NTRS)

    Sheridan, W. T.; Warren, W. H., Jr.

    1981-01-01

    The spectra described represent a subset comprising data for 60 O- and B-type stars. The tape contains data in the spectral region λλ1000-1450 Å with a resolution of 0.2 Å. The magnetic tape version of the data is described.

  16. 26 CFR 301.6721-1 - Failure to file correct information returns.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... file timely includes a failure to file in the required manner, for example, on magnetic media or in... they fall below the 250-threshold requirement) or on magnetic media or other machine-readable form. Filers who are required to file information returns on magnetic media and who file such information...

  17. Reference Manual for Machine-Readable Bibliographic Descriptions.

    ERIC Educational Resources Information Center

    Martin, M. D., Comp.

    UNESCO, in cooperation with several other organizations, has produced a manual, the scope and purpose of which has been to define, for most types of scientific and technical literature, a set of data elements which will constitute an adequate bibliographic citation, and to define the representation of these data elements as they should appear in a…

  18. VizieR Online Data Catalog: Parenago Catalog of Stars in Orion Nebula (Parenago 1954)

    NASA Astrophysics Data System (ADS)

    Parenago, P. P.

    1997-10-01

    The present catalogue is a machine-readable version of the catalogue of stars in the area of the Orion nebula, published by P.P. Parenago (1954). The sky area between 5h 24m and 5h 36m in right ascension (1900.0) and between -4 and -7 degrees in declination (1900.0), containing the Orion nebula, was investigated in that work. Ten variable stars in the original Parenago (1954) catalogue had CSV numbers (Kukarkin et al., 1951), but since that time all of them have been confirmed as variables and included in the GCVS (Kholopov et al., 1985a&b, 1987). We superseded the CSV numbers by GCVS names in the machine-readable version for the following stars:

      Number in the catalogue   CSV number   GCVS name
      -----------------------   ----------   ---------
      1605                      606          V372 ORI
      1613                      607          V373 ORI
      1635                      608          V374 ORI
      1713                      609          V375 ORI
      1748                      610          V387 ORI
      1762                      100569       V376 ORI
      1974                      617          V377 ORI
      2183                      625          V388 ORI
      2393                      630          V380 ORI
      2478                      634          V381 ORI

    (1 data file).

  19. Word add-in for ontology recognition: semantic enrichment of scientific literature.

    PubMed

    Fink, J Lynn; Fernicola, Pablo; Chandran, Rahul; Parastatidis, Savas; Wade, Alex; Naim, Oscar; Quinn, Gregory B; Bourne, Philip E

    2010-02-24

    In the current era of scientific research, efficient communication of information is paramount. As such, the nature of scholarly and scientific communication is changing; cyberinfrastructure is now absolutely necessary and new media are allowing information and knowledge to be more interactive and immediate. One approach to making knowledge more accessible is the addition of machine-readable semantic data to scholarly articles. The Word add-in presented here will assist authors in this effort by automatically recognizing and highlighting words or phrases that are likely information-rich, allowing authors to associate semantic data with those words or phrases, and to embed that data in the document as XML. The add-in and source code are publicly available at http://www.codeplex.com/UCSDBioLit. The Word add-in for ontology term recognition makes it possible for an author to add semantic data to a document as it is being written and it encodes these data using XML tags that are effectively a standard in life sciences literature. Allowing authors to mark-up their own work will help increase the amount and quality of machine-readable literature metadata.
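
    To make the idea of embedding term annotations as XML concrete, the small Python sketch below wraps a recognized phrase in an annotation element that carries an ontology identifier; the element and attribute names are invented for illustration and are not the add-in's actual mark-up.

      import xml.etree.ElementTree as ET

      def annotate(phrase: str, ontology_id: str) -> str:
          """Wrap a phrase in a hypothetical semantic annotation element."""
          elem = ET.Element("semanticAnnotation", attrib={"term": ontology_id})
          elem.text = phrase
          return ET.tostring(elem, encoding="unicode")

      # e.g. tagging a Gene Ontology concept mentioned in a sentence
      print(annotate("apoptosis", "GO:0006915"))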

  20. Applying Semantic Web technologies to improve the retrieval, credibility and use of health-related web resources.

    PubMed

    Mayer, Miguel A; Karampiperis, Pythagoras; Kukurikos, Antonis; Karkaletsis, Vangelis; Stamatakis, Kostas; Villarroel, Dagmar; Leis, Angela

    2011-06-01

    The number of health-related websites is increasing day-by-day; however, their quality is variable and difficult to assess. Various "trust marks" and filtering portals have been created in order to assist consumers in retrieving quality medical information. Consumers are using search engines as the main tool to get health information; however, the major problem is that the meaning of the web content is not machine-readable in the sense that computers cannot understand words and sentences as humans can. In addition, trust marks are invisible to search engines, thus limiting their usefulness in practice. During the last five years there have been different attempts to use Semantic Web tools to label health-related web resources to help internet users identify trustworthy resources. This paper discusses how Semantic Web technologies can be applied in practice to generate machine-readable labels and display their content, as well as to empower end-users by providing them with the infrastructure for expressing and sharing their opinions on the quality of health-related web resources.

  1. The semantics of Chemical Markup Language (CML): dictionaries and conventions.

    PubMed

    Murray-Rust, Peter; Townsend, Joe A; Adams, Sam E; Phadungsukanan, Weerapong; Thomas, Jens

    2011-10-14

    The semantic architecture of CML consists of conventions, dictionaries and units. The conventions conform to a top-level specification and each convention can constrain compliant documents through machine-processing (validation). Dictionaries conform to a dictionary specification which also imposes machine validation on the dictionaries. Each dictionary can also be used to validate data in a CML document, and provide human-readable descriptions. An additional set of conventions and dictionaries are used to support scientific units. All conventions, dictionaries and dictionary elements are identifiable and addressable through unique URIs.
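
    A minimal sketch of the dictionary-based validation idea, in Python: every data item must reference a known, addressable dictionary entry by URI and use the units that entry prescribes. The dictionary contents and attribute names below are invented for illustration and are not taken from the actual CML dictionaries.

      # Invented dictionary contents, for illustration only.
      DICTIONARY = {
          "http://example.org/cml/dictionary#meltingPoint": {
              "description": "Melting point of a substance",
              "units": "http://example.org/cml/units#K",
          },
      }

      def validate(entry: dict) -> bool:
          """Accept an entry only if its dictRef resolves and its units match."""
          term = DICTIONARY.get(entry.get("dictRef"))
          return term is not None and entry.get("units") == term["units"]

      print(validate({
          "dictRef": "http://example.org/cml/dictionary#meltingPoint",
          "units": "http://example.org/cml/units#K",
          "value": 273.15,
      }))  # True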

  2. The semantics of Chemical Markup Language (CML): dictionaries and conventions

    PubMed Central

    2011-01-01

    The semantic architecture of CML consists of conventions, dictionaries and units. The conventions conform to a top-level specification and each convention can constrain compliant documents through machine-processing (validation). Dictionaries conform to a dictionary specification which also imposes machine validation on the dictionaries. Each dictionary can also be used to validate data in a CML document, and provide human-readable descriptions. An additional set of conventions and dictionaries are used to support scientific units. All conventions, dictionaries and dictionary elements are identifiable and addressable through unique URIs. PMID:21999509

  3. PREFACE: Anti-counterfeit Image Analysis Methods (A Special Session of ICSXII)

    NASA Astrophysics Data System (ADS)

    Javidi, B.; Fournel, T.

    2007-06-01

    The International Congress for Stereology is dedicated to theoretical and applied aspects of stochastic tools, image analysis and mathematical morphology. A special emphasis on 'anti-counterfeit image analysis methods' has been given this year for the XIIth edition (ICSXII). Facing the economic and social threat of counterfeiting, this devoted session presents recent advances and original solutions in the field. A first group of methods is related to marks located either on the product (physical marks) or on the data (hidden information) to be protected. These methods concern laser fs 3D encoding and source separation for machine-readable identification, moiré and 'guilloche' engraving for visual verification, and watermarking. Machine-readable travel documents are well-suited examples introducing the second group of methods, which are related to cryptography. Used in passports for data authentication and identification (of people), cryptography provides some powerful tools. Opto-digital processing allows some efficient implementations described in the papers and promising applications. We would like to thank the reviewers who have contributed to a session of high quality, and the authors for their fine and hard work. We would like to address some special thanks to the invited lecturers, namely Professor Roger Hersch and Dr Isaac Amidror for their survey of moiré methods, Prof. Serge Vaudenay for his survey of existing protocols concerning machine-readable travel documents, and Dr Elisabet Pérez-Cabré for her presentation on optical encryption for multifactor authentication. We also thank Professor Dominique Jeulin, President of the International Society for Stereology, Professor Michel Jourlin, President of the organizing committee of ICSXII, for their help and advice, and Mr Graham Douglas, the Publisher of Journal of Physics: Conference Series at IOP Publishing, for his efficiency. We hope that this collection of papers will be useful as a tool to further develop a very important field. Bahram Javidi (University of Connecticut, USA) and Thierry Fournel (University of Saint-Etienne, France), Chairs of the special session on 'Anti-counterfeit image analysis methods', July 2007

  4. Annotation of rule-based models with formal semantics to enable creation, analysis, reuse and visualization.

    PubMed

    Misirli, Goksel; Cavaliere, Matteo; Waites, William; Pocock, Matthew; Madsen, Curtis; Gilfellon, Owen; Honorato-Zimmer, Ricardo; Zuliani, Paolo; Danos, Vincent; Wipat, Anil

    2016-03-15

    Biological systems are complex and challenging to model and therefore model reuse is highly desirable. To promote model reuse, models should include both information about the specifics of simulations and the underlying biology in the form of metadata. The availability of computationally tractable metadata is especially important for the effective automated interpretation and processing of models. Metadata are typically represented as machine-readable annotations which enhance programmatic access to information about models. Rule-based languages have emerged as a modelling framework to represent the complexity of biological systems. Annotation approaches have been widely used for reaction-based formalisms such as SBML. However, rule-based languages still lack a rich annotation framework to add semantic information, such as machine-readable descriptions, to the components of a model. We present an annotation framework and guidelines for annotating rule-based models, encoded in the commonly used Kappa and BioNetGen languages. We adapt widely adopted annotation approaches to rule-based models. We initially propose a syntax to store machine-readable annotations and describe a mapping between rule-based modelling entities, such as agents and rules, and their annotations. We then describe an ontology to both annotate these models and capture the information contained therein, and demonstrate annotating these models using examples. Finally, we present a proof of concept tool for extracting annotations from a model that can be queried and analyzed in a uniform way. The uniform representation of the annotations can be used to facilitate the creation, analysis, reuse and visualization of rule-based models. Although the examples are given using specific implementations, the proposed techniques can be applied to rule-based models in general. The annotation ontology for rule-based models can be found at http://purl.org/rbm/rbmo. The krdf tool and associated executable examples are available at http://purl.org/rbm/rbmo/krdf. Contact: anil.wipat@newcastle.ac.uk or vdanos@inf.ed.ac.uk. © The Author 2015. Published by Oxford University Press.
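
    As a hedged sketch of what attaching machine-readable annotations to a rule-based model entity might look like in RDF, the Python fragment below uses rdflib; the property names in the rbmo namespace are placeholders and not necessarily the terms actually defined at http://purl.org/rbm/rbmo.

      from rdflib import Graph, Literal, Namespace, URIRef

      # Placeholder namespaces and terms, for illustration only.
      RBMO = Namespace("http://purl.org/rbm/rbmo#")
      MODEL = Namespace("http://example.org/model#")

      g = Graph()
      agent = MODEL["EGFR"]                 # a Kappa/BioNetGen agent
      g.add((agent, RBMO["hasAnnotation"],  # hypothetical property name
             URIRef("http://identifiers.org/uniprot/P00533")))
      g.add((agent, RBMO["description"],
             Literal("Epidermal growth factor receptor agent")))

      print(g.serialize(format="turtle"))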

  5. Development of Clinical Contents Model Markup Language for Electronic Health Records

    PubMed Central

    Yun, Ji-Hyun; Kim, Yoon

    2012-01-01

    Objectives To develop dedicated markup language for clinical contents models (CCM) to facilitate the active use of CCM in electronic health record systems. Methods Based on analysis of the structure and characteristics of CCM in the clinical domain, we designed extensible markup language (XML) based CCM markup language (CCML) schema manually. Results CCML faithfully reflects CCM in both the syntactic and semantic aspects. As this language is based on XML, it can be expressed and processed in computer systems and can be used in a technology-neutral way. Conclusions CCML has the following strengths: it is machine-readable and highly human-readable, it does not require a dedicated parser, and it can be applied for existing electronic health record systems. PMID:23115739

  6. Simplifying the Reuse and Interoperability of Geoscience Data Sets and Models with Semantic Metadata that is Human-Readable and Machine-actionable

    NASA Astrophysics Data System (ADS)

    Peckham, S. D.

    2017-12-01

    Standardized, deep descriptions of digital resources (e.g. data sets, computational models, software tools and publications) make it possible to develop user-friendly software systems that assist scientists with the discovery and appropriate use of these resources. Semantic metadata makes it possible for machines to take actions on behalf of humans, such as automatically identifying the resources needed to solve a given problem, retrieving them and then automatically connecting them (despite their heterogeneity) into a functioning workflow. Standardized model metadata also helps model users to understand the important details that underpin computational models and to compare the capabilities of different models. These details include simplifying assumptions on the physics, governing equations and the numerical methods used to solve them, discretization of space (the grid) and time (the time-stepping scheme), state variables (input or output), and model configuration parameters. This kind of metadata provides a "deep description" of a computational model that goes well beyond other types of metadata (e.g. author, purpose, scientific domain, programming language, digital rights, provenance, execution) and captures the science that underpins a model. A carefully constructed, unambiguous, rules-based schema to address this problem, called the Geoscience Standard Names ontology, will be presented; it utilizes Semantic Web best practices and technologies. It has also been designed to work across science domains and to be readable by both humans and machines.
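
    To make the notion of a "deep description" concrete, the sketch below records such metadata as a plain Python dictionary that a workflow tool could match against data sets; the keys and variable names are illustrative assumptions, not the Geoscience Standard Names ontology itself.

      # Illustrative metadata record; keys and names are assumptions.
      model_metadata = {
          "model_name": "ExampleHydroModel",
          "governing_equations": ["mass conservation", "Darcy's law"],
          "numerical_method": "explicit finite difference",
          "grid": {"type": "uniform rectilinear", "spacing_m": [100.0, 100.0]},
          "time_step_s": 3600.0,
          "input_variables": ["atmosphere_water__precipitation_volume_flux"],
          "output_variables": ["land_surface_water__runoff_volume_flux"],
          "assumptions": ["saturated flow only"],
      }

      # A workflow tool can pair models with data by comparing variable names.
      needed = "atmosphere_water__precipitation_volume_flux"
      print(needed in model_metadata["input_variables"])  # True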

  7. Educational and Commercial Utilization of a Chemical Information Center, Four Year Summary.

    ERIC Educational Resources Information Center

    Williams, Martha E.; And Others

    The major objective of the IITRI Computer Search Center is to educate and link industry, academia, and government institutions to chemical and other scientific information systems and sources. The Center was developed to meet this objective and is in full operation providing services to users from a variety of machine-readable data bases with…

  8. Collection Development Analysis Using OCLC Archival Tapes. Final Report.

    ERIC Educational Resources Information Center

    Evans, Glyn T.; And Others

    The purpose of this project is to develop a set of computer programs to perform a variety of collection development analyses on the machine-readable cataloging (MARC) records that are produced as a byproduct of use of the online cataloging subsystem of the Ohio College Library System (OCLC), and made available through the OCLC Distribution Tape…

  9. Intersystem Compatibility and Convertibility of Subject Vocabularies.

    ERIC Educational Resources Information Center

    Wall, E.; Barnes, J.

    This is the fifth in a series of eight reports of a research study for the National Agricultural Library (NAL) on the effective utilization of bibliographic data bases in machine readable form. NAL desires ultimately to develop techniques of interacting with other data bases so that queries put to NAL may be answered with documents or document…

  10. Four-Year Summary, Educational and Commercial Utilization of a Chemical Information Center. Part I.

    ERIC Educational Resources Information Center

    Schipma, Peter B., Ed.

    The major objective of the Illinois Institute of Technology (IIT) Computer Search Center (CSC) is to educate and link industry, academia, and government institutions to chemical and other scientific information systems and sources. The CSC is in full operation providing services to users from a variety of machine-readable data bases with minimal…

  11. An Evaluation of Implementing Koha in a Chinese Language Environment

    ERIC Educational Resources Information Center

    Chang, Naicheng; Tsai, Yuchin; Hopkinson, Alan

    2010-01-01

    Purpose: The purpose of this paper is to evaluate issues of different scripts in the same record (in MARC21 and Chinese machine-readable cataloguing (CMARC)) and Chinese internal codes (i.e. double-byte character set) when implementing Koha. It also discusses successful efforts in promoting the adoption of Koha in Taiwan, particularly the…

  12. Adolescent Fertility: National File [Machine-Readable Data File].

    ERIC Educational Resources Information Center

    Moore, Kristin A.; And Others

    This computer file contains recent cross sectional data on adolescent fertility in the United States for 1960, 1965, 1970, 1975 and 1980-85. The following variables are included: (1) births; (2) birth rates; (3) abortions; (4) non-marital childbearing; (5) infant mortality; and (6) low birth weight. Data for both teenagers and women aged 20-24 are…

  13. Adolescent Fertility: State File [Machine-Readable Data File].

    ERIC Educational Resources Information Center

    Moore, Kristin A.; And Others

    This computer file contains recent cross sectional data on adolescent fertility by state for 1960, 1965, 1970, 1975 and 1980-85. The following variables are included: (1) births; (2) birth rates; (3) abortions; (4) non-marital childbearing; (5) infant mortality; and (6) low birth weight. Data for both teenagers and women aged 20-24 years are…

  14. Experiences Using an Open Source Software Library to Teach Computer Vision Subjects

    ERIC Educational Resources Information Center

    Cazorla, Miguel; Viejo, Diego

    2015-01-01

    Machine vision is an important subject in computer science and engineering degrees. For laboratory experimentation, it is desirable to have a complete and easy-to-use tool. In this work we present a Java library, oriented to teaching computer vision. We have designed and built the library from scratch with emphasis on readability and…

  15. High School and Beyond. 1980 Sophomore Cohort. First Follow-Up (1982). [machine-readable data file].

    ERIC Educational Resources Information Center

    National Center for Education Statistics (ED), Washington, DC.

    The High School and Beyond 1980 Sophomore Cohort First Follow-Up (1982) data file is presented. The First Follow-Up Sophomore Cohort data tape consists of four related data files: (1) the student data file (including data availability flags, weights, questionnaire data, and composite variables); (2) Statistical Analysis System (SAS) control cards…

  16. Public Data Set: Continuous, Edge Localized Ion Heating During Non-Solenoidal Plasma Startup and Sustainment in a Low Aspect Ratio Tokamak

    DOE Data Explorer

    Burke, Marcus G. [University of Wisconsin-Madison] (ORCID:0000000176193724); Barr, Jayson L. [University of Wisconsin-Madison] (ORCID:0000000177685931); Bongard, Michael W. [University of Wisconsin-Madison] (ORCID:0000000231609746); Fonck, Raymond J. [University of Wisconsin-Madison] (ORCID:0000000294386762); Hinson, Edward T. [University of Wisconsin-Madison] (ORCID:000000019713140X); Perry, Justin M. [University of Wisconsin-Madison] (ORCID:0000000171228609); Reusch, Joshua A. [University of Wisconsin-Madison] (ORCID:0000000284249422); Schlossberg, David J. [University of Wisconsin-Madison] (ORCID:0000000287139448)

    2017-05-16

    This public data set contains openly-documented, machine readable digital research data corresponding to figures published in M.G. Burke et al., 'Continuous, Edge Localized Ion Heating During Non-Solenoidal Plasma Startup and Sustainment in a Low Aspect Ratio Tokamak,' Nucl. Fusion 57, 076010 (2017).

  17. Handling of Varied Data Bases in an Information Center Environment.

    ERIC Educational Resources Information Center

    Williams, Martha E.

    Information centers exist to provide information from machine-readable data bases to users in industry, universities and other organizations. The computer Search Center of the IIT Research Institute was designed with a number of variables and uncertainties before it. In this paper, the author discusses how the Center was designed to enable it to…

  18. Merged Federal Files [Academic Year] 1978-79 [machine-readable data file].

    ERIC Educational Resources Information Center

    National Center for Education Statistics (ED), Washington, DC.

    The Merged Federal File for 1978-79 contains school district level data from the following six source files: (1) the Census of Governments' Survey of Local Government Finances--School Systems (F-33) (with 16,343 records merged); (2) the National Center for Education Statistics Survey of School Systems (School District Universe) (with 16,743…

  19. NaturalReader: A New Generation Text Reader

    ERIC Educational Resources Information Center

    Flood, Jacqueline

    2007-01-01

    NaturalReader (http://www.naturalreaders.com/) is a new generation text reader, which means that it reads any machine readable text using synthesized speech without having to copy and paste the selected text into the NaturalReader application window. It installs a toolbar directly into all of the Microsoft Office[TM] programs and uses a mini-board…

  20. Graduate student theses supported by DOE's Environmental Sciences Division

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cushman, Robert M.; Parra, Bobbi M.

    1995-07-01

    This report provides complete bibliographic citations, abstracts, and keywords for 212 doctoral and master's theses supported fully or partly by the U.S. Department of Energy's Environmental Sciences Division (and its predecessors) in the following areas: Atmospheric Sciences; Marine Transport; Terrestrial Transport; Ecosystems Function and Response; Carbon, Climate, and Vegetation; Information; Computer Hardware, Advanced Mathematics, and Model Physics (CHAMMP); Atmospheric Radiation Measurement (ARM); Oceans; National Institute for Global Environmental Change (NIGEC); Unmanned Aerial Vehicles (UAV); Integrated Assessment; Graduate Fellowships for Global Change; and Quantitative Links. Information on the major professor, department, principal investigator, and program area is given for each abstract. Indexes are provided for major professor, university, principal investigator, program area, and keywords. This bibliography is also available in various machine-readable formats (ASCII text file, WordPerfect® files, and PAPYRUS™ files).

  1. Knowledge Extraction and Semantic Annotation of Text from the Encyclopedia of Life

    PubMed Central

    Thessen, Anne E.; Parr, Cynthia Sims

    2014-01-01

    Numerous digitization and ontological initiatives have focused on translating biological knowledge from narrative text to machine-readable formats. In this paper, we describe two workflows for knowledge extraction and semantic annotation of text data objects featured in an online biodiversity aggregator, the Encyclopedia of Life. One workflow tags text with DBpedia URIs based on keywords. Another workflow finds taxon names in text using GNRD for the purpose of building a species association network. Both workflows work well: the annotation workflow has an F1 Score of 0.941 and the association algorithm has an F1 Score of 0.885. Existing text annotators such as Terminizer and DBpedia Spotlight performed well, but require some optimization to be useful in the ecology and evolution domain. Important future work includes scaling up and improving accuracy through the use of distributional semantics. PMID:24594988
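
    For reference, the F1 scores quoted above are the harmonic mean of precision and recall, F1 = 2PR/(P + R). The precision and recall values in the short check below are made up to illustrate the arithmetic and are not taken from the paper.

      def f1(precision: float, recall: float) -> float:
          """Harmonic mean of precision and recall."""
          return 2 * precision * recall / (precision + recall)

      # Illustrative values only: a workflow with P = 0.95 and R = 0.93
      # scores close to the annotation workflow's reported 0.941.
      print(round(f1(0.95, 0.93), 3))  # 0.940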

  2. Ontology-aided Data Fusion (Invited)

    NASA Astrophysics Data System (ADS)

    Raskin, R.

    2009-12-01

    An ontology provides semantic descriptions that are analogous to those in a dictionary, but are readable by both computers and humans. A dataset or service is semantically annotated when it is formally associated with elements of an ontology. The ESIP Federation Semantic Web Cluster has developed a set of ontologies to describe datatypes and data services that can be used to support automated data fusion. The service ontology includes descriptors of the service function, its inputs/outputs, and its invocation method. The datatype descriptors resemble typical metadata fields (data format, data model, data structure, originator, etc.) augmented with descriptions of the meaning of the data. These ontologies, in combination with the SWEET science ontology, enable registered data fusion services to be chained together and implemented in a scientifically meaningful way, based on machine understanding of the associated data and services. This presentation describes initial results and experiences in automated data fusion.

  3. JSD: Parallel Job Accounting on the IBM SP2

    NASA Technical Reports Server (NTRS)

    Saphir, William; Jones, James Patton; Walter, Howard (Technical Monitor)

    1995-01-01

    The IBM SP2 is one of the most promising parallel computers for scientific supercomputing - it is fast and usually reliable. One of its biggest problems is a lack of robust and comprehensive system software. Among other things, this software allows a collection of Unix processes to be treated as a single parallel application. It does not, however, provide accounting for parallel jobs other than what is provided by AIX for the individual process components. Without parallel job accounting, it is not possible to monitor system use, measure the effectiveness of system administration strategies, or identify system bottlenecks. To address this problem, we have written jsd, a daemon that collects accounting data for parallel jobs. jsd records information in a format that is easily machine- and human-readable, allowing us to extract the most important accounting information with very little effort. jsd also notifies system administrators in certain cases of system failure.
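
    The jsd record layout itself is not reproduced here; the sketch below only illustrates the general idea of a per-job accounting record that is easy for both people and scripts to parse, written as one key=value block per job. The field names are invented for the example.

        import time

        def write_job_record(path, job):
            """Append one parallel-job accounting record as simple key=value lines."""
            with open(path, "a") as f:
                f.write(f"job_id={job['job_id']}\n")
                f.write(f"user={job['user']}\n")
                f.write(f"nodes={job['nodes']}\n")
                f.write(f"start={time.strftime('%Y-%m-%dT%H:%M:%S', time.localtime(job['start']))}\n")
                f.write(f"elapsed_sec={job['elapsed_sec']}\n")
                f.write(f"cpu_sec={job['cpu_sec']}\n")
                f.write("\n")  # blank line separates records

        write_job_record("jobs.acct", {
            "job_id": 1042, "user": "alice", "nodes": 16,
            "start": time.time(), "elapsed_sec": 3600, "cpu_sec": 55800,
        })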

  4. Recent advances in standards for collaborative Digital Anatomic Pathology

    PubMed Central

    2011-01-01

    Context Collaborative Digital Anatomic Pathology refers to the use of information technology that supports the creation and sharing or exchange of information, including data and images, during the complex workflow performed in an Anatomic Pathology department from specimen reception to report transmission and exploitation. Collaborative Digital Anatomic Pathology can only be fully achieved using medical informatics standards. The goal of the international Integrating the Healthcare Enterprise (IHE) initiative is precisely to specify how medical informatics standards should be implemented to meet specific health care needs and to make systems integration more efficient and less expensive. Objective To define the best use of medical informatics standards in order to share and exchange machine-readable structured reports and their evidence (including whole slide images) within hospitals and across healthcare facilities. Methods Specific working groups dedicated to Anatomic Pathology within multiple standards organizations defined standard-based data structures for Anatomic Pathology reports and images as well as informatics transactions in order to integrate Anatomic Pathology information into the electronic healthcare enterprise. Results The DICOM supplements 122 and 145 provide flexible object information definitions dedicated respectively to specimen description and Whole Slide Image acquisition, storage and display. The content profile “Anatomic Pathology Structured Report” (APSR) provides standard templates for structured reports in which textual observations may be bound to digital images or regions of interest. Anatomic Pathology observations are encoded using an international controlled vocabulary defined by the IHE Anatomic Pathology domain that is currently being mapped to SNOMED CT concepts. Conclusion Recent advances in standards for Collaborative Digital Anatomic Pathology are a unique opportunity to share or exchange Anatomic Pathology structured reports that are interoperable at an international level. The use of the machine-readable APSR format supports the development of decision support as well as secondary use of Anatomic Pathology information for epidemiology or clinical research. PMID:21489187

  5. Requirements-Based Conformance Testing of ARINC 653 Real-Time Operating Systems

    NASA Astrophysics Data System (ADS)

    Maksimov, Andrey

    2010-08-01

    Requirements-based testing is emphasized in avionics certification documents because this strategy has been found to be the most effective at revealing errors. This paper describes a unified requirements-based approach to the creation of conformance test suites for mission-critical systems. The approach uses formal machine-readable specifications of requirements and a finite state machine model for on-the-fly generation of test sequences. The paper also presents a test system, built on this approach, for automated test generation for ARINC 653 services. Possible applications of the presented approach to various areas of avionics embedded systems testing are discussed.
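
    The sketch below is a rough, generic illustration of on-the-fly test sequence generation from a finite state machine, not the authors' ARINC 653 model: a bounded walk over a small, invented process state machine that records stimulus/state pairs as a test sequence.

        import random

        # Hypothetical process-state machine; real ARINC 653 models are far richer.
        TRANSITIONS = {
            ("DORMANT", "START"): "READY",
            ("READY", "SCHEDULE"): "RUNNING",
            ("RUNNING", "SUSPEND"): "WAITING",
            ("WAITING", "RESUME"): "READY",
            ("RUNNING", "STOP"): "DORMANT",
        }

        def generate_sequence(start_state="DORMANT", max_steps=8, seed=0):
            """Walk the state machine, returning a list of (state, stimulus, next_state) steps."""
            rng = random.Random(seed)
            state, steps = start_state, []
            for _ in range(max_steps):
                enabled = [(stim, nxt) for (s, stim), nxt in TRANSITIONS.items() if s == state]
                if not enabled:
                    break
                stimulus, next_state = rng.choice(enabled)
                steps.append((state, stimulus, next_state))
                state = next_state
            return steps

        for step in generate_sequence():
            print(" -> ".join(step))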

  6. Actionable, long-term stable and semantic web compatible identifiers for access to biological collection objects

    PubMed Central

    Hyam, Roger; Hagedorn, Gregor; Chagnoux, Simon; Röpert, Dominik; Casino, Ana; Droege, Gabi; Glöckler, Falko; Gödderz, Karsten; Groom, Quentin; Hoffmann, Jana; Holleman, Ayco; Kempa, Matúš; Koivula, Hanna; Marhold, Karol; Nicolson, Nicky; Smith, Vincent S.; Triebel, Dagmar

    2017-01-01

    With biodiversity research activities being increasingly shifted to the web, the need for a system of persistent and stable identifiers for physical collection objects becomes increasingly pressing. The Consortium of European Taxonomic Facilities agreed on a common system of HTTP-URI-based stable identifiers, which is now being rolled out to its member organizations. The system follows Linked Open Data principles and implements redirection mechanisms to human-readable and machine-readable representations of specimens, facilitating seamless integration into the growing semantic web. The implementation of stable identifiers across collection organizations is supported with open-source provider software scripts, best-practice documentation, and recommendations for RDF metadata elements, facilitating harmonized access to collection information in web portals. Database URL: http://cetaf.org/cetaf-stable-identifiers PMID:28365724
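
    Redirection to human- or machine-readable representations is typically driven by HTTP content negotiation. The Flask sketch below is a minimal, hypothetical illustration (the route, target URLs, and institution are invented): the same stable identifier redirects browsers to an HTML page and RDF-requesting clients to a Turtle document.

        from flask import Flask, request, redirect

        app = Flask(__name__)

        @app.route("/specimen/<specimen_id>")
        def resolve(specimen_id):
            """Redirect a stable specimen URI to a human- or machine-readable representation."""
            accept = request.headers.get("Accept", "")
            if "text/html" in accept or not accept:
                # Browsers get the human-readable specimen page.
                return redirect(f"https://collections.example.org/page/{specimen_id}", code=303)
            # Clients asking for RDF (e.g. text/turtle, application/rdf+xml) get metadata.
            return redirect(f"https://collections.example.org/rdf/{specimen_id}.ttl", code=303)

        if __name__ == "__main__":
            app.run(port=8080)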

  7. EduCard. Adult Education Access Card. Policy Option Paper on Strategic Recommendation 4. First Edition.

    ERIC Educational Resources Information Center

    Porter, Dennis

    One recommendation of the 1989 California Strategic Plan for Adult Education is the use of EduCard. EduCard, the Adult Education Access Card, is a means of giving learners access to information about educational opportunities and providing administrators with machine-readable information on learners' prior education and training. Three models are:…

  8. California State Library: Processing Center Design and Specifications. Volume III, Coding Manual.

    ERIC Educational Resources Information Center

    Sherman, Don; Shoffner, Ralph M.

    As part of the report on the California State Library Processing Center design and specifications, this volume is a coding manual for the conversion of catalog card data to a machine-readable form. The form is compatible with the national MARC system, while at the same time it contains provisions for problems peculiar to the local situation. This…

  9. The Application of Clustering Techniques to Citation Data. Research Reports Series B No. 6.

    ERIC Educational Resources Information Center

    Arms, William Y.; Arms, Caroline

    This report describes research carried out as part of the Design of Information Systems in the Social Sciences (DISISS) project. Cluster analysis techniques were applied to a machine readable file of bibliographic data in the form of cited journal titles in order to identify groupings which could be used to structure bibliographic files. Practical…

  10. Demographic Profile of U.S. Children: National File [Machine-Readable Data File].

    ERIC Educational Resources Information Center

    Peterson, J. L.; White, R. N.

    These two computer files contain social and demographic data about U.S. children and their families taken from the March 1985 Current Population Survey of the U.S. Census. One file is for all children; the second file is for black children. The following column variables are included: (1) family structure; (2) parent educational attainment; (3)…

  11. The Nation's Memory: The United States National Archives and Records Administration. An Interview with Don W. Wilson, Archivist of the United States, National Archives and Records Administration.

    ERIC Educational Resources Information Center

    Brodhead, Michael J.; Zink, Steven D.

    1993-01-01

    Discusses the National Archives and Records Administration (NARA) through an interview with the Archivist of the United States, Don Wilson. Topics addressed include archival independence and congressional relations; national information policy; expansion plans; machine-readable archival records; preservation activities; and relations with other…

  12. The Role of Mechanized Services in the Provision of Information with Special Reference to the University Environment.

    ERIC Educational Resources Information Center

    Heim, Kathleen M.

    The use, history, and role of machine-readable data base technology is discussed. First the development of data base technology is traced from its beginnings as a special resource for science and technology to its broader use in universities, with descriptions of some specific services. Next the current status of mechanized information services in…

  13. High School and Beyond. 1980 Senior Cohort. First Follow-Up (1982). [machine-readable data file].

    ERIC Educational Resources Information Center

    National Center for Education Statistics (ED), Washington, DC.

    The High School and Beyond 1980 Senior Cohort First Follow-Up (1982) Data File is presented. The First Follow-Up Senior Cohort data tape consists of four related data files: (1) the student data file (including data availability flags, weights, questionnaire data, and composite variables); (2) Statistical Analysis System (SAS) control cards for…

  14. CAMPUS-MINNESOTA User Information Manual. Project PRIME Report, Number 12.

    ERIC Educational Resources Information Center

    Andrew, Gary M.

    The purpose of this report is to aid the use of the computer simulation model, CAMPUS-M, in 4 specific areas: (1) the conceptual modeling of the institution; (2) the preparation of machine readable input data; (3) the preparation of simulation and report commands for the model; and (4) the actual running of the program on a CDC 6600 computer.…

  15. Second International Mathematics Study; Longitudinal, Classroom Process Surveys for Population A: Students, Teachers, and Schools, 1981-1982 [machine-readable data file].

    ERIC Educational Resources Information Center

    Wolfe, Richard G.

    The Second International Mathematics Study (SIMS) of the International Association for the Evaluation of Educational Achievement (IEA) was conducted in 20 countries on two sampled populations: Population A of 13-year-olds and Population B of students studying mathematics in their final year of secondary school. Mathematics achievement was measured…

  16. Virtual Machine Language

    NASA Technical Reports Server (NTRS)

    Grasso, Christopher; Page, Dennis; O'Reilly, Taifun; Fteichert, Ralph; Lock, Patricia; Lin, Imin; Naviaux, Keith; Sisino, John

    2005-01-01

    Virtual Machine Language (VML) is a mission-independent, reusable software system for programming spacecraft operations. Features of VML include a rich set of data types, named functions, parameters, IF and WHILE control structures, polymorphism, and on-the-fly creation of spacecraft commands from calculated values. Spacecraft functions can be abstracted into named blocks that reside in files aboard the spacecraft. These named blocks accept parameters and execute in a repeatable fashion. The sizes of uplink products are minimized by the ability to call blocks that implement most of the command steps. This block approach also enables some autonomous operations aboard the spacecraft, such as aerobraking, telemetry conditional monitoring, and anomaly response, without developing autonomous flight software. Operators on the ground write blocks and command sequences in a concise, high-level, human-readable programming language (also called VML). A compiler translates the human-readable blocks and command sequences into binary files (the operations products). The flight portion of VML interprets the uplinked binary files. The ground subsystem of VML also includes an interactive sequence-execution tool hosted on workstations, which runs sequences at several thousand times real-time speed, affords debugging, and generates reports. This tool enables iterative development of blocks and sequences within times on the order of seconds.

  17. Word add-in for ontology recognition: semantic enrichment of scientific literature

    PubMed Central

    2010-01-01

    Background In the current era of scientific research, efficient communication of information is paramount. As such, the nature of scholarly and scientific communication is changing; cyberinfrastructure is now absolutely necessary and new media are allowing information and knowledge to be more interactive and immediate. One approach to making knowledge more accessible is the addition of machine-readable semantic data to scholarly articles. Results The Word add-in presented here will assist authors in this effort by automatically recognizing and highlighting words or phrases that are likely information-rich, allowing authors to associate semantic data with those words or phrases, and to embed that data in the document as XML. The add-in and source code are publicly available at http://www.codeplex.com/UCSDBioLit. Conclusions The Word add-in for ontology term recognition makes it possible for an author to add semantic data to a document as it is being written, and it encodes these data using XML tags that are effectively a standard in life sciences literature. Allowing authors to mark up their own work will help increase the amount and quality of machine-readable literature metadata. PMID:20181245

  18. SAS FORMATS: USES AND ABUSES

    EPA Science Inventory

    SAS formats are a very powerful tool. They allow you to display the data in a more readable manner without modifying it. Formats can also be used to group data into categories for use in various procedures like PROC FREQ, PROC TTEST, and PROC MEANS (as a class variable). As ...

  19. Relationships between abstract features and methodological quality explained variations of social media activity derived from systematic reviews about psoriasis interventions.

    PubMed

    Ruano, J; Aguilar-Luque, M; Isla-Tejera, B; Alcalde-Mellado, P; Gay-Mimbrera, J; Hernandez-Romero, José Luis; Sanz-Cabanillas, J L; Maestre-López, B; González-Padilla, M; Carmona-Fernández, P J; Gómez-García, F; García-Nieto, A Vélez

    2018-05-24

    The aim of this study was to describe the relationship among abstract structure, readability, and completeness, and how these features may influence social media activity and bibliometric results, considering systematic reviews (SRs) about interventions in psoriasis classified by methodological quality. Systematic literature searches about psoriasis interventions were undertaken on relevant databases. For each review, methodological quality was evaluated using the Assessing the Methodological Quality of Systematic Reviews (AMSTAR) tool. Abstract length, structure, readability, and quality and completeness of reporting were analyzed. Social media activity (Twitter and Facebook mention counts), as well as Mendeley readers and Google Scholar citations, was obtained for each article. Analyses were conducted to describe any potential influence of abstract characteristics on the reviews' social media diffusion. We classified 139 intervention SRs as displaying high/moderate/low methodological quality. We observed that abstract readability of SRs has remained high for the last 20 years, although there are some differences based on methodological quality. Free-format abstracts were more sensitive to increases in text readability than more structured abstracts (IMRAD or 8-headings), yielding opposite effects on their quality and completeness depending on methodological quality: a worsening in low-quality reviews and an improvement in high-quality ones. Both readability indices and PRISMA for Abstracts total scores showed an inverse relationship with social media activity and bibliometric results in high methodological quality reviews but not in those of lower quality. Our results suggest that increasing abstract readability deserves special consideration when writing free-format summaries of high-quality reviews, because it correlates with improved completeness and quality and may help achieve broader social media visibility and article usage. Copyright © 2018 Elsevier Inc. All rights reserved.

  20. SAS FORMATS: USES AND ABUSES

    EPA Science Inventory

    SAS formats are a very powerful tool. They allow you to display the data in a more readable manner without modifying it. Formats can also be used to group data into categories for use in various procedures like PROC FREQ, PROC TTEST, and PROC MEANS (as a class variable). As w...

  1. Informatics in radiology: automated structured reporting of imaging findings using the AIM standard and XML.

    PubMed

    Zimmerman, Stefan L; Kim, Woojin; Boonn, William W

    2011-01-01

    Quantitative and descriptive imaging data are a vital component of the radiology report and are frequently of paramount importance to the ordering physician. Unfortunately, current methods of recording these data in the report are both inefficient and error prone. In addition, the free-text, unstructured format of a radiology report makes aggregate analysis of data from multiple reports difficult or even impossible without manual intervention. A structured reporting work flow has been developed that allows quantitative data created at an advanced imaging workstation to be seamlessly integrated into the radiology report with minimal radiologist intervention. As an intermediary step between the workstation and the reporting software, quantitative and descriptive data are converted into an extensible markup language (XML) file in a standardized format specified by the Annotation and Image Markup (AIM) project of the National Institutes of Health Cancer Biomedical Informatics Grid. The AIM standard was created to allow image annotation data to be stored in a uniform machine-readable format. These XML files containing imaging data can also be stored on a local database for data mining and analysis. This structured work flow solution has the potential to improve radiologist efficiency, reduce errors, and facilitate storage of quantitative and descriptive imaging data for research. Copyright © RSNA, 2011.
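
    The sketch below shows the general idea of serializing a measurement into XML with Python's standard library. The element names are simplified placeholders chosen for illustration and do not follow the actual AIM schema.

        import xml.etree.ElementTree as ET

        def measurement_to_xml(lesion_id, label, value, unit):
            """Build a simplified, AIM-inspired annotation element (not the real AIM schema)."""
            ann = ET.Element("ImageAnnotation", {"id": lesion_id})
            calc = ET.SubElement(ann, "Calculation", {"label": label})
            ET.SubElement(calc, "Value", {"unit": unit}).text = str(value)
            return ann

        root = ET.Element("AnnotationCollection")
        root.append(measurement_to_xml("lesion-01", "Longest diameter", 23.4, "mm"))
        root.append(measurement_to_xml("lesion-02", "Mean attenuation", 41.0, "HU"))
        ET.ElementTree(root).write("report_data.xml", encoding="utf-8", xml_declaration=True)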

  2. Automatic Inference of Cryptographic Key Length Based on Analysis of Proof Tightness

    DTIC Science & Technology

    2016-06-01

    within an attack tree structure, then expand attack tree methodology to include cryptographic reductions. We then provide the algorithms for...maintaining and automatically reasoning about these expanded attack trees. We provide a software tool that utilizes machine-readable proof and attack metadata...and the attack tree methodology to provide rapid and precise answers regarding security parameters and effective security. This eliminates the need

  3. Guidelines for Processing and Cataloging Computer Software for Schools and Area Education Agencies. Suggestions to Aid Schools and AEAs.

    ERIC Educational Resources Information Center

    Martin, Elizabeth; And Others

    Based on definitions of a machine-readable data file (MRDF) taken from the Anglo-American Cataloging Rules, second edition (AACR2) and Standards for Cataloging Nonprint Materials, the following recommendations for processing items of computer software are provided: (1) base main and added entry determination on AACR2; (2) place designation of form…

  4. Demographic Profile of U.S. Children: States in 1980/1, 1985/6 [Machine-Readable Data File].

    ERIC Educational Resources Information Center

    Peterson, J. L.

    These six computer files contain social and demographic data about children and their families in the following states: (1) California; (2) Florida; (3) Illinois; (4) New York; (5) Pennsylvania; and (6) Texas. Data for 1980/81 and 1985/86 are reported. Data will eventually be provided for the 11 largest states. One file is for all children; the…

  5. About machine-readable travel documents

    NASA Astrophysics Data System (ADS)

    Vaudenay, S.; Vuagnoux, M.

    2007-07-01

    Passports are documents that help immigration officers to identify people. In order to strongly authenticate their data and to automatically identify people, they are now equipped with RFID chips. These contain private information, biometrics, and a digital signature by issuing authorities. Although they substantially increase security at the border controls, they also come with new security and privacy issues. In this paper, we survey existing protocols and their weaknesses.

  6. Investigation of the Public Library as a Linking Agent to Major Scientific, Educational, Social and Environmental Data Bases. Two-Year Interim Report.

    ERIC Educational Resources Information Center

    Summit, Roger K.; Firschein, Oscar

    Eight public libraries participated in a two-year experiment to investigate the potential of the public library as a "linking agent" between the public and the many machine-readable data bases currently accessible using on line computer terminals. The investigation covered users of the service, impact on the library, conditions for…

  7. Investigation of the Public Library as a Linking Agent to Major Scientific, Educational, Social, and Environmental Data Bases. Final Report.

    ERIC Educational Resources Information Center

    Lockheed Research Lab., Palo Alto, CA.

    The DIALIB Project was a 3-year experiment that investigated the potential of the public library as a "linking agent" between the public and the many machine-readable data bases currently accessible via the telephone using online terminals. The study investigated the following questions: (1) Is online search of use to the patrons of a…

  8. Investigation of the Searching Efficiency and Cost of Creating a Remote Access Catalog for the New York State Library. Final Report.

    ERIC Educational Resources Information Center

    Buckland, Lawrence F.; Madden, Mary

    From experimental work performed, and reported upon in this document, it is concluded that converting the New York State Library (NYSL) shelf list sample to machine readable form, and searching this shelf list using a remote access catalog are technically sound concepts though the capital costs of data conversion and system installation will be…

  9. 19 CFR 163.1 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...; electronically stored or transmitted information or data; books; papers; correspondence; accounts; financial...: (1) Electronic information which was used to develop other electronic records or paper documents; (2) Electronic information which is in a readable format such as a facsimile paper format or an electronic or...

  10. 19 CFR 163.1 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...; electronically stored or transmitted information or data; books; papers; correspondence; accounts; financial...) Electronic information which was used to develop other electronic records or paper documents; (2) Electronic information which is in a readable format such as a facsimile paper format or an electronic or hardcopy...

  11. BrainLiner: A Neuroinformatics Platform for Sharing Time-Aligned Brain-Behavior Data

    PubMed Central

    Takemiya, Makoto; Majima, Kei; Tsukamoto, Mitsuaki; Kamitani, Yukiyasu

    2016-01-01

    Data-driven neuroscience aims to find statistical relationships between brain activity and task behavior from large-scale datasets. To facilitate high-throughput data processing and modeling, we created BrainLiner as a web platform for sharing time-aligned, brain-behavior data. Using an HDF5-based data format, BrainLiner treats brain activity and data related to behavior with the same salience, aligning both behavioral and brain activity data on a common time axis. This facilitates learning the relationship between behavior and brain activity. Using a common data file format also simplifies data processing and analyses. Properties describing data are unambiguously defined using a schema, allowing machine-readable definition of data. The BrainLiner platform allows users to upload and download data, as well as to explore and search for data from the web platform. A WebGL-based data explorer can visualize highly detailed neurophysiological data from within the web browser, and a data-driven search feature allows users to search for similar time windows of data. This increases transparency, and allows for visual inspection of neural coding. BrainLiner thus provides an essential set of tools for data sharing and data-driven modeling. PMID:26858636
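
    As a rough sketch of time-aligned storage (not BrainLiner's actual schema), the h5py example below writes brain activity and a behavioral variable against a shared time axis and attaches a few descriptive attributes. All dataset names and values are invented placeholders.

        import numpy as np
        import h5py

        n_samples, n_channels = 1000, 32
        time_s = np.arange(n_samples) / 100.0            # common time axis, 100 Hz
        brain = np.random.randn(n_samples, n_channels)   # stand-in for recorded brain activity
        behavior = np.random.randint(0, 2, n_samples)    # stand-in for a binary task state

        with h5py.File("session01.h5", "w") as f:
            f.create_dataset("time", data=time_s)
            b = f.create_dataset("brain/activity", data=brain)
            b.attrs["sampling_rate_hz"] = 100.0
            b.attrs["unit"] = "a.u."
            f.create_dataset("behavior/task_state", data=behavior)
            f.attrs["subject"] = "S01"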

  12. Computational approaches to standard-compliant biofilm data for reliable analysis and integration.

    PubMed

    Sousa, Ana Margarida; Ferreira, Andreia; Azevedo, Nuno F; Pereira, Maria Olivia; Lourenço, Anália

    2012-12-01

    The study of microorganism consortia, also known as biofilms, is associated with a number of applications in biotechnology, ecotechnology and clinical domains. Nowadays, biofilm studies are heterogeneous and data-intensive, encompassing different levels of analysis. Computational modelling of biofilm studies has thus become a requirement to make sense of these vast and ever-expanding biofilm data volumes. The rationale of the present work is a machine-readable format for representing biofilm studies and supporting biofilm data interchange and data integration. This format is supported by the Biofilm Science Ontology (BSO), the first ontology on biofilms information. The ontology is decomposed into a number of areas of interest, namely: the Experimental Procedure Ontology (EPO), which describes biofilm experimental procedures; the Colony Morphology Ontology (CMO), which characterises microorganism colonies morphologically; and other modules concerning biofilm phenotype, antimicrobial susceptibility and virulence traits. The overall objective behind BSO is to develop semantic resources to capture, represent and share data on biofilms and related experiments in a regularized manner. Furthermore, the present work also introduces a framework in support of biofilm data interchange and analysis - BiofOmics (http://biofomics.org) - and a public repository on colony morphology signatures - MorphoCol (http://stardust.deb.uminho.pt/morphocol).

  13. Computational approaches to standard-compliant biofilm data for reliable analysis and integration.

    PubMed

    Sousa, Ana Margarida; Ferreira, Andreia; Azevedo, Nuno F; Pereira, Maria Olivia; Lourenço, Anália

    2012-07-24

    The study of microorganism consortia, also known as biofilms, is associated with a number of applications in biotechnology, ecotechnology and clinical domains. Nowadays, biofilm studies are heterogeneous and data-intensive, encompassing different levels of analysis. Computational modelling of biofilm studies has thus become a requirement to make sense of these vast and ever-expanding biofilm data volumes. The rationale of the present work is a machine-readable format for representing biofilm studies and supporting biofilm data interchange and data integration. This format is supported by the Biofilm Science Ontology (BSO), the first ontology on biofilms information. The ontology is decomposed into a number of areas of interest, namely: the Experimental Procedure Ontology (EPO), which describes biofilm experimental procedures; the Colony Morphology Ontology (CMO), which characterises microorganism colonies morphologically; and other modules concerning biofilm phenotype, antimicrobial susceptibility and virulence traits. The overall objective behind BSO is to develop semantic resources to capture, represent and share data on biofilms and related experiments in a regularized manner. Furthermore, the present work also introduces a framework in support of biofilm data interchange and analysis - BiofOmics (http://biofomics.org) - and a public repository on colony morphology signatures - MorphoCol (http://stardust.deb.uminho.pt/morphocol).

  14. 37 CFR 1.824 - Form and format for nucleotide and/or amino acid sequence submissions in computer readable form.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Form and format for... And/or Amino Acid Sequences § 1.824 Form and format for nucleotide and/or amino acid sequence... Code for Information Interchange (ASCII) text. No other formats shall be allowed. (3) The computer...

  15. Lightweight Advertising and Scalable Discovery of Services, Datasets, and Events Using Feedcasts

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Ramachandran, R.; Movva, S.

    2010-12-01

    Broadcast feeds (Atom or RSS) are a mechanism for advertising the existence of new data objects on the web, with metadata and links to further information. Users then subscribe to the feed to receive updates. This concept has already been used to advertise the new granules of science data as they are produced (datacasting), with browse images and metadata, and to advertise bundles of web services (service casting). Structured metadata is introduced into the XML feed format by embedding new XML tags (in defined namespaces), using typed links, and reusing built-in Atom feed elements. This “infocasting” concept can be extended to include many other science artifacts, including data collections, workflow documents, topical geophysical events (hurricanes, forest fires, etc.), natural hazard warnings, and short articles describing a new science result. The common theme is that each infocast contains machine-readable, structured metadata describing the object and enabling further manipulation. For example, service casts contain typed links pointing to the service interface description (e.g., WSDL for SOAP services), the service endpoint, and human-readable documentation. Our Infocasting project has three main goals: (1) define and evangelize micro-formats (metadata standards) so that providers can easily advertise their web services, datasets, and topical geophysical events by adding structured information to broadcast feeds; (2) develop authoring tools so that anyone can easily author such service advertisements, data casts, and event descriptions; and (3) provide a one-stop, Google-like search box in the browser that allows discovery of service, data and event casts visible on the web, and services & data registered in the GEOSS repository and other NASA repositories (GCMD & ECHO). To demonstrate the event casting idea, a series of micro-articles, with accompanying event casts containing links to relevant datasets, web services, and science analysis workflows, will be authored for several kinds of geophysical events, such as hurricanes, smoke plume events, tsunamis, etc. The talk will describe our progress so far, and some of the issues with leveraging existing metadata standards to define lightweight micro-formats.
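
    A minimal sketch of the pattern described above, using only the Python standard library: an Atom entry that advertises a dataset with a typed link plus one structured metadata element in an invented namespace (the namespace URI, element names, and dataset URL are placeholders, not an established micro-format).

        import xml.etree.ElementTree as ET

        ATOM = "http://www.w3.org/2005/Atom"
        CAST = "http://example.org/datacast/1.0"          # hypothetical metadata namespace
        ET.register_namespace("", ATOM)
        ET.register_namespace("cast", CAST)

        entry = ET.Element(f"{{{ATOM}}}entry")
        ET.SubElement(entry, f"{{{ATOM}}}title").text = "Sea surface temperature granule 2010-08-01"
        ET.SubElement(entry, f"{{{ATOM}}}updated").text = "2010-08-02T00:00:00Z"
        # Typed link: clients can tell from rel/type that this points at the data file itself.
        ET.SubElement(entry, f"{{{ATOM}}}link", {
            "rel": "enclosure", "type": "application/x-netcdf",
            "href": "https://data.example.org/sst/20100801.nc",
        })
        ET.SubElement(entry, f"{{{CAST}}}boundingBox").text = "-180 -90 180 90"

        print(ET.tostring(entry, encoding="unicode"))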

  16. 19 CFR 163.1 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...; electronically stored or transmitted information or data; books; papers; correspondence; accounts; financial... electronic records or paper documents; (2) Electronic information which is in a readable format such as a facsimile paper format or an electronic or hardcopy spreadsheet; (3) In the case of a paper record that is...

  17. A Study with Computer-Based Circulation Data of the Non-Use and Use of a Large Academic Library. Final Report.

    ERIC Educational Resources Information Center

    Lubans, John, Jr.; And Others

    Computer-based circulation systems, it is widely believed, can be utilized to provide data for library use studies. The study described in this report involves using such a data base to analyze aspects of library use and non-use and types of users. Another major objective of this research was the testing of machine-readable circulation data…

  18. Public Data Set: Erratum: "Multi-point, high-speed passive ion velocity distribution diagnostic on the Pegasus Toroidal Experiment" [Rev. Sci. Instrum. 83, 10D516 (2012)

    DOE Data Explorer

    Burke, Marcus G. [University of Wisconsin-Madison] (ORCID:0000000176193724); Fonck, Raymond J. [University of Wisconsin-Madison] (ORCID:0000000294386762); Bongard, Michael W. [University of Wisconsin-Madison] (ORCID:0000000231609746); Schlossberg, David J. [University of Wisconsin-Madison] (ORCID:0000000287139448); Winz, Gregory R. [University of Wisconsin-Madison] (ORCID:0000000177627184)

    2016-07-18

    This data set contains openly-documented, machine readable digital research data corresponding to figures published in M.G. Burke et al., 'Erratum: "Multi-point, high-speed passive ion velocity distribution diagnostic on the Pegasus Toroidal Experiment" [Rev. Sci. Instrum. 83, 10D516 (2012)],' Rev. Sci. Instrum. 87, 079902 (2016).

  19. Design of an Automated Library Information Storage and Retrieval System for Library of Congress Division for the Blind and Physically Handicapped (DBPH). Final Report.

    ERIC Educational Resources Information Center

    Systems Architects, Inc., Randolph, MA.

    A practical system for producing a union catalog of titles in the collections of the Library of Congress Division for the Blind and Physically Handicapped (DBPH), its regional network, and related agencies from a machine-readable data base is presented. The DBPH organization and operations and the associated regional library network are analyzed.…

  20. Minding the Gap: The Growing Divide Between Privacy and Surveillance Technology

    DTIC Science & Technology

    2013-06-01

    MORIS Mobile Offender Recognition and Information System MRZ Machine Readable Zone NIJ National Institute of Justice NSTC National Science and...in an increasingly mobile world, the practical result is that the most restrictive state law controls a criminal surveillance investigation...1065–1066; Smith, 2011, pp. 1–3). GPS is also used for a variety of administrative purposes ranging from mobile asset tracking (Thomas, 2007) to

  1. Description of Merged Data Base: Appendix F. The Development of Institutions of Higher Education. Theory and Assessment of Impact of Four Possible Areas of Federal Intervention.

    ERIC Educational Resources Information Center

    Jackson, Gregory A.

    Data in this appendix were used in the Office of Education-supported research study of the development of higher education institutions. Machine-readable quantitative data were gathered from the National Center for Education Statistics, the Office of Civil Rights, the Council on Financial Aid to Education, the National Education Data Library, and…

  2. Turning Archival Tapes into an Online “Cardless” Catalog

    PubMed Central

    Zuckerman, Alan E.; Ewens, Wilma A.; Cannard, Bonnie G.; Broering, Naomi C.

    1982-01-01

    Georgetown University has created an online card catalog based on machine readable cataloging records (MARC) loaded from archival tapes or online via the OCLC network. The system is programmed in MUMPS and uses the medical subject headings (MeSH) authority file created by the National Library of Medicine. The online catalog may be searched directly by library users and has eliminated the need for manual filing of catalog cards.

  3. Enhancing the Retrieval Effectiveness of Large Information Systems. Final Report for the Period 1 June 1975-31 December 1976.

    ERIC Educational Resources Information Center

    Becker, David S.; Pyrce, Sharon R.

    The goal of this project was to find ways of enhancing the efficiency of searching machine readable data bases. Ways are sought to transfer to the computer some of the tasks that are normally performed by the user, i.e., to further automate information retrieval. Four experiments were conducted to test the feasibility of a sequential processing…

  4. The Integration of Information Science into the Library School Curriculum at the University of Western Ontario.

    ERIC Educational Resources Information Center

    Svenonius, Elaine

    The integration of Information Science into the library school at the University of Western Ontario was the theme of a talk delivered to ASIS in October 1976 and AALS in January 1977. Two problems arise in the pursuit of integration: (1) information exists in both book form and in some other form; e.g., machine readable form, and (2) theory and…

  5. Documentation for the machine-readable version of the University of Michigan Catalogue of two-dimensional spectral types for the HD stars. Volume 2: Declinations minus 53 deg to minus 40 deg

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1981-01-01

    The magnetic tape version of Volume 2 of the University of Michigan systematic reclassification program for the Henry Draper Catalogue (HD) stars is described. Volume 2 contains all HD stars in the declination range -53 degrees to -40 degrees and also exists in printed form.

  6. The Text Retrieval Conferences (TRECs)

    DTIC Science & Technology

    1998-10-01

    perform a monolingual run in the target language to act as a baseline. Thirteen groups participated in the TREC-6 CLIR track. Three major...language; the use of machine-readable bilingual dictionaries or other existing linguistic resources; and the use of corpus resources to train or...formance for each method. In general, the best cross-language performance was between 50%-75% as effective as a quality monolingual run. The TREC-7

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shettel, D.L. Jr.; Langfeldt, S.L.; Youngquist, C.A.

    This report presents a Hydrogeochemical and Stream Sediment Reconnaissance of the Christian NTMS Quadrangle, Alaska. In addition to this abbreviated data release, more complete data are available to the public in machine-readable form. These machine-readable data, as well as quarterly or semiannual program progress reports containing further information on the HSSR program in general, or on the Los Alamos National Laboratory portion of the program in particular, are available from DOE's Technical Library at its Grand Junction Area Office. Presented in this data release are location data, field analyses, and laboratory analyses of several different sample media. For the sake of brevity, many field site observations have not been included in this volume; these data are, however, available on the magnetic tape. Appendices A through D describe the sample media and summarize the analytical results for each medium. The data have been subdivided by one of the Los Alamos National Laboratory sorting programs of Zinkl and others (1981a) into groups of stream-sediment, lake-sediment, stream-water, lake-water, and ground-water samples. For each group which contains a sufficient number of observations, statistical tables, tables of raw data, and 1:1,000,000 scale maps of pertinent elements have been included in this report. Also included are maps showing results of multivariate statistical analyses.

  8. The Readability of Online Resources for Mastopexy Surgery.

    PubMed

    Vargas, Christina R; Chuang, Danielle J; Lee, Bernard T

    2016-01-01

    As more patients use Internet resources for health information, there is increasing interest in evaluating the readability of available online materials. The National Institutes of Health and American Medical Association recommend that patient educational content be written at a sixth-grade reading level. This study evaluates the most popular online resources for information about mastopexy relative to average adult literacy in the United States. The 12 most popular sites returned by the largest Internet search engine were identified using the search term "breast lift surgery." Relevant articles from the main sites were downloaded and formatted into text documents. Pictures, captions, links, and references were excluded. The readability of these 100 articles was analyzed overall and subsequently by site using 10 established readability tests. Subgroup analysis was performed for articles discussing the benefits of surgery and those focusing on risks. The overall average readability of online patient information was 13.3 (range, 11.1-15). There was a range of average readability scores overall across the 12 sites from 8.9 to 16.1, suggesting that some may be more appropriate than others for patient demographics with different health literacy levels. Subgroup analysis revealed that articles discussing the risks of mastopexy were significantly harder to read (mean, 14.1) than articles about benefits (11.6). Patient-directed articles from the most popular online resources for mastopexy information are uniformly above the recommended reading level and likely too difficult to be understood by a large number of patients in the United States.

  9. A TEX86 surface sediment database and extended Bayesian calibration

    NASA Astrophysics Data System (ADS)

    Tierney, Jessica E.; Tingley, Martin P.

    2015-06-01

    Quantitative estimates of past temperature changes are a cornerstone of paleoclimatology. For a number of marine sediment-based proxies, the accuracy and precision of past temperature reconstructions depend on a spatial calibration of modern surface sediment measurements to overlying water temperatures. Here, we present a database of 1095 surface sediment measurements of TEX86, a temperature proxy based on the relative cyclization of marine archaeal glycerol dialkyl glycerol tetraether (GDGT) lipids. The dataset is archived in a machine-readable format with geospatial information, fractional abundances of lipids (if available), and metadata. We use this new database to update surface and subsurface temperature calibration models for TEX86 and demonstrate the applicability of the TEX86 proxy to past temperature prediction. The TEX86 database confirms that surface sediment GDGT distribution has a strong relationship to temperature, which accounts for over 70% of the variance in the data. Future efforts, made possible by the data presented here, will seek to identify variables with secondary relationships to GDGT distributions, such as archaeal community composition.
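
    For readers unfamiliar with the index, the commonly used definition of TEX86 is a ratio of GDGT abundances; the sketch below computes it from fractional abundances. The numbers shown are made up for illustration, and the calibration from TEX86 to temperature is deliberately omitted.

        def tex86(gdgt1, gdgt2, gdgt3, cren_prime):
            """TEX86 = ([GDGT-2] + [GDGT-3] + [Cren']) / ([GDGT-1] + [GDGT-2] + [GDGT-3] + [Cren'])."""
            numerator = gdgt2 + gdgt3 + cren_prime
            denominator = gdgt1 + gdgt2 + gdgt3 + cren_prime
            return numerator / denominator

        # Made-up fractional abundances for illustration only.
        print(round(tex86(gdgt1=0.30, gdgt2=0.12, gdgt3=0.05, cren_prime=0.03), 3))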

  10. Apply lightweight recognition algorithms in optical music recognition

    NASA Astrophysics Data System (ADS)

    Pham, Viet-Khoi; Nguyen, Hai-Dang; Nguyen-Khac, Tung-Anh; Tran, Minh-Triet

    2015-02-01

    The problems of digitizing musical scores and transforming them into machine-readable formats need to be solved, since solutions help people to enjoy music, to learn music, to conserve music sheets, and even to assist music composers. However, the results of existing methods still require improvements for higher accuracy. Therefore, the authors propose lightweight algorithms for Optical Music Recognition to help people to recognize and automatically play musical scores. In our proposal, after removing staff lines and extracting symbols, each music symbol is represented as a grid of identical M ∗ N cells, and the features are extracted and classified with multiple lightweight SVM classifiers. Through experiments, the authors find that the size of 10 ∗ 12 cells yields the highest precision value. Experimental results on the dataset consisting of 4929 music symbols taken from 18 modern music sheets in the Synthetic Score Database show that our proposed method is able to classify printed musical scores with accuracy up to 99.56%.
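
    A toy sketch of the grid-feature idea, not the authors' implementation: each symbol image is reduced to a 10 x 12 grid of cell densities, flattened to a feature vector, and classified with an SVM. The images and labels here are random placeholders standing in for real extracted symbols.

        import numpy as np
        from sklearn.svm import SVC

        GRID_ROWS, GRID_COLS = 10, 12

        def grid_features(symbol_image):
            """Average pixel density over a GRID_ROWS x GRID_COLS grid and flatten to a vector."""
            h, w = symbol_image.shape
            rows = np.array_split(np.arange(h), GRID_ROWS)
            cols = np.array_split(np.arange(w), GRID_COLS)
            return np.array([[symbol_image[np.ix_(r, c)].mean() for c in cols] for r in rows]).ravel()

        # Random stand-ins for binarized symbol images (40x30 pixels) and their class labels.
        rng = np.random.default_rng(0)
        images = rng.integers(0, 2, size=(200, 40, 30))
        labels = rng.integers(0, 5, size=200)            # 5 hypothetical symbol classes

        X = np.array([grid_features(img) for img in images])
        clf = SVC(kernel="linear").fit(X[:150], labels[:150])
        print("held-out accuracy:", clf.score(X[150:], labels[150:]))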

  11. Hippocampome.org: a knowledge base of neuron types in the rodent hippocampus.

    PubMed

    Wheeler, Diek W; White, Charise M; Rees, Christopher L; Komendantov, Alexander O; Hamilton, David J; Ascoli, Giorgio A

    2015-09-24

    Hippocampome.org is a comprehensive knowledge base of neuron types in the rodent hippocampal formation (dentate gyrus, CA3, CA2, CA1, subiculum, and entorhinal cortex). Although the hippocampal literature is remarkably information-rich, neuron properties are often reported with incompletely defined and notoriously inconsistent terminology, creating a formidable challenge for data integration. Our extensive literature mining and data reconciliation identified 122 neuron types based on neurotransmitter, axonal and dendritic patterns, synaptic specificity, electrophysiology, and molecular biomarkers. All ∼3700 annotated properties are individually supported by specific evidence (∼14,000 pieces) in peer-reviewed publications. Systematic analysis of this unprecedented amount of machine-readable information reveals novel correlations among neuron types and properties, the potential connectivity of the full hippocampal circuitry, and outstanding knowledge gaps. User-friendly browsing and online querying of Hippocampome.org may aid design and interpretation of both experiments and simulations. This powerful, simple, and extensible neuron classification endeavor is unique in its detail, utility, and completeness.

  12. Readability Assessment of Patient Information about Lymphedema and Its Treatment.

    PubMed

    Seth, Akhil K; Vargas, Christina R; Chuang, Danielle J; Lee, Bernard T

    2016-02-01

    Patient use of online resources for health information is increasing, and access to appropriately written information has been associated with improved patient satisfaction and overall outcomes. The American Medical Association and the National Institutes of Health recommend that patient materials be written at a sixth-grade reading level. In this study, the authors simulated a patient search of online educational content for lymphedema and evaluated readability. An online search for the term "lymphedema" was performed, and the first 12 hits were identified. User and location filters were disabled and sponsored results were excluded. Patient information from each site was downloaded and formatted into plain text. Readability was assessed using established tests: Coleman-Liau, Flesch-Kincaid, Flesch Reading Ease Index, FORCAST Readability Formula, Fry Graph, Gunning Fog Index, New Dale-Chall Formula, New Fog Count, Raygor Readability Estimate, and Simple Measure of Gobbledygook Readability Formula. There were 152 patient articles downloaded; the overall mean reading level was 12.6. Individual website reading levels ranged from 9.4 (cancer.org) to 16.7 (wikipedia.org). There were 36 articles dedicated to conservative treatments for lymphedema; surgical treatment was mentioned in nine articles across four sites. The average reading level for conservative management was 12.7, compared with 15.6 for surgery (p < 0.001). Patient information found through an Internet search for lymphedema is too difficult for many American adults to read. Websites queried had a range of readability, and surgeons should direct patients to sites appropriate for their level. There is limited information about surgical treatment available on the most popular sites; this information is significantly harder to read than sections on conservative measures.
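
    Two of the readability tests named above have simple closed-form definitions. The sketch below computes the Flesch Reading Ease score and the Flesch-Kincaid grade level with a crude vowel-group syllable counter; published tools use more careful syllable and sentence detection, so treat the output as an approximation.

        import re

        def count_syllables(word):
            """Very rough syllable estimate: count groups of consecutive vowels."""
            return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

        def readability(text):
            sentences = max(1, len(re.findall(r"[.!?]+", text)))
            words = re.findall(r"[A-Za-z]+", text)
            syllables = sum(count_syllables(w) for w in words)
            wps = len(words) / sentences                  # words per sentence
            spw = syllables / len(words)                  # syllables per word
            flesch_reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
            flesch_kincaid_grade = 0.39 * wps + 11.8 * spw - 15.59
            return flesch_reading_ease, flesch_kincaid_grade

        fre, fkgl = readability("Lymphedema is swelling caused by a buildup of lymph fluid. "
                                "It often affects an arm or a leg after lymph node removal.")
        print(f"Flesch Reading Ease: {fre:.1f}, Flesch-Kincaid Grade Level: {fkgl:.1f}")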

  13. SAS FORMATS: USES AND ABUSES

    EPA Science Inventory

    SAS formats are a very powerful tool. They allow you to display the data in a more readable manner without modifying the data. They can also be used to group data into categories for use in various procedures like PROC FREQ, PROC TTEST, and PROC MEANS (as a class variable). ...

  14. Parts Quality Management: Direct Part Marking via Data Matrix Symbols for Mission Assurance

    NASA Technical Reports Server (NTRS)

    Moss, Chantrice

    2013-01-01

    A United States Government Accountability Office (GAO) review of twelve NASA programs found widespread parts quality problems contributing to significant cost overruns, schedule delays, and reduced system reliability. Direct part-marking with Data Matrix symbols could significantly improve the quality of inventory control and parts lifecycle management. This paper examines the feasibility of using 15 marking technologies for use in future NASA programs. A structural analysis is based on marked material type, operational environment (e.g., ground, suborbital, orbital), durability of marks, ease of operation, reliability, and affordability. A cost-benefits analysis considers marking technology (data plates, label printing, direct part marking) and marking types (two-dimensional machine-readable, human-readable). Previous NASA parts marking efforts and historical cost data are accounted for, including in-house vs. outsourced marking. Some marking methods are still under development. While this paper focuses on NASA programs, results may be applicable to a variety of industrial environments.

  15. The Frictionless Data Package: Data Containerization for Automated Scientific Workflows

    NASA Astrophysics Data System (ADS)

    Shepherd, A.; Fils, D.; Kinkade, D.; Saito, M. A.

    2017-12-01

    As cross-disciplinary geoscience research increasingly relies on machines to discover and access data, one of the critical questions facing data repositories is how data and supporting materials should be packaged for consumption. Traditionally, data repositories have relied on a human's involvement throughout discovery and access workflows. This human could assess fitness for purpose by reading loosely coupled, unstructured information from web pages and documentation. In attempts to shorten the time to science and access data resources across many disciplines, expectations for machines to mediate the process of discovery and access are challenging data repository infrastructure. The challenge is to deliver data and information in ways that enable machines to make better decisions, by enabling them to understand the data and metadata of many data types. Additionally, once machines have recommended a data resource as relevant to an investigator's needs, the data resource should be easy to integrate into that investigator's toolkits for analysis and visualization. The Biological and Chemical Oceanography Data Management Office (BCO-DMO) supports NSF-funded OCE and PLR investigators with their projects' data management needs. These needs involve a number of varying data types, some of which require multiple files with differing formats. Presently, BCO-DMO has described these data types and the important relationships between each type's data files through human-readable documentation on web pages. For machines directly accessing data files from BCO-DMO, this documentation could be overlooked and lead to misinterpreting the data. Instead, BCO-DMO is exploring the idea of data containerization, or packaging data and related information for easier transport, interpretation, and use. In researching the landscape of data containerization, BCO-DMO found that the Frictionless Data Package (http://frictionlessdata.io/) provides a number of valuable advantages over similar solutions. This presentation will focus on these advantages and how the Frictionless Data Package addresses a number of real-world use cases for data discovery, access, analysis and visualization.
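
    As a minimal sketch of the containerization idea, the snippet below writes a datapackage.json descriptor following the general Frictionless Data Package pattern: one tabular resource plus a field schema. The file names and fields are invented for illustration, and the full specification defines many more optional properties.

        import json

        descriptor = {
            "name": "bottle-data-example",
            "title": "Example oceanographic bottle data",
            "resources": [
                {
                    "name": "bottle-samples",
                    "path": "data/bottle_samples.csv",
                    "format": "csv",
                    "schema": {
                        "fields": [
                            {"name": "station", "type": "string"},
                            {"name": "depth_m", "type": "number"},
                            {"name": "temperature_c", "type": "number"},
                            {"name": "iron_nmol_kg", "type": "number"},
                        ]
                    },
                }
            ],
        }

        with open("datapackage.json", "w") as f:
            json.dump(descriptor, f, indent=2)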

  16. 10 CFR 75.31 - General requirements.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... appropriate computer-readable format, an initial inventory report, and thereafter shall make accounting... given notice, under § 75.7 that their facilities are subject to the application of IAEA safeguards...

  17. 10 CFR 75.31 - General requirements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... appropriate computer-readable format, an initial inventory report, and thereafter shall make accounting... given notice, under § 75.7 that their facilities are subject to the application of IAEA safeguards...

  18. 10 CFR 75.31 - General requirements.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... appropriate computer-readable format, an initial inventory report, and thereafter shall make accounting... given notice, under § 75.7 that their facilities are subject to the application of IAEA safeguards...

  19. 10 CFR 75.31 - General requirements.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... appropriate computer-readable format, an initial inventory report, and thereafter shall make accounting... given notice, under § 75.7 that their facilities are subject to the application of IAEA safeguards...

  20. 10 CFR 75.31 - General requirements.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... appropriate computer-readable format, an initial inventory report, and thereafter shall make accounting... given notice, under § 75.7 that their facilities are subject to the application of IAEA safeguards...

  1. Units in the VO Version 1.0

    NASA Astrophysics Data System (ADS)

    Derriere, Sebastien; Gray, Norman; Demleitner, Markus; Louys, Mireille; Ochsenbein, Francois; Derriere, Sebastien; Gray, Norman

    2014-05-01

    This document describes a recommended syntax for writing the string representation of unit labels ("VOUnits"). In addition, it describes a set of recognised and deprecated units, which is as far as possible consistent with other relevant standards (BIPM, ISO/IEC and the IAU). The intention is that units written to conform to this specification will likely also be parsable by other well-known parsers. To this end, we include machine-readable grammars for other units syntaxes.

  2. OCTANET--an electronic library network: I. Design and development.

    PubMed Central

    Johnson, M F; Pride, R B

    1983-01-01

    The design and development of the OCTANET system for networking among medical libraries in the midcontinental region is described. This system's features and configuration may be attributed, at least in part, to normal evolution of technology in library networking, remote access to computers, and development of machine-readable data bases. Current functions and services of the system are outlined and implications for future developments in computer-based networking are discussed. PMID:6860825

  3. A bill to extend the period during which Iraqis who were employed by the United States Government in Iraq may be granted special immigrant status and to temporarily increase the fee or surcharge for processing machine-readable nonimmigrant visas.

    THOMAS, 113th Congress

    Sen. Shaheen, Jeanne [D-NH

    2013-09-30

    House - 10/01/2013 Held at the desk. (All Actions) Notes: For further action, see H.R.3233, which became Public Law 113-42 on 10/4/2013. Tracker: This bill has the status Passed Senate.

  4. Identification of 1.4 Million Active Galactic Nuclei In the Mid-Infrared Using WISE Data

    DTIC Science & Technology

    2015-11-01

    galaxies – infrared: stars – galaxies: active – quasars: general Supporting material: machine-readable table 1. INTRODUCTION The International Celestial...AGN-dominated galaxies, optical emission is thought to originate from the compact accretion disk surrounding the supermassive black hole (SMBH), while... galaxies, an optical centroid can be shifted relative to the radio position because of contamination from the host galaxy. Depending on the distance to

  5. Oil and gas field code master list, 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    This document contains data collected through October 1993 and provides standardized field name spellings and codes for all identified oil and/or gas fields in the United States. Other Federal and State government agencies, as well as industry, use the EIA Oil and Gas Field Code Master List as the standard for field identification. A machine-readable version of the Oil and Gas Field Code Master List is available from the National Technical Information Service.

  6. 45 CFR 164.524 - Access of individuals to protected health information.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... individual with access to the protected health information in the form or format requested by the individual, if it is readily producible in such form or format; or, if not, in a readable hard copy form or such other form or format as agreed to by the covered entity and the individual. (ii) The covered entity may...

  7. Readability of internet-sourced patient education material related to "labour analgesia".

    PubMed

    Boztas, Nilay; Omur, Dilek; Ozbılgın, Sule; Altuntas, Gözde; Piskin, Ersan; Ozkardesler, Sevda; Hanci, Volkan

    2017-11-01

    We evaluated the readability of Internet-sourced patient education materials (PEMs) related to "labour analgesia." In addition to assessing the readability of websites, we aimed to compare commercial, personal, and academic websites. We used the most popular search engine (http://www.google.com) in our study. The first 100 websites in English that resulted from a search for the key words "labour analgesia" were scanned. Websites not in English were excluded, as were graphs, pictures, videos, tables, figures, and list formats within the text, all punctuation, texts of fewer than 100 words, feedback forms not related to education, URL (Uniform Resource Locator) listings, author information, references, legal disclaimers, and addresses and telephone numbers. The texts included in the study were assessed using the Flesch Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL), Simple Measure of Gobbledygook (SMOG), and Gunning Frequency of Gobbledygook (FOG) readability formulae. The number of Latin words within the text was determined. Analysis of 300-word sections of the texts revealed that the mean FRES was 47.54 ± 12.54 (quite difficult), mean FKGL and SMOG were 11.92 ± 2.59 and 10.57 ± 1.88 years of education, respectively, and mean Gunning FOG was 14.71 ± 2.76 (very difficult). Within 300-word sections, the mean number of Latin words was identified as 16.56 ± 6.37. In our study, the readability level of Internet-sourced PEM related to "labour analgesia" was found to be quite high, indicating poor readability.
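
    For reference, the commonly cited forms of the four readability formulae used above are given below; these standard expressions are general background, not reproduced from the article itself.

      \mathrm{FRES} = 206.835 - 1.015\,\frac{\text{words}}{\text{sentences}} - 84.6\,\frac{\text{syllables}}{\text{words}}

      \mathrm{FKGL} = 0.39\,\frac{\text{words}}{\text{sentences}} + 11.8\,\frac{\text{syllables}}{\text{words}} - 15.59

      \mathrm{SMOG} = 1.0430\,\sqrt{\text{polysyllables}\times\frac{30}{\text{sentences}}} + 3.1291

      \mathrm{FOG} = 0.4\left(\frac{\text{words}}{\text{sentences}} + 100\,\frac{\text{complex words}}{\text{words}}\right)

    Higher FRES values indicate easier text, while FKGL, SMOG, and FOG approximate the years of schooling required to understand it.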

  8. How well are health information websites displayed on mobile phones? Implications for the readability of health information.

    PubMed

    Cheng, Christina; Dunn, Matthew

    2017-03-01

    Issue addressed More than 87% of Australians own a mobile phone with Internet access and 82% of phone owners use their smartphones to search for health information, indicating that mobile phones may be a powerful tool for building health literacy. Yet, online health information has been found to be above the reading ability of the general population. As reading on a smaller screen may further complicate the readability of information, this study aimed to examine how health information is displayed on mobile phones and its implications for readability. Methods Using a cross-sectional design with convenience sampling, a sample of 270 mobile webpages with information on 12 common health conditions was generated for analysis; the webpages were categorised based on design and position of information display. Results The results showed that 71.48% of webpages were mobile-friendly, but only 15.93% were mobile-friendly webpages designed in a way that optimises readability, with a paging format and queried information displayed for immediate viewing. Conclusion With inadequate evidence and lack of consensus on how webpage design can best promote reading and comprehension, it is difficult to draw a conclusion on the effect of current mobile health information presentation on readability. So what? Building mobile-responsive websites should be a priority for health information providers and policy-makers. Research efforts are urgently required to identify how best to enhance readability of mobile health information and fully capture the capabilities of mobile phones as a useful device to increase health literacy.

  9. Gene Fusion Markup Language: a prototype for exchanging gene fusion data.

    PubMed

    Kalyana-Sundaram, Shanker; Shanmugam, Achiraman; Chinnaiyan, Arul M

    2012-10-16

    An avalanche of next generation sequencing (NGS) studies has generated an unprecedented amount of genomic structural variation data. These studies have also identified many novel gene fusion candidates with more detailed resolution than previously achieved. However, in the excitement and necessity of publishing the observations from this recently developed cutting-edge technology, no community standardization approach has arisen to organize and represent the data with the essential attributes in an interchangeable manner. As transcriptome studies have been widely used for gene fusion discoveries, the current non-standard mode of data representation could potentially impede data accessibility, critical analyses, and further discoveries in the near future. Here we propose a prototype, Gene Fusion Markup Language (GFML) as an initiative to provide a standard format for organizing and representing the significant features of gene fusion data. GFML will offer the advantage of representing the data in a machine-readable format to enable data exchange, automated analysis interpretation, and independent verification. As this database-independent exchange initiative evolves it will further facilitate the formation of related databases, repositories, and analysis tools. The GFML prototype is made available at http://code.google.com/p/gfml-prototype/. The Gene Fusion Markup Language (GFML) presented here could facilitate the development of a standard format for organizing, integrating and representing the significant features of gene fusion data in an inter-operable and query-able fashion that will enable biologically intuitive access to gene fusion findings and expedite functional characterization. A similar model is envisaged for other NGS data analyses.

  10. Health Literacy Demands of Patient-Reported Evaluation Tools in Orthopedics: A Mixed-Methods Case Study.

    PubMed

    Hadden, Kristie; Prince, Latrina Y; Barnes, C Lowry

    In response to an assessment of organizational health literacy practices at a major academic health center, this case study evaluated the health literacy demands of patient-reported outcome measures commonly used in orthopedic surgery practices to identify areas for improvement. A mixed-methods approach was used to analyze the readability and patient feedback of orthopedic patient-reported outcome materials. Qualitative results were derived from focus group notes, observations, recordings, and consensus documents. Results were combined to formulate recommendations for quality improvement. Readability results indicated that narrative portions of sample patient outcome tools were written within or below the recommended eighth-grade reading level (grade level = 5.9). However, document literacy results were higher than the recommended reading level (grade level = 9.8). Focus group results revealed that participants had consensus on 8 of 12 plain language best practices, including use of bullet lists and jargon or technical words in both instruments. Although the typical readability of both instruments was not exceedingly high, appropriate readability formula and assessment methods gave a more comprehensive assessment of true readability. In addition, participant feedback revealed the need to reduce jargon and improve formatting to lessen the health literacy demands on patients. As clinicians turn more toward patient-reported measures to assess health care quality, it is important to consider the health literacy demands that are inherent in the instruments they are given in our health systems.

  11. Coordinating Communities and Building Governance in the Development of Schematic and Semantic Standards: the Key to Solving Global Earth and Space Science Challenges in the 21st Century.

    NASA Astrophysics Data System (ADS)

    Wyborn, L. A.

    2007-12-01

    The Information Age in Science is being driven partly by the data deluge, as exponentially growing volumes of data are being generated by research. Such large volumes of data cannot be effectively processed by humans, and efficient and timely processing by computers requires the development of specific machine-readable formats. Further, as key challenges in the earth and space sciences, such as climate change, hazard prediction and the sustainable development of resources, require a cross-disciplinary approach, data from various domains will need to be integrated from globally distributed sources, also via machine-to-machine formats. However, it is becoming increasingly apparent that existing standards can be very domain specific and most existing data transfer formats require human intervention. Where groups from different communities do try to combine data across domain/discipline boundaries, much time is spent reformatting and reorganizing the data, and it is conservatively estimated that this can take 80% of a project's time and resources. Four different types of standards are required for machine-to-machine interaction: systems, syntactic, schematic and semantic. Standards at the systems level (WMS, WFS, etc.) and at the syntactic level (GML, Observations and Measurements, SensorML) are being developed through international standards bodies such as ISO, OGC, W3C and IEEE. In contrast, standards at the schematic level (e.g., GeoSciML, LandslidesML, WaterML, QuakeML) and at the semantic level (i.e., ontologies and vocabularies) are currently developing rapidly, in a very uncoordinated way and with little governance. As the size of the community that can machine-read each other's data depends on the size of the community that has developed the schematic or semantic standards, it is essential that, to achieve global integration of earth and space science data, the required standards be developed through international collaboration using accepted standard procedures. Once developed, the standards also require some form of governance to maintain and then extend them as the science evolves to meet new challenges. A standard that does have some governance is GeoSciML, a data transfer standard for geoscience map data. GeoSciML is currently being developed by a consortium of 7 countries under the auspices of the Commission for the Management and Application of Geoscience Information (CGI), a commission of the International Union of Geological Sciences. Perhaps other 'ML' or ontology and vocabulary development 'teams' need to look to their international domain-specific specialty societies for endorsement and governance. But the issue goes beyond the Earth and Space Sciences, as increasingly cross- and intra-disciplinary science requires machine-to-machine interaction with other science disciplines such as physics, chemistry and astronomy. For example, for geochemistry do we develop a GeochemistryML or do we extend the existing Chemical Markup Language? Again, the question is who will provide the coordination of the development of the required schematic and semantic standards that underpin machine-to-machine global integration of science data. Is this a role for ICSU, for CODATA, or for another body? In order to address this issue, Geoscience Australia and CSIRO established the Solid Earth and Environmental Grid Community website to enable communities to 'advertise' standards development and to provide a community TWIKI where standards can be developed in a globally 'open' environment.

  12. A functional-based segmentation of human body scans in arbitrary postures.

    PubMed

    Werghi, Naoufel; Xiao, Yijun; Siebert, Jan Paul

    2006-02-01

    This paper presents a general framework that aims to address the task of segmenting three-dimensional (3-D) scan data representing the human form into subsets which correspond to functional human body parts. Such a task is challenging due to the articulated and deformable nature of the human body. A salient feature of this framework is that it is able to cope with various body postures and is in addition robust to noise, holes, irregular sampling and rigid transformations. Although whole human body scanners are now capable of routinely capturing the shape of the whole body in machine readable format, they have not yet realized their potential to provide automatic extraction of key body measurements. Automated production of anthropometric databases is a prerequisite to satisfying the needs of certain industrial sectors (e.g., the clothing industry). This implies that in order to extract specific measurements of interest, whole body 3-D scan data must be segmented by machine into subsets corresponding to functional human body parts. However, previously reported attempts at automating the segmentation process suffer from various limitations, such as being restricted to a standard specific posture and being vulnerable to scan data artifacts. Our human body segmentation algorithm advances the state of the art to overcome the above limitations and we present experimental results obtained using both real and synthetic data that confirm the validity, effectiveness, and robustness of our approach.

  13. Industrial Arts Technology Bibliography; An Annotated Reference for Librarians.

    ERIC Educational Resources Information Center

    New York State Education Dept., Albany. Bureau of Secondary Curriculum Development.

    This compilation is designed to assist librarians in selecting books for supplementing the expanding program of industrial arts education. The books were selected for the major subject areas of a broad industrial arts program, on the basis of reflected interest of students, content, format, and readability. The format and coding used in the…

  14. Astronomical Data Center Bulletin, volume 1, number 3

    NASA Technical Reports Server (NTRS)

    Mead, J. M.; Warren, W. H., Jr.; Nagy, T. A.

    1983-01-01

    A catalog of galactic O-type stars, a machine-readable version of the bright star catalog, a two-micron sky survey, sky survey sources with problematical Durchmusterung identifications, data retrieval for visual binary stars, faint blue objects, the sixth catalog of galactic Wolf-Rayet stars, declination versus magnitude distribution, the SAO-HD-GC-DM cross index catalog, star cross-identification tables, astronomical sources, bibliographical star index search updates, DO-HD and HD-DO cross indices, and catalogs are reviewed.

  15. The NATO thesaurus project

    NASA Technical Reports Server (NTRS)

    Krueger, Jonathan

    1990-01-01

    This document describes functionality to be developed to support the NATO technical thesaurus. Described are the specificity of the thesaurus structure and function; the distinction between the thesaurus information and its representation in a given online, machine readable, or printed form; the enhancement of the thesaurus with the assignment of COSATI codes (fields and groups) to posting terms, the integration of DTIC DRIT and NASA thesauri related terminology, translation of posting terms into French; and the provision of a basis for system design.

  16. Documentation for the machine-readable version of photometric data for nearby stars

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1982-01-01

    A computer list of all photometric systems (of those considered) in which each star was measured is provided. The file is a subset of a much larger and more comprehensive compilation, which lists all measured photoelectric photometric systems for any star that has been measured in at least one photoelectric system. In addition to the photometric system identifications, cross identifications to the Henry Draper and Durchmusterung catalogs and apparent visual magnitudes are included.

  17. Catalog of infrared observations

    NASA Technical Reports Server (NTRS)

    Gezari, D. Y.; Schmitz, M.; Mead, J. M.

    1982-01-01

    The infrared astronomical data base and its principal data product, the Catalog of Infrared Observations (CIO), comprise a machine-readable library of infrared (1 micron to 1000 microns) astronomical observations. To date, over 1300 journal articles and 10 major survey catalogs are included in this data base, which contains about 55,000 individual observations of about 10,000 different infrared sources. Of these, some 8,000 sources are identifiable with visible objects, and about 2,000 do not have known visible counterparts.

  18. Catalog of SAS-2 gamma-ray observations (Fichtel, et al. 1990)

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.

    1990-01-01

    The machine-readable version of the catalog, as it is currently being distributed from the Astronomical Data Center, is described. The SAS-2 gamma ray catalog contains fluxes measured with the high energy gamma ray telescope flown aboard the second NASA Small Astronomy Satellite. The objects measured include various types of galaxies, quasi-stellar, and BL Lacertae objects, and pulsars. The catalog contains separate files for galaxies, pulsars, other objects, notes, and references.

  19. An atlas of stellar spectra between 2.00 and 2.45 micrometers (Arnaud, Gilmore, and Collier Cameron 1989)

    NASA Technical Reports Server (NTRS)

    Warren, Wayne N., Jr.

    1990-01-01

    The machine-readable version of the atlas, as it is currently being distributed from the Astronomical Data Center, is described. The atlas represents a collection of spectra in the wavelength range 2.00 to 2.45 microns having a resolution of approximately 0.02 micron. The sample of 73 stars includes a supergiant, giants, dwarfs, and subdwarfs with a chemical abundance range of about -2 to +0.5 dex.

  20. Passport examination by a confocal-type laser profile microscope.

    PubMed

    Sugawara, Shigeru

    2008-06-10

    The author proposes a nondestructive and highly precise method of measuring the thickness of a film pasted on a passport using a confocal-type laser profile microscope. The effectiveness of this method in passport examination is demonstrated. A confocal-type laser profile microscope is used to create profiles of the film surface and film-paper interface; these profiles are used to calculate the film thickness by employing an algorithm developed by the author. The film thicknesses of the passport samples--35 genuine and 80 counterfeit Japanese passports--are measured nondestructively. The intra-sample standard deviation of the film thicknesses of the genuine and counterfeit Japanese passports was of the order of 1 microm. The intersample standard deviations of the film thicknesses of passports forged using the same tools and techniques are expected to be of the order of 1 microm. The thickness values of the films on the machine-readable genuine passports ranged between 31.95 microm and 36.95 microm. The likelihood ratio of this method in the authentication of machine-readable Japanese genuine passports is 11.7. Therefore, this method is effective for the authentication of genuine passports. Since the distribution of the film thickness of all forged passports was considerably larger than the accuracy of this method, the method is also considered effective for revealing relations among forged passports and acquiring proof of the crime.

  1. Matrix frequency analysis and its applications to language classification of textual data for English and Hebrew

    NASA Astrophysics Data System (ADS)

    Uchill, Joseph H.; Assadi, Amir H.

    2003-01-01

    The advent of the internet has opened a host of new and exciting questions in the science and mathematics of information organization and data mining. In particular, a highly ambitious promise of the internet is to bring the bulk of human knowledge to everyone with access to a computer network, providing a democratic medium for sharing and communicating knowledge regardless of the language of the communication. The development of sharing and communication of knowledge via transfer of digital files is the first crucial achievement in this direction. Nonetheless, available solutions to numerous ancillary problems remain far from satisfactory. Among such outstanding problems are the first few fundamental questions that have been responsible for the emergence and rapid growth of the new field of Knowledge Engineering, namely, the classification of forms of data, their effective organization, the extraction of knowledge from massive distributed data sets, and the design of fast, effective search engines. The precision of machine learning algorithms in classification and recognition of image data (e.g. those scanned from books and other printed documents) is still far from human performance and speed in similar tasks. Discriminating the many forms of ASCII data from each other is not as difficult in view of the emerging universal standards for file formats. Nonetheless, most of the past and relatively recent human knowledge is yet to be transformed and saved in such machine-readable formats. In particular, an outstanding problem in knowledge engineering is the problem of organization and management--with precision comparable to human performance--of knowledge in the form of images of documents that broadly belong to either text, image or a blend of both. It has been shown that the effectiveness of OCR is intertwined with the success of language and font recognition.

  2. 10 CFR 75.34 - Inventory change reports.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Change Reports) in computer-readable format to be completed in accordance with instructions (NUREG/BR..., completed as specified in the instructions (NUREG/BR-0006 and NMMSS Report D-24 “Personal Computer Data...

  3. Design and evaluation of instrument approach procedure charts

    DOT National Transportation Integrated Search

    1993-01-01

    A new format for instrument approach procedure charts has been designed. Special attention was paid to improving the readability of communication frequencies, approach course heading and missed approach instructions. Selected components of th...

  4. Practical guide to bar coding for patient medication safety.

    PubMed

    Neuenschwander, Mark; Cohen, Michael R; Vaida, Allen J; Patchett, Jeffrey A; Kelly, Jamie; Trohimovich, Barbara

    2003-04-15

    Bar coding for the medication administration step of the drug-use process is discussed. FDA will propose a rule in 2003 that would require bar-code labels on all human drugs and biologicals. Even with an FDA mandate, manufacturer procrastination and possible shifts in product availability are likely to slow progress. Such delays should not preclude health systems from adopting bar-code-enabled point-of-care (BPOC) systems to achieve gains in patient safety. Bar-code technology is a replacement for traditional keyboard data entry. The elements of bar coding are content, which determines the meaning; data format, which refers to the embedded data; and symbology, which describes the "font" in which the machine-readable code is written. For a BPOC system to deliver an acceptable level of patient protection, the hospital must first establish reliable processes for a patient identification band, caregiver badge, and medication bar coding. Medications can have either drug-specific or patient-specific bar codes. Both varieties result in the desired code that supports the patient's five rights of drug administration. When medications are not available from the manufacturer in immediate-container bar-coded packaging, other means of applying the bar code must be devised, including the use of repackaging equipment, overwrapping, manual bar coding, and outsourcing. Virtually all medications should be bar coded, the bar code on the label should be easily readable, and appropriate policies, procedures, and checks should be in place. Bar coding has the potential not only to be cost-effective but also to produce a return on investment. By bar coding patient identification tags, caregiver badges, and immediate-container medications, health systems can substantially increase patient safety during medication administration.
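
    The distinction drawn above between content, data format, and symbology can be illustrated with a small sketch. The Python snippet below composes and parses a hypothetical medication-label content string using GS1-style Application Identifiers, leaving the choice of symbology (the printed "font") to a separate encoding step; the identifier values and field layout are illustrative assumptions, not details from the article.

      # Minimal sketch of the "content" and "data format" layers of a medication
      # bar code, kept separate from the "symbology" (Code 128, Data Matrix, ...)
      # that would actually print it. The GTIN, expiry, and lot values are made up.

      GS1_AI = {"gtin": "01", "expiry": "17", "lot": "10"}   # common GS1 Application Identifiers
      AI_NAME = {v: k for k, v in GS1_AI.items()}

      def build_content(gtin14, expiry_yymmdd, lot):
          """Concatenate AI-prefixed fields into one human-readable element string."""
          return (f"({GS1_AI['gtin']}){gtin14}"
                  f"({GS1_AI['expiry']}){expiry_yymmdd}"
                  f"({GS1_AI['lot']}){lot}")

      def parse_content(element_string):
          """Very simplified parser: split on '(' and read AI/value pairs."""
          fields = {}
          for chunk in element_string.split("(")[1:]:
              ai, value = chunk.split(")", 1)
              fields[AI_NAME.get(ai, ai)] = value
          return fields

      content = build_content("00312345678906", "261231", "A12B3")
      print(content)                 # (01)00312345678906(17)261231(10)A12B3
      print(parse_content(content))  # {'gtin': ..., 'expiry': ..., 'lot': ...}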

  5. Readability of internet-sourced patient education material related to “labour analgesia”

    PubMed Central

    Boztas, Nilay; Omur, Dilek; Ozbılgın, Sule; Altuntas, Gözde; Piskin, Ersan; Ozkardesler, Sevda; Hanci, Volkan

    2017-01-01

    Abstract We evaluated the readability of Internet-sourced patient education materials (PEMs) related to “labour analgesia.” In addition to assessing the readability of websites, we aimed to compare commercial, personal, and academic websites. We used the most popular search engine (http://www.google.com) in our study. The first 100 websites in English that resulted from a search for the key words “labour analgesia” were scanned. Websites not in English were excluded, as were graphs, pictures, videos, tables, figures, and list formats within the text, all punctuation, texts of fewer than 100 words, feedback forms not related to education, URL (Uniform Resource Locator) listings, author information, references, legal disclaimers, and addresses and telephone numbers. The texts included in the study were assessed using the Flesch Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL), Simple Measure of Gobbledygook (SMOG), and Gunning Frequency of Gobbledygook (FOG) readability formulae. The number of Latin words within the text was determined. Analysis of 300-word sections of the texts revealed that the mean FRES was 47.54 ± 12.54 (quite difficult), mean FKGL and SMOG were 11.92 ± 2.59 and 10.57 ± 1.88 years of education, respectively, and mean Gunning FOG was 14.71 ± 2.76 (very difficult). Within 300-word sections, the mean number of Latin words was identified as 16.56 ± 6.37. In our study, the readability level of Internet-sourced PEM related to “labour analgesia” was found to be quite high, indicating poor readability. PMID:29137057

  6. FASTQ quality control dashboard

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2016-07-25

    FQCDB builds upon existing open source software, FastQC, implementing a modern web interface across the parsed output of FastQC. In addition, FQCDB is extensible as a web service to include additional plots of type line, boxplot, or heatmap, across data formatted according to its guidelines. The interface is also configurable via a more readable JSON format, enabling customization by non-web programmers.
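
    As a rough idea of the kind of parsing such a dashboard performs, the sketch below reads the module summary lines of a FastQC "fastqc_data.txt" report and emits them as JSON. The file layout assumed here (module blocks that start with a ">>Module Name<TAB>status" line and end with ">>END_MODULE") reflects common FastQC output but is an assumption of this sketch rather than something stated in the abstract.

      # Minimal sketch: convert FastQC module pass/warn/fail summaries to JSON.
      import json

      def fastqc_module_status(path):
          status = {}
          with open(path) as handle:
              for line in handle:
                  line = line.rstrip("\n")
                  # Module headers look like ">>Per base sequence quality\tpass".
                  if line.startswith(">>") and line != ">>END_MODULE":
                      name, _, verdict = line[2:].partition("\t")
                      status[name] = verdict
          return status

      if __name__ == "__main__":
          # The path is illustrative; point it at a real FastQC output directory.
          summary = fastqc_module_status("fastqc_data.txt")
          print(json.dumps(summary, indent=2))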

  7. Artillery ammunition marking tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weil, B.S.; Lewis, J.C.

    1995-04-01

    This report describes the testing results of two approaches being considered for marking artillery ammunition with machine-readable data symbols. The first approach used ink-jet printing directly onto projectiles, and the second approach employed thermal-transfer printing onto self-adhesive labels that are subsequently applied automatically to projectiles. The objectives of this evaluation for each marking technology were to (1) determine typical system performance characteristics using the best commercially available equipment and (2) identify any special requirements necessary for handling ammunition when these technologies are employed.

  8. Method of modifying a volume mesh using sheet extraction

    DOEpatents

    Borden, Michael J [Albuquerque, NM; Shepherd, Jason F [Albuquerque, NM

    2007-02-20

    A method and machine-readable medium provide a technique to modify a hexahedral finite element volume mesh using dual generation and sheet extraction. After generating a dual of a volume stack (mesh), a predetermined algorithm may be followed to modify the volume mesh of hexahedral elements. The predetermined algorithm may include the steps of determining a sheet of hexahedral mesh elements, generating nodes for merging, and merging the nodes to delete the sheet of hexahedral mesh elements and modify the volume mesh.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finnell, Joshua Eugene

    US President Barack Obama issued Executive Order 13642, "Making Open and Machine Readable the New Default for Government," on May 9, 2013, mandating, wherever legally permissible and possible, that US Government information be made open to the public.[1] This edict accelerated the construction of frameworks for data repositories, such as data.gov, and of data citation principles and practices. As a corollary, researchers across the country's national laboratories found themselves creating data management plans, applying data set metadata standards, and ensuring the long-term access of data for federally funded scientific research.

  10. Social Ontology Documentation for Knowledge Externalization

    NASA Astrophysics Data System (ADS)

    Aranda-Corral, Gonzalo A.; Borrego-Díaz, Joaquín; Jiménez-Mavillard, Antonio

    Knowledge externalization and organization are a major challenge that companies must face. They also have to ask whether it is possible to enhance their knowledge management. Mechanical processing of information represents a chance to carry out these tasks, as well as to turn intangible knowledge assets into real assets. Machine-readable knowledge provides a basis to enhance knowledge management. A promising approach is the empowering of Knowledge Externalization by the community (users, employees). In this paper, a social semantic tool (called OntoxicWiki) for enhancing the quality of knowledge is presented.

  11. Documentation for the machine-readable version of the thirteen color photometry of 1380 bright stars

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.; Roman, N. G.

    1981-01-01

    The magnetic tape version of the catalogue of thirteen-color photometry of 1380 bright stars, containing data on the 13 color medium narrow band photometric system is described. Observations of essentially all stars brighter than fifth visual magnitude north of delta = -20 deg and brighter than fourth visual magnitude south of delta = -20 deg are included. It is intended to enable users to read and process the tape without the common difficulties and uncertainties.

  12. Prostate Cancer Information Available in Health-Care Provider Offices: An Analysis of Content, Readability, and Cultural Sensitivity.

    PubMed

    Choi, Seul Ki; Seel, Jessica S; Yelton, Brooks; Steck, Susan E; McCormick, Douglas P; Payne, Johnny; Minter, Anthony; Deutchki, Elizabeth K; Hébert, James R; Friedman, Daniela B

    2018-07-01

    Prostate cancer (PrCA) is the most common cancer affecting men in the United States, and African American men have the highest incidence among men in the United States. Little is known about the PrCA-related educational materials being provided to patients in health-care settings. Content, readability, and cultural sensitivity of materials available in providers' practices in South Carolina were examined. A total of 44 educational materials about PrCA and associated sexual dysfunction was collected from 16 general and specialty practices. The content of the materials was coded, and cultural sensitivity was assessed using the Cultural Sensitivity Assessment Tool. Flesch Reading Ease, Flesch-Kincaid Grade Level, and the Simple Measure of Gobbledygook were used to assess readability. Communication with health-care providers (52.3%), side effects of PrCA treatment (40.9%), sexual dysfunction and its treatment (38.6%), and treatment options (34.1%) were frequently presented. All materials had acceptable cultural sensitivity scores; however, 2.3% and 15.9% of materials demonstrated unacceptable cultural sensitivity regarding format and visual messages, respectively. Readability of the materials varied. More than half of the materials were written above a high-school reading level. PrCA-related materials available in health-care practices may not meet patients' needs regarding content, cultural sensitivity, and readability. A wide range of educational materials that address various aspects of PrCA, including treatment options and side effects, should be presented in plain language and be culturally sensitive.

  13. Parts quality management: Direct part marking of data matrix symbol for mission assurance

    NASA Astrophysics Data System (ADS)

    Moss, Chantrice; Chakrabarti, Suman; Scott, David W.

    A United States Government Accountability Office (GAO) review of twelve NASA programs found widespread parts quality problems contributing to significant cost overruns, schedule delays, and reduced system reliability. Direct part marking with Data Matrix symbols could significantly improve the quality of inventory control and parts lifecycle management. This paper examines the feasibility of using direct part marking technologies for use in future NASA programs. A structural analysis is based on marked material type, operational environment (e.g., ground, suborbital, Low Earth Orbit), durability of marks, ease of operation, reliability, and affordability. A cost-benefits analysis considers marking technology (label printing, data plates, and direct part marking) and marking types (two-dimensional machine-readable, human-readable). Previous NASA parts marking efforts and historical cost data are accounted for, including in-house vs. outsourced marking. Some marking methods are still under development. While this paper focuses on NASA programs, results may be applicable to a variety of industrial environments.

  14. Parts Quality Management: Direct Part Marking of Data Matrix Symbol for Mission Assurance

    NASA Technical Reports Server (NTRS)

    Moss, Chantrice; Chakrabarti, Suman; Scott, David W.

    2013-01-01

    A United States Government Accountability Office (GAO) review of twelve NASA programs found widespread parts quality problems contributing to significant cost overruns, schedule delays, and reduced system reliability. Direct part marking with Data Matrix symbols could significantly improve the quality of inventory control and parts lifecycle management. This paper examines the feasibility of using direct part marking technologies for use in future NASA programs. A structural analysis is based on marked material type, operational environment (e.g., ground, suborbital, Low Earth Orbit), durability of marks, ease of operation, reliability, and affordability. A cost-benefits analysis considers marking technology (label printing, data plates, and direct part marking) and marking types (two-dimensional machine-readable, human-readable). Previous NASA parts marking efforts and historical cost data are accounted for, including in-house vs. outsourced marking. Some marking methods are still under development. While this paper focuses on NASA programs, results may be applicable to a variety of industrial environments.

  15. MULTIPROCESSOR AND DISTRIBUTED PROCESSING BIBLIOGRAPHIC DATA BASE SOFTWARE SYSTEM

    NASA Technical Reports Server (NTRS)

    Miya, E. N.

    1994-01-01

    Multiprocessors and distributed processing are undergoing increased scientific scrutiny for many reasons. It is more and more difficult to keep track of the existing research in these fields. This package consists of a large machine-readable bibliographic data base which, in addition to the usual keyword searches, can be used for producing citations, indexes, and cross-references. The data base is compiled from smaller existing multiprocessing bibliographies, and tables of contents from journals and significant conferences. There are approximately 4,000 entries covering topics such as parallel and vector processing, networks, supercomputers, fault-tolerant computers, and cellular automata. Each entry is represented by 21 fields including keywords, author, referencing book or journal title, volume and page number, and date and city of publication. The data base contains UNIX 'refer' formatted ASCII data and can be implemented on any computer running under the UNIX operating system. The data base requires approximately one megabyte of secondary storage. The documentation for this program is included with the distribution tape, although it can be purchased for the price below. This bibliography was compiled in 1985 and updated in 1988.
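
    For readers unfamiliar with the UNIX 'refer' layout mentioned above, each record is a block of '%'-tagged fields separated by blank lines. A minimal parsing sketch follows; the field meanings (%A author, %T title, %J journal, %D date, %K keywords) follow the usual refer conventions, and the sample records are invented for illustration rather than taken from the NASA data base.

      # Minimal sketch: split a refer-formatted bibliography into records and fields.
      sample = """%A J. Doe
      %T A survey of vector processing
      %J Journal of Parallel Computing
      %D 1984
      %K vector processing, supercomputers

      %A R. Roe
      %T Fault-tolerant multiprocessor design
      %D 1985
      """

      def parse_refer(text):
          records = []
          for block in text.strip().split("\n\n"):
              entry = {}
              for line in block.splitlines():
                  line = line.strip()
                  # A field line looks like "%A J. Doe": tag letter, space, value.
                  if line.startswith("%") and len(line) > 2:
                      entry.setdefault(line[1], []).append(line[3:])
              records.append(entry)
          return records

      for rec in parse_refer(sample):
          print("; ".join(rec.get("A", [])), "-", "; ".join(rec.get("T", [])))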

  16. A global gas flaring black carbon emission rate dataset from 1994 to 2012

    PubMed Central

    Huang, Kan; Fu, Joshua S.

    2016-01-01

    Global flaring of associated petroleum gas is a potential emission source of particulate matters (PM) and could be notable in some specific regions that are in urgent need of mitigation. PM emitted from gas flaring is mainly in the form of black carbon (BC), which is a strong short-lived climate forcer. However, BC from gas flaring has been neglected in most global/regional emission inventories and is rarely considered in climate modeling. Here we present a global gas flaring BC emission rate dataset for the period 1994–2012 in a machine-readable format. We develop a region-dependent gas flaring BC emission factor database based on the chemical compositions of associated petroleum gas at various oil fields. Gas flaring BC emission rates are estimated using this emission factor database and flaring volumes retrieved from satellite imagery. Evaluation using a chemical transport model suggests that consideration of gas flaring emissions can improve model performance. This dataset will benefit and inform a broad range of research topics, e.g., carbon budget, air quality/climate modeling, and environmental/human exposure. PMID:27874852

  17. A global gas flaring black carbon emission rate dataset from 1994 to 2012

    NASA Astrophysics Data System (ADS)

    Huang, Kan; Fu, Joshua S.

    2016-11-01

    Global flaring of associated petroleum gas is a potential emission source of particulate matters (PM) and could be notable in some specific regions that are in urgent need of mitigation. PM emitted from gas flaring is mainly in the form of black carbon (BC), which is a strong short-lived climate forcer. However, BC from gas flaring has been neglected in most global/regional emission inventories and is rarely considered in climate modeling. Here we present a global gas flaring BC emission rate dataset for the period 1994-2012 in a machine-readable format. We develop a region-dependent gas flaring BC emission factor database based on the chemical compositions of associated petroleum gas at various oil fields. Gas flaring BC emission rates are estimated using this emission factor database and flaring volumes retrieved from satellite imagery. Evaluation using a chemical transport model suggests that consideration of gas flaring emissions can improve model performance. This dataset will benefit and inform a broad range of research topics, e.g., carbon budget, air quality/climate modeling, and environmental/human exposure.
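
    In outline, the emission-rate construction described above combines a region-dependent emission factor with a satellite-derived flared volume; schematically (the notation here is generic, not taken from the paper):

      E_{\mathrm{BC},\,r,\,t} = EF_{\mathrm{BC},\,r} \times V_{\mathrm{flare},\,r,\,t}

    where E is the black carbon emission rate for region r in year t, EF is the regional emission factor (mass of BC per volume of gas flared, derived from the composition of the associated petroleum gas), and V is the flaring volume retrieved from satellite imagery.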

  18. Hippocampome.org: a knowledge base of neuron types in the rodent hippocampus

    PubMed Central

    Wheeler, Diek W; White, Charise M; Rees, Christopher L; Komendantov, Alexander O; Hamilton, David J; Ascoli, Giorgio A

    2015-01-01

    Hippocampome.org is a comprehensive knowledge base of neuron types in the rodent hippocampal formation (dentate gyrus, CA3, CA2, CA1, subiculum, and entorhinal cortex). Although the hippocampal literature is remarkably information-rich, neuron properties are often reported with incompletely defined and notoriously inconsistent terminology, creating a formidable challenge for data integration. Our extensive literature mining and data reconciliation identified 122 neuron types based on neurotransmitter, axonal and dendritic patterns, synaptic specificity, electrophysiology, and molecular biomarkers. All ∼3700 annotated properties are individually supported by specific evidence (∼14,000 pieces) in peer-reviewed publications. Systematic analysis of this unprecedented amount of machine-readable information reveals novel correlations among neuron types and properties, the potential connectivity of the full hippocampal circuitry, and outstanding knowledge gaps. User-friendly browsing and online querying of Hippocampome.org may aid design and interpretation of both experiments and simulations. This powerful, simple, and extensible neuron classification endeavor is unique in its detail, utility, and completeness. DOI: http://dx.doi.org/10.7554/eLife.09960.001 PMID:26402459

  19. Organizing Community-Based Data Standards: Lessons from Developing a Successful Open Standard in Systems Biology

    NASA Astrophysics Data System (ADS)

    Hucka, M.

    2015-09-01

    In common with many fields, including astronomy, a vast number of software tools for computational modeling and simulation are available today in systems biology. This wealth of resources is a boon to researchers, but it also presents interoperability problems. Despite working with different software tools, researchers want to disseminate their work widely as well as reuse and extend the models of other researchers. This situation led in the year 2000 to an effort to create a tool-independent, machine-readable file format for representing models: SBML, the Systems Biology Markup Language. SBML has since become the de facto standard for its purpose. Its success and general approach has inspired and influenced other community-oriented standardization efforts in systems biology. Open standards are essential for the progress of science in all fields, but it is often difficult for academic researchers to organize successful community-based standards. I draw on personal experiences from the development of SBML and summarize some of the lessons learned, in the hope that this may be useful to other groups seeking to develop open standards in a community-oriented fashion.

  20. Gene Fusion Markup Language: a prototype for exchanging gene fusion data

    PubMed Central

    2012-01-01

    Background An avalanche of next generation sequencing (NGS) studies has generated an unprecedented amount of genomic structural variation data. These studies have also identified many novel gene fusion candidates with more detailed resolution than previously achieved. However, in the excitement and necessity of publishing the observations from this recently developed cutting-edge technology, no community standardization approach has arisen to organize and represent the data with the essential attributes in an interchangeable manner. As transcriptome studies have been widely used for gene fusion discoveries, the current non-standard mode of data representation could potentially impede data accessibility, critical analyses, and further discoveries in the near future. Results Here we propose a prototype, Gene Fusion Markup Language (GFML) as an initiative to provide a standard format for organizing and representing the significant features of gene fusion data. GFML will offer the advantage of representing the data in a machine-readable format to enable data exchange, automated analysis interpretation, and independent verification. As this database-independent exchange initiative evolves it will further facilitate the formation of related databases, repositories, and analysis tools. The GFML prototype is made available at http://code.google.com/p/gfml-prototype/. Conclusion The Gene Fusion Markup Language (GFML) presented here could facilitate the development of a standard format for organizing, integrating and representing the significant features of gene fusion data in an inter-operable and query-able fashion that will enable biologically intuitive access to gene fusion findings and expedite functional characterization. A similar model is envisaged for other NGS data analyses. PMID:23072312

  1. 40 CFR 63.11995 - In what form and how long must I keep my records?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... years. Records may be maintained in hard copy or computer-readable format including, but not limited to, on paper, microfilm, hard disk drive, floppy disk, compact disk, magnetic tape or microfiche. ...

  2. 40 CFR 63.11995 - In what form and how long must I keep my records?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... years. Records may be maintained in hard copy or computer-readable format including, but not limited to, on paper, microfilm, hard disk drive, floppy disk, compact disk, magnetic tape or microfiche. ...

  3. 40 CFR 63.11995 - In what form and how long must I keep my records?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... years. Records may be maintained in hard copy or computer-readable format including, but not limited to, on paper, microfilm, hard disk drive, floppy disk, compact disk, magnetic tape or microfiche. ...

  4. Semantic Technologies and Bio-Ontologies.

    PubMed

    Gutierrez, Fernando

    2017-01-01

    As information available through data repositories constantly grows, the need for automated mechanisms for linking, querying, and sharing data has become a relevant factor both in research and industry. This situation is more evident in research fields such as the life sciences, where new experiments by different research groups are constantly generating new information regarding a wide variety of related study objects. However, current methods for representing information and knowledge are not suited for machine processing. The Semantic Technologies are a set of standards and protocols that intend to provide methods for representing and handling data in a way that encourages reusability of information and is machine-readable. In this chapter, we will provide a brief introduction to Semantic Technologies, and how these protocols and standards have been incorporated into the life sciences to facilitate dissemination and access to information.

  5. QBCov: A Linked Data interface for Discrete Global Grid Systems, a new approach to delivering coverage data on the web

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Toyer, S.; Brizhinev, D.; Ledger, M.; Taylor, K.; Purss, M. B. J.

    2016-12-01

    We are witnessing a rapid proliferation of geoscientific and geospatial data from an increasing variety of sensors and sensor networks. This data presents great opportunities to resolve cross-disciplinary problems. However, working with it often requires an understanding of file formats and protocols seldom used outside of scientific computing, potentially limiting the data's value to other disciplines. In this paper, we present a new approach to serving satellite coverage data on the web, which improves ease-of-access using the principles of linked data. Linked data adapts the concepts and protocols of the human-readable web to machine-readable data; the number of developers familiar with web technologies makes linked data a natural choice for bringing coverages to a wider audience. Our approach to using linked data also makes it possible to efficiently service high-level SPARQL queries: for example, "Retrieve all Landsat ETM+ observations of San Francisco between July and August 2016" can easily be encoded in a single query. We validate the new approach, which we call QBCov, with a reference implementation of the entire stack, including a simple web-based client for interacting with Landsat observations. In addition to demonstrating the utility of linked data for publishing coverages, we investigate the heretofore unexplored relationship between Discrete Global Grid Systems (DGGS) and linked data. Our conclusions are informed by the aforementioned reference implementation of QBCov, which is backed by a hierarchical file format designed around the rHEALPix DGGS. Not only does the choice of a DGGS-based representation provide an efficient mechanism for accessing large coverages at multiple scales, but the ability of DGGS to produce persistent, unique identifiers for spatial regions is especially valuable in a linked data context. This suggests that DGGS has an important role to play in creating sustainable and scalable linked data infrastructures. QBCov is being developed as a contribution to the Spatial Data on the Web working group--a joint activity of the Open Geospatial Consortium and World Wide Web Consortium.
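
    The kind of high-level query quoted above ("Retrieve all Landsat ETM+ observations of San Francisco between July and August 2016") might be issued against a QBCov-style SPARQL endpoint roughly as in the sketch below. The endpoint URL, vocabulary prefixes, and property names are hypothetical placeholders, since the abstract does not publish the actual schema.

      # Rough sketch of a client-side SPARQL query against a coverage endpoint.
      # The endpoint URL and the eo: terms are placeholders (assumptions),
      # not the actual QBCov vocabulary; qb: is the W3C RDF Data Cube namespace.
      from SPARQLWrapper import SPARQLWrapper, JSON

      endpoint = SPARQLWrapper("https://example.org/qbcov/sparql")  # hypothetical
      endpoint.setQuery("""
      PREFIX qb:  <http://purl.org/linked-data/cube#>
      PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
      PREFIX eo:  <http://example.org/eo#>
      SELECT ?obs ?time WHERE {
        ?obs a qb:Observation ;
             eo:sensor      "Landsat ETM+" ;
             eo:coversPlace <http://example.org/place/SanFrancisco> ;
             eo:time        ?time .
        FILTER (?time >= "2016-07-01T00:00:00Z"^^xsd:dateTime &&
                ?time <  "2016-09-01T00:00:00Z"^^xsd:dateTime)
      }
      """)
      endpoint.setReturnFormat(JSON)
      for row in endpoint.query().convert()["results"]["bindings"]:
          print(row["obs"]["value"], row["time"]["value"])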

  6. 40 CFR 63.9060 - In what form and how long must I keep my records?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... may be maintained in hard copy or computer-readable format including, but not limited to, on paper, microfilm, hard disk drive, floppy disk, compact disk, magnetic tape, or microfiche. (d) You must keep each...

  7. A Comparison of Costs of Searching the Machine-Readable Data Bases ERIC and "Psychological Abstracts" in an Annual Subscription Rate System Against Costs Estimated for the Same Searches Done in the Lockheed DIALOG System and the System Development Corporation for ERIC, and the Lockheed DIALOG System and PASAT for "Psychological Abstracts."

    ERIC Educational Resources Information Center

    Palmer, Crescentia

    A comparison of costs for computer-based searching of Psychological Abstracts and Educational Resources Information Center (ERIC) systems by the New York State Library at Albany was produced by combining data available from search request forms and from bills from the contract subscription service, the State University of New…

  8. Ink-constrained halftoning with application to QR codes

    NASA Astrophysics Data System (ADS)

    Bayeh, Marzieh; Compaan, Erin; Lindsey, Theodore; Orlow, Nathan; Melczer, Stephen; Voller, Zachary

    2014-01-01

    This paper examines adding visually significant, human-recognizable data into QR codes without affecting their machine readability by utilizing known methods in image processing. Each module of a given QR code is broken down into pixels, which are halftoned in such a way as to keep the QR code structure while revealing aspects of the secondary image to the human eye. The loss of information associated with this procedure is discussed, and entropy values are calculated for examples given in the paper. Numerous examples of QR codes with embedded images are included.
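
    A much-simplified version of this module-level halftoning can be sketched as follows: generate a high-error-correction QR code, threshold the secondary image onto the code's pixel grid, and then force only the centre pixel of each module back to the code's own colour so that scanners still sample the correct value. The use of the third-party "qrcode" and Pillow packages, the 3x3-pixel module size, and the file names are assumptions of this sketch, not details from the paper.

      # Simplified sketch of image-embedding halftoning for QR codes.
      import qrcode
      from PIL import Image

      BOX = 3                                     # pixels per module
      qr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_H,
                         box_size=BOX, border=4)
      qr.add_data("https://example.org")          # payload is illustrative
      qr.make()

      modules = qr.modules                        # 2-D list of booleans (True = dark)
      n = len(modules)
      side = (n + 2 * qr.border) * BOX

      # Start from the secondary image, thresholded to black and white.
      art = Image.open("portrait.png").convert("L").resize((side, side))   # file name is illustrative
      out = art.point(lambda p: 0 if p < 128 else 255).convert("1")

      # Overwrite only the centre pixel of every module with the QR value,
      # leaving the surrounding pixels to the halftoned secondary image.
      pixels = out.load()
      for r in range(n):
          for c in range(n):
              x = (qr.border + c) * BOX + BOX // 2
              y = (qr.border + r) * BOX + BOX // 2
              pixels[x, y] = 0 if modules[r][c] else 255

      out.save("halftoned_qr.png")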

  9. Method of generating a surface mesh

    DOEpatents

    Shepherd, Jason F [Albuquerque, NM; Benzley, Steven [Provo, UT; Grover, Benjamin T [Tracy, CA

    2008-03-04

    A method and machine-readable medium provide a technique to generate and modify a quadrilateral finite element surface mesh using dual creation and modification. After generating a dual of a surface (mesh), a predetermined algorithm may be followed to generate and modify a surface mesh of quadrilateral elements. The predetermined algorithm may include the steps of generating two-dimensional cell regions in dual space, determining existing nodes in primal space, generating new nodes in the dual space, and connecting nodes to form the quadrilateral elements (faces) for the generated and modifiable surface mesh.

  10. Beyond the online catalog: developing an academic information system in the sciences.

    PubMed Central

    Crawford, S; Halbrook, B; Kelly, E; Stucki, L

    1987-01-01

    The online public access catalog consists essentially of a machine-readable database with network capabilities. Like other computer-based information systems, it may be continuously enhanced by the addition of new capabilities and databases. It may also become a gateway to other information networks. This paper reports the evolution of the Bibliographic Access and Control System (BACS) of Washington University in end-user searching, current awareness services, information management, and administrative functions. Ongoing research and development and the future of the online catalog are also discussed. PMID:3315052

  11. Beyond the online catalog: developing an academic information system in the sciences.

    PubMed

    Crawford, S; Halbrook, B; Kelly, E; Stucki, L

    1987-07-01

    The online public access catalog consists essentially of a machine-readable database with network capabilities. Like other computer-based information systems, it may be continuously enhanced by the addition of new capabilities and databases. It may also become a gateway to other information networks. This paper reports the evolution of the Bibliographic Access and Control System (BACS) of Washington University in end-user searching, current awareness services, information management, and administrative functions. Ongoing research and development and the future of the online catalog are also discussed.

  12. Atlas of Vega: 3850-6860 Å

    NASA Astrophysics Data System (ADS)

    Kim, Hyun-Sook; Han, Inwoo; Valyavin, G.; Lee, Byeong-Cheol; Shimansky, V.; Galazutdinov, G. A.

    2009-10-01

    We present a high resolving power (λ/Δλ = 90,000) and high signal-to-noise ratio (˜700) spectral atlas of Vega covering the 3850-6860 Å wavelength range. The atlas is a result of averaging of spectra recorded with the aid of the echelle spectrograph BOES fed by the 1.8 m telescope at Bohyunsan Observatory (Korea). The atlas is provided only in machine-readable form (electronic data file) and will be available in the SIMBAD database upon publication. Based on data collected with the 1.8 m telescope operated at BOAO Observatory, Korea.

  13. State of the art of geoscience libraries and information services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pruett, N.J.

    Geoscience libraries and geoscience information services are closely related. Both are trying to meet the needs of geoscientists for information and data. Both are also being affected by many trends: increased availability of personal computers; decreased costs of machine-readable storage; increased availability of maps in digital format (Pallatto, 1986); progress in graphic displays and in developing Geographic Information Systems (GIS) (Kelly and Phillips, 1986); developments in artificial intelligence; and the availability of new formats (e.g., CD-ROM). Some additional factors are at work changing the role of libraries: libraries are coming to recognize the impossibility of collecting everything and the validity of Bradford's Law; unobtrusive studies of library reference services have pointed out that only 50% of questions are answered correctly; it is clear that the number of databases is increasing, although good figures specifically for geoscience databases are not available; lists of numeric databases are beginning to appear; evaluative (as opposed to purely descriptive) reviews of available bibliographic databases are beginning to appear; more and more libraries are getting online catalogs, and the results of studies of online catalog users are being used to improve catalog design; and research is raising consciousness about the value of information. All these trends are having or will have an effect on geoscience information.

  14. A fisheye viewer for microarray-based gene expression data

    PubMed Central

    Wu, Min; Thao, Cheng; Mu, Xiangming; Munson, Ethan V

    2006-01-01

    Background Microarray has been widely used to measure the relative amounts of every mRNA transcript from the genome in a single scan. Biologists have been accustomed to reading their experimental data directly from tables. However, microarray data are quite large and are stored in a series of files in a machine-readable format, so direct reading of the full data set is not feasible. The challenge is to design a user interface that allows biologists to usefully view large tables of raw microarray-based gene expression data. This paper presents one such interface – an electronic table (E-table) that uses fisheye distortion technology. Results The Fisheye Viewer for microarray-based gene expression data has been successfully developed to view MIAME data stored in the MAGE-ML format. The viewer can be downloaded from the project web site. The fisheye viewer was implemented in Java so that it could run on multiple platforms. We implemented the E-table by adapting JTable, a default table implementation in the Java Swing user interface library. Fisheye views use variable magnification to balance magnification for easy viewing and compression for maximizing the amount of data on the screen. Conclusion This Fisheye Viewer is a lightweight but useful tool for biologists to quickly overview the raw microarray-based gene expression data in an E-table. PMID:17038193

  15. College of American Pathologists Cancer Protocols: Optimizing Format for Accuracy and Efficiency.

    PubMed

    Strickland-Marmol, Leah B; Muro-Cacho, Carlos A; Barnett, Scott D; Banas, Matthew R; Foulis, Philip R

    2016-06-01

    Context: The data in College of American Pathologists cancer protocols have to be presented effectively to health care providers. There is no consensus on the format of those protocols, resulting in various designs among pathologists. Cancer protocols are independently created by site-specific experts, so there is inconsistent wording and repetition of data. This lack of standardization can be confusing and may lead to interpretation errors. Objective: To define a synopsis format that is effective in delivering essential pathologic information and to evaluate the aesthetic appeal and the impact of varying format styles on the speed and accuracy of data extraction. Design: We queried individuals from several health care backgrounds using varying formats of the fallopian tube protocol of the College of American Pathologists without content modification to investigate their aesthetic appeal, accuracy, efficiency, and readability/complexity. Descriptive statistics, an item difficulty index, and 3 tests of readability were used. Results: Columned formats were aesthetically more appealing than justified formats (P < .001) and were associated with greater accuracy and efficiency. Incorrect assumptions were made about items not included in the protocol. Uniform wording and short sentences were associated with better performance by participants. Conclusions: Based on these data, we propose standardized protocol formats for cancer resections of the fallopian tube and the more-familiar colon, employing headers, short phrases, and uniform terminology. This template can be easily and minimally modified for other sites, standardizing format and verbiage and increasing user accuracy and efficiency. Principles of human factors engineering should be considered in the display of patient data.

  16. Readability versus Leveling.

    ERIC Educational Resources Information Center

    Fry, Edward

    2002-01-01

    Shows some similarities and differences between readability formulas and leveling procedures and reports some current large-scale uses of readability formulas. Presents a dictionary definition of readability and leveling, and a history and background of readability and leveling. Discusses what goes into determining readability and leveling scores.…

  17. A Modular Framework for Transforming Structured Data into HTML with Machine-Readable Annotations

    NASA Astrophysics Data System (ADS)

    Patton, E. W.; West, P.; Rozell, E.; Zheng, J.

    2010-12-01

    There is a plethora of web-based Content Management Systems (CMS) available for maintaining projects and data, among other tasks. However, each system varies in its capabilities, and content is often stored separately and accessed via non-uniform web interfaces. Moving from one CMS to another (e.g., MediaWiki to Drupal) can be cumbersome, especially if a large quantity of data must be adapted to the new system. To standardize the creation, display, management, and sharing of project information, we have assembled a framework that uses existing web technologies to transform data provided by any service that supports SPARQL Protocol and RDF Query Language (SPARQL) queries into HTML fragments, allowing the content to be embedded in any existing website. The framework utilizes a two-tier Extensible Stylesheet Language Transformations (XSLT) pipeline that uses existing ontologies (e.g., Friend-of-a-Friend, Dublin Core) to interpret query results and render them as HTML documents. These ontologies can be used in conjunction with custom ontologies suited to individual needs (e.g., domain-specific ontologies for describing data records). Furthermore, this transformation process encodes machine-readable annotations, namely the Resource Description Framework in attributes (RDFa), into the resulting HTML, so that capable parsers and search engines can extract the relationships between entities (e.g., people, organizations, datasets). To facilitate editing of content, the framework provides a web-based form system, mapping each query to a dynamically generated form that can be used to modify and create entities while keeping the native data store up-to-date. This open framework makes it easy to duplicate data across many different sites, allowing researchers to distribute their data in many different online forums. In this presentation we will outline the structure of the queries and the stylesheets used to transform them, followed by a brief walkthrough that traces the data from storage to human- and machine-accessible web page. We conclude with a discussion of content caching and steps toward performing queries across multiple domains.
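
    The framework itself is XSLT-based, but the data flow it describes (query a SPARQL endpoint, then wrap the result bindings in HTML carrying RDFa annotations) can be sketched compactly. In the Python sketch below, the endpoint URL is a placeholder and the query and markup are simplified illustrations, not the framework's stylesheets; only the standard SPARQL protocol conventions (a `query` parameter and the `application/sparql-results+json` response format) and the FOAF namespace are taken as given.

```python
import json
import urllib.parse
import urllib.request

# Placeholder endpoint: point this at any SPARQL-Protocol-compliant service before running.
ENDPOINT = "https://example.org/sparql"

QUERY = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?person ?name WHERE { ?person foaf:name ?name } LIMIT 5
"""

def run_query(endpoint, query):
    """Send a SPARQL query via the standard protocol and return the JSON result bindings."""
    url = endpoint + "?" + urllib.parse.urlencode({"query": query})
    req = urllib.request.Request(url, headers={"Accept": "application/sparql-results+json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"]["bindings"]

def to_rdfa_fragment(bindings):
    """Render each result row as an HTML list item annotated with RDFa attributes."""
    items = []
    for row in bindings:
        uri = row["person"]["value"]
        name = row["name"]["value"]
        items.append(f'  <li about="{uri}"><span property="foaf:name">{name}</span></li>')
    return '<ul prefix="foaf: http://xmlns.com/foaf/0.1/">\n' + "\n".join(items) + "\n</ul>"

if __name__ == "__main__":
    print(to_rdfa_fragment(run_query(ENDPOINT, QUERY)))
```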

  18. Data Publication and Interoperability for Long Tail Researchers via the Open Data Repository's (ODR) Data Publisher.

    NASA Astrophysics Data System (ADS)

    Stone, N.; Lafuente, B.; Bristow, T.; Keller, R.; Downs, R. T.; Blake, D. F.; Fonda, M.; Pires, A.

    2016-12-01

    Working primarily with astrobiology researchers at NASA Ames, the Open Data Repository (ODR) has been conducting a software pilot to meet the varying needs of this multidisciplinary community. Astrobiology researchers often have small communities or operate individually with unique data sets that don't easily fit into existing database structures. The ODR constructed its Data Publisher software to allow researchers to create databases with common metadata structures and subsequently extend them to meet their individual needs and data requirements. The software accomplishes these tasks through a web-based interface that allows collaborative creation and revision of common metadata templates and individual extensions to these templates for custom data sets. This allows researchers to search disparate datasets based on common metadata established through the metadata tools, but still facilitates distinct analyses and data that may be stored alongside the required common metadata. The software produces web pages that can be made publicly available at the researcher's discretion so that users may search and browse the data, in an effort to make interoperability and data discovery a human-friendly task while also providing semantic data for machine-based discovery. Once relevant data have been identified, researchers can utilize the built-in application programming interface (API) that exposes the data for machine-based consumption and integration with existing data analysis tools (e.g., R, MATLAB, Project Jupyter - http://jupyter.org). The current evolution of the project has created the Astrobiology Habitable Environments Database (AHED) [1], which provides an interface to databases connected through a common metadata core. In the next project phase, the goal is for small research teams and groups to be self-sufficient in publishing their research data to meet funding mandates and academic requirements, and to foster increased data discovery and interoperability through human-readable and machine-readable interfaces. This project is supported by the Science-Enabling Research Activity (SERA) and NASA NNX11AP82A, MSL. [1] B. Lafuente et al. (2016) AGU, submitted.
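
    The abstract does not spell out the Data Publisher API's routes or schema, so the sketch below only illustrates the general pattern of machine-based consumption it describes: fetch JSON records over HTTP and hand them to an analysis environment. The URL and field names are invented placeholders, not the actual ODR API.

```python
import json
import urllib.request

# Hypothetical ODR-style endpoint and field names, purely illustrative; replace with a
# real API route before running.
API_URL = "https://example.org/odr/api/v1/datarecords?template=mineral_spectra"

def fetch_records(url):
    """Retrieve JSON records exposed by a data-publishing API."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

if __name__ == "__main__":
    for rec in fetch_records(API_URL):
        # Common-metadata fields (hypothetical names) alongside per-team extensions.
        print(rec.get("sample_id"), rec.get("instrument"), rec.get("custom_fields", {}))
```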

  19. Outstanding Science Trade Books for Children in 1988.

    ERIC Educational Resources Information Center

    Science and Children, 1989

    1989-01-01

    Lists annotations of books based on accuracy of contents, readability, format, and illustrations. Includes number of pages in each entry, price, and availability. Topics cover animals, biographies, space science, astronomy, archaeology, anthropology, earth and life sciences, medical and health sciences, physics, technology, and engineering. (RT)

  20. Prose Checklist: Strategies for Improving School-to-Home Written Communication

    ERIC Educational Resources Information Center

    Nagro, Sarah A.

    2015-01-01

    Effective communication enhances school-family partnerships. Written communication is a common, efficient way of communicating with families, but potential barriers to effective communication include readability level, clarity of presentation, complexity of format, and structural components. The PROSE Checklist presented in this article can…

  1. Modeling Nanocomposites for Molecular Dynamics (MD) Simulations

    DTIC Science & Technology

    2015-01-01

    Parallel Simulator (LAMMPS) is used as the MD simulator [9], the coordinates must be formatted for use in LAMMPS. VMD has a set of tools (TopoTools...that can be used to generate a LAMMPS-readable format [6]. Figure 4. Ethylene Monomer Produced From Coordinates in PDB and Rendered Using...where, i and j are the atom subscripts. Simulations are performed using LAMMPS simulation software. Periodic boundary conditions are
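
    For context on what a "LAMMPS-readable format" looks like, the sketch below writes a minimal LAMMPS data file using the simple 'atomic' atom style (header counts, box bounds, a Masses section, and an Atoms section). It illustrates the file layout only: the coordinates are made up, and real nanocomposite models prepared with VMD/TopoTools also carry bonds, angles, dihedrals, charges, and force-field coefficients.

```python
def write_lammps_data(path, atoms, masses, box):
    """Write a minimal LAMMPS data file (atom_style 'atomic').

    atoms  : list of (atom_type, x, y, z)
    masses : {atom_type: mass}
    box    : ((xlo, xhi), (ylo, yhi), (zlo, zhi))
    """
    with open(path, "w") as f:
        f.write("Minimal example data file\n\n")
        f.write(f"{len(atoms)} atoms\n{len(masses)} atom types\n\n")
        for (lo, hi), axis in zip(box, ("x", "y", "z")):
            f.write(f"{lo} {hi} {axis}lo {axis}hi\n")
        f.write("\nMasses\n\n")
        for atom_type, mass in sorted(masses.items()):
            f.write(f"{atom_type} {mass}\n")
        f.write("\nAtoms\n\n")
        for i, (atom_type, x, y, z) in enumerate(atoms, start=1):
            f.write(f"{i} {atom_type} {x} {y} {z}\n")

# Two carbon atoms in a 20 Å cubic box (made-up coordinates).
write_lammps_data("example.data",
                  atoms=[(1, 0.0, 0.0, 0.0), (1, 1.54, 0.0, 0.0)],
                  masses={1: 12.011},
                  box=((0.0, 20.0), (0.0, 20.0), (0.0, 20.0)))
```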

  2. Online Patient Resources for Liposuction: A Comparative Analysis of Readability.

    PubMed

    Vargas, Christina R; Ricci, Joseph A; Chuang, Danielle J; Lee, Bernard T

    2016-03-01

    As patients strive to become informed about health care, inadequate functional health literacy is a significant barrier. Nearly half of American adults have poor or marginal health literacy skills, and the National Institutes of Health and the American Medical Association have recommended that patient information should be written at a sixth grade level. The aim of this study is to identify the most commonly used online patient information about liposuction and to evaluate its readability relative to average American literacy. An internet search of "liposuction" was performed and the 10 most popular websites were identified. User and location data were disabled and sponsored results excluded. All relevant, patient-directed articles were downloaded and formatted into plain text. Articles were then analyzed using 10 established readability tests. A comparison group was constructed to identify the most popular online consumer information about tattooing. Mean readability scores and specific article characteristics were compared. A total of 80 articles were collected from websites about liposuction. Readability analysis revealed an overall 13.6 grade reading level (range, grade 10-16); all articles exceeded the target sixth grade level. Consumer websites about tattooing were significantly easier to read, with a mean 7.8 grade level. These sites contained significantly fewer characters per word and words per sentence, as well as a smaller proportion of complex, long, and unfamiliar words. Online patient resources about liposuction are potentially too difficult for a large number of Americans to understand. Liposuction websites are significantly harder to read than consumer websites about tattooing. Aesthetic surgeons are advised to discuss with patients the resources they use and to guide them to information appropriate for their skill level.

  3. A formal MIM specification and tools for the common exchange of MIM diagrams: an XML-Based format, an API, and a validation method

    PubMed Central

    2011-01-01

    Background The Molecular Interaction Map (MIM) notation offers a standard set of symbols and rules on their usage for the depiction of cellular signaling network diagrams. Such diagrams are essential for disseminating biological information in a concise manner. A lack of software tools for the notation restricts wider usage of the notation. Development of software is facilitated by a more detailed specification regarding software requirements than has previously existed for the MIM notation. Results A formal implementation of the MIM notation was developed based on a core set of previously defined glyphs. This implementation provides a detailed specification of the properties of the elements of the MIM notation. Building upon this specification, a machine-readable format is provided as a standardized mechanism for the storage and exchange of MIM diagrams. This new format is accompanied by a Java-based application programming interface to help software developers to integrate MIM support into software projects. A validation mechanism is also provided to determine whether MIM datasets are in accordance with syntax rules provided by the new specification. Conclusions The work presented here provides key foundational components to promote software development for the MIM notation. These components will speed up the development of interoperable tools supporting the MIM notation and will aid in the translation of data stored in MIM diagrams to other standardized formats. Several projects utilizing this implementation of the notation are outlined herein. The MIM specification is available as an additional file to this publication. Source code, libraries, documentation, and examples are available at http://discover.nci.nih.gov/mim. PMID:21586134
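
    The abstract does not reproduce the MIM exchange format itself, so the sketch below uses an invented, simplified XML stand-in to show what machine validation of such a dataset involves: parse the document, then check syntax rules (here, unique identifiers and resolvable references). The real specification, schema, Java API, and validator are available at http://discover.nci.nih.gov/mim; Python's standard ElementTree is used here only to keep the example self-contained.

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified stand-in for a MIM-style XML document; the element names
# are invented and do not follow the actual MIM exchange format.
EXAMPLE = """
<mimDiagram>
  <molecule id="m1" name="TP53"/>
  <molecule id="m2" name="MDM2"/>
  <interaction type="binding" source="m1" target="m2"/>
</mimDiagram>
"""

def validate(doc_text):
    """Check two illustrative syntax rules: unique molecule ids, and interactions
    that reference only declared molecules."""
    root = ET.fromstring(doc_text)
    ids = [m.get("id") for m in root.findall("molecule")]
    errors = []
    if len(ids) != len(set(ids)):
        errors.append("duplicate molecule ids")
    for inter in root.findall("interaction"):
        for ref in (inter.get("source"), inter.get("target")):
            if ref not in ids:
                errors.append(f"interaction references unknown molecule {ref!r}")
    return errors

print(validate(EXAMPLE) or "document passes these illustrative checks")
```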

  4. A formal MIM specification and tools for the common exchange of MIM diagrams: an XML-Based format, an API, and a validation method.

    PubMed

    Luna, Augustin; Karac, Evrim I; Sunshine, Margot; Chang, Lucas; Nussinov, Ruth; Aladjem, Mirit I; Kohn, Kurt W

    2011-05-17

    The Molecular Interaction Map (MIM) notation offers a standard set of symbols and rules on their usage for the depiction of cellular signaling network diagrams. Such diagrams are essential for disseminating biological information in a concise manner. A lack of software tools for the notation restricts wider usage of the notation. Development of software is facilitated by a more detailed specification regarding software requirements than has previously existed for the MIM notation. A formal implementation of the MIM notation was developed based on a core set of previously defined glyphs. This implementation provides a detailed specification of the properties of the elements of the MIM notation. Building upon this specification, a machine-readable format is provided as a standardized mechanism for the storage and exchange of MIM diagrams. This new format is accompanied by a Java-based application programming interface to help software developers to integrate MIM support into software projects. A validation mechanism is also provided to determine whether MIM datasets are in accordance with syntax rules provided by the new specification. The work presented here provides key foundational components to promote software development for the MIM notation. These components will speed up the development of interoperable tools supporting the MIM notation and will aid in the translation of data stored in MIM diagrams to other standardized formats. Several projects utilizing this implementation of the notation are outlined herein. The MIM specification is available as an additional file to this publication. Source code, libraries, documentation, and examples are available at http://discover.nci.nih.gov/mim.

  5. Samples: The Story That They Tell and Our Role in Better Connecting Their Physical and Data Lifecycles.

    NASA Astrophysics Data System (ADS)

    Stall, S.

    2016-12-01

    The story of a sample starts with a proposal, a data management plan, and funded research. The sample is created, given a unique identifier (IGSN), and properly cared for during its journey to an appropriate storage location. Through its metadata and publication information, the sample can become well known and be shared with other researchers. Ultimately, a valuable sample can tell its entire story through its IGSN, associated ORCIDs, associated publication DOIs, and the DOIs of data generated from sample analysis. This journey, or workflow, is in many ways still manual. Tools exist to generate IGSNs for the sample and subsamples. Publishers are committed to making IGSNs machine readable in their journals, but the connection back to the IGSN management system, specifically the System for Earth Sample Registration (SESAR), is not yet fully complete. Through encouragement of publishers, like AGU, and improved data management practices, such as those promoted by AGU's Data Management Assessment program, the complete lifecycle of a sample can and will be told through the journey it takes from creation, documentation (metadata), analysis, subsamples, publication, and sharing. Publishers and data facilities are using efforts like the Coalition for Publishing Data in the Earth and Space Sciences (COPDESS) to "implement and promote common policies and procedures for the publication and citation of data across Earth Science journals", including IGSNs. As our community improves its data management practices and publishers adopt and enforce machine-readable use of unique sample identifiers, the ability to tell the entire story of a sample is close at hand. Better Data Management results in Better Science.

  6. Electronic Data Interchange: Selected Issues and Trends.

    ERIC Educational Resources Information Center

    Wigand, Rolf T.; And Others

    1993-01-01

    Describes electronic data interchange (EDI) as the application-to-application exchange of business documents in a computer-readable format. Topics discussed include EDI in various industries, EDI in finance and banking, organizational impacts of EDI, future EDI markets and organizations, and implications for information resources management.…

  7. 21 CFR 1311.205 - Pharmacy application requirements.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... that indicates the prescription was signed; or (ii) Display the field for the pharmacist's verification... must display the information for the pharmacist's verification. (10) The pharmacy application must... § 1311.215 in a format that is readable by the pharmacist. Such an internal audit may be automated and...

  8. 21 CFR 1311.205 - Pharmacy application requirements.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... that indicates the prescription was signed; or (ii) Display the field for the pharmacist's verification... must display the information for the pharmacist's verification. (10) The pharmacy application must... § 1311.215 in a format that is readable by the pharmacist. Such an internal audit may be automated and...

  9. 21 CFR 1311.205 - Pharmacy application requirements.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... that indicates the prescription was signed; or (ii) Display the field for the pharmacist's verification... must display the information for the pharmacist's verification. (10) The pharmacy application must... § 1311.215 in a format that is readable by the pharmacist. Such an internal audit may be automated and...

  10. 21 CFR 1311.205 - Pharmacy application requirements.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... that indicates the prescription was signed; or (ii) Display the field for the pharmacist's verification... must display the information for the pharmacist's verification. (10) The pharmacy application must... § 1311.215 in a format that is readable by the pharmacist. Such an internal audit may be automated and...

  11. Analyzing readability of medicines information material in Slovenia

    PubMed Central

    Kasesnik, Karin; Kline, Mihael

    2011-01-01

    Objective: Readability has been claimed to be an important factor in understanding texts describing health symptoms and medications; such texts may indirectly affect the health of the population. Despite the expertise of physicians, the readability of information sources may be important for acquiring essential treatment information. The aim of this study was to measure the readability level of medicines promotion material in Slovenia. Methods: The Flesch readability formula was modified to apply to Slovene texts. After the Slovene readability algorithm was determined, reading-ease scores and the corresponding readability grade levels were established for different Slovene texts. Readability values of English texts were used as a benchmark to estimate how well the texts matched the recommended readability grade level of the target population. A one-sample t-test and standard deviations from the arithmetic mean were used as statistical tests. Results: The results of the research showed low readability scores for the Slovene texts. Difficult readability values were seen in the different types of texts examined: in patient information leaflets, in summaries of product characteristics, in promotional materials, in descriptions of over-the-counter medications, and in materials for creating disease awareness. Especially low readability values were found in the texts belonging to promotional materials intended for physicians. None of the examined items, not even those for the general public, came close to primary-school readability grade levels and therefore could not be described as easily readable. Conclusion: This study provides an understanding of the level of readability of selected Slovene medicines information material. It was concluded that the health-related texts met neither the needs of the general public nor those of healthcare professionals. PMID:23093886
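
    To make the adaptation step concrete, the sketch below parameterizes a Flesch-style reading-ease calculation. The default coefficients are Flesch's original English ones; the study's Slovene-specific coefficients and syllable rules are not reproduced here, so a real adaptation would supply re-estimated coefficients and a language-appropriate syllable counter.

```python
def _naive_syllables(word):
    """Very rough vowel-group count; a language adaptation needs a proper syllable counter."""
    w = word.lower()
    return max(1, sum(1 for i, c in enumerate(w)
                      if c in "aeiou" and (i == 0 or w[i - 1] not in "aeiou")))

def reading_ease(text, base=206.835, sentence_weight=1.015, syllable_weight=84.6,
                 count_syllables=_naive_syllables):
    """Flesch-style reading ease: base - sw * (words/sentence) - yw * (syllables/word).

    Defaults are Flesch's original English coefficients; adapting the formula to
    another language means re-estimating base, sentence_weight, and syllable_weight."""
    sentences = max(1, sum(text.count(ch) for ch in ".!?"))
    words = text.split()
    syllables = sum(count_syllables(w) for w in words)
    return base - sentence_weight * (len(words) / sentences) - syllable_weight * (syllables / len(words))

print(round(reading_ease("The patient should take one tablet twice a day with food."), 1))
```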

  12. The readability of American Academy of Pediatrics patient education brochures.

    PubMed

    Freda, Margaret Comerford

    2005-01-01

    The purpose of this study was to evaluate the readability of American Academy of Pediatrics (AAP) patient education brochures. Seventy-four brochures were analyzed using two readability formulas. Mean readability for all 74 brochures was grade 7.94 using the Flesch-Kincaid formula, and grade 10.1 with the SMOG formula (P = .001). Using the SMOG formula, no brochures were of acceptably low (< or =8th grade) readability levels (range 8.3 to 12.7). Using the Flesch-Kincaid formula, 41 of the 74 had acceptable readability levels (< or =8th grade). The SMOG formula routinely assessed brochures 2 to 3 grade levels higher than did the Flesch-Kincaid formula. Some AAP patient education brochures have acceptably low levels of readability, but at least half are written at higher than acceptable readability levels for the general public. This study also demonstrated statistically significant variability between the two readability formulas; had only the SMOG formula been used, all of the brochures would have had unacceptably high readability levels. Readability is an essential concept for patient education materials. Professional associations that develop and market patient education materials should test for readability and publish those readability levels on each piece of patient education material so health care providers will know if the materials are appropriate for their patients.
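
    The systematic 2 to 3 grade gap reported above follows from how the two formulas are built: Flesch-Kincaid weights average sentence length and syllables per word, while SMOG is driven entirely by polysyllabic words and is calibrated toward full comprehension. The sketch below computes both on the same text; the syllable counter is a crude heuristic and the 30/sentences factor is the usual generalization of SMOG beyond its 30-sentence sampling rule, so treat the output as illustrative only.

```python
import math
import re

def _syllables(word):
    # crude vowel-group count; real readability tools use better syllable rules
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

def smog_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    polysyllabic = sum(1 for w in words if _syllables(w) >= 3)
    return 1.0430 * math.sqrt(polysyllabic * (30 / sentences)) + 3.1291

sample = ("Immunization protects children against serious diseases. "
          "Ask your pediatrician about the recommended vaccination schedule.")
print(round(flesch_kincaid_grade(sample), 1), round(smog_grade(sample), 1))
```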

  13. Evaluation of the readability of ACOG patient education pamphlets. The American College of Obstetricians and Gynecologists.

    PubMed

    Freda, M C; Damus, K; Merkatz, I R

    1999-05-01

    To evaluate whether ACOG's patient education pamphlets comply with the recommended readability level for health education materials intended for the general public. All 100 English-language pamphlets available during 1997 (created or revised between 1988 and 1997) were evaluated using four standard readability formulas. Mean readability levels of ACOG's pamphlets were between grade 7.0 and grade 9.3, depending on the formula used. Analysis of readability over the 10 years showed a trend toward lower readability levels. Analysis by category of pamphlet found that the lowest readability levels were in "Especially for teens" pamphlets. Our data suggested that most of ACOG's patient education pamphlets currently available are written at a higher readability level than recommended for the general public. The readability of those pamphlets improved in the 10 years since the organization published its first pamphlet, but the goal of a sixth-grade readability level has not been reached.

  14. Jointly creating digital abstracts: dealing with synonymy and polysemy

    PubMed Central

    2012-01-01

    Background Ideally each Life Science article should get a ‘structured digital abstract’. This is a structured summary of the paper’s findings that is both human-verified and machine-readable. But articles can contain a large variety of information types and contextual details that all need to be reconciled with appropriate names, terms and identifiers, which poses a challenge to any curator. Current approaches mostly use tagging or limited entry-forms for semantic encoding. Findings We implemented a ‘controlled language’ as a more expressive representation method. We studied how usable this format was for wet-lab-biologists that volunteered as curators. We assessed some issues that arise with the usability of ontologies and other controlled vocabularies, for the encoding of structured information by ‘untrained’ curators. We take a user-oriented viewpoint, and make recommendations that may prove useful for creating a better curation environment: one that can engage a large community of volunteer curators. Conclusions Entering information in a biocuration environment could improve in expressiveness and user-friendliness, if curators would be enabled to use synonymous and polysemous terms literally, whereby each term stays linked to an identifier. PMID:23110757
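
    A minimal sketch of the idea in the conclusion: curators type whatever term they normally use, synonyms collapse to a single identifier, and a polysemous term forces a context choice. The vocabulary and identifiers below are invented for illustration and are not drawn from any real ontology.

```python
# Toy curation lookup: curator-friendly surface terms stay linked to identifiers.
# Identifiers and vocabulary are made up for illustration.

SYNONYMS = {                      # many surface forms -> one identifier
    "p53": "ONT:0001",
    "tp53": "ONT:0001",
    "tumor protein 53": "ONT:0001",
}

POLYSEMES = {                     # one surface form -> several candidate identifiers
    "insulin": {"protein": "ONT:0002", "drug": "ONT:0003"},
}

def resolve(term, context=None):
    """Return the identifier for a curator-supplied term, asking for context
    only when the term is genuinely ambiguous."""
    key = term.strip().lower()
    if key in SYNONYMS:
        return SYNONYMS[key]
    if key in POLYSEMES:
        candidates = POLYSEMES[key]
        if context in candidates:
            return candidates[context]
        raise ValueError(f"'{term}' is ambiguous; specify one of {sorted(candidates)}")
    raise KeyError(f"'{term}' is not in the controlled vocabulary")

print(resolve("TP53"))                       # -> ONT:0001
print(resolve("insulin", context="drug"))    # -> ONT:0003
```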

  15. Investigation of The Omaha System for dentistry.

    PubMed

    Jurkovich, M W; Ophaug, M; Salberg, S; Monsen, K

    2014-01-01

    Today, dentists and hygienists have inadequate tools to identify contributing factors to dental disease, to diagnose disease, or to document outcomes in a standardized and machine-readable format. Increasing demand to find the most effective care methodologies makes the development of further terminologies for dentistry more urgent. Preventive care is the focus of early efforts to define best practices. We reviewed one possibility, with a history in public health documentation, that might assist in these early efforts at identifying best practices. This paper examines, through a survey of dentists, the Omaha System Problem Classification Scheme. The survey requested that dentists rate the usefulness of knowing about specific signs and symptoms for each of the 42 problems within the problem list of the Omaha System. Using a weighted scoring system, 22 of the 42 problems received over 50% of the maximum possible score and 30 of the 42 problems received at least 25% of the possible points. These findings suggest that further evaluation of the Omaha System may be useful to dentistry. At a minimum, the survey provides additional information about non-physiological problems, signs, and symptoms that may be appropriate for documentation purposes within an electronic health record (EHR) used in dentistry.

  16. Using XML and Java for Astronomical Instrumentation Control

    NASA Technical Reports Server (NTRS)

    Ames, Troy; Koons, Lisa; Sall, Ken; Warsaw, Craig

    2000-01-01

    Traditionally, instrument command and control systems have been highly specialized, consisting mostly of custom code that is difficult to develop, maintain, and extend. Such solutions are initially very costly and are inflexible to subsequent engineering change requests, increasing software maintenance costs. Instrument description is too tightly coupled with details of implementation. NASA Goddard Space Flight Center is developing a general and highly extensible framework that applies to any kind of instrument that can be controlled by a computer. The software architecture combines the platform-independent processing capabilities of Java with the power of the Extensible Markup Language (XML), a human-readable and machine-understandable way to describe structured data. A key aspect of the object-oriented architecture is software that is driven by an instrument description, written using the Instrument Markup Language (IML). IML is used to describe graphical user interfaces to control and monitor the instrument, command sets and command formats, data streams, and communication mechanisms. Although the current effort is targeted for the High-resolution Airborne Wideband Camera, a first-light instrument of the Stratospheric Observatory for Infrared Astronomy, the framework is designed to be generic and extensible so that it can be applied to any instrument.
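
    To illustrate the description-driven approach, the sketch below parses a toy instrument description and builds command strings from it. The element and attribute names are invented and do not follow the actual IML schema, and Python stands in for the framework's Java implementation; the point is only that behavior comes from the description, not from hard-coded logic.

```python
import xml.etree.ElementTree as ET

# A toy instrument description in the spirit of IML; names are invented for illustration.
DESCRIPTION = """
<instrument name="ExampleCamera">
  <command name="SET_EXPOSURE" format="EXP {seconds:.1f}"/>
  <command name="TAKE_IMAGE"   format="IMG"/>
  <telemetry stream="housekeeping" port="5000"/>
</instrument>
"""

root = ET.fromstring(DESCRIPTION)
commands = {c.get("name"): c.get("format") for c in root.findall("command")}

def build_command(name, **params):
    """Format an outgoing command string from the description-driven table."""
    return commands[name].format(**params)

print(build_command("SET_EXPOSURE", seconds=2.5))   # -> "EXP 2.5"
print(build_command("TAKE_IMAGE"))                  # -> "IMG"
```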

  17. 21 CFR 1311.205 - Pharmacy application requirements.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... pharmacist's verification. (7) The pharmacy application must read and retain the full DEA number including... information is present or must display the information for the pharmacist's verification. (10) The pharmacy... events specified in § 1311.215 in a format that is readable by the pharmacist. Such an internal audit may...

  18. Readability and usability of scientific information in the poster presentation format

    USDA-ARS?s Scientific Manuscript database

    The European Geosciences Union convenes an annual international conference that boasts over 13,000 academic presentations of which more than half are poster presentations. This research effort studied a sample of more than 500 posters presented during the 2012 conference to identify best practices f...

  19. mmView: a web-based viewer of the mmCIF format

    PubMed Central

    2011-01-01

    Background Structural biomolecular data are commonly stored in the PDB format. The PDB format is widely supported by software vendors because of its simplicity and readability. However, the PDB format cannot fully address many informatics challenges related to the growing amount of structural data. To overcome the limitations of the PDB format, a new textual format, mmCIF, was released in June 1997 as version 1.0. mmCIF provides extra information, which has the advantage of being in a computer-readable form. However, this advantage becomes a disadvantage if a human must read and understand the stored data. While software tools exist to help prepare mmCIF files, the number of available systems simplifying the comprehension and interpretation of mmCIF files is limited. Findings In this paper we present mmView - a cross-platform web-based application that allows the user to comfortably explore the structural data of biomacromolecules stored in the mmCIF format. The mmCIF categories can be easily browsed in a tree-like structure, and the corresponding data are presented in a well-arranged tabular form. The application also allows the user to display and investigate biomolecular structures via an integrated Java application, Jmol. Conclusions The mmView software system is primarily intended for educational purposes, but it can also serve as a useful research tool. The mmView application is offered in two flavors: as an open-source stand-alone application (available from http://sourceforge.net/projects/mmview) that can be installed on the user's computer, and as a publicly available web server. PMID:21486459
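
    The core mmCIF construct that a viewer like this renders as tables is the loop_: item names are declared first, then rows of values follow. The toy parser below handles only simple, unquoted values from a single loop (item names here come from the standard _atom_site category) and is meant to show the structure, not to replace a real mmCIF parsing library.

```python
# Minimal illustration of an mmCIF "loop_" block with _atom_site items.
SNIPPET = """\
loop_
_atom_site.id
_atom_site.type_symbol
_atom_site.Cartn_x
_atom_site.Cartn_y
_atom_site.Cartn_z
1 N 11.104 6.134 -6.504
2 C 11.639 6.071 -5.147
"""

def parse_loop(text):
    """Parse one simple loop_: collect item names, then zip each data row to them."""
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    assert lines[0] == "loop_"
    names = [ln for ln in lines[1:] if ln.startswith("_")]
    rows = [ln.split() for ln in lines[1 + len(names):]]
    return [dict(zip(names, row)) for row in rows]

for atom in parse_loop(SNIPPET):
    print(atom["_atom_site.id"], atom["_atom_site.type_symbol"], atom["_atom_site.Cartn_x"])
```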

  20. Advanced, Analytic, Automated (AAA) Measurement of Engagement During Learning

    PubMed Central

    D’Mello, Sidney; Dieterle, Ed; Duckworth, Angela

    2017-01-01

    It is generally acknowledged that engagement plays a critical role in learning. Unfortunately, the study of engagement has been stymied by a lack of valid and efficient measures. We introduce the advanced, analytic, and automated (AAA) approach to measure engagement at fine-grained temporal resolutions. The AAA measurement approach is grounded in embodied theories of cognition and affect, which advocate a close coupling between thought and action. It uses machine-learned computational models to automatically infer mental states associated with engagement (e.g., interest, flow) from machine-readable behavioral and physiological signals (e.g., facial expressions, eye tracking, click-stream data) and from aspects of the environmental context. We present 15 case studies that illustrate the potential of the AAA approach for measuring engagement in digital learning environments. We discuss strengths and weaknesses of the AAA approach, concluding that it has significant promise to catalyze engagement research. PMID:29038607
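
    A toy sketch of the supervised-learning step behind the AAA approach, assuming scikit-learn is installed: a classifier maps machine-readable signals to an engagement label. The feature names, data, and model choice are invented stand-ins; the paper's actual models, sensors, and annotation schemes are far richer.

```python
# Illustrative only: synthetic stand-ins for gaze and interaction features.
from sklearn.linear_model import LogisticRegression

# columns: [mean_fixation_seconds, clicks_per_minute, off_task_gaze_fraction]
X_train = [
    [0.45, 12.0, 0.05],   # engaged
    [0.40, 10.0, 0.10],   # engaged
    [0.15,  2.0, 0.60],   # disengaged
    [0.20,  1.0, 0.55],   # disengaged
]
y_train = [1, 1, 0, 0]    # 1 = engaged, 0 = disengaged

model = LogisticRegression().fit(X_train, y_train)
# Probability of each class for a new learner's signals.
print(model.predict_proba([[0.30, 6.0, 0.25]]))
```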
