Sample records for image file format

  1. 77 FR 59692 - 2014 Diversity Immigrant Visa Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-28

    ... the E-DV system. The entry will not be accepted and must be resubmitted. Group or family photographs... must be in the Joint Photographic Experts Group (JPEG) format. Image File Size: The maximum file size...). Image File Format: The image must be in the Joint Photographic Experts Group (JPEG) format. Image File...

  2. 78 FR 59743 - Bureau of Consular Affairs; Registration for the Diversity Immigrant (DV-2015) Visa Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-27

    ... already a U.S. citizen or a Lawful Permanent Resident, but you will not be penalized if you do. Group... specifications: Image File Format: The image must be in the Joint Photographic Experts Group (JPEG) format. Image... in the Joint Photographic Experts Group (JPEG) format. Image File Size: The maximum image file size...

  3. Image Size Variation Influence on Corrupted and Non-viewable BMP Image

    NASA Astrophysics Data System (ADS)

    Azmi, Tengku Norsuhaila T.; Azma Abdullah, Nurul; Rahman, Nurul Hidayah Ab; Hamid, Isredza Rahmi A.; Chai Wen, Chuah

    2017-08-01

    Images are one of the evidence components sought in digital forensics. The Joint Photographic Experts Group (JPEG) format is the most popular on the Internet because JPEG files are lossy and easy to compress, which speeds up transmission. However, corrupted JPEG images are hard to recover because of the complexity of determining the corruption point. Bitmap (BMP) images are now preferred in image processing over other formats because a BMP file contains all the image information in a simple layout. Therefore, to investigate the corruption point in a JPEG, the file must first be converted into BMP format. Nevertheless, many factors can corrupt a BMP image, such as changes to the image size that make the file non-viewable. In this paper, experiments show that the size of a BMP file influences changes in the image itself under three conditions: deletion, replacement, and insertion. From the experiments, we learnt that by correcting the file size it is possible to produce a viewable, if partial, file, which can then be investigated further to identify the corruption point.
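The file-size correction described in this abstract can be sketched for the simplest case: the BMP header records the total file size in a fixed field, so a mismatched value can be repaired by rewriting that field to match the bytes actually on disk. A minimal sketch (the header layout is standard BMP; the repair logic is illustrative, not the authors' tool):

```python
import struct

def corrected_bmp(data: bytes) -> bytes:
    # The BMP header stores the total file size as a little-endian
    # uint32 at byte offset 2; a wrong value there is one way a file
    # becomes non-viewable. Patch it to match the actual length.
    if data[:2] != b"BM":
        raise ValueError("not a BMP file")
    return data[:2] + struct.pack("<I", len(data)) + data[6:]

# Synthetic 54-byte header with a deliberately wrong size field, then the repair.
broken = b"BM" + struct.pack("<I", 9999) + b"\x00" * 48
fixed = corrected_bmp(broken)
```

After the patch the size field once again equals the real file length, which is often enough to make a truncated file at least partially viewable.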

  4. Representation of thermal infrared imaging data in the DICOM using XML configuration files.

    PubMed

    Ruminski, Jacek

    2007-01-01

    The DICOM standard has become a widely accepted and implemented format for the exchange and storage of medical imaging data. Different imaging modalities are supported however there is not a dedicated solution for thermal infrared imaging in medicine. In this article we propose new ideas and improvements to final proposal of the new DICOM Thermal Infrared Imaging structures and services. Additionally, we designed, implemented and tested software packages for universal conversion of existing thermal imaging files to the DICOM format using XML configuration files. The proposed solution works fast and requires minimal number of user interactions. The XML configuration file enables to compose a set of attributes for any source file format of thermal imaging camera.

  5. Extracting the Data From the LCM vk4 Formatted Output File

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendelberger, James G.

    These are slides about extracting the data from the LCM vk4 formatted output file. The following is covered: the vk4 file produced by Keyence VK software; custom analysis; no off-the-shelf way to read the file; reading the binary data in a vk4 file; various offsets in decimal lines; finding the height image data directly in MATLAB; binary output at the beginning of the height image data; color image information; color image binary data; color image decimal and binary data; MATLAB code to read a vk4 file (choose a file, read the file, compute offsets, read the optical image, the laser optical image, read and compute the laser intensity image, read the height image, timing, display the height image, display the laser intensity image, display RGB laser optical images, display RGB optical images, display beginning data and save images to the workspace, gamma correction subroutine); reading intensity from the vk4 file; linear in the low range; linear in the high range; gamma correction for vk4 files; computing the gamma intensity correction; observations.
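The approach the slides describe, computing byte offsets into the binary file and reading each image block from there, can be sketched in Python rather than MATLAB. The magic string and offset-table position below are illustrative assumptions, not the documented vk4 layout:

```python
import struct

def read_offset_table(blob: bytes, table_start: int, count: int) -> list:
    # Read `count` little-endian uint32 byte offsets from the header;
    # each one would point at an image block (optical, intensity, height).
    return list(struct.unpack_from(f"<{count}I", blob, table_start))

# Synthetic file: 4-byte magic followed by three block offsets.
blob = b"VK4_" + struct.pack("<3I", 40, 120, 360)
offsets = read_offset_table(blob, table_start=4, count=3)
```

With the offsets in hand, each image block can be unpacked with a further `struct.unpack_from` at the corresponding position.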

  6. 75 FR 60846 - Bureau of Consular Affairs; Registration for the Diversity Immigrant (DV-2012) Visa Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-01

    ... need to submit a photo for a child who is already a U.S. citizen or a Legal Permanent Resident. Group... Joint Photographic Experts Group (JPEG) format; it must have a maximum image file size of two hundred... (dpi); the image file format in Joint Photographic Experts Group (JPEG) format; the maximum image file...

  7. Pancreatic Cancer Detection Consortium (PCDC) | Division of Cancer Prevention

    Cancer.gov

    [Image: A 3-dimensional image of a human torso highlighting the pancreas.]

  8. Early Detection | Division of Cancer Prevention

    Cancer.gov

    [Image: Early Detection Research Group Homepage Logo]

  9. Arkansas and Louisiana Aeromagnetic and Gravity Maps and Data - A Website for Distribution of Data

    USGS Publications Warehouse

    Bankey, Viki; Daniels, David L.

    2008-01-01

    This report contains digital data, image files, and text files describing data formats for aeromagnetic and gravity data used to compile the State aeromagnetic and gravity maps of Arkansas and Louisiana. The digital files include grids, images, ArcInfo, and Geosoft compatible files. In some of the data folders, ASCII files with the extension 'txt' describe the format and contents of the data files. Read the 'txt' files before using the data files.

  10. Data Science Bowl Launched to Improve Lung Cancer Screening | Division of Cancer Prevention

    Cancer.gov

    [Image: Data Science Bowl Logo]

  11. [Intranet-based integrated information system of radiotherapy-related images and diagnostic reports].

    PubMed

    Nakamura, R; Sasaki, M; Oikawa, H; Harada, S; Tamakawa, Y

    2000-03-01

    To use an intranet technique to develop an information system that simultaneously supports both diagnostic reports and radiotherapy planning images. Using a file server as the gateway, a radiation oncology LAN was connected to an already operative RIS LAN. Dose-distribution images were saved in tagged image file format by way of a screen dump to the file server. X-ray simulator images and portal images were saved in encapsulated PostScript format on the file server and automatically converted to portable document format. The files on the file server were automatically registered to the Web server by the search engine and were available for searching and browsing with a Web browser. Registering planning images took less than a minute. For clients, searching and browsing a file took less than 3 seconds. Over 150,000 reports and 4,000 images from a six-month period were accessible. Because an intranet technique was used, construction and maintenance were completed without specialist expertise. This system provides prompt access to essential information about radiotherapy. It promotes public access to radiotherapy planning that may improve the quality of treatment.

  12. DICOM to print, 35-mm slides, web, and video projector: tutorial using Adobe Photoshop.

    PubMed

    Gurney, Jud W

    2002-10-01

    Preparing images for publication has traditionally meant film and the photographic process. With picture archiving and communication systems, many departments will no longer produce film, and this changes how images are produced for publication. DICOM, the file format for radiographic images, has to be converted and then prepared for traditional print publication, 35-mm slides, the newer techniques of video projection, and the World Wide Web. Tagged image file format is the common format for traditional print publication, whereas Joint Photographic Experts Group (JPEG) is the current file format for the World Wide Web. Each medium has specific requirements that can be met with a common image-editing program such as Adobe Photoshop (Adobe Systems, San Jose, CA). High-resolution images are required for print, a process that requires interpolation, whereas the Internet requires images with a small file size for rapid transmission. The resolution of each output differs, and the image resolution must be optimized to match the output of the publishing medium.
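The resolution arithmetic behind matching an image to its output medium is simple: pixel dimensions are physical size times output resolution. A sketch using commonly cited figures (300 dpi for print, 72 dpi for screen; these are rules of thumb, not values stated in this tutorial):

```python
def pixels_for_output(width_in: float, height_in: float, dpi: int) -> tuple:
    # Pixel dimensions needed to render a figure of the given
    # physical size (inches) at the given output resolution.
    return round(width_in * dpi), round(height_in * dpi)

print_size = pixels_for_output(3.5, 4.5, dpi=300)  # print publication
web_size = pixels_for_output(3.5, 4.5, dpi=72)     # web display
```

The same 3.5 x 4.5 inch figure needs 1050 x 1350 pixels for print but only 252 x 324 for the web, which is why the two media demand different interpolation and compression choices.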

  13. NIH Seeks Input on In-patient Clinical Research Areas | Division of Cancer Prevention

    Cancer.gov

    [Image: Aerial view of the National Institutes of Health Clinical Center (Building 10) in Bethesda, Maryland.]

  14. Mapping DICOM to OpenDocument format

    NASA Astrophysics Data System (ADS)

    Yu, Cong; Yao, Zhihong

    2009-02-01

    In order to enhance the readability, extensibility and sharing of DICOM files, we have introduced XML into the DICOM file system (SPIE Volume 5748) [1] and the multilayer tree structure into DICOM (SPIE Volume 6145) [2]. In this paper, we propose mapping DICOM to ODF (OpenDocument Format), since ODF is also based on XML. As a result, the new format realizes the separation of content (both text and image) from display style. Meanwhile, since OpenDocument files take the format of a ZIP compressed archive, the new kind of DICOM files can benefit from ZIP's lossless compression to reduce file size. Moreover, this open format can also guarantee long-term access to data without legal or technical barriers, making medical images accessible to various fields.
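The packaging this paper relies on is easy to demonstrate: an OpenDocument file is a ZIP archive containing XML, so content compresses losslessly. A minimal sketch of that container structure (the XML payload is a placeholder, not the paper's actual DICOM mapping):

```python
import io
import zipfile

def pack_odf_like(content_xml: str) -> bytes:
    # ODF files are ZIP archives: a stored (uncompressed) `mimetype`
    # entry comes first, followed by deflated XML content.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("mimetype", "application/vnd.oasis.opendocument.text",
                    compress_type=zipfile.ZIP_STORED)
        zf.writestr("content.xml", content_xml,
                    compress_type=zipfile.ZIP_DEFLATED)
    return buf.getvalue()

archive = pack_odf_like("<office:document-content/>")
names = zipfile.ZipFile(io.BytesIO(archive)).namelist()
```

Because the XML entry is deflated inside the archive, the container gives lossless compression for free, which is the size benefit the abstract claims for ODF-based DICOM files.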

  15. The Open Microscopy Environment: open image informatics for the biological sciences

    NASA Astrophysics Data System (ADS)

    Blackburn, Colin; Allan, Chris; Besson, Sébastien; Burel, Jean-Marie; Carroll, Mark; Ferguson, Richard K.; Flynn, Helen; Gault, David; Gillen, Kenneth; Leigh, Roger; Leo, Simone; Li, Simon; Lindner, Dominik; Linkert, Melissa; Moore, Josh; Moore, William J.; Ramalingam, Balaji; Rozbicki, Emil; Rustici, Gabriella; Tarkowska, Aleksandra; Walczysko, Petr; Williams, Eleanor; Swedlow, Jason R.

    2016-07-01

    Despite significant advances in biological imaging and analysis, major informatics challenges remain unsolved: file formats are proprietary, storage and analysis facilities are lacking, as are standards for sharing image data and results. While the open FITS file format is ubiquitous in astronomy, astronomical imaging shares many challenges with biological imaging, including the need to share large image sets using secure, cross-platform APIs, and the need for scalable applications for processing and visualization. The Open Microscopy Environment (OME) is an open-source software framework developed to address these challenges. OME tools include: an open data model for multidimensional imaging (OME Data Model); an open file format (OME-TIFF) and library (Bio-Formats) enabling free access to images (5D+) written in more than 145 formats from many imaging domains, including FITS; and a data management server (OMERO). The Java-based OMERO client-server platform comprises an image metadata store, an image repository, visualization and analysis by remote access, allowing sharing and publishing of image data. OMERO provides a means to manage the data through a multi-platform API. OMERO's model-based architecture has enabled its extension into a range of imaging domains, including light and electron microscopy, high content screening, digital pathology and recently into applications using non-image data from clinical and genomic studies. This is made possible using the Bio-Formats library. The current release includes a single mechanism for accessing image data of all types, regardless of original file format, via Java, C/C++ and Python and a variety of applications and environments (e.g. ImageJ, Matlab and R).

  16. Java Library for Input and Output of Image Data and Metadata

    NASA Technical Reports Server (NTRS)

    Deen, Robert; Levoe, Steven

    2003-01-01

    A Java-language library supports input and output (I/O) of image data and metadata (label data) in the format of the Video Image Communication and Retrieval (VICAR) image-processing software and in several similar formats, including a subset of the Planetary Data System (PDS) image file format. The library provides a low-level, direct-access layer that enables an application subprogram to read and write specific image files, lines, or pixels, and to manipulate metadata directly. Two coding/decoding subprograms ("codecs" for short) based on the Java Advanced Imaging (JAI) software provide access to VICAR and PDS images in a file-format-independent manner. The VICAR and PDS codecs enable any program that conforms to the JAI codec specification to use VICAR or PDS images automatically, without specific knowledge of either format. The library also includes Image I/O plug-in subprograms for the VICAR and PDS formats. Application programs that conform to the Image I/O specification of Java version 1.4 can utilize any image format for which such a plug-in exists, without specific knowledge of the format itself. Like the aforementioned codecs, the VICAR and PDS Image I/O plug-ins support reading and writing of metadata.

  17. Transforming Dermatologic Imaging for the Digital Era: Metadata and Standards.

    PubMed

    Caffery, Liam J; Clunie, David; Curiel-Lewandrowski, Clara; Malvehy, Josep; Soyer, H Peter; Halpern, Allan C

    2018-01-17

    Imaging is increasingly being used in dermatology for documentation, diagnosis, and management of cutaneous disease. The lack of standards for dermatologic imaging is an impediment to clinical uptake. Standardization can occur in image acquisition, terminology, interoperability, and metadata. This paper presents the International Skin Imaging Collaboration position on standardization of metadata for dermatologic imaging. Metadata is essential to ensure that dermatologic images are properly managed and interpreted. There are two standards-based approaches to recording and storing metadata in dermatologic imaging. The first uses standard consumer image file formats; the second is the file format and metadata model developed for the Digital Imaging and Communications in Medicine (DICOM) standard. DICOM would appear to provide an advantage over consumer image file formats for metadata, as it includes all the patient, study, and technical metadata necessary to use images clinically. Consumer image file formats, by contrast, include only technical metadata and must be used in conjunction with another actor (for example, an electronic medical record) to supply the patient and study metadata. The use of DICOM may have some ancillary benefits in dermatologic imaging, including leveraging DICOM network and workflow services, interoperability of images and metadata, leveraging existing enterprise imaging infrastructure, greater patient safety, and better compliance with legislative requirements for image retention.

  18. SEGY to ASCII Conversion and Plotting Program 2.0

    USGS Publications Warehouse

    Goldman, Mark R.

    2005-01-01

    INTRODUCTION: SEGY has long been a standard format for storing seismic data and header information. Almost every seismic processing package can read and write seismic data in SEGY format. In the data-processing world, however, ASCII is the 'universal' standard format, and very few general-purpose plotting or computation programs will accept data in SEGY format. The software presented in this report, referred to as SEGY to ASCII (SAC), converts seismic data written in SEGY format (Barry et al., 1975) to an ASCII data file, and then creates a PostScript file of the seismic data using a general plotting package (GMT; Wessel and Smith, 1995). The resulting PostScript file may be plotted by any standard PostScript plotting program. There are two versions of SAC: one for plotting a SEGY file that contains a single gather, such as a stacked CDP or migrated section, and a second for plotting multiple gathers from a SEGY file containing more than one gather, such as a collection of shot gathers. Note that if a SEGY file has multiple gathers, each gather must have the same number of traces, and each trace must have the same sample interval and number of samples. SAC will read several common variants of SEGY data, including files with sample values written in either IBM or IEEE floating-point format. In addition, utility programs are provided to convert non-standard Seismic Unix (.sux) SEGY files and PASSCAL (.rsy) SEGY files to standard SEGY files. SAC allows complete user control over all plotting parameters, including label size and font, tick-mark intervals, trace scaling, and the inclusion of a title and descriptive text. SAC shell scripts create a PostScript image of the seismic data in vector rather than bitmap format, using GMT's pswiggle command. Although this can produce a very large PostScript file, the image quality is generally superior to that of a bitmap image, and commercial programs such as Adobe Illustrator can manipulate the image more efficiently.
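The IBM-versus-IEEE sample-format detail matters because SEGY predates IEEE 754: older tapes store samples in IBM hexadecimal floating point (a sign bit, a 7-bit base-16 exponent biased by 64, and a 24-bit fraction). A conversion sketch of the standard formula, independent of the SAC program itself:

```python
import struct

def ibm32_to_float(word: bytes) -> float:
    # Decode one big-endian IBM single-precision float:
    # value = (-1)^sign * (fraction / 2^24) * 16^(exponent - 64)
    (u,) = struct.unpack(">I", word)
    sign = -1.0 if u >> 31 else 1.0
    exponent = (u >> 24) & 0x7F
    fraction = (u & 0x00FFFFFF) / float(1 << 24)
    return sign * fraction * 16.0 ** (exponent - 64)

one = ibm32_to_float(bytes([0x41, 0x10, 0x00, 0x00]))  # IBM encoding of 1.0
neg = ibm32_to_float(bytes([0xC2, 0x76, 0xA0, 0x00]))
```

Note that IBM's base-16 exponent means the fraction is not normalized the way IEEE mantissas are, which is why a field-by-field reinterpretation of the bits would give wrong values and an explicit conversion is needed.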

  19. 78 FR 17233 - Notice of Opportunity To File Amicus Briefs

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-20

    .... Any commonly-used word processing format or PDF format is acceptable; text formats are preferable to image formats. Briefs may also be filed with the Office of the Clerk of the Board, Merit Systems...

  20. imzML: Imaging Mass Spectrometry Markup Language: A common data format for mass spectrometry imaging.

    PubMed

    Römpp, Andreas; Schramm, Thorsten; Hester, Alfons; Klinkert, Ivo; Both, Jean-Pierre; Heeren, Ron M A; Stöckli, Markus; Spengler, Bernhard

    2011-01-01

    Imaging mass spectrometry is the method of scanning a sample of interest and generating an "image" of the intensity distribution of a specific analyte. The data sets consist of a large number of mass spectra which are usually acquired with identical settings. Existing data formats are not sufficient to describe an MS imaging experiment completely. The data format imzML was developed to allow the flexible and efficient exchange of MS imaging data between different instruments and data analysis software. For this purpose, the MS imaging data is divided into two separate files. The mass spectral data is stored in a binary file to ensure efficient storage. All metadata (e.g., instrumental parameters, sample details) are stored in an XML file which is based on the standard data format mzML developed by HUPO-PSI. The original mzML controlled vocabulary was extended to include specific parameters of imaging mass spectrometry (such as x/y position and spatial resolution). The two files (XML and binary) are connected by offset values in the XML file and are unambiguously linked by a universally unique identifier. The resulting datasets are comparable in size to the raw data, and the separate metadata file allows flexible handling of large datasets. Several imaging MS software tools already support imzML. This allows choosing from a (growing) number of processing tools. One is no longer limited to proprietary software, but is able to use the processing software which is best suited for a specific question or application. On the other hand, measurements from different instruments can be compared within one software application using identical settings for data processing. All necessary information for evaluating and implementing imzML can be found at http://www.imzML.org.
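The offset-plus-UUID linkage the abstract describes can be sketched directly: the XML metadata records each spectrum's byte offset and array length in the binary companion file, and both files carry the same identifier so they cannot be mismatched. In this sketch the float32 encoding and the exact header layout are illustrative assumptions rather than the full imzML specification:

```python
import struct
import uuid

def read_spectrum(ibd: bytes, offset: int, length: int) -> list:
    # The XML file would supply `offset` and `length` for each
    # spectrum; the array here is little-endian float32.
    return list(struct.unpack_from(f"<{length}f", ibd, offset))

shared_uuid = uuid.uuid4().bytes  # same 16 bytes recorded in the XML metadata
# Binary companion file: UUID header, then two spectra back to back.
ibd = (shared_uuid
       + struct.pack("<3f", 1.0, 2.0, 3.0)
       + struct.pack("<2f", 4.0, 5.0))
first = read_spectrum(ibd, offset=16, length=3)
second = read_spectrum(ibd, offset=28, length=2)
```

Keeping spectra in a flat binary file means a reader can seek straight to any spectrum without parsing the rest, which is what makes the format practical for very large imaging runs.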

  1. Cardio-PACs: a new opportunity

    NASA Astrophysics Data System (ADS)

    Heupler, Frederick A., Jr.; Thomas, James D.; Blume, Hartwig R.; Cecil, Robert A.; Heisler, Mary

    2000-05-01

    It is now possible to replace film-based image management in the cardiac catheterization laboratory with a Cardiology Picture Archiving and Communication System (Cardio-PACS) based on digital imaging technology. The first step in the conversion process is installation of a digital image acquisition system that is capable of generating high-quality DICOM-compatible images. The next three steps, which are the subject of this presentation, involve image display, distribution, and storage. Clinical requirements and associated cost considerations for these three steps are listed below: Image display: (1) Image quality equal to film, with DICOM format, lossless compression, image processing, desktop PC-based with color monitor, and physician-friendly imaging software; (2) Performance specifications include: acquire 30 frames/sec; replay 15 frames/sec; access to file server 5 seconds, and to archive 5 minutes; (3) Compatibility of image file, transmission, and processing formats; (4) Image manipulation: brightness, contrast, gray scale, zoom, biplane display, and quantification; (5) User-friendly control of image review. Image distribution: (1) Standard IP-based network between cardiac catheterization laboratories, file server, long-term archive, review stations, and remote sites; (2) Non-proprietary formats; (3) Bidirectional distribution. Image storage: (1) CD-ROM vs disk vs tape; (2) Verification of data integrity; (3) User-designated storage capacity for catheterization laboratory, file server, long-term archive. Costs: (1) Image acquisition equipment, file server, long-term archive; (2) Network infrastructure; (3) Review stations and software; (4) Maintenance and administration; (5) Future upgrades and expansion; (6) Personnel.

  2. Integration of DICOM and openEHR standards

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Yao, Zhihong; Liu, Lei

    2011-03-01

    The standard format for medical imaging storage and transmission is DICOM. openEHR is an open standard specification in health informatics that describes the management, storage, retrieval and exchange of health data in electronic health records. Considering that the integration of DICOM and openEHR is beneficial to information sharing, on the basis of the XML-based DICOM format we developed a method of creating a DICOM imaging archetype in openEHR to enable the integration of the two standards. Each DICOM file contains abundant imaging information, but because reading a DICOM file involves looking up the DICOM Data Dictionary, its readability is limited. openEHR has innovatively adopted a two-level modeling method, dividing clinical information into a lower level, the information model, and an upper level, archetypes and templates. One critical challenge posed to the development of openEHR, however, is information sharing, especially imaging information sharing; for example, some important imaging information cannot be displayed in an openEHR file. In this paper, to enhance the readability of a DICOM file and the semantic interoperability of an openEHR file, we developed a method of mapping a DICOM file to an openEHR file by adopting the archetype form defined in openEHR. Because an archetype has a tree structure, after mapping, the converted information is structured in conformance with the openEHR format. This method enables the integration of DICOM and openEHR and data exchange between the two standards without loss of imaging information.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Temple, Brian Allen; Armstrong, Jerawan Chudoung

    This document is a mid-year report on a deliverable for the PYTHON Radiography Analysis Tool (PyRAT) for project LANL12-RS-107J in FY15. The deliverable is number 2 in the work package and is titled “Add the ability to read in more types of image file formats in PyRAT”. Currently, PyRAT can read only uncompressed TIFF files. The file formats that PyRAT can read are being expanded, making the tool easier to use in more situations. The formats added include JPEG (jpeg/jpg), PNG, and formatted ASCII files.
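Supporting several input formats in a tool like this usually comes down to dispatching on the file extension. A sketch of that pattern (the reader names and table are placeholders, not PyRAT internals):

```python
from pathlib import Path

# Placeholder reader table; a real tool would map to decoder functions.
READERS = {".tif": "tiff", ".tiff": "tiff",
           ".jpg": "jpeg", ".jpeg": "jpeg",
           ".png": "png", ".txt": "ascii"}

def pick_reader(filename: str) -> str:
    # Normalize the extension so dispatch is case-insensitive.
    ext = Path(filename).suffix.lower()
    if ext not in READERS:
        raise ValueError(f"unsupported image format: {ext!r}")
    return READERS[ext]

choice = pick_reader("radiograph.PNG")
```

Failing loudly on unknown extensions keeps the error close to the user's mistake instead of surfacing later as a garbled image.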

  4. OMERO and Bio-Formats 5: flexible access to large bioimaging datasets at scale

    NASA Astrophysics Data System (ADS)

    Moore, Josh; Linkert, Melissa; Blackburn, Colin; Carroll, Mark; Ferguson, Richard K.; Flynn, Helen; Gillen, Kenneth; Leigh, Roger; Li, Simon; Lindner, Dominik; Moore, William J.; Patterson, Andrew J.; Pindelski, Blazej; Ramalingam, Balaji; Rozbicki, Emil; Tarkowska, Aleksandra; Walczysko, Petr; Allan, Chris; Burel, Jean-Marie; Swedlow, Jason

    2015-03-01

    The Open Microscopy Environment (OME) has built and released, under open-source licenses, Bio-Formats, a Java-based proprietary-file-format conversion tool, and OMERO, an enterprise data management platform. In this report, we describe new versions of Bio-Formats and OMERO that are specifically designed to support the large, multi-gigabyte or terabyte scale datasets that are routinely collected across most domains of biological and biomedical research. Bio-Formats reads image data directly from native proprietary formats, bypassing the need for conversion into a standard format. It implements the concept of a file set, a container that defines the contents of multi-dimensional data comprised of many files. OMERO uses Bio-Formats to read files natively, and provides a flexible access mechanism that supports several different storage and access strategies. These new capabilities make OMERO and Bio-Formats especially useful in imaging applications such as digital pathology, high content screening and light sheet microscopy, which routinely create large datasets that must be managed and analyzed.

  5. BOREAS RSS-14 Level-1a GOES-8 Visible, IR and Water Vapor Images

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Newcomer, Jeffrey A.; Faysash, David; Cooper, Harry J.; Smith, Eric A.

    2000-01-01

    The BOREAS RSS-14 team collected and processed several GOES-7 and GOES-8 image data sets that covered the BOREAS study region. The level-1a GOES-8 images were created by BORIS personnel from the level-1 images delivered by FSU personnel. The data cover 14-Jul-1995 to 21-Sep-1995 and 12-Feb-1996 to 03-Oct-1996. The data start out as three bands with 8-bit pixel values and end up as five bands with 10-bit pixel values. No major problems with the data have been identified. The differences between the level-1 and level-1a GOES-8 data are the formatting and packaging of the data. The images missing from the temporal series of level-1 GOES-8 images were zero-filled by BORIS staff to create files consistent in size and format. In addition, BORIS staff packaged all the images of a given type from a given day into a single file, removed the header information from the individual level-1 files, and placed it into a single descriptive ASCII header file. The data are contained in binary image format files. Due to the large size of the images, the level-1a GOES-8 data are not contained on the BOREAS CD-ROM set. An inventory listing file is supplied on the CD-ROM to inform users of what data were collected. The level-1a GOES-8 image data are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). See sections 15 and 16 for more information. The data files are available on a CD-ROM (see document number 20010000884).

  6. 76 FR 10405 - Federal Copyright Protection of Sound Recordings Fixed Before February 15, 1972

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-24

    ... file in either the Adobe Portable Document File (PDF) format that contains searchable, accessible text (not an image); Microsoft Word; WordPerfect; Rich Text Format (RTF); or ASCII text file format (not a..., comments may be delivered in hard copy. If hand delivered by a private party, an original [[Page 10406...

  7. TM digital image products for applications. [computer compatible tapes

    NASA Technical Reports Server (NTRS)

    Barker, J. L.; Gunther, F. J.; Abrams, R. B.; Ball, D.

    1984-01-01

    The image characteristics of digital data generated by the LANDSAT 4 thematic mapper (TM) are discussed. Digital data from the TM reside in tape files at various stages of image processing. Within each image data file, the image lines are blocked by a factor of either 5 for a computer-compatible tape CCT-BT, or 4 for a CCT-AT and CCT-PT; in each case, the image file has a different format. Nominal geometric corrections which provide proper geodetic relationships between different parts of the image are available only for the CCT-PT. It is concluded that detector 3 of band 5 on the TM does not respond; this channel of data needs replacement. The empty-bin phenomenon in CCT-AT images results from integer truncation of mixed-mode arithmetic operations.

  8. Informatics in radiology (infoRAD): free DICOM image viewing and processing software for the Macintosh computer: what's available and what it can do for you.

    PubMed

    Escott, Edward J; Rubinstein, David

    2004-01-01

    It is often necessary for radiologists to use digital images in presentations and conferences. Most imaging modalities produce images in the Digital Imaging and Communications in Medicine (DICOM) format. The image files tend to be large and thus cannot be directly imported into most presentation software, such as Microsoft PowerPoint; the large files also consume storage space. There are many free programs that allow viewing and processing of these files on a personal computer, including conversion to more common file formats such as the Joint Photographic Experts Group (JPEG) format. Free DICOM image viewing and processing software for computers running on the Microsoft Windows operating system has already been evaluated. However, many people use the Macintosh (Apple Computer) platform, and a number of programs are available for these users. The World Wide Web was searched for free DICOM image viewing or processing software that was designed for the Macintosh platform or is written in Java and is therefore platform independent. The features of these programs and their usability were evaluated. There are many free programs for the Macintosh platform that enable viewing and processing of DICOM images. (c) RSNA, 2004.

  9. Dependency Tree Annotation Software

    DTIC Science & Technology

    2015-11-01

    formats, and it provides numerous options for customizing how dependency trees are displayed. Built entirely in Java, it can run on a wide range of... tree can be saved as an image, .mxe (an mxGraph editing file), a .conll file, and several other file formats. DTE uses the open source Java version

  10. Community Oncology and Prevention Trials | Division of Cancer Prevention

    Cancer.gov


  11. Developing a radiology-based teaching approach for gross anatomy in the digital era.

    PubMed

    Marker, David R; Bansal, Anshuman K; Juluru, Krishna; Magid, Donna

    2010-08-01

    The purpose of this study was to assess the implementation of a digital anatomy lecture series based largely on annotated, radiographic images and the utility of the Radiological Society of North America-developed Medical Imaging Resource Center (MIRC) for providing an online educational resource. A series of digital teaching images were collected and organized to correspond to lecture and dissection topics. MIRC was used to provide the images in a Web-based educational format for incorporation into anatomy lectures and as a review resource. A survey assessed the impressions of the medical students regarding this educational format. MIRC teaching files were successfully used in our teaching approach. The lectures were interactive with questions to and from the medical student audience regarding the labeled images used in the presentation. Eighty-five of 120 students completed the survey. The majority of students (87%) indicated that the MIRC teaching files were "somewhat useful" to "very useful" when incorporated into the lecture. The students who used the MIRC files were most likely to access the material from home (82%) on an occasional basis (76%). With regard to areas for improvement, 63% of the students reported that they would have benefited from more teaching files, and only 9% of the students indicated that the online files were not user friendly. The combination of electronic radiology resources available in lecture format and on the Internet can provide multiple opportunities for medical students to learn and revisit first-year anatomy. MIRC provides a user-friendly format for presenting radiology education files for medical students. 2010 AUR. Published by Elsevier Inc. All rights reserved.

  12. Main image file tape description

    USGS Publications Warehouse

    Warriner, Howard W.

    1980-01-01

    This Main Image File Tape document defines the data content and file structure of the Main Image File Tape (MIFT) produced by the EROS Data Center (EDC). This document also defines an INQUIRY tape, which is just a subset of the MIFT. The format of the INQUIRY tape is identical to the MIFT except for two records; therefore, with the exception of these two records (described elsewhere in this document), every remark made about the MIFT is true for the INQUIRY tape.

  13. MSiReader: an open-source interface to view and analyze high resolving power MS imaging files on Matlab platform.

    PubMed

    Robichaud, Guillaume; Garrard, Kenneth P; Barry, Jeremy A; Muddiman, David C

    2013-05-01

    During the past decade, the field of mass spectrometry imaging (MSI) has greatly evolved, to a point where it has now been fully integrated by most vendors as an optional or dedicated platform that can be purchased with their instruments. However, the technology is not mature, and multiple research groups in both academia and industry are still very actively studying the fundamentals of imaging techniques, adapting the technology to new ionization sources, and developing new applications. As a result, there is a wide variety of data file formats used to store mass spectrometry imaging data and, concurrent with the development of MSI, collaborative efforts have been undertaken to introduce common imaging data file formats. However, few free software packages to read and analyze files of these different formats are readily available. We introduce here MSiReader, a free open source application to read and analyze high resolution MSI data from the most common MSI data formats. The application is built on the Matlab platform (Mathworks, Natick, MA, USA) and includes a large selection of data analysis tools and features. People who are unfamiliar with the Matlab language will have little difficulty navigating the user-friendly interface, and users with Matlab programming experience can adapt and customize MSiReader for their own needs.

  14. MSiReader: An Open-Source Interface to View and Analyze High Resolving Power MS Imaging Files on Matlab Platform

    NASA Astrophysics Data System (ADS)

    Robichaud, Guillaume; Garrard, Kenneth P.; Barry, Jeremy A.; Muddiman, David C.

    2013-05-01

    During the past decade, the field of mass spectrometry imaging (MSI) has greatly evolved, to a point where it has now been fully integrated by most vendors as an optional or dedicated platform that can be purchased with their instruments. However, the technology is not mature, and multiple research groups in both academia and industry are still very actively studying the fundamentals of imaging techniques, adapting the technology to new ionization sources, and developing new applications. As a result, there is a wide variety of data file formats used to store mass spectrometry imaging data and, concurrent with the development of MSI, collaborative efforts have been undertaken to introduce common imaging data file formats. However, few free software packages to read and analyze files of these different formats are readily available. We introduce here MSiReader, a free open source application to read and analyze high resolution MSI data from the most common MSI data formats. The application is built on the Matlab platform (Mathworks, Natick, MA, USA) and includes a large selection of data analysis tools and features. People who are unfamiliar with the Matlab language will have little difficulty navigating the user-friendly interface, and users with Matlab programming experience can adapt and customize MSiReader for their own needs.

  15. 5 CFR 1201.14 - Electronic filing procedures.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... (PDF), and image files (files created by scanning). A list of formats allowed can be found at e-Appeal..., or by uploading the supporting documents in the form of one or more PDF files in which each...

  16. Chemopreventive Agent Development | Division of Cancer Prevention

    Cancer.gov


  17. Informatics in radiology (infoRAD): Vendor-neutral case input into a server-based digital teaching file system.

    PubMed

    Kamauu, Aaron W C; DuVall, Scott L; Robison, Reid J; Liimatta, Andrew P; Wiggins, Richard H; Avrin, David E

    2006-01-01

    Although digital teaching files are important to radiology education, there are no current satisfactory solutions for export of Digital Imaging and Communications in Medicine (DICOM) images from picture archiving and communication systems (PACS) in desktop publishing format. A vendor-neutral digital teaching file, the Radiology Interesting Case Server (RadICS), offers an efficient tool for harvesting interesting cases from PACS without requiring modifications of the PACS configurations. Radiologists push imaging studies from PACS to RadICS via the standard DICOM Send process, and the RadICS server automatically converts the DICOM images into the Joint Photographic Experts Group format, a common desktop publishing format. They can then select key images and create an interesting case series at the PACS workstation. RadICS was tested successfully against multiple unmodified commercial PACS. Using RadICS, radiologists are able to harvest and author interesting cases at the point of clinical interpretation with minimal disruption in clinical work flow. RSNA, 2006

  18. Prostate and Urologic Cancer | Division of Cancer Prevention

    Cancer.gov


  19. Reprocessing of multi-channel seismic-reflection data collected in the Beaufort Sea

    USGS Publications Warehouse

    Agena, W.F.; Lee, Myung W.; Hart, P.E.

    2000-01-01

    Contained on this set of two CD-ROMs are stacked and migrated multi-channel seismic-reflection data for 65 lines recorded in the Beaufort Sea by the United States Geological Survey in 1977. All data were reprocessed by the USGS using updated processing methods resulting in improved interpretability. Each of the two CD-ROMs contains the following files: 1) 65 files containing the digital seismic data in standard, SEG-Y format; 2) 1 file containing navigation data for the 65 lines in standard SEG-P1 format; 3) an ASCII text file with cross-reference information for relating the sequential trace numbers on each line to cdp numbers and shotpoint numbers; 4) 2 small scale graphic images (stacked and migrated) of a segment of line 722 in Adobe Acrobat (R) PDF format; 5) a graphic image of the location map, generated from the navigation file; 6) PlotSeis, an MS-DOS Application that allows PC users to interactively view the SEG-Y files; 7) a PlotSeis documentation file; and 8) an explanation of the processing used to create the final seismic sections (this document).
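
    The SEG-Y files on such CD-ROMs begin with a 3200-byte EBCDIC textual header followed by a 400-byte big-endian binary header. As a hedged sketch (field offsets per the SEG-Y rev 1 specification; not software distributed with this report), a few key fields can be read with Python's struct module:

```python
import struct

SEGY_TEXT_HEADER = 3200  # the EBCDIC textual header precedes the binary header

def parse_segy_binary_header(header: bytes) -> dict:
    """Extract a few standard fields from a 400-byte SEG-Y binary header
    (big-endian 16-bit integers, offsets per the SEG-Y rev 1 spec)."""
    interval, = struct.unpack_from(">H", header, 16)  # sample interval, microseconds
    nsamples, = struct.unpack_from(">H", header, 20)  # samples per data trace
    fmt_code, = struct.unpack_from(">H", header, 24)  # data sample format code
    return {"sample_interval_us": interval,
            "samples_per_trace": nsamples,
            "format_code": fmt_code}

# Synthetic header: 4 ms sampling, 1500 samples, IBM float (format code 1).
hdr = bytearray(400)
struct.pack_into(">H", hdr, 16, 4000)
struct.pack_into(">H", hdr, 20, 1500)
struct.pack_into(">H", hdr, 24, 1)
print(parse_segy_binary_header(bytes(hdr)))
```

When reading a real file, the binary header starts at byte offset `SEGY_TEXT_HEADER` (3200), and the trace records follow it.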

  20. Image tools for UNIX

    NASA Technical Reports Server (NTRS)

    Banks, David C.

    1994-01-01

    This talk features two simple and useful tools for digital image processing in the UNIX environment: xv and pbmplus. The xv image viewer, which runs under the X Window System, reads images in a number of different file formats and writes them out in different formats. The view area supports a pop-up control panel. The 'algorithms' menu lets you blur an image. The xv control panel also activates the color editor, which displays the image's color map (if one exists). The xv image viewer is available through the Internet. The pbmplus package is a set of tools designed to perform image processing from within a UNIX shell. The acronym 'pbm' stands for portable bit map. Like xv, the pbmplus tools can convert images from and to many different file formats. The source code and manual pages for pbmplus are also available through the Internet. This software is in the public domain.
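
    The 'portable bit map' format that gives pbmplus its name is simple enough to write by hand: an ASCII file starting with the magic 'P1', then the width and height, then 0/1 pixel values. A minimal writer sketch (illustrative only; not code from the pbmplus distribution):

```python
def write_pbm(path, bitmap):
    """Write a binary image as an ASCII PBM ('P1') file, the simplest
    of the portable formats: magic, width height, then 0/1 pixels."""
    height = len(bitmap)
    width = len(bitmap[0]) if height else 0
    with open(path, "w") as f:
        f.write("P1\n%d %d\n" % (width, height))
        for row in bitmap:
            f.write(" ".join(str(p) for p in row) + "\n")

# A 3x3 checkerboard.
write_pbm("checker.pbm", [[1, 0, 1], [0, 1, 0], [1, 0, 1]])
```

The resulting file can be viewed with xv or converted to other formats with the pbmplus tools.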

  1. Java Image I/O for VICAR, PDS, and ISIS

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.; Levoe, Steven R.

    2011-01-01

    This library, written in Java, supports input and output of images and metadata (labels) in the VICAR, PDS image, and ISIS-2 and ISIS-3 file formats. Three levels of access exist. The first level comprises the low-level, direct access to the file. This allows an application to read and write specific image tiles, lines, or pixels and to manipulate the label data directly. This layer is analogous to the C-language "VICAR Run-Time Library" (RTL), which is the image I/O library for the (C/C++/Fortran) VICAR image processing system from JPL MIPL (Multimission Image Processing Lab). This low-level library can also be used to read and write labeled, uncompressed images stored in formats similar to VICAR, such as ISIS-2 and -3, and a subset of PDS (image format). The second level of access involves two codecs based on Java Advanced Imaging (JAI) to provide access to VICAR and PDS images in a file-format-independent manner. JAI is supplied by Sun Microsystems as an extension to desktop Java, and has a number of codecs for formats such as GIF, TIFF, JPEG, etc. Although Sun has deprecated the codec mechanism (replaced by IIO), it is still used in many places. The VICAR and PDS codecs allow any program written using the JAI codec spec to use VICAR or PDS images automatically, with no specific knowledge of the VICAR or PDS formats. Support for metadata (labels) is included, but is format-dependent. The PDS codec, when processing PDS images with an embedded VICAR label ("dual-labeled images," such as used for MER), presents the VICAR label in a new way that is compatible with the VICAR codec. The third level of access involves VICAR, PDS, and ISIS Image I/O plugins. The Java core includes an "Image I/O" (IIO) package that is similar in concept to the JAI codec, but is newer and more capable. Applications written to the IIO specification can use any image format for which a plug-in exists, with no specific knowledge of the format itself.
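
    For context, a VICAR label is plain ASCII: a sequence of KEY=VALUE pairs whose first pair, LBLSIZE, gives the total label size in bytes. The sketch below is a deliberately simplified reader (it does not handle quoted values containing spaces) and is not the JPL library described above:

```python
def parse_vicar_label(raw: bytes) -> dict:
    """Parse the leading VICAR label: ASCII KEY=VALUE pairs, the first of
    which (LBLSIZE) gives the label's total size in bytes. Simplified:
    quoted values containing spaces are not handled."""
    text = raw.decode("ascii", errors="replace")
    first = text.split(None, 1)[0]        # e.g. "LBLSIZE=80"
    size = int(first.split("=", 1)[1])    # how far the label extends
    fields = {}
    for item in text[:size].split():
        if "=" in item:
            key, value = item.split("=", 1)
            fields[key] = value.strip("'")
    return fields

label = b"LBLSIZE=80 FORMAT='BYTE' NL=100 NS=200"
label += b" " * (80 - len(label))   # VICAR labels are padded out to LBLSIZE
print(parse_vicar_label(label))
```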

  2. Is HDF5 a Good Format to Replace UVFITS?

    NASA Astrophysics Data System (ADS)

    Price, D. C.; Barsdell, B. R.; Greenhill, L. J.

    2015-09-01

    The FITS (Flexible Image Transport System) data format was developed in the late 1970s for storage and exchange of astronomy-related image data. Since then, it has become a standard file format not only for images, but also for radio interferometer data (e.g. UVFITS, FITS-IDI). But is FITS the right format for next-generation telescopes to adopt? The newer Hierarchical Data Format (HDF5) file format offers considerable advantages over FITS, but has yet to gain widespread adoption within the radio astronomy community. One of the major obstacles is that HDF5 is not well supported by data reduction software packages. Here, we present a comparison of FITS, HDF5, and the MeasurementSet (MS) format for storage of interferometric data. In addition, we present a tool for converting between formats. We show that the underlying data model of FITS can be ported to HDF5, a first step toward achieving wider HDF5 support.
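
    FITS headers are built from fixed 80-character 'cards' terminated by an END card, which is part of what makes the format easy to support. A hedged sketch of a minimal card reader (simplified: values are kept as strings and comments after '/' are dropped; not the conversion tool presented in the paper):

```python
def parse_fits_header(block: bytes) -> dict:
    """Walk 80-character FITS header cards up to the END card.
    Simplified: values are returned as stripped strings."""
    cards = {}
    for i in range(0, len(block), 80):
        card = block[i:i + 80].decode("ascii")
        keyword = card[:8].strip()
        if keyword == "END":
            break
        if card[8:10] == "= ":                       # fixed-format value indicator
            cards[keyword] = card[10:].split("/", 1)[0].strip()
    return cards

def card(text):
    """Pad a card to the mandatory 80 characters."""
    return text.ljust(80).encode("ascii")

header = card("SIMPLE  =                    T") + \
         card("BITPIX  =                   16") + \
         card("NAXIS   =                    2") + \
         card("END")
print(parse_fits_header(header))
```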

  3. Regional seismic lines reprocessed using post-stack processing techniques; National Petroleum Reserve, Alaska

    USGS Publications Warehouse

    Miller, John J.; Agena, W.F.; Lee, M.W.; Zihlman, F.N.; Grow, J.A.; Taylor, D.J.; Killgore, Michele; Oliver, H.L.

    2000-01-01

    This CD-ROM contains stacked, migrated, 2-dimensional seismic reflection data and associated support information for 22 regional seismic lines (3,470 line-miles) recorded in the National Petroleum Reserve - Alaska (NPRA) from 1974 through 1981. Together, these lines constitute about one-quarter of the seismic data collected as part of the Federal Government's program to evaluate the petroleum potential of the Reserve. The regional lines, which form a grid covering the entire NPRA, were created by combining various individual lines recorded in different years using different recording parameters. These data were reprocessed by the USGS using modern, post-stack processing techniques, to create a data set suitable for interpretation on interactive seismic interpretation computer workstations. Reprocessing was done in support of ongoing petroleum resource studies by the USGS Energy Program. The CD-ROM contains the following files: 1) 22 files containing the digital seismic data in standard SEG-Y format; 2) 1 file containing navigation data for the 22 lines in standard SEG-P1 format; 3) 22 small scale graphic images of each seismic line in Adobe Acrobat PDF format; 4) a graphic image of the location map, generated from the navigation file, with hyperlinks to the graphic images of the seismic lines; 5) an ASCII text file with cross-reference information for relating the sequential trace numbers on each regional line to the line number and shotpoint number of the original component lines; and 6) an explanation of the processing used to create the final seismic sections (this document). The SEG-Y format seismic files and SEG-P1 format navigation file contain all the information necessary for loading the data onto a seismic interpretation workstation.

  4. Software for browsing sectioned images of a dog body and generating a 3D model.

    PubMed

    Park, Jin Seo; Jung, Yong Wook

    2016-01-01

    The goals of this study were (1) to provide accessible and instructive browsing software for sectioned images and a portable document format (PDF) file that includes three-dimensional (3D) models of an entire dog body and (2) to develop techniques for segmentation and 3D modeling that would enable an investigator to perform these tasks without the aid of a computer engineer. To achieve these goals, relatively important or large structures in the sectioned images were outlined to generate segmented images. The sectioned and segmented images were then packaged into browsing software. In this software, structures in the sectioned images are shown in detail and in real color. After 3D models were made from the segmented images, the 3D models were exported into a PDF file, in which they can be manipulated freely. The browsing software and PDF file are available to students for study, to teachers for lectures, and to clinicians for training. These files will be helpful for the anatomical study and clinical training of veterinary students and clinicians. Furthermore, these techniques will be useful for researchers who study two-dimensional images and 3D models. © 2015 Wiley Periodicals, Inc.

  5. Pre-Launch Algorithm and Data Format for the Level 1 Calibration Products for the EOS AM-1 Moderate Resolution Imaging Spectroradiometer (MODIS)

    NASA Technical Reports Server (NTRS)

    Guenther, Bruce W.; Godden, Gerald D.; Xiong, Xiao-Xiong; Knight, Edward J.; Qiu, Shi-Yue; Montgomery, Harry; Hopkins, M. M.; Khayat, Mohammad G.; Hao, Zhi-Dong; Smith, David E. (Technical Monitor)

    2000-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) radiometric calibration product is described for the thermal emissive and the reflective solar bands. Specific sensor design characteristics are identified to assist in understanding how the calibration algorithm software product is designed. Both reflected solar band software products, radiance and reflectance factor, are described. The product file format is summarized, and the MODIS Characterization Support Team (MCST) Homepage location for the current file format is provided.

  6. MINC 2.0: A Flexible Format for Multi-Modal Images.

    PubMed

    Vincent, Robert D; Neelin, Peter; Khalili-Mahani, Najmeh; Janke, Andrew L; Fonov, Vladimir S; Robbins, Steven M; Baghdadi, Leila; Lerch, Jason; Sled, John G; Adalat, Reza; MacDonald, David; Zijdenbos, Alex P; Collins, D Louis; Evans, Alan C

    2016-01-01

    It is often useful that an imaging data format can afford rich metadata, be flexible, scale to very large file sizes, support multi-modal data, and have strong inbuilt mechanisms for data provenance. Beginning in 1992, MINC was developed as a system for flexible, self-documenting representation of neuroscientific imaging data with arbitrary orientation and dimensionality. The MINC system incorporates three broad components: a file format specification, a programming library, and a growing set of tools. In the early 2000s the MINC developers created MINC 2.0, which added support for 64-bit file sizes, internal compression, and a number of other modern features. Because of its extensible design, it has been easy to incorporate details of provenance in the header metadata, including an explicit processing history, unique identifiers, and vendor-specific scanner settings. This makes MINC ideal for use in large scale imaging studies and databases. It also makes it easy to adapt to new scanning sequences and modalities.

  7. Workflow opportunities using JPEG 2000

    NASA Astrophysics Data System (ADS)

    Foshee, Scott

    2002-11-01

    JPEG 2000 is a new image compression standard from ISO/IEC JTC1 SC29 WG1, the Joint Photographic Experts Group (JPEG) committee. Better thought of as a sibling to JPEG rather than a descendant, the JPEG 2000 standard offers wavelet based compression as well as companion file formats and related standardized technology. This paper examines the JPEG 2000 standard for features in four specific areas (compression, file formats, client-server, and conformance/compliance) that enable image workflows.

  8. Cloud Optimized Image Format and Compression

    NASA Astrophysics Data System (ADS)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud based image storage and processing requires re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud based elastic storage and computation environments. This paper will provide details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced against the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include its simple-to-implement algorithm, which enables it to be efficiently accessed using JavaScript. Combining this new cloud based image storage format and compression will help resolve some of the challenges of big image data on the internet.
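
    The compression-ratio-versus-CPU balance described above can be observed with any general-purpose codec. The sketch below uses Python's zlib (standing in for LERC, which it is not) on synthetic, smoothly varying 'imagery' to show how higher compression levels trade CPU time for size:

```python
import time
import zlib

# Synthetic "imagery": smoothly varying byte values compress well,
# loosely analogous to low-relief terrain or flat image regions.
data = bytes((x * x) % 251 for x in range(1_000_000))

for level in (1, 6, 9):
    start = time.perf_counter()
    packed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    print("level %d: ratio %.1f:1 in %.3f s"
          % (level, len(data) / len(packed), elapsed))
```

On real data the crossover point differs, which is exactly why a format aimed at cloud access must choose its codec and level deliberately.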

  9. OpenMSI: A High-Performance Web-Based Platform for Mass Spectrometry Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubel, Oliver; Greiner, Annette; Cholia, Shreyas

    Mass spectrometry imaging (MSI) enables researchers to directly probe endogenous molecules within the architecture of the biological matrix. Unfortunately, efficient access, management, and analysis of the data generated by MSI approaches remain major challenges to this rapidly developing field. Despite the availability of numerous dedicated file formats and software packages, it is a widely held viewpoint that the biggest challenge is simply opening, sharing, and analyzing a file without loss of information. Here we present OpenMSI, a software framework and platform that addresses these challenges via an advanced, high-performance, extensible file format and Web API for remote data access (http://openmsi.nersc.gov). The OpenMSI file format supports storage of raw MSI data, metadata, and derived analyses in a single, self-describing format based on HDF5 and is supported by a large range of analysis software (e.g., Matlab and R) and programming languages (e.g., C++, Fortran, and Python). Careful optimization of the storage layout of MSI data sets using chunking, compression, and data replication accelerates common, selective data access operations while minimizing data storage requirements, and is a critical enabler of rapid data I/O. The OpenMSI file format has been shown to provide a >2000-fold improvement for image access operations, enabling spectrum and image retrieval in less than 0.3 s across the Internet, even for 50 GB MSI data sets. To make remote high-performance compute resources accessible for analysis and to facilitate data sharing and collaboration, we describe an easy-to-use yet powerful Web API, enabling fast and convenient access to MSI data, metadata, and derived analysis results stored remotely, to facilitate high-performance data analysis and enable implementation of Web based data sharing, visualization, and analysis.
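
    The selective-access benefit of chunked layouts comes down to reading only the chunks a query touches. The computation below is illustrative, not OpenMSI code; `chunks_for_slice` is a hypothetical helper that reports which chunks a 2-D selection intersects:

```python
def chunks_for_slice(row_range, col_range, chunk_shape):
    """Return the (row, col) indices of the chunks a 2-D selection touches.
    A chunked layout lets a reader fetch only these chunks rather than
    the whole data set -- the selective-access pattern chunking serves."""
    rows, cols = chunk_shape
    r0, r1 = row_range        # half-open [r0, r1)
    c0, c1 = col_range
    return [(r, c)
            for r in range(r0 // rows, (r1 - 1) // rows + 1)
            for c in range(c0 // cols, (c1 - 1) // cols + 1)]

# A 100x100 selection from a large grid stored in 64x64 chunks
# touches only 4 chunks, no matter how big the full data set is.
print(chunks_for_slice((0, 100), (0, 100), (64, 64)))
```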

  10. Chapter 6. Tabular data and graphical images in support of the U.S. Geological Survey National Oil and Gas Assessment-East Texas basin and Louisiana-Mississippi salt basins provinces, Jurassic Smackover interior salt basins total petroleum system (504902), Travis Peak and Hosston formations.

    USGS Publications Warehouse

    ,

    2006-01-01

    This chapter describes data used in support of the process being applied by the U.S. Geological Survey (USGS) National Oil and Gas Assessment (NOGA) project. Digital tabular data used in this report, and archival data that permit the user to perform further analyses, are available elsewhere on the CD-ROM. The data can be imported by computer software without transcription from the Portable Document Format (.pdf) files of the text. Because of the number and variety of platforms and software available, graphical images are provided as .pdf files and tabular data are provided in raw form as tab-delimited text files (.tab files).

  11. Effect of Instrumentation Length and Instrumentation Systems: Hand Versus Rotary Files on Apical Crack Formation – An In vitro Study

    PubMed Central

    Mahesh, MC; Bhandary, Shreetha

    2017-01-01

    Introduction Stresses generated during root canal instrumentation have been reported to cause apical cracks. The smaller, less pronounced defects like cracks can later propagate into vertical root fracture when the tooth is subjected to repeated stresses from endodontic or restorative procedures. Aim This study evaluated the occurrence of apical cracks with stainless steel hand files and rotary NiTi RaCe and K3 files at two different instrumentation lengths. Materials and Methods In the present in vitro study, 60 mandibular premolars were mounted in resin blocks with simulated periodontal ligament. The apical 3 mm of the root surfaces was exposed and stained using India ink. Preoperative images of the root apices were obtained at 100x using a stereomicroscope. The teeth were divided into six groups of 10 each. The first two groups were instrumented with stainless steel files, the next two groups with rotary NiTi RaCe files, and the last two groups with rotary NiTi K3 files. The instrumentation was carried out to the apical foramen (Working Length, WL) and to 1 mm short of the apical foramen (WL-1) with each file system. After root canal instrumentation, postoperative images of the root apices were obtained. Preoperative and postoperative images were compared and the occurrence of cracks was recorded. Descriptive statistical analysis and Chi-square tests were used to analyze the results. Results Apical root cracks were seen in 30%, 35% and 20% of teeth instrumented with K-files, RaCe files and K3 files respectively. There was no statistically significant difference among the three instrumentation systems in the formation of apical cracks (p=0.563). Apical cracks were seen in 40% and 20% of teeth instrumented with K-files; 60% and 10% of teeth with RaCe files; and 40% and 0% of teeth with K3 files at WL and WL-1 respectively. For the groups instrumented with hand files there was no statistically significant difference in the number of cracks at WL and WL-1 (p=0.628). But for teeth instrumented with RaCe files and K3 files, significantly more cracks were seen at WL than at WL-1 (p=0.057 for RaCe files and p=0.087 for K3 files). Conclusion There was no statistically significant difference between stainless steel hand files and rotary files in terms of crack formation. Instrumentation length had a significant effect on the formation of cracks when rotary files were used. Using rotary instruments 1 mm short of the apical foramen caused less crack formation. However, there was no statistically significant difference in the number of cracks formed with hand files at the two instrumentation levels. PMID:28274036
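
    For readers unfamiliar with the statistic, the Pearson chi-square value for a 2x2 table such as these crack counts can be computed by hand. The sketch below uses illustrative counts derived from the reported percentages (10 teeth per group, e.g. 6/10 cracked at WL versus 1/10 at WL-1 for RaCe files); it is not the authors' analysis script:

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 table of counts, from
    expected cell frequencies (no continuity correction)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    stat = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (obs - expected) ** 2 / expected
    return stat

# Rows: WL vs WL-1; columns: cracked vs intact (illustrative RaCe counts).
print(round(chi_square_2x2([[6, 4], [1, 9]]), 3))
```

The statistic is then compared against the chi-square distribution with one degree of freedom to obtain a p-value.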

  12. Effect of Instrumentation Length and Instrumentation Systems: Hand Versus Rotary Files on Apical Crack Formation - An In vitro Study.

    PubMed

    Devale, Madhuri R; Mahesh, M C; Bhandary, Shreetha

    2017-01-01

    Stresses generated during root canal instrumentation have been reported to cause apical cracks. The smaller, less pronounced defects like cracks can later propagate into vertical root fracture when the tooth is subjected to repeated stresses from endodontic or restorative procedures. This study evaluated the occurrence of apical cracks with stainless steel hand files and rotary NiTi RaCe and K3 files at two different instrumentation lengths. In the present in vitro study, 60 mandibular premolars were mounted in resin blocks with simulated periodontal ligament. The apical 3 mm of the root surfaces was exposed and stained using India ink. Preoperative images of the root apices were obtained at 100x using a stereomicroscope. The teeth were divided into six groups of 10 each. The first two groups were instrumented with stainless steel files, the next two groups with rotary NiTi RaCe files, and the last two groups with rotary NiTi K3 files. The instrumentation was carried out to the apical foramen (Working Length, WL) and to 1 mm short of the apical foramen (WL-1) with each file system. After root canal instrumentation, postoperative images of the root apices were obtained. Preoperative and postoperative images were compared and the occurrence of cracks was recorded. Descriptive statistical analysis and Chi-square tests were used to analyze the results. Apical root cracks were seen in 30%, 35% and 20% of teeth instrumented with K-files, RaCe files and K3 files respectively. There was no statistically significant difference among the three instrumentation systems in the formation of apical cracks (p=0.563). Apical cracks were seen in 40% and 20% of teeth instrumented with K-files; 60% and 10% of teeth with RaCe files; and 40% and 0% of teeth with K3 files at WL and WL-1 respectively. For the groups instrumented with hand files there was no statistically significant difference in the number of cracks at WL and WL-1 (p=0.628). But for teeth instrumented with RaCe files and K3 files, significantly more cracks were seen at WL than at WL-1 (p=0.057 for RaCe files and p=0.087 for K3 files). There was no statistically significant difference between stainless steel hand files and rotary files in terms of crack formation. Instrumentation length had a significant effect on the formation of cracks when rotary files were used. Using rotary instruments 1 mm short of the apical foramen caused less crack formation. However, there was no statistically significant difference in the number of cracks formed with hand files at the two instrumentation levels.

  13. Cancer Biomarkers | Division of Cancer Prevention

    Cancer.gov


  14. Gastrointestinal and Other Cancers | Division of Cancer Prevention

    Cancer.gov


  15. Biometry | Division of Cancer Prevention

    Cancer.gov


  16. Possible costs associated with investigating and mitigating geologic hazards in rural areas of western San Mateo County, California with a section on using the USGS website to determine the cost of developing property for residences in rural parts of San Mateo County, California

    USGS Publications Warehouse

    Brabb, Earl E.; Roberts, Sebastian; Cotton, William R.; Kropp, Alan L.; Wright, Robert H.; Zinn, Erik N.; Digital database by Roberts, Sebastian; Mills, Suzanne K.; Barnes, Jason B.; Marsolek, Joanna E.

    2000-01-01

    This publication consists of a digital map database on a geohazards web site, http://kaibab.wr.usgs.gov/geohazweb/intro.htm, this text, and 43 digital map images available for downloading at this site. The report is stored as several digital files, in ARC export (uncompressed) format for the database, and Postscript and PDF formats for the map images. Several of the source data layers for the images have already been released in other publications by the USGS and are available for downloading on the Internet. These source layers are not included in this digital database, but rather a reference is given for the web site where the data can be found in digital format. The exported ARC coverages and grids lie in UTM zone 10 projection. The pamphlet, which only describes the content and character of the digital map database, is included as Postscript, PDF, and ASCII text files and is also available on paper as USGS Open-File Report 00-127. The full versatility of the spatial database is realized by importing the ARC export files into ARC/INFO or an equivalent GIS. Other GIS packages, including MapInfo and ARCVIEW, can also use the ARC export files. The Postscript map image can be used for viewing or plotting in computer systems with sufficient capacity, and the considerably smaller PDF image files can be viewed or plotted in full or in part from Adobe ACROBAT software running on Macintosh, PC, or UNIX platforms.

  17. A system for verifying models and classification maps by extraction of information from a variety of data sources

    NASA Technical Reports Server (NTRS)

    Norikane, L.; Freeman, A.; Way, J.; Okonek, S.; Casey, R.

    1992-01-01

    Recent updates to a geographical information system (GIS) called VICAR (Video Image Communication and Retrieval)/IBIS are described. The system is designed to handle data in many different formats (vector, raster, tabular) from many different sources (models, radar images, ground truth surveys, optical images). All the data are referenced to a single georeference plane, and average or typical values for parameters defined within a polygonal region are stored in a tabular file, called an info file. The info file format allows tracking of data in time, maintenance of links between component data sets and the georeference image, conversion of pixel values to 'actual' values (e.g., radar cross-section, luminance, temperature), graph plotting, data manipulation, generation of training vectors for classification algorithms, and comparison between actual measurements and model predictions (with ground truth data as input).

  18. 10 CFR 2.1011 - Management of electronic information.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... participants shall make textual (or, where non-text, image) versions of their documents available on a web... of the following acceptable formats: ASCII, native word processing (Word, WordPerfect), PDF Normal, or HTML. (iv) Image files must be formatted as TIFF CCITT G4 for bi-tonal images or PNG (Portable...

  19. 10 CFR 2.1011 - Management of electronic information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... participants shall make textual (or, where non-text, image) versions of their documents available on a web... of the following acceptable formats: ASCII, native word processing (Word, WordPerfect), PDF Normal, or HTML. (iv) Image files must be formatted as TIFF CCITT G4 for bi-tonal images or PNG (Portable...

  20. 5 CFR 1201.14 - Electronic filing procedures.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...-Appeal Online, in which case service is governed by paragraph (j) of this section, or by non-electronic... (PDF), and image files (files created by scanning). A list of formats allowed can be found at e-Appeal... representatives of the appeals in which they were filed. (j) Service of electronic pleadings and MSPB documents...

  1. 5 CFR 1201.14 - Electronic filing procedures.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...-Appeal Online, in which case service is governed by paragraph (j) of this section, or by non-electronic... (PDF), and image files (files created by scanning). A list of formats allowed can be found at e-Appeal... representatives of the appeals in which they were filed. (j) Service of electronic pleadings and MSPB documents...

  2. 5 CFR 1201.14 - Electronic filing procedures.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...-Appeal Online, in which case service is governed by paragraph (j) of this section, or by non-electronic... (PDF), and image files (files created by scanning). A list of formats allowed can be found at e-Appeal... representatives of the appeals in which they were filed. (j) Service of electronic pleadings and MSPB documents...

  3. 5 CFR 1201.14 - Electronic filing procedures.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...-Appeal Online, in which case service is governed by paragraph (j) of this section, or by non-electronic... (PDF), and image files (files created by scanning). A list of formats allowed can be found at e-Appeal... representatives of the appeals in which they were filed. (j) Service of electronic pleadings and MSPB documents...

  4. Development of a user-friendly system for image processing of electron microscopy by integrating a web browser and PIONE with Eos.

    PubMed

    Tsukamoto, Takafumi; Yasunaga, Takuo

    2014-11-01

    Eos (Extensible object-oriented system) is a powerful application for image processing of electron micrographs. Ordinarily, Eos works only through a character user interface (CUI) under operating systems such as OS X or Linux, which is not user-friendly: users of Eos need to be expert at image processing of electron micrographs and also need some knowledge of computer science, yet not everyone who needs Eos is comfortable with a CUI. We therefore extended Eos into an OS-independent web system with a graphical user interface (GUI) by integrating a web browser. The advantage of using a web browser is not only that it gives Eos a GUI, but also that it lets Eos work in a distributed computational environment. Using Ajax (Asynchronous JavaScript and XML) technology, we implemented a more comfortable user interface in the browser. Eos has more than 400 commands related to image processing for electron microscopy, and the usage of each command differs. Since the beginning of development, Eos has managed its user interfaces through interface definition files called "OptionControlFile", written in CSV (Comma-Separated Values) format; each command has an OptionControlFile that records the information needed to generate its interface and describe its usage. Because this mechanism is mature and convenient, the GUI system we developed, called "Zephyr" (Zone for Easy Processing of HYpermedia Resources), also reads the OptionControlFile and produces a web user interface automatically. The basic client-side system was implemented and supplies auto-generated web forms with functions for command execution, image preview, and file upload to a web server. The system can thus execute Eos commands, each with its own unique options, and perform image analysis.
    Two problems remained: the image file format used for visualization, and the workspace for analysis. Image file format information is needed to check whether an input or output file is correct, and a common workspace for analysis is needed because the client is physically separated from the server. We solved the file format problem by extending the rules of the Eos OptionControlFile. To solve the workspace problem, we developed two types of systems. The first uses only the local environment: the user runs a web server provided by Eos, accesses a web client through a browser, and manipulates local files with the GUI in the browser. The second employs PIONE (Process-rule for Input/Output Negotiation Environment), our platform for heterogeneous distributed environments. Users can put their resources, such as microscope images and text files, into the PIONE-backed server-side environment, and experts can write PIONE rule definitions, which define an image processing workflow. PIONE then runs each image processing step on a suitable computer, following the defined rules. PIONE supports interactive manipulation, so a user can try a command with various setting values; here, we contribute auto-generation of GUIs for PIONE workflows. As an advanced function, we also developed a module that logs user actions. The logs include information such as the setting values used in image processing and the sequence of commands. Used effectively, these logs offer many advantages: for example, when an expert discovers some know-how in image processing, other users can share the logs containing it, and by analyzing logs we may derive recommended image analysis workflows. To implement a social platform of image processing for electron microscopists, we have developed the system infrastructure as well. © The Author 2014.
    Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
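    The OptionControlFile mechanism described above, where each command's CSV interface definition drives automatic generation of a web form, can be sketched as follows. The column layout (option name, type, default, description) and the option rows are hypothetical illustrations, not the actual Eos definitions.

```python
import csv
import io

# Hypothetical OptionControlFile-style CSV: one option per row, with
# columns (name, type, default, description). The real Eos definition
# files use a richer layout; this is only an illustration of the idea.
OPTION_CSV = """\
-i,filename,,input image file
-o,filename,,output image file
-r,float,1.0,resolution in angstrom/pixel
"""

def generate_form(csv_text: str) -> str:
    """Auto-generate an HTML form from a CSV option-definition file."""
    rows = csv.reader(io.StringIO(csv_text))
    fields = []
    for name, ftype, default, desc in rows:
        widget = "file" if ftype == "filename" else "text"
        fields.append(
            f'<label>{name} ({desc}): '
            f'<input type="{widget}" name="{name}" value="{default}"></label>'
        )
    return "<form>\n" + "\n".join(fields) + "\n</form>"

print(generate_form(OPTION_CSV))
```

    Because the form is derived entirely from the definition file, adding a new command needs no hand-written GUI code, which is the property Zephyr exploits.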

  5. Informatics in radiology (infoRAD): multimedia extension of medical imaging resource center teaching files.

    PubMed

    Yang, Guo Liang; Aziz, Aamer; Narayanaswami, Banukumar; Anand, Ananthasubramaniam; Lim, C C Tchoyoson; Nowinski, Wieslaw Lucjan

    2005-01-01

    A new method has been developed for multimedia enhancement of electronic teaching files created by using the standard protocols and formats offered by the Medical Imaging Resource Center (MIRC) project of the Radiological Society of North America. The typical MIRC electronic teaching file consists of static pages only; with the new method, audio and visual content may be added to the MIRC electronic teaching file so that the entire image interpretation process can be recorded for teaching purposes. With an efficient system for encoding the audiovisual record of on-screen manipulation of radiologic images, the multimedia teaching files generated are small enough to be transmitted via the Internet with acceptable resolution. Students may respond with the addition of new audio and visual content and thereby participate in a discussion about a particular case. MIRC electronic teaching files with multimedia enhancement have the potential to augment the effectiveness of diagnostic radiology teaching. RSNA, 2005.

  6. The Design and Usage of the New Data Management Features in NASTRAN

    NASA Technical Reports Server (NTRS)

    Pamidi, P. R.; Brown, W. K.

    1984-01-01

    Two new data management features are installed in the April 1984 release of NASTRAN: the Rigid Format Data Base and the READFILE capability. The Rigid Format Data Base is stored on external files in card image format and can be easily maintained and expanded with standard text editors. This data base provides the user and the NASTRAN maintenance contractor with an easy means of making changes to a Rigid Format, or of generating new Rigid Formats, without unnecessary compilations and link editing of NASTRAN. Each Rigid Format entry in the data base contains the Direct Matrix Abstraction Program (DMAP), along with the associated restart, DMAP sequence subset, and substructure control flags. The READFILE capability allows a user to reference an external secondary file from the NASTRAN primary input file and to read data from this secondary file. There is no limit to the number of external secondary files that may be referenced and read.
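    "Card image" format means fixed-width, 80-column records, one per punched-card line, which is why the data base can be maintained with ordinary text editors. A minimal sketch of reading such records (the uniform 8-column field split is illustrative, not NASTRAN's exact card syntax):

```python
# Read card-image records: each line is normalized to 80 columns and
# split into fixed 8-column fields. The example card below is a
# made-up illustration, not an entry from the Rigid Format Data Base.
def read_cards(text: str, width: int = 80, field: int = 8):
    cards = []
    for line in text.splitlines():
        line = line.ljust(width)[:width]          # normalize to 80 columns
        cards.append([line[i:i + field].strip()
                      for i in range(0, width, field)])
    return cards

cards = read_cards("GRID    1       0       0.0     0.0     0.0")
print(cards[0])
```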

  7. ISMRM Raw data format: A proposed standard for MRI raw datasets.

    PubMed

    Inati, Souheil J; Naegele, Joseph D; Zwart, Nicholas R; Roopchansingh, Vinai; Lizak, Martin J; Hansen, David C; Liu, Chia-Ying; Atkinson, David; Kellman, Peter; Kozerke, Sebastian; Xue, Hui; Campbell-Washburn, Adrienne E; Sørensen, Thomas S; Hansen, Michael S

    2017-01-01

    This work proposes the ISMRM Raw Data format as a common MR raw data format, which promotes algorithm and data sharing. A file format consisting of a flexible header and tagged frames of k-space data was designed. Application Programming Interfaces were implemented in C/C++, MATLAB, and Python. Converters for Bruker, General Electric, Philips, and Siemens proprietary file formats were implemented in C++. Raw data were collected using magnetic resonance imaging scanners from four vendors, converted to ISMRM Raw Data format, and reconstructed using software implemented in three programming languages (C++, MATLAB, Python). Images were obtained by reconstructing the raw data from all vendors. The source code, raw data, and images comprising this work are shared online, serving as an example of an image reconstruction project following a paradigm of reproducible research. The proposed raw data format solves a practical problem for the magnetic resonance imaging community. It may serve as a foundation for reproducible research and collaborations. The ISMRM Raw Data format is a completely open and community-driven format, and the scientific community (including commercial vendors) is invited to participate either as users or developers. Magn Reson Med 77:411-421, 2017. © 2016 Wiley Periodicals, Inc.
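    The "flexible header plus tagged frames of k-space data" design can be illustrated with a toy container: a self-describing header followed by frames, each carrying a tag and its complex samples. The byte layout, JSON header, and field names below are invented for illustration; the actual ISMRMRD specification differs.

```python
import json
import struct

# Toy "flexible header + tagged frames" container. Layout (invented):
#   uint32 header length | JSON header | repeated frames of
#   (uint32 tag, uint32 sample count, count x float32 re/im pairs).
def pack(header: dict, frames):
    blob = bytearray()
    hdr = json.dumps(header).encode()
    blob += struct.pack("<I", len(hdr)) + hdr
    for tag, samples in frames:
        blob += struct.pack("<II", tag, len(samples))
        for re_part, im_part in samples:
            blob += struct.pack("<ff", re_part, im_part)
    return bytes(blob)

def unpack(blob: bytes):
    n, = struct.unpack_from("<I", blob, 0)
    header = json.loads(blob[4:4 + n].decode())
    off, frames = 4 + n, []
    while off < len(blob):
        tag, count = struct.unpack_from("<II", blob, off)
        off += 8
        samples = [struct.unpack_from("<ff", blob, off + 8 * i)
                   for i in range(count)]
        off += 8 * count
        frames.append((tag, samples))
    return header, frames

hdr = {"matrix": [2, 2], "vendor": "example"}
data = pack(hdr, [(0, [(1.0, 0.0), (0.0, 1.0)])])
h2, f2 = unpack(data)
```

    Because the header is free-form while the frames are rigidly tagged, readers can skip frames they do not understand, which is the flexibility the format aims for.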

  8. Breast and Gynecologic Cancer | Division of Cancer Prevention

    Cancer.gov


  9. Lung and Upper Aerodigestive Cancer | Division of Cancer Prevention

    Cancer.gov


  10. PySE: Python Source Extractor for radio astronomical images

    NASA Astrophysics Data System (ADS)

    Spreeuw, Hanno; Swinbank, John; Molenaar, Gijs; Staley, Tim; Rol, Evert; Sanders, John; Scheers, Bart; Kuiack, Mark

    2018-05-01

    PySE finds and measures sources in radio telescope images. It is run with several options, such as the detection threshold (a multiple of the local noise), the grid size, and a forced clean-beam fit, followed by a list of input image files in standard FITS or CASA format. From these, PySE produces a list of detected sources; information such as the calculated background image, the source list in different formats (e.g., text, or region files importable into DS9), and other data may be saved. PySE can be integrated into a pipeline; it was originally written as part of the LOFAR Transient Detection Pipeline (TraP, ascl:1412.011).
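    The detection step, thresholding at a multiple of the local noise, can be sketched in miniature. Real PySE estimates a spatially varying background on a grid and fits sources on the detected islands; this toy `detect` helper is a hypothetical illustration that just flags pixels above mean + k*sigma of a flat image.

```python
import statistics

# Flag pixels brighter than k times the image rms above the mean.
# (Toy version: a single global noise estimate, no background grid,
# no source fitting.)
def detect(pixels, k=5.0):
    flat = [v for row in pixels for v in row]
    mean = statistics.fmean(flat)
    sigma = statistics.pstdev(flat)
    return [(r, c) for r, row in enumerate(pixels)
            for c, v in enumerate(row) if v > mean + k * sigma]

image = [[0.0] * 10 for _ in range(10)]
image[3][7] = 100.0                      # one bright source
print(detect(image))
```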

  11. Volcanoes of the Wrangell Mountains and Cook Inlet region, Alaska: selected photographs

    USGS Publications Warehouse

    Neal, Christina A.; McGimsey, Robert G.; Diggles, Michael F.

    2001-01-01

    Alaska is home to more than 40 active volcanoes, many of which have erupted violently and repeatedly in the last 200 years. This CD-ROM contains 97 digitized color 35-mm images, which represent a small fraction of the thousands of photographs taken by Alaska Volcano Observatory scientists, other researchers, and private citizens. The photographs were selected to portray Alaska's volcanoes, to document recent eruptive activity, and to illustrate the range of volcanic phenomena observed in Alaska. These images are for use by the interested public, multimedia producers, desktop publishers, and the high-end printing industry. The digital images are stored in the 'images' folder and can be read across Macintosh, Windows, DOS, OS/2, SGI, and UNIX platforms with applications that can read JPG (JPEG - Joint Photographic Experts Group format) or PCD (Kodak's PhotoCD (YCC) format) files. Throughout this publication, the image numbers match among the file names, figure captions, thumbnail labels, and other references. Also included on this CD-ROM are Windows and Macintosh viewers and engines for keyword searches (Adobe Acrobat Reader with Search). At the time of this publication, Kodak's policy on the distribution of color-management files was still unresolved, and so none is included on this CD-ROM. However, using the Universal Ektachrome or Universal Kodachrome transforms found in your software will provide excellent color. In addition to PhotoCD (PCD) files, this CD-ROM contains large (14.2" x 19.5") and small (4" x 6") screen-resolution (72 dots per inch; dpi) images in JPEG format. These undergo downsizing and compression relative to the PhotoCD images.

  12. ISMRM Raw Data Format: A Proposed Standard for MRI Raw Datasets

    PubMed Central

    Inati, Souheil J.; Naegele, Joseph D.; Zwart, Nicholas R.; Roopchansingh, Vinai; Lizak, Martin J.; Hansen, David C.; Liu, Chia-Ying; Atkinson, David; Kellman, Peter; Kozerke, Sebastian; Xue, Hui; Campbell-Washburn, Adrienne E.; Sørensen, Thomas S.; Hansen, Michael S.

    2015-01-01

    Purpose This work proposes the ISMRM Raw Data (ISMRMRD) format as a common MR raw data format, which promotes algorithm and data sharing. Methods A file format consisting of a flexible header and tagged frames of k-space data was designed. Application Programming Interfaces were implemented in C/C++, MATLAB, and Python. Converters for Bruker, General Electric, Philips, and Siemens proprietary file formats were implemented in C++. Raw data were collected using MRI scanners from four vendors, converted to ISMRMRD format, and reconstructed using software implemented in three programming languages (C++, MATLAB, Python). Results Images were obtained by reconstructing the raw data from all vendors. The source code, raw data, and images comprising this work are shared online, serving as an example of an image reconstruction project following a paradigm of reproducible research. Conclusion The proposed raw data format solves a practical problem for the MRI community. It may serve as a foundation for reproducible research and collaborations. The ISMRMRD format is a completely open and community-driven format, and the scientific community is invited (including commercial vendors) to participate either as users or developers. PMID:26822475

  13. 10 CFR 2.1011 - Management of electronic information.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... production and service: (i) The participants shall make textual (or, where non-text, image) versions of their... set and be in one of the following acceptable formats: ASCII, native word processing (Word, WordPerfect), PDF Normal, or HTML. (iv) Image files must be formatted as TIFF CCITT G4 for bi-tonal images or...

  14. Cryptography Would Reveal Alterations In Photographs

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L.

    1995-01-01

    Public-key decryption method proposed to guarantee authenticity of photographic images represented in form of digital files. In method, digital camera generates original data from image in standard public format; also produces coded signature to verify standard-format image data. Scheme also helps protect against other forms of lying, such as attaching false captions.
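    The scheme's core idea, a private-key signature over the standard-format image data that anyone holding the public key can check, can be sketched with textbook RSA. The tiny fixed key below (n = 61*53) is purely illustrative; a real implementation would use a cryptographic library with proper key sizes and padding.

```python
import hashlib

# Toy textbook-RSA signature over an image file's digest. The camera
# would hold D (the private exponent); anyone with (N, E) can verify.
# Key values are a classic small example, NOT secure.
N, E, D = 3233, 17, 413

def sign(data: bytes) -> int:
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big") % N
    return pow(digest, D, N)      # private-key operation inside the camera

def verify(data: bytes, signature: int) -> bool:
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big") % N
    return pow(signature, E, N) == digest   # public-key check by anyone

image = b"standard-format image data"
sig = sign(image)
print(verify(image, sig))
```

    Any alteration of the image data changes its digest, so the stored signature no longer verifies, which is how the scheme reveals tampering (and, by extension, false captions bound into the signed data).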

  15. PIXEL PUSHER

    NASA Technical Reports Server (NTRS)

    Stanfill, D. F.

    1994-01-01

    Pixel Pusher is a Macintosh application used for viewing and performing minor enhancements on imagery. It will read image files in JPL's two primary image formats- VICAR and PDS - as well as the Macintosh PICT format. VICAR (NPO-18076) handles an array of image processing capabilities which may be used for a variety of applications including biomedical image processing, cartography, earth resources, and geological exploration. Pixel Pusher can also import VICAR format color lookup tables for viewing images in pseudocolor (256 colors). This program currently supports only eight bit images but will work on monitors with any number of colors. Arbitrarily large image files may be viewed in a normal Macintosh window. Color and contrast enhancement can be performed with a graphical "stretch" editor (as in contrast stretch). In addition, VICAR images may be saved as Macintosh PICT files for exporting into other Macintosh programs, and individual pixels can be queried to determine their locations and actual data values. Pixel Pusher is written in Symantec's Think C and was developed for use on a Macintosh SE30, LC, or II series computer running System Software 6.0.3 or later and 32 bit QuickDraw. Pixel Pusher will only run on a Macintosh which supports color (whether a color monitor is being used or not). The standard distribution medium for this program is a set of three 3.5 inch Macintosh format diskettes. The program price includes documentation. Pixel Pusher was developed in 1991 and is a copyrighted work with all copyright vested in NASA. Think C is a trademark of Symantec Corporation. Macintosh is a registered trademark of Apple Computer, Inc.
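    The operation behind the graphical "stretch" editor can be sketched as a linear remapping of an input range onto the 8-bit display range. This minimal version (assuming 8-bit output, as in the 256-color images the program supports) is an illustration, not Pixel Pusher's actual code.

```python
# Linear contrast stretch: map input range [lo, hi] onto 0..255,
# clipping values that fall outside it.
def stretch(pixels, lo, hi):
    scale = 255.0 / (hi - lo)
    return [min(255, max(0, round((v - lo) * scale))) for v in pixels]

print(stretch([0, 64, 128, 256], 0, 256))
```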

  16. Likelihood Ratio Test Polarimetric SAR Ship Detection Application

    DTIC Science & Technology

    2005-12-01

    menu. Under the Matlab menu, the user can export an area of an image to the Matlab™ MAT file format, as well as call RGB image and Pauli...must specify various parameters such as the area of the image to analyze. Export Image Area to Matlab™ (PoIGASP & COASP) Generates a Matlab™ file...represented by the Minister of National Defence, 2005 © Sa majesté la reine, représentée par le ministre de la Défense nationale, 2005 Abstract This

  17. Digital Data from the Great Sand Dunes and Poncha Springs Aeromagnetic Surveys, South-Central Colorado

    USGS Publications Warehouse

    Drenth, B.J.; Grauch, V.J.S.; Bankey, Viki; New Sense Geophysics, Ltd.

    2009-01-01

    This report contains digital data, image files, and text files describing data formats and survey procedures for two high-resolution aeromagnetic surveys in south-central Colorado: one in the eastern San Luis Valley, Alamosa and Saguache Counties, and the other in the southern Upper Arkansas Valley, Chaffee County. In the San Luis Valley, the Great Sand Dunes survey covers a large part of Great Sand Dunes National Park and Preserve and extends south along the mountain front to the foot of Mount Blanca. In the Upper Arkansas Valley, the Poncha Springs survey covers the town of Poncha Springs and vicinity. The digital files include grids, images, and flight-line data. Several derivative products from these data are also presented as grids and images, including two grids of reduced-to-pole aeromagnetic data and data continued to a reference surface. Images are presented in various formats and are intended to be used as input to geographic information systems, standard graphics software, or map plotting packages.

  18. Incidence of apical crack formation and propagation during removal of root canal filling materials with different engine driven nickel-titanium instruments.

    PubMed

    Özyürek, Taha; Tek, Vildan; Yılmaz, Koray; Uslu, Gülşah

    2017-11-01

    To determine the incidence of crack formation and propagation in apical root dentin after retreatment procedures performed using ProTaper Universal Retreatment (PTR), Mtwo-R, ProTaper Next (PTN), and Twisted File Adaptive (TFA) systems. The study consisted of 120 extracted mandibular premolars. One millimeter from the apex of each tooth was ground perpendicular to the long axis of the tooth, and the apical surface was polished. Twenty teeth served as the negative control group. One hundred teeth were prepared, obturated, and then divided into 5 retreatment groups. The retreatment procedures were performed using the following files: PTR, Mtwo-R, PTN, TFA, and hand files. After filling material removal, apical enlargement was done using apical size 0.50 mm ProTaper Universal (PTU), Mtwo, PTN, TFA, and hand files. Digital images of the apical root surfaces were recorded before preparation, after preparation, after obturation, after filling removal, and after apical enlargement using a stereomicroscope. The images were then inspected for the presence of new apical cracks and crack propagation. Data were analyzed with χ² tests using SPSS 21.0 software. New cracks and crack propagation occurred in all the experimental groups during the retreatment process. Nickel-titanium rotary file systems caused significantly more apical crack formation and propagation than the hand files. The PTU system caused significantly more apical cracks than the other groups after the apical enlargement stage. This study showed that retreatment procedures and apical enlargement after the use of retreatment files can cause crack formation and propagation in apical dentin.

  19. An analysis of image storage systems for scalable training of deep neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Seung-Hwan; Young, Steven R; Patton, Robert M

    This study presents a principled empirical evaluation of image storage systems for training deep neural networks. We employ the Caffe deep learning framework to train neural network models for three different data sets: MNIST, CIFAR-10, and ImageNet. While training the models, we evaluate five different options for retrieving training image data: (1) PNG-formatted image files on a local file system; (2) pushing pixel arrays from image files into a single HDF5 file on a local file system; (3) in-memory arrays holding the pixel arrays in Python and C++; (4) loading the training data into LevelDB, a log-structured merge tree based key-value storage; and (5) loading the training data into LMDB, a B+tree based key-value storage. The experimental results quantitatively highlight the disadvantage of using normal image files on local file systems to train deep neural networks and demonstrate reliable performance with key-value storage based storage systems. When training a model on the ImageNet dataset, the image file option was more than 17 times slower than the key-value storage option. Along with measurements of training time, this study provides in-depth analysis of the causes of the performance advantages and disadvantages of each back-end used to train deep neural networks. We envision that the provided measurements and analysis will shed light on the optimal way to architect systems for training neural networks in a scalable manner.
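    The per-file overhead that makes option (1) slow can be illustrated with a stdlib-only sketch. This is not the paper's Caffe benchmark, just a toy comparison of reading 500 small fake "images" from individual files against one packed file.

```python
import os
import pickle
import tempfile
import time

# Fake 1 KB image payloads.
samples = [bytes([i % 256]) * 1024 for i in range(500)]

with tempfile.TemporaryDirectory() as d:
    for i, s in enumerate(samples):                 # option 1: one file each
        with open(os.path.join(d, f"{i}.bin"), "wb") as f:
            f.write(s)
    packed = os.path.join(d, "all.pkl")             # option 2: single file
    with open(packed, "wb") as f:
        pickle.dump(samples, f)

    t0 = time.perf_counter()
    many = []
    for i in range(500):                            # 500 open/read/close cycles
        with open(os.path.join(d, f"{i}.bin"), "rb") as f:
            many.append(f.read())
    t_files = time.perf_counter() - t0

    t0 = time.perf_counter()
    with open(packed, "rb") as f:                   # one open, one read
        one = pickle.load(f)
    t_packed = time.perf_counter() - t0

print(f"500 small files: {t_files:.4f}s, one packed file: {t_packed:.4f}s")
```

    Key-value stores such as LevelDB and LMDB generalize the "single packed file" side of this comparison while still allowing random per-sample access.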

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillion, D.

    This code enables one to display, take line-outs on, and perform various transformations on an image created from an array of integer*2 data. Uncompressed eight-bit TIFF files created on either the Macintosh or the IBM PC may also be read in and converted to a 16-bit signed integer image. This code is designed to handle all the formats used for PDS (photo-densitometer) files at the Lawrence Livermore National Laboratory; these formats are all explained by the application code. The image may be zoomed infinitely, and the gray scale mapping can be easily changed. Line-outs may be horizontal or vertical with arbitrary width, angled with arbitrary end points, or taken along any path. This code is usually used to examine spectrograph data. Spectral lines may be identified, and a polynomial fit from position to wavelength may be found. The image array can be remapped so that the pixels all have the same change-of-lambda width, though it is not necessary to do this. Line-outs may be printed, saved as Cricket tab-delimited files, or saved as PICT2 files. The plots may be linear, semilog, or logarithmic, with nice values and proper scientific notation. Typically, spectral lines are curved.

  1. Optimal Compression Methods for Floating-point Format Images

    NASA Technical Reports Server (NTRS)

    Pence, W. D.; White, R. L.; Seaman, R.

    2009-01-01

    We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2-1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers, which are compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced; however, all the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel values can significantly improve the photometric and astrometric precision of the stellar images in the compressed file without adding additional noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.
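    The quantize-with-dither idea can be sketched as follows. This illustrates the principle only, not the exact algorithm of the FITS tiled-image compression convention, which derives the quantization step from the measured noise and then Rice-compresses the resulting integers.

```python
import random

# Subtractive dither: a reproducible pseudo-random offset in [-0.5, 0.5)
# is added before rounding and removed on restoration, so the
# quantization error stays within half a step and averages to zero.
def quantize(pixels, step, seed=0):
    rng = random.Random(seed)
    return [round(v / step + rng.random() - 0.5) for v in pixels]

def restore(quantized, step, seed=0):
    rng = random.Random(seed)          # same seed -> same dither sequence
    return [(q - (rng.random() - 0.5)) * step for q in quantized]

pixels = [0.123, 4.567, -2.5, 0.0]
back = restore(quantize(pixels, 0.01), 0.01)
print(back)
```

    The restored values differ from the originals by at most half a quantization step, and because the offsets are zero-mean, no systematic bias is introduced into photometry averaged over many pixels.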

  2. MSL: Facilitating automatic and physical analysis of published scientific literature in PDF format.

    PubMed

    Ahmed, Zeeshan; Dandekar, Thomas

    2015-01-01

    Published scientific literature contains millions of figures, including information about the results obtained from different scientific experiments, e.g. PCR-ELISA data, microarray analysis, gel electrophoresis, mass spectrometry data, DNA/RNA sequencing, diagnostic imaging (CT/MRI and ultrasound scans), and medical imaging like electroencephalography (EEG), magnetoencephalography (MEG), electrocardiography (ECG), and positron-emission tomography (PET) images. The importance of biomedical figures has been widely recognized in the scientific and medical communities, as they play a vital role in providing major original data and experimental and computational results in concise form. One major challenge in implementing a system for scientific literature analysis is extracting and analyzing text and figures from published PDF files by physical and logical document analysis. Here we present a product-line-architecture-based bioinformatics tool, 'Mining Scientific Literature (MSL)', which supports the extraction of text and images by interpreting all kinds of published PDF files using advanced data mining and image processing techniques. It provides modules for the marginalization of extracted text based on different coordinates and keywords, visualization of extracted figures, and extraction of embedded text from all kinds of biological and biomedical figures using Optical Character Recognition (OCR). Moreover, for further analysis and usage, it generates the system's output in different formats, including text, PDF, XML and image files. Hence, MSL is an easy-to-install and easy-to-use analysis tool for interpreting published scientific literature in PDF format.

  3. User's manual for SEDCALC, a computer program for computation of suspended-sediment discharge

    USGS Publications Warehouse

    Koltun, G.F.; Gray, John R.; McElhone, T.J.

    1994-01-01

    Sediment-Record Calculations (SEDCALC), a menu-driven set of interactive computer programs, was developed to facilitate computation of suspended-sediment records. The programs comprising SEDCALC were developed independently in several District offices of the U.S. Geological Survey (USGS) to minimize the intensive labor associated with various aspects of sediment-record computations. SEDCALC operates on suspended-sediment-concentration data stored in American Standard Code for Information Interchange (ASCII) files in a predefined card-image format. Program options within SEDCALC can be used to assist in creating and editing the card-image files, as well as to reformat card-image files to and from formats used by the USGS Water-Quality System. SEDCALC provides options for creating card-image files containing time series of equal-interval suspended-sediment concentrations from (1) digitized suspended-sediment-concentration traces, (2) linear interpolation between log-transformed instantaneous suspended-sediment-concentration data stored at unequal time intervals, and (3) nonlinear interpolation between log-transformed instantaneous suspended-sediment-concentration data stored at unequal time intervals. Suspended-sediment discharge can be computed from the streamflow and suspended-sediment-concentration data or by application of transport relations derived by regressing log-transformed instantaneous streamflows on log-transformed instantaneous suspended-sediment concentrations or discharges. The computed suspended-sediment discharge data are stored in card-image files that can be either directly imported to the USGS Automated Data Processing System or used to generate plots by means of other SEDCALC options.
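
    Option (2) above, linear interpolation between log-transformed concentrations, amounts to geometric interpolation in concentration space. The sketch below illustrates the idea; the helper name, times, and concentration values are hypothetical, not SEDCALC code.

```python
import math

def log_interp(t, t0, c0, t1, c1):
    """Interpolate linearly in log space between two instantaneous
    suspended-sediment concentrations (c0 at time t0, c1 at time t1).
    Illustrative sketch of SEDCALC's option (2), not its actual code."""
    f = (t - t0) / (t1 - t0)
    return math.exp(math.log(c0) + f * (math.log(c1) - math.log(c0)))

# halfway between samples of 10 and 1000 mg/L, log-linear interpolation
# yields the geometric mean (100 mg/L) rather than the arithmetic mean (505)
c_mid = log_interp(1.5, 1.0, 10.0, 2.0, 1000.0)
```

Interpolating in log space keeps interpolated concentrations positive and better matches the roughly log-normal behavior of sediment concentrations during rises and recessions.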

  4. System Integration Issues in Digital Photogrammetric Mapping

    DTIC Science & Technology

    1992-01-01

    elevation models, and/or rectified imagery/orthophotos. Imagery exported from the DSPW can be either in a tiled image format or standard raster format...data. In the near future, correlation using "window shaping" operations along with an iterative orthophoto refinements methodology (Norvelle, 1992) is...components of TIES. The IDS passes tiled image data and ASCII header data to the DSPW. The tiled image file contains only image data. The ASCII header

  5. TOPPE: A framework for rapid prototyping of MR pulse sequences.

    PubMed

    Nielsen, Jon-Fredrik; Noll, Douglas C

    2018-06-01

    To introduce a framework for rapid prototyping of MR pulse sequences. We propose a simple file format, called "TOPPE", for specifying all details of an MR imaging experiment, such as gradient and radiofrequency waveforms and the complete scan loop. In addition, we provide a TOPPE file "interpreter" for GE scanners, which is a binary executable that loads TOPPE files and executes the sequence on the scanner. We also provide MATLAB scripts for reading and writing TOPPE files and previewing the sequence prior to hardware execution. With this setup, the task of the pulse sequence programmer is reduced to creating TOPPE files, eliminating the need for hardware-specific programming. No sequence-specific compilation is necessary; the interpreter only needs to be compiled once (for every scanner software upgrade). We demonstrate TOPPE in three different applications: k-space mapping, non-Cartesian PRESTO whole-brain dynamic imaging, and myelin mapping in the brain using inhomogeneous magnetization transfer. We successfully implemented and executed the three example sequences. By simply changing the various TOPPE sequence files, a single binary executable (interpreter) was used to execute several different sequences. The TOPPE file format is a complete specification of an MR imaging experiment, based on arbitrary sequences of a (typically small) number of unique modules. Along with the GE interpreter, TOPPE comprises a modular and flexible platform for rapid prototyping of new pulse sequences. Magn Reson Med 79:3128-3134, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
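
    The "small number of unique modules plus a scan loop" structure described in the abstract can be sketched as a data model. All field names, durations, and loop entries below are hypothetical illustrations of the concept, not the actual TOPPE file layout.

```python
from dataclasses import dataclass

@dataclass
class Module:
    """One reusable sequence building block (field names are illustrative,
    not the actual TOPPE file fields)."""
    name: str
    duration_us: int   # module duration in microseconds
    has_rf: bool       # plays an RF pulse
    has_adc: bool      # acquires data

# A scan loop references modules by name, with per-repetition settings;
# the interpreter replays modules in this order on the scanner.
modules = {
    "excite": Module("excite", 2000, True, False),
    "readout": Module("readout", 5000, False, True),
}
scan_loop = [
    ("excite", {"rf_phase_deg": 0}),
    ("readout", {"view": 1}),
    ("excite", {"rf_phase_deg": 117}),
    ("readout", {"view": 2}),
]

# total sequence duration follows directly from the loop
total_us = sum(modules[name].duration_us for name, _ in scan_loop)
```

Because the loop only references a couple of unique modules, an entirely different sequence can be described by editing the loop table alone, which is what lets one interpreter binary execute many sequences.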

  6. Migration of the digital interactive breast-imaging teaching file

    NASA Astrophysics Data System (ADS)

    Cao, Fei; Sickles, Edward A.; Huang, H. K.; Zhou, Xiaoqiang

    1998-06-01

    The digital breast imaging teaching file developed during the last two years in our laboratory has been used successfully at UCSF (University of California, San Francisco) as a routine teaching tool for training radiology residents and fellows in mammography. Building on this success, we have ported the teaching file from an old Pixar imaging/Sun SPARC 470 display system to our newly designed telemammography display workstation (Ultra SPARC 2 platform with two DOME Md5/SBX display boards). The old Pixar/Sun 470 system, although adequate for fast and high-resolution image display, is 4-year-old technology, expensive to maintain and difficult to upgrade. The new display workstation is more cost-effective and is also compatible with the digital image format from a full-field direct digital mammography system. The digital teaching file is built on a sophisticated computer-aided instruction (CAI) model, which simulates the management sequences used in imaging interpretation and work-up. Each user can be prompted to respond by making his/her own observations, assessments, and work-up decisions as well as the marking of image abnormalities. This effectively replaces the traditional 'show-and-tell' teaching file experience with an interactive, response-driven type of instruction.

  7. Chapter 2: Tabular Data and Graphical Images in Support of the U.S. Geological Survey National Oil and Gas Assessment - The Wind River Basin Province

    USGS Publications Warehouse

    Klett, T.R.; Le, P.A.

    2007-01-01

    This chapter describes data used in support of the process being applied by the U.S. Geological Survey (USGS) National Oil and Gas Assessment (NOGA) project. Digital tabular data used in this report and archival data that permit the user to perform further analyses are available elsewhere on this CD-ROM. Computers and software may import these data directly, without the reader having to transcribe them from the Portable Document Format (.pdf) files of the text. Because of the number and variety of platforms and software available, graphical images are provided as .pdf files and tabular data are provided in a raw form as tab-delimited text files (.tab files).

  8. BOREAS Level-2 MAS Surface Reflectance and Temperature Images in BSQ Format

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Newcomer, Jeffrey (Editor); Lobitz, Brad; Spanner, Michael; Strub, Richard

    2000-01-01

    The BOReal Ecosystem-Atmosphere Study (BOREAS) Staff Science Aircraft Data Acquisition Program focused on providing the research teams with the remotely sensed aircraft data products they needed to compare and spatially extend point results. The MODIS Airborne Simulator (MAS) images, along with other remotely sensed data, were collected to provide spatially extensive information over the primary study areas. This information includes biophysical parameter maps such as surface reflectance and temperature. Collection of the MAS images occurred over the study areas during the 1994 field campaigns. The level-2 MAS data cover the dates of 21-Jul-1994, 24-Jul-1994, 04-Aug-1994, and 08-Aug-1994. The data are not geographically/geometrically corrected; however, files of relative X and Y coordinates for each image pixel were derived by using the C130 navigation data in a MAS scan model. The data are provided in binary image format files.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillion, D.

    This code enables one to display, take line-outs on, and perform various transformations on an image created by an array of integer*2 data. Uncompressed eight-bit TIFF files created on either the Macintosh or the IBM PC may also be read in and converted to a 16-bit signed integer image. This code is designed to handle all the formats used for PDS (photo-densitometer) files at the Lawrence Livermore National Laboratory. These formats are all explained by the application code. The image may be zoomed infinitely and the gray-scale mapping can be easily changed. Line-outs may be horizontal or vertical with arbitrary width, angled with arbitrary end points, or taken along any path. This code is usually used to examine spectrograph data. Spectral lines may be identified and a polynomial fit from position to wavelength may be found. The image array can be remapped so that the pixels all have the same change-of-lambda width. It is not necessary to do this, however. Line-outs may be printed, saved as Cricket tab-delimited files, or saved as PICT2 files. The plots may be linear, semilog, or logarithmic with nice values and proper scientific notation. Typically, spectral lines are curved. By identifying points on these lines and fitting their shapes by polyn.
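
    The position-to-wavelength polynomial fit mentioned above can be illustrated with the simplest (degree-1) case: a least-squares line through identified spectral lines. The pixel positions and wavelengths below are made-up illustrative values, not data from the code described.

```python
def linear_fit(xs, ys):
    """Closed-form least-squares line y = a + b*x; the degree-1 case of a
    position-to-wavelength polynomial fit. Illustrative sketch only."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# hypothetical identified lines: (pixel position, known wavelength in nm),
# constructed here to lie exactly on wavelength = 400 + 0.2 * pixel
positions = [100.0, 250.0, 400.0]
wavelengths = [420.0, 450.0, 480.0]
a, b = linear_fit(positions, wavelengths)
```

With the fit in hand, every pixel column can be assigned a wavelength, which is also the basis for remapping the image so that all pixels span the same wavelength increment.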

  10. MSL: Facilitating automatic and physical analysis of published scientific literature in PDF format

    PubMed Central

    Ahmed, Zeeshan; Dandekar, Thomas

    2018-01-01

    Published scientific literature contains millions of figures, including information about the results obtained from different scientific experiments, e.g. PCR-ELISA data, microarray analysis, gel electrophoresis, mass spectrometry data, DNA/RNA sequencing, diagnostic imaging (CT/MRI and ultrasound scans), and medicinal imaging like electroencephalography (EEG), magnetoencephalography (MEG), electrocardiography (ECG), and positron-emission tomography (PET) images. The importance of biomedical figures has been widely recognized in the scientific and medical communities, as they play a vital role in providing major original data and experimental and computational results in concise form. One major challenge in implementing a system for scientific literature analysis is extracting and analyzing text and figures from published PDF files by physical and logical document analysis. Here we present a product-line-architecture-based bioinformatics tool, ‘Mining Scientific Literature (MSL)’, which supports the extraction of text and images by interpreting all kinds of published PDF files using advanced data mining and image processing techniques. It provides modules for the marginalization of extracted text based on different coordinates and keywords, visualization of extracted figures, and extraction of embedded text from all kinds of biological and biomedical figures using applied Optical Character Recognition (OCR). Moreover, for further analysis and usage, it generates the system’s output in different formats including text, PDF, XML and image files. Hence, MSL is an easy-to-install-and-use analysis tool for interpreting published scientific literature in PDF format. PMID:29721305

  11. Optimizing Cloud-Based Image Storage, Dissemination and Processing Through Use of MRF and LERC

    NASA Astrophysics Data System (ADS)

    Becker, Peter; Plesea, Lucian; Maurer, Thomas

    2016-06-01

    The volume and number of geospatial images being collected continue to increase exponentially with the ever-increasing number of airborne and satellite imaging platforms and the increasing rate of data collection. As a result, the cost of the fast storage required to provide access to the imagery is a major cost factor in enterprise image management solutions that handle, process and disseminate the imagery and information extracted from it. Cloud-based object storage offers significantly lower cost and elastic storage for this imagery, but also adds some disadvantages in terms of greater latency for data access and lack of traditional file access. Although traditional file formats such as GeoTIFF, JPEG2000 and NITF can be downloaded from such object storage, their structure and available compression are not optimal and access performance is curtailed. This paper provides details on a solution that utilizes new open image formats for storage of and access to geospatial imagery, optimized for cloud storage and processing. MRF (Meta Raster Format) is optimized for large collections of scenes such as those acquired from optical sensors. The format enables optimized data access from cloud storage, along with the use of new compression options which cannot easily be added to existing formats. The paper also provides an overview of LERC, a new image compression codec that can be used with MRF and provides very good lossless and controlled lossy compression.
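
    "Controlled lossy compression" in LERC means the user specifies a maximum per-pixel error up front. The sketch below shows the core idea in that spirit, error-bounded block quantization; it is illustrative only, and omits the real LERC format's block tiling, bit-stuffing, and validity masks.

```python
def lerc_like_encode(block, max_error):
    """Quantize a block of floats with step 2*max_error relative to the
    block minimum, guaranteeing |decoded - original| <= max_error.
    Sketch in the spirit of LERC, not the actual codec."""
    lo = min(block)
    step = 2.0 * max_error
    return lo, step, [round((v - lo) / step) for v in block]

def lerc_like_decode(lo, step, quantized):
    """Invert the quantization; residual error is bounded by step/2."""
    return [lo + i * step for i in quantized]

data = [10.03, 10.11, 10.27, 10.95]
lo, step, q = lerc_like_encode(data, max_error=0.1)
decoded = lerc_like_decode(lo, step, q)
err = max(abs(a - b) for a, b in zip(data, decoded))
```

Setting max_error to 0 (in the real codec) degenerates to lossless mode; larger bounds shrink the integer range and hence the encoded size, which is the "controlled" trade-off the abstract refers to.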

  12. Digital atlas of Oklahoma

    USGS Publications Warehouse

    Rea, A.H.; Becker, C.J.

    1997-01-01

    This compact disc contains 25 digital map data sets covering the State of Oklahoma that may be of interest to the general public, private industry, schools, and government agencies. Fourteen data sets are statewide. These data sets include: administrative boundaries; 104th U.S. Congressional district boundaries; county boundaries; latitudinal lines; longitudinal lines; geographic names; indexes of U.S. Geological Survey 1:100,000, and 1:250,000-scale topographic quadrangles; a shaded-relief image; Oklahoma State House of Representatives district boundaries; Oklahoma State Senate district boundaries; locations of U.S. Geological Survey stream gages; watershed boundaries and hydrologic cataloging unit numbers; and locations of weather stations. Eleven data sets are divided by county and are located in 77 county subdirectories. These data sets include: census block group boundaries with selected demographic data; city and major highways text; geographic names; land surface elevation contours; elevation points; an index of U.S. Geological Survey 1:24,000-scale topographic quadrangles; roads, streets and address ranges; highway text; school district boundaries; streams, rivers and lakes; and the public land survey system. All data sets are provided in a readily accessible format. Most data sets are provided in Digital Line Graph (DLG) format. The attributes for many of the DLG files are stored in related dBASE(R)-format files and may be joined to the data set polygon attribute or arc attribute tables using dBASE(R)-compatible software. (Any use of trade names in this publication is for descriptive purposes only and does not imply endorsement by the U.S. Government.) Point attribute tables are provided in dBASE(R) format only, and include the X and Y map coordinates of each point. Annotation (text plotted in map coordinates) is provided in AutoCAD Drawing Exchange Format (DXF) files. The shaded-relief image is provided in TIFF format.
All data sets except the shaded-relief image also are provided in ARC/INFO export-file format.

  13. 76 FR 62134 - Bureau of Consular Affairs; Registration for the Diversity Immigrant (DV-2013) Visa Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-06

    ... Resident. We will not accept group or family photographs; you must include a separate photograph for each... new digital image: The image file format must be in the Joint Photographic Experts Group (JPEG) format... Web site four to six weeks before the scheduled interviews with U.S. consular officers at overseas...

  14. LAS - LAND ANALYSIS SYSTEM, VERSION 5.0

    NASA Technical Reports Server (NTRS)

    Pease, P. B.

    1994-01-01

    The Land Analysis System (LAS) is an image analysis system designed to manipulate and analyze digital data in raster format and provide the user with a wide spectrum of functions and statistical tools for analysis. LAS offers these features under VMS with optional image display capabilities for IVAS and other display devices as well as the X-Windows environment. LAS provides a flexible framework for algorithm development as well as for the processing and analysis of image data. Users may choose between mouse-driven commands or the traditional command line input mode. LAS functions include supervised and unsupervised image classification, film product generation, geometric registration, image repair, radiometric correction and image statistical analysis. Data files accepted by LAS include formats such as Multi-Spectral Scanner (MSS), Thematic Mapper (TM) and Advanced Very High Resolution Radiometer (AVHRR). The enhanced geometric registration package now includes both image-to-image and map-to-map transformations. The over 200 LAS functions fall into image processing scenario categories which include: arithmetic and logical functions, data transformations, Fourier transforms, geometric registration, hard copy output, image restoration, intensity transformation, multispectral and statistical analysis, file transfer, tape profiling and file management among others. Internal improvements to the LAS code have eliminated the VAX VMS dependencies and improved overall system performance. The maximum LAS image size has been increased to 20,000 lines by 20,000 samples with a maximum of 256 bands per image. The catalog management system used in earlier versions of LAS has been replaced by a more streamlined and maintenance-free method of file management. This system is not dependent on VAX/VMS and relies on file naming conventions alone to allow the use of identical LAS file names on different operating systems.
While the LAS code has been improved, the original capabilities of the system have been preserved. These include maintaining associated image history, session logging, and batch, asynchronous and interactive modes of operation. The LAS application programs are integrated under version 4.1 of an interface called the Transportable Applications Executive (TAE). TAE 4.1 has four modes of user interaction: menu, direct command, tutor (or help), and dynamic tutor. In addition, TAE 4.1 allows the operation of LAS functions using mouse-driven commands under the TAE-Facelift environment provided with TAE 4.1. These modes of operation allow users, from the beginner to the expert, to exercise specific application options. LAS is written in C-language and FORTRAN 77 for use with DEC VAX computers running VMS with approximately 16Mb of physical memory. This program runs under TAE 4.1. Since TAE 4.1 is not a current version of TAE, TAE 4.1 is included within the LAS distribution. Approximately 130,000 blocks (65Mb) of disk storage space are necessary to store the source code and files generated by the installation procedure for LAS and 44,000 blocks (22Mb) of disk storage space are necessary for TAE 4.1 installation. The only other dependencies for LAS are the subroutine libraries for the specific display device(s) that will be used with LAS/DMS (e.g. X-Windows and/or IVAS). The standard distribution medium for LAS is a set of two 9-track 6250 BPI magnetic tapes in DEC VAX BACKUP format. It is also available on a set of two TK50 tape cartridges in DEC VAX BACKUP format. This program was developed in 1986 and last updated in 1992.

  15. BOREAS RSS-14 Level-2 GOES-7 Shortwave and Longwave Radiation Images

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Gu, Jiujing; Smith, Eric A.

    2000-01-01

    The BOREAS RSS-14 team collected and processed several GOES-7 and GOES-8 image data sets that covered the BOREAS study region. This data set contains images of shortwave and longwave radiation at the surface and top of the atmosphere derived from collected GOES-7 data. The data cover the time period of 05-Feb-1994 to 20-Sep-1994. The images missing from the temporal series were zero-filled to create a consistent sequence of files. The data are stored in binary image format files. Due to the large size of the images, the level-1a GOES-7 data are not contained on the BOREAS CD-ROM set. An inventory listing file is supplied on the CD-ROM to inform users of what data were collected. The level-1a GOES-7 image data are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). See sections 15 and 16 for more information. The data files are available on a CD-ROM (see document number 20010000884).

  16. BOREAS Forest Cover Data Layers over the SSA-MSA in Raster Format

    NASA Technical Reports Server (NTRS)

    Nickeson, Jaime; Gruszka, F; Hall, F.

    2000-01-01

    This data set, originally provided as vector polygons with attributes, has been processed by BORIS staff to provide raster files that can be used for modeling or for comparison purposes. The original data were received as ARC/INFO coverages or as export files from SERM. The data include information on forest parameters for the BOREAS SSA-MSA. Most of the data used for this product were acquired by BORIS in 1993; the maps were produced from aerial photography taken as recently as 1988. The data are stored in binary, image format files.

  17. High throughput imaging cytometer with acoustic focussing (Electronic supplementary information (ESI) available; see DOI: 10.1039/c5ra19497k)

    PubMed Central

    Zmijan, Robert; Jonnalagadda, Umesh S.; Carugo, Dario; Kochi, Yu; Lemm, Elizabeth; Packham, Graham; Hill, Martyn

    2015-01-01

    We demonstrate an imaging flow cytometer that uses acoustic levitation to assemble cells and other particles into a sheet structure. This technique enables a high resolution, low noise CMOS camera to capture images of thousands of cells with each frame. While ultrasonic focussing has previously been demonstrated for 1D cytometry systems, extending the technology to a planar, much higher throughput format and integrating imaging is non-trivial, and represents a significant jump forward in capability, leading to diagnostic possibilities not achievable with current systems. A galvo mirror is used to track the images of the moving cells permitting exposure times of 10 ms at frame rates of 50 fps with motion blur of only a few pixels. At 80 fps, we demonstrate a throughput of 208 000 beads per second. We investigate the factors affecting motion blur and throughput, and demonstrate the system with fluorescent beads, leukaemia cells and a chondrocyte cell line. Cells require more time to reach the acoustic focus than beads, resulting in lower throughputs; however a longer device would remove this constraint. PMID:29456838

  18. A JPEG backward-compatible HDR image compression

    NASA Astrophysics Data System (ADS)

    Korshunov, Pavel; Ebrahimi, Touradj

    2012-10-01

    High Dynamic Range (HDR) imaging is expected to become one of the technologies that could shape the next generation of consumer digital photography. Manufacturers are rolling out cameras and displays capable of capturing and rendering HDR images. The popularity and full public adoption of HDR content is however hindered by the lack of standards in evaluation of quality, file formats, and compression, as well as the large legacy base of Low Dynamic Range (LDR) displays that are unable to render HDR. To facilitate the widespread adoption of HDR, backward compatibility of HDR technology with commonly used legacy image storage, rendering, and compression is necessary. Although many tone-mapping algorithms were developed for generating viewable LDR images from HDR content, there is no consensus on which algorithm to use and under which conditions. This paper, via a series of subjective evaluations, demonstrates the dependency of perceived quality of the tone-mapped LDR images on environmental parameters and image content. Based on the results of subjective tests, it proposes to extend the JPEG file format, as the most popular image format, in a backward-compatible manner to also deal with HDR pictures. To this end, the paper provides an architecture to achieve such backward compatibility with JPEG and demonstrates the efficiency of a simple implementation of this framework when compared to state-of-the-art HDR image compression.
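
    The backward-compatible approach hinges on tone-mapping the HDR image to an LDR version that legacy JPEG decoders can display. As a stand-in for the tone-mapping step, here is Reinhard's simple global operator L/(1+L); the paper compares several operators, and this particular choice and the luminance values are illustrative only.

```python
def reinhard(lum):
    """Reinhard global tone-mapping operator: maps scene luminance in
    [0, inf) to display range [0, 1). A minimal stand-in for the
    tone-mapping stage discussed above, not the paper's chosen operator."""
    return lum / (1.0 + lum)

# hypothetical HDR luminances spanning five orders of magnitude
hdr = [0.01, 1.0, 10.0, 1000.0]
ldr = [reinhard(l) for l in hdr]
```

The compressed-domain trick is then to store this LDR image as an ordinary JPEG (so legacy viewers work unchanged) and carry the data needed to reconstruct the HDR image in an auxiliary, backward-compatible payload.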

  19. Processed Thematic Mapper Satellite Imagery for Selected Areas within the U.S.-Mexico Borderlands

    USGS Publications Warehouse

    Dohrenwend, John C.; Gray, Floyd; Miller, Robert J.

    2000-01-01

    The study is summarized in the Adobe Acrobat Portable Document Format (PDF) file OF00-309.PDF. This publication also contains satellite full-scene images of selected areas along the U.S.-Mexico border. These images are presented as high-resolution images in JPEG format (folder IMAGES). The folder LOCATIONS contains TIFF images showing exact positions of easily identified reference locations for each of the Landsat TM scenes located at least partly within the U.S. A reference location table (BDRLOCS.DOC, in MS Word format) lists the latitude and longitude of each reference location with a nominal precision of 0.001 minute of arc.

  20. Chapter 3: Tabular Data and Graphical Images in Support of the U.S. Geological Survey National Oil and Gas Assessment - Western Gulf Province, Smackover-Austin-Eagle Ford Composite Total Petroleum System (504702)

    USGS Publications Warehouse

    Klett, T.R.; Le, P.A.

    2006-01-01

    This chapter describes data used in support of the process being applied by the U.S. Geological Survey (USGS) National Oil and Gas Assessment (NOGA) project. Digital tabular data used in this report and archival data that permit the user to perform further analyses are available elsewhere on this CD-ROM. Computers and software may import these data directly, without the reader having to transcribe them from the Portable Document Format (.pdf) files of the text. Because of the number and variety of platforms and software available, graphical images are provided as .pdf files and tabular data are provided in a raw form as tab-delimited text files (.tab files).

  1. HDFITS: Porting the FITS data model to HDF5

    NASA Astrophysics Data System (ADS)

    Price, D. C.; Barsdell, B. R.; Greenhill, L. J.

    2015-09-01

    The FITS (Flexible Image Transport System) data format has been the de facto data format for astronomy-related data products since its inception in the late 1970s. While the FITS file format is widely supported, it lacks many of the features of more modern data serialization formats, such as the Hierarchical Data Format (HDF5). The HDF5 file format offers considerable advantages over FITS, such as improved I/O speed and compression, but has yet to gain widespread adoption within astronomy. One of the major obstacles is that HDF5 is not well supported by data reduction software packages and image viewers. Here, we present a comparison of FITS and HDF5 as a format for storage of astronomy datasets. We show that the underlying data model of FITS can be ported to HDF5 in a straightforward manner, and that by doing so the advantages of the HDF5 file format can be leveraged immediately. In addition, we present a software tool, fits2hdf, for converting between FITS and a new 'HDFITS' format, where data are stored in HDF5 in a FITS-like manner. We show that HDFITS allows faster reading of data (up to 100x faster than FITS in some use cases), and improved compression (higher compression ratios and higher throughput). Finally, we show that by only changing the import lines in Python-based FITS utilities, HDFITS formatted data can be presented transparently as an in-memory FITS equivalent.
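
    Porting the FITS data model starts from its 80-character header cards, whose keyword/value/comment triples map naturally onto HDF5 attributes. The parser below is a minimal sketch of that first step, not fits2hdf code; real FITS parsing also handles quoted strings, CONTINUE cards, and HIERARCH keywords.

```python
def parse_card(card):
    """Split one 80-character FITS header card into (keyword, value, comment).
    Bytes 0-7 hold the keyword; '= ' at bytes 8-9 marks a value card; an
    optional '/' starts the comment. Minimal sketch of the kind of mapping
    a FITS-to-HDF5 converter performs on each header card."""
    keyword = card[:8].strip()
    rest = card[10:] if card[8:10] == "= " else ""
    value, _, comment = rest.partition("/")
    return keyword, value.strip(), comment.strip()

card = "NAXIS1  =                 2048 / length of data axis 1".ljust(80)
kw, val, comment = parse_card(card)
```

Each parsed triple can then be attached to the corresponding HDF5 dataset as an attribute, which is what lets the HDF5 file round-trip back to a FITS-equivalent in-memory object.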

  2. BOREAS Level-2 NS001 TMS Imagery: Reflectance and Temperature in BSQ Format

    NASA Technical Reports Server (NTRS)

    Lobitz, Brad; Spanner, Michael; Hall, Forrest G. (Editor); Newcomer, Jeffrey A. (Editor); Strub, Richard

    2000-01-01

    For BOREAS, the NS001 TMS images, along with the other remotely sensed data, were collected to provide spatially extensive information over the primary study areas. This information includes detailed land cover and biophysical parameter maps such as fPAR and LAI. Collection of the NS001 images occurred over the study areas during the 1994 field campaigns. The level-2 NS001 data are atmospherically corrected versions of some of the best original NS001 imagery and cover the dates of 19-Apr-1994, 07-Jun-1994, 21-Jul-1994, 08-Aug-1994, and 16-Sep-1994. The data are not geographically/geometrically corrected; however, files of relative X and Y coordinates for each image pixel were derived by using the C130 INS data in an NS001 scan model. The data are provided in binary image format files.

  3. 75 FR 4310 - Credit Reforms in Organized Wholesale Electric Markets

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-27

    ... electricity markets typically use bilateral contracts such as the Western Systems Power Pool (WSPP) standard... scanned image format. Commenters filing electronically do not need to make a paper filing. Commenters that..., Secretary. In consideration of the foregoing, the Commission proposes to amend part 35, Chapter J, Title 18...

  4. Image Viewer using Digital Imaging and Communications in Medicine (DICOM)

    NASA Astrophysics Data System (ADS)

    Baraskar, Trupti N.

    2010-11-01

    Digital Imaging and Communications in Medicine (DICOM) is a standard for handling, storing, printing, and transmitting information in medical imaging. The National Electrical Manufacturers Association holds the copyright to this standard, which was developed by the DICOM Standards Committee. Other image viewers cannot store the image details together with the patient's information, so the image may become separated from those details; the DICOM file format, by contrast, stores both the patient's information and the image details. The main objective is to develop a DICOM image viewer. The image viewer opens .dcm (DICOM) image files and also provides additional features such as zoom in, zoom out, black-and-white inversion, magnification, blur, horizontal and vertical flipping, sharpening, contrast, brightness, and .gif conversion.
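
    The contrast/brightness features of such a viewer are conventionally expressed as DICOM-style windowing: a window center and width selected from the header map raw pixel values to display values. The sketch below uses a simplified linear form of the VOI LUT transform with made-up pixel values; a real viewer would read WindowCenter/WindowWidth (and rescale slope/intercept) from the DICOM data set.

```python
def window_level(pixels, center, width):
    """Map raw pixel values to 8-bit display values using a simplified
    linear window/level transform: values below the window clamp to 0,
    above it to 255, and the window interior maps linearly between.
    Illustrative sketch, not the exact DICOM VOI LUT formula."""
    lo = center - width / 2.0
    out = []
    for p in pixels:
        v = (p - lo) / width * 255.0
        out.append(int(max(0.0, min(255.0, v))))
    return out

# hypothetical 12-bit raw values windowed to center=2000, width=2000
raw = [0, 1000, 2000, 4000]
display = window_level(raw, center=2000, width=2000)
```

Narrowing the width raises contrast within the window; shifting the center adjusts brightness, which is how the viewer's contrast and brightness controls are typically implemented.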

  5. A Review of Aeromagnetic Anomalies in the Sawatch Range, Central Colorado

    USGS Publications Warehouse

    Bankey, Viki

    2010-01-01

    This report contains digital data and image files of aeromagnetic anomalies in the Sawatch Range of central Colorado. The primary product is a data layer of polygons with linked data records that summarize previous interpretations of aeromagnetic anomalies in this region. None of these data files and images are new; rather, they are presented in updated formats that are intended to be used as input to geographic information systems, standard graphics software, or map-plotting packages.

  6. Development and evaluation of oral reporting system for PACS.

    PubMed

    Umeda, T; Inamura, K; Inamoto, K; Ikezoe, J; Kozuka, T; Kawase, I; Fujii, Y; Karasawa, H

    1994-05-01

    Experimental workstations for oral reporting and synchronized image filing have been developed and evaluated by radiologists and referring physicians. The file medium is a 5.25-inch rewritable magneto-optical disk of 600-MB capacity whose file format is in accordance with the IS&C specification. The evaluation results show that this system is superior to other existing methods of the same kind, such as transcribing, dictating, handwriting, typewriting and key selections. The most significant advantage of the system is that images and their interpretation are never separated. The first practical application, to the teaching file and the teaching conference, is contemplated at Osaka University Hospital. This system is completely digital in terms of images, voices and demographic data, so that on-line transmission, off-line communication or filing to any database can easily be realized in a PACS environment. We are developing an integrated system with a speech recognizer connected to this digitized oral system.

  7. BOREAS TE-20 Soils Data Over the NSA-MSA and Tower Sites in Raster Format

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Veldhuis, Hugo; Knapp, David

    2000-01-01

    The BOREAS TE-20 team collected several data sets for use in developing and testing models of forest ecosystem dynamics. This data set was gridded from vector layers of soil maps received from Dr. Hugo Veldhuis, who did the original mapping in the field during 1994. The vector layers were gridded into raster files that cover the NSA-MSA and tower sites. The data are stored in binary image-format files. The data files are available on a CD-ROM (see document number 20010000884) or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).

  8. Tabular data and graphical images in support of the U.S. Geological Survey National Oil and Gas Assessment -- San Joaquin Basin (5010): Chapter 28 in Petroleum systems and geologic assessment of oil and gas in the San Joaquin Basin Province, California

    USGS Publications Warehouse

    Klett, T.R.; Le, P.A.

    2007-01-01

    This chapter describes data used in support of the assessment process. Digital tabular data used in this report, and archival data that permit the user to perform further analyses, are available elsewhere on this CD-ROM. The reader may import these data into computers and software without transcription from the portable document format (.pdf) files of the text. Because of the number and variety of platforms and software available, graphical images are provided as .pdf files and tabular data are provided in raw form as tab-delimited text files (.tab files).
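Tab-delimited .tab files of the kind described here can be read with any CSV reader by setting the delimiter. A minimal sketch with hypothetical column names (the actual assessment tables use different headers):

```python
import csv
import io

# Hypothetical two-column table standing in for a .tab file's contents
raw = "Province\tField_Count\nSan Joaquin Basin\t42\n"

rows = list(csv.reader(io.StringIO(raw), delimiter="\t"))
header, data = rows[0], rows[1:]
```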

  9. Low-Speed Fingerprint Image Capture System User's Guide, June 1, 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitus, B.R.; Goddard, J.S.; Jatko, W.B.

    1993-06-01

    The Low-Speed Fingerprint Image Capture System (LS-FICS) uses a Sun workstation controlling a Lenzar ElectroOptics Opacity 1000 imaging system to digitize fingerprint card images to support the Federal Bureau of Investigation's (FBI's) Automated Fingerprint Identification System (AFIS) program. The system also supports the operations performed by the Oak Ridge National Laboratory- (ORNL-) developed Image Transmission Network (ITN) prototype card scanning system. The input to the system is a single FBI fingerprint card of the agreed-upon standard format and a user-specified identification number. The output is a file formatted to be compatible with the National Institute of Standards and Technology (NIST) draft standard for fingerprint data exchange dated June 10, 1992. These NIST-compatible files contain the required print and text images. The LS-FICS is designed to provide the FBI with the capability of scanning fingerprint cards into a digital format. The FBI will replicate the system to generate a database of test images. The Host Workstation contains the image data paths and the compression algorithm. A local area network interface, disk storage, and tape drive are used for image storage and retrieval, and the Lenzar Opacity 1000 scanner is used to acquire the image. The scanner is capable of resolving 500 pixels/in. in both the x and y directions. The print images are maintained in full 8-bit gray scale and compressed with an FBI-approved wavelet-based compression algorithm. The text fields are downsampled to 250 pixels/in. and 2-bit gray scale, then compressed using a lossless Huffman coding scheme. The text fields retrieved from the output files are easily interpreted when displayed on the screen. Detailed procedures are provided for system calibration and operation. Software tools are provided to verify proper system operation.
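The text-field preparation described (500 to 250 pixels/in., 8-bit to 2-bit gray) can be sketched as block averaging plus quantization. This is an illustrative reduction only; the actual FBI/NIST pipeline and its Huffman coder are not reproduced here:

```python
import numpy as np

def downsample_2x(gray: np.ndarray) -> np.ndarray:
    """Halve resolution (e.g. 500 -> 250 pixels/in.) by 2x2 block averaging."""
    h, w = gray.shape
    blocks = gray[: h // 2 * 2, : w // 2 * 2].reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3)).astype(np.uint8)

def to_2bit(gray: np.ndarray) -> np.ndarray:
    """Quantize 8-bit gray (0-255) onto the four 2-bit levels (0-3)."""
    return (gray >> 6).astype(np.uint8)

# Synthetic uniform text field standing in for a scanned card region
text_field = np.full((4, 4), 200, dtype=np.uint8)
small = downsample_2x(text_field)
quantized = to_2bit(small)
```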

  10. Digital seismic-reflection data from western Rhode Island Sound, 1980

    USGS Publications Warehouse

    McMullen, K.Y.; Poppe, L.J.; Soderberg, N.K.

    2009-01-01

    During 1980, the U.S. Geological Survey (USGS) conducted a seismic-reflection survey in western Rhode Island Sound aboard the Research Vessel Neecho. Data from this survey were recorded in analog form and archived at the USGS Woods Hole Science Center's Data Library. Due to recent interest in the geology of Rhode Island Sound and in an effort to make the data more readily accessible while preserving the original paper records, the seismic data from this cruise were scanned and converted to Tagged Image File Format (TIFF) images and SEG-Y data files. Navigation data were converted from U.S. Coast Guard Long Range Aids to Navigation (LORAN-C) time delays to latitudes and longitudes, which are available in Environmental Systems Research Institute, Inc. (ESRI) shapefile format and as eastings and northings in space-delimited text format.

  11. Informatics in radiology: automated structured reporting of imaging findings using the AIM standard and XML.

    PubMed

    Zimmerman, Stefan L; Kim, Woojin; Boonn, William W

    2011-01-01

    Quantitative and descriptive imaging data are a vital component of the radiology report and are frequently of paramount importance to the ordering physician. Unfortunately, current methods of recording these data in the report are both inefficient and error prone. In addition, the free-text, unstructured format of a radiology report makes aggregate analysis of data from multiple reports difficult or even impossible without manual intervention. A structured reporting work flow has been developed that allows quantitative data created at an advanced imaging workstation to be seamlessly integrated into the radiology report with minimal radiologist intervention. As an intermediary step between the workstation and the reporting software, quantitative and descriptive data are converted into an extensible markup language (XML) file in a standardized format specified by the Annotation and Image Markup (AIM) project of the National Institutes of Health Cancer Biomedical Informatics Grid. The AIM standard was created to allow image annotation data to be stored in a uniform machine-readable format. These XML files containing imaging data can also be stored on a local database for data mining and analysis. This structured work flow solution has the potential to improve radiologist efficiency, reduce errors, and facilitate storage of quantitative and descriptive imaging data for research. Copyright © RSNA, 2011.
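The core mechanism described, serializing a quantitative finding to machine-readable XML, can be sketched with the standard library. The element names below are placeholders for illustration; the real AIM schema is considerably richer:

```python
import xml.etree.ElementTree as ET

# Hypothetical annotation with one quantitative calculation
annotation = ET.Element("ImageAnnotation")
calc = ET.SubElement(annotation, "Calculation", {"type": "Volume"})
ET.SubElement(calc, "value").text = "12.7"
ET.SubElement(calc, "units").text = "mL"

xml_bytes = ET.tostring(annotation, encoding="utf-8")
```

Files in this shape can be parsed back uniformly, which is what enables the aggregate data mining the article describes.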

  12. Visualizing 3D data obtained from microscopy on the Internet.

    PubMed

    Pittet, J J; Henn, C; Engel, A; Heymann, J B

    1999-01-01

    The Internet is a powerful communication medium increasingly exploited by business and science alike, especially in structural biology and bioinformatics. The traditional presentation of static two-dimensional images of real-world objects on the limited medium of paper can now be shown interactively in three dimensions. Many facets of this new capability have already been developed, particularly in the form of VRML (virtual reality modeling language), but there is a need to extend this capability for visualizing scientific data. Here we introduce a real-time isosurfacing node for VRML, based on the marching cube approach, allowing interactive isosurfacing. A second node does three-dimensional (3D) texture-based volume-rendering for a variety of representations. The use of computers in the microscopic and structural biosciences is extensive, and many scientific file formats exist. To overcome the problem of accessing such data from VRML and other tools, we implemented extensions to SGI's IFL (image format library). IFL is a file format abstraction layer defining communication between a program and a data file. These technologies are developed in support of the BioImage project, aiming to establish a database prototype for multidimensional microscopic data with the ability to view the data within a 3D interactive environment. Copyright 1999 Academic Press.

  13. Image editing with Adobe Photoshop 6.0.

    PubMed

    Caruso, Ronald D; Postel, Gregory C

    2002-01-01

    The authors introduce Photoshop 6.0 for radiologists and demonstrate basic techniques of editing gray-scale cross-sectional images intended for publication and for incorporation into computerized presentations. For basic editing of gray-scale cross-sectional images, the Tools palette and the History/Actions palette pair should be displayed. The History palette may be used to undo a step or series of steps. The Actions palette is a menu of user-defined macros that save time by automating an action or series of actions. Converting an image to 8-bit gray scale is the first editing function. Cropping is the next action. Both decrease file size. Use of the smallest file size necessary for the purpose at hand is recommended. Final file size for gray-scale cross-sectional neuroradiologic images (8-bit, single-layer TIFF [tagged image file format] at 300 pixels per inch) intended for publication varies from about 700 Kbytes to 3 Mbytes. Final file size for incorporation into computerized presentations is about 10-100 Kbytes (8-bit, single-layer, gray-scale, high-quality JPEG [Joint Photographic Experts Group]), depending on source and intended use. Editing and annotating images before they are inserted into presentation software is highly recommended, both for convenience and flexibility. Radiologists should find that image editing can be carried out very rapidly once the basic steps are learned and automated. Copyright RSNA, 2002
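The quoted TIFF sizes are consistent with simple pixel arithmetic for an 8-bit, single-layer grayscale image at 300 pixels per inch. A back-of-envelope check, assuming illustrative print dimensions of 3 x 4 inches:

```python
# Uncompressed size of an 8-bit, single-layer grayscale TIFF at 300 ppi
width_in, height_in, ppi = 3, 4, 300
bytes_per_pixel = 1  # 8-bit grayscale

size_bytes = (width_in * ppi) * (height_in * ppi) * bytes_per_pixel
size_mb = size_bytes / 1_000_000  # ~1.08 MB, within the cited 700 KB - 3 MB range
```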

  14. Chapter 3. Tabular data and graphical images in support of the U.S. Geological Survey National Oil and Gas Assessment--East Texas basin and Louisiana-Mississippi salt basins provinces, Jurassic Smackover Interior salt basins total petroleum system (504902), Cotton Valley group.

    USGS Publications Warehouse

    Klett, T.R.; Le, P.A.

    2006-01-01

    This chapter describes data used in support of the process being applied by the U.S. Geological Survey (USGS) National Oil and Gas Assessment (NOGA) project. Digital tabular data used in this report, and archival data that permit the user to perform further analyses, are available elsewhere on the CD-ROM. The reader may import these data into computers and software without transcription from the Portable Document Format files (.pdf files) of the text. Because of the number and variety of platforms and software available, graphical images are provided as .pdf files and tabular data are provided in raw form as tab-delimited text files (.tab files).

  15. Development of Software to Model AXAF-I Image Quality

    NASA Technical Reports Server (NTRS)

    Geary, Joseph; Hawkins, Lamar; Ahmad, Anees; Gong, Qian

    1997-01-01

    This report describes work conducted on Delivery Order 181 from October 1996 through June 1997. During this period, software was written to: compute axial PSDs from RDOS AXAF-I mirror surface maps; plot axial surface errors and compute PSDs from HDOS "Big 8" axial scans; plot PSDs from FITS-format PSD files; plot band-limited RMS versus axial and azimuthal position for multiple PSD files; combine and organize PSDs from multiple mirror surface measurements, formatted as input to GRAZTRACE; modify GRAZTRACE to read FITS-formatted PSD files; evaluate AXAF-I test results; and improve and expand the capabilities of the GT x-ray mirror analysis package. During this period, work also began on a more user-friendly manual for the GT program, and improvements were made to the on-line help manual.

  16. BOREAS RSS-20 POLDER Radiance Images From the NASA C-130

    NASA Technical Reports Server (NTRS)

    Leroy, M.; Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Smith, David E. (Technical Monitor)

    2000-01-01

    These Boreal Ecosystem-Atmosphere Study (BOREAS) Remote Sensing Science (RSS)-20 data are a subset of images collected by the Polarization and Directionality of Earth's Reflectance (POLDER) instrument over tower sites in the BOREAS study areas during the intensive field campaigns (IFCs) in 1994. The POLDER images presented here from the NASA ARC C-130 aircraft are made available for illustration purposes only. The data are stored in binary image-format files. The POLDER radiance images are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). The data files are available on a CD-ROM (see document number 20010000884).

  17. Electronic hand-drafting and picture management system.

    PubMed

    Yang, Tsung-Han; Ku, Cheng-Yuan; Yen, David C; Hsieh, Wen-Huai

    2012-08-01

    The Department of Health of the Executive Yuan in Taiwan (R.O.C.) is implementing a five-stage project, Electronic Medical Record (EMR), to convert all health records from written to electronic form. Traditionally, physicians record patients' symptoms, related examinations, and suggested treatments on paper medical records. Currently, when implementing the EMR, all text files and image files in the Hospital Information System (HIS) and Picture Archiving and Communication Systems (PACS) are kept separate. The current medical system environment is unable to combine text files, hand-drafted files, and photographs in the same system, so it is difficult to support physicians in recording medical data. Furthermore, in surgical and other related departments, physicians need immediate access to medical records in order to understand the details of a patient's condition. To address these problems, the Department of Health has implemented an EMR project whose primary goal is to build an electronic hand-drafting and picture management system (HDP system) that medical personnel can use to record medical information conveniently. The system can simultaneously edit text files, hand-drafted files, and image files, and then integrate these data into Portable Document Format (PDF) files. In addition, the output is designed to fit a variety of formats in order to meet various laws and regulations. By combining the HDP system with HIS and PACS, its applicability can be extended to various scenarios, assisting the medical industry in moving into the final phase of EMR.

  18. Digital Aeromagnetic Data and Derivative Products from a Helicopter Survey over the Town of Taos and Surrounding Areas, Taos County, New Mexico

    USGS Publications Warehouse

    Bankey, Viki; Grauch, V.J.S.; ,

    2004-01-01

    This report contains digital data, image files, and text files describing data formats and survey procedures for aeromagnetic data collected during a helicopter geophysical survey in northern New Mexico during October 2003. The survey covers the Town of Taos, Taos Pueblo, and surrounding communities in Taos County. Several derivative products from these data are also presented, including reduced-to-pole, horizontal gradient magnitude, and downward continued grids and images.

  19. Digital aeromagnetic data and derivative products from a helicopter survey over the town of Blanca and surrounding areas, Alamosa and Costilla counties, Colorado

    USGS Publications Warehouse

    Bankey, Viki; Grauch, V.J.S.; ,

    2004-01-01

    This CD-ROM contains digital data, image files, and text files describing data formats and survey procedures for aeromagnetic data collected during a helicopter geophysical survey in southern Colorado during October 2003. The survey covers the town of Blanca and surrounding communities in Alamosa and Costilla Counties. Several derivative products from these data are also presented, including reduced-to-pole, horizontal gradient magnitude, and downward continued grids and images.

  20. 37 CFR 2.53 - Requirements for drawings filed through the TEAS.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... attach a digitized image of the mark to the TEAS submission that meets the requirements of paragraph (c) of this section. (c) Requirements for digitized image: The image must be in .jpg format and scanned... crowded, and produce a high quality image when copied. [68 FR 55764, Sept. 26, 2003, as amended at 70 FR...

  1. 37 CFR 2.53 - Requirements for drawings filed through the TEAS.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... attach a digitized image of the mark to the TEAS submission that meets the requirements of paragraph (c) of this section. (c) Requirements for digitized image: The image must be in .jpg format and scanned... crowded, and produce a high quality image when copied. [68 FR 55764, Sept. 26, 2003, as amended at 70 FR...

  2. 37 CFR 2.53 - Requirements for drawings filed through the TEAS.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... attach a digitized image of the mark to the TEAS submission that meets the requirements of paragraph (c) of this section. (c) Requirements for digitized image: The image must be in .jpg format and scanned... crowded, and produce a high quality image when copied. [68 FR 55764, Sept. 26, 2003, as amended at 70 FR...

  3. 37 CFR 2.53 - Requirements for drawings filed through the TEAS.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... attach a digitized image of the mark to the TEAS submission that meets the requirements of paragraph (c) of this section. (c) Requirements for digitized image: The image must be in .jpg format and scanned... crowded, and produce a high quality image when copied. [68 FR 55764, Sept. 26, 2003, as amended at 70 FR...

  4. Sinus Meridiani: uncontrolled Mars Global Surveyor (MGS) Mars Orbital Camera (MOC): digital context photomosaic (250 megapixel resolution)

    USGS Publications Warehouse

    Noreen, Eric

    2000-01-01

    These images were processed from a raw format using Integrated Software for Imagers and Spectrometers (ISIS) to perform radiometric corrections and projection. All of the images were projected in a sinusoidal projection using a center longitude of 0 degrees. There are two versions of the mosaic: one unfiltered (sinusmos.tif) and one produced with all images processed through a box filter with an averaged pixel tone of 7.5 (sinusmosflt.tif). Both mosaics are ArcView-ArcInfo ready in TIF format with associated world files (*.tfw).
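The *.tfw world files that georeference these TIF mosaics follow a fixed six-line layout. A sketch that emits one; the pixel size and corner coordinates below are made up for illustration:

```python
def world_file_text(pixel_size: float, ul_x: float, ul_y: float) -> str:
    """Build the six-line ESRI world file text for a north-up image."""
    lines = [
        pixel_size,   # x pixel size in map units
        0.0,          # rotation terms (zero for a north-up image)
        0.0,
        -pixel_size,  # y pixel size, negative because rows run top to bottom
        ul_x,         # x of the center of the upper-left pixel
        ul_y,         # y of the center of the upper-left pixel
    ]
    return "\n".join(f"{v:.6f}" for v in lines) + "\n"

tfw = world_file_text(231.0, -500000.0, 1500000.0)
```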

  5. Central Valles Marineris: uncontrolled Mars Global Surveyor (MGS) Mars Orbital Camera (MOC) digital context photomosaic (250 megapixel resolution)

    USGS Publications Warehouse

    Noreen, Eric

    2000-01-01

    These images were processed from a raw format using Integrated Software for Imagers and Spectrometers (ISIS) to perform radiometric corrections and projection. All of the images were projected in a sinusoidal projection using a center longitude of 70 degrees. There are two versions of the mosaic: one unfiltered (vallesmos.tif) and one produced with all images processed through a box filter with an averaged pixel tone of 7.699 (vallesmosflt.tif). Both mosaics are ArcView-ArcInfo ready in TIF format with associated world files (*.tfw).

  6. Efficient stereoscopic contents file format on the basis of ISO base media file format

    NASA Astrophysics Data System (ADS)

    Kim, Kyuheon; Lee, Jangwon; Suh, Doug Young; Park, Gwang Hoon

    2009-02-01

    A lot of 3D content has been widely used in multimedia services; however, real 3D video content has been adopted only for limited applications, such as specially designed 3D cinemas. This is because of the difficulty of capturing real 3D video content and the limitations of the display devices available on the market. Recently, however, diverse types of display devices for stereoscopic video content have been released, including a mobile phone with a stereoscopic camera, which allows a user, as a consumer, to have more realistic experiences without glasses and, as a content creator, to take stereoscopic images or record stereoscopic video. However, a user can only store and display such acquired stereoscopic content on his or her own devices, because no common file format for this content exists. This limitation prevents users from sharing their content with other users, which makes it difficult for the market for stereoscopic content to expand. Therefore, this paper proposes a common file format for stereoscopic content on the basis of the ISO base media file format, which enables users to store and exchange pure stereoscopic content. This technology is also currently under development as an international standard of MPEG, called the stereoscopic video application format.
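An ISO base media file is a sequence of size-prefixed boxes (a 4-byte big-endian size covering the whole box, then a 4-byte type code such as 'ftyp' or 'moov'), which is what makes application formats like the stereoscopic one definable on top of it. A minimal box parser over a synthetic stream (the 64-bit and to-end size variants are omitted in this sketch):

```python
import struct

def parse_boxes(data: bytes):
    """Iterate top-level boxes of an ISO base media file."""
    offset, boxes = 0, []
    while offset + 8 <= len(data):
        size, box_type = struct.unpack_from(">I4s", data, offset)
        if size < 8:  # size 0/1 variants not handled here
            break
        boxes.append((box_type.decode("ascii"), size))
        offset += size
    return boxes

# Minimal synthetic stream: an 'ftyp' box with a brand, plus an empty 'moov'
ftyp = struct.pack(">I4s", 16, b"ftyp") + b"isom" + b"\x00\x00\x02\x00"
moov = struct.pack(">I4s", 8, b"moov")
boxes = parse_boxes(ftyp + moov)
```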

  7. Why can't I manage my digital images like MP3s? The evolution and intent of multimedia metadata

    NASA Astrophysics Data System (ADS)

    Goodrum, Abby; Howison, James

    2005-01-01

    This paper considers the deceptively simple question: Why can't digital images be managed in the simple and effective manner in which digital music files are managed? We make the case that the answer is different treatments of metadata in different domains with different goals. A central difference between the two formats stems from the fact that digital music metadata lookup services are collaborative and automate the movement from a digital file to the appropriate metadata, while image metadata services do not. To understand why this difference exists, we examine the divergent evolution of metadata standards for digital music and digital images and observe that the processes differ in interesting ways according to their intent. Specifically, music metadata was developed primarily for personal file management and community resource sharing, while the focus of image metadata has largely been on information retrieval. We argue that lessons from MP3 metadata can assist individuals facing their growing personal image management challenges. Our focus therefore is not on metadata for cultural heritage institutions or the publishing industry; it is limited to the personal libraries growing on our hard drives. This bottom-up approach to file management, combined with p2p distribution, radically altered the music landscape. Might such an approach have a similar impact on image publishing? This paper outlines plans for improving the personal management of digital images, doing image metadata and file management the MP3 way, and considers the likelihood of success.
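Part of what made MP3 metadata so tractable is that the simplest tag, ID3v1, is a fixed 128-byte trailer at the end of the file. A sketch of reading it from in-memory file contents, using a hand-built synthetic file:

```python
def read_id3v1(mp3_bytes: bytes):
    """Extract the ID3v1 tag (final 128 bytes) from MP3 file contents."""
    tag = mp3_bytes[-128:]
    if len(tag) < 128 or not tag.startswith(b"TAG"):
        return None
    def field(start, length):
        return tag[start : start + length].rstrip(b"\x00 ").decode("latin-1")
    return {
        "title": field(3, 30),
        "artist": field(33, 30),
        "album": field(63, 30),
        "year": field(93, 4),
    }

# Synthetic file: fake audio payload plus a hand-built 128-byte tag
tag = (b"TAG" + b"Demo Title".ljust(30, b"\x00")
       + b"Demo Artist".ljust(30, b"\x00")
       + b"Demo Album".ljust(30, b"\x00")
       + b"1999" + b"\x00" * 31)
info = read_id3v1(b"\xff\xfb" * 10 + tag)
```

It is this in-file layer, combined with collaborative lookup services, that the paper contrasts with retrieval-oriented image metadata.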

  8. Why can't I manage my digital images like MP3s? The evolution and intent of multimedia metadata

    NASA Astrophysics Data System (ADS)

    Goodrum, Abby; Howison, James

    2004-12-01

    This paper considers the deceptively simple question: Why can't digital images be managed in the simple and effective manner in which digital music files are managed? We make the case that the answer is different treatments of metadata in different domains with different goals. A central difference between the two formats stems from the fact that digital music metadata lookup services are collaborative and automate the movement from a digital file to the appropriate metadata, while image metadata services do not. To understand why this difference exists, we examine the divergent evolution of metadata standards for digital music and digital images and observe that the processes differ in interesting ways according to their intent. Specifically, music metadata was developed primarily for personal file management and community resource sharing, while the focus of image metadata has largely been on information retrieval. We argue that lessons from MP3 metadata can assist individuals facing their growing personal image management challenges. Our focus therefore is not on metadata for cultural heritage institutions or the publishing industry; it is limited to the personal libraries growing on our hard drives. This bottom-up approach to file management, combined with p2p distribution, radically altered the music landscape. Might such an approach have a similar impact on image publishing? This paper outlines plans for improving the personal management of digital images, doing image metadata and file management the MP3 way, and considers the likelihood of success.

  9. Retrieving high-resolution images over the Internet from an anatomical image database

    NASA Astrophysics Data System (ADS)

    Strupp-Adams, Annette; Henderson, Earl

    1999-12-01

    The Visible Human Data set is an important contribution to the national collection of anatomical images. To enhance the availability of these images, the National Library of Medicine has supported the design and development of a prototype object-oriented image database which imports, stores, and distributes high-resolution anatomical images in both pixel and voxel formats. One of the key database modules is its client-server Internet interface. This Web interface provides a query engine with retrieval access to high-resolution anatomical images that range in size from 100 KB for browser-viewable rendered images to 1 GB for anatomical structures in voxel file formats. The Web query and retrieval client-server system is composed of applet GUIs, servlets, and RMI application modules which communicate with each other to allow users to query for specific anatomical structures and retrieve image data as well as associated anatomical images from the database. Selected images can be downloaded individually as single files via HTTP or downloaded in batch mode over the Internet to the user's machine through an applet that uses Netscape's Object Signing mechanism. The image database uses ObjectDesign's object-oriented DBMS, ObjectStore, which has a Java interface. The query and retrieval system has been tested with a Java-CDE window system and on the x86 architecture using Windows NT 4.0. This paper describes the Java applet client search engine that queries the database; the Java client module that enables users to view anatomical images online; and the Java application server interface to the database, which organizes data returned to the user, with its distribution engine that allows users to download image files individually and/or in batch mode.

  10. Planetary image conversion task

    NASA Technical Reports Server (NTRS)

    Martin, M. D.; Stanley, C. L.; Laughlin, G.

    1985-01-01

    The Planetary Image Conversion Task group processed 12,500 magnetic tapes containing raw imaging data from JPL planetary missions and produced an image database in a consistent format on 1200 fully packed 6250-bpi tapes. The output tapes will remain at JPL. A copy of the entire tape set was delivered to the US Geological Survey, Flagstaff, Ariz. A secondary task converted computer data logs, which had been stored in project-specific MARK IV File Management System data types and structures, to a flat-file text format that is processable on any modern computer system. The conversion processing took place at JPL's Image Processing Laboratory on an IBM 370-158, with existing software modified slightly to meet the needs of the conversion task. More than 99% of the original digital image data was successfully recovered by the conversion task. However, processing data tapes recorded before 1975 was destructive. This discovery is of critical importance to facilities responsible for maintaining digital archives, since normal periodic random sampling techniques would be unlikely to detect this phenomenon, and entire data sets could be wiped out in the act of generating seemingly positive sampling results. Recommended follow-on activities are also included.

  11. Determining the Completeness of the Nimbus Meteorological Data Archive

    NASA Technical Reports Server (NTRS)

    Johnson, James; Moses, John; Kempler, Steven; Zamkoff, Emily; Al-Jazrawi, Atheer; Gerasimov, Irina; Trivedi, Bhagirath

    2011-01-01

    NASA launched the Nimbus series of meteorological satellites in the 1960s and 70s. These satellites carried instruments for making observations of the Earth in the visible, infrared, ultraviolet, and microwave wavelengths. The original data archive consisted of a combination of digital data written to 7-track computer tapes and on various film media. Many of these data sets are now being migrated from the old media to the GES DISC modern online archive. The process involves recovering the digital data files from tape as well as scanning images of the data from film strips. Some of the challenges of archiving the Nimbus data include the lack of any metadata from these old data sets. Metadata standards and self-describing data files did not exist at that time, and files were written on now obsolete hardware systems and outdated file formats. This requires creating metadata by reading the contents of the old data files. Some digital data files were corrupted over time, or were possibly improperly copied at the time of creation. Thus there are data gaps in the collections. The film strips were stored in boxes and are now being scanned as JPEG-2000 images. The only information describing these images is what was written on them when they were originally created, and sometimes this information is incomplete or missing. We have the ability to cross-reference the scanned images against the digital data files to determine which of these best represents the data set from the various missions, or to see how complete the data sets are. In this presentation we compared data files and scanned images from the Nimbus-2 High-Resolution Infrared Radiometer (HRIR) for September 1966 to determine whether the data and images are properly archived with correct metadata.
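The cross-referencing step described, comparing scanned images against recovered digital files to find gaps, reduces to set arithmetic over shared identifiers. A sketch with hypothetical observation dates:

```python
# Hypothetical identifiers for recovered tape files and scanned film images
data_files = {"1966-09-01", "1966-09-02", "1966-09-04"}
scanned_images = {"1966-09-01", "1966-09-03", "1966-09-04"}

only_digital = data_files - scanned_images  # recoverable from tape only
only_film = scanned_images - data_files     # recoverable from film only
combined = data_files | scanned_images      # best achievable coverage
```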

  12. Tabular data and graphical images in support of the U.S. Geological Survey National Oil and Gas Assessment--San Juan Basin Province (5022): Chapter 7 in Total petroleum systems and geologic assessment of undiscovered oil and gas resources in the San Juan Basin Province, exclusive of Paleozoic rocks, New Mexico and Colorado

    USGS Publications Warehouse

    Klett, T.R.; Le, P.A.

    2013-01-01

    This chapter describes data used in support of the process being applied by the U.S. Geological Survey (USGS) National Oil and Gas Assessment (NOGA) project. Digital tabular data used in this report, and archival data that permit the user to perform further analyses, are available elsewhere on this CD-ROM. The reader may import these data into computers and software without transcription from the Portable Document Format files (.pdf files) of the text. Because of the number and variety of platforms and software available, graphical images are provided as .pdf files and tabular data are provided in raw form as tab-delimited text files (.tab files).

  13. Digital data from the Questa-San Luis and Santa Fe East helicopter magnetic surveys in Santa Fe and Taos Counties, New Mexico, and Costilla County, Colorado

    USGS Publications Warehouse

    Bankey, Viki; Grauch, V.J.S.; Drenth, B.J.; ,

    2006-01-01

    This report contains digital data, image files, and text files describing data formats and survey procedures for aeromagnetic data collected during high-resolution aeromagnetic surveys in southern Colorado and northern New Mexico in December, 2005. One survey covers the eastern edge of the San Luis basin, including the towns of Questa, New Mexico and San Luis, Colorado. A second survey covers the mountain front east of Santa Fe, New Mexico, including the town of Chimayo and portions of the Pueblos of Tesuque and Nambe. Several derivative products from these data are also presented as grids and images, including reduced-to-pole data and data continued to a reference surface. Images are presented in various formats and are intended to be used as input to geographic information systems, standard graphics software, or map plotting packages.

  14. Automatic Feature Extraction System.

    DTIC Science & Technology

    1982-12-01

    exploitation. It was used for processing of black and white and multispectral reconnaissance photography, side-looking synthetic aperture radar imagery...the image data and different software modules for image queuing and formatting, the result of the input process will be images in standard AFES file...timely manner. The FFS configuration provides the environment necessary for integrated testing of image processing functions and design and

  15. 1995 Joseph E. Whitley, MD, Award. A World Wide Web gateway to the radiologic learning file.

    PubMed

    Channin, D S

    1995-12-01

    Computer networks in general, and the Internet specifically, are changing the way information is manipulated in the world at large and in radiology. The goal of this project was to develop a computer system in which images from the Radiologic Learning File, available previously only via a single-user laser disc, are made available over a generic, high-availability computer network to many potential users simultaneously. Using a networked workstation in our laboratory and freely available distributed hypertext software, we established a World Wide Web (WWW) information server for radiology. Images from the Radiologic Learning File are requested through the WWW client software, digitized from a single laser disc containing the entire teaching file and then transmitted over the network to the client. The text accompanying each image is incorporated into the transmitted document. The Radiologic Learning File is now on-line, and requests to view the cases result in the delivery of the text and images. Image digitization via a frame grabber takes 1/30th of a second. Conversion of the image to a standard computer graphic format takes 45-60 sec. Text and image transmission speed on a local area network varies between 200 and 400 kilobytes (KB) per second depending on the network load. We have made images from a laser disc of the Radiologic Learning File available through an Internet-based hypertext server. The images previously available through a single-user system located in a remote section of our department are now ubiquitously available throughout our department via the department's computer network. We have thus converted a single-user, limited functionality system into a multiuser, widely available resource.

  16. Providing Internet Access to High-Resolution Lunar Images

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2008-01-01

    The OnMoon server is a computer program that provides Internet access to high-resolution Lunar images, maps, and elevation data, all suitable for use in geographical information system (GIS) software for generating images, maps, and computational models of the Moon. The OnMoon server implements the Open Geospatial Consortium (OGC) Web Map Service (WMS) server protocol and supports Moon-specific extensions. Unlike other Internet map servers that provide Lunar data using an Earth coordinate system, the OnMoon server supports encoding of data in Moon-specific coordinate systems. The OnMoon server offers access to most of the available high-resolution Lunar image and elevation data. This server can generate image and map files in the tagged image file format (TIFF), Joint Photographic Experts Group (JPEG), 8- or 16-bit Portable Network Graphics (PNG), or Keyhole Markup Language (KML) format. Image control is provided by use of the OGC Style Layer Descriptor (SLD) protocol. Full-precision spectral arithmetic processing is also available, by use of a custom SLD extension. This server can dynamically add shaded relief based on the Lunar elevation to any image layer. This server also implements tiled WMS protocol and super-overlay KML for high-performance client application programs.
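
    A WMS client retrieves maps from servers such as OnMoon through GetMap request URLs; a hedged Python sketch of building one (the endpoint, layer name, and CRS below are placeholders, not the server's actual values):

```python
from urllib.parse import urlencode

def getmap_url(base, layer, bbox, width, height, fmt="image/jpeg"):
    """Build an OGC WMS 1.1.1 GetMap request URL from its parts."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "SRS": "EPSG:4326",  # a Moon-specific CRS would be used in practice
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": fmt,       # e.g. image/jpeg, image/png, image/tiff
    }
    return base + "?" + urlencode(params)

url = getmap_url("http://example.org/onmoon", "clementine",
                 (-10, -10, 10, 10), 512, 512)
print(url)
```

    The FORMAT parameter is where a server advertises output types such as TIFF, JPEG, or PNG, matching the formats listed in the abstract.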

  17. Providing Internet Access to High-Resolution Mars Images

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2008-01-01

    The OnMars server is a computer program that provides Internet access to high-resolution Mars images, maps, and elevation data, all suitable for use in geographical information system (GIS) software for generating images, maps, and computational models of Mars. The OnMars server is an implementation of the Open Geospatial Consortium (OGC) Web Map Service (WMS) server. Unlike other Mars Internet map servers that provide Martian data using an Earth coordinate system, the OnMars WMS server supports encoding of data in Mars-specific coordinate systems. The OnMars server offers access to most of the available high-resolution Martian image and elevation data, including an 8-meter-per-pixel uncontrolled mosaic of most of the Mars Global Surveyor (MGS) Mars Observer Camera Narrow Angle (MOCNA) image collection, which is not available elsewhere. This server can generate image and map files in the tagged image file format (TIFF), Joint Photographic Experts Group (JPEG), 8- or 16-bit Portable Network Graphics (PNG), or Keyhole Markup Language (KML) format. Image control is provided by use of the OGC Style Layer Descriptor (SLD) protocol. The OnMars server also implements tiled WMS protocol and super-overlay KML for high-performance client application programs.

  18. Archive of Digitized Analog Boomer Seismic Reflection Data Collected from the Mississippi-Alabama-Florida Shelf During Cruises Onboard the R/V Kit Jones, June 1990 and July 1991

    USGS Publications Warehouse

    Sanford, Jordan M.; Harrison, Arnell S.; Wiese, Dana S.; Flocks, James G.

    2009-01-01

    In June of 1990 and July of 1991, the U.S. Geological Survey (USGS) conducted geophysical surveys to investigate the shallow geologic framework of the Mississippi-Alabama-Florida shelf in the northern Gulf of Mexico, from Mississippi Sound to the Florida Panhandle. Work was done onboard the Mississippi Mineral Resources Institute R/V Kit Jones as part of a project to study coastal erosion and offshore sand resources. This report is part of a series to digitally archive the legacy analog data collected from the Mississippi-Alabama SHelf (MASH). The MASH data rescue project is a cooperative effort by the USGS and the Minerals Management Service (MMS). This report serves as an archive of high-resolution scanned Tagged Image File Format (TIFF) and Graphics Interchange Format (GIF) images of the original boomer paper records, navigation files, trackline maps, Geographic Information System (GIS) files, cruise logs, and formal Federal Geographic Data Committee (FGDC) metadata.

  19. Selective document image data compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1998-05-19

    A method of storing information from filled-in form documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. Then a second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold color value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel. 10 figs.
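
    The two-threshold, two-color conversion and combination steps described above can be sketched in a few lines of Python (gray values and thresholds are invented for illustration; this is not the patented implementation):

```python
def to_two_color(pixels, threshold):
    """Map each gray value to black (0) if darker than threshold, else white (255)."""
    return [0 if p < threshold else 255 for p in pixels]

def combine(a, b):
    """A pixel is black in the combined image if it is black in either input."""
    return [0 if (x == 0 or y == 0) else 255 for x, y in zip(a, b)]

scanned     = [12, 200, 90, 250]   # illustrative gray values
filled_edge = [30, 240, 60, 255]

first  = to_two_color(scanned, 128)      # threshold values are assumptions
second = to_two_color(filled_edge, 100)
print(combine(first, second))  # -> [0, 255, 0, 255]
```

    Huffman coding then works well on the result because the two-color image has long runs of identical values.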

  20. Selective document image data compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1998-01-01

    A method of storing information from filled-in form documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. Then a second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold color value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel.

  1. ImageJ: Image processing and analysis in Java

    NASA Astrophysics Data System (ADS)

    Rasband, W. S.

    2012-06-01

    ImageJ is a public domain Java image processing program inspired by NIH Image. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations.
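
    The parallel file reading mentioned above can be mimicked with a thread pool; a hedged Python sketch (ImageJ itself is Java, and `load_image` here is a stand-in, not ImageJ's API):

```python
from concurrent.futures import ThreadPoolExecutor

def load_image(path):
    """Stand-in for decoding an image file; returns (path, pixel_count)."""
    return (path, 256 * 256)

# Reading several image files concurrently, as a multithreaded viewer
# might, while other work proceeds on the main thread.
paths = ["a.tif", "b.jpg", "c.bmp"]
with ThreadPoolExecutor(max_workers=3) as pool:
    images = list(pool.map(load_image, paths))
print(images[0])  # -> ('a.tif', 65536)
```

    `pool.map` preserves input order, so results line up with the requested paths even though the reads may finish out of order.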

  2. Context-dependent JPEG backward-compatible high-dynamic range image compression

    NASA Astrophysics Data System (ADS)

    Korshunov, Pavel; Ebrahimi, Touradj

    2013-10-01

    High-dynamic range (HDR) imaging is expected, together with ultrahigh-definition and high-frame-rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available in the market. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for quality evaluation, file formats, and compression, as well as by the large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate widespread HDR usage, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use under which conditions. We demonstrate, via a series of subjective evaluations, the dependency of the perceptual quality of tone-mapped LDR images on the context: environmental factors, display parameters, and the image content itself. Based on the results of these subjective tests, we propose to extend the JPEG file format, the most popular image format, in a backward-compatible manner to also handle HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with state-of-the-art HDR image compression.
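
    Backward compatibility of this kind typically rests on a standard JPEG property: decoders skip application (APPn) marker segments they do not recognize. A minimal sketch of carrying extra payload inside a JPEG stream this way (the marker choice and payload are illustrative, not the paper's actual format):

```python
import struct

def insert_app_segment(jpeg, payload, marker=0xEB):
    """Insert an APPn segment right after the SOI marker (FF D8).

    Legacy JPEG decoders skip unknown APPn segments, so the payload
    (e.g., HDR residual data) travels invisibly inside an ordinary JPEG.
    The two-byte segment length counts itself plus the payload.
    """
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG stream"
    seg = b"\xff" + bytes([marker]) + struct.pack(">H", len(payload) + 2) + payload
    return jpeg[:2] + seg + jpeg[2:]

fake_jpeg = b"\xff\xd8\xff\xd9"          # SOI + EOI only, for illustration
out = insert_app_segment(fake_jpeg, b"HDR")
print(out.hex())  # -> ffd8ffeb0005484452ffd9
```

    A legacy viewer renders the tone-mapped LDR picture and ignores the segment; an HDR-aware decoder reads the segment and reconstructs the full dynamic range.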

  3. BOREAS RSS-16 AIRSAR CM Images: Integrated Processor Version 6.1 Level-3b

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Saatchi, Susan; Newcomer, Jeffrey A.; Strub, Richard; Irani, Fred

    2000-01-01

    The BOREAS RSS-16 team used satellite and aircraft SAR data in conjunction with various ground measurements to determine the moisture regime of the boreal forest. RSS-16 assisted with the acquisition and ordering of NASA JPL AIRSAR data collected from the NASA DC-8 aircraft. The NASA JPL AIRSAR is a side-looking imaging radar system that utilizes the SAR principle to obtain high resolution images that represent the radar backscatter of the imaged surface at different frequencies and polarizations. The information contained in each pixel of the AIRSAR data represents the radar backscatter for all possible combinations of horizontal and vertical transmit and receive polarizations (i.e., HH, HV, VH, and VV). Geographically, the data cover portions of the BOREAS SSA and NSA. Temporally, the data were acquired from 12-Aug-1993 to 31-Jul-1995. The level-3b AIRSAR CM data are in compressed Stokes matrix format, which has 10 bytes per pixel. From this data format, it is possible to synthesize a number of different radar backscatter measurements. The data are stored in binary image-format files. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).
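
    The 10-byte-per-pixel layout means a row of the binary file can be chunked into fixed-size records; a hedged Python sketch (the interpretation of the bytes as Stokes matrix elements requires the AIRSAR decompression formulas, which are omitted here):

```python
import struct

RECORD = 10  # bytes per pixel in the compressed Stokes matrix format

def pixels(raw):
    """Split a row of compressed Stokes matrix data into per-pixel records.

    Each record is unpacked as 10 unsigned bytes; this is only a
    chunking sketch, not a backscatter synthesis.
    """
    assert len(raw) % RECORD == 0
    return [struct.unpack("10B", raw[i:i + RECORD])
            for i in range(0, len(raw), RECORD)]

row = bytes(range(20))                 # two illustrative pixels
print(pixels(row)[1][0])               # -> 10
```

    Fixed-size records like this are why binary image-format files can be read with plain `struct` calls and no external library.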

  4. Image fusion in craniofacial virtual reality modeling based on CT and 3dMD photogrammetry.

    PubMed

    Xin, Pengfei; Yu, Hongbo; Cheng, Huanchong; Shen, Shunyao; Shen, Steve G F

    2013-09-01

    The aim of this study was to demonstrate the feasibility of building a craniofacial virtual reality model by image fusion of 3-dimensional (3D) CT models and a 3dMD stereophotogrammetric facial surface. A CT scan and stereophotography were performed. The 3D CT models were reconstructed with Materialise Mimics software, and the stereophotogrammetric facial surface was reconstructed with 3dMD patient software. All 3D CT models were exported in Stereo Lithography file format, and the 3dMD model was exported in Virtual Reality Modeling Language file format. Image registration and fusion were performed in Mimics software. A genetic algorithm was used for precise image fusion alignment with minimum error. The 3D CT models and the 3dMD stereophotogrammetric facial surface were finally merged into a single file and displayed using Deep Exploration software. Errors between the CT soft-tissue model and the 3dMD facial surface were also analyzed. The virtual model based on CT-3dMD image fusion clearly showed the photorealistic face and bone structures. Image registration errors in the virtual face are mainly located in the bilateral cheeks and eyeballs, where the errors exceed 1.5 mm. However, the image fusion of the whole point cloud sets of CT and 3dMD is acceptable, with a minimum error of less than 1 mm. The ease of use and high reliability of CT-3dMD image fusion allow the 3D virtual head to serve as an accurate, realistic, and widespread tool, of great benefit to virtual face modeling.
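
    The surface-error analysis described above amounts to measuring distances between point clouds; a toy Python sketch of one such metric (mean nearest-point distance, with invented coordinates; not the study's actual method):

```python
import math

def mean_surface_error(cloud_a, cloud_b):
    """Mean distance from each point in cloud_a to its nearest point in cloud_b.

    Brute-force O(n*m); real registration pipelines use spatial indexes.
    """
    def nearest(p, cloud):
        return min(math.dist(p, q) for q in cloud)
    return sum(nearest(p, cloud_b) for p in cloud_a) / len(cloud_a)

a = [(0, 0, 0), (1, 0, 0)]   # e.g. CT soft-tissue surface samples (invented)
b = [(0, 0, 1), (1, 0, 1)]   # e.g. 3dMD surface samples (invented)
print(mean_surface_error(a, b))  # -> 1.0
```

    A registration optimizer (such as the genetic algorithm the study uses) would minimize a metric of this kind over rigid transformations.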

  5. Toward a standard reference database for computer-aided mammography

    NASA Astrophysics Data System (ADS)

    Oliveira, Júlia E. E.; Gueld, Mark O.; de A. Araújo, Arnaldo; Ott, Bastian; Deserno, Thomas M.

    2008-03-01

    Because of the lack of mammography databases with a large number of codified images and identified characteristics such as pathology, type of breast tissue, and abnormality, the development of robust systems for computer-aided diagnosis is hampered. Integrated into the Image Retrieval in Medical Applications (IRMA) project, we present an available mammography database developed from the union of the Mammographic Image Analysis Society Digital Mammogram Database (MIAS), the Digital Database for Screening Mammography (DDSM), the Lawrence Livermore National Laboratory (LLNL) set, and routine images from the Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen. Using the IRMA code, standardized coding of tissue type, tumor staging, and lesion description was developed according to the American College of Radiology (ACR) tissue codes and the ACR Breast Imaging Reporting and Data System (BI-RADS). The import was done automatically, using scripts for image download, file format conversion, file naming, and web page and information file browsing. Disregarding resolution, this resulted in a total of 10,509 reference images, of which 6,767 are associated with an IRMA contour information feature file. In accordance with the respective license agreements, the database will be made freely available for research purposes and may be used for image-based evaluation campaigns such as the Cross Language Evaluation Forum (CLEF). We have also shown that it can be extended easily with further cases imported from a picture archiving and communication system (PACS).
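
    The scripted import pipeline described above hinges on deriving uniform file names from heterogeneous sources; a toy Python sketch (the naming scheme here is invented, not the actual IRMA code):

```python
def standardized_name(source, index):
    """Derive a uniform file name from a source-database tag and a counter.

    The pattern below is a hypothetical example of unifying names across
    MIAS, DDSM, and LLNL imports; the real IRMA scheme differs.
    """
    return f"{source.lower()}_{index:05d}.png"

names = [standardized_name(db, i)
         for i, db in enumerate(["MIAS", "DDSM", "LLNL"], start=1)]
print(names)  # -> ['mias_00001.png', 'ddsm_00002.png', 'llnl_00003.png']
```

    Uniform, zero-padded names keep files sortable and let downstream scripts locate an image and its contour feature file by a shared stem.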

  6. Preliminary Image Map of the 2007 Harris Fire Perimeter, Barrett Lake Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  7. Preliminary Image Map of the 2007 Santiago Fire Perimeter, Santiago Peak Quadrangle, Orange and Riverside Counties, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  8. Preliminary Image Map of the 2007 Buckweed Fire Perimeter, Green Valley Quadrangle, Los Angeles County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  9. Preliminary Image Map of the 2007 Witch Fire Perimeter, Warners Ranch Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  10. Preliminary Image Map of the 2007 Harris Fire Perimeter, Otay Mesa Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  11. Preliminary Image Map of the 2007 Rice Fire Perimeter, Bonsall Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  12. Preliminary Image Map of the 2007 Poomacha Fire Perimeter, Pechanga Quadrangle, Riverside and San Diego Counties, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  13. Preliminary Image Map of the 2007 Harris Fire Perimeter, Tecate Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  14. Preliminary Image Map of the 2007 Poomacha Fire Perimeter, Temecula Quadrangle, Riverside and San Diego Counties, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  15. Preliminary Image Map of the 2007 Buckweed Fire Perimeter, Agua Dulce Quadrangle, Los Angeles County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  16. Preliminary Image Map of the 2007 Witch Fire Perimeter, San Pasqual Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  17. Preliminary Image Map of the 2007 Buckweed Fire Perimeter, Mint Canyon Quadrangle, Los Angeles County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  18. Preliminary Image Map of the 2007 Witch Fire Perimeter, Escondido Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  19. Preliminary Image Map of the 2007 Poomacha Fire Perimeter, Boucher Hill Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  20. Preliminary Image Map of the 2007 Ammo Fire Perimeter, Margarita Peak Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  1. Preliminary Image Map of the 2007 Witch Fire Perimeter, Ramona Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  2. Preliminary Image Map of the 2007 Ammo Fire Perimeter, San Onofre Bluff Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  3. Preliminary Image Map of the 2007 Santiago Fire Perimeter, Orange Quadrangle, Orange County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  4. Preliminary Image Map of the 2007 Harris Fire Perimeter, Otay Mountain Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  5. Preliminary Image Map of the 2007 Ranch Fire Perimeter, Cobblestone Mountain Quadrangle, Los Angeles and Ventura Counties, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  6. Preliminary Image Map of the 2007 Poomacha Fire Perimeter, Palomar Observatory Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  7. Preliminary Image Map of the 2007 Witch Fire Perimeter, El Cajon Mountain Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  8. Preliminary Image Map of the 2007 Witch and Poomacha Fire Perimeters, Rodriguez Mountain Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  9. Preliminary Image Map of the 2007 Witch Fire Perimeter, Santa Ysabel Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  10. Preliminary Image Map of the 2007 Ammo Fire Perimeter, Las Pulgas Canyon Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  11. Preliminary Image Map of the 2007 Harris Fire Perimeter, Jamul Mountains Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  12. Preliminary Image Map of the 2007 Santiago Fire Perimeter, Lake Forest Quadrangle, Orange County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  13. Preliminary Image Map of the 2007 Cajon Fire Perimeter, San Bernardino North Quadrangle, San Bernardino County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  14. Preliminary Image Map of the 2007 Slide Fire Perimeter, Butler Peak Quadrangle, San Bernardino County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  15. Preliminary Image Map of the 2007 Witch Fire Perimeter, San Vicente Reservoir Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  16. Preliminary Image Map of the 2007 Ammo Fire Perimeter, San Clemente Quadrangle, Orange and San Diego Counties, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  17. Preliminary Image Map of the 2007 Cajon Fire Perimeter, Devore Quadrangle, San Bernardino County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  18. Preliminary Image Map of the 2007 Ranch Fire Perimeter, Fillmore Quadrangle, Ventura County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  19. Preliminary Image Map of the 2007 Ranch Fire Perimeter, Piru Quadrangle, Ventura County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  20. Preliminary Image Map of the 2007 Magic and Buckweed Fire Perimeters, Newhall Quadrangle, Los Angeles County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  1. Preliminary Image Map of the 2007 Harris Fire Perimeter, Dulzura Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  2. Preliminary Image Map of the 2007 Grass Valley Fire Perimeter, Lake Arrowhead Quadrangle, San Bernardino County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  3. Preliminary Image Map of the 2007 Harris Fire Perimeter, Potrero Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  4. Preliminary Image Map of the 2007 Witch and Poomacha Fire Perimeters, Mesa Grande Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  5. Preliminary Image Map of the 2007 Canyon Fire Perimeter, Malibu Beach Quadrangle, Los Angeles County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  6. Preliminary Image Map of the 2007 Santiago Fire Perimeter, Black Star Canyon Quadrangle, Orange, Riverside, and San Bernardino Counties, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  7. Preliminary Image Map of the 2007 Buckweed Fire Perimeter, Warm Springs Mountain Quadrangle, Los Angeles County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  8. Preliminary Image Map of the 2007 Ranch Fire Perimeter, Whitaker Peak Quadrangle, Los Angeles and Ventura Counties, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  9. Preliminary Image Map of the 2007 Poomacha Fire Perimeter, Vail Lake Quadrangle, Riverside and San Diego Counties, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  10. Preliminary Image Map of the 2007 Witch Fire Perimeter, Valley Center Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  11. Preliminary Image Map of the 2007 Santiago Fire Perimeter, Tustin Quadrangle, Orange County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  12. Preliminary Image Map of the 2007 Witch Fire Perimeter, Rancho Santa Fe Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  13. Preliminary Image Map of the 2007 Slide Fire Perimeter, Harrison Mountain Quadrangle, San Bernardino County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  14. Preliminary Image Map of the 2007 Buckweed Fire Perimeter, Sleepy Valley Quadrangle, Los Angeles County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  15. Preliminary Image Map of the 2007 Ranch and Magic Fire Perimeters, Val Verde Quadrangle, Los Angeles and Ventura Counties, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  16. Preliminary Image Map of the 2007 Witch Fire Perimeter, Poway Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  17. Preliminary Image Map of the 2007 Poomacha Fire Perimeter, Pala Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  18. Preliminary Image Map of the 2007 Witch Fire Perimeter, Tule Springs Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  19. Preliminary Image Map of the 2007 Harris Fire Perimeter, Morena Reservoir Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  20. Preliminary Image Map of the 2007 Slide Fire Perimeter, Keller Peak Quadrangle, San Bernardino County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

  1. BOREAS Level-3b Landsat TM Imagery: At-sensor Radiances in BSQ Format

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nickeson, Jaime; Knapp, David; Newcomer, Jeffrey A.; Cihlar, Josef

    2000-01-01

    For BOREAS, the level-3b Landsat TM data, along with the other remotely sensed images, were collected in order to provide spatially extensive information over the primary study areas. This information includes radiant energy, detailed land cover, and biophysical parameter maps such as FPAR and LAI. Although very similar in content to the level-3a Landsat TM products, the level-3b images were created to provide users with a directly usable at-sensor radiance image. Geographically, the level-3b images cover the BOREAS NSA and SSA. Temporally, the images cover the period of 22-Jun-1984 to 09-Jul-1996. The images are available as binary image format files.
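
    The "BSQ" in the product title refers to band-sequential layout: each spectral band is stored contiguously, one full raster plane after another. A minimal sketch of pulling one band out of a BSQ byte stream (the dimensions and 1-byte-per-pixel assumption here are hypothetical; real BOREAS files carry their own metadata and data types):

```python
def read_bsq_band(raw, band, nbands, nrows, ncols):
    """Extract one band from band-sequential (BSQ) raster bytes.

    In BSQ layout, band 0's full nrows x ncols grid comes first,
    then band 1's, and so on (1 byte per pixel assumed here).
    """
    plane = nrows * ncols
    assert 0 <= band < nbands and len(raw) == nbands * plane
    start = band * plane
    return list(raw[start:start + plane])

# Hypothetical 2-band, 2x3 image: band 0 holds 0..5, band 1 holds 10..15.
raw = bytes(range(6)) + bytes(range(10, 16))
band1 = read_bsq_band(raw, 1, nbands=2, nrows=2, ncols=3)
```

    The same bytes reordered pixel-by-pixel would instead be BIP (band-interleaved-by-pixel); the layout choice only changes the offset arithmetic, not the data.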

  2. Loose, Falling Characters and Sentences: The Persistence of the OCR Problem in Digital Repository E-Books

    ERIC Educational Resources Information Center

    Kichuk, Diana

    2015-01-01

    The electronic conversion of scanned image files to readable text using optical character recognition (OCR) software and the subsequent migration of raw OCR text to e-book text file formats are key remediation or media conversion technologies used in digital repository e-book production. Despite real progress, the OCR problem of reliability and…

  3. Incidence of Apical Crack Initiation during Canal Preparation using Hand Stainless Steel (K-File) and Hand NiTi (Protaper) Files.

    PubMed

    Soni, Dileep; Raisingani, Deepak; Mathur, Rachit; Madan, Nidha; Visnoi, Suchita

    2016-01-01

    To evaluate the incidence of apical crack initiation during canal preparation with hand stainless steel K-files and hand NiTi ProTaper files (in vitro study). Sixty extracted mandibular premolar teeth were randomly selected and embedded in acrylic tubes filled with autopolymerizing resin. A baseline image of the apical surface of each specimen was recorded under a digital microscope (80×). The cervical and middle thirds of all samples were flared with #2 and #1 Gates-Glidden (GG) drills, and a second image was recorded. The teeth were randomly divided into four groups of 15 teeth each according to file type (hand K-file and hand ProTaper) and working length (WL) (instrumented at WL and 1 mm less than WL). A final image after dye penetration and a photomicrograph of the apical root surface were digitally recorded. The maximum number of cracks was observed with hand ProTaper files compared with hand K-files, both at the WL and 1 mm short of WL. Chi-square testing revealed a highly significant effect of WL on crack formation at WL and 1 mm short of WL (p = 0.000). The minimum number of cracks at WL and 1 mm short of WL was observed with hand K-files and the maximum with hand ProTaper files. Soni D, Raisingani D, Mathur R, Madan N, Visnoi S. Incidence of Apical Crack Initiation during Canal Preparation using Hand Stainless Steel (K-File) and Hand NiTi (Protaper) Files. Int J Clin Pediatr Dent 2016;9(4):303-307.
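
    The chi-square test used in the study operates on a contingency table of crack counts. The counts below are purely hypothetical, to illustrate the computation rather than reproduce the study's data:

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = rows[i] * cols[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical crack counts: rows = file type, columns = cracked / intact.
table = [[3, 12],    # hand K-file
         [11, 4]]    # hand ProTaper
stat = chi_square(table)
```

    The statistic is then compared against the chi-square distribution with (r-1)(c-1) degrees of freedom to obtain the p-value.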

  4. Preliminary results of 3D dose calculations with MCNP-4B code from a SPECT image.

    PubMed

    Rodríguez Gual, M; Lima, F F; Sospedra Alfonso, R; González González, J; Calderón Marín, C

    2004-01-01

    Interface software was developed to generate the input file for running the Monte Carlo MCNP-4B code from a medical image in Interfile format version 3.3. The software was tested using a spherical phantom of tomography slices with a known cumulated activity distribution in Interfile format, generated with the IMAGAMMA medical image processing system. The 3D dose calculation obtained with the Monte Carlo MCNP-4B code was compared with the voxel S factor method. The results show a relative error between the two methods of less than 1%.
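
    What makes such interface software feasible is that Interfile 3.3 headers are plain "key := value" text. A minimal sketch of a header reader (illustrative only, not a complete Interfile parser; the sample keys are typical but abbreviated):

```python
def parse_interfile(text):
    """Parse Interfile-style 'key := value' header lines into a dict.

    Interfile headers are plain text: one 'key := value' pair per line,
    '!' marking required keys and ';' starting comments.
    """
    header = {}
    for line in text.splitlines():
        line = line.split(';')[0].strip()   # drop comments and whitespace
        if ':=' not in line:
            continue
        key, _, value = line.partition(':=')
        header[key.strip().lstrip('!').lower()] = value.strip()
    return header

hdr = parse_interfile("""!INTERFILE :=
!matrix size [1] := 64
!matrix size [2] := 64
!number format := unsigned integer
""")
```

    An interface tool would read keys such as the matrix size and number format from this dictionary, then stream the accompanying binary voxel data into MCNP input cards.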

  5. Survey of Non-Rigid Registration Tools in Medicine.

    PubMed

    Keszei, András P; Berkels, Benjamin; Deserno, Thomas M

    2017-02-01

    We catalogue available software solutions for non-rigid image registration to support scientists in selecting suitable tools for specific medical registration purposes. Registration tools were identified using a non-systematic search in PubMed, Web of Science, IEEE Xplore® Digital Library, and Google Scholar, and through references in identified sources (n = 22). Exclusions are due to unavailability or inappropriateness. The remaining (n = 18) tools were classified by (i) access and technology, (ii) interfaces and application, (iii) living community, (iv) supported file formats, and (v) types of registration methodologies, emphasizing the similarity measures implemented. Out of the 18 tools, (i) 12 are open source, 8 are released under a permissive free license, which imposes the least restrictions on the use and further development of the tool, and 8 provide graphical processing unit (GPU) support; (ii) 7 are built on software platforms and 5 were developed for brain image registration; (iii) 6 are under active development, but only 3 have had their last update in 2015 or 2016; (iv) 16 support the Analyze format, while 7 file formats can be read with only one of the tools; and (v) 6 provide multiple registration methods and 6 provide landmark-based registration methods. Based on open source licensing, GPU support, active community, supported file formats, algorithms, and similarity measures, the tools Elastix and Plastimatch are chosen for the ITK platform and for use without platform requirements, respectively. Researchers in medical image analysis already have a large choice of registration tools freely available. However, the most recently published algorithms may not yet be included in the tools.
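
    Two of the similarity measures such registration tools commonly implement, sum of squared differences (SSD) and normalized cross-correlation (NCC), can be sketched in a few lines. The toy 1-D intensity vectors below are for illustration only:

```python
from math import sqrt

def ssd(a, b):
    """Sum of squared differences: 0 for identical images, larger = less similar."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def ncc(a, b):
    """Normalized cross-correlation: 1.0 when images match up to a linear intensity change."""
    mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    norm_a = sqrt(sum((x - mean_a) ** 2 for x in a))
    norm_b = sqrt(sum((y - mean_b) ** 2 for y in b))
    return num / (norm_a * norm_b)

fixed = [10, 20, 30, 40]
moving = [12, 22, 32, 42]   # same pattern with a constant intensity offset
```

    The choice of measure matters: SSD penalizes the constant offset here, while NCC is invariant to it, which is why multi-modal registration favors correlation- or information-based measures.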

  6. Application of XML in DICOM

    NASA Astrophysics Data System (ADS)

    You, Xiaozhen; Yao, Zhihong

    2005-04-01

    As a standard for the communication and storage of medical digital images, DICOM has been playing a very important role in the integration of hospital information. In DICOM, tags are expressed as numbers, and only standard data elements can be shared by looking them up in the Data Dictionary, while private tags cannot. As such, a DICOM file's readability and extensibility are limited. In addition, reading DICOM files requires special software. In our research, we introduced XML into DICOM, defining an XML-based DICOM transfer format, XML-DCM, and a DICOM storage format, X-DCM, as well as developing a program package to realize format interchange among DICOM, XML-DCM, and X-DCM. XML-DCM is based on the DICOM structure but replaces numeric tags with accessible XML character-string tags. The merits are as follows: a) every character-string tag of XML-DCM has an explicit meaning, so users can easily understand standard data elements, and even private data elements, without looking up the Data Dictionary; in this way, the readability and data sharing of DICOM files are greatly improved. b) According to their requirements, users can define new character-string tags with explicit meanings for their own systems, extending the set of data elements. c) Users can conveniently read the medical image and associated information through IE, ultimately enlarging the scope of data sharing. The application of the storage format X-DCM will reduce data redundancy and save storage memory. The results of practical application show that XML-DCM favors the integration and sharing of medical image data among different systems and devices.
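
    The core idea, replacing numeric DICOM tags with readable string tags, can be illustrated with a small sketch. The tag-name mapping and element names below are hypothetical stand-ins, not the actual XML-DCM schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical mapping from numeric (group, element) DICOM tags to
# readable names, in the spirit of XML-DCM's character-string tags.
TAG_NAMES = {
    (0x0010, 0x0010): "PatientName",
    (0x0008, 0x0060): "Modality",
}

def to_xml_dcm(elements):
    """Render (group, element) -> value pairs as readable XML tags."""
    root = ET.Element("XML-DCM")
    for tag, value in elements.items():
        # Unknown (private) tags keep their numeric identity in the name.
        name = TAG_NAMES.get(tag, "Private-%04X-%04X" % tag)
        ET.SubElement(root, name).text = str(value)
    return ET.tostring(root, encoding="unicode")

xml_text = to_xml_dcm({(0x0010, 0x0010): "DOE^JOHN",
                       (0x0008, 0x0060): "MR"})
```

    The resulting document is self-describing: a reader sees `<PatientName>` directly instead of having to resolve tag (0010,0010) in the Data Dictionary.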

  7. GIF Animation of Mode Shapes and Other Data on the Internet

    NASA Technical Reports Server (NTRS)

    Pappa, Richard S.

    1998-01-01

    The World Wide Web abounds with animated cartoons and advertisements competing for our attention. Most of these figures are animated Graphics Interchange Format (GIF) files. These files contain a series of ordinary GIF images plus control information, and they provide an exceptionally simple, effective way to animate on the Internet. To date, however, this format has rarely been used for technical data, although there is no inherent reason not to do so. This paper describes a procedure for creating high-resolution animated GIFs of mode shapes and other types of structural dynamics data with readily available software. The paper shows three example applications using recent modal test data and video footage of a high-speed sled run. A fairly detailed summary of the GIF file format is provided in the appendix. All of the animations discussed in the paper are available on the Internet at the following address: http://sdb-www.larc.nasa.gov/.
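
    The fixed layout of a GIF file's opening bytes makes it easy to inspect programmatically. The following sketch parses the 6-byte signature and the logical screen descriptor from a hand-built header; it is a minimal illustration of the format summarized in the paper's appendix, not the paper's own software.

```python
import struct

def parse_gif_header(data):
    """Parse the 13-byte GIF header and logical screen descriptor."""
    signature = data[:6].decode("ascii")              # e.g. "GIF89a"
    width, height = struct.unpack("<HH", data[6:10])  # little-endian u16 pair
    packed, _bg_index, _aspect = data[10:13]
    has_gct = bool(packed & 0x80)                     # global color table flag
    gct_size = 2 ** ((packed & 0x07) + 1)             # entries in that table
    return {"signature": signature, "width": width, "height": height,
            "global_color_table": has_gct, "gct_entries": gct_size}

# A hand-built header for a 320x200 GIF89a with a 256-entry color table.
header = b"GIF89a" + struct.pack("<HH", 320, 200) + bytes([0xF7, 0x00, 0x00])
print(parse_gif_header(header))
```

    In an animated GIF, the image frames that follow this header are interleaved with control blocks (such as the Graphic Control Extension carrying per-frame delay times), which is the "control information" the abstract refers to.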

  8. A Study of NetCDF as an Approach for High Performance Medical Image Storage

    NASA Astrophysics Data System (ADS)

    Magnus, Marcone; Coelho Prado, Thiago; von Wangenhein, Aldo; de Macedo, Douglas D. J.; Dantas, M. A. R.

    2012-02-01

    The spread of telemedicine systems increases every day, and systems and PACS based on DICOM images have become common. This rise reflects the need to develop new storage systems that are more efficient and have lower computational costs. With this in mind, this article presents a study of the NetCDF data format as the basic platform for the storage of DICOM images. The case study compares an ordinary database, HDF5, and NetCDF for storing the medical images. Empirical results, using a real set of images, indicate that retrieving large images from NetCDF has a higher latency than the other two methods. In addition, the latency is proportional to the file size, which represents a drawback for a telemedicine system that is characterized by a large number of large image files.

  9. Incorporating the APS Catalog of the POSS I and Image Archive in ADS

    NASA Technical Reports Server (NTRS)

    Humphreys, Roberta M.

    1998-01-01

    The primary purpose of this contract was to develop the software to both create and access an on-line database of images from digital scans of the Palomar Sky Survey. This required modifying our DBMS (called Star Base) to create an image database from the actual raw pixel data from the scans. The digitized images are processed into a set of coordinate-reference index and pixel files that are stored as run-length files, thus achieving an efficient lossless compression. For efficiency and ease of referencing, each digitized POSS I plate is then divided into 900 subplates. Our custom DBMS maps each query into the corresponding POSS plate(s) and subplate(s). All images from the appropriate subplates are retrieved from disk with byte offsets taken from the index files. These are assembled on the fly into a GIF image file for browser display, and a FITS format image file for retrieval. The FITS images have a pixel size of 0.33 arcseconds, and the FITS header contains astrometric and photometric information. This method keeps the disk requirements manageable while allowing for future improvements. When complete, the APS Image Database will contain over 130 GB of data. A set of Web query forms is available on-line, along with an on-line tutorial and documentation. The database is served to the Internet by a high-speed SGI server and a high-bandwidth disk system. The URL is http://aps.umn.edu/IDB/. The image database software is written in Perl and C and has been compiled on SGI computers under IRIX 5.3. A copy of the written documentation is included and the software is on the accompanying exabyte tape.
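
    Run-length encoding is effective here because sky-survey scan lines are mostly dark background. The sketch below shows the general technique under simple assumptions (pixel values as a Python list of integers); it is not the APS pipeline's actual file layout.

```python
def rle_encode(pixels):
    """Run-length encode a sequence into (value, count) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Invert rle_encode: expand (value, count) pairs back to pixels."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

# A scan line dominated by dark background compresses to a few runs.
scanline = [0, 0, 0, 0, 0, 17, 18, 0, 0, 0]
runs = rle_encode(scanline)
assert rle_decode(runs) == scanline   # lossless round trip
print(runs)                           # [(0, 5), (17, 1), (18, 1), (0, 3)]
```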

  10. Using compressed images in multimedia education

    NASA Astrophysics Data System (ADS)

    Guy, William L.; Hefner, Lance V.

    1996-04-01

    The classic radiologic teaching file consists of hundreds, if not thousands, of films of various ages, housed in paper jackets with brief descriptions written on the jackets. The development of a good teaching file has been both time consuming and voluminous. Also, any radiograph to be copied was unavailable during the reproduction interval, inconveniencing other medical professionals needing to view the images at that time. These factors hinder motivation to copy films of interest. If a busy radiologist already has an adequate example of a radiological manifestation, it is unlikely that he or she will exert the effort to make a copy of another similar image even if a better example comes along. Digitized radiographs stored on CD-ROM offer marked improvement over copied-film teaching files. Our institution has several laser digitizers which are used to rapidly scan radiographs and produce high quality digital images which can then be converted into standard microcomputer (IBM, Mac, etc.) image formats. These images can be stored on floppy disks, hard drives, rewritable optical disks, recordable CD-ROM disks, or removable cartridge media. Most hospital computer information systems include radiology reports in their database. We demonstrate that the reports for the images included in the user's teaching file can be copied and stored on the same storage media as the images. The radiographic or sonographic image and the corresponding dictated report can then be 'linked' together. The description of the finding or findings of interest on the digitized image is thus electronically tethered to the image. This obviates the need to write much additional detail concerning the radiograph, saving time. In addition, the text on the disk can be indexed so that all files with user-specified features can be instantly retrieved and combined into a single report, if desired. 
With the use of newer image compression techniques, hundreds of cases may be stored on a single CD-ROM depending on the quality of image required for the finding in question. This reduces the weight of a teaching file from that of a baby elephant to that of a single CD-ROM disc. Thus, with this method of teaching file preparation and storage the following advantages are realized: (1) Technically easier and less time consuming image reproduction. (2) Considerably less unwieldy and substantially more portable teaching files. (3) Novel ability to index files and then retrieve specific cases of choice based on descriptive text.
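
    The indexing-and-retrieval idea in point (3) is essentially an inverted index over the dictated report text. A minimal sketch, with hypothetical case IDs and report wording:

```python
from collections import defaultdict

def build_index(reports):
    """Map each word to the set of case IDs whose report text contains it."""
    index = defaultdict(set)
    for case_id, text in reports.items():
        for word in text.lower().split():
            index[word.strip(".,")].add(case_id)
    return index

def search(index, *terms):
    """Return cases whose reports contain every query term."""
    sets = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*sets) if sets else set()

# Hypothetical teaching-file reports stored beside the image files.
reports = {
    "case001": "Right lower lobe pneumonia.",
    "case002": "Left femur fracture.",
    "case003": "Right pleural effusion with pneumonia.",
}
index = build_index(reports)
print(sorted(search(index, "pneumonia")))  # ['case001', 'case003']
print(sorted(search(index, "fracture")))   # ['case002']
```

    Linking each case ID back to its image files on the same disc then yields the instant case retrieval the abstract describes.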

  11. Computer image analysis in obtaining characteristics of images: greenhouse tomatoes in the process of generating learning sets of artificial neural networks

    NASA Astrophysics Data System (ADS)

    Zaborowicz, M.; Przybył, J.; Koszela, K.; Boniecki, P.; Mueller, W.; Raba, B.; Lewicki, A.; Przybył, K.

    2014-04-01

    The aim of the project was to develop software that extracts the characteristics of a greenhouse tomato from its image. Data gathered during image analysis and processing were used to build learning sets for artificial neural networks. The program processes pictures in JPEG format, acquires statistical information about each picture, and exports it to an external file. The software is intended to batch-analyze the collected research material, with the obtained information saved as a CSV file. The program computes 33 independent parameters to describe the tested image. The application is dedicated to the processing and image analysis of greenhouse tomatoes, but it can also be used to analyze other fruits and vegetables of a spherical shape.
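
    The analyze-then-export-to-CSV workflow can be sketched with the standard library alone. The parameter names and pixel data below are hypothetical stand-ins (the paper computes 33 parameters per image); this is an illustration of the export step, not the paper's software.

```python
import csv
import io
import statistics

def describe(pixels):
    """A few per-image statistics of the kind the abstract mentions
    (hypothetical subset; the actual program computes 33 parameters)."""
    return {
        "mean": statistics.mean(pixels),
        "stdev": statistics.pstdev(pixels),
        "min": min(pixels),
        "max": max(pixels),
    }

def export_csv(rows, fieldnames):
    """Write one CSV row per analyzed image, ready for a learning set."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

# Batch analysis over hypothetical grayscale pixel samples per image.
images = {"tomato01.jpg": [120, 130, 125, 110],
          "tomato02.jpg": [90, 95, 100, 85]}
rows = [{"file": name, **describe(px)} for name, px in images.items()]
print(export_csv(rows, ["file", "mean", "stdev", "min", "max"]))
```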

  12. Do you also have problems with the file format syndrome?

    PubMed

    De Cuyper, B; Nyssen, E; Christophe, Y; Cornelis, J

    1991-11-01

    In a biomedical data processing environment, an essential requirement is the ability to integrate a large class of standard modules for the acquisition, processing and display of the (image) data. Our approach to the management and manipulation of the different data formats is based on the specification of a common standard for the representation of data formats, called 'data nature descriptions' to emphasise that this representation not only specifies the structure but also the contents of data objects (files). The idea behind this concept is to associate each hardware and software component that produces or uses medical data with a description of the data objects manipulated by that component. In our approach a special software module (a format convertor generator) takes care of the appropriate data format conversions, required when two or more components of the system exchange data.
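
    A format convertor generator of the kind described must work out which registered conversions to chain together when two components exchange data. A minimal sketch of that planning step, with hypothetical format names and a breadth-first search for the shortest chain (the paper's actual generator is not specified here):

```python
from collections import deque

# Hypothetical registry of available direct conversions (src, dst).
CONVERTERS = {
    ("acme_raw", "analyze"),
    ("analyze", "dicom"),
    ("dicom", "png"),
}

def conversion_chain(src, dst):
    """Breadth-first search for a shortest chain of registered converters."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for a, b in CONVERTERS:
            if a == path[-1] and b not in seen:
                seen.add(b)
                queue.append(path + [b])
    return None  # no conversion possible

print(conversion_chain("acme_raw", "png"))
# ['acme_raw', 'analyze', 'dicom', 'png']
```

    In the paper's terms, the data nature descriptions attached to each component would supply the source and target formats for this search.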

  13. Home teleradiology system

    NASA Astrophysics Data System (ADS)

    Komo, Darmadi; Garra, Brian S.; Freedman, Matthew T.; Mun, Seong K.

    1997-05-01

    The Home Teleradiology Server system has been developed and installed at the Department of Radiology, Georgetown University Medical Center. The main purpose of the system is to provide a service for on-call physicians to view patients' medical images at home during off-hours. This service reduces the overhead time required by on-call physicians to travel to the hospital, thereby increasing the efficiency of patient care and improving the total quality of health care. Typically, when a new case is conducted, the medical images generated by the CT, US, and/or MRI modalities are transferred to a central server at the hospital via DICOM messages over an existing hospital network. The server has a DICOM network agent that listens for DICOM messages sent by the CT, US, and MRI modalities and stores them in separate DICOM files for later sending. The server also has general-purpose, flexible scheduling software that can be configured to send image files to specific user(s) at certain times on any day(s) of the week. The server then distributes the medical images to on-call physicians' homes via high-speed modem. All file transmissions occur in the background, without human interaction, once the scheduling software has been configured. At the receiving end, the physicians' computers are high-end workstations with high-speed modems to receive the medical images sent by the central server at the hospital, and DICOM-compatible viewer software to view the transmitted medical images in DICOM format. A technician from the hospital will notify the physician(s) after all the image files have been completely sent. The physician(s) will then examine the medical images and decide whether it is necessary to travel to the hospital for further examination of the patients. 
Overall, the Home Teleradiology system provides the on-call physicians with a cost-effective and convenient environment for viewing patients' medical images at home.

  14. BOREAS RSS-8 Snow Maps Derived from Landsat TM Imagery

    NASA Technical Reports Server (NTRS)

    Hall, Dorothy; Chang, Alfred T. C.; Foster, James L.; Chien, Janeet Y. L.; Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Smith, David E. (Technical Monitor)

    2000-01-01

    The Boreal Ecosystem-Atmosphere Study (BOREAS) Remote Sensing Science (RSS)-8 team utilized Landsat Thematic Mapper (TM) images to perform mapping of snow extent over the Southern Study Area (SSA). This data set consists of two Landsat TM images that were used to determine the snow-covered pixels over the BOREAS SSA on 18 Jan 1993 and on 06 Feb 1994. The data are stored in binary image format files. The RSS-08 snow map data are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). The data files are available on a CD-ROM (see document number 20010000884).

  15. VizieR Online Data Catalog: PN and HII regions of West and East of NGC 3109 (Pena+, 2007)

    NASA Astrophysics Data System (ADS)

    Pena, M.; Richer, M. G.; Stasińska, G.

    2007-03-01

    Six files (fits format, 16MB) containing images of the West (W) and East (E) zones of NGC 3109 are presented. The images are a combination of frames obtained with the ESO Very Large Telescope and the Focal Reducer Spectrograph FORS1. All the frames were obtained on 29 November and 1 December 2005, with air masses smaller than 1.16 and seeing better than 0.7 arcsec. They constitute the pre-imaging of the ESO program ID 076.B-0166(A). Central coordinates of images are: West side (images named NGC3109W-xxxx.fits) RA=10:02:54.5, DE=-26:09:22, equinox 2000. East side (images named NGC3109E-xxx.fits) RA=10:03:19.8, DE=-26:09:32, equinox 2000. The image size is 6.8x6.8arcmin2. (3 data files).

  16. 17 CFR 232.302 - Signatures.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...) must be in typed form rather than manual format. Signatures in an HTML document that are not required may, but are not required to, be presented in an HTML graphic or image file within the electronic...

  17. Incidence of Apical Crack Initiation during Canal Preparation using Hand Stainless Steel (K-File) and Hand NiTi (Protaper) Files

    PubMed Central

    Raisingani, Deepak; Mathur, Rachit; Madan, Nidha; Visnoi, Suchita

    2016-01-01

    Aim To evaluate the incidence of apical crack initiation during canal preparation with stainless steel K-files and hand protaper files (in vitro study). Materials and methods Sixty extracted mandibular premolar teeth were randomly selected and embedded in acrylic tubes filled with autopolymerizing resin. A baseline image of the apical surface of each specimen was recorded under a digital microscope (80×). The cervical and middle thirds of all samples were flared with #2 and #1 Gates-Glidden (GG) drills, and a second image was recorded. The teeth were randomly divided into four groups of 15 teeth each according to the file type (hand K-file and hand protaper) and working length (WL) (instrumented at WL and 1 mm less than WL). A final image after dye penetration and a photomicrograph of the apical root surface were digitally recorded. Results Maximum numbers of cracks were observed with hand protaper files compared with hand K-files at the WL and 1 mm short of WL. Chi-square testing revealed a highly significant effect of WL on crack formation at WL and 1 mm short of WL (p = 0.000). Conclusion Minimum numbers of cracks at WL and 1 mm short of WL were observed with hand K-files and maximum with hand protaper files. How to cite this article Soni D, Raisingani D, Mathur R, Madan N, Visnoi S. Incidence of Apical Crack Initiation during Canal Preparation using Hand Stainless Steel (K-File) and Hand NiTi (Protaper) Files. Int J Clin Pediatr Dent 2016;9(4):303-307. PMID:28127160

  18. Bringing the Digital Camera to the Physics Lab

    ERIC Educational Resources Information Center

    Rossi, M.; Gratton, L. M.; Oss, S.

    2013-01-01

    We discuss how compressed images created by modern digital cameras can lead to even severe problems in the quantitative analysis of experiments based on such images. Difficulties result from the nonlinear treatment of lighting intensity values stored in compressed files. To overcome such troubles, one has to adopt noncompressed, native formats, as…

  19. BOREAS TE-17 Production Efficiency Model Images

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G.; Papagno, Andrea (Editor); Goetz, Scott J.; Goward, Samual N.; Prince, Stephen D.; Czajkowski, Kevin; Dubayah, Ralph O.

    2000-01-01

    A Boreal Ecosystem-Atmospheric Study (BOREAS) version of the Global Production Efficiency Model (http://www.inform.umd.edu/glopem/) was developed by TE-17 (Terrestrial Ecology) to generate maps of gross and net primary production, autotrophic respiration, and light use efficiency for the BOREAS region. This document provides basic information on the model and how the maps were generated. The data generated by the model are stored in binary image-format files. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).

  20. Cloud Engineering Principles and Technology Enablers for Medical Image Processing-as-a-Service.

    PubMed

    Bao, Shunxing; Plassard, Andrew J; Landman, Bennett A; Gokhale, Aniruddha

    2017-04-01

    Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches since they can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. To that end, a cloud-based "medical image processing-as-a-service" offers promise in utilizing the ecosystem of Apache Hadoop, which is a flexible framework providing distributed, scalable, fault tolerant storage and parallel computational modules, and HBase, which is a NoSQL database built atop Hadoop's distributed file system. Despite this promise, HBase's load distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). This paper makes two contributions to address these concerns by describing key cloud engineering principles and technology enhancements we made to the Apache Hadoop ecosystem for medical imaging applications. First, we propose a row-key design for HBase, which is a necessary step that is driven by the hierarchical organization of imaging data. Second, we propose a novel data allocation policy within HBase to strongly enforce collocation of hierarchically related imaging data. The proposed enhancements accelerate data processing by minimizing network usage and localizing processing to machines where the data already exist. Moreover, our approach is amenable to the traditional scan, subject, and project-level analysis procedures, and is compatible with standard command line/scriptable image processing software. 
Experimental results for an illustrative sample of imaging data reveal that our new HBase policy yields a three-fold improvement in the time to convert classic DICOM to NIfTI file formats when compared with the default HBase region split policy, and a nearly six-fold improvement over a commonly available network file system (NFS) approach, even for relatively small file sets. Moreover, file access latency is lower than with network-attached storage.
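
    The row-key design the paper proposes works because HBase sorts and partitions rows lexicographically by key, so a key that concatenates the hierarchy levels keeps related data collocated. A minimal sketch, with hypothetical field names and widths (the paper's exact key layout is not reproduced here):

```python
def row_key(project, subject, session, scan, slice_no):
    """Compose an HBase-style row key whose lexicographic order follows the
    imaging hierarchy (project > subject > session > scan > slice), so
    related rows land in the same region. The fixed-width, zero-padded
    slice field is an assumption of this sketch."""
    return "%s|%s|%s|%s|%05d" % (project, subject, session, scan, slice_no)

keys = [
    row_key("projA", "subj01", "sess1", "scanT1", 2),
    row_key("projA", "subj01", "sess1", "scanT1", 10),
    row_key("projA", "subj02", "sess1", "scanT1", 1),
]
# Lexicographic sort keeps all of subj01's slices contiguous, and the zero
# padding keeps slice 2 before slice 10.
assert sorted(keys) == keys
print(keys[0])  # projA|subj01|sess1|scanT1|00002
```

    Without the padding, "10" would sort before "2" and slices of one scan would interleave, defeating the collocation the paper's allocation policy enforces.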

  1. IMAGEP - A FORTRAN ALGORITHM FOR DIGITAL IMAGE PROCESSING

    NASA Technical Reports Server (NTRS)

    Roth, D. J.

    1994-01-01

    IMAGEP is a FORTRAN computer algorithm containing various image processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines; within these are further routines, also selected via the keyboard. Some of the functions performed by IMAGEP include digitization, storage, and retrieval of images; image enhancement by contrast expansion, addition and subtraction, magnification, inversion, and bit shifting; display and movement of a cursor; display of the grey-level histogram of an image; and display of the variation of grey-level intensity as a function of image position. This algorithm has possible scientific, industrial, and biomedical applications in material flaw studies, steel and ore analysis, and pathology, respectively. IMAGEP is written in VAX FORTRAN for DEC VAX series computers running VMS. The program requires the use of a Grinnell 274 image processor, which can be obtained from Mark McCloud Associates, Campbell, CA. An object library of the required GMR series software is included on the distribution media. IMAGEP requires 1 MB of RAM for execution. The standard distribution medium for this program is a 1600-BPI 9-track magnetic tape in VAX FILES-11 format. It is also available on a TK50 tape cartridge in VAX FILES-11 format. This program was developed in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation.
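
    Of the enhancement functions listed, contrast expansion is the simplest to illustrate: the occupied grey-level range is linearly stretched to the full output range. A sketch of the general technique (in Python rather than the original FORTRAN, and not IMAGEP's actual code):

```python
def contrast_expand(pixels, out_min=0, out_max=255):
    """Linearly stretch pixel values so the darkest input maps to out_min
    and the brightest to out_max, expanding a low-contrast image."""
    lo, hi = min(pixels), max(pixels)
    if lo == hi:
        return [out_min] * len(pixels)  # flat image: nothing to stretch
    scale = (out_max - out_min) / (hi - lo)
    return [round(out_min + (p - lo) * scale) for p in pixels]

# A low-contrast row of grey levels expanded to the full 0..255 range.
print(contrast_expand([100, 110, 120, 130]))  # [0, 85, 170, 255]
```

    Inversion and bit shifting, also listed in the abstract, are similarly per-pixel operations (255 - p, and p >> n respectively for 8-bit data).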

  2. Challenges for data storage in medical imaging research.

    PubMed

    Langer, Steve G

    2011-04-01

    Researchers in medical imaging have multiple challenges for storing, indexing, maintaining viability, and sharing their data. Addressing all these concerns requires a constellation of tools, but not all of them need to be local to the site. In particular, the data storage challenges faced by researchers can begin to require professional information technology skills. With limited human resources and funds, the medical imaging researcher may be better served with an outsourcing strategy for some management aspects. This paper outlines an approach to manage the main objectives faced by medical imaging scientists whose work includes processing and data mining on non-standard file formats, and relating those files to their DICOM-standard descendants. The capacity of the approach scales as the researcher's need grows by leveraging the on-demand provisioning ability of cloud computing.

  3. Automated DICOM metadata and volumetric anatomical information extraction for radiation dosimetry

    NASA Astrophysics Data System (ADS)

    Papamichail, D.; Ploussi, A.; Kordolaimi, S.; Karavasilis, E.; Papadimitroulas, P.; Syrgiamiotis, V.; Efstathopoulos, E.

    2015-09-01

    Patient-specific dosimetry calculations based on simulation techniques have as a prerequisite the modeling of the modality system and the creation of voxelized phantoms. This procedure requires knowledge of the scanning parameters and patient information included in a DICOM file, as well as image segmentation. However, the extraction of this information is complicated and time-consuming. The objective of this study was to develop a simple graphical user interface (GUI) to (i) automatically extract metadata from every slice image of a DICOM file in a single query and (ii) interactively specify the regions of interest (ROI) without explicit access to the radiology information system. The user-friendly application was developed in the Matlab environment. The user can select a series of DICOM files and manage their text and graphical data. The metadata are automatically formatted and presented to the user as a Microsoft Excel file. The volumetric maps are formed by interactively specifying the ROIs and by assigning a specific value to every ROI. The result is stored in DICOM format for data and trend analysis. The developed GUI is easy, fast, and constitutes a very useful tool for individualized dosimetry. One of the future goals is to incorporate remote access to a PACS server.

  4. BOREAS RSS-7 Landsat TM LAI Images of the SSA and NSA

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Chen, Jing; Cihlar, Josef

    2000-01-01

    The BOReal Ecosystem-Atmosphere Study Remote Sensing Science (BOREAS RSS-7) team used Landsat Thematic Mapper (TM) images processed at CCRS to produce images of Leaf Area Index (LAI) for the BOREAS study areas. Two images acquired on 06-Jun and 09-Aug-1991 were used for the SSA, and one image acquired on 09-Jun-1994 was used for the NSA. The LAI images are based on ground measurements and Landsat TM Reduced Simple Ratio (RSR) images. The data are stored in binary image-format files.

  5. Web-based document and content management with off-the-shelf software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schuster, J

    1999-03-18

    This, then, is the current status of the project: since we made the switch to Intradoc, we are now treating the project as a document and image management system. In reality, it could be considered a document and content management system, since we can manage almost any file input to the system, such as video or audio. At present, however, we are concentrating on images. As mentioned above, my CRADA funding was only targeted at including thumbnails of images in Intradoc. We still had to modify Intradoc so that it would compress images submitted to the system. All processing of files submitted to Intradoc is handled in what is called the Document Refinery. Even though MrSID created thumbnails in the process of compressing an image, work needed to be done to build this capability into the Document Refinery. Therefore we made the decision to contract the Intradoc Engineering Team to perform this custom development work. To make Intradoc even more capable of handling images, we have also contracted for customization of the Document Refinery to accept Adobe Photoshop and Illustrator files in their native formats.

  6. BOREAS RSS-14 Level-1 GOES-8 Visible, IR and Water Vapor Images

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Faysash, David; Cooper, Harry J.; Smith, Eric A.; Newcomer, Jeffrey A.

    2000-01-01

    The BOREAS RSS-14 team collected and processed several GOES-7 and GOES-8 image data sets that covered the BOREAS study region. The level-1 BOREAS GOES-8 images are raw data values collected by RSS-14 personnel at FSU and delivered to BORIS. The data cover 14-Jul-1995 to 21-Sep-1995 and 01-Jan-1996 to 03-Oct-1996. The data start out containing three 8-bit spectral bands and end up containing five 10-bit spectral bands. No major problems with the data have been identified. The data are contained in binary image format files. Due to the large size of the images, the level-1 GOES-8 data are not contained on the BOREAS CD-ROM set. An inventory listing file is supplied on the CD-ROM to inform users of what data were collected. The level-1 GOES-8 image data are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). See sections 15 and 16 for more information. The data files are available on a CD-ROM (see document number 20010000884).

  7. Bringing the Digital Camera to the Physics Lab

    NASA Astrophysics Data System (ADS)

    Rossi, M.; Gratton, L. M.; Oss, S.

    2013-03-01

    We discuss how compressed images created by modern digital cameras can lead to even severe problems in the quantitative analysis of experiments based on such images. Difficulties result from the nonlinear treatment of lighting intensity values stored in compressed files. To overcome such troubles, one has to adopt noncompressed, native formats, as we examine in this work.

  8. Metadata requirements for results of diagnostic imaging procedures: a BIIF profile to support user applications

    NASA Astrophysics Data System (ADS)

    Brown, Nicholas J.; Lloyd, David S.; Reynolds, Melvin I.; Plummer, David L.

    2002-05-01

    A visible digital image is rendered from a set of digital image data. Medical digital image data can be stored in either: (a) pre-rendered format, corresponding to a photographic print, or (b) un-rendered format, corresponding to a photographic negative. The appropriate image data storage format and associated header data (metadata) required by a user of the results of a diagnostic procedure recorded electronically depend on the task(s) to be performed. The DICOM standard provides a rich set of metadata that supports the needs of complex applications. Many end-user applications, such as simple report text viewing and display of a selected image, are not so demanding, and generic image formats such as JPEG are sometimes used. However, these lack some basic identification requirements. In this paper we make specific proposals for minimal extensions to generic image metadata, of value in various domains, which enable safe use in two simple healthcare end-user scenarios: (a) viewing of text and a selected JPEG image activated by a hyperlink, and (b) viewing of one or more JPEG images together with superimposed text and graphics annotation using a file specified by a profile of the ISO/IEC Basic Image Interchange Format (BIIF).

  9. Portable document format file showing the surface models of cadaver whole body.

    PubMed

    Shin, Dong Sun; Chung, Min Suk; Park, Jin Seo; Park, Hyung Seon; Lee, Sangho; Moon, Young Lae; Jang, Hae Gwon

    2012-08-01

    In the Visible Korean project, 642 three-dimensional (3D) surface models were built from the sectioned images of a male cadaver. It was recently found that the popular PDF format enables users to access the numerous surface models conveniently in Adobe Reader. The purpose of this study was to present a PDF file containing systematized surface models of the human body as beneficial content. To achieve this, suitable software packages were employed at each step of the procedure. Two-dimensional (2D) surface models, including the original sectioned images, were embedded into the 3D surface models. The surface models were categorized into systems and then groups. The adjusted surface models were inserted into a PDF file, to which relevant multimedia data were added. The finalized PDF file, containing comprehensive data of a whole body, can be explored in various ways. The PDF file, downloadable freely from the homepage (http://anatomy.co.kr), is expected to serve as a satisfactory self-learning tool for anatomy. Raw data of the surface models can be extracted from the PDF file and employed in various simulations for clinical practice. The technique for organizing the surface models will be applied to the manufacture of other PDF files containing various multimedia content.

  10. BioVEC: a program for biomolecule visualization with ellipsoidal coarse-graining.

    PubMed

    Abrahamsson, Erik; Plotkin, Steven S

    2009-09-01

    Biomolecule Visualization with Ellipsoidal Coarse-graining (BioVEC) is a tool for visualizing molecular dynamics simulation data while allowing coarse-grained residues to be rendered as ellipsoids. BioVEC reads in configuration files, which may be output from molecular dynamics simulations that include orientation output in either quaternion or ANISOU format, and can render frames of the trajectory in several common image formats for subsequent concatenation into a movie file. The BioVEC program is written in C++, uses the OpenGL API for rendering, and is open source. It is lightweight, allows user-defined display and texture settings, and runs on either Windows or Linux platforms.

  11. BOREAS Regional Soils Data in Raster Format and AEAC Projection

    NASA Technical Reports Server (NTRS)

    Monette, Bryan; Knapp, David; Hall, Forrest G. (Editor); Nickeson, Jaime (Editor)

    2000-01-01

    This data set was gridded by BOREAS Information System (BORIS) staff from a vector data set received from the Canadian Soil Information System (CanSIS). The original data came in two parts that covered Saskatchewan and Manitoba. The data were gridded and merged into one data set of 84 files covering the BOREAS region. The data were gridded into the AEAC projection. Because the mapping of the two provinces was done separately in the original vector data, there may be discontinuities in some of the soil layers because of different interpretations of certain soil properties. The data are stored in binary image format files.

  12. BOREAS TE-18 Landsat TM Maximum Likelihood Classification Image of the NSA

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Knapp, David

    2000-01-01

    The BOREAS TE-18 team focused its efforts on using remotely sensed data to characterize the successional and disturbance dynamics of the boreal forest for use in carbon modeling. The objective of this classification is to provide the BOREAS investigators with a data product that characterizes the land cover of the NSA. A Landsat-5 TM image from 20-Aug-1988 was used to derive this classification. A standard supervised maximum likelihood classification approach was used to produce this classification. The data are provided in a binary image format file. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).

  13. Simplified generation of biomedical 3D surface model data for embedding into 3D portable document format (PDF) files for publication and education.

    PubMed

    Newe, Axel; Ganslandt, Thomas

    2013-01-01

    The usefulness of the 3D Portable Document Format (PDF) for clinical, educational, and research purposes has recently been shown. However, the lack of a simple tool for converting biomedical data into the model data in the necessary Universal 3D (U3D) file format is a drawback for the broad acceptance of this new technology. A new module for the image processing and rapid prototyping framework MeVisLab not only provides a platform-independent way to create surface meshes from biomedical/DICOM and other data and to export them to U3D; it also lets the user add metadata to these meshes to predefine colors and names that can be processed by PDF authoring software while generating 3D PDF files. Furthermore, the source code of the module is available and well documented, so it can easily be modified for one's own purposes.

  14. Archive of Digital Chirp Subbottom Profile Data Collected During USGS Cruise 14BIM05 Offshore of Breton Island, Louisiana, August 2014

    USGS Publications Warehouse

    Forde, Arnell S.; Flocks, James G.; Wiese, Dana S.; Fredericks, Jake J.

    2016-03-29

    The archived trace data are in standard SEG Y rev. 0 format (Barry and others, 1975); the first 3,200 bytes of the card image header are in American Standard Code for Information Interchange (ASCII) format instead of Extended Binary Coded Decimal Interchange Code (EBCDIC) format. The SEG Y files are available on the DVD version of this report or online, downloadable via the USGS Coastal and Marine Geoscience Data System (http://cmgds.marine.usgs.gov). The data are also available for viewing using GeoMapApp (http://www.geomapapp.org) and Virtual Ocean (http://www.virtualocean.org) multi-platform open source software. The Web version of this archive does not contain the SEG Y trace files. To obtain the complete DVD archive, contact USGS Information Services at 1-888-ASK-USGS or infoservices@usgs.gov. The SEG Y files may be downloaded and processed with commercial or public domain software such as Seismic Unix (SU) (Cohen and Stockwell, 2010). See the How To Download SEG Y Data page for download instructions. The printable profiles are provided as Graphics Interchange Format (GIF) images processed and gained using SU software and can be viewed from the Profiles page or by using the links located on the trackline maps; refer to the Software page for links to example SU processing scripts.
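
    The encoding detail above matters to anyone reading these files: SEG Y rev. 0 card-image headers are traditionally EBCDIC, but this archive stores them as ASCII. A hedged sketch (a heuristic, not part of the USGS tooling) of decoding the 3,200-byte textual header either way, exploiting the fact that card images start each 80-byte line with the letter "C":

    ```python
    def decode_segy_text_header(raw: bytes) -> str:
        """Decode a 3,200-byte SEG Y card-image header.

        Heuristic: the leading 'C' of the first card is 0x43 in ASCII
        but 0xC3 in EBCDIC (code page 037), so the first byte tells the
        two encodings apart.
        """
        if raw[:1] == b"C":                               # ASCII 'C'
            return raw.decode("ascii", errors="replace")
        return raw.decode("cp037", errors="replace")      # EBCDIC

    # Synthetic headers in both encodings for illustration.
    ascii_hdr = b"C 1 CLIENT USGS".ljust(3200, b" ")
    ebcdic_hdr = "C 1 CLIENT USGS".ljust(3200).encode("cp037")
    ```

    Python's standard codec `cp037` covers the common EBCDIC code page, so no third-party library is needed.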

  15. Clementine High Resolution Camera Mosaicking Project. Volume 14; CL 6014; 0 deg N to 80 deg N Latitude, 270 deg E to 300 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U.S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap that approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 80 degrees South to 80 degrees North; -80 to +80) within the longitude range 0-30 deg E. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. 
For more information on the contents and organization of the CD volume set refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".
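
    As the record notes, each tile is a PDS labeled file: a plain-text label of KEYWORD = VALUE assignments followed by the image object. A minimal sketch of reading the flat assignments from such a label (the sample values are illustrative; real PDS labels also use OBJECT groups, units, and multi-line values):

    ```python
    def parse_pds_label(text: str) -> dict:
        """Parse flat KEYWORD = VALUE pairs from a PDS label, stopping at END."""
        label = {}
        for line in text.splitlines():
            line = line.strip()
            if line == "END":
                break
            if "=" in line:
                key, _, value = line.partition("=")
                label[key.strip()] = value.strip().strip('"')
        return label

    sample = """\
    PDS_VERSION_ID = PDS3
    RECORD_TYPE = FIXED_LENGTH
    LINES = 1752
    LINE_SAMPLES = 1536
    END
    """
    info = parse_pds_label(sample)
    ```

    The LINES and LINE_SAMPLES keywords recovered this way are what a reader needs to interpret the binary image object that follows the label.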

  16. Clementine High Resolution Camera Mosaicking Project. Volume 17; CL 6017; 0 deg to 80 deg S Latitude, 330 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U.S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap that approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 80 degrees South to 80 degrees North; -80 to +80) within the longitude range 0-30 deg E. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. 
For more information on the contents and organization of the CD volume set refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  17. Clementine High Resolution Camera Mosaicking Project. Volume 15; CL 6015; 0 deg S to 80 deg S Latitude, 270 deg E to 300 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U. S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap that approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 80 degrees South to 80 degrees North; -80 to +80) within the longitude range 0-30 deg E. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. 
For more information on the contents and organization of the CD volume set refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  18. Clementine High Resolution Camera Mosaicking Project. Volume 13; CL 6013; 0 deg S to 80 deg S Latitude, 240 deg to 270 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U.S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap that approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 80 degrees South to 80 degrees North; -80 to +80) within the longitude range 0-30 deg E. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. 
For more information on the contents and organization of the CD volume set refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  19. Clementine High Resolution Camera Mosaicking Project. Volume 18; CL 6018; 80 deg N to 80 deg S Latitude, 330 deg E to 360 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U. S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap that approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 80 degrees South to 80 degrees North; -80 to +80) within the longitude range 0-30 deg E. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. 
For more information on the contents and organization of the CD volume set refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  20. Clementine High Resolution Camera Mosaicking Project. Volume 12; CL 6012; 0 deg N to 80 deg N Latitude, 240 deg to 270 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U.S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap that approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 80 degrees South to 80 degrees North; -80 to +80) within the longitude range 0-30 deg E. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. 
For more information on the contents and organization of the CD volume set refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  1. Clementine High Resolution Camera Mosaicking Project. Volume 10; CL 6010; 0 deg N to 80 deg N Latitude, 210 deg E to 240 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U.S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap that approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 80 degrees South to 80 degrees North; -80 to +80) within the longitude range 0-30 deg E. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. 
For more information on the contents and organization of the CD volume set refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  2. Clementine High Resolution Camera Mosaicking Project. Volume 16; CL 6016; 0 deg N to 80 deg N Latitude, 300 deg E to 330 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U.S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap that approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 80 degrees South to 80 degrees North; -80 to +80) within the longitude range 0-30 deg E. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. 
For more information on the contents and organization of the CD volume set refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  3. Image processing techniques for digital orthophotoquad production

    USGS Publications Warehouse

    Hood, Joy J.; Ladner, L. J.; Champion, Richard A.

    1989-01-01

    Orthophotographs have long been recognized for their value as supplements or alternatives to standard maps. Recent trends towards digital cartography have resulted in efforts by the US Geological Survey to develop a digital orthophotoquad production system. Digital image files were created by scanning color infrared photographs on a microdensitometer. Rectification techniques were applied to remove tilt and relief displacement, thereby creating digital orthophotos. Image mosaicking software was then used to join the rectified images, producing digital orthophotos in quadrangle format.

  4. BOREAS Level-1B TIMS Imagery: At-sensor Radiance in BSQ Format

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Strub, Richard; Newcomer, Jeffrey A.; Chernobieff, Sonia

    2000-01-01

    The Boreal Ecosystem-Atmospheric Study (BOREAS) Staff Science Aircraft Data Acquisition Program focused on providing the research teams with the remotely sensed satellite data products they needed to compare and spatially extend point results. For BOREAS, the Thermal Infrared Multispectral Scanner (TIMS) imagery, along with other aircraft images, was collected to provide spatially extensive information over the primary study areas. The Level-1b TIMS images cover the time periods of 16 to 20 Apr 1994 and 06 to 17 Sep 1994. The system calibrated images are stored in binary image format files. The TIMS images are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).

  5. BOREAS Forest Cover Data Layers of the NSA in Raster Format

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Knapp, David; Tuinhoff, Manning

    2000-01-01

    This data set was processed by BORIS staff from the original vector data of species, crown closure, cutting class, and site classification/subtype into raster files. The original polygon data were received from Linnet Graphics, the distributor of data for MNR. In the case of the species layer, the percentages of species composition were removed. This reduced the amount of information contained in the species layer of the gridded product, but it was necessary in order to make the gridded product easier to use. The original maps were produced from 1:15,840-scale aerial photography collected in 1988 over an area of the BOREAS NSA MSA. The data are stored in binary image format files and they are available from Oak Ridge National Laboratory. The data files are available on a CD-ROM (see document number 20010000884).

  6. BOREAS TE-18 Biomass Density Image of the SSA

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Knapp, David

    2000-01-01

    The BOREAS TE-18 team focused its efforts on using remotely sensed data to characterize the successional and disturbance dynamics of the boreal forest for use in carbon modeling. This biomass density image covers almost the entire BOREAS SSA. The pixels for which biomass density is computed include only areas in conifer land cover classes. The biomass density values represent the amount of overstory biomass (i.e., tree biomass only) per unit area. It is derived from a Landsat-5 TM image collected on 02-Sep-1994. The technique that was used to create this image is very similar to the technique that was used to create the physical classification of the SSA. The data are provided in a binary image file format. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).
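
    Several BOREAS records above describe "binary image format files": flat, headerless rasters whose dimensions and pixel type come from the companion documentation. A hedged sketch of reading such a file into rows (the 2 x 3 toy dimensions and 8-bit pixel type here are illustrative only, not the actual BOREAS layout):

    ```python
    import struct

    def read_flat_raster(raw: bytes, lines: int, samples: int):
        """Unpack a headerless raster of 8-bit pixels into a list of row tuples.

        Flat binary images carry no metadata, so the caller must supply
        the line/sample dimensions from the data set's documentation.
        """
        assert len(raw) == lines * samples, "size must match the documented dimensions"
        pixels = struct.unpack(f"{lines * samples}B", raw)
        return [pixels[i * samples:(i + 1) * samples] for i in range(lines)]

    raw = bytes(range(6))              # a 2-line x 3-sample toy image
    rows = read_flat_raster(raw, 2, 3)
    ```

    For 16-bit or floating-point products the format character and byte order in the `struct` format string would change accordingly.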

  7. BOREAS Level-1B MAS Imagery At-sensor Radiance, Relative X and Y Coordinates

    NASA Technical Reports Server (NTRS)

    Strub, Richard; Newcomer, Jeffrey A.; Ungar, Stephen

    2000-01-01

    For the BOReal Ecosystem-Atmosphere Study (BOREAS), the MODIS Airborne Simulator (MAS) images, along with the other remotely sensed data, were collected to provide spatially extensive information over the primary study areas. This information includes detailed land cover and biophysical parameter maps such as fraction of Photosynthetically Active Radiation (fPAR) and Leaf Area Index (LAI). Collection of the MAS images occurred over the study areas during the 1994 field campaigns. The level-1b MAS data cover the dates of 21-Jul-1994, 24-Jul-1994, 04-Aug-1994, and 08-Aug-1994. The data are not geographically/geometrically corrected; however, files of relative X and Y coordinates for each image pixel were derived by using the C-130 INS data in a MAS scan model. The data are provided in binary image format files.

  8. KEGGParser: parsing and editing KEGG pathway maps in Matlab.

    PubMed

    Arakelyan, Arsen; Nersisyan, Lilit

    2013-02-15

    The KEGG pathway database is a collection of manually drawn pathway maps accompanied by KGML format files intended for use in automatic analysis. KGML files, however, do not contain the information required to completely reproduce all the events indicated in the static image of a pathway map. Several parsers and editors of KEGG pathways exist for processing KGML files. We introduce KEGGParser, a MATLAB-based tool for KEGG pathway parsing, semiautomatic fixing, editing, visualization, and analysis in the MATLAB environment. It also works with Scilab. The source code is available at http://www.mathworks.com/matlabcentral/fileexchange/37561.
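
    KGML is ordinary XML, which is why parsers like KEGGParser can extract the machine-readable part of a pathway map. A minimal sketch in Python (the fragment's structure follows the KGML schema's `entry` and `relation` elements; the identifiers are toy data, and this is not KEGGParser's own code):

    ```python
    import xml.etree.ElementTree as ET

    # A minimal KGML fragment: two gene entries and one protein-protein relation.
    kgml = """<pathway name="path:hsa04210" title="Apoptosis">
      <entry id="1" name="hsa:836" type="gene"/>
      <entry id="2" name="hsa:842" type="gene"/>
      <relation entry1="1" entry2="2" type="PPrel"/>
    </pathway>"""

    root = ET.fromstring(kgml)
    genes = [e.get("name") for e in root.findall("entry") if e.get("type") == "gene"]
    relations = [(r.get("entry1"), r.get("entry2")) for r in root.findall("relation")]
    ```

    What KGML leaves out, as the abstract notes, is everything that exists only in the drawn image, which is why semiautomatic fixing against the static map is needed.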

  9. Development of a web-based DICOM-SR viewer for CAD data of multiple sclerosis lesions in an imaging informatics-based efolder

    NASA Astrophysics Data System (ADS)

    Ma, Kevin; Wong, Jonathan; Zhong, Mark; Zhang, Jeff; Liu, Brent

    2014-03-01

    In the past, we have presented an imaging-informatics based eFolder system for managing and analyzing imaging and lesion data of multiple sclerosis (MS) patients, which allows for data storage, data analysis, and data mining in clinical and research settings. The system integrates the patient's clinical data with imaging studies and a computer-aided detection (CAD) algorithm for quantifying MS lesion volume, lesion contour, locations, and sizes in brain MRI studies. For compliance with IHE integration protocols, long-term storage in PACS, and data query and display in a DICOM compliant clinical setting, CAD results need to be converted into DICOM-Structured Report (SR) format. Open-source dcmtk and customized XML templates are used to convert quantitative MS CAD results from MATLAB to DICOM-SR format. A web-based GUI based on our existing Web Access to DICOM Objects (WADO) image viewer has been designed to display the CAD results from generated SR files. The GUI is able to parse DICOM-SR files and extract SR document data, then display lesion volume, location, and brain matter volume along with the referenced DICOM imaging study. In addition, the GUI supports lesion contour overlay, which matches a detected MS lesion with its corresponding DICOM-SR data when a user selects either the lesion or the data. The methodology of converting CAD data in native MATLAB format to DICOM-SR and displaying the tabulated DICOM-SR along with the patient's clinical information, and relevant study images in the GUI will be demonstrated. The developed SR conversion model and GUI support aim to further demonstrate how to incorporate CAD post-processing components in a PACS and imaging informatics-based environment.
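
    The conversion pipeline above fills customized XML templates that dcmtk then turns into DICOM-SR objects. A hedged sketch of the template-filling half only (the element and attribute names are hypothetical placeholders, not the paper's actual templates; the subsequent XML-to-SR step would be done by a dcmtk tool):

    ```python
    import xml.etree.ElementTree as ET

    def cad_results_to_xml(results: dict) -> bytes:
        """Serialize CAD measurements as an XML fragment.

        `results` maps a measurement name to a (value, unit) pair;
        element names here are illustrative only.
        """
        root = ET.Element("report")
        for name, (value, unit) in results.items():
            m = ET.SubElement(root, "measurement", name=name, unit=unit)
            m.text = str(value)
        return ET.tostring(root)

    xml = cad_results_to_xml({"lesion_volume": (12.7, "ml"),
                              "lesion_count": (5, "1")})
    ```

    Keeping the quantitative results in an intermediate XML form is what lets the same data feed both the SR generator and the web GUI's tabular display.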

  10. Legato: Personal Computer Software for Analyzing Pressure-Sensitive Paint Data

    NASA Technical Reports Server (NTRS)

    Schairer, Edward T.

    2001-01-01

    'Legato' is personal computer software for analyzing radiometric pressure-sensitive paint (PSP) data. The software is written in the C programming language and executes under Windows 95/98/NT operating systems. It includes all operations normally required to convert pressure-paint image intensities to normalized pressure distributions mapped to physical coordinates of the test article. The program can analyze data from both single- and bi-luminophore paints and provides for both in situ and a priori paint calibration. In addition, there are functions for determining paint calibration coefficients from calibration-chamber data. The software is designed as a self-contained, interactive research tool that requires as input only the bare minimum of information needed to accomplish each function, e.g., images, model geometry, and paint calibration coefficients (for a priori calibration) or pressure-tap data (for in situ calibration). The program includes functions that can be used to generate needed model geometry files for simple model geometries (e.g., airfoils, trapezoidal wings, rotor blades) based on the model planform and airfoil section. All data files except images are in ASCII format and thus are easily created, read, and edited. The program does not use database files. This simplifies setup but makes the program inappropriate for analyzing massive amounts of data from production wind tunnels. Program output consists of Cartesian plots, false-colored real and virtual images, pressure distributions mapped to the surface of the model, assorted ASCII data files, and a text file of tabulated results. Graphical output is displayed on the computer screen and can be saved as publication-quality (PostScript) files.
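
    Legato's in situ calibration fits paint intensity ratios against pressure-tap data. Radiometric PSP commonly uses the Stern-Volmer form P/Pref = A + B * (Iref/I); a hedged sketch of recovering A and B by ordinary least squares (an illustration of that standard relation, not necessarily Legato's exact algorithm):

    ```python
    def fit_stern_volmer(intensity_ratios, pressure_ratios):
        """Least-squares fit of P/Pref = A + B * (Iref/I) from tap data."""
        n = len(intensity_ratios)
        sx = sum(intensity_ratios)
        sy = sum(pressure_ratios)
        sxx = sum(x * x for x in intensity_ratios)
        sxy = sum(x * y for x, y in zip(intensity_ratios, pressure_ratios))
        b = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope B
        a = (sy - b * sx) / n                           # intercept A
        return a, b

    # Synthetic taps lying exactly on P/Pref = 0.2 + 0.8 * (Iref/I)
    x = [0.9, 1.0, 1.1, 1.2]
    y = [0.2 + 0.8 * xi for xi in x]
    a, b = fit_stern_volmer(x, y)
    ```

    With A and B in hand, the same relation applied pixel-by-pixel converts the ratioed paint images to the normalized pressure distributions the abstract describes.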

  11. Interpretation of Endodontic File Length Adjustments Using Radiovisiography

    DTIC Science & Technology

    1993-01-01

    periapical tissues would cause apical granulomas, and sometimes epithelial proliferation leading to cyst formation. They believed that better results...RVG) images. Comparisons were made between RVG images and conventional periapical radiographs. Maxillary and mandibular human cadaver sections with a...Biologic aspects of endodontics IV. Periapical tissue reactions to root-filled teeth whose canals had been instrumented short of their apices. Oral

  12. An open library of CT patient projection data

    NASA Astrophysics Data System (ADS)

    Chen, Baiyu; Leng, Shuai; Yu, Lifeng; Holmes, David; Fletcher, Joel; McCollough, Cynthia

    2016-03-01

    Lack of access to projection data from patient CT scans is a major limitation for development and validation of new reconstruction algorithms. To meet this critical need, we are building a library of CT patient projection data in an open and vendor-neutral format, DICOM-CT-PD, which is an extended DICOM format that contains sinogram data, acquisition geometry, patient information, and pathology identification. The library consists of scans of various types, including head scans, chest scans, abdomen scans, electrocardiogram (ECG)-gated scans, and dual-energy scans. For each scan, three types of data are provided, including DICOM-CT-PD projection data at various dose levels, reconstructed CT images, and a free-form text file. Several instructional documents are provided to help the users extract information from DICOM-CT-PD files, including a dictionary file for the DICOM-CT-PD format, a DICOM-CT-PD reader, and a user manual. Radiologist detection performance based on the reconstructed CT images is also provided. So far 328 head cases, 228 chest cases, and 228 abdomen cases have been collected for potential inclusion. The final library will include a selection of 50 head, chest, and abdomen scans each from at least two different manufacturers, and a few ECG-gated scans and dual-source, dual-energy scans. It will be freely available to academic researchers, and is expected to greatly facilitate the development and validation of CT reconstruction algorithms.

  13. High-performance web viewer for cardiac images

    NASA Astrophysics Data System (ADS)

    dos Santos, Marcelo; Furuie, Sergio S.

    2004-04-01

    With the advent of digital devices for medical diagnosis, the use of regular film in radiology has decreased, and the management and handling of medical images in digital format has become an important and critical task. In Cardiology, for example, the main difficulty is displaying dynamic images with the appropriate color palette and frame rate used in the acquisition process by Cath, Angio, and Echo systems. Another difficulty is handling large images in memory on any existing personal computer, including thin clients. In this work we present a web-based application that carries out these tasks with robustness and excellent performance, without burdening the server or network. This application provides near-diagnostic-quality display of cardiac images stored as DICOM 3.0 files via a web browser and provides a set of resources for viewing both still and dynamic images. It can access image files from local disks or over a network connection. Its features include real-time playback, dynamic thumbnail viewing during loading, access to patient database information, image-processing tools, linear and angular measurements, on-screen annotations, image printing, and export of DICOM images to other image formats, among others, all presented through a pleasant, user-friendly interface inside a Web browser by means of a Java application. This approach offers several advantages over most medical image viewers: ease of installation, integration with other systems through public, standardized interfaces, platform independence, and efficient manipulation and display of medical images, all with high performance.

  14. BOREAS Level-3s Landsat TM Imagery Scaled At-sensor Radiance in LGSOWG Format

    NASA Technical Reports Server (NTRS)

    Nickeson, Jaime; Knapp, David; Newcomer, Jeffrey A.; Cihlar, Josef; Hall, Forrest G. (Editor)

    2000-01-01

    For the BOReal Ecosystem-Atmosphere Study (BOREAS), the level-3s Landsat Thematic Mapper (TM) data, along with the other remotely sensed images, were collected in order to provide spatially extensive information over the primary study areas. This information includes radiant energy, detailed land cover, and biophysical parameter maps such as Fraction of Photosynthetically Active Radiation (FPAR) and Leaf Area Index (LAI). CCRS collected and supplied the level-3s images to BOREAS for use in the remote sensing research activities. Geographically, the bulk of the level-3s images cover the BOREAS Northern Study Area (NSA) and Southern Study Area (SSA), with a few images covering the area between the NSA and SSA. Temporally, the images cover the period of 22-Jun-1984 to 30-Jul-1996. The images are available in binary, image-format files.

  15. Parallax Player: a stereoscopic format converter

    NASA Astrophysics Data System (ADS)

    Feldman, Mark H.; Lipton, Lenny

    2003-05-01

    The Parallax Player is a software application that is, in essence, a stereoscopic format converter: various formats may be input and output. It can take any one of a wide variety of formats and play them back on many different kinds of PCs and display screens, and it has the built-in capability to produce ersatz stereo from a planar still or movie image. The player handles two basic forms of digital content: still images and movies. It is assumed that all data are digital, either created by means of a photographic film process and later digitized, or directly captured or authored in digital form. In its current implementation, running on a number of Windows operating systems, the Parallax Player reads in a broad selection of contemporary file formats.

  16. A new version of Visual tool for estimating the fractal dimension of images

    NASA Astrophysics Data System (ADS)

    Grossu, I. V.; Felea, D.; Besliu, C.; Jipa, Al.; Bordeianu, C. C.; Stan, E.; Esanu, T.

    2010-04-01

    This work presents a new version of a Visual Basic 6.0 application for estimating the fractal dimension of images (Grossu et al., 2009 [1]). The earlier version was limited to bi-dimensional sets of points, stored in bitmap files. The application was extended to work also with comma-separated values files and three-dimensional images.

    New version program summary
    Program title: Fractal Analysis v02
    Catalogue identifier: AEEG_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 9999
    No. of bytes in distributed program, including test data, etc.: 4 366 783
    Distribution format: tar.gz
    Programming language: MS Visual Basic 6.0
    Computer: PC
    Operating system: MS Windows 98 or later
    RAM: 30 MB
    Classification: 14
    Catalogue identifier of previous version: AEEG_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 1999
    Does the new version supersede the previous version?: Yes
    Nature of problem: Estimating the fractal dimension of 2D and 3D images.
    Solution method: Optimized implementation of the box-counting algorithm.
    Reasons for new version: The previous version was limited to bitmap image files. The new application was extended to work with objects stored in comma-separated values (csv) files. The main advantages are: easier integration with other applications (csv is a widely used, simple text file format); fewer resources consumed and improved performance (only the information of interest, the "black points", is stored); higher resolution (the point coordinates are loaded into Visual Basic double variables [2]); and the possibility of storing three-dimensional objects (e.g., the 3D Sierpinski gasket). In this version the optimized box-counting algorithm [1] was extended to the three-dimensional case.
    Summary of revisions: The application interface was changed from SDI (single-document interface) to MDI (multi-document interface). One form was added to provide a graphical user interface for the new functionality (fractal analysis of 2D and 3D images stored in csv files).
    Additional comments: User-friendly graphical interface; easy deployment mechanism.
    Running time: To a first approximation, the algorithm is linear.
    References:
    [1] I.V. Grossu, C. Besliu, M.V. Rusu, Al. Jipa, C.C. Bordeianu, D. Felea, Comput. Phys. Comm. 180 (2009) 1999-2001.
    [2] F. Balena, Programming Microsoft Visual Basic 6.0, Microsoft Press, US, 1999.
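
    The box-counting method named in the summary can be sketched as follows: count the occupied boxes N(s) at several box sizes s, then take the slope of log N(s) against log(1/s). This minimal Python rendering is illustrative only and is not the distributed Visual Basic code.

```python
import math

def box_count_dimension(points, sizes):
    """Box-counting sketch: count occupied boxes N(s) at each box size s,
    then estimate the fractal dimension as the least-squares slope of
    log N(s) versus log(1/s)."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = []
    for s in sizes:
        # Each point maps to the integer index of the box containing it.
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        ys.append(math.log(len(boxes)))
    n = len(sizes)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

# A filled unit square sampled on a grid should give a dimension near 2.
pts = [(i / 128.0, j / 128.0) for i in range(128) for j in range(128)]
d = box_count_dimension(pts, sizes=[1 / 4, 1 / 8, 1 / 16, 1 / 32])
```

    Extending the same idea to 3D, as the new version does, only requires a third coordinate in the box index tuple.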

  17. Cloud Engineering Principles and Technology Enablers for Medical Image Processing-as-a-Service

    PubMed Central

    Bao, Shunxing; Plassard, Andrew J.; Landman, Bennett A.; Gokhale, Aniruddha

    2017-01-01

    Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches since they can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. To that end, a cloud-based “medical image processing-as-a-service” offers promise in utilizing the ecosystem of Apache Hadoop, which is a flexible framework providing distributed, scalable, fault tolerant storage and parallel computational modules, and HBase, which is a NoSQL database built atop Hadoop’s distributed file system. Despite this promise, HBase’s load distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). This paper makes two contributions to address these concerns by describing key cloud engineering principles and technology enhancements we made to the Apache Hadoop ecosystem for medical imaging applications. First, we propose a row-key design for HBase, which is a necessary step that is driven by the hierarchical organization of imaging data. Second, we propose a novel data allocation policy within HBase to strongly enforce collocation of hierarchically related imaging data. The proposed enhancements accelerate data processing by minimizing network usage and localizing processing to machines where the data already exist. Moreover, our approach is amenable to the traditional scan, subject, and project-level analysis procedures, and is compatible with standard command line/scriptable image processing software. 
Experimental results for an illustrative sample of imaging data reveal that our new HBase policy results in a three-fold time improvement in conversion of classic DICOM to NIfTI file formats when compared with the default HBase region split policy, and nearly a six-fold improvement over a commonly available network file system (NFS) approach, even for relatively small file sets. Moreover, file access latency is lower than with network-attached storage. PMID:28884169
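
    The row-key contribution can be illustrated with a minimal sketch: concatenating zero-padded hierarchy fields from coarse to fine makes HBase's lexicographic row ordering match the project/subject/session/scan/slice hierarchy, which is what keeps related imaging data collocated. The field names and widths here are assumptions, not the paper's exact key layout.

```python
def image_row_key(project, subject, session, scan, slice_index, width=4):
    """Build an HBase row key from hierarchy fields, coarse to fine.
    Zero-padding each numeric field to a fixed width makes HBase's
    lexicographic row ordering keep related slices contiguous."""
    return "-".join([
        project,
        str(subject).zfill(width),
        str(session).zfill(width),
        str(scan).zfill(width),
        str(slice_index).zfill(width),
    ])

# Lexicographic (HBase) order equals numeric slice order because of padding.
keys = sorted(image_row_key("proj1", 12, 1, 2, s) for s in (10, 2, 1))
```

    Without padding, slice 10 would sort between slices 1 and 2 and scatter a scan's slices across regions.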

  18. Improving the interactivity and functionality of Web-based radiology teaching files with the Java programming language.

    PubMed

    Eng, J

    1997-01-01

    Java is a programming language that runs on a "virtual machine" built into World Wide Web (WWW)-browsing programs on multiple hardware platforms. Web pages were developed with Java to enable Web-browsing programs to overlay transparent graphics and text on displayed images so that the user could control the display of labels and annotations on the images, a key feature not available with standard Web pages. This feature was extended to include the presentation of normal radiologic anatomy. Java programming was also used to make Web browsers compatible with the Digital Imaging and Communications in Medicine (DICOM) file format. By enhancing the functionality of Web pages, Java technology should provide greater incentive for using a Web-based approach in the development of radiology teaching material.

  19. BOREAS Soils Data over the SSA in Raster Format and AEAC Projection

    NASA Technical Reports Server (NTRS)

    Knapp, David; Rostad, Harold; Hall, Forrest G. (Editor)

    2000-01-01

    This data set consists of GIS layers that describe the soils of the BOREAS SSA. The original data were submitted as vector layers that were gridded by BOREAS staff to a 30-meter pixel size in the AEAC projection. These data layers include the soil code (which relates to the soil name), modifier (which also relates to the soil name), and extent (indicating the extent that this soil exists within the polygon). There are three sets of these layers representing the primary, secondary, and tertiary soil characteristics. Thus, there is a total of nine layers in this data set along with supporting files. The data are stored in binary, image format files.

  20. Rapid 3D bioprinting from medical images: an application to bone scaffolding

    NASA Astrophysics Data System (ADS)

    Lee, Daniel Z.; Peng, Matthew W.; Shinde, Rohit; Khalid, Arbab; Hong, Abigail; Pennacchi, Sara; Dawit, Abel; Sipzner, Daniel; Udupa, Jayaram K.; Rajapakse, Chamith S.

    2018-03-01

    Bioprinting of tissue has applications throughout medicine. Recent advances in medical imaging allow the generation of 3-dimensional models that can then be 3D printed. However, the conventional method of converting medical images to 3D-printable G-Code instructions has several limitations, namely significant processing time for large, high-resolution images, and the loss of microstructural surface information through surface resolution and subsequent reslicing. We have overcome these issues by creating a Java program that skips the intermediate triangularization and reslicing steps and directly converts binary DICOM images into G-Code. In this study, we tested the two methods of G-Code generation on the application of synthetic bone graft scaffold generation. We imaged human cadaveric proximal femurs at an isotropic resolution of 0.03 mm using a high-resolution peripheral quantitative computed tomography (HR-pQCT) scanner. These images, in the Digital Imaging and Communications in Medicine (DICOM) format, were then processed through two methods. In each method, slices and regions of print were selected, filtered to generate a smoothed image, and thresholded. In the conventional method, these processed images are converted to the STereoLithography (STL) format and then resliced to generate G-Code. In the new, direct method, these processed images are run through our Java program and directly converted to G-Code. File size, processing time, and print time were measured for each. We found that this new method produced a significant reduction in G-Code file size as well as in processing time (92.23% reduction). This allows for more rapid 3D printing from medical images.
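
    The direct image-to-G-Code idea can be sketched as a raster scan that emits a travel move for each run of foreground pixels in a thresholded slice and an extruding move across it. The G-Code dialect, pixel size, and extrusion parameter below are illustrative assumptions, not the authors' Java implementation.

```python
def slice_to_gcode(binary_slice, pixel_mm=0.03, z_mm=0.0):
    """Direct raster-to-G-Code sketch: scan each row of a thresholded
    slice and, for every run of foreground pixels, emit a travel move
    (G0) to the run start and an extruding move (G1) to the run end."""
    lines = [f"G0 Z{z_mm:.3f}"]
    for y, row in enumerate(binary_slice):
        x = 0
        while x < len(row):
            if row[x]:
                start = x
                while x < len(row) and row[x]:
                    x += 1
                lines.append(f"G0 X{start * pixel_mm:.3f} Y{y * pixel_mm:.3f}")
                lines.append(f"G1 X{(x - 1) * pixel_mm:.3f} Y{y * pixel_mm:.3f} E1")
            else:
                x += 1
    return lines

# A 2x4 toy slice: one short run in row 0, one full-width run in row 1.
gcode = slice_to_gcode([[0, 1, 1, 0],
                        [1, 1, 1, 1]])
```

    Because each slice is emitted independently, no STL surface is ever built and no reslicing step is needed, which is where the processing-time saving comes from.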

  1. Data management in pattern recognition and image processing systems

    NASA Technical Reports Server (NTRS)

    Zobrist, A. L.; Bryant, N. A.

    1976-01-01

    Data management considerations are important to any system which handles large volumes of data or in which the manipulation of data is technically sophisticated. A particular problem is the introduction of image-formatted files into the mainstream of data processing applications. This report describes a comprehensive system for the manipulation of image, tabular, and graphical data sets which involves conversions between the various data types. A key characteristic is the use of image processing technology to accomplish data management tasks. Because of this, the term 'image-based information system' has been adopted.

  2. Enhanced Historical Land-Use and Land-Cover Data Sets of the U.S. Geological Survey

    USGS Publications Warehouse

    Price, Curtis V.; Nakagaki, Naomi; Hitt, Kerie J.; Clawges, Rick M.

    2007-01-01

    Historical land-use and land-cover data, available from the U.S. Geological Survey (USGS) for the conterminous United States and Hawaii, have been enhanced for use in geographic information system (GIS) applications. The original digital data sets were created by the USGS in the late 1970s and early 1980s and were later converted by the USGS and the U.S. Environmental Protection Agency (USEPA) to a GIS format in the early 1990s. These data have been available on USEPA's Web site since the early 1990s and have been used for many national applications, despite minor coding and topological errors. During the 1990s, a group of USGS researchers modified the data set for use in the National Water-Quality Assessment Program. These edited files have been further modified to create a more accurate, topologically clean, and seamless national data set. Several different methods, including custom editing software and several batch processes, were applied to create this enhanced version of the national data set. The data sets are included in this report in the commonly used shapefile and Tagged Image File Format (TIFF) formats. In addition, this report includes two polygon data sets (in shapefile format) representing (1) land-use and land-cover source documentation extracted from the previously published USGS data files, and (2) the extent of each polygon data file.

  3. BOREAS TE-18 Landsat TM Physical Classification Image of the NSA

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Knapp, David

    2000-01-01

    The BOREAS TE-18 team focused its efforts on using remotely sensed data to characterize the successional and disturbance dynamics of the boreal forest for use in carbon modeling. The objective of this classification is to provide the BOREAS investigators with a data product that characterizes the land cover of the NSA. A Landsat-5 TM image from 21-Jun-1995 was used to derive the classification. A technique was implemented that uses reflectances of various land cover types along with a geometric optical canopy model to produce spectral trajectories. These trajectories are used in a way that is similar to training data to classify the image into the different land cover classes. The data are provided in a binary, image file format. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).

  4. BOREAS TE-18 Landsat TM Physical Classification Image of the SSA

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Knapp, David

    2000-01-01

    The BOREAS TE-18 team focused its efforts on using remotely sensed data to characterize the successional and disturbance dynamics of the boreal forest for use in carbon modeling. The objective of this classification is to provide the BOREAS investigators with a data product that characterizes the land cover of the SSA. A Landsat-5 TM image from 02-Sep-1994 was used to derive the classification. A technique was implemented that uses reflectances of various land cover types along with a geometric optical canopy model to produce spectral trajectories. These trajectories are used as training data to classify the image into the different land cover classes. These data are provided in a binary image file format. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).

  5. BOREAS TE-18 Landsat TM Maximum Likelihood Classification Image of the SSA

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Knapp, David

    2000-01-01

    The BOREAS TE-18 team focused its efforts on using remotely sensed data to characterize the successional and disturbance dynamics of the boreal forest for use in carbon modeling. The objective of this classification is to provide the BOREAS investigators with a data product that characterizes the land cover of the SSA. A Landsat-5 TM image from 02-Sep-1994 was used to derive the classification. A technique was implemented that uses reflectances of various land cover types along with a geometric optical canopy model to produce spectral trajectories. These trajectories are used as training data to classify the image into the different land cover classes. These data are provided in a binary image file format. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).

  6. The DMSP Space Weather Sensors Data Archive Listing (1982-2013) and File Formats Descriptions

    DTIC Science & Technology

    2014-08-01

    environment sensors including the auroral particle spectrometer (SSJ), the fluxgate magnetometer (SSM), the topside thermal plasma monitor (SSIES... Fluxgate Magnetometer (SSM) for the Defense Meteorological Satellite Program (DMSP) Block 5D-2, Flight 7, Instrument Papers, AFGL-TR-84-0225; ADA155229...Flux) SSM The fluxgate magnetometer . (Special Sensor, Magnetometer ) SSULI The ultraviolet limb imager SSUSI The ultraviolet spectrographic imager

  7. Trueness and precision of digital impressions obtained using an intraoral scanner with different head size in the partially edentulous mandible.

    PubMed

    Hayama, Hironari; Fueki, Kenji; Wadachi, Juro; Wakabayashi, Noriyuki

    2018-03-01

    It remains unclear whether digital impressions obtained using an intraoral scanner are sufficiently accurate for use in fabrication of removable partial dentures. We therefore compared the trueness and precision between conventional and digital impressions in the partially edentulous mandible. Mandibular Kennedy Class I and III models with soft silicone simulated-mucosa placed on the residual edentulous ridge were used. The reference models were converted to standard triangulated language (STL) file format using an extraoral scanner. Digital impressions were obtained using an intraoral scanner with a large or small scanning head, and converted to STL files. For conventional impressions, pressure impressions of the reference models were made and working casts fabricated using modified dental stone; these were converted to STL file format using an extraoral scanner. Conversion to STL file format was performed 5 times for each method. Trueness and precision were evaluated by deviation analysis using three-dimensional image processing software. Digital impressions had superior trueness (54-108μm), but inferior precision (100-121μm) compared to conventional impressions (trueness 122-157μm, precision 52-119μm). The larger intraoral scanning head showed better trueness and precision than the smaller head, and on average required fewer scanned images of digital impressions than the smaller head (p<0.05). On the color map, the deviation distribution tended to differ between the conventional and digital impressions. Digital impressions are partially comparable to conventional impressions in terms of accuracy; the use of a larger scanning head may improve the accuracy for removable partial denture fabrication. Copyright © 2018 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.

  8. Heterogeneous Concurrent Modeling and Design in Java (Volume 2: Ptolemy II Software Architecture)

    DTIC Science & Technology

    2008-04-01

    file (EPS) suitable for inclusion in word processors. The image in figure 7.3 is such an EPS file imported into FrameMaker . At this time, the EPS...can be imported into word processors. This figure was imported into FrameMaker . 152 Ptolemy II Plot Package 7.2.4 Modifying the format You can control...FixToken class 57 FrameMaker 149 full name 4 function closures 59 function dependency 48 FunctionDependency class 48 FunctionToken 122 FunctionToken

  9. The PDS-based Data Processing, Archiving and Management Procedures in Chang'e Mission

    NASA Astrophysics Data System (ADS)

    Zhang, Z. B.; Li, C.; Zhang, H.; Zhang, P.; Chen, W.

    2017-12-01

    PDS is adopted as the standard format of scientific data and the foundation of all data-related procedures in the Chang'e mission. Unlike the geographically distributed nature of the Planetary Data System, all procedures of data processing, archiving, management, and distribution proceed at the headquarters of the Ground Research and Application System of the Chang'e mission in a centralized manner. The raw data acquired by the ground stations are transmitted to and processed by the data preprocessing subsystem (DPS) for the production of PDS-compliant Level 0 to Level 2 data products using established algorithms, with each product file described by an attached label. All products with the same orbit number are then grouped into a scheduled task for archiving, along with an XML archive list file recording each product file's properties, such as file name and file size. After receiving the archive request from the DPS, the data management subsystem (DMS) is invoked to parse the XML list file, validating all the claimed files and their compliance with PDS using a prebuilt data dictionary, and then extracting metadata for each data product file from its PDS label and the fields of its normalized filename. Various requirements of data management, retrieval, distribution, and application can be met using flexible combinations of the rich metadata empowered by PDS.
    In the forthcoming CE-5 mission, the design of data structures and procedures will be updated from PDS version 3, used in the previous CE-1, CE-2, and CE-3 missions, to the new version 4. The main changes are: 1) a dedicated detached XML label will be used to describe the corresponding scientific data acquired by the 4 instruments carried; the XML parsing framework used in archive-list validation will be reused for the label after some necessary adjustments; 2) all image data acquired by the panorama camera, landing camera, and lunar mineralogical spectrometer will use an Array_2D_Image/Array_3D_Image object to store image data and a Table_Character object to store image frame headers; the tabulated data acquired by the lunar regolith penetrating radar will use a Table_Binary object to store measurements.
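
    The archive-list validation step can be sketched with standard-library XML parsing: extract each claimed product's file name and size so they can be checked against the delivered files and the data dictionary. The element names in this snippet are hypothetical, not the mission's actual schema.

```python
import xml.etree.ElementTree as ET

# A toy archive list with two claimed product files (illustrative schema).
ARCHIVE_LIST = """<archive orbit="1234">
  <product><file_name>CE5_L2_0001.img</file_name><file_size>2048</file_size></product>
  <product><file_name>CE5_L2_0002.img</file_name><file_size>4096</file_size></product>
</archive>"""

def parse_archive_list(xml_text):
    """Pull each claimed product's file name and size out of the XML
    archive list, ready for comparison with the delivered files."""
    root = ET.fromstring(xml_text)
    return [(p.findtext("file_name"), int(p.findtext("file_size")))
            for p in root.findall("product")]

products = parse_archive_list(ARCHIVE_LIST)
```

    The same parsing framework can be pointed at the detached PDS4 XML labels planned for CE-5, which is why the abstract expects it to be reusable.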

  10. 37 CFR 2.22 - Filing requirements for a TEAS Plus application.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... signed by a person properly authorized to sign on behalf of the owner pursuant to § 2.193(e)(1); (12) A... attach a digitized image of the mark in .jpg format. If the mark includes color, the drawing must show...

  11. BOREAS RSS-14 Level -3 Gridded Radiometer and Satellite Surface Radiation Images

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Hodges, Gary; Smith, Eric A.

    2000-01-01

    The BOREAS RSS-14 team collected and processed GOES-7 and -8 images of the BOREAS region as part of its effort to characterize the incoming, reflected, and emitted radiation at regional scales. This data set contains surface radiation parameters, such as net radiation and net solar radiation, that have been interpolated from GOES-7 images and AMS data onto the standard BOREAS mapping grid at a resolution of 5 km N-S and E-W. While some parameters are taken directly from the AMS data set, others have been corrected according to calibrations carried out during IFC-2 in 1994. The corrected values as well as the uncorrected values are included. For example, two values of net radiation are provided: an uncorrected value (Rn), and a value that has been corrected according to the calibrations (Rn-COR). The data are provided in binary image format data files. Some of the data files on the BOREAS CD-ROMs have been compressed using the Gzip program. See section 8.2 for details. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).

  12. Visualization of GPM Standard Products at the Precipitation Processing System (PPS)

    NASA Astrophysics Data System (ADS)

    Kelley, O.

    2010-12-01

    Many of the standard data products for the Global Precipitation Measurement (GPM) constellation of satellites will be generated at and distributed by the Precipitation Processing System (PPS) at NASA Goddard. PPS will provide several means to visualize these data products. These visualization tools will be used internally by PPS analysts to investigate potential anomalies in the data files, and these tools will also be made available to researchers. Currently, a free data viewer called THOR, the Tool for High-resolution Observation Review, can be downloaded and installed on Linux, Windows, and Mac OS X systems. THOR can display swath and grid products, and to a limited degree, the low-level data packets that the satellite itself transmits to the ground system. Observations collected since the 1997 launch of the Tropical Rainfall Measuring Mission (TRMM) satellite can be downloaded from the PPS FTP archive, and in the future, many of the GPM standard products will also be available from this FTP site. To provide easy access to this 80 terabyte and growing archive, PPS currently operates an on-line ordering tool called STORM that provides geographic and time searches, browse-image display, and the ability to order user-specified subsets of standard data files. Prior to the anticipated 2013 launch of the GPM core satellite, PPS will expand its visualization tools by integrating an on-line version of THOR within STORM to provide on-the-fly image creation of any portion of an archived data file at a user-specified degree of magnification. PPS will also provide OpenDAP access to the data archive and OGC WMS image creation of both swath and gridded data products. During the GPM era, PPS will continue to provide realtime globally-gridded 3-hour rainfall estimates to the public in a compact binary format (3B42RT) and in a GIS format (2-byte TIFF images + ESRI WorldFiles).

  13. Video multiple watermarking technique based on image interlacing using DWT.

    PubMed

    Ibrahim, Mohamed M; Abdel Kader, Neamat S; Zorkany, M

    2014-01-01

    Digital watermarking is one of the important techniques for securing digital media files in the domains of data authentication and copyright protection. In nonblind watermarking systems, the need for the original host file in the watermark recovery operation imposes overhead on system resources, doubling memory capacity and communications bandwidth requirements. In this paper, a robust video multiple-watermarking technique is proposed to solve this problem. This technique is based on image interlacing. In this technique, a three-level discrete wavelet transform (DWT) is used as the watermark embedding/extracting domain, the Arnold transform is used as the watermark encryption/decryption method, and different types of media (gray image, color image, and video) are used as watermarks. The robustness of this technique is tested by applying different types of attacks, such as geometric, noising, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth.
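
    The DWT embedding domain the abstract relies on can be illustrated with a toy one-level Haar transform: watermark bits are added to detail coefficients and, for simplicity, extracted non-blindly against the original coefficients, which is the baseline the paper's interlacing scheme improves on. The embedding strength alpha is an arbitrary illustrative value.

```python
def haar_dwt_rows(img):
    """One-level 1-D Haar transform along each row: (approx, detail)."""
    approx, detail = [], []
    for row in img:
        approx.append([(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)])
        detail.append([(row[i] - row[i + 1]) / 2 for i in range(0, len(row), 2)])
    return approx, detail

def embed_bits(detail, bits, alpha=4.0):
    """Additively embed one watermark bit per detail coefficient."""
    flat = [c for row in detail for c in row]
    return [c + alpha * (1 if b else -1) for c, b in zip(flat, bits)]

def extract_bits(marked, original_detail):
    """Non-blind extraction: compare against the original coefficients."""
    flat = [c for row in original_detail for c in row]
    return [1 if m - c > 0 else 0 for m, c in zip(marked, flat)]

img = [[10, 12, 14, 18], [20, 20, 16, 12]]
_, det = haar_dwt_rows(img)
bits = [1, 0, 1, 1]
marked = embed_bits(det, bits)
recovered = extract_bits(marked, det)
```

    A production scheme would iterate the transform to three levels and, as the paper proposes, avoid storing the original host by interlacing the video frames.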

  14. Geographic Information for Analysis of Highway Runoff-Quality Data on a National or Regional Scale in the Conterminous United States

    USGS Publications Warehouse

    Smieszek, Tomas W.; Granato, Gregory E.

    2000-01-01

    Spatial data are important for interpretation of water-quality information on a regional or national scale. Geographic information systems (GIS) facilitate interpretation and integration of spatial data. The geographic information and data compiled for the conterminous United States during the National Highway Runoff Water-Quality Data and Methodology Synthesis project are described in this document, which also includes information on the structure, file types, and the geographic information in the data files. This 'geodata' directory contains two subdirectories, labeled 'gisdata' and 'gisimage.' The 'gisdata' directory contains ArcInfo coverages, ArcInfo export files, shapefiles (used in ArcView), Spatial Data Transfer Standard Topological Vector Profile format files, and metadata files in subdirectories organized by file type. The 'gisimage' directory contains the GIS data in common image-file formats. The spatial geodata include two rain-zone region maps and a map of national ecosystems originally published by the U.S. Environmental Protection Agency; regional estimates of mean annual streamflow and water hardness published by the Federal Highway Administration; and mean monthly temperature, mean annual precipitation, and mean monthly snowfall modified from data published by the National Climatic Data Center and made available to the public by the Oregon Climate Service at Oregon State University. These GIS files were compiled for qualitative spatial analysis of available data on a national and/or regional scale and therefore should be considered qualitative representations, not precise geographic location information.

  15. Methods for identification of images acquired with digital cameras

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Bijhold, Jurrien; Kieft, Martijn; Kurosawa, Kenji; Kuroki, Kenro; Saitoh, Naoki

    2001-02-01

    From the court we were asked whether it is possible to determine if an image has been made with a specific digital camera. This question has to be answered in child pornography cases, where evidence is needed that a certain picture has been made with a specific camera. We have looked into different methods of examining the cameras to determine if a specific image has been made with a given camera: defects in the CCD, the file formats used, noise introduced by the pixel array, and watermarking applied to images by the camera manufacturer.

  16. Technical Note: Development and validation of an open data format for CT projection data.

    PubMed

    Chen, Baiyu; Duan, Xinhui; Yu, Zhicong; Leng, Shuai; Yu, Lifeng; McCollough, Cynthia

    2015-12-01

    Lack of access to projection data from patient CT scans is a major limitation for the development and validation of new reconstruction algorithms. To meet this critical need, this work developed and validated a vendor-neutral format for CT projection data, which will further be employed to build a library of patient projection data for public access. A Digital Imaging and Communications in Medicine (DICOM)-like format was created for CT projection data (CT-PD), named the DICOM-CT-PD format. The format stores attenuation information in the DICOM image data block and stores the parameters necessary for reconstruction in the DICOM header under various tags (51 tags to store the geometry and scan parameters and 9 tags to store patient information). To validate the accuracy and completeness of the new format, CT projection data from helical scans of the ACR CT accreditation phantom were acquired from two clinical CT scanners (Somatom Definition Flash, Siemens Healthcare, Forchheim, Germany and Discovery CT750 HD, GE Healthcare, Waukesha, WI). After decoding (by the authors for Siemens, by the manufacturer for GE), the projection data were converted to the DICOM-CT-PD format. Off-line CT reconstructions were performed by internal and external reconstruction researchers using only the information stored in the DICOM-CT-PD files and the DICOM-CT-PD field definitions. Compared with the commercially reconstructed CT images, the off-line reconstructed images created using the DICOM-CT-PD format are similar in terms of CT numbers (differences of 5 HU for the bone insert and -9 HU for the air insert), image noise (±1 HU), and low-contrast detectability (6 mm rods visible in both). Because of different reconstruction approaches, slightly different in-plane and cross-plane high-contrast spatial resolutions were obtained compared to those reconstructed on the scanners (axial plane: GE off-line, 7 lp/cm; GE commercial, 7 lp/cm; Siemens off-line, 8 lp/cm; Siemens commercial, 7 lp/cm. 
Coronal plane: Siemens off-line, 6 lp/cm; Siemens commercial, 8 lp/cm). A vendor-neutral extended DICOM format has been developed that enables open sharing of CT projection data from third-generation CT scanners. Validation of the format showed that the geometric parameters and attenuation information in the DICOM-CT-PD file were correctly stored, could be retrieved with use of the provided instructions, and contained sufficient data for reconstruction of CT images that approximated those from the commercial scanner.
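    The core idea of DICOM-CT-PD, tagged reconstruction parameters stored alongside the raw attenuation block, can be illustrated with a small self-contained sketch. The tag names and byte layout below are invented stand-ins for illustration; the actual format defines 51 geometry/scan tags and 9 patient tags within real DICOM structures:

    ```python
    import json
    import struct

    def pack_ctpd(projection, geometry):
        """Serialize a projection (list of floats) plus geometry tags into one blob.
        Header is JSON; the data block is little-endian float32, DICOM-style."""
        header = json.dumps(geometry).encode("utf-8")
        data = struct.pack("<%df" % len(projection), *projection)
        return struct.pack("<I", len(header)) + header + data

    def unpack_ctpd(blob):
        """Recover the geometry tags and the projection samples from the blob."""
        (hlen,) = struct.unpack_from("<I", blob, 0)
        geometry = json.loads(blob[4:4 + hlen].decode("utf-8"))
        body = blob[4 + hlen:]
        projection = list(struct.unpack("<%df" % (len(body) // 4), body))
        return projection, geometry

    # Illustrative tag names, not the real DICOM-CT-PD tag assignments.
    geom = {"SourceToDetector_mm": 1085.6, "DetectorColumns": 736}
    blob = pack_ctpd([0.5, 1.25, 2.0], geom)
    proj, tags = unpack_ctpd(blob)
    ```

    The point mirrored from the paper is that a reader needs only the blob and the field definitions to reconstruct: no vendor decoder is required once the data are in the open format.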

  17. Efficacy of ProTaper universal retreatment files in removing filling materials during root canal retreatment.

    PubMed

    Giuliani, Valentina; Cocchetti, Roberto; Pagavino, Gabriella

    2008-11-01

    The aim of this study was to evaluate the efficacy of the ProTaper Universal System rotary retreatment system and of Profile 0.06 and hand instruments (K-file) in the removal of root filling materials. Forty-two extracted single-rooted anterior teeth were selected. The root canals were enlarged with nickel-titanium (NiTi) rotary files, filled with gutta-percha and sealer, and randomly divided into 3 experimental groups. The filling materials were removed with solvent in conjunction with one of the following devices and techniques: the ProTaper Universal System for retreatment, ProFile 0.06, and hand instruments (K-file). The roots were longitudinally sectioned, and the image of the root surface was photographed. The images were captured in JPEG format; the areas of the remaining filling materials and the time required for removing the gutta-percha and sealer were calculated by using the nonparametric one-way Kruskal-Wallis test and Tukey-Kramer tests, respectively. The group that showed better results for removing filling materials was the ProTaper Universal System for retreatment files, whereas the group of ProFile rotary instruments yielded better root canal cleanliness than the hand instruments, even though there was no statistically significant difference. The ProTaper Universal System for retreatment and ProFile rotary instruments worked significantly faster than the K-file. The ProTaper Universal System for retreatment files left cleaner root canal walls than the K-file hand instruments and the ProFile Rotary instruments, although none of the devices used guaranteed complete removal of the filling materials. The rotary NiTi system proved to be faster than hand instruments in removing root filling materials.

  18. Cyanopolyyne Chemistry in TMC-1

    NASA Astrophysics Data System (ADS)

    Winstanley, N.; Nejad, L. A. M.

    1996-03-01

    Using pseudo-time-dependent models and three different reaction networks, a detailed study of the dominant reaction pathways for the formation of cyanopolyynes and their abundances in TMC-1 is presented. The analysis of the chemical reactions shows that there are two major chemical regimes for the formation of cyanopolyynes. First, at early times of less than ~10^4 yr, when ion-molecule reactions are dominant, the main chemical route to the larger cyanopolyynes is

      C_nH+ --(N)--> C_nN+ --(H2)--> HC_nN+ --(H2)--> H2C_nN+ --(e-)--> HC_nN,   n = 5, 7, 9.

    Second, at times greater than 10^4 yr, when neutral-neutral reactions become dominant, the two major reaction routes to the cyanopolyynes are

      (a)  HCN --(C2H)--> HC3N --(C2H)--> HC5N --(C2H)--> HC7N --(C2H)--> HC9N
      (b)  C_nH2 + CN -> HC_(n+1)N + H,   n = 4, 6, 8,

    depending on the reaction network used. The results indicate that route (a) requires large abundances of C2H (fractional abundances of ~10^-7) and route (b) requires large abundances of C2H2 in order to reproduce the observed abundances of cyanopolyynes. The calculated abundances of cyanopolyynes show great sensitivity to the value of the extinction, particularly at t ≳ 5×10^5 yr (i.e., the photochemical timescale). The effects of other physical parameters, such as the cosmic-ray ionization rate, are also examined. In general, the model calculations show that the observed abundances of cyanopolyynes can be achieved by pseudo-time-dependent models at late times of several million years.

  19. Digital Seismic-Reflection Data from Eastern Rhode Island Sound and Vicinity, 1975-1980

    USGS Publications Warehouse

    McMullen, K.Y.; Poppe, L.J.; Soderberg, N.K.

    2009-01-01

    During 1975 and 1980, the U.S. Geological Survey (USGS) conducted two seismic-reflection surveys in Rhode Island Sound (RIS) aboard the research vessel Asterias: cruise ASTR75-June surveyed eastern RIS in 1975 and cruise AST-80-6B surveyed southern RIS in 1980. Data from these surveys were recorded in analog form and archived at the USGS Woods Hole Coastal and Marine Science Center's Data Library. In response to recent interest in the geology of RIS and in an effort to make the data more readily accessible while preserving the original paper records, the seismic data from these cruises were scanned and converted to black-and-white Tagged Image File Format (TIFF) and grayscale Portable Network Graphics (PNG) images and SEG-Y data files. Navigation data were converted from U.S. Coast Guard Long Range Aids to Navigation (LORAN) time delays to latitudes and longitudes that are available in Environmental Systems Research Institute, Inc., shapefile format and as eastings and northings in space-delimited text format. This report complements two others in which analog seismic-reflection data from RIS (McMullen and others, 2009) and Long Island and Block Island Sounds (Poppe and others, 2002) were converted into digital form.

  20. 77 FR 72788 - Copyright Office Fees

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-06

    ... Adobe Portable Document File (PDF) format that contains searchable, accessible text (not an image... processing such statements and associated royalty payments was funded solely by the royalty fees collected... Title 17 that permits the Office to apportion up to 50 percent of the cost of processing the SOAs and...

  1. File Management In Space

    NASA Technical Reports Server (NTRS)

    Critchfield, Anna R.; Zepp, Robert H.

    2000-01-01

    We propose that the user interact with the spacecraft as if the spacecraft were a file server, so that the user can select and receive data as files in standard formats (e.g., tables or images, such as jpeg) via the Internet. Internet technology will be used end-to-end from the spacecraft to authorized users, such as the flight operation team, and project scientists. The proposed solution includes a ground system and spacecraft architecture, mission operations scenarios, and an implementation roadmap showing migration from current practice to the future, where distributed users request and receive files of spacecraft data from archives or spacecraft with equal ease. This solution will provide ground support personnel and scientists easy, direct, secure access to their authorized data without cumbersome processing, and can be extended to support autonomous communications with the spacecraft.

  2. BOREAS Level-3a Landsat TM Imagery: Scaled At-sensor Radiance in BSQ Format

    NASA Technical Reports Server (NTRS)

    Nickerson, Jaime; Hall, Forrest G. (Editor); Knapp, David; Newcomer, Jeffrey A.; Cihlar, Josef

    2000-01-01

    For BOREAS, the level-3a Landsat TM data, along with the other remotely sensed images, were collected in order to provide spatially extensive information over the primary study areas. This information includes radiant energy, detailed land cover, and biophysical parameter maps such as FPAR and LAI. Although very similar in content to the level-3s Landsat TM products, the level-3a images were created to provide users with a more usable BSQ format and to provide information that permitted direct determination of per-pixel latitude and longitude coordinates. Geographically, the level-3a images cover the BOREAS NSA and SSA. Temporally, the images cover the period of 22-Jun-1984 to 30-Jul-1996. The images are available in binary, image-format files. With permission from CCRS and RSI, several of the full-resolution images are included on the BOREAS CD-ROM series. Due to copyright issues, the images not included on the CD-ROM may not be publicly available. See Sections 15 and 16 for information about how to acquire the data. Information about the images not on the CD-ROMs is provided in an inventory listing on the CD-ROMs.
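    BSQ (band-sequential) layout stores each band as one contiguous block: all of band 1, then all of band 2, and so on. A minimal numpy sketch of unpacking such a buffer (the dimensions are illustrative, not actual BOREAS TM granule sizes):

    ```python
    import numpy as np

    def read_bsq(raw, bands, rows, cols, dtype=np.uint8):
        """Interpret a raw byte buffer as band-sequential imagery.
        Returns a (bands, rows, cols) array."""
        arr = np.frombuffer(raw, dtype=dtype)
        assert arr.size == bands * rows * cols, "buffer size does not match header"
        return arr.reshape(bands, rows, cols)

    # Illustrative 2-band, 2x3 scene packed band-sequentially.
    raw = bytes(range(12))
    cube = read_bsq(raw, bands=2, rows=2, cols=3)
    ```

    For a real granule the band count, line/sample dimensions, and pixel type would come from the product's header documentation rather than being hard-coded.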

  3. BOREAS Level-3s SPOT Imagery: Scaled At-sensor Radiance in LGSOWG Format

    NASA Technical Reports Server (NTRS)

    Strub, Richard; Nickeson, Jaime; Newcomer, Jeffrey A.; Hall, Forrest G. (Editor); Cihlar, Josef

    2000-01-01

    For BOReal Ecosystem-Atmosphere Study (BOREAS), the level-3s Satellite Pour l'Observation de la Terre (SPOT) data, along with the other remotely sensed images, were collected in order to provide spatially extensive information over the primary study areas. This information includes radiant energy, detailed land cover, and biophysical parameter maps such as Fraction of Photosynthetically Active Radiation (FPAR) and Leaf Area Index (LAI). The SPOT images acquired for the BOREAS project were selected primarily to fill temporal gaps in the Landsat Thematic Mapper (TM) image data collection. CCRS collected and supplied the level-3s images to BOREAS Information System (BORIS) for use in the remote sensing research activities. Spatially, the level-3s images cover 60- by 60-km portions of the BOREAS Northern Study Area (NSA) and Southern Study Area (SSA). Temporally, the images cover the period of 17-Apr-1994 to 30-Aug-1996. The images are available in binary image format files. Due to copyright issues, the SPOT images may not be publicly available.

  4. JWST science data products

    NASA Astrophysics Data System (ADS)

    Swade, Daryl; Bushouse, Howard; Greene, Gretchen; Swam, Michael

    2014-07-01

    Science data products for James Webb Space Telescope (JWST) observations will be generated by the Data Management Subsystem (DMS) within the JWST Science and Operations Center (S&OC) at the Space Telescope Science Institute (STScI). Data processing pipelines within the DMS will produce uncalibrated and calibrated exposure files, as well as higher level data products that result from combined exposures, such as mosaic images. Information to support the science observations, for example data from engineering telemetry, proposer inputs, and observation planning will be captured and incorporated into the science data products. All files will be generated in Flexible Image Transport System (FITS) format. The data products will be made available through the Mikulski Archive for Space Telescopes (MAST) and adhere to International Virtual Observatory Alliance (IVOA) standard data protocols.
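    FITS metadata lives in fixed 80-character header "cards" (keyword in columns 1-8, "= " in columns 9-10, then the value). A minimal sketch of composing one such card, handling only simple string and numeric values:

    ```python
    def fits_card(keyword, value, comment=""):
        """Format one 80-character FITS header card (fixed-format, simple values)."""
        if isinstance(value, str):
            field = "'%-8s'" % value  # strings quoted, minimum 8 chars, left-justified
            body = "%-8s= %-20s" % (keyword.upper()[:8], field)
        else:
            # Numbers are right-justified so they end at column 30.
            body = "%-8s= %20s" % (keyword.upper()[:8], value)
        if comment:
            body += " / " + comment
        return body[:80].ljust(80)

    card = fits_card("TELESCOP", "JWST", "telescope used")
    ```

    Real pipelines would use a library such as astropy rather than formatting cards by hand, but the fixed-width card is what makes FITS headers both human-readable and machine-parseable.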

  5. SETI-EC: SETI Encryption Code

    NASA Astrophysics Data System (ADS)

    Heller, René

    2018-03-01

    The SETI Encryption code, written in Python, creates a message for use in testing the decryptability of a simulated incoming interstellar message. The code uses images in a portable bit map (PBM) format, then writes the corresponding bits into the message, and finally returns both a PBM image and a text (TXT) file of the entire message. The natural constants (c, G, h) and the wavelength of the message are defined in the first few lines of the code, followed by the reading of the input files and their conversion into 757 strings of 359 bits to give one page. Each header of a page, i.e. the little-endian binary code translation of the tempo-spatial yardstick, is calculated and written on-the-fly for each page.
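    The plain PBM (P1) format used for the message pages is just a text header ("P1", width, height) followed by 0/1 pixels. A generic reader/writer sketch (not Heller's actual message-assembly code):

    ```python
    def write_pbm(bits, width):
        """Serialize a flat bit list as a plain (P1) PBM image string."""
        rows = [bits[i:i + width] for i in range(0, len(bits), width)]
        lines = ["P1", "%d %d" % (width, len(rows))]
        lines += [" ".join(str(b) for b in row) for row in rows]
        return "\n".join(lines) + "\n"

    def read_pbm(text):
        """Recover the flat bit list and width from a plain PBM string
        (comments are not handled in this simplified sketch)."""
        tokens = text.split()
        assert tokens[0] == "P1"
        width, height = int(tokens[1]), int(tokens[2])
        bits = [int(t) for t in tokens[3:3 + width * height]]
        return bits, width
    ```

    Because a P1 file is plain text, a bit stream written this way round-trips losslessly, which is what makes it convenient for encoding a message as an image.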

  6. 77 FR 28391 - Announcement of Requirements and Registration for “Ocular Imaging Challenge”

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-14

    ..., color, zoom, pan) Integrate with existing EHRs (e.g. ``single sign-on'') Where applicable, leverage and... existing office hardware platforms, and to integrate with existing EHR systems (e.g. ``single sign-on... on the acquisition devices in proprietary databases and file formats, and therefore have limited...

  7. Development of Software to Model AXAF-I Image Quality

    NASA Technical Reports Server (NTRS)

    Ahmad, Anees; Hawkins, Lamar

    1996-01-01

    This draft final report describes the work performed under the delivery order number 145 from May 1995 through August 1996. The scope of work included a number of software development tasks for the performance modeling of AXAF-I. A number of new capabilities and functions have been added to the GT software, which is the command mode version of the GRAZTRACE software, originally developed by MSFC. A structural data interface has been developed for the EAL (old SPAR) finite element analysis FEA program, which is being used by MSFC Structural Analysis group for the analysis of AXAF-I. This interface utility can read the structural deformation file from the EAL and other finite element analysis programs such as NASTRAN and COSMOS/M, and convert the data to a suitable format that can be used for the deformation ray-tracing to predict the image quality for a distorted mirror. There is a provision in this utility to expand the data from finite element models assuming 180 degrees symmetry. This utility has been used to predict image characteristics for the AXAF-I HRMA, when subjected to gravity effects in the horizontal x-ray ground test configuration. The development of the metrology data processing interface software has also been completed. It can read the HDOS FITS format surface map files, manipulate and filter the metrology data, and produce a deformation file, which can be used by GT for ray tracing for the mirror surface figure errors. This utility has been used to determine the optimum alignment (axial spacing and clocking) for the four pairs of AXAF-I mirrors. Based on this optimized alignment, the geometric images and effective focal lengths for the as built mirrors were predicted to cross check the results obtained by Kodak.

  8. 76 FR 43679 - Filing via the Internet; Notice of Additional File Formats for efiling

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-21

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. RM07-16-000] Filing via the Internet; Notice of Additional File Formats for efiling Take notice that the Commission has added to its list of acceptable file formats the four-character file extensions for Microsoft Office 2007/2010...

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    North, Michael J.

    SchemaOnRead provides tools for implementing schema-on-read including a single function call (e.g., schemaOnRead("filename")) that reads text (TXT), comma separated value (CSV), raster image (BMP, PNG, GIF, TIFF, and JPG), R data (RDS), HDF5, NetCDF, spreadsheet (XLS, XLSX, ODS, and DIF), Weka Attribute-Relation File Format (ARFF), Epi Info (REC), Pajek network (PAJ), R network (NET), Hypertext Markup Language (HTML), SPSS (SAV), Systat (SYS), and Stata (DTA) files. It also recursively reads folders (e.g., schemaOnRead("folder")), returning a nested list of the contained elements.
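    The schema-on-read pattern, inferring how to parse a file from its name at read time, can be sketched in Python (SchemaOnRead itself is an R package; the reader table below is a simplified analogue covering only a few formats):

    ```python
    import csv
    import io
    import json
    import os

    # Map file extensions to parser callables; unknown types fall back to raw text.
    READERS = {
        ".txt": lambda f: f.read(),
        ".csv": lambda f: list(csv.reader(f)),
        ".json": lambda f: json.load(f),
    }

    def schema_on_read(name, text):
        """Dispatch on the file extension to pick a parser (schema-on-read)."""
        ext = os.path.splitext(name)[1].lower()
        parser = READERS.get(ext, READERS[".txt"])
        return parser(io.StringIO(text))
    ```

    Extending the table is just adding an entry per format, which is how a single entry point like schemaOnRead("filename") can cover dozens of file types.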

  10. Real-Time Processing of Pressure-Sensitive Paint Images

    DTIC Science & Technology

    2006-12-01

    intermediate or final data to the hard disk in 3D grid format. In addition to the pressure or pressure coefficient at every grid point, the saved file may...occurs. Nevertheless, to achieve an accurate mapping between 2D image coordinates and 3D spatial coordinates, additional parameters must be introduced. A...improved mapping between the 2D and 3D coordinates. In a more sophisticated approach, additional terms corresponding to specific deformation modes

  11. Interactive publications: creation and usage

    NASA Astrophysics Data System (ADS)

    Thoma, George R.; Ford, Glenn; Chung, Michael; Vasudevan, Kirankumar; Antani, Sameer

    2006-02-01

    As envisioned here, an "interactive publication" has similarities to multimedia documents that have been in existence for a decade or more, but possesses specific differentiating characteristics. In common usage, the latter refers to online entities that, in addition to text, consist of files of images and video clips residing separately in databases, rarely providing immediate context to the document text. While an interactive publication has many media objects as does the "traditional" multimedia document, it is a self-contained document, either as a single file with media files embedded within it, or as a "folder" containing tightly linked media files. The main characteristic that differentiates an interactive publication from a traditional multimedia document is that the reader would be able to reuse the media content for analysis and presentation, and to check the underlying data and possibly derive alternative conclusions leading, for example, to more in-depth peer reviews. We have created prototype publications containing paginated text and several media types encountered in the biomedical literature: 3D animations of anatomic structures; graphs, charts and tabular data; cell development images (video sequences); and clinical images such as CT, MRI and ultrasound in the DICOM format. This paper presents developments to date including: a tool to convert static tables or graphs into interactive entities, authoring procedures followed to create prototypes, and advantages and drawbacks of each of these platforms. It also outlines future work including meeting the challenge of network distribution for these large files.

  12. BOREAS Level-4c AVHRR-LAC Ten-Day Composite Images: Surface Parameters

    NASA Technical Reports Server (NTRS)

    Cihlar, Josef; Chen, Jing; Huang, Fengting; Nickeson, Jaime; Newcomer, Jeffrey A.; Hall, Forrest G. (Editor)

    2000-01-01

    The BOReal Ecosystem-Atmosphere Study (BOREAS) Staff Science Satellite Data Acquisition Program focused on providing the research teams with the remotely sensed satellite data products they needed to compare and spatially extend point results. Manitoba Remote Sensing Center (MRSC) and BOREAS Information System (BORIS) personnel acquired, processed, and archived data from the Advanced Very High Resolution Radiometer (AVHRR) instruments on the NOAA-11 and -14 satellites. The AVHRR data were acquired by CCRS and were provided to BORIS for use by BOREAS researchers. These AVHRR level-4c data are gridded, 10-day composites of surface parameters produced from sets of single-day images. Temporally, the 10-day compositing periods begin 11-Apr-1994 and end 10-Sep-1994. Spatially, the data cover the entire BOREAS region. The data are stored in binary image format files. Note: Some of the data files on the BOREAS CD-ROMs have been compressed using the Gzip program.

  13. Mars Global Digital Dune Database: MC2-MC29

    USGS Publications Warehouse

    Hayward, Rosalyn K.; Mullins, Kevin F.; Fenton, L.K.; Hare, T.M.; Titus, T.N.; Bourke, M.C.; Colaprete, Anthony; Christensen, P.R.

    2007-01-01

    Introduction The Mars Global Digital Dune Database presents data and describes the methodology used in creating the database. The database provides a comprehensive and quantitative view of the geographic distribution of moderate- to large-size dune fields from 65° N to 65° S latitude and encompasses ~ 550 dune fields. The database will be expanded to cover the entire planet in later versions. Although we have attempted to include all dune fields between 65° N and 65° S, some have likely been excluded for two reasons: 1) incomplete THEMIS IR (daytime) coverage may have caused us to exclude some moderate- to large-size dune fields or 2) resolution of THEMIS IR coverage (100 m/pixel) certainly caused us to exclude smaller dune fields. The smallest dune fields in the database are ~ 1 km² in area. While the moderate to large dune fields are likely to constitute the largest compilation of sediment on the planet, smaller stores of dune sediment are likely to be found elsewhere via higher resolution data. Thus, it should be noted that our database excludes all small dune fields and some moderate to large dune fields as well. Therefore the absence of mapped dune fields does not mean that such dune fields do not exist and is not intended to imply a lack of saltating sand in other areas. Where availability and quality of THEMIS visible (VIS) or Mars Orbiter Camera narrow angle (MOC NA) images allowed, we classified dunes and included dune slipface measurements, which were derived from gross dune morphology and represent the prevailing wind direction at the last time of significant dune modification. For dunes located within craters, the azimuth from crater centroid to dune field centroid was calculated. Output from a general circulation model (GCM) is also included. 
In addition to polygons locating dune fields, the database includes over 1800 selected Thermal Emission Imaging System (THEMIS) infrared (IR), THEMIS visible (VIS) and Mars Orbiter Camera Narrow Angle (MOC NA) images that were used to build the database. The database is presented in a variety of formats. It is presented as a series of ArcReader projects which can be opened using the free ArcReader software. The latest version of ArcReader can be downloaded at http://www.esri.com/software/arcgis/arcreader/download.html. The database is also presented in ArcMap projects. The ArcMap projects allow fuller use of the data, but require ESRI ArcMap software. Multiple projects were required to accommodate the large number of images needed. A fuller description of the projects can be found in the Dunes_ReadMe file and the ReadMe_GIS file in the Documentation folder. For users who prefer to create their own projects, the data are available in ESRI shapefile and geodatabase formats, as well as the open Geography Markup Language (GML) format. A printable map of the dunes and craters in the database is available as a Portable Document Format (PDF) document. The map is also included as a JPEG file. ReadMe files are available in PDF and ASCII (.txt) files. Tables are available in both Excel (.xls) and ASCII formats.

  14. The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments.

    PubMed

    Gorgolewski, Krzysztof J; Auer, Tibor; Calhoun, Vince D; Craddock, R Cameron; Das, Samir; Duff, Eugene P; Flandin, Guillaume; Ghosh, Satrajit S; Glatard, Tristan; Halchenko, Yaroslav O; Handwerker, Daniel A; Hanke, Michael; Keator, David; Li, Xiangrui; Michael, Zachary; Maumet, Camille; Nichols, B Nolan; Nichols, Thomas E; Pellman, John; Poline, Jean-Baptiste; Rokem, Ariel; Schaefer, Gunnar; Sochat, Vanessa; Triplett, William; Turner, Jessica A; Varoquaux, Gaël; Poldrack, Russell A

    2016-06-21

    The development of magnetic resonance imaging (MRI) techniques has defined modern neuroimaging. Since its inception, tens of thousands of studies using techniques such as functional MRI and diffusion weighted imaging have allowed for the non-invasive study of the brain. Despite the fact that MRI is routinely used to obtain data for neuroscience research, there has been no widely adopted standard for organizing and describing the data collected in an imaging experiment. This renders sharing and reusing data (within or between labs) difficult if not impossible and unnecessarily complicates the application of automatic pipelines and quality assurance protocols. To solve this problem, we have developed the Brain Imaging Data Structure (BIDS), a standard for organizing and describing MRI datasets. The BIDS standard uses file formats compatible with existing software, unifies the majority of practices already common in the field, and captures the metadata necessary for most common data processing operations.
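    BIDS encodes most metadata directly in folder and file names, e.g. sub-01/anat/sub-01_T1w.nii.gz. A small sketch of composing such paths from entity key-value pairs (simplified; the full entity-ordering rules live in the BIDS specification):

    ```python
    def bids_path(subject, suffix, datatype, session=None,
                  extension=".nii.gz", **entities):
        """Compose a BIDS-style relative path from entity key-value pairs."""
        parts = ["sub-%s" % subject]
        if session:
            parts.append("ses-%s" % session)
        parts += ["%s-%s" % (k, v) for k, v in entities.items()]
        filename = "_".join(parts + [suffix]) + extension
        folders = ["sub-%s" % subject]
        if session:
            folders.append("ses-%s" % session)
        folders.append(datatype)
        return "/".join(folders + [filename])

    anat = bids_path("01", "T1w", "anat")
    func = bids_path("01", "bold", "func", session="02", task="rest")
    ```

    Because the entities repeat in both the folder hierarchy and the file name, any BIDS file remains self-describing even when moved out of its dataset.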

  15. The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments

    PubMed Central

    Gorgolewski, Krzysztof J.; Auer, Tibor; Calhoun, Vince D.; Craddock, R. Cameron; Das, Samir; Duff, Eugene P.; Flandin, Guillaume; Ghosh, Satrajit S.; Glatard, Tristan; Halchenko, Yaroslav O.; Handwerker, Daniel A.; Hanke, Michael; Keator, David; Li, Xiangrui; Michael, Zachary; Maumet, Camille; Nichols, B. Nolan; Nichols, Thomas E.; Pellman, John; Poline, Jean-Baptiste; Rokem, Ariel; Schaefer, Gunnar; Sochat, Vanessa; Triplett, William; Turner, Jessica A.; Varoquaux, Gaël; Poldrack, Russell A.

    2016-01-01

    The development of magnetic resonance imaging (MRI) techniques has defined modern neuroimaging. Since its inception, tens of thousands of studies using techniques such as functional MRI and diffusion weighted imaging have allowed for the non-invasive study of the brain. Despite the fact that MRI is routinely used to obtain data for neuroscience research, there has been no widely adopted standard for organizing and describing the data collected in an imaging experiment. This renders sharing and reusing data (within or between labs) difficult if not impossible and unnecessarily complicates the application of automatic pipelines and quality assurance protocols. To solve this problem, we have developed the Brain Imaging Data Structure (BIDS), a standard for organizing and describing MRI datasets. The BIDS standard uses file formats compatible with existing software, unifies the majority of practices already common in the field, and captures the metadata necessary for most common data processing operations. PMID:27326542

  16. A novel fuzzy logic-based image steganography method to ensure medical data security.

    PubMed

    Karakış, R; Güler, I; Çapraz, I; Bilir, E

    2015-12-01

    This study aims to secure medical data by combining them into one file format using steganographic methods. The electroencephalogram (EEG) is selected as the hidden data, and magnetic resonance (MR) images are used as the cover image. In addition to the EEG, the message is composed of the doctor's comments and patient information in the file header of the images. Two new image steganography methods that are based on fuzzy logic and similarity are proposed to select the non-sequential least significant bits (LSB) of image pixels. The similarity values of the gray levels in the pixels are used to hide the message. The message is secured against attacks by using lossless compression and symmetric encryption algorithms. The quality of the stego image is measured by mean square error (MSE), peak signal-to-noise ratio (PSNR), structural similarity measure (SSIM), universal quality index (UQI), and correlation coefficient (R). According to the obtained results, the proposed method ensures the confidentiality of the patient information, and increases the data storage and transmission capacity of both the MR images and the EEG signals.
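    The LSB mechanism underlying the method can be shown with a plain sequential-LSB sketch in numpy; note that the paper's contribution is selecting *non-sequential* pixels via fuzzy logic and similarity, which this simplified version deliberately omits:

    ```python
    import numpy as np

    def embed_lsb(cover, bits):
        """Hide a bit sequence in the least significant bits of a uint8 image."""
        flat = cover.flatten()  # flatten() copies, so the cover is untouched
        assert len(bits) <= flat.size, "message too long for cover"
        flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.asarray(bits, dtype=np.uint8)
        return flat.reshape(cover.shape)

    def extract_lsb(stego, n_bits):
        """Read back the first n_bits least significant bits."""
        return [int(b) for b in (stego.flatten()[:n_bits] & 1)]
    ```

    Each modified pixel changes by at most one gray level, which is why LSB embedding is visually imperceptible and why metrics such as PSNR and SSIM stay high.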

  17. View_SPECPR: Software for Plotting Spectra (Installation Manual and User's Guide, Version 1.2)

    USGS Publications Warehouse

    Kokaly, Raymond F.

    2008-01-01

    This document describes procedures for installing and using the 'View_SPECPR' software system to plot spectra stored in SPECPR (SPECtrum Processing Routines) files. The View_SPECPR software comprises programs written in IDL (Interactive Data Language) that run within the ENVI (ENvironment for Visualizing Images) image processing system. SPECPR files are used by earth-remote-sensing scientists and planetary scientists for storing spectra collected by laboratory, field, and remote sensing instruments. A widely distributed SPECPR file is the U.S. Geological Survey (USGS) spectral library that contains thousands of spectra of minerals, vegetation, and man-made materials (Clark and others, 2007). SPECPR files contain reflectance data and associated wavelength and spectral resolution data, as well as metadata on the time and date of collection and spectrometer settings. Furthermore, the SPECPR file automatically tracks changes to data records through its 'history' fields. For more details on the format and content of SPECPR files, see Clark (1993). For more details on ENVI, see ITT (2008). This program has been updated using an ENVI 4.5/IDL 7.0 full license operating on a Windows XP operating system and requires the installation of the iTools components of IDL 7.0; however, this program should work with full licenses on UNIX/LINUX systems. This software has not been tested with ENVI licenses on Windows Vista or Apple operating systems.

  18. The Use of an On-Board MV Imager for Plan Verification of Intensity Modulated Radiation Therapy and Volumetrically Modulated Arc Therapy

    NASA Astrophysics Data System (ADS)

    Walker, Justin A.

    The introduction of complex treatment modalities such as IMRT and VMAT has led to the development of many devices for plan verification. One such innovation in this field is the repurposing of the portal imager, using it not only for tumor localization but for recording dose distributions as well. Several advantages make portal imagers attractive for this purpose. Very high spatial resolution allows for better verification of small-field plans than may be possible with commercially available devices. Because the portal imager is attached to the gantry, setup is simpler than with any other available method, requiring no additional accessories, and often can be accomplished from outside the treatment room. Dose images captured by the portal imager are in digital format and make permanent records that can be analyzed immediately. Portal imaging suffers from a few limitations, however, that must be overcome. Captured images contain dose information, and a calibration must be maintained for image-to-dose conversion. Dose images can only be taken perpendicular to the treatment beam, allowing only for planar dose comparison. Planar dose files are themselves difficult to obtain for VMAT treatments, and an in-house script had to be developed to create such a file before analysis could be performed. Using the methods described in this study, excellent agreement between generated planar dose files and captured dose images was found. The average agreement for the IMRT fields analyzed was greater than 97% for non-normalized images at 3 mm and 3%. Comparable agreement was found for VMAT plans as well, with the average agreement being greater than 98%.
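    The "3 mm and 3%" figures refer to gamma analysis, the standard planar-dose comparison metric. The following numpy sketch implements a brute-force global gamma index on small toy dose planes; the grid spacing, criteria defaults, and function name are illustrative, not the study's in-house script.

```python
import numpy as np

def gamma_pass_rate(ref, meas, spacing_mm=1.0, dd_pct=3.0, dta_mm=3.0):
    """Brute-force global gamma: fraction of reference points with gamma <= 1."""
    ys, xs = np.indices(ref.shape)
    pts = np.stack([ys.ravel(), xs.ravel()], axis=1) * spacing_mm
    dmax = ref.max()
    passed = 0
    for p, d_ref in zip(pts, ref.ravel()):
        dist2 = ((pts - p) ** 2).sum(axis=1) / dta_mm**2          # (distance/DTA)^2
        dose2 = ((meas.ravel() - d_ref) / (dd_pct / 100.0 * dmax)) ** 2
        gamma = np.sqrt(dist2 + dose2).min()                       # best match anywhere
        passed += gamma <= 1.0
    return passed / ref.size

ref = np.outer(np.hanning(20), np.hanning(20))   # toy planar dose
meas = ref * 1.01                                # 1% global scaling error
rate = gamma_pass_rate(ref, meas)
assert rate == 1.0   # a 1% error is well inside 3%/3 mm everywhere
```

    Production tools use interpolated search and restricted neighborhoods; the O(n^2) loop here is only for clarity on small grids.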

  19. Heads Up

    MedlinePlus


  20. BOREAS Regional DEM in Raster Format and AEAC Projection

    NASA Technical Reports Server (NTRS)

    Knapp, David; Verdin, Kristine; Hall, Forrest G. (Editor)

    2000-01-01

    This data set is based on the GTOPO30 Digital Elevation Model (DEM) produced by the United States Geological Survey EROS Data Center (USGS EDC). The BOReal Ecosystem-Atmosphere Study (BOREAS) region (1,000 km x 1,000 km) was extracted from the GTOPO30 data and reprojected by BOREAS staff into the Albers Equal-Area Conic (AEAC) projection. The pixel size of these data is 1 km. The data are stored in binary image format files.
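    Flat binary image files like these carry no header, so a reader must know the grid dimensions and sample type in advance. A numpy sketch of the round trip, using made-up dimensions and a big-endian 16-bit type as stand-ins rather than the actual BOREAS file layout:

```python
import numpy as np, tempfile, os

rows, cols = 100, 120              # grid size must be known a priori
dem = np.random.default_rng(1).integers(0, 2000, (rows, cols)).astype(">i2")

# Write a headerless raster, then read it back by shape + dtype alone
with tempfile.NamedTemporaryFile(suffix=".img", delete=False) as f:
    path = f.name
    dem.tofile(f)

restored = np.fromfile(path, dtype=">i2").reshape(rows, cols)
os.remove(path)
assert np.array_equal(restored, dem)
```

    Getting the byte order or dimensions wrong silently produces a scrambled image, which is why such data sets ship with companion documentation.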

  1. Informatics in radiology (infoRAD): HTML and Web site design for the radiologist: a primer.

    PubMed

    Ryan, Anthony G; Louis, Luck J; Yee, William C

    2005-01-01

    A Web site has enormous potential as a medium for the radiologist to store, present, and share information in the form of text, images, and video clips. With a modest amount of tutoring and effort, designing a site can be as painless as preparing a Microsoft PowerPoint presentation. The site can then be used as a hub for the development of further offshoots (eg, Web-based tutorials, storage for a teaching library, publication of information about one's practice, and information gathering from a wide variety of sources). By learning the basics of hypertext markup language (HTML), the reader will be able to produce a simple and effective Web page that permits display of text, images, and multimedia files. The process of constructing a Web page can be divided into five steps: (a) creating a basic template with formatted text, (b) adding color, (c) importing images and multimedia files, (d) creating hyperlinks, and (e) uploading one's page to the Internet. This Web page may be used as the basis for a Web-based tutorial comprising text documents and image files already in one's possession. Finally, there are many commercially available packages for Web page design that require no knowledge of HTML.

  2. Digital data to support development of a pesticide management plan for the Standing Rock Indian Reservation, Sioux County, North Dakota, and Corson County, South Dakota

    USGS Publications Warehouse

    Schaap, Bryan D.

    2004-01-01

    As part of a program to support development of pesticide management plans for Indian Reservations, the U.S. Geological Survey has been working in cooperation with the U.S. Environmental Protection Agency to make selected information available to the Tribes in a format easier for the Tribes to use. As a result of this program, four digital data sets related to the geology or hydrology of the Standing Rock Indian Reservation were produced as part of this report. The digital data sets are based on maps published in 1982 at the 1:250,000 scale in "Geohydrology of the Standing Rock Indian Reservation, North and South Dakota," U.S. Geological Survey Hydrologic Investigations Atlas HA-644 by L.W. Howells. The digital data sets were created by 1) scanning the appropriate map to create an image file, 2) registering the image file to real-world coordinates, 3) creating a new image file rectified to real-world coordinates, and 4) digitizing the features of interest using the rectified image as a guide. As digital data sets, the information can be used in a geographic information system in combination with other information to help develop a pesticide management plan.
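    Step 2, registering the scanned image to real-world coordinates, amounts to fitting an affine transform from pixel to map coordinates using ground-control points. A least-squares sketch with invented control points (the coordinates and grid size are illustrative, not values from the report):

```python
import numpy as np

# Ground-control points: (col, row) in the scan vs. (x, y) in map units
pixel = np.array([[0, 0], [4000, 0], [0, 3000], [4000, 3000]], float)
world = np.array([[500000, 5200000], [600000, 5200000],
                  [500000, 5125000], [600000, 5125000]], float)

# Solve world = A @ [col, row, 1] for each axis by least squares
design = np.hstack([pixel, np.ones((len(pixel), 1))])
coeffs, *_ = np.linalg.lstsq(design, world, rcond=None)

def pixel_to_world(col, row):
    return np.array([col, row, 1.0]) @ coeffs

x, y = pixel_to_world(2000, 1500)   # scan centre -> map centre
assert abs(x - 550000) < 1e-6 and abs(y - 5162500) < 1e-6
```

    With more than three control points the least-squares fit also gives residuals, a useful check on how well the scan was registered.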

  3. Towards an easier creation of three-dimensional data for embedding into scholarly 3D PDF (Portable Document Format) files

    PubMed Central

    2015-01-01

    The Portable Document Format (PDF) allows for embedding three-dimensional (3D) models and is therefore particularly suitable for communicating such data, especially as regards scholarly articles. The generation of the necessary model data, however, is still challenging, especially for inexperienced users. This prevents an unrestrained proliferation of 3D PDF usage in scholarly communication. This article introduces a new solution for the creation of three types of 3D geometry (point clouds, polylines and triangle meshes) that is based on MeVisLab, a framework for biomedical image processing. This solution enables even novice users to generate the model data files without requiring programming skills and without the need for intensive training, by simply using it as a conversion tool. Advanced users can benefit from the full capability of MeVisLab to generate and export the model data as part of an overall processing chain. Although MeVisLab is primarily designed for handling biomedical image data, the new module is not restricted to this domain. It can be used for all scientific disciplines. PMID:25780759

  4. Towards an easier creation of three-dimensional data for embedding into scholarly 3D PDF (Portable Document Format) files.

    PubMed

    Newe, Axel

    2015-01-01

    The Portable Document Format (PDF) allows for embedding three-dimensional (3D) models and is therefore particularly suitable for communicating such data, especially as regards scholarly articles. The generation of the necessary model data, however, is still challenging, especially for inexperienced users. This prevents an unrestrained proliferation of 3D PDF usage in scholarly communication. This article introduces a new solution for the creation of three types of 3D geometry (point clouds, polylines and triangle meshes) that is based on MeVisLab, a framework for biomedical image processing. This solution enables even novice users to generate the model data files without requiring programming skills and without the need for intensive training, by simply using it as a conversion tool. Advanced users can benefit from the full capability of MeVisLab to generate and export the model data as part of an overall processing chain. Although MeVisLab is primarily designed for handling biomedical image data, the new module is not restricted to this domain. It can be used for all scientific disciplines.

  5. TiConverter: A training image converting tool for multiple-point geostatistics

    NASA Astrophysics Data System (ADS)

    Fadlelmula F., Mohamed M.; Killough, John; Fraim, Michael

    2016-11-01

    TiConverter is a tool developed to ease the application of multiple-point geostatistics, whether with the open-source Stanford Geostatistical Modeling Software (SGeMS) or other available commercial software. TiConverter has a user-friendly interface and allows the conversion of 2D training images into numerical representations in four different file formats without the need for additional code writing. These are the ASCII (.txt), the geostatistical software library (GSLIB) (.txt), the Isatis (.dat), and the VTK formats. It performs the conversion based on the RGB color system. In addition, TiConverter offers several useful tools, including image resizing, smoothing, and segmenting tools. The purpose of this study is to introduce TiConverter and to demonstrate its application and advantages with several examples from the literature.
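    The RGB-based conversion can be sketched as two steps: map each pixel to the nearest reference color (a facies code), then write the grid in GSLIB/GeoEAS text layout (title line, variable count, variable names, one value per line). The two-color palette and title string below are illustrative assumptions, not TiConverter's internals.

```python
import numpy as np

def rgb_to_facies(img: np.ndarray) -> np.ndarray:
    """Map each RGB pixel to the index of its nearest reference color."""
    palette = np.array([[0, 0, 0], [255, 255, 255]])   # two facies: black/white
    d = ((img[..., None, :].astype(int) - palette) ** 2).sum(axis=-1)
    return d.argmin(axis=-1)

def to_gslib(grid: np.ndarray, name="facies") -> str:
    """GSLIB/GeoEAS text: title, variable count, variable name, one value per line."""
    lines = [f"TI ({grid.shape[1]}x{grid.shape[0]})", "1", name]
    lines += [str(v) for v in grid.ravel()]
    return "\n".join(lines)

ti = np.zeros((2, 3, 3), dtype=np.uint8)
ti[0, 1] = [250, 250, 250]                 # one near-white pixel
txt = to_gslib(rgb_to_facies(ti))
assert txt.splitlines()[:3] == ["TI (3x2)", "1", "facies"]
assert txt.splitlines()[3:] == ["0", "1", "0", "0", "0", "0"]
```

    Nearest-color matching is what makes the conversion robust to scanning noise in the training image.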

  6. BOREAS RSS-17 1994 ERS-1 Level-3 Freeze/Thaw Backscatter Change Images

    NASA Technical Reports Server (NTRS)

    Rignot, Eric; Nickeson, Jaime (Editor); Hall, Forrest G. (Editor); Way, JoBea; McDonald, Kyle C.; Smith, David E. (Technical Monitor)

    2000-01-01

    The Boreal Ecosystem-Atmosphere Study (BOREAS) Remote Sensing Science (RSS)-17 team acquired and analyzed imaging radar data from the European Space Agency's (ESA's) European Remote Sensing Satellite (ERS)-1 over a complete annual cycle at the BOREAS sites in Canada in 1994 to detect shifts in radar backscatter related to varying environmental conditions. Two independent transitions corresponding to soil thaw and possible canopy thaw were revealed by the data. The results demonstrated that radar provides an ability to observe thaw transitions at the beginning of the growing season, which in turn helps constrain the length of the growing season. The data set presented here includes change maps derived from radar backscatter images that were mosaicked together to cover the southern BOREAS sites. The image values used for calculating the changes are given relative to the reference mosaic image. The data are stored in binary image format files. The imaging radar data are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). The data files are available on a CD-ROM (see document number 20010000884).

  7. IIPImage: Large-image visualization

    NASA Astrophysics Data System (ADS)

    Pillay, Ruven

    2014-08-01

    IIPImage is an advanced, high-performance, feature-rich image server system that enables online access to full-resolution floating point (as well as other bit depth) images at terabyte scales. Paired with the VisiOmatic (ascl:1408.010) celestial image viewer, the system can comfortably handle gigapixel-size images as well as advanced image features such as 8-, 16-, and 32-bit depths, CIELAB colorimetric images, and scientific imagery such as multispectral images. Streaming is tile-based, which enables viewing, navigating and zooming in real time around gigapixel-size images. Source images can be in either TIFF or JPEG2000 format. Whole images or regions within images can also be rapidly and dynamically resized and exported by the server from a single source image without the need to store multiple files in various sizes.
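    Tile-based streaming boils down to serving fixed-size crops from each level of a resolution pyramid, so the client only ever downloads the tiles in view. A numpy sketch of cutting one tile from one level; the 256-pixel tile size and function signature are illustrative, not IIPImage's protocol:

```python
import numpy as np

TILE = 256

def get_tile(level_img: np.ndarray, tx: int, ty: int) -> np.ndarray:
    """Return tile (tx, ty) of a resolution level; edge tiles may be smaller."""
    return level_img[ty * TILE:(ty + 1) * TILE, tx * TILE:(tx + 1) * TILE]

img = np.arange(600 * 520, dtype=np.float32).reshape(600, 520)  # floating-point level
assert get_tile(img, 0, 0).shape == (256, 256)
assert get_tile(img, 2, 2).shape == (88, 8)    # ragged edge tile
```

    Because tiles are addressed by (level, tx, ty), a viewer can fetch exactly the data needed for the current zoom and pan, which is what makes gigapixel navigation feel instantaneous.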

  8. Archive of Digitized Analog Boomer and Minisparker Seismic Reflection Data Collected from the Alabama-Mississippi-Louisiana Shelf During Cruises Onboard the R/V Carancahua and R/V Gyre, April and July, 1981

    USGS Publications Warehouse

    Sanford, Jordan M.; Harrison, Arnell S.; Wiese, Dana S.; Flocks, James G.

    2009-01-01

    In April and July of 1981, the U.S. Geological Survey (USGS) conducted geophysical surveys to investigate the shallow geologic framework of the Alabama-Mississippi-Louisiana Shelf in the northern Gulf of Mexico. Work was conducted onboard the Texas A&M University R/V Carancahua and the R/V Gyre to develop a geologic understanding of the study area and to locate potential hazards related to offshore oil and gas production. While the R/V Carancahua only collected boomer data, the R/V Gyre used a 400-Joule minisparker, 3.5-kilohertz (kHz) subbottom profiler, 12-kHz precision depth recorder, and two air guns. The authors selected the minisparker data set because, unlike with the boomer data, it provided the most complete record. This report is part of a series to digitally archive the legacy analog data collected from the Mississippi-Alabama SHelf (MASH). The MASH data rescue project is a cooperative effort by the USGS and the Minerals Management Service (MMS). This report serves as an archive of high-resolution scanned Tagged Image File Format (TIFF) and Graphics Interchange Format (GIF) images of the original boomer and minisparker paper records, navigation files, trackline maps, Geographic Information System (GIS) files, cruise logs, and formal Federal Geographic Data Committee (FGDC) metadata.

  9. Distributed file management for remote clinical image-viewing stations

    NASA Astrophysics Data System (ADS)

    Ligier, Yves; Ratib, Osman M.; Girard, Christian; Logean, Marianne; Trayser, Gerhard

    1996-05-01

    The Geneva PACS is based on a distributed architecture, with different archive servers used to store all the image files produced by digital imaging modalities. Images can then be visualized on different display stations with the Osiris software. Image visualization requires the image file to be physically present on the local station. Thus, images must be transferred from archive servers to local display stations in an acceptable way, which means fast and user-friendly, with the notion of a file hidden from users. The transfer of image files is done according to different schemes, including prefetching and direct image selection. Prefetching allows the retrieval of previous studies of a patient in advance. Direct image selection is also provided in order to retrieve images on request. When images are transferred locally to the display station, they are stored in Papyrus files, each file containing a set of images. File names are used by the Osiris viewing software to open image sequences, but file names alone are not explicit enough to properly describe the content of the file. A specific utility has been developed to present a list of patients and, for each patient, a list of exams that can be selected and automatically displayed. The system has been successfully tested in different clinical environments. It will soon be extended hospital-wide.

  10. Web-based document image processing

    NASA Astrophysics Data System (ADS)

    Walker, Frank L.; Thoma, George R.

    1999-12-01

    Increasing numbers of research libraries are turning to the Internet for electronic interlibrary loan and for document delivery to patrons. This has been made possible through the widespread adoption of software such as Ariel and DocView. Ariel, a product of the Research Libraries Group, converts paper-based documents to monochrome bitmapped images and delivers them over the Internet. The National Library of Medicine's DocView is primarily designed for library patrons. Although libraries and their patrons are beginning to reap the benefits of this new technology, barriers exist, e.g., differences in image file format, that lead to difficulties in the use of library document information. To research how to overcome such barriers, the Communications Engineering Branch of the Lister Hill National Center for Biomedical Communications, an R and D division of NLM, has developed a web site called the DocMorph Server. This is part of an ongoing intramural R and D program in document imaging that has spanned many aspects of electronic document conversion and preservation, Internet document transmission, and document usage. The DocMorph Server web site is designed to fill two roles. First, in a role that will benefit both libraries and their patrons, it allows Internet users to upload scanned image files for conversion to alternative formats, thereby enabling wider delivery and easier usage of library document information. Second, the DocMorph Server provides the design team an active test bed for evaluating the effectiveness and utility of new document image processing algorithms and functions, so that they may be evaluated for possible inclusion in other image processing software products being developed at NLM or elsewhere. This paper describes the design of the prototype DocMorph Server and the image processing functions being implemented on it.

  11. VR Lab ISS Graphics Models Data Package

    NASA Technical Reports Server (NTRS)

    Paddock, Eddie; Homan, Dave; Bell, Brad; Miralles, Evely; Hoblit, Jeff

    2016-01-01

    All the ISS models are saved in the AC3D model format, a text-based format that can be loaded into Blender and exported from there to other formats, including FBX. The models are saved at two different levels of detail, one labeled "LOWRES" and the other labeled "HIRES". Two ".str" files (HIRES_scene_load.str and LOWRES_scene_load.str) give the hierarchical relationship of the different nodes and the models associated with each node for both the "HIRES" and "LOWRES" model sets. All the images used for texturing are stored in Windows ".bmp" format for easy importing.

  12. Interoperability format translation and transformation between IFC architectural design file and simulation file formats

    DOEpatents

    Chao, Tian-Jy; Kim, Younghun

    2015-02-03

    Automatically translating a building architecture file format (Industry Foundation Class) to a simulation file, in one aspect, may extract data and metadata used by a target simulation tool from a building architecture file. Interoperability data objects may be created and the extracted data is stored in the interoperability data objects. A model translation procedure may be prepared to identify a mapping from a Model View Definition to a translation and transformation function. The extracted data may be transformed using the data stored in the interoperability data objects, an input Model View Definition template, and the translation and transformation function to convert the extracted data to correct geometric values needed for a target simulation file format used by the target simulation tool. The simulation file in the target simulation file format may be generated.
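    The mapping from a Model View Definition to a translation and transformation function can be pictured as a dispatch table keyed on entity type. The entity names, fields, and unit conversion below are invented for illustration; this is not the patented procedure.

```python
# Hypothetical dispatch: MVD concept -> function converting extracted IFC data
# into the geometry the target simulation format expects.
def wall_to_sim(d):
    # e.g. convert millimetres (common in CAD exports) to metres for the simulator
    return {"type": "wall", "height_m": d["height_mm"] / 1000.0}

def window_to_sim(d):
    return {"type": "window", "area_m2": d["w_mm"] * d["h_mm"] / 1e6}

MVD_MAP = {"IfcWall": wall_to_sim, "IfcWindow": window_to_sim}

def translate(records):
    """Apply the mapped transformation to every record the MVD covers."""
    return [MVD_MAP[r["entity"]](r["data"]) for r in records if r["entity"] in MVD_MAP]

out = translate([{"entity": "IfcWall", "data": {"height_mm": 2700}},
                 {"entity": "IfcWindow", "data": {"w_mm": 1200, "h_mm": 900}}])
assert out[0]["height_m"] == 2.7
assert out[1]["area_m2"] == 1.08
```

    Keeping the mapping as data rather than code is what lets a new target format be supported by supplying a new template and function table.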

  13. BOREAS RSS-7 Regional LAI and FPAR Images From 10-Day AVHRR-LAC Composites

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Chen, Jing; Cihlar, Josef

    2000-01-01

    The BOReal Ecosystem-Atmosphere Study Remote Sensing Science (BOREAS RSS-7) team collected various data sets to develop and validate an algorithm to allow the retrieval of the spatial distribution of Leaf Area Index (LAI) from remotely sensed images. Advanced Very High Resolution Radiometer (AVHRR) level-4c 10-day composite Normalized Difference Vegetation Index (NDVI) images produced at CCRS were used to produce images of LAI and the Fraction of Photosynthetically Active Radiation (FPAR) absorbed by plant canopies for the three summer IFCs in 1994 across the BOREAS region. The algorithms were developed based on ground measurements and Landsat Thematic Mapper (TM) images. The data are stored in binary image format files.

  14. TU-CD-304-11: Veritas 2.0: A Cloud-Based Tool to Facilitate Research and Innovation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, P; Patankar, A; Etmektzoglou, A

    Purpose: We introduce Veritas 2.0, a cloud-based, non-clinical research portal, to facilitate translation of radiotherapy research ideas into new delivery techniques. The ecosystem of research tools includes web apps for a research beam builder for TrueBeam Developer Mode, an image reader for compressed and uncompressed XIM files, and a trajectory-log-file-based QA/beam delivery analyzer. Methods: The research beam builder can generate TrueBeam-readable XML files either from scratch or from pre-existing DICOM-RT plans. A DICOM-RT plan is first converted to XML format, and the researcher can then interactively modify or add control points. The delivered beam can be verified by reading the generated images and analyzing trajectory log files. The image reader can read both uncompressed and HND-compressed XIM images. The trajectory log analyzer lets researchers plot expected vs. actual values and deviations among 30 mechanical axes. The analyzer gives an animated view of MLC patterns for the beam delivery. Veritas 2.0 is freely available, and its advantages over standalone software are i) no software installation or maintenance needed, ii) easy accessibility across all devices, iii) seamless upgrades, and iv) OS independence. Veritas is written using open-source tools like Twitter Bootstrap, jQuery, Flask, and Python-based modules. Results: In the first experiment, an anonymized 7-beam DICOM-RT IMRT plan was converted to an XML beam containing 1400 control points. kV and MV imaging points were inserted into this XML beam. In another experiment, a binary log file was analyzed to compare actual vs. expected values and deviations among axes. Conclusions: Veritas 2.0 is a public cloud-based web app that hosts a pool of research tools for facilitating research from conceptualization to verification. It is aimed at providing a platform for facilitating research and collaboration. Disclosure: I am a full-time employee of Varian Medical Systems, Palo Alto.

  15. [Continuous observation of canal aberrations in S-shaped simulated root canal prepared by hand-used ProTaper files].

    PubMed

    Xia, Ling-yun; Leng, Wei-dong; Mao, Min; Yang, Guo-biao; Xiang, Yong-gang; Chen, Xin-mei

    2009-08-01

    To observe the formation of canal aberrations in S-shaped root canals prepared by each file of hand-used ProTaper. Fifteen S-shaped simulated resin root canals were selected. Each root canal was prepared by every file of hand-used ProTaper following the manufacturer's instructions. The images of canals prepared by S1, S2, F1, F2 and F3 were taken and stored, and divided into groups S1, S2, F1, F2 and F3. One image of the unprepared canal was superposed with the images of the same root canal in these five groups respectively to observe the types and number of canal aberrations, which included unprepared area, danger zone, ledge, elbow, zip and perforation. The SPSS 12.0 software package was used for Fisher's exact probabilities in a 2x2 table. Unprepared area decreased following preparation by every file of ProTaper, but it still existed when the canal preparation was finished. The incidence of danger zone, elbow and zip in group F1 was 15/15, 11/15, and 4/15, respectively, which was significantly higher than that in group S2 (2/15, 0, 0) (P<0.001). Ledge appeared after preparation by F2, and increased sharply in group F3. No perforation was found in any group. The incidence of canal aberrations begins to increase after preparation by the finishing files of ProTaper. The presence of unprepared area suggests that it is essential to rinse the canal abundantly during complicated canal preparation and to use canal antisepsis after preparation.

  16. Analyzing microtomography data with Python and the scikit-image library.

    PubMed

    Gouillart, Emmanuelle; Nunez-Iglesias, Juan; van der Walt, Stéfan

    2017-01-01

    The exploration and processing of images is a vital aspect of the scientific workflows of many X-ray imaging modalities. Users require tools that combine interactivity, versatility, and performance. scikit-image is an open-source image processing toolkit for the Python language that supports a large variety of file formats and is compatible with 2D and 3D images. The toolkit exposes a simple programming interface, with thematic modules grouping functions according to their purpose, such as image restoration, segmentation, and measurements. scikit-image users benefit from a rich scientific Python ecosystem that contains many powerful libraries for tasks such as visualization or machine learning. scikit-image combines a gentle learning curve, versatile image processing capabilities, and the scalable performance required for the high-throughput analysis of X-ray imaging data.
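    The thematic-module interface described above can be shown with scikit-image's real API (`skimage.filters`, `skimage.measure`); the bundled `camera()` test image stands in for a tomographic slice.

```python
# Otsu thresholding and connected-component labeling with scikit-image
# (pip install scikit-image); camera() is a bundled 2D uint8 test image.
from skimage import data, filters, measure

img = data.camera()
t = filters.threshold_otsu(img)       # scalar threshold from the gray-level histogram
labels = measure.label(img > t)       # integer label per connected component
assert 0 < t < 255
assert labels.max() >= 1
```

    The same two calls apply unchanged to 3D volumes, which is the point made in the abstract about 2D/3D compatibility.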

  17. Software on diffractive optics and computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Doskolovich, Leonid L.; Golub, Michael A.; Kazanskiy, Nikolay L.; Khramov, Alexander G.; Pavelyev, Vladimir S.; Seraphimovich, P. G.; Soifer, Victor A.; Volotovskiy, S. G.

    1995-01-01

    The `Quick-DOE' software for an IBM PC-compatible computer is aimed at calculating the masks of diffractive optical elements (DOEs) and computer generated holograms, computer simulation of DOEs, and for executing a number of auxiliary functions. In particular, among the auxiliary functions are the file format conversions, mask visualization on display from a file, implementation of fast Fourier transforms, and arranging and preparation of composite images for the output on a photoplotter. The software is aimed for use by opticians, DOE designers, and the programmers dealing with the development of the program for DOE computation.

  18. VizieR Online Data Catalog: High spatial resolution observations of HM Sge (Sacuto+, 2009)

    NASA Astrophysics Data System (ADS)

    Sacuto, S.; Chesneau, O.

    2008-11-01

    All the data products are stored in the FITS-based, optical interferometry data exchange format (OI-FITS), described in Pauls et al. (2005PASP..117.1255P). The OI Exchange Format is a standard for exchanging calibrated data from optical (visible/infrared) stellar interferometers. The standard is based on the Flexible Image Transport System (FITS), and supports storage of the optical interferometric observations including visibilities and differential phases. Several routines to read and write this format in various languages can be found in: Webpage http://www.mrao.cam.ac.uk/~jsy1001/exchange (2 data files).
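    FITS, and therefore OI-FITS, stores headers as 80-character "cards" packed into 2880-byte blocks. Real work should use a library such as astropy.io.fits, but the card layout can be illustrated with a stdlib-only sketch that parses a synthetic primary header:

```python
def parse_cards(header_bytes: bytes) -> dict:
    """Split a FITS header block into 80-char cards and read KEY = VALUE pairs."""
    cards = [header_bytes[i:i + 80].decode("ascii")
             for i in range(0, len(header_bytes), 80)]
    out = {}
    for c in cards:
        key = c[:8].strip()
        if key == "END":
            break
        if c[8:10] == "= ":
            out[key] = c[10:].split("/")[0].strip()   # drop any trailing comment
    return out

# Tiny synthetic header: cards padded to 80 chars, block padded to 2880 bytes
cards = ["SIMPLE  =                    T",
         "BITPIX  =                  -32",
         "NAXIS   =                    2",
         "END"]
block = "".join(c.ljust(80) for c in cards).ljust(2880).encode("ascii")
hdr = parse_cards(block)
assert hdr["BITPIX"] == "-32" and hdr["NAXIS"] == "2"
```

    OI-FITS then layers binary tables (OI_VIS, OI_T3, etc.) on top of this same container, which is why generic FITS readers in any language can open the files.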

  19. VizieR Online Data Catalog: High spatial resolution observations of HM Sge (Sacuto+, 2007)

    NASA Astrophysics Data System (ADS)

    Sacuto, S.; Chesneau, O.; Vannier, M.; Cruzalebes, P.

    2007-01-01

    All the data products are stored in the FITS-based, optical interferometry data exchange format (OI-FITS), described in Pauls et al. (2005PASP..117.1255P). The OI Exchange Format is a standard for exchanging calibrated data from optical (visible/infrared) stellar interferometers. The standard is based on the Flexible Image Transport System (FITS), and supports storage of the optical interferometric observations including visibilities and differential phases. Several routines to read and write this format in various languages can be found in: Webpage http://www.mrao.cam.ac.uk/~jsy1001/exchange (1 data file).

  20. Autoplot: a Browser for Science Data on the Web

    NASA Astrophysics Data System (ADS)

    Faden, J.; Weigel, R. S.; West, E. E.; Merka, J.

    2008-12-01

    Autoplot (www.autoplot.org) is software for plotting data from many different sources and in many different file formats. Data from CDF, CEF, FITS, NetCDF, and OpenDAP can be plotted, along with many other sources such as ASCII tables and Excel spreadsheets. This is done by adapting these various data formats and APIs into a common data model that borrows from the netCDF and CDF data models. Autoplot uses a web browser metaphor to simplify use. The user specifies a parameter URL, for example a CDF file accessible via http with a parameter name appended, and the file resource is downloaded and the parameter is rendered in a scientifically meaningful way. When data span multiple files, the user can use a file name template in the URL to aggregate (combine) a set of remote files. So the problem of aggregating data across file boundaries is handled on the client side, allowing simple web servers to be used. The das2 graphics library provides rich controls for exploring the data. Scripting is supported through Python, providing not just programmatic control but also the ability to calculate new parameters in a language that will look familiar to IDL and Matlab users. Autoplot is Java-based software and will run on most computers without a burdensome installation process. It can also be used as an applet or as a servlet that serves static images. Autoplot was developed as part of the Virtual Radiation Belt Observatory (ViRBO) project, and is also being used for the Virtual Magnetospheric Observatory (VMO). It is expected that this flexible, general-purpose plotting tool will be useful for allowing a data provider to add instant visualization capabilities to a directory of files or for general use in the Virtual Observatory environment.
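    File-name-template aggregation amounts to expanding date fields in a template into one name per time step. This stdlib sketch uses Python strftime fields; Autoplot's own template grammar differs, and the mission file name pattern below is invented for illustration.

```python
from datetime import date, timedelta

def aggregate(template: str, start: date, end: date):
    """Expand one file name per day by substituting date fields into the template."""
    names, d = [], start
    while d <= end:
        names.append(d.strftime(template))
        d += timedelta(days=1)
    return names

files = aggregate("ge_k0_mgf_%Y%m%d_v01.cdf", date(2008, 1, 30), date(2008, 2, 1))
assert files == ["ge_k0_mgf_20080130_v01.cdf",
                 "ge_k0_mgf_20080131_v01.cdf",
                 "ge_k0_mgf_20080201_v01.cdf"]
```

    Doing this expansion on the client is what lets a plain directory of daily files on a simple web server behave like a continuous time series.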

  1. Interoperability format translation and transformation between IFC architectural design file and simulation file formats

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chao, Tian-Jy; Kim, Younghun

    Automatically translating a building architecture file format (Industry Foundation Class) to a simulation file, in one aspect, may extract data and metadata used by a target simulation tool from a building architecture file. Interoperability data objects may be created and the extracted data is stored in the interoperability data objects. A model translation procedure may be prepared to identify a mapping from a Model View Definition to a translation and transformation function. The extracted data may be transformed using the data stored in the interoperability data objects, an input Model View Definition template, and the translation and transformation function to convert the extracted data to correct geometric values needed for a target simulation file format used by the target simulation tool. The simulation file in the target simulation file format may be generated.

  2. The Brain/MINDS 3D digital marmoset brain atlas

    PubMed Central

    Woodward, Alexander; Hashikawa, Tsutomu; Maeda, Masahide; Kaneko, Takaaki; Hikishima, Keigo; Iriki, Atsushi; Okano, Hideyuki; Yamaguchi, Yoko

    2018-01-01

    We present a new 3D digital brain atlas of the non-human primate, common marmoset monkey (Callithrix jacchus), with MRI and coregistered Nissl histology data. To the best of our knowledge this is the first comprehensive digital 3D brain atlas of the common marmoset having normalized multi-modal data, cortical and sub-cortical segmentation, and in a common file format (NIfTI). The atlas can be registered to new data, is useful for connectomics, functional studies, simulation and as a reference. The atlas was based on previously published work but we provide several critical improvements to make this release valuable for researchers. Nissl histology images were processed to remove illumination and shape artifacts and then normalized to the MRI data. Brain region segmentation is provided for both hemispheres. The data is in the NIfTI format making it easy to integrate into neuroscience pipelines, whereas the previous atlas was in an inaccessible file format. We also provide cortical, mid-cortical and white matter boundary segmentations useful for visualization and analysis. PMID:29437168
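    NIfTI, the format the atlas ships in, uses a fixed 348-byte binary header. Production pipelines should read it with a library such as nibabel, but the layout can be illustrated with the stdlib alone; the 256 x 256 x 180 volume shape below is a made-up example, not the atlas dimensions.

```python
import struct

def read_nifti_dims(hdr: bytes):
    """Pull the image dimensions out of a NIfTI-1 header (little-endian assumed)."""
    assert struct.unpack_from("<i", hdr, 0)[0] == 348          # sizeof_hdr field
    dim = struct.unpack_from("<8h", hdr, 40)                   # dim[0] = rank
    return dim[1:1 + dim[0]]

# Synthetic 348-byte header for a 3D volume
hdr = bytearray(348)
struct.pack_into("<i", hdr, 0, 348)
struct.pack_into("<8h", hdr, 40, 3, 256, 256, 180, 1, 1, 1, 1)
hdr[344:348] = b"n+1\x00"                                      # single-file magic
assert read_nifti_dims(bytes(hdr)) == (256, 256, 180)
```

    A fixed, documented header like this is exactly what makes NIfTI "easy to integrate into neuroscience pipelines", in contrast to the inaccessible format of the previous atlas release.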

  3. CBEFF Common Biometric Exchange File Format

    DTIC Science & Technology

    2001-01-03

    Points of contact for CBEFF and liaisons to other organizations can be found in Appendix F. Section 2 states the purpose of CBEFF. The document enumerates biometric type flags, including 0x40 Signature Dynamics, 0x80 Keystroke Dynamics, 0x100 Lip Movement, 0x200 Thermal Face Image, 0x400 Thermal Hand Image, 0x800 Gait, 0x1000 Body... The overhead of this process is negligible from the Biometric Objects point of view, unless the process creating the livescan sample to compare against the
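    The power-of-two values in the excerpt form a bitmask, so a record's biometric types can be decoded with bitwise tests. The flag table below uses only values quoted above; the combination tested is invented.

```python
# Decode a CBEFF-style biometric-type bitmask using the flag values
# quoted in the excerpt above.
FLAGS = {0x40: "Signature Dynamics", 0x80: "Keystroke Dynamics",
         0x100: "Lip Movement", 0x200: "Thermal Face Image",
         0x400: "Thermal Hand Image", 0x800: "Gait"}

def decode(mask: int):
    """Return the names of all flags set in the mask, in ascending bit order."""
    return [name for bit, name in sorted(FLAGS.items()) if mask & bit]

assert decode(0x40 | 0x200) == ["Signature Dynamics", "Thermal Face Image"]
```

    Encoding types as disjoint bits lets one field advertise several biometric modalities at once.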

  4. Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information

    NASA Technical Reports Server (NTRS)

    Pence, William D.; White, R. L.; Seaman, R.

    2010-01-01

    We describe a compression method for floating-point astronomical images that gives compression ratios of 6 - 10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
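
    The quantize-with-dither idea described above can be sketched in a few lines: each pixel is scaled by a quantization step q, a reproducible uniform dither is added before rounding, and the same dither is subtracted on restoration, which bounds the round-trip error by q/2 while breaking up quantization banding. This is a simplified stand-in for the scheme, not fpack's actual implementation:

```python
import random

def quantize(pixels, q, seed=0):
    """Quantize float pixels to integers with subtractive dithering (sketch)."""
    rng = random.Random(seed)             # reproducible dither stream
    out = []
    for v in pixels:
        d = rng.random() - 0.5            # dither offset in [-0.5, 0.5)
        out.append(round(v / q + d))
    return out

def dequantize(ints, q, seed=0):
    """Invert quantize() by regenerating and subtracting the same dither."""
    rng = random.Random(seed)
    return [(i - (rng.random() - 0.5)) * q for i in ints]
```

    A coarser step q gives better compression of the integer stream at the cost of added noise, which mirrors the trade-off the paper quantifies.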

  5. A Comparison of Some of the Most Current Methods of Image Compression

    DTIC Science & Technology

    1993-06-01

    found FrameMaker (available on the Sun) to be very flexible with imported files. It requires them to be in Raster format. 51 5. Access a. Fractal...account manager must set up a Sun account for access to FrameMaker , Sunvision, etc. Be sure to specify the utilities needed when signing up for an

  6. Rapid Generation of Large Dimension Photon Sieve Designs

    NASA Technical Reports Server (NTRS)

    Hariharan, Shravan; Fitzpatrick, Sean; Kim, Hyun Jung; Julian, Matthew; Sun, Wenbo; Tedjojuwono, Ken; MacDonnell, David

    2017-01-01

    A photon sieve is a revolutionary optical instrument that provides high resolution imaging at a fraction of the weight of typical telescopes (areal density of 0.3 kg/m2 compared to 25 kg/m2 for the James Webb Space Telescope). The photon sieve is a variation of a Fresnel Zone Plate consisting of many small holes spread out in a ring-like pattern, which focuses light of a specific wavelength by diffraction. The team at NASA Langley Research Center has produced a variety of small photon sieves for testing. However, it is necessary to increase both the scale and rate of production, as a single sieve previously took multiple weeks to design and fabricate. This report details the different methods used in producing photon sieve designs in two file formats: CIF and DXF. The methods and the two file formats were compared to determine the most efficient design process. Finally, a step-by-step sieve design and fabrication process was described. The design files can be generated in both formats using an editing tool such as Microsoft Excel. However, an approach using a MATLAB program reduced the computing time of the designs and increased the ability of the user to generate large photon sieve designs. Although the CIF generation process was deemed the most efficient, the design techniques for both file types have been proven to generate complete photon sieves that can be used for scientific applications.
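
    The geometry generation described above rests on the zone-plate relation r_n = sqrt(n * lambda * f): compute the zone radii for the design wavelength and focal length, then distribute pinholes around the transparent rings. A minimal sketch (the uniform hole spacing and fixed holes-per-zone count here are simplifying assumptions; real sieve designs vary hole diameter and density per zone, and this is not the report's MATLAB program):

```python
import math

def zone_radii(wavelength, focal_length, n_zones):
    """Fresnel zone-plate radii r_n = sqrt(n * lambda * f), thin-lens approximation."""
    return [math.sqrt(n * wavelength * focal_length) for n in range(1, n_zones + 1)]

def sieve_holes(wavelength, focal_length, n_zones, holes_per_zone=8):
    """Place holes_per_zone pinhole centers on every transparent (odd) zone ring."""
    holes = []
    for n, r in enumerate(zone_radii(wavelength, focal_length, n_zones), start=1):
        if n % 2:  # odd zones transmit in this simple convention
            for k in range(holes_per_zone):
                theta = 2 * math.pi * k / holes_per_zone
                holes.append((r * math.cos(theta), r * math.sin(theta)))
    return holes
```

    The resulting (x, y) hole centers are what a CIF or DXF writer would then serialize as circles or polygons.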

  7. PCF File Format.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thoreson, Gregory G

    PCF files are binary files designed to contain gamma spectra and neutron count rates from radiation sensors. It is the native format for the GAmma Detector Response and Analysis Software (GADRAS) package [1]. It can contain multiple spectra and information about each spectrum such as energy calibration. This document outlines the format of the file that would allow one to write a computer program to parse and write such files.
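
    The actual PCF record layout is defined in the document itself; as a sketch of the kind of parser such a specification enables, here is a hypothetical binary spectrum record packed and unpacked with Python's standard `struct` module. The field layout below (two float32 times, a channel count, then float32 channel data) is purely illustrative and is not the real PCF specification:

```python
import struct

def pack_spectrum(live_time, real_time, counts):
    """Pack one spectrum record (hypothetical layout, not the PCF spec)."""
    return struct.pack(f"<ffI{len(counts)}f", live_time, real_time,
                       len(counts), *counts)

def unpack_spectrum(buf):
    """Parse a record written by pack_spectrum()."""
    live, real, n = struct.unpack_from("<ffI", buf, 0)
    counts = struct.unpack_from(f"<{n}f", buf, 12)   # data starts after 12-byte prefix
    return live, real, list(counts)
```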

  8. Archive of digital and digitized analog boomer seismic reflection data collected during USGS cruise 96CCT02 in Copano, Corpus Christi, and Nueces Bays and Corpus Christi Bayou, Texas, July 1996

    USGS Publications Warehouse

    Harrison, Arnell S.; Dadisman, Shawn V.; Kindinger, Jack G.; Morton, Robert A.; Blum, Mike D.; Wiese, Dana S.; Subiño, Janice A.

    2007-01-01

    In June of 1996, the U.S. Geological Survey conducted geophysical surveys from Nueces to Copano Bays, Texas. This report serves as an archive of unprocessed digital boomer seismic reflection data, trackline maps, navigation files, GIS information, cruise log, and formal FGDC metadata. Filtered and gained digital images of the seismic profiles and high resolution scanned TIFF images of the original paper printouts are also provided. The archived trace data are in standard Society of Exploration Geophysicists (SEG) SEG-Y format (Barry and others, 1975) and may be downloaded and processed with commercial or public domain software such as Seismic Unix (SU). Example SU processing scripts and USGS software for viewing the SEG-Y files (Zihlman, 1992) are also provided.
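
    SEG-Y files of this vintage begin with a 3200-byte EBCDIC textual header followed by a 400-byte binary header; reading those two is the first step any SEG-Y tool performs. A minimal sketch operating on an in-memory buffer (offsets follow the SEG-Y rev 0 layout; the demo header contents are invented for illustration):

```python
def parse_segy_headers(buf):
    """Extract the EBCDIC textual header and the data-sample-format code
    from the first 3600 bytes of a SEG-Y (rev 0) file."""
    text = buf[:3200].decode("cp037")                # EBCDIC 'card image' header
    fmt = int.from_bytes(buf[3224:3226], "big")      # binary-header bytes 25-26
    return text, fmt

# Build a tiny synthetic header to demonstrate the round trip
demo = "C 1 CLIENT USGS CRUISE 96CCT02".ljust(3200).encode("cp037")
demo += bytes(24) + (1).to_bytes(2, "big") + bytes(374)   # format code 1 = IBM float
text, fmt = parse_segy_headers(demo)
```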

  9. A Detailed Examination of the GPM Core Satellite Gridded Text Product

    NASA Technical Reports Server (NTRS)

    Stocker, Erich Franz; Kelley, Owen A.; Kummerow, C.; Huffman, George; Olson, William S.; Kwiatowski, John M.

    2015-01-01

    The Global Precipitation Measurement (GPM) mission quarter-degree gridded-text product has a similar file format and a similar purpose as the Tropical Rainfall Measuring Mission (TRMM) 3G68 quarter-degree product. The GPM text-grid format is an hourly summary of surface precipitation retrievals from various GPM instruments and combinations of GPM instruments. The GMI Goddard Profiling (GPROF) retrieval provides the widest swath (800 km) and does the retrieval using the GPM Microwave Imager (GMI). The Ku radar provides the widest radar swath (250 km swath) and also provides continuity with the TRMM Ku Precipitation Radar. GPM's Ku+Ka band matched swath (125 km swath) provides a dual-frequency precipitation retrieval. The "combined" retrieval (125 km swath) provides a multi-instrument precipitation retrieval based on the GMI, the DPR Ku radar, and the DPR Ka radar. While the data are reported in hourly grids, all hours for a day are packaged into a single text file that is gzipped to reduce file size and to speed up downloading. The data are reported on a 0.25 deg x 0.25 deg grid.
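
    Consuming such a quarter-degree product requires mapping each latitude/longitude to its grid cell. One common convention is sketched below (illustrative only; the actual row/column origin and ordering of the 3G68/GPM text grids should be taken from the product documentation):

```python
def grid_index(lat, lon):
    """Map lat/lon (degrees) to row/col on a global 0.25-degree grid.
    Assumes row 0 starts at 90N and col 0 at 180W, an illustrative
    convention rather than the official product layout."""
    row = int((90.0 - lat) / 0.25)
    col = int((lon + 180.0) / 0.25)
    return row, col
```

    With 720 rows and 1440 columns, each cell can then accumulate the hourly retrieval counts and means the text file reports.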

  10. Data files from the Grays Harbor Sediment Transport Experiment Spring 2001

    USGS Publications Warehouse

    Landerman, Laura A.; Sherwood, Christopher R.; Gelfenbaum, Guy; Lacy, Jessica; Ruggiero, Peter; Wilson, Douglas; Chisholm, Tom; Kurrus, Keith

    2005-01-01

    This publication consists of two DVD-ROMs, both of which are presented here. This report describes data collected during the Spring 2001 Grays Harbor Sediment Transport Experiment, and provides additional information needed to interpret the data. Two DVDs accompany this report; both contain documentation in html format that assist the user in navigating through the data. DVD-ROM-1 contains a digital version of this report in .pdf format, raw Aquatec acoustic backscatter (ABS) data in .zip format, Sonar data files in .avi format, and coastal processes and morphology data in ASCII format. ASCII data files are provided in .zip format; bundled coastal processes ASCII files are separated by deployment and instrument; bundled morphology ASCII files are separated into monthly data collection efforts containing the beach profiles collected (or extracted from the surface map) at that time; weekly surface maps are also bundled together. DVD-ROM-2 contains a digital version of this report in .pdf format, the binary data files collected by the SonTek instrumentation, calibration files for the pressure sensors, and Matlab m-files for loading the ABS data into Matlab and cleaning-up the optical backscatter (OBS) burst time-series data.

  11. MACMULTIVIEW 5.1

    NASA Technical Reports Server (NTRS)

    Norikane, L.

    1994-01-01

    MacMultiview is an interactive tool for the Macintosh II family which allows one to display and make computations utilizing polarimetric radar data collected by the Jet Propulsion Laboratory's imaging SAR (synthetic aperture radar) polarimeter system. The system includes the single-frequency L-band sensor mounted on the NASA CV990 aircraft and its replacement, the multi-frequency P-, L-, and C-band sensors mounted on the NASA DC-8. MacMultiview provides two basic functions: computation of synthesized polarimetric images and computation of polarization signatures. The radar data can be used to compute a variety of images. The total power image displays the sum of the polarized and unpolarized components of the backscatter for each pixel. The magnitude/phase difference image displays the HH (horizontal transmit and horizontal receive polarization) to VV (vertical transmit and vertical receive polarization) phase difference using color. Magnitude is displayed using intensity. The user may also select any transmit and receive polarization combination from which an image is synthesized. This image displays the backscatter which would have been observed had the sensor been configured using the selected transmit and receive polarizations. MacMultiview can also be used to compute polarization signatures, three dimensional plots of backscatter versus transmit and receive polarizations. The standard co-polarization signatures (transmit and receive polarizations are the same) and cross-polarization signatures (transmit and receive polarizations are orthogonal) can be plotted for any rectangular subset of pixels within a radar data set. In addition, the ratio of co- and cross-polarization signatures computed from different subsets within the same data set can also be computed. 
Computed images can be saved in a variety of formats: byte format (headerless format which saves the image as a string of byte values), MacMultiview (a byte image preceded by an ASCII header), and PICT2 format (standard format readable by MacMultiview and other image processing programs for the Macintosh). Images can also be printed on PostScript output devices. Polarization signatures can be saved in either a PICT format or as a text file containing PostScript commands and printed on any QuickDraw output device. The associated Stokes matrices can be stored in a text file. MacMultiview is written in C-language for Macintosh II series computers. MacMultiview will only run on Macintosh II series computers with 8-bit video displays (gray shades or color). The program also requires a minimum configuration of System 6.0, Finder 6.1, and 1Mb of RAM. MacMultiview is NOT compatible with System 7.0. It requires 32-Bit QuickDraw. Note: MacMultiview may not be fully compatible with preliminary versions of 32-Bit QuickDraw. Macintosh Programmer's Workshop and Macintosh Programmer's Workshop C (version 3.0) are required for recompiling and relinking. The standard distribution medium for this package is a set of three 800K 3.5 inch diskettes in Macintosh format. This program was developed in 1989 and updated in 1991. MacMultiview is a copyrighted work with all copyright vested in NASA. QuickDraw, Finder, Macintosh, and System 7 are trademarks of Apple Computer, Inc.
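
    The polarization synthesis described above can be expressed compactly: the backscatter observed for an arbitrary transmit/receive pair is P = s_r^T M s_t, where M is the pixel's 4x4 Stokes (Mueller) matrix and s_t, s_r are unit Stokes vectors for the chosen polarizations. A sketch of that computation (function names are illustrative, not MacMultiview's API):

```python
import math

def stokes_vector(psi, chi):
    """Unit Stokes vector for a polarization with orientation angle psi
    and ellipticity angle chi (radians)."""
    return [1.0,
            math.cos(2 * chi) * math.cos(2 * psi),
            math.cos(2 * chi) * math.sin(2 * psi),
            math.sin(2 * chi)]

def synthesized_power(mueller, psi_t, chi_t, psi_r, chi_r):
    """P = s_r^T M s_t: backscatter for an arbitrary transmit/receive pair."""
    st = stokes_vector(psi_t, chi_t)
    sr = stokes_vector(psi_r, chi_r)
    return sum(sr[i] * sum(mueller[i][j] * st[j] for j in range(4))
               for i in range(4))
```

    Sweeping psi and chi over their ranges for fixed M yields exactly the three-dimensional polarization-signature surface the abstract describes.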

  12. Dagik: A Quick Look System of the Geospace Data in KML format

    NASA Astrophysics Data System (ADS)

    Yoshida, D.; Saito, A.

    2007-12-01

    Dagik (Daily Geospace data in KML) is a quick-look plot sharing system using Google Earth as a data browser. It provides daily data lists that contain network links to the KML/KMZ files of various geospace data. KML is a markup language to display data on Google Earth, and KMZ is a compressed file of KML. Users can browse the KML/KMZ files with the following procedure: 1) download "dagik.kml" from the Dagik homepage (http://www-step.kugi.kyoto-u.ac.jp/dagik/) and open it with Google Earth, 2) select a date, 3) select the data type to browse. Dagik is a collection of network links to KML/KMZ files. The daily Dagik files are available since 1957, though they contain only geomagnetic index data in the early periods. There are three activities of Dagik: the first is the generation of the daily data lists, the second is to provide several useful tools, such as observatory lists, and the third is to assist researchers in making KML/KMZ data plots. To make plot browsing easy, there are three rules for the Dagik plot format: 1) one file contains one UT day of data, 2) use a common plot panel size, 3) share the data list. There are three steps to join Dagik as a plot provider: 1) make KML/KMZ files of the data, 2) put the KML/KMZ files on the Web, 3) notify the Dagik group of the URL address and description of the files. The KML/KMZ files will be included in the Dagik data list. As of September 2007, quick looks of several geospace data, such as GPS total electron content data, ionosonde data, magnetometer data, FUV imaging data by a satellite, ground-based airglow data, and satellite footprint data, are available. The system of Dagik is introduced in the presentation.
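
    The KML files Dagik links to are plain XML, and a KMZ is simply a zip archive containing a KML document. A minimal sketch of generating both with the standard library (the placemark content is illustrative):

```python
import io
import zipfile

def placemark_kml(name, lat, lon):
    """Minimal KML document containing a single placemark."""
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{name}</name>
    <Point><coordinates>{lon},{lat},0</coordinates></Point>
  </Placemark>
</kml>"""

def to_kmz(kml_text):
    """Compress a KML document into KMZ (a zip archive holding doc.kml)."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
        z.writestr("doc.kml", kml_text)
    return buf.getvalue()
```

    Note that KML orders coordinates longitude-first, a common source of swapped-axis bugs.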

  13. File format for normalizing radiological concentration exposure rate and dose rate data for the effects of radioactive decay and weathering processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kraus, Terrence D.

    2017-04-01

    This report specifies the electronic file format that was agreed upon to be used as the file format for normalized radiological data produced by the software tool developed under this TI project. The NA-84 Technology Integration (TI) Program project (SNL17-CM-635, Normalizing Radiological Data for Analysis and Integration into Models) investigators held a teleconference on December 7, 2017 to discuss the tasks to be completed under the TI program project. During this teleconference, the TI project investigators determined that the comma-separated values (CSV) file format is the most suitable file format for the normalized radiological data that will be outputted from the normalizing tool developed under this TI project. The CSV file format was selected because it provides the requisite flexibility to manage different types of radiological data (i.e., activity concentration, exposure rate, dose rate) from other sources [e.g., Radiological Assessment and Monitoring System (RAMS), Aerial Measuring System (AMS), Monitoring and Sampling). The CSV file format also is suitable for the file format of the normalized radiological data because this normalized data can then be ingested by other software [e.g., RAMS, Visual Sampling Plan (VSP)] used by the NA-84’s Consequence Management Program.
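
    Emitting normalized records of this kind as CSV is straightforward with Python's standard `csv` module. The column names below are illustrative, not the project's agreed schema:

```python
import csv
import io

def write_normalized(rows):
    """Serialize normalized radiological records to CSV text.
    Column names are hypothetical placeholders for the project schema."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["timestamp", "lat", "lon", "quantity", "value", "units"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

    Because CSV carries no type information, downstream consumers such as VSP or RAMS would rely on an agreed header row and unit column like the ones sketched here.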

  14. Effect of ProTaper Gold, Self-Adjusting File, and XP-endo Shaper Instruments on Dentinal Microcrack Formation: A Micro-computed Tomographic Study.

    PubMed

    Bayram, H Melike; Bayram, Emre; Ocak, Mert; Uygun, Ahmet Demirhan; Celik, Hakan Hamdi

    2017-07-01

    The aim of the present study was to evaluate the frequency of dentinal microcracks observed after root canal preparation with ProTaper Universal (PTU; Dentsply Tulsa Dental Specialties, Tulsa, OK), ProTaper Gold (PTG; Dentsply Tulsa Dental Specialties), Self-Adjusting File (SAF; ReDent Nova, Ra'anana, Israel), and XP-endo Shaper (XP; FKG Dentaire, La Chaux-de-Fonds, Switzerland) instruments using micro-computed tomographic (CT) analysis. Forty extracted human mandibular premolars having single-canal and straight root were randomly assigned to 4 experimental groups (n = 10) according to the different nickel-titanium systems used for root canal preparation: PTU, PTG, SAF, and XP. In the SAF and XP groups, the canals were first prepared with a K-file until #25 at the working length, and then the SAF or XP files were used. The specimens were scanned using high-resolution micro-computed tomographic imaging before and after root canal preparation. Afterward, preoperative and postoperative cross-sectional images of the teeth were screened to identify the presence of dentinal defects. For each group, the number of microcracks was determined as a percentage rate. The McNemar test was used to determine significant differences before and after instrumentation. The level of significance was set at P ≤ .05. The PTU system significantly increased the percentage rate of microcracks compared with preoperative specimens (P < .05). No new dentinal microcracks were observed in the PTG, SAF, or XP groups. Root canal preparations with the PTG, SAF, and XP systems did not induce the formation of new dentinal microcracks on straight root canals of mandibular premolars. Copyright © 2017 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  15. The digital geologic map of Colorado in ARC/INFO format, Part B. Common files

    USGS Publications Warehouse

    Green, Gregory N.

    1992-01-01

    This geologic map was prepared as a part of a study of digital methods and techniques as applied to complex geologic maps. The geologic map was digitized from the original scribe sheets used to prepare the published Geologic Map of Colorado (Tweto 1979). Consequently the digital version is at 1:500,000 scale using the Lambert Conformal Conic map projection parameters of the state base map. Stable base contact prints of the scribe sheets were scanned on a Tektronix 4991 digital scanner. The scanner automatically converts the scanned image to an ASCII vector format. These vectors were transferred to a VAX minicomputer, where they were then loaded into ARC/INFO. Each vector and polygon was given attributes derived from the original 1979 geologic map. This database was developed on a MicroVAX computer system using VAX V 5.4 and ARC/INFO 5.0 software. UPDATE: April 1995. The update was done solely for the purpose of adding the ability to plot to an HP650c plotter. Two new ARC/INFO plot AMLs along with a lineset and shadeset for the HP650C DesignJet printer have been included. These new files are COLORADO.650, INDEX.650, TWETOLIN.E00 and TWETOSHD.E00. These files were created on a UNIX platform with ARC/INFO 6.1.2. Updated versions of INDEX.E00, CONTACT.E00, LINE.E00, DECO.E00 and BORDER.E00 files that include the newly defined HP650c items are also included. * Any use of trade, product, or firm names is for descriptive purposes only and does not imply endorsement by the U.S. Government. Descriptors: The Digital Geologic Map of Colorado in ARC/INFO Format Open-File Report 92-050

  16. Radiotherapy supporting system based on the image database using IS&C magneto-optical disk

    NASA Astrophysics Data System (ADS)

    Ando, Yutaka; Tsukamoto, Nobuhiro; Kunieda, Etsuo; Kubo, Atsushi

    1994-05-01

    Since radiation oncologists make treatment plans from prior experience, information about previous cases is helpful in planning the radiation treatment. We have developed a supporting system for radiation therapy. The case-based reasoning method was implemented in order to search old treatments and images of past cases. This system evaluates similarities between the current case and all stored cases (the case base). The portal images of similar cases can be retrieved as reference images, as well as treatment records which show examples of the radiation treatment. With this system radiotherapists can easily make suitable radiation therapy plans. This system is useful to prevent inaccurate planning due to preconceptions and/or lack of knowledge. Images were stored on magneto-optical disks and the demographic data is recorded to the hard disk of the personal computer. Images can be displayed quickly on the radiotherapist's demand. The radiation oncologist can refer to past cases recorded in the case base and decide the radiation treatment of the current case. The file and data format of the magneto-optical disk is the IS&C format. This format provides the interchangeability and reproducibility of medical information, including images and other demographic data.

  17. PSTOOLS - FOUR PROGRAMS THAT INTERPRET/FORMAT POSTSCRIPT FILES

    NASA Technical Reports Server (NTRS)

    Choi, D.

    1994-01-01

    PSTOOLS is a package of four programs that operate on files written in the page description language, PostScript. The programs include a PostScript previewer for the IRIS workstation, a PostScript driver for the Matrix QCRZ film recorder, a PostScript driver for the Tektronix 4693D printer, and a PostScript code beautifier that formats PostScript files to be more legible. The three programs PSIRIS, PSMATRIX, and PSTEK are similar in that they all interpret the PostScript language and output the graphical results to a device, and they support color PostScript images. The common code which is shared by these three programs is included as a library of routines. PSPRETTY formats a PostScript file by appropriately indenting procedures and code delimited by "saves" and "restores." PSTOOLS does not use Adobe fonts. PSTOOLS is written in C-language for implementation on SGI IRIS 4D series workstations running IRIX 3.2 or later. A README file and UNIX man pages provide information regarding the installation and use of the PSTOOLS programs. A six-page manual which provides slightly more detailed information may be purchased separately. The standard distribution medium for this package is one .25 inch streaming magnetic tape cartridge in UNIX tar format. PSIRIS (the largest program) requires 1.2Mb of main memory. PSMATRIX requires the "gpib" board (IEEE 488) available from Silicon Graphics. Inc. The programs with graphical interfaces require that the IRIS have at least 24 bit planes. This package was developed in 1990 and updated in 1991. SGI, IRIS 4D, and IRIX are trademarks of Silicon Graphics, Inc. Matrix QCRZ is a registered trademark of the AGFA Group. Tektronix 4693D is a trademark of Tektronix, Inc. Adobe is a trademark of Adobe Systems Incorporated. PostScript is a registered trademark of Adobe Systems Incorporated. UNIX is a registered trademark of AT&T Bell Laboratories.

  18. Influence of image compression on the interpretation of spectral-domain optical coherence tomography in exudative age-related macular degeneration

    PubMed Central

    Kim, J H; Kang, S W; Kim, J-r; Chang, Y S

    2014-01-01

    Purpose To evaluate the effect of image compression of spectral-domain optical coherence tomography (OCT) images in the examination of eyes with exudative age-related macular degeneration (AMD). Methods Thirty eyes from 30 patients who were diagnosed with exudative AMD were included in this retrospective observational case series. The horizontal OCT scans centered at the center of the fovea were conducted using spectral-domain OCT. The images were exported to Tag Image File Format (TIFF) and 100, 75, 50, 25 and 10% quality of Joint Photographic Experts Group (JPEG) format. OCT images were taken before and after intravitreal ranibizumab injections, and after relapse. The prevalence of subretinal and intraretinal fluids was determined. Differences in choroidal thickness between the TIFF and JPEG images were compared with the intra-observer variability. Results The prevalence of subretinal and intraretinal fluids was comparable regardless of the degree of compression. However, the chorio–scleral interface was not clearly identified in many images with a high degree of compression. In images with 25 and 10% quality of JPEG, the difference in choroidal thickness between the TIFF images and the respective JPEG images was significantly greater than the intra-observer variability of the TIFF images (P=0.029 and P=0.024, respectively). Conclusions In OCT images of eyes with AMD, 50% of the quality of the JPEG format would be an optimal degree of compression for efficient data storage and transfer without sacrificing image quality. PMID:24788012

  19. Cloud-based processing of multi-spectral imaging data

    NASA Astrophysics Data System (ADS)

    Bernat, Amir S.; Bolton, Frank J.; Weiser, Reuven; Levitz, David

    2017-03-01

    Multispectral imaging holds great promise as a non-contact tool for the assessment of tissue composition. Performing multi-spectral imaging on a hand-held mobile device would allow this technology, and with it knowledge, to be brought to low-resource settings to provide state-of-the-art classification of tissue health. This modality, however, produces considerably larger data sets than white-light imaging and requires preliminary image analysis before it can be used. The data then need to be analyzed and logged without requiring too much system resource, computation time, or battery use from the end-point device. Cloud environments were designed to address those problems by allowing end-point devices (smartphones) to offload computationally hard tasks. To this end we present a method where a hand-held device based around a smartphone captures a multi-spectral dataset in a movie file format (mp4), which we compare to other image formats in size, noise, and correctness. We present the cloud configuration used for segmenting the video into frames that can later be used for further analysis.

  20. A high-speed network for cardiac image review.

    PubMed

    Elion, J L; Petrocelli, R R

    1994-01-01

    A high-speed fiber-based network for the transmission and display of digitized full-motion cardiac images has been developed. Based on Asynchronous Transfer Mode (ATM), the network is scaleable, meaning that the same software and hardware is used for a small local area network or for a large multi-institutional network. The system can handle uncompressed digital angiographic images, considered to be at the "high-end" of the bandwidth requirements. Along with the networking, a general-purpose multi-modality review station has been implemented without specialized hardware. This station can store a full injection sequence in "loop RAM" in a 512 x 512 format, then interpolate to 1024 x 1024 while displaying at 30 frames per second. The network and review stations connect to a central file server that uses a virtual file system to make a large high-speed RAID storage disk and associated off-line storage tapes and cartridges all appear as a single large file system to the software. In addition to supporting archival storage and review, the system can also digitize live video using high-speed Direct Memory Access (DMA) from the frame grabber to present uncompressed data to the network. Fully functional prototypes have provided the proof of concept, with full deployment in the institution planned as the next stage.

  2. NAVAIR Portable Source Initiative (NPSI) Standard for Reusable Source Dataset Metadata (RSDM) V2.4

    DTIC Science & Technology

    2012-09-26

    defining a raster file format: <RasterFileFormat> <FormatName>TIFF</FormatName> <Order>BIP</Order> <DataType>8-BIT_UNSIGNED</DataType>...interleaved by line (BIL); Band interleaved by pixel (BIP). element RasterFileFormatType/DataType diagram type restriction of xsd:string facets
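
    The `<Order>` element in the excerpt selects the band-interleaving scheme; converting between band-sequential (BSQ) storage and band-interleaved-by-pixel (BIP) is a pure index shuffle. A sketch using plain Python lists for clarity (the function name is illustrative):

```python
def bsq_to_bip(bands):
    """Convert band-sequential samples (bands[b][row][col]) to
    band-interleaved-by-pixel order (out[row][col][b])."""
    rows, cols = len(bands[0]), len(bands[0][0])
    return [[[band[r][c] for band in bands] for c in range(cols)]
            for r in range(rows)]
```

    BIL (band interleaved by line) is the intermediate case: one row per band, repeated line by line.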

  3. BOREAS RSS-16 Level-3b DC-8 AIRSAR SY Images

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Saatchi, Sasan; Newcomer, Jeffrey A.; Strub, Richard; Irani, Fred

    2000-01-01

    The BOREAS RSS-16 team used satellite and aircraft SAR data in conjunction with various ground measurements to determine the moisture regime of the boreal forest. RSS-16 assisted with the acquisition and ordering of NASA JPL AIRSAR data collected from the NASA DC-8 aircraft. The NASA JPL AIRSAR is a side-looking imaging radar system that utilizes the SAR principle to obtain high-resolution images that represent the radar backscatter of the imaged surface at different frequencies and polarizations. The information contained in each pixel of the AIRSAR data represents the radar backscatter for all possible combinations of horizontal and vertical transmit and receive polarizations (i.e., HH, HV, VH, and VV). Geographically, the data cover portions of the BOREAS SSA and NSA. Temporally, the data were acquired from 12-Aug-1993 to 31-Jul-1995. The level-3b AIRSAR SY data are the JPL synoptic product and contain 3 of the 12 total frequency and polarization combinations that are possible. The data are stored in binary image format files. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).

  4. Writing a Scientific Paper II. Communication by Graphics

    NASA Astrophysics Data System (ADS)

    Sterken, C.

    2011-07-01

    This paper discusses facets of visual communication by way of images, graphs, diagrams and tabular material. Design types and elements of graphical images are presented, along with advice on how to create graphs, and on how to read graphical illustrations. This is done in astronomical context, using case studies and historical examples of good and bad graphics. Design types of graphs (scatter and vector plots, histograms, pie charts, ternary diagrams and three-dimensional surface graphs) are explicated, as well as the major components of graphical images (axes, legends, textual parts, etc.). The basic features of computer graphics (image resolution, vector images, bitmaps, graphical file formats and file conversions) are explained, as well as concepts of color models and of color spaces (with emphasis on aspects of readability of color graphics by viewers suffering from color-vision deficiencies). Special attention is given to the verity of graphical content, and to misrepresentations and errors in graphics and associated basic statistics. Dangers of dot joining and curve fitting are discussed, with emphasis on the perception of linearity, the issue of nonsense correlations, and the handling of outliers. Finally, the distinction between data, fits and models is illustrated.

  5. FERMI/GLAST Integrated Trending and Plotting System Release 5.0

    NASA Technical Reports Server (NTRS)

    Ritter, Sheila; Brumer, Haim; Reitan, Denise

    2012-01-01

    An Integrated Trending and Plotting System (ITPS) is a trending, analysis, and plotting system used by space missions to determine performance and status of spacecraft and its instruments. ITPS supports several NASA mission operational control centers providing engineers, ground controllers, and scientists with access to the entire spacecraft telemetry data archive for the life of the mission, and includes a secure Web component for remote access. FERMI/GLAST ITPS Release 5.0 features include the option to display dates (yyyy/ddd) instead of orbit numbers along orbital Long-Term Trend (LTT) plot axis, the ability to save statistics from daily production plots as image files, and removal of redundant edit/create Input Definition File (IDF) screens. Other features are a fix to address invalid packet lengths, a change in naming convention of image files in order to use in script, the ability to save all ITPS plot images (from Windows or the Web) as GIF or PNG format, the ability to specify ymin and ymax on plots where previously only the desired range could be specified, Web interface capability to plot IDFs that contain out-of-order page and plot numbers, and a fix to change all default file names to show yyyydddhhmmss time stamps instead of hhmmssdddyyyy. A Web interface capability sorts files based on modification date (with newest one at top), and the statistics block can be displayed via a Web interface. Via the Web, users can graphically view the volume of telemetry data from each day contained in the ITPS archive in the Web digest. The ITPS could also be used in non-space fields that need to plot or trend data, including financial and banking systems, aviation and transportation systems, healthcare and educational systems, sales and marketing, and housing and construction.

  6. CheckDen, a program to compute quantum molecular properties on spatial grids.

    PubMed

    Pacios, Luis F; Fernandez, Alberto

    2009-09-01

CheckDen, a program to compute quantum molecular properties on a variety of spatial grids, is presented. The program reads as its only input wavefunction files written by standard quantum packages and calculates the electron density rho(r), promolecule and density difference function, gradient of rho(r), Laplacian of rho(r), information entropy, electrostatic potential, kinetic energy densities G(r) and K(r), electron localization function (ELF), and localized orbital locator (LOL) function. These properties can be calculated on a wide range of one-, two-, and three-dimensional grids that can be processed by widely used graphics programs to render high-resolution images. CheckDen also offers other options, such as extracting separate atom contributions to the computed property, converting grid output data into CUBE and OpenDX volumetric data formats, and performing arithmetic combinations with grid files in all the recognized formats.
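As a rough illustration of the CUBE export mentioned above, the following sketch renders a scalar field sampled on a cubic grid in the Gaussian CUBE text layout (the function name and the single placeholder atom are ours, not CheckDen's):

```python
def cube_text(field, n, step=0.25, origin=(0.0, 0.0, 0.0)):
    """Render field(i, j, k) on an n x n x n grid as Gaussian CUBE text:
    two comment lines, natoms + origin, three voxel-axis lines, atom
    records, then values six per line in x-outer/z-inner order."""
    lines = ["CheckDen-style density grid", "rho(r) on a cubic grid"]
    lines.append(f"{1:5d}{origin[0]:12.6f}{origin[1]:12.6f}{origin[2]:12.6f}")
    for axis in range(3):
        vec = [step if k == axis else 0.0 for k in range(3)]
        lines.append(f"{n:5d}{vec[0]:12.6f}{vec[1]:12.6f}{vec[2]:12.6f}")
    # One hydrogen at the origin as a placeholder nucleus.
    lines.append(f"{1:5d}{1.0:12.6f}{0.0:12.6f}{0.0:12.6f}{0.0:12.6f}")
    vals = [field(i, j, k) for i in range(n) for j in range(n) for k in range(n)]
    for start in range(0, len(vals), 6):
        lines.append("".join(f"{v:13.5E}" for v in vals[start:start + 6]))
    return "\n".join(lines) + "\n"
```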

  7. Medical Applications of the PHITS Code (3): User Assistance Program for Medical Physics Computation.

    PubMed

    Furuta, Takuya; Hashimoto, Shintaro; Sato, Tatsuhiko

    2016-01-01

DICOM2PHITS and PSFC4PHITS are user assistance programs for medical physics PHITS applications. DICOM2PHITS is a program to construct the voxel PHITS simulation geometry from patient CT DICOM image data by using a conversion table from CT number to material composition. PSFC4PHITS is a program to convert IAEA phase-space file data to PHITS format for use as a PHITS simulation source. Both programs are useful for users who want to apply PHITS simulation to verification of radiation therapy treatment planning. We are now developing a program to convert dose distributions obtained by PHITS to the DICOM RT-dose format. As a future plan, we also want to develop a program able to incorporate treatment information contained in other DICOM files (RT-plan and RT-structure).

  8. An Efficient Format for Nearly Constant-Time Access to Arbitrary Time Intervals in Large Trace Files

    DOE PAGES

    Chan, Anthony; Gropp, William; Lusk, Ewing

    2008-01-01

A powerful method to aid in understanding the performance of parallel applications uses log or trace files containing time-stamped events and states (pairs of events). These trace files can be very large, often hundreds or even thousands of megabytes. Because of the cost of accessing and displaying such files, other methods are often used that reduce the size of the trace files at the cost of sacrificing detail or other information. This paper describes a hierarchical trace file format that provides for display of an arbitrary time window in a time independent of the total size of the file and roughly proportional to the number of events within the time window. This format eliminates the need to sacrifice data to achieve a smaller trace file size (since storage is inexpensive, it is necessary only to make efficient use of bandwidth to that storage). The format can be used to organize a trace file or to create a separate file of annotations that may be used with conventional trace files. We present an analysis of the time to access all of the events relevant to an interval of time and we describe experiments demonstrating the performance of this file format.
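The access property the paper targets (cost proportional to the events in the window, not to the total file size) can be illustrated with a simple in-memory index; this is only an analogy to the goal, not the paper's hierarchical on-disk format:

```python
import bisect

class TraceIndex:
    """Time-sorted event index: fetching all events in [t0, t1] costs
    O(log n) for the search plus O(k) for the k events returned,
    independent of the total number of events stored."""

    def __init__(self, events):
        # events: iterable of (timestamp, payload); need not be pre-sorted
        self._events = sorted(events)
        self._times = [t for t, _ in self._events]

    def window(self, t0, t1):
        lo = bisect.bisect_left(self._times, t0)
        hi = bisect.bisect_right(self._times, t1)
        return self._events[lo:hi]
```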

  9. Storage, retrieval, and edit of digital video using Motion JPEG

    NASA Astrophysics Data System (ADS)

    Sudharsanan, Subramania I.; Lee, D. H.

    1994-04-01

    In a companion paper we describe a Micro Channel adapter card that can perform real-time JPEG (Joint Photographic Experts Group) compression of a 640 by 480 24-bit image within 1/30th of a second. Since this corresponds to NTSC video rates at considerably good perceptual quality, this system can be used for real-time capture and manipulation of continuously fed video. To facilitate capturing the compressed video in a storage medium, an IBM Bus master SCSI adapter with cache is utilized. Efficacy of the data transfer mechanism is considerably improved using the System Control Block architecture, an extension to Micro Channel bus masters. We show experimental results that the overall system can perform at compressed data rates of about 1.5 MBytes/second sustained and with sporadic peaks to about 1.8 MBytes/second depending on the image sequence content. We also describe mechanisms to access the compressed data very efficiently through special file formats. This in turn permits creation of simpler sequence editors. Another advantage of the special file format is easy control of forward, backward and slow motion playback. The proposed method can be extended for design of a video compression subsystem for a variety of personal computing systems.
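The special file formats described above for efficient access and easy forward, backward, and slow-motion playback presumably rest on a table of per-frame byte offsets; a toy Python sketch of that idea (names are ours, not the authors'):

```python
import io

def build_index(frame_sizes):
    """Byte offset of each frame in a concatenated Motion-JPEG stream;
    with this table, backward or skip playback is a seek, not a scan."""
    offsets, pos = [], 0
    for size in frame_sizes:
        offsets.append(pos)
        pos += size
    return offsets

def read_frame(stream, offsets, frame_sizes, i):
    # Random access to frame i: seek to its offset, read its length.
    stream.seek(offsets[i])
    return stream.read(frame_sizes[i])

# Toy "stream" of three frames, 3, 5, and 2 bytes long.
sizes = [3, 5, 2]
stream = io.BytesIO(b"aaabbbbbcc")
offsets = build_index(sizes)
```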

  10. ISTP CDF Skeleton Editor

    NASA Technical Reports Server (NTRS)

    Chimiak, Reine; Harris, Bernard; Williams, Phillip

    2013-01-01

Basic Common Data Format (CDF) tools such as cdfedit and skeletoncdf provide no specific support for creating International Solar-Terrestrial Physics/Space Physics Data Facility (ISTP/SPDF) standard files. While it is possible for someone who is familiar with the ISTP/SPDF metadata guidelines to create compliant files using just the basic tools, the process is error-prone and unreasonable for someone without ISTP/SPDF expertise; the key problem is the lack of a tool with specific support for creating files that comply with the ISTP/SPDF guidelines. The SPDF ISTP CDF skeleton editor is a cross-platform, Java-based GUI editor program that allows someone with only a basic understanding of the ISTP/SPDF guidelines to easily create compliant files. The editor is a simple graphical user interface (GUI) application for creating and editing ISTP/SPDF guideline-compliant skeleton CDF files. It consists of the following components: a Swing-based Java GUI program, a JavaHelp-based manual/tutorial, image/icon files, and an HTML Web page for distribution. The editor is available as a traditional Java desktop application as well as a Java Network Launching Protocol (JNLP) application. Once started, it functions like a typical Java GUI file editor application for creating/editing application-unique files.

  11. VizieR Online Data Catalog: GTC spectra of z~2.3 quasars (Sulentic+, 2014)

    NASA Astrophysics Data System (ADS)

    Sulentic, J. W.; Marziani, P.; Del Olmo, A.; Dultzin, D.; Perea, J.; Negrete, C. A.

    2014-09-01

Spectroscopic data for 22 intermediate redshift quasars are identified in Table 1. The actual data files are in FITS format in the spectra sub-directory. Each individual spectrum covers the spectral range 360-770 nm. Units are wavelength in Angstrom and specific flux in erg/s/cm2/Angstrom (pW/m3) in the observed frame (i.e., before redshift correction). Full object name (OBJECT), total exposure time (EXPTIME), number of coadded individual spectra (NUM_IMAG), and observation date (DATE-OBS) are reported as records in the FITS header of each spectrum (as in Table 2 of the paper). (2 data files).
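FITS stores header records like EXPTIME and DATE-OBS as fixed 80-character cards ("KEYWORD = value / comment"). A simplified sketch of building one such card (function name ours; real work should use a FITS library such as astropy):

```python
def fits_card(keyword, value, comment=""):
    """Format one 80-character FITS header card: keyword left-justified in
    columns 1-8, '= ' in columns 9-10, numeric values right-justified to
    column 30, strings quoted and padded to at least 8 characters."""
    if isinstance(value, str):
        val = f"'{value:<8s}'"
        body = f"{keyword:<8s}= {val:<20s}"
    else:
        body = f"{keyword:<8s}= {value!r:>20}"
    if comment:
        body += f" / {comment}"
    return f"{body:<80s}"[:80]  # pad (or truncate) to exactly 80 characters
```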

  12. Investigating Encrypted Material

    NASA Astrophysics Data System (ADS)

    McGrath, Niall; Gladyshev, Pavel; Kechadi, Tahar; Carthy, Joe

When encrypted material is discovered during a digital investigation and the investigator cannot decrypt it, s/he is faced with the problem of how to determine the evidential value of the material. This research proposes a methodology for extracting probative value from the encrypted file of a hybrid cryptosystem. The methodology also incorporates a technique for locating the original plaintext file. Since child pornography (KP) images and terrorist-related information (TI) are transmitted in encrypted format, the digital investigator must ask the question "Cui bono?" (who benefits, or who is the recipient?). By doing this, the scope of the digital investigation can be extended to reveal the intended recipient.

  13. VizieR Online Data Catalog: The close circumstellar environment of the semi-regular S-type star pi1

    NASA Astrophysics Data System (ADS)

    Sacuto, S.; Jorissen, A.; Cruzalebes, P.; Chesneau, O.; Ohnaka, K.; Quirrenbach, A.; Lopez, B.

    2008-02-01

All the data products are stored in the FITS-based, optical interferometry data exchange format (OI-FITS), described in Pauls et al. (2005PASP..117.1255P). The OI Exchange Format is a standard for exchanging calibrated data from optical (visible/infrared) stellar interferometers. The standard is based on the Flexible Image Transport System (FITS), and supports storage of the optical interferometric observables including visibilities and differential phases. Several routines to read and write this format in various languages can be found at http://www.mrao.cam.ac.uk/research/OAS/oi_data/oifits.html (2 data files).

  14. A Data Exchange Standard for Optical (Visible/IR) Interferometry

    NASA Astrophysics Data System (ADS)

    Pauls, T. A.; Young, J. S.; Cotton, W. D.; Monnier, J. D.

    2005-11-01

This paper describes the OI (Optical Interferometry) Exchange Format, a standard for exchanging calibrated data from optical (visible/infrared) stellar interferometers. The standard is based on the Flexible Image Transport System (FITS) and supports the storage of optical interferometric observables, including squared visibility and closure phase, data products not included in radio interferometry standards such as UV-FITS. The format has already gained the support of most currently operating optical interferometer projects, including COAST, NPOI, IOTA, CHARA, VLTI, PTI, and the Keck Interferometer, and is endorsed by the IAU Working Group on Optical Interferometry. Software is available for reading, writing, and merging OI Exchange Format files.

  15. Isostatic gravity map and principal facts for 694 gravity stations in Yellowstone National Park and vicinity, Wyoming, Montana, and Idaho

    USGS Publications Warehouse

    Carle, S.F.; Glen, J.M.; Langenheim, V.E.; Smith, R.B.; Oliver, H.W.

    1990-01-01

The report presents the principal facts for gravity stations compiled for Yellowstone National Park and vicinity. The gravity data were compiled from three sources: Defense Mapping Agency, University of Utah, and U.S. Geological Survey. Part A of the report is a paper copy describing how the compilation was done and presenting the data in tabular format as well as a map; part B is a 5-1/4 inch floppy diskette containing only the data files in ASCII format. Requirements for part B: IBM PC or compatible, DOS v. 2.0 or higher. Files contained on this diskette: DOD.ISO -- File containing the principal facts of the 514 gravity stations obtained from the Defense Mapping Agency. The data are in Plouff format* (see file PFTAB.TXT). UTAH.ISO -- File containing the principal facts of 153 gravity stations obtained from the University of Utah. Data are in Plouff format. USGS.ISO -- File containing the principal facts of 27 gravity stations collected by the U.S. Geological Survey in July 1987. Data are in Plouff format. PFTAB.TXT -- File containing explanation of principal fact format. ACC.TXT -- File containing explanation of accuracy codes.

  16. BOREAS TGB-5 Fire History of Manitoba 1980 to 1991 in Raster Format

    NASA Technical Reports Server (NTRS)

    Stocks, Brian J.; Zepp, Richard; Knapp, David; Hall, Forrest G. (Editor); Conrad, Sara K. (Editor)

    2000-01-01

    The BOReal Ecosystem-Atmosphere Study Trace Gas Biogeochemistry (BOREAS TGB-5) team collected several data sets related to the effects of fire on the exchange of trace gases between the surface and the atmosphere. This raster format data set covers the province of Manitoba between 1980 and 1991. The data were gridded into the Albers Equal-Area Conic (AEAC) projection from the original vector data. The original vector data were produced by Forestry Canada from hand-drawn boundaries of fires on photocopies of 1:250,000-scale maps. The locational accuracy of the data is considered fair to poor. When the locations of some fire boundaries were compared to Landsat TM images, they were found to be off by as much as a few kilometers. This problem should be kept in mind when using these data. The data are stored in binary, image format files.

  17. Digital camera with apparatus for authentication of images produced from an image file

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L. (Inventor)

    1993-01-01

A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely based upon the private key that digital data encrypted with the private key by the processor may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating the image file at any time as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match, since even a one-bit change in the image file will cause the recomputed image hash to be totally different from the secure hash.
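The sign-and-verify flow can be sketched as follows. Python's standard library has no asymmetric cryptography, so this sketch substitutes an HMAC secret for the camera's private/public key pair (a loud simplification); the hash, sign, store, recompute, compare logic is the same:

```python
import hashlib
import hmac

# Stand-in for the private key embedded in the camera processor (hypothetical).
CAMERA_SECRET = b"embedded-private-key-stand-in"

def sign_image(image_bytes):
    """Hash the image file, then 'sign' the hash; both are stored together."""
    digest = hashlib.sha256(image_bytes).digest()
    signature = hmac.new(CAMERA_SECRET, digest, hashlib.sha256).digest()
    return digest, signature

def authenticate(image_bytes, signature):
    """Recompute the image hash and compare against the signed one; a single
    flipped bit in the file changes the hash completely, exposing tampering."""
    expected = hmac.new(CAMERA_SECRET,
                        hashlib.sha256(image_bytes).digest(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)
```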

  18. Coi-wiz: An interactive computer wizard for analyzing cardiac optical signals.

    PubMed

    Yuan, Xiaojing; Uyanik, Ilyas; Situ, Ning; Xi, Yutao; Cheng, Jie

    2009-01-01

A number of revolutionary techniques have been developed for cardiac electrophysiology research to better study the various arrhythmia mechanisms that can enhance ablating strategies for cardiac arrhythmias. Once three-dimensional, high-resolution cardiac optical imaging data are acquired, it is time consuming to go through them manually and try to identify the patterns associated with various arrhythmia symptoms. In this paper, we present an interactive computer wizard that helps cardiac electrophysiology researchers visualize and analyze high-resolution cardiac optical imaging data. The wizard provides a file interface that accommodates different file formats. A series of analysis algorithms outputs waveforms, activation and action potential maps (after spatial and temporal filtering), velocity fields, and heterogeneity measures. The interactive GUI allows the researcher to identify the region of interest in both the spatial and temporal domains, thus enabling the study of different heart chambers of their choice.

  19. VizieR Online Data Catalog: S4G disk galaxies stellar mass distribution (Diaz-Garcia+, 2016)

    NASA Astrophysics Data System (ADS)

    Diaz-Garcia, S.; Salo, H.; Laurikainen, E.

    2016-08-01

We provide the tabulated radial profiles of mean stellar mass density in bins of total stellar mass (M*, from Munoz-Mateos et al., 2015ApJS..219....3M) and Hubble stage (T, from Buta et al., 2015, Cat. J/ApJS/217/32). We used the 3.6um imaging for the non-highly inclined galaxies (i<65° in Salo et al., 2015, Cat. J/ApJS/219/4) in the Spitzer Survey of Stellar Structure in Galaxies (Sheth et al., 2010, Cat. J/PASP/122/1397). We also provide the averaged stellar contribution to the circular velocity, computed from the radial force profiles of individual galaxies (from Diaz-Garcia et al., 2016A&A...587A.160D). In addition, we provide the FITS files of the bar synthetic images (2D) obtained by stacking images rescaled to a common frame determined by the bar parameters (from Herrera-Endoqui et al., 2015A&A...582A..86H) in bins of M*, T, and galaxy family (from Buta et al. 2015). For the bar stacks, we also tabulate the azimuthally averaged luminosity profiles, the tangential-to-radial forces (Qt), the m=2,4 Fourier amplitudes (A2,A4), and the radial profiles of ellipticity and the b4 parameter. The FITS files (.fit) of the bar stacks are in units of flux (MJy/sr). The pixel size is 0.02 x rbar, where rbar refers to the bar radius. The images are cut at a radius of 3 x rbar. In every folder, the terminology used to label the ".dat" and ".fit" files, in relation to their content, is the following: a) The term "starmass" is used when the binning of the sample was based on the total stellar mass of the galaxy, from Munoz-Mateos et al. (2015ApJS..219....3M). We indicate the common logarithm of the boundaries: (8.5, 9, 9.5, 10, 10.5, 11). b) The term "ttype" is used when the binning of the sample was based on the Hubble stage of the galaxy (-3,0,3,5,8,11), from Buta et al. (2015, Cat. J/ApJS/217/32). c) The term "family" is used when the binning of the sample was based on the morphological family of the galaxy (AB,AB,AB,B), from Buta et al. (2015, Cat. J/ApJS/217/32).
d) The term "hr" is used when the 1-D luminosity stacks were obtained in a common frame determined by the scalelength of the disks (from Salo et al., 2015, Cat. J/ApJS/219/4). e) The term "kpc" is used when the 1-D luminosity stacks were obtained in a common frame determined by the disk extent in physical units (kpc). f) The term "barred" is used when only barred galaxies are stacked (according to Buta et al., 2015, Cat. J/ApJS/217/32). g) The term "unbarred" is used when only non-barred galaxies are stacked. IDL reading:
readcol,'luminositydiskkpc/luminositydiskkpc_*.dat',Radius,Steldens,bSteldens,BSteldens,SuBr,bSuBr,BSuBr,Nsample,format='F,F,F,F,F,F,F,F',delim=' '
readcol,'luminositydiskhr/luminositydiskhr_*.dat',Radius,Steldens,bSteldens,BSteldens,SuBr,bSuBr,BSuBr,Nsample,format='F,F,F,F,F,F,F,F',delim=' '
readcol,'vrotdiskkpc/vrotdiskkpc_*.dat',Radius,Vrotmean,Vrotmedian,Sigma,Nsample,format='F,F,F,F,F',delim=' '
readcol,'vrotdiskhr/vrotdiskhr_*.dat',Radius,Vrotmean,Vrotmedian,Sigma,Nsample,format='F,F,F,F,F',delim=' '
readcol,'luminositybar/barsradialluminosity*.dat',Radius,Steldens,SuBr,format='F,F,F',delim=' '
readcol,'forceprofbar/barsradialforces_*.dat',Radius,Qt,A2,A4,format='F,F,F,F',delim=' '
readcol,'ellipseprofbar/barsradialellipse_*.dat',Radius,ellipticity,b4,format='F,F,F',delim=' '
fitsread,'barstackfits/barstack_*.fit',image
(10 data files).
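The IDL readcol calls above simply pull whitespace-delimited numeric columns out of the ".dat" files; a minimal Python equivalent (function name ours) might look like:

```python
def readcol(lines, ncols):
    """Read whitespace-delimited numeric columns (the role IDL's readcol
    plays above), returning one list per column; rows whose field count
    differs from ncols, or that fail to parse as floats, are skipped."""
    columns = [[] for _ in range(ncols)]
    for line in lines:
        parts = line.split()
        if len(parts) != ncols:
            continue
        try:
            values = [float(p) for p in parts]
        except ValueError:
            continue
        for col, v in zip(columns, values):
            col.append(v)
    return columns
```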

  20. Construction of three-dimensional tooth model by micro-computed tomography and application for data sharing.

    PubMed

    Kato, A; Ohno, N

    2009-03-01

The study of dental morphology is essential in terms of phylogeny. Advances in three-dimensional (3D) measurement devices have enabled us to make 3D images of teeth without destruction of samples. However, raw fundamental data on tooth shape requires complex equipment and techniques. An online database of 3D teeth models is therefore indispensable. We aimed to explore the basic methodology for constructing 3D teeth models, with application for data sharing. Geometric information on the human permanent upper left incisor was obtained using micro-computed tomography (micro-CT). Enamel, dentine, and pulp were segmented by thresholding of different gray-scale intensities. Segmented data were separately exported in STereo-Lithography Interface Format (STL). STL data were converted to Wavefront OBJ (OBJect), as many 3D computer graphics programs support the Wavefront OBJ format. Data were also applied to QuickTime Virtual Reality (QTVR) format, which allows the image to be viewed from any direction. In addition to Wavefront OBJ and QTVR data, the original CT series were provided as 16-bit Tagged Image File Format (TIFF) images on the website. In conclusion, 3D teeth models were constructed in general-purpose data formats, using micro-CT and commercially available programs. Teeth models that can be used widely would benefit all those who study dental morphology.
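The STL-to-OBJ conversion step can be sketched for the ASCII STL flavor; this toy converter (ours, not the authors' toolchain) deduplicates vertices and emits 1-based OBJ face indices:

```python
def stl_to_obj(stl_text):
    """Convert ASCII STL facets ('vertex x y z' triples between facet /
    endfacet markers) to Wavefront OBJ text with 'v' and 'f' records."""
    verts, index, faces = [], {}, []
    face = []
    for line in stl_text.splitlines():
        parts = line.split()
        if parts[:1] == ["vertex"]:
            v = tuple(float(x) for x in parts[1:4])
            if v not in index:          # deduplicate shared vertices
                index[v] = len(verts) + 1
                verts.append(v)
            face.append(index[v])
        elif parts[:1] == ["endfacet"]:
            faces.append(tuple(face))
            face = []
    out = [f"v {x:g} {y:g} {z:g}" for x, y, z in verts]
    out += [f"f {a} {b} {c}" for a, b, c in faces]
    return "\n".join(out) + "\n"
```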

  1. 18 CFR 50.3 - Applications/pre-filing; rules and format.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... filings must be signed in compliance with § 385.2005 of this chapter. (e) The Commission will conduct a... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Applications/pre-filing... INTERSTATE ELECTRIC TRANSMISSION FACILITIES § 50.3 Applications/pre-filing; rules and format. (a) Filings are...

  2. Manual for Getdata Version 3.1: a FORTRAN Utility Program for Time History Data

    NASA Technical Reports Server (NTRS)

    Maine, Richard E.

    1987-01-01

    This report documents version 3.1 of the GetData computer program. GetData is a utility program for manipulating files of time history data, i.e., data giving the values of parameters as functions of time. The most fundamental capability of GetData is extracting selected signals and time segments from an input file and writing the selected data to an output file. Other capabilities include converting file formats, merging data from several input files, time skewing, interpolating to common output times, and generating calculated output signals as functions of the input signals. This report also documents the interface standards for the subroutines used by GetData to read and write the time history files. All interface to the data files is through these subroutines, keeping the main body of GetData independent of the precise details of the file formats. Different file formats can be supported by changes restricted to these subroutines. Other computer programs conforming to the interface standards can call the same subroutines to read and write files in compatible formats.
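GetData's design, where all file access goes through interface subroutines so the main body stays independent of file-format details, maps naturally onto an abstract reader interface. A hypothetical Python analogy (GetData itself is FORTRAN, and these names are ours):

```python
from abc import ABC, abstractmethod

class TimeHistoryReader(ABC):
    """The role of GetData's read subroutines: the main program sees only
    this interface, so supporting a new on-disk format means writing a
    new subclass, not touching the main program."""

    @abstractmethod
    def signals(self):
        """Names of the signals (parameters) available in the file."""

    @abstractmethod
    def read(self, signal, t0, t1):
        """(time, value) samples of one signal in the time segment [t0, t1]."""

class CsvReader(TimeHistoryReader):
    # Hypothetical concrete format: "time,name,value" text rows.
    def __init__(self, rows):
        self._rows = [r.split(",") for r in rows]

    def signals(self):
        return sorted({name for _, name, _ in self._rows})

    def read(self, signal, t0, t1):
        return [(float(t), float(v)) for t, name, v in self._rows
                if name == signal and t0 <= float(t) <= t1]
```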

  3. Data Recovery Effort of Nimbus Era Observations by the NASA GES DISC

    NASA Technical Reports Server (NTRS)

    Johnson, James; Esfandiari, Ed; Zamkoff, Emily; Gerasimov, Irina; Al-Jazrawi, Atheer; Alcott, Gary

    2017-01-01

    NASA launched seven Nimbus meteorological satellites in the 1960s and 70s. These satellites carried instruments for making observations of the Earth in the visible, infrared, ultraviolet, and microwave wavelengths. The original data archive consisted of a combination of magnetic tapes and various film media. As these media are well past their expected end of life, the valuable data they contain are now being migrated to the GES DISC modern online archive. The process involves recovering the digital data files from the tapes as well as scanning images of the data from film strips. This presentation will address the status and challenges of recovering the Nimbus data. The old data products were written on now obsolete hardware systems and outdated file formats. They lack any metadata standards and each product is often written in its own proprietary file structure. This requires creating metadata by reading the contents of the old data files. The job is tedious and laborious, as documentation may be incomplete, data files and tapes are sometimes corrupted, or were improperly copied at the time they were created.

  4. Digital Camera with Apparatus for Authentication of Images Produced from an Image File

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L. (Inventor)

    1996-01-01

    A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely related to the private key that digital data encrypted with the private key may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The authenticating apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match. Other techniques to address time-honored methods of deception, such as attaching false captions or inducing forced perspectives, are included.

  5. Goddard high resolution spectrograph science verification and data analysis

    NASA Technical Reports Server (NTRS)

    1992-01-01

The data analysis performed to support the Orbital Verification (OV) and Science Verification (SV) of the GHRS was in the areas of the Digicon detector's performance and stability, wavelength calibration, and geomagnetically induced image motion. The results of the analyses are briefly described; detailed results are given in the form of attachments. Specialized software was developed for the analyses. Calibration files were formatted according to the specifications in a Space Telescope Science report. IRAS images of the Large Magellanic Cloud were restored using a blocked iterative algorithm. The algorithm works with the raw data scans without regridding or interpolating the data on an equally spaced image grid.

  6. Compressing images for the Internet

    NASA Astrophysics Data System (ADS)

    Beretta, Giordano B.

    1998-01-01

    The World Wide Web has rapidly become the hot new mass communications medium. Content creators are using similar design and layout styles as in printed magazines, i.e., with many color images and graphics. The information is transmitted over plain telephone lines, where the speed/price trade-off is much more severe than in the case of printed media. The standard design approach is to use palettized color and to limit as much as possible the number of colors used, so that the images can be encoded with a small number of bits per pixel using the Graphics Interchange Format (GIF) file format. The World Wide Web standards contemplate a second data encoding method (JPEG) that allows color fidelity but usually performs poorly on text, which is a critical element of information communicated on this medium. We analyze the spatial compression of color images and describe a methodology for using the JPEG method in a way that allows a compact representation while preserving full color fidelity.

  7. CytometryML and other data formats

    NASA Astrophysics Data System (ADS)

    Leif, Robert C.

    2006-02-01

    Cytology automation and research will be enhanced by the creation of a common data format. This data format would provide the pathology and research communities with a uniform way for annotating and exchanging images, flow cytometry, and associated data. This specification and/or standard will include descriptions of the acquisition device, staining, the binary representations of the image and list-mode data, the measurements derived from the image and/or the list-mode data, and descriptors for clinical/pathology and research. An international, vendor-supported, non-proprietary specification will allow pathologists, researchers, and companies to develop and use image capture/analysis software, as well as list-mode analysis software, without worrying about incompatibilities between proprietary vendor formats. Presently, efforts to create specifications and/or descriptions of these formats include the Laboratory Digital Imaging Project (LDIP) Data Exchange Specification; extensions to the Digital Imaging and Communications in Medicine (DICOM); Open Microscopy Environment (OME); Flowcyt, an extension to the present Flow Cytometry Standard (FCS); and CytometryML. The feasibility of creating a common data specification for digital microscopy and flow cytometry in a manner consistent with its use for medical devices and interoperability with both hospital information and picture archiving systems has been demonstrated by the creation of the CytometryML schemas. The feasibility of creating a software system for digital microscopy has been demonstrated by the OME. CytometryML consists of schemas that describe instruments and their measurements. These instruments include digital microscopes and flow cytometers. Optical components including the instruments' excitation and emission parts are described. The description of the measurements made by these instruments includes the tagged molecule, data acquisition subsystem, and the format of the list-mode and/or image data. 
Many of the CytometryML data-types are based on the Digital Imaging and Communications in Medicine (DICOM). Binary files for images and list-mode data have been created and read.

  8. A mass spectrometry proteomics data management platform.

    PubMed

    Sharma, Vagisha; Eng, Jimmy K; Maccoss, Michael J; Riffle, Michael

    2012-09-01

    Mass spectrometry-based proteomics is increasingly being used in biomedical research. These experiments typically generate a large volume of highly complex data, and the volume and complexity are only increasing with time. There exist many software pipelines for analyzing these data (each typically with its own file formats), and as technology improves, these file formats change and new formats are developed. Files produced from these myriad software programs may accumulate on hard disks or tape drives over time, with older files being rendered progressively more obsolete and unusable with each successive technical advancement and data format change. Although initiatives exist to standardize the file formats used in proteomics, they do not address the core failings of a file-based data management system: (1) files are typically poorly annotated experimentally, (2) files are "organically" distributed across laboratory file systems in an ad hoc manner, (3) files formats become obsolete, and (4) searching the data and comparing and contrasting results across separate experiments is very inefficient (if possible at all). Here we present a relational database architecture and accompanying web application dubbed Mass Spectrometry Data Platform that is designed to address the failings of the file-based mass spectrometry data management approach. The database is designed such that the output of disparate software pipelines may be imported into a core set of unified tables, with these core tables being extended to support data generated by specific pipelines. Because the data are unified, they may be queried, viewed, and compared across multiple experiments using a common web interface. Mass Spectrometry Data Platform is open source and freely available at http://code.google.com/p/msdapl/.
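The core idea described above, a set of unified tables that disparate pipelines import into so results can be queried across experiments, can be sketched with an in-memory SQLite schema (table and column names here are illustrative, not MSDaPl's actual schema):

```python
import sqlite3

# Two "pipelines" land in the same core tables, so one query spans both.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE search (id INTEGER PRIMARY KEY, pipeline TEXT, sample TEXT);
    CREATE TABLE psm (search_id INTEGER REFERENCES search(id),
                      peptide TEXT, score REAL);
""")
con.execute("INSERT INTO search VALUES (1, 'SEQUEST', 'plasma_01')")
con.execute("INSERT INTO search VALUES (2, 'Mascot',  'plasma_01')")
con.executemany("INSERT INTO psm VALUES (?, ?, ?)",
                [(1, 'PEPTIDER', 0.91), (2, 'PEPTIDER', 0.88)])

# Compare the same peptide identification across two different pipelines,
# something a pile of per-pipeline files cannot answer efficiently.
rows = con.execute("""
    SELECT s.pipeline, p.score
    FROM psm p JOIN search s ON s.id = p.search_id
    WHERE p.peptide = 'PEPTIDER' ORDER BY s.pipeline
""").fetchall()
```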

  9. Seismicity of Afghanistan and vicinity

    USGS Publications Warehouse

    Dewey, James W.

    2006-01-01

    This publication describes the seismicity of Afghanistan and vicinity and is intended for use in seismic hazard studies of that nation. Included are digital files with information on earthquakes that have been recorded in Afghanistan and vicinity through mid-December 2004. Chapter A provides an overview of the seismicity and tectonics of Afghanistan and defines the earthquake parameters included in the 'Summary Catalog' and the 'Summary of Macroseismic Effects.' Chapter B summarizes compilation of the 'Master Catalog' and 'Sub-Threshold Catalog' and documents their formats. The 'Summary Catalog' itself is presented as a comma-delimited ASCII file, the 'Summary of Macroseismic Effects' is presented as an html file, and the 'Master Catalog' and 'Sub-Threshold Catalog' are presented as flat ASCII files. Finally, this report includes as separate plates a digital image of a map of epicenters of earthquakes occurring since 1964 (Plate 1) and a representation of areas of damage or strong shaking from selected past earthquakes in Afghanistan and vicinity (Plate 2).

  10. Mass spectrometer output file format mzML.

    PubMed

    Deutsch, Eric W

    2010-01-01

    Mass spectrometry is an important technique for analyzing proteins and other biomolecular compounds in biological samples. Each of the vendors of these mass spectrometers uses a different proprietary binary output file format, which has hindered data sharing and the development of open source software for downstream analysis. The solution has been to develop, with the full participation of academic researchers as well as software and hardware vendors, an open XML-based format for encoding mass spectrometer output files, and then to write software to use this format for archiving, sharing, and processing. This chapter presents the various components and information available for this format, mzML. In addition to the XML schema that defines the file structure, a controlled vocabulary provides clear terms and definitions for the spectral metadata, and a semantic validation rules mapping file allows the mzML semantic validator to ensure that an mzML document complies with one of several levels of requirements. Complete documentation and example files ensure that the format may be uniformly implemented. At the time of release, there already existed several implementations of the format and vendors have committed to supporting the format in their products.
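    The pairing of XML structure with controlled-vocabulary semantics can be seen in miniature: each cvParam element carries a PSI-MS accession and a human-readable term. The fragment below is a simplified illustration, not a schema-valid mzML document (the accessions MS:1000511 "ms level" and MS:1000579 "MS1 spectrum" are real PSI-MS terms):

```python
import xml.etree.ElementTree as ET

# Simplified mzML-like fragment: spectral metadata is expressed as
# <cvParam> elements pointing into the PSI-MS controlled vocabulary.
fragment = """
<spectrum index="0" defaultArrayLength="3">
  <cvParam cvRef="MS" accession="MS:1000511" name="ms level" value="1"/>
  <cvParam cvRef="MS" accession="MS:1000579" name="MS1 spectrum" value=""/>
</spectrum>
"""
spectrum = ET.fromstring(fragment)
# Collect the controlled-vocabulary terms attached to this spectrum.
params = {p.get("name"): p.get("value") for p in spectrum.iter("cvParam")}
print(params["ms level"])  # 1
```

    Software reading the file keys off the accession or term name rather than guessing at free-text metadata, which is what makes uniform implementation possible.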

  11. NDSI products system based on Hadoop platform

    NASA Astrophysics Data System (ADS)

    Zhou, Yan; Jiang, He; Yang, Xiaoxia; Geng, Erhui

    2015-12-01

    Snow is water stored on Earth in solid form and plays an important role in human life. Satellite remote sensing, with its periodic, large-scale, comprehensive, objective, and timely coverage, is well suited to snow extraction. As remote sensing technology develops, data are increasingly acquired from multiple platforms, sensors, and viewing angles, and the demand for compute-intensive processing of these data grows accordingly. Current production systems for remote sensing products, however, mostly run in a serial mode, are aimed at professional remote sensing researchers, and rarely support automatic or semi-automatic production. Faced with massive volumes of remote sensing data, such serial systems are too inefficient to process the data in a timely manner. To improve the production efficiency of NDSI products and to meet the demand for timely, efficient processing of large-scale remote sensing data, this paper builds an NDSI product production system on the Hadoop platform, consisting of a remote sensing image management module, an NDSI production module, and a system service module. The main research contents and results are: (1) Remote sensing image management module: comprises image import and image metadata management. Base IRS images and NDSI product images (the output of production tasks) are imported into the HDFS file system; the corresponding orbit row/column numbers, maximum/minimum longitude and latitude, product date, HDFS storage path, Hadoop task ID (for NDSI products), and other metadata are read, thumbnails are created, each record is assigned a unique ID, and the records are imported into the base/product image metadata database. (2) NDSI production module: comprises index calculation plus production task submission and monitoring. HDF images associated with a production task are read as byte streams and parsed into Product objects with the Beam library; production tasks are executed with the MapReduce distributed framework while task status is monitored; when a production task completes, the image management module is called to store the NDSI products. (3) System service module: provides image search and NDSI product download. Given image metadata attributes described in JSON, it returns the sequence IDs of matching images in the HDFS file system; given a MapReduce task ID, it packages that task's output NDSI products into a ZIP file and returns a download link. (4) System evaluation: massive remote sensing data were downloaded and processed into NDSI products to test performance; the results show that the system has high extensibility, strong fault tolerance, and fast production speed, and that the image processing results are highly accurate.
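    The per-pixel quantity computed in the production module is the Normalized Difference Snow Index. A pure-Python per-pixel sketch; which bands serve as "green" and "shortwave infrared" depends on the sensor, and a real MapReduce job would process whole raster tiles rather than scalars:

```python
# NDSI = (Green - SWIR) / (Green + SWIR). Snow reflects strongly in the
# green band and absorbs in the shortwave infrared, so snow pixels give
# high NDSI values (commonly above ~0.4).
def ndsi(green, swir):
    """NDSI for one pixel; returns 0.0 when both reflectances are zero."""
    if green + swir == 0:
        return 0.0
    return (green - swir) / (green + swir)

print(round(ndsi(0.8, 0.1), 2), round(ndsi(0.3, 0.25), 2))  # 0.78 0.09
```

    A map task would apply this function to every pixel of its input tile; the reduce side then only has to collect and store the finished NDSI tiles.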

  12. The Hopkins Ultraviolet Telescope: The Final Archive

    NASA Astrophysics Data System (ADS)

    Dixon, William V.; Blair, William P.; Kruk, Jeffrey W.; Romelfanger, Mary L.

    2013-04-01

    The Hopkins Ultraviolet Telescope (HUT) was a 0.9 m telescope and moderate-resolution (Δλ = 3 Å) far-ultraviolet (820-1850 Å) spectrograph that flew twice on the space shuttle, in 1990 December (Astro-1, STS-35) and 1995 March (Astro-2, STS-67). The resulting spectra were originally archived in a nonstandard format that lacked important descriptive metadata. To increase their utility, we have modified the original data-reduction software to produce a new and more user-friendly data product, a time-tagged photon list similar in format to the Intermediate Data Files (IDFs) produced by the Far Ultraviolet Spectroscopic Explorer calibration pipeline. We have transferred all relevant pointing and instrument-status information from locally-archived science and engineering databases into new FITS header keywords for each data set. Using this new pipeline, we have reprocessed the entire HUT archive from both missions, producing a new set of calibrated spectral products in a modern FITS format that is fully compliant with Virtual Observatory requirements. For each exposure, we have generated quick-look plots of the fully-calibrated spectrum and associated pointing history information. Finally, we have retrieved from our archives HUT TV guider images, which provide information on aperture positioning relative to guide stars, and converted them into FITS-format image files. All of these new data products are available in the new HUT section of the Mikulski Archive for Space Telescopes (MAST), along with historical and reference documents from both missions. In this article, we document the improved data-processing steps applied to the data and show examples of the new data products.

  13. The Hopkins Ultraviolet Telescope: The Final Archive

    NASA Technical Reports Server (NTRS)

    Dixon, William V.; Blair, William P.; Kruk, Jeffrey W.; Romelfanger, Mary L.

    2013-01-01

    The Hopkins Ultraviolet Telescope (HUT) was a 0.9 m telescope and moderate-resolution (Δλ = 3 Å) far-ultraviolet (820-1850 Å) spectrograph that flew twice on the space shuttle, in 1990 December (Astro-1, STS-35) and 1995 March (Astro-2, STS-67). The resulting spectra were originally archived in a nonstandard format that lacked important descriptive metadata. To increase their utility, we have modified the original data-reduction software to produce a new and more user-friendly data product, a time-tagged photon list similar in format to the Intermediate Data Files (IDFs) produced by the Far Ultraviolet Spectroscopic Explorer calibration pipeline. We have transferred all relevant pointing and instrument-status information from locally-archived science and engineering databases into new FITS header keywords for each data set. Using this new pipeline, we have reprocessed the entire HUT archive from both missions, producing a new set of calibrated spectral products in a modern FITS format that is fully compliant with Virtual Observatory requirements. For each exposure, we have generated quick-look plots of the fully-calibrated spectrum and associated pointing history information. Finally, we have retrieved from our archives HUT TV guider images, which provide information on aperture positioning relative to guide stars, and converted them into FITS-format image files. All of these new data products are available in the new HUT section of the Mikulski Archive for Space Telescopes (MAST), along with historical and reference documents from both missions. In this article, we document the improved data-processing steps applied to the data and show examples of the new data products.

  14. The NJOY Nuclear Data Processing System, Version 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macfarlane, Robert; Muir, Douglas W.; Boicourt, R. M.

    The NJOY Nuclear Data Processing System, version 2016, is a comprehensive computer code package for producing pointwise and multigroup cross sections and related quantities from evaluated nuclear data in the ENDF-4 through ENDF-6 legacy card-image formats. NJOY works with evaluated files for incident neutrons, photons, and charged particles, producing libraries for a wide variety of particle transport and reactor analysis codes.

  15. Batch Conversion of 1-D FITS Spectra to Common Graphical Display Files

    NASA Astrophysics Data System (ADS)

    MacConnell, Darrell J.; Patterson, A. P.; Wing, R. F.; Costa, E.; Jedrzejewski, R. I.

    2008-09-01

    Authors DJM, RFW, and EC have accumulated about 1000 spectra of cool stars from CTIO, ESO, and LCO over the interval 1985 to 1994 and processed them with the standard IRAF tasks into FITS files of normalized intensity vs. wavelength. With the growth of the Web as a means of exchanging and preserving scientific information, we desired to put the spectra into a Web-readable format. We have searched without success sites such as the Goddard FITS Image Viewer page, http://fits.gsfc.nasa.gov/fits_viewer.html, for a program to convert a large number of 1-d stellar spectra from FITS format into common formats such as PDF, PS, or PNG. Author APP has written a Python script to do this using the PyFITS module and plotting routines from Pylab. The program determines the wavelength calibration using header keywords and creates PNG plots with a legend read from a CSV file that may contain the star name, position, spectral type, etc. It could readily be adapted to perform almost any kind of simple batch processing of astronomical data. The program may be obtained from the first author (jack@stsci.edu). Support for DJM from the research program for CSC astronomers at STScI is gratefully acknowledged. The Space Telescope Science Institute is operated by the Association of Universities for Research in Astronomy Inc. under NASA contract NAS 5-26555.
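    The wavelength calibration the script derives from header keywords amounts to evaluating a linear dispersion solution (CRVAL1: wavelength at the reference pixel, CRPIX1: 1-indexed reference pixel, CDELT1: dispersion per pixel). A pure-Python sketch with a hypothetical header; the actual script reads real headers via PyFITS (today, astropy.io.fits):

```python
# Hypothetical 1-D spectrum header; FITS pixel indices are 1-based.
header = {"CRVAL1": 4000.0, "CRPIX1": 1.0, "CDELT1": 2.0, "NAXIS1": 5}

def wavelengths(hdr):
    """Wavelength (Angstroms) at each pixel of a linearly-dispersed spectrum."""
    return [hdr["CRVAL1"] + (i + 1 - hdr["CRPIX1"]) * hdr["CDELT1"]
            for i in range(hdr["NAXIS1"])]

print(wavelengths(header))  # [4000.0, 4002.0, 4004.0, 4006.0, 4008.0]
```

    With the wavelength axis in hand, each spectrum is just an (x, y) pair to hand to a plotting routine, which is why the batch conversion to PNG/PDF/PS reduces to a loop over files.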

  16. A software to digital image processing to be used in the voxel phantom development.

    PubMed

    Vieira, J W; Lima, F R A

    2009-11-15

    Anthropomorphic models used in computational dosimetry, also called phantoms, are based on digital images recorded by scanning real people with Computed Tomography (CT) or Magnetic Resonance Imaging (MRI). Voxel phantom construction requires computational processing for transformation of image formats, stacking of two-dimensional (2-D) images into three-dimensional (3-D) matrices, image sampling and quantization, image enhancement, restoration, and segmentation, among other tasks. A researcher in computational dosimetry will rarely find all of these capabilities in a single piece of software, and this difficulty almost always slows the pace of research or leads to the use, sometimes inadequate, of alternative tools. The need to integrate the several tasks mentioned above to obtain an image usable in a computational exposure model motivated the development of the Digital Image Processing (DIP) software, mainly to solve particular problems in dissertations and theses developed by members of the Grupo de Pesquisa em Dosimetria Numérica (GDN/CNPq). Because of this particular objective, the software uses Portuguese in its implementation and interfaces. This paper presents the second version of DIP, whose main changes are a more formal organization of menus and menu items and a new menu for digital image segmentation. Currently, DIP contains the menus Fundamentos, Visualizações, Domínio Espacial, Domínio de Frequências, Segmentações, and Estudos. Each menu contains items and sub-items whose functions usually take an image as input and produce an image or an attribute as output. DIP reads, edits, and writes binary files containing the 3-D matrix corresponding to a stack of axial images of a given geometry, which can be a human body or another volume of interest. It can also read any common computational image format and perform conversions. When a task produces only a single output image, it is saved as a JPEG file using the Windows default; when it produces an image stack, the output binary file is called an SGI file (Simulações Gráficas Interativas, Interactive Graphic Simulations), an acronym already used in other publications of the GDN/CNPq.

  17. A Pyramid Scheme for Constructing Geologic Maps on Geobrowsers

    NASA Astrophysics Data System (ADS)

    Whitmeyer, S. J.; de Paor, D. G.; Daniels, J.; Jeremy, N.; Michael, R.; Santangelo, B.

    2008-12-01

    Hundreds of geologic maps have been draped onto Google Earth (GE) using the ground overlay tag of Keyhole Markup Language (KML), and dozens have been published on academic and survey web pages as downloadable KML or KMZ (zipped KML) files. The vast majority of these are small KML docs that link to single, large - often very large - image files (JPEGs, TIFFs, etc.). Files that exceed 50 MB in size defeat the purpose of GE as a fast, interactive, and responsive virtual terrain medium. KML supports super-overlays (a.k.a. image pyramids), which break large graphic files into manageable tiles that load only when they are in the visible region at a sufficient level of detail (LOD), and several automatic tile-generating applications have been written. The process of exporting map data from applications such as ArcGIS® to KML format is becoming more manageable but still poses challenges. Complications arise, for example, because of differences between grid north at a point on a map and true north at the equivalent location on the virtual globe. In our recent field season, we devised ways of overcoming many of these obstacles in order to generate responsive, pannable, zoomable geologic maps in which data are layered in a pyramid structure similar to the image pyramid used for the default GE terrain. The structure of our KML code for each level of the pyramid is self-similar: (i) check whether the current tile is in the visible region, (ii) if so, render the current overlay, (iii) add the current data level, and (iv) using four network links, check the visibility and LOD of four nested tiles. By using this pyramid structure we provide the user with access to geologic and map data at multiple levels of observation. For example, when the viewpoint is distant, regional structures and stratigraphy (e.g., lithological groups and terrane boundaries) are visible. As the user zooms to lower elevations, formations and ultimately individual outcrops come into focus. The pyramid structure is ideally suited to geologic data, which tend to be unevenly exposed across the Earth's surface.
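    The four-step self-similar structure can be sketched as a generator of one pyramid tile's KML: a Region with an LOD gate, a GroundOverlay for the tile's own image, and four NetworkLinks (one per child quadrant) that load only when their region enters the view. Tag layout follows standard KML Region/NetworkLink conventions; the file names and the minLodPixels value are illustrative:

```python
def tile_kml(north, south, east, west, image, child_hrefs):
    """KML for one super-overlay tile: own Region + GroundOverlay plus
    four NetworkLinks, each gated by its child quadrant's Region."""
    def box(n, s, e, w):
        return (f"<north>{n}</north><south>{s}</south>"
                f"<east>{e}</east><west>{w}</west>")
    mid_lat, mid_lon = (north + south) / 2, (east + west) / 2
    quadrants = [(north, mid_lat, mid_lon, west), (north, mid_lat, east, mid_lon),
                 (mid_lat, south, mid_lon, west), (mid_lat, south, east, mid_lon)]
    links = "".join(
        f"<NetworkLink><Region><LatLonAltBox>{box(*q)}</LatLonAltBox>"
        "<Lod><minLodPixels>128</minLodPixels></Lod></Region>"
        f"<Link><href>{href}</href>"
        "<viewRefreshMode>onRegion</viewRefreshMode></Link></NetworkLink>"
        for q, href in zip(quadrants, child_hrefs))
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
            f"<Region><LatLonAltBox>{box(north, south, east, west)}</LatLonAltBox>"
            "<Lod><minLodPixels>128</minLodPixels><maxLodPixels>-1</maxLodPixels></Lod></Region>"
            f"<GroundOverlay><Icon><href>{image}</href></Icon>"
            f"<LatLonBox>{box(north, south, east, west)}</LatLonBox></GroundOverlay>"
            f"{links}</Document></kml>")

kml = tile_kml(38.5, 38.0, -78.0, -78.5, "tile_0_0.png",
               [f"tile_1_{i}.kml" for i in range(4)])
print(kml.count("<NetworkLink>"))  # 4
```

    Because each child KML file has the same shape, the pyramid recurses until the highest-resolution tiles are reached, and GE only ever fetches the tiles the viewer can actually see.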

  18. An easy and effective approach to manage radiologic portable document format (PDF) files using iTunes.

    PubMed

    Qian, Li Jun; Zhou, Mi; Xu, Jian Rong

    2008-07-01

    The objective of this article is to explain an easy and effective approach for managing radiologic files in portable document format (PDF) using iTunes. PDF files are widely used as a standard file format for electronic publications as well as for medical online documents. Unfortunately, there is a lack of powerful software to manage numerous PDF documents. In this article, we explain how to use the hidden function of iTunes (Apple Computer) to manage PDF documents as easily as managing music files.

  19. BOREAS Level 3-b AVHRR-LAC Imagery: Scaled At-sensor Radiance in LGSOWG Format

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nickeson, Jaime; Newcomer, Jeffrey A.; Cihlar, Josef

    2000-01-01

    The BOREAS Staff Science Satellite Data Acquisition Program focused on providing the research teams with the remotely sensed satellite data products they needed to compare and spatially extend point results. Data acquired from the AVHRR instrument on the NOAA-9, -11, -12, and -14 satellites were processed and archived for the BOREAS region by the MRSC and BORIS. The data were acquired by CCRS and were provided for use by BOREAS researchers. A few winter acquisitions are available, but the archive contains primarily growing season imagery. These gridded, at-sensor radiance image data cover the period of 30-Jan-1994 to 18-Sep-1996. Geographically, the data cover the entire 1,000-km x 1,000-km BOREAS region. The data are stored in binary image format files.

  20. Interactive 3D-PDF Presentations for the Simulation and Quantification of Extended Endoscopic Endonasal Surgical Approaches.

    PubMed

    Mavar-Haramija, Marija; Prats-Galino, Alberto; Méndez, Juan A Juanes; Puigdelívoll-Sánchez, Anna; de Notaris, Matteo

    2015-10-01

    A three-dimensional (3D) model of the skull base was reconstructed from the pre- and post-dissection head CT images and embedded in a Portable Document Format (PDF) file, which can be opened by freely available software and used offline. The CT images were segmented using a specific 3D software platform for biomedical data, and the resulting 3D geometrical models of anatomical structures were used for dual purpose: to simulate the extended endoscopic endonasal transsphenoidal approaches and to perform the quantitative analysis of the procedures. The analysis consisted of bone removal quantification and the calculation of quantitative parameters (surgical freedom and exposure area) of each procedure. The results are presented in three PDF documents containing JavaScript-based functions. The 3D-PDF files include reconstructions of the nasal structures (nasal septum, vomer, middle turbinates), the bony structures of the anterior skull base and maxillofacial region and partial reconstructions of the optic nerve, the hypoglossal and vidian canals and the internal carotid arteries. Alongside the anatomical model, axial, sagittal and coronal CT images are shown. Interactive 3D presentations were created to explain the surgery and the associated quantification methods step-by-step. The resulting 3D-PDF files allow the user to interact with the model through easily available software, free of charge and in an intuitive manner. The files are available for offline use on a personal computer and no previous specialized knowledge in informatics is required. The documents can be downloaded at http://hdl.handle.net/2445/55224 .

  1. A software platform for the analysis of dermatology images

    NASA Astrophysics Data System (ADS)

    Vlassi, Maria; Mavraganis, Vlasios; Asvestas, Panteleimon

    2017-11-01

    The purpose of this paper is to present a software platform, developed in the Python programming environment, that can be used for the processing and analysis of dermatology images. The platform provides the capability to read a file containing a dermatology image and supports image formats such as Windows bitmap, JPEG, JPEG 2000, Portable Network Graphics, and TIFF. Furthermore, it provides suitable tools for selecting, either manually or automatically, a region of interest (ROI) on the image; the automated selection of a ROI includes filtering for smoothing the image followed by thresholding. The proposed software platform has a friendly and clear graphical user interface and could be a useful second-opinion tool for a dermatologist. Furthermore, it could be used to classify images from other anatomical parts, such as breast or lung, after proper re-training of the classification algorithms.
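    The automated ROI selection described (smooth, then threshold) can be sketched in pure Python on a toy grayscale image; a real platform would use a library such as OpenCV or scikit-image, and this is not the paper's actual implementation:

```python
def mean_filter(img):
    """3x3 mean filter for smoothing (edge pixels kept as-is for brevity)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return out

def threshold(img, t):
    """Binary ROI mask: 1 where smoothed intensity exceeds the threshold."""
    return [[1 if v > t else 0 for v in row] for row in img]

# Toy 5x5 image with a bright, lesion-like blob in the center.
image = [[10, 10, 10, 10, 10],
         [10, 200, 210, 200, 10],
         [10, 205, 220, 205, 10],
         [10, 200, 210, 200, 10],
         [10, 10, 10, 10, 10]]
mask = threshold(mean_filter(image), 100)
print(sum(map(sum, mask)))  # 5 pixels survive as the ROI core
```

    Smoothing before thresholding suppresses isolated bright pixels, so the mask tracks the blob rather than noise; the surviving connected region is then offered to the user as the candidate ROI.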

  2. 76 FR 10045 - Notice of Proposed Information Collection: Comment Request; “eLogic Model” Grant Performance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-23

    ... recommends not more than 32 characters). DO NOT convert Word files or Excel files into PDF format. Converting... not allow HUD to enter data from the Excel files into a database. DO NOT save your logic model in .xlsm format. If necessary save as an Excel 97-2003 .xls format. Using the .xlsm format can result in a...

  3. [Digital teaching archive. Concept, implementation, and experiences in a university setting].

    PubMed

    Trumm, C; Dugas, M; Wirth, S; Treitl, M; Lucke, A; Küttner, B; Pander, E; Clevert, D-A; Glaser, C; Reiser, M

    2005-08-01

    Film-based teaching files require a substantial investment of human, logistic, and financial resources. The combination of computer and network technology facilitates the workflow integration of distributing radiologic teaching cases within an institution (intranet) or via the World Wide Web (Internet). A digital teaching file (DTF) should include the following basic functions: image import from different sources and in different formats, editing of imported images, uniform case classification, quality control (peer review), controlled access for different user groups (in-house and external), and an efficient retrieval strategy. The Portable Network Graphics (PNG) image format is especially suitable for DTFs because of several features: pixel support, 2D interlacing, gamma correction, and lossless compression. The American College of Radiology (ACR) "Index for Radiological Diagnoses" is hierarchically organized and thus an ideal classification system for a DTF. Computer-based training (CBT) in radiology is described in numerous publications, ranging from supplements to traditional learning methods to certified education via the Internet. Integrating graphical and interactive elements can increase the attractiveness of a CBT application but makes workflow integration of daily case input more difficult. Our DTF was built with established Internet instruments and integrated into a heterogeneous PACS/RIS environment. It facilitates quick transfer (DICOM_Send) of selected images to the DTF at the time of interpretation and access to the DTF application at any time, anywhere within the university hospital intranet, using a standard web browser. A DTF is a small but important building block in an institutional strategy of knowledge management.

  4. Integration of Geophysical and Geochemical Data

    NASA Astrophysics Data System (ADS)

    Yamagishi, Y.; Suzuki, K.; Tamura, H.; Nagao, H.; Yanaka, H.; Tsuboi, S.

    2006-12-01

    Integration of geochemical and geophysical data would give us new insight into the nature of the Earth and should advance our understanding of the dynamics of the Earth's interior and surface processes. Today, various geochemical and geophysical data are available on the Internet, stored in a variety of database systems. Each system is isolated and provides data in its own format. The goal of this study is to display geochemical and geophysical data obtained from such databases together, visually. We adopt Google Earth as the presentation tool. Google Earth is a virtual globe application provided free of charge by Google, Inc.; it displays the Earth's surface using satellite images with a mean resolution of ~15 m, and arbitrary graphical features can be displayed on it via KML-format files. We have developed software to convert geochemical and geophysical data to KML files. As a first step, we overlaid data from Georoc and PetDB, together with seismic tomography data, on Google Earth. Georoc and PetDB are both online database systems for geochemical data; the data format of Georoc is CSV and that of PetDB is Microsoft Excel, while the tomography data we used are plain text. The conversion software can process these different file formats. A geochemical datum (e.g., the compositional abundance of an element) is displayed as a three-dimensional column on the Earth's surface; the shape and color of the column indicate the element, and the size and color tone vary according to the element's abundance. The tomography data can be converted into a KML file for each depth. Overlaying geochemical data on tomography data should help us correlate internal temperature anomalies with geochemical anomalies observed at the surface of the Earth. Our tool can convert any geophysical or geochemical data to KML as long as the data are associated with a longitude and latitude. We are going to support more geophysical data formats, and we are currently trying to obtain scientific insights into the Earth's interior based on the combined view of geophysical and geochemical data on Google Earth.
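    The CSV-to-KML conversion idea can be sketched as follows: one geochemical measurement becomes an extruded KML "column" whose height encodes abundance. The field names and the height scale below are illustrative inventions, not the actual Georoc export layout:

```python
import csv, io

# One hypothetical Georoc-style CSV record.
ROW = "latitude,longitude,SiO2_wt_percent\n-19.5,-169.0,49.8\n"

def record_to_placemark(rec, scale=10000):
    """One geochemical record -> a KML Placemark extruded into a column
    whose height (in meters) is proportional to the abundance."""
    height = round(float(rec["SiO2_wt_percent"]) * scale)
    return ("<Placemark><Point><extrude>1</extrude>"
            "<altitudeMode>relativeToGround</altitudeMode>"
            f"<coordinates>{rec['longitude']},{rec['latitude']},{height}"
            "</coordinates></Point></Placemark>")

rec = next(csv.DictReader(io.StringIO(ROW)))
print(record_to_placemark(rec))
```

    Wrapping such Placemarks in a KML Document gives Google Earth a forest of columns whose heights can be compared at a glance against a tomography overlay.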

  5. MACSIGMA0 - MACINTOSH TOOL FOR ANALYZING JPL AIRSAR, ERS-1, JERS-1, AND MAGELLAN MIDR DATA

    NASA Technical Reports Server (NTRS)

    Norikane, L.

    1994-01-01

    MacSigma0 is an interactive tool for the Macintosh which allows you to display and make computations from radar data collected by the following sensors: the JPL AIRSAR, ERS-1, JERS-1, and Magellan. The JPL AIRSAR system is a multi-polarimetric airborne synthetic aperture radar developed and operated by the Jet Propulsion Laboratory. It includes the single-frequency L-band sensor mounted on the NASA CV990 aircraft and its replacement, the multi-frequency P-, L-, and C-band sensors mounted on the NASA DC-8. MacSigma0 works with data in the standard JPL AIRSAR output product format, the compressed Stokes matrix format. ERS-1 and JERS-1 are single-frequency, single-polarization spaceborne synthetic aperture radars launched by the European Space Agency and NASDA, respectively. To be usable by MacSigma0, the data must have been processed at the Alaska SAR Facility and must be in the "low-resolution" format. Magellan is a spacecraft mission to map the surface of Venus with imaging radar. The project is managed by the Jet Propulsion Laboratory. The spacecraft carries a single-frequency, single-polarization synthetic aperture radar. MacSigma0 works with framelets of the standard MIDR CD-ROM data products. MacSigma0 provides four basic functions: synthesis of images (if necessary), statistical analysis of selected areas, analysis of corner reflectors as a calibration measure (if appropriate and possible), and informative mouse tracking. For instance, the JPL AIRSAR data can be used to synthesize a variety of images such as a total power image. The total power image displays the sum of the polarized and unpolarized components of the backscatter for each pixel. Other images which can be synthesized are HH, HV, VV, RL, RR, HHVV*, HHHV*, HVVV*, HHVV* phase and correlation coefficient images. For the complex and phase images, phase is displayed using color and magnitude is displayed using intensity. MacSigma0 can also be used to compute statistics from within a selected area.
The statistics computed depend on the image type. For JPL AIRSAR data, the HH, HV, VV, HHVV* phase, and correlation coefficient means and standard deviation measures are calculated. The mean, relative standard deviation, minimum, and maximum values are calculated for all other data types. A histogram of the selected area is also calculated and displayed. The selected area can be rectangular, linear, or polygonal in shape. The user is allowed to select multiple rectangular areas, but not multiple linear or polygonal areas. The statistics and histogram are displayed to the user and can either be printed or saved as a text file. MacSigma0 can also be used to analyze corner reflectors as a measure of the calibration for JPL AIRSAR, ERS-1, and JERS-1 data types. It computes a theoretical radar cross section and the actual radar cross section for a selected trihedral corner reflector. The theoretical cross section, measured cross section, their ratio in dBs, and other information are displayed to the user and can be saved into a text file. For ERS-1, JERS-1, and Magellan data, MacSigma0 simultaneously displays pixel location in data coordinates and in latitude, longitude coordinates. It also displays sigma0, the incidence angle (for Magellan data), the original pixel value (for Magellan data), and the noise power value (for ERS-1 and JERS-1 data). Grey scale computed images can be saved in a byte format (a headerless format which saves the image as a string of byte values) or a PICT format (a standard format readable by other image processing programs for the Macintosh). Images can also be printed. MacSigma0 is written in C-language for use on Macintosh series computers. The minimum configuration requirements for MacSigma0 are System 6.0, Finder 6.1, 1Mb of RAM, and at least a 4-bit color or grey-scale graphics display. MacSigma0 is also System 7 compatible. 
To compile the source code, Apple's Macintosh Programmers Workbench (MPW) 3.2 and the MPW C language compiler version 3.2 are required. The source code will not compile with a later version of the compiler; however, the compiled application which will run under the minimum hardware configuration is provided on the distribution medium. In addition, the distribution media includes an executable which runs significantly faster but requires a 68881 compatible math coprocessor and a 68020 compatible CPU. Since JPL AIRSAR data files can be very large, it is often desirable to reduce the size of a data file before transferring it to the Macintosh for use in MacSigma0. A small FORTRAN program which can be used for this purpose is included on the distribution media. MacSigma0 will print statistics on any output device which supports QuickDraw, and it will print images on any device which supports QuickDraw or PostScript. The standard distribution medium for MacSigma0 is a set of five 1.4Mb Macintosh format diskettes. This program was developed in 1992 and is a copyrighted work with all copyright vested in NASA. Version 4.2 of MacSigma0 was released in 1993.
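    For polarimetric data, the total power image mentioned above corresponds to the span of the scattering matrix. A per-pixel sketch of the generic radar-polarimetry formula (assuming reciprocity, HV = VH, as in compressed Stokes-matrix products); this is an illustration, not MacSigma0's source code:

```python
def total_power(hh, hv, vv):
    """Span = |HH|^2 + 2|HV|^2 + |VV|^2 for one pixel, with complex
    scattering amplitudes and reciprocity (HV == VH) assumed."""
    return abs(hh) ** 2 + 2 * abs(hv) ** 2 + abs(vv) ** 2

print(total_power(3 + 4j, 0j, 1 + 0j))  # 26.0
```

    The span is polarization-independent, which is why the total power image is a natural default view before synthesizing HH, HV, VV, or cross-product images.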

  6. Fast processing of digital imaging and communications in medicine (DICOM) metadata using multiseries DICOM format.

    PubMed

    Ismail, Mahmoud; Philbin, James

    2015-04-01

    The digital imaging and communications in medicine (DICOM) information model combines pixel data and its metadata in a single object. There are user scenarios that only need metadata manipulation, such as deidentification and study migration. Most picture archiving and communication systems use a database to store and update the metadata rather than updating the raw DICOM files themselves. The multiseries DICOM (MSD) format separates metadata from pixel data and eliminates duplicate attributes. This work promotes storing DICOM studies in MSD format to reduce the metadata processing time. A set of experiments is performed that updates the metadata of a set of DICOM studies for deidentification and migration. The studies are stored in both the traditional single frame DICOM (SFD) format and the MSD format. The results show that it is faster to update studies' metadata in MSD format than in SFD format because the bulk data is separated in MSD and is not retrieved from the storage system. In addition, it is space efficient to store the deidentified studies in MSD format as it shares the same bulk data object with the original study. In summary, separation of metadata from pixel data using the MSD format provides fast metadata access and speeds up applications that process only the metadata.
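    Why separating metadata from bulk data speeds up deidentification can be shown schematically: only a small metadata object is rewritten, while both studies share one bulk-data object. This is a conceptual sketch, not the actual MSD on-disk layout:

```python
def deidentify(metadata, remove=("PatientName", "PatientBirthDate")):
    """Return new metadata with identifying attributes dropped."""
    return {k: v for k, v in metadata.items() if k not in remove}

bulk_pixels = bytes(10_000_000)  # stand-in for the shared bulk-data object
original = {"meta": {"PatientName": "DOE^JANE",
                     "PatientBirthDate": "19700101",
                     "Modality": "CT"},
            "bulk": bulk_pixels}
deidentified = {"meta": deidentify(original["meta"]), "bulk": original["bulk"]}

# Only the small dict was copied; the 10 MB bulk object is shared between
# the two studies, which is what makes MSD fast and space-efficient here.
print(deidentified["bulk"] is original["bulk"])  # True
print(sorted(deidentified["meta"]))              # ['Modality']
```

    In the SFD layout, by contrast, the identifying attributes are interleaved with pixel data in every frame, so deidentification forces the bulk data through the storage system.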

  7. Fast processing of digital imaging and communications in medicine (DICOM) metadata using multiseries DICOM format

    PubMed Central

    Ismail, Mahmoud; Philbin, James

    2015-01-01

    Abstract. The digital imaging and communications in medicine (DICOM) information model combines pixel data and its metadata in a single object. There are user scenarios that only need metadata manipulation, such as deidentification and study migration. Most picture archiving and communication systems use a database to store and update the metadata rather than updating the raw DICOM files themselves. The multiseries DICOM (MSD) format separates metadata from pixel data and eliminates duplicate attributes. This work promotes storing DICOM studies in MSD format to reduce the metadata processing time. A set of experiments is performed that updates the metadata of a set of DICOM studies for deidentification and migration. The studies are stored in both the traditional single frame DICOM (SFD) format and the MSD format. The results show that it is faster to update studies’ metadata in MSD format than in SFD format because the bulk data is separated in MSD and is not retrieved from the storage system. In addition, it is space efficient to store the deidentified studies in MSD format as it shares the same bulk data object with the original study. In summary, separation of metadata from pixel data using the MSD format provides fast metadata access and speeds up applications that process only the metadata. PMID:26158117

  8. Toward privacy-preserving JPEG image retrieval

    NASA Astrophysics Data System (ADS)

    Cheng, Hang; Wang, Jingyue; Wang, Meiqing; Zhong, Shangping

    2017-07-01

This paper proposes a privacy-preserving retrieval scheme for JPEG images based on local variance. Three parties are involved in the scheme: the content owner, the server, and the authorized user. The content owner encrypts JPEG images for privacy protection by jointly using a permutation cipher and a stream cipher; the encrypted versions are then uploaded to the server. With an encrypted query image provided by an authorized user, the server may extract blockwise local variances in different directions without knowing the plaintext content. After that, it can calculate the similarity between the encrypted query image and each encrypted database image by a local variance-based feature comparison mechanism. The authorized user, who holds the encryption key, can decrypt the returned encrypted images, whose plaintext content is similar to that of the query image. The experimental results show that the proposed scheme not only provides effective privacy-preserving retrieval service but also ensures both format compliance and file size preservation for encrypted JPEG images.
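The joint permutation-plus-stream encryption can be sketched as follows. This is a toy byte-level model, not the paper's JPEG-domain construction: the permutation here is returned explicitly for clarity, whereas in a real scheme both the permutation and the keystream would be derived from the shared key.

```python
import random

def keystream(seed: int, n: int) -> list[int]:
    """Keyed pseudorandom keystream (toy: Python's Mersenne Twister)."""
    rng = random.Random(seed)
    return [rng.randrange(256) for _ in range(n)]

def encrypt_blocks(blocks, perm_seed, stream_seed):
    # Permutation cipher: reorder blocks with a keyed permutation.
    order = list(range(len(blocks)))
    random.Random(perm_seed).shuffle(order)
    permuted = [blocks[i] for i in order]
    # Stream cipher: XOR each block's bytes with a keyed keystream.
    out = []
    for b, blk in enumerate(permuted):
        ks = keystream(stream_seed + b, len(blk))
        out.append(bytes(x ^ k for x, k in zip(blk, ks)))
    return out, order

def decrypt_blocks(cipher_blocks, order, stream_seed):
    # Undo the stream cipher (XOR is its own inverse) ...
    plain_permuted = []
    for b, blk in enumerate(cipher_blocks):
        ks = keystream(stream_seed + b, len(blk))
        plain_permuted.append(bytes(x ^ k for x, k in zip(blk, ks)))
    # ... then undo the permutation.
    blocks = [None] * len(order)
    for dst, src in enumerate(order):
        blocks[src] = plain_permuted[dst]
    return blocks
```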

  9. Dental image replacement on cone beam computed tomography with three-dimensional optical scanning of a dental cast, occlusal bite, or bite tray impression.

    PubMed

    Kang, S-H; Lee, J-W; Lim, S-H; Kim, Y-H; Kim, M-K

    2014-10-01

    The goal of the present study was to compare the accuracy of dental image replacement on a cone beam computed tomography (CBCT) image using digital image data from three-dimensional (3D) optical scanning of a dental cast, occlusal bite, and bite tray impression. A Bracket Typodont dental model was used. CBCT of the dental model was performed and the data were converted to stereolithography (STL) format. Three experimental materials, a dental cast, occlusal bite, and bite tray impression, were optically scanned in 3D. STL files converted from the CBCT of the Typodont model and the 3D optical-scanned STL files of the study materials were image-registered. The error range of each methodology was measured and compared with a 3D optical scan of the Typodont. For the three materials, the smallest error observed was 0.099±0.114mm (mean error±standard deviation) for registering the 3D optical scan image of the dental cast onto the CBCT dental image. Although producing a dental cast can be laborious, the study results indicate that it is the preferred method. In addition, an occlusal bite is recommended when bite impression materials are used. Copyright © 2014 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  10. A Mass Spectrometry Proteomics Data Management Platform*

    PubMed Central

    Sharma, Vagisha; Eng, Jimmy K.; MacCoss, Michael J.; Riffle, Michael

    2012-01-01

    Mass spectrometry-based proteomics is increasingly being used in biomedical research. These experiments typically generate a large volume of highly complex data, and the volume and complexity are only increasing with time. There exist many software pipelines for analyzing these data (each typically with its own file formats), and as technology improves, these file formats change and new formats are developed. Files produced from these myriad software programs may accumulate on hard disks or tape drives over time, with older files being rendered progressively more obsolete and unusable with each successive technical advancement and data format change. Although initiatives exist to standardize the file formats used in proteomics, they do not address the core failings of a file-based data management system: (1) files are typically poorly annotated experimentally, (2) files are “organically” distributed across laboratory file systems in an ad hoc manner, (3) files formats become obsolete, and (4) searching the data and comparing and contrasting results across separate experiments is very inefficient (if possible at all). Here we present a relational database architecture and accompanying web application dubbed Mass Spectrometry Data Platform that is designed to address the failings of the file-based mass spectrometry data management approach. The database is designed such that the output of disparate software pipelines may be imported into a core set of unified tables, with these core tables being extended to support data generated by specific pipelines. Because the data are unified, they may be queried, viewed, and compared across multiple experiments using a common web interface. Mass Spectrometry Data Platform is open source and freely available at http://code.google.com/p/msdapl/. PMID:22611296
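The core-plus-extension table design described above can be illustrated with an in-memory SQLite schema. The table and column names here are invented for illustration and are not MSDaPl's actual schema: a pipeline-specific side table keys back to a unified core row, so cross-pipeline queries run against the core tables alone.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Core tables shared by every pipeline.
CREATE TABLE experiment (id INTEGER PRIMARY KEY, name TEXT, annotation TEXT);
CREATE TABLE search_result (
    id INTEGER PRIMARY KEY,
    experiment_id INTEGER REFERENCES experiment(id),
    peptide TEXT,
    charge INTEGER
);
-- Pipeline-specific extension: extra columns live in a side table
-- keyed to the unified core row (score name is hypothetical).
CREATE TABLE pipeline_score (
    search_result_id INTEGER PRIMARY KEY REFERENCES search_result(id),
    score REAL
);
""")
conn.execute("INSERT INTO experiment VALUES (1, 'run1', 'yeast lysate')")
conn.execute("INSERT INTO search_result VALUES (1, 1, 'PEPTIDEK', 2)")
conn.execute("INSERT INTO pipeline_score VALUES (1, 2.41)")

# Unified query across core and extension tables.
rows = conn.execute("""
    SELECT e.name, s.peptide, p.score
    FROM experiment e
    JOIN search_result s ON s.experiment_id = e.id
    JOIN pipeline_score p ON p.search_result_id = s.id
""").fetchall()
```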

  11. Kepler Fine Guidance Sensor Data

    NASA Technical Reports Server (NTRS)

    Van Cleve, Jeffrey; Campbell, Jennifer Roseanna

    2017-01-01

The Kepler and K2 missions collected Fine Guidance Sensor (FGS) data in addition to the science data, as discussed in the Kepler Instrument Handbook (KIH, Van Cleve and Caldwell 2016). The FGS CCDs are frame transfer devices (KIH Table 7) located in the corners of the Kepler focal plane (KIH Figure 24), which are read out 10 times every second. The FGS data are being made available to the user community for scientific analysis as flux and centroid time series, along with a limited number of FGS full frame images which may be useful for constructing a World Coordinate System (WCS) or otherwise putting the time series data in context. This document will describe the data content and file format, and give example MATLAB scripts to read the time series. Three file types are delivered as the FGS data: (1) Flux and Centroid (FLC) data, time series of star signal and centroid data; (2) Ancillary FGS Reference (AFR) data, a catalog of information about the observed stars in the FLC data; and (3) FGS Full-Frame Image (FGI) data, full-frame image snapshots of the FGS CCDs.

  12. 12 CFR 335.801 - Inapplicable SEC regulations; FDIC substituted regulations; additional information.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... a continuing hardship exemption under these rules may file the forms with the FDIC in paper format... these rules may file the appropriate forms with the FDIC in paper format. Instructions for continuing...) Previously filed exhibits, whether in paper or electronic format, may be incorporated by reference into an...

  13. 12 CFR 335.801 - Inapplicable SEC regulations; FDIC substituted regulations; additional information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... a continuing hardship exemption under these rules may file the forms with the FDIC in paper format... these rules may file the appropriate forms with the FDIC in paper format. Instructions for continuing...) Previously filed exhibits, whether in paper or electronic format, may be incorporated by reference into an...

  14. 12 CFR 335.801 - Inapplicable SEC regulations; FDIC substituted regulations; additional information.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... a continuing hardship exemption under these rules may file the forms with the FDIC in paper format... these rules may file the appropriate forms with the FDIC in paper format. Instructions for continuing...) Previously filed exhibits, whether in paper or electronic format, may be incorporated by reference into an...

  15. 12 CFR 335.801 - Inapplicable SEC regulations; FDIC substituted regulations; additional information.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... a continuing hardship exemption under these rules may file the forms with the FDIC in paper format... these rules may file the appropriate forms with the FDIC in paper format. Instructions for continuing...) Previously filed exhibits, whether in paper or electronic format, may be incorporated by reference into an...

  16. Archive of digital Boomer seismic reflection data collected during USGS Cruises 94CCT01 and 95CCT01, eastern Texas and western Louisiana, 1994 and 1995

    USGS Publications Warehouse

    Calderon, Karynna; Dadisman, Shawn V.; Kindinger, Jack G.; Flocks, James G.; Morton, Robert A.; Wiese, Dana S.

    2004-01-01

    In June of 1994 and August and September of 1995, the U.S. Geological Survey, in cooperation with the University of Texas Bureau of Economic Geology, conducted geophysical surveys of the Sabine and Calcasieu Lake areas and the Gulf of Mexico offshore eastern Texas and western Louisiana. This report serves as an archive of unprocessed digital boomer seismic reflection data, trackline maps, navigation files, observers' logbooks, GIS information, and formal FGDC metadata. In addition, a filtered and gained GIF image of each seismic profile is provided. The archived trace data are in standard Society of Exploration Geophysicists (SEG) SEG-Y format (Barry and others, 1975) and may be downloaded and processed with commercial or public domain software such as Seismic Unix (SU). Examples of SU processing scripts and in-house (USGS) software for viewing SEG-Y files (Zihlman, 1992) are also provided. Processed profile images, trackline maps, navigation files, and formal metadata may be viewed with a web browser. Scanned handwritten logbooks and Field Activity Collection System (FACS) logs may be viewed with Adobe Reader.
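The archived SEG-Y trace files begin with a 3200-byte textual ("card image") header followed by a 400-byte binary file header. A minimal sketch of reading a few well-known binary-header fields (big-endian offsets per the SEG-Y standard; real processing would use Seismic Unix or similar, as the report notes):

```python
import struct

TEXT_HDR = 3200   # EBCDIC textual header
BIN_HDR = 400     # binary file header

def parse_segy_binary_header(buf: bytes) -> dict:
    """Parse a few standard fields of the SEG-Y binary file header.

    Byte positions (1-indexed, per the SEG-Y standard):
    3217-3218 sample interval (microseconds), 3221-3222 samples per
    trace, 3225-3226 data sample format code. All big-endian int16."""
    bh = buf[TEXT_HDR:TEXT_HDR + BIN_HDR]
    sample_interval_us, = struct.unpack(">h", bh[16:18])
    samples_per_trace, = struct.unpack(">h", bh[20:22])
    format_code, = struct.unpack(">h", bh[24:26])
    return {
        "sample_interval_us": sample_interval_us,
        "samples_per_trace": samples_per_trace,
        "format_code": format_code,  # e.g. 1 = IBM float, 5 = IEEE float
    }
```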

  17. Archive of digital Boomer and Chirp seismic reflection data collected during USGS Cruises 01RCE05 and 02RCE01 in the Lower Atchafalaya River, Mississippi River Delta, and offshore southeastern Louisiana, October 23-30, 2001, and August 18-19, 2002

    USGS Publications Warehouse

    Calderon, Karynna; Dadisman, Shawn V.; Kindinger, Jack G.; Flocks, James G.; Ferina, Nicholas F.; Wiese, Dana S.

    2004-01-01

In October of 2001 and August of 2002, the U.S. Geological Survey conducted geophysical surveys of the Lower Atchafalaya River, the Mississippi River Delta, Barataria Bay, and the Gulf of Mexico south of East Timbalier Island, Louisiana. This report serves as an archive of unprocessed digital marine seismic reflection data, trackline maps, navigation files, observers' logbooks, GIS information, and formal FGDC metadata. In addition, a filtered and gained GIF image of each seismic profile is provided. The archived trace data are in standard Society of Exploration Geophysicists (SEG) SEG-Y format (Barry and others, 1975) and may be downloaded and processed with commercial or public domain software such as Seismic Unix (SU). Examples of SU processing scripts and in-house (USGS) software for viewing SEG-Y files (Zihlman, 1992) are also provided. Processed profile images, trackline maps, navigation files, and formal metadata may be viewed with a web browser. Scanned handwritten logbooks and Field Activity Collection System (FACS) logs may be viewed with Adobe Reader.

  18. FTOOLS: A general package of software to manipulate FITS files

    NASA Astrophysics Data System (ADS)

    Blackburn, J. K.; Shaw, R. A.; Payne, H. E.; Hayes, J. J. E.; Heasarc

    1999-12-01

    FTOOLS, a highly modular collection of utilities for processing and analyzing data in the FITS (Flexible Image Transport System) format, has been developed in support of the HEASARC (High Energy Astrophysics Research Archive Center) at NASA's Goddard Space Flight Center. The FTOOLS package contains many utility programs which perform modular tasks on any FITS image or table, as well as higher-level analysis programs designed specifically for data from current and past high energy astrophysics missions. The utility programs for FITS tables are especially rich and powerful, and provide functions for presentation of file contents, extraction of specific rows or columns, appending or merging tables, binning values in a column or selecting subsets of rows based on a boolean expression. Individual FTOOLS programs can easily be chained together in scripts to achieve more complex operations such as the generation and displaying of spectra or light curves. FTOOLS development began in 1991 and has produced the main set of data analysis software for the current ASCA and RXTE space missions and for other archival sets of X-ray and gamma-ray data. The FTOOLS software package is supported on most UNIX platforms and on Windows machines. The user interface is controlled by standard parameter files that are very similar to those used by IRAF. The package is self documenting through a stand alone help task called fhelp. Software is written in ANSI C and FORTRAN to provide portability across most computer systems. The data format dependencies between hardware platforms are isolated through the FITSIO library package.
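FITS files manipulated by FTOOLS store their headers as a sequence of 80-character ASCII "cards" terminated by an END card. A minimal sketch of parsing such cards (simplified: it ignores comment-only cards and does not handle strings containing `/`):

```python
def parse_fits_cards(header_bytes: bytes) -> dict:
    """Parse 80-character FITS header cards into a dict, stopping at END.

    A value card has the keyword in columns 1-8 and '= ' in columns
    9-10; an optional inline comment follows a '/'."""
    cards = {}
    for i in range(0, len(header_bytes), 80):
        card = header_bytes[i:i + 80].decode("ascii")
        keyword = card[:8].strip()
        if keyword == "END":
            break
        if card[8:10] != "= ":       # comment/blank cards carry no value
            continue
        value = card[10:].split("/", 1)[0].strip()  # drop inline comment
        if value.startswith("'"):                    # quoted string
            value = value.strip("'").rstrip()
        elif value in ("T", "F"):                    # FITS logical
            value = (value == "T")
        else:                                        # numeric
            value = int(value) if value.lstrip("-").isdigit() else float(value)
        cards[keyword] = value
    return cards
```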

  19. Clementine High Resolution Camera Mosaicking Project. Volume 21; CL 6021; 80 deg S to 90 deg S Latitude, North Periapsis; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Clementine I high resolution (HiRes) camera lunar image mosaics developed by Malin Space Science Systems (MSSS). These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. The geometric control is provided by the U. S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap that approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD are compiled from polar data (latitudes greater than 80 degrees), and are presented in the stereographic projection at a scale of 30 m/pixel at the pole, a resolution 5 times greater than that (150 m/pixel) of the corresponding UV/Vis polar basemap. This 5:1 scale ratio is in keeping with the sub-polar mosaic, in which the HiRes and UV/Vis mosaics had scales of 20 m/pixel and 100 m/pixel, respectively. The equal-area property of the stereographic projection made this preferable for the HiRes polar mosaic rather than the basemap's orthographic projection. Thus, a necessary first step in constructing the mosaic was the reprojection of the UV/Vis basemap to the stereographic projection. 
The HiRes polar data can be naturally grouped according to the orbital periapsis, which was in the south during the first half of the mapping mission and in the north during the second half. Images in each group have generally uniform intrinsic resolution, illumination, exposure and gain. Rather than mingle data from the two periapsis epochs, separate mosaics are provided for each, a total of 4 polar mosaics. The mosaics are divided into 100 square tiles of 2250 pixels (approximately 2.2 deg near the pole) on a side. Not all squares of this grid contain HiRes mosaic data, some inevitably since a square is not a perfect representation of a (latitude) circle, others due to the lack of HiRes data. This CD also contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. For more information on the contents and organization of the CD volume set refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  20. Comparative study of root-canal shaping with stainless steel and rotary NiTi files performed by preclinical dental students.

    PubMed

    Alrahabi, Mothanna

    2015-01-01

We evaluated the use of NiTi rotary and stainless steel endodontic instruments for canal shaping by undergraduate students. We also assessed the quality of root canal preparation as well as the occurrence of iatrogenic events during instrumentation. In total, 30 third-year dental students attending Taibah University Dental College prepared 180 simulated canals in resin blocks with NiTi rotary instruments and stainless steel hand files. Superimposed images were prepared to measure the removal of material at different levels from apical termination using the GSA image analysis software. Preparation time, procedural accidents, and canal shape after preparation were analyzed using χ2 and t-tests. The statistical significance level was set at P < 0.05. There were significant differences in preparation time between NiTi instruments and stainless steel files; the former was associated with shorter preparation time, less ledge formation (1.1% vs. 14.4%), and greater instrument fracture (5.56% vs. 1.1%). These results indicate that NiTi rotary instruments result in better canal geometry and cause less canal transportation. Manual instrumentation using stainless steel files is safer than rotary instrumentation for inexperienced students. Intensive preclinical training is a prerequisite for using NiTi rotary instruments. These results prompted us to reconsider theoretical and practical coursework when teaching endodontics.

  1. 40 CFR 264.71 - Use of manifest system.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... revising paragraph (a)(2), and by adding paragraphs (f), (g), (h), (i), (j), and (k) to read as follows... image file of Page 1 of the manifest, or both a data string file and the image file corresponding to Page 1 of the manifest. Any data or image files transmitted to EPA under this paragraph must be...

  2. Archive of Digital Boomer Seismic Reflection Data Collected During USGS Field Activities 93LCA01 and 94LCA01 in Kingsley, Orange, and Lowry Lakes, Northeast Florida, 1993 and 1994

    USGS Publications Warehouse

    Calderon, Karynna; Dadisman, Shawn V.; Kindinger, Jack G.; Davis, Jeffrey B.; Flocks, James G.; Wiese, Dana S.

    2004-01-01

    In August and September of 1993 and January of 1994, the U.S. Geological Survey, under a cooperative agreement with the St. Johns River Water Management District (SJRWMD), conducted geophysical surveys of Kingsley Lake, Orange Lake, and Lowry Lake in northeast Florida. This report serves as an archive of unprocessed digital boomer seismic reflection data, trackline maps, navigation files, GIS information, observer's logbook, Field Activity Collection System (FACS) logs, and formal FGDC metadata. A filtered and gained GIF image of each seismic profile is also provided. Refer to the Acronyms page for expansion of acronyms and abbreviations used in this report. The archived trace data are in standard Society of Exploration Geophysicists (SEG) SEG-Y format (Barry and others, 1975) and may be downloaded and processed with commercial or public domain software such as Seismic Unix (SU). Examples of SU processing scripts and in-house (USGS) software for viewing SEG-Y files (Zihlman, 1992) are also provided. The data archived here were collected under a cooperative agreement with the St. Johns River Water Management District as part of the USGS Lakes and Coastal Aquifers (LCA) Project. For further information about this study, refer to http://coastal.er.usgs.gov/stjohns, Kindinger and others (1994), and Kindinger and others (2000). The USGS Florida Integrated Science Center (FISC) - Coastal and Watershed Studies in St. Petersburg, Florida, assigns a unique identifier to each cruise or field activity. For example, 93LCA01 tells us the data were collected in 1993 for the Lakes and Coastal Aquifers (LCA) Project and the data were collected during the first field activity for that project in that calendar year. For a detailed description of the method used to assign the field activity ID, see http://walrus.wr.usgs.gov/infobank/programs/html/definition/activity.html. 
The boomer is an acoustic energy source that consists of capacitors charged to a high voltage and discharged through a transducer in the water. The transducer is towed on a sled at the sea surface and when discharged emits a short acoustic pulse, or shot, that propagates through the water and sediment column. The acoustic energy is reflected at density boundaries (such as the seafloor or sediment layers beneath the seafloor), detected by the receiver, and recorded by a PC-based seismic acquisition system. This process is repeated at timed intervals (e.g., 0.5 s) and recorded for specific intervals of time (e.g., 100 ms). In this way, a two-dimensional vertical image of the shallow geologic structure beneath the ship track is produced. Acquisition geometry for 94LCA01 is recorded in the operations logbook. No logbook exists for 93LCA01. Table 1 displays acquisition parameters for both field activities. For more information about the acquisition equipment used, refer to the FACS equipment logs. The unprocessed seismic data are stored in SEG-Y format (Barry and others, 1975). For a detailed description of the data format, refer to the SEG-Y Format page. See the How To Download SEG-Y Data page for more information about these files. Processed profiles can be viewed as GIF images from the Profiles page. Refer to the Software page for details about the processing and examples of the processing scripts. Detailed information about the navigation systems used for each field activity can be found in Table 1 and the FACS equipment logs. To view the trackline maps and navigation files, and for more information about these items, see the Navigation page. The original trace files were recorded in nonstandard ELICS format and later converted to standard SEG-Y format.
The original trace files for 94LCA01 lines ORJ127_1, ORJ127_3, and ORJ131_1 were divided into two or more trace files (e.g., ORJ127_1 became ORJ127_1a and ORJ127_1b) because the original total number of traces exceeded the maximum allowed by the processing system. Digital data were not recoverable for 93LCA

  3. Mineral and Vegetation Maps of the Bodie Hills, Sweetwater Mountains, and Wassuk Range, California/Nevada, Generated from ASTER Satellite Data

    USGS Publications Warehouse

    Rockwell, Barnaby W.

    2010-01-01

    Multispectral remote sensing data acquired by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) were analyzed to identify and map minerals, vegetation groups, and volatiles (water and snow) in support of geologic studies of the Bodie Hills, Sweetwater Mountains, and Wassuk Range, California/Nevada. Digital mineral and vegetation mapping results are presented in both portable document format (PDF) and ERDAS Imagine format (.img). The ERDAS-format files are suitable for integration with other geospatial data in Geographic Information Systems (GIS) such as ArcGIS. The ERDAS files showing occurrence of 1) iron-bearing minerals, vegetation, and water, and 2) clay, sulfate, mica, carbonate, Mg-OH, and hydrous quartz minerals have been attributed according to identified material, so that the material detected in a pixel can be queried with the interactive attribute identification tools of GIS and image processing software packages (for example, the Identify Tool of ArcMap and the Inquire Cursor Tool of ERDAS Imagine). All raster data have been orthorectified to the Universal Transverse Mercator (UTM) projection using a projective transform with ground-control points selected from orthorectified Landsat Thematic Mapper data and a digital elevation model from the U.S. Geological Survey (USGS) National Elevation Dataset (1/3 arc second, 10 m resolution). Metadata compliant with Federal Geographic Data Committee (FGDC) standards for all ERDAS-format files have been included, and contain important information regarding geographic coordinate systems, attributes, and cross-references. Documentation regarding spectral analysis methodologies employed to make the maps is included in these cross-references.

  4. Transferable Output ASCII Data (TOAD) gateway: Version 1.0 user's guide

    NASA Technical Reports Server (NTRS)

    Bingel, Bradford D.

    1991-01-01

The Transferable Output ASCII Data (TOAD) Gateway, release 1.0, is described. This is a software tool for converting tabular data from one format into another via the TOAD format. This initial release of the Gateway allows free data interchange among the following file formats: TOAD; Standard Interface File (SIF); Program to Optimize Simulated Trajectories (POST) input; Comma Separated Value (CSV); and a general free-form file format. As required, additional formats can be accommodated quickly and easily.
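The Gateway's hub-and-spoke design (every format converts to and from one common intermediate, so supporting N formats needs N reader/writer pairs rather than N&#178; pairwise converters) can be sketched with delimited text as the example formats. The TOAD format itself is not specified here, so a plain list-of-rows table stands in for it:

```python
import csv
import io

def read_delimited(text: str, delimiter: str) -> list[list[str]]:
    """Read any delimited format into the common in-memory table."""
    return [row for row in csv.reader(io.StringIO(text), delimiter=delimiter)]

def write_delimited(rows: list[list[str]], delimiter: str) -> str:
    """Write the common in-memory table out in any delimited format."""
    buf = io.StringIO()
    csv.writer(buf, delimiter=delimiter, lineterminator="\n").writerows(rows)
    return buf.getvalue()

# Convert comma-separated to tab-separated via the common table.
csv_text = "time,alt\n0.0,100\n1.0,250\n"
rows = read_delimited(csv_text, ",")
tsv_text = write_delimited(rows, "\t")
```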

  5. Sensitivity Data File Formats

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T.

    2016-04-01

    The format of the TSUNAMI-A sensitivity data file produced by SAMS for cases with deterministic transport solutions is given in Table 6.3.A.1. The occurrence of each entry in the data file is followed by an identification of the data contained on each line of the file and the FORTRAN edit descriptor denoting the format of each line. A brief description of each line is also presented. A sample of the TSUNAMI-A data file for the Flattop-25 sample problem is provided in Figure 6.3.A.1. Here, only two profiles out of the 130 computed are shown.
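Files written with FORTRAN edit descriptors use fixed-width fields, so each line can be sliced at known column boundaries rather than split on whitespace. A minimal sketch for a descriptor such as 5E16.8 (the example line and field width are illustrative; the actual TSUNAMI-A descriptors are given in the cited table):

```python
def read_fixed_fields(line: str, width: int, count: int) -> list[float]:
    """Split one fixed-format line (e.g. written with 5E16.8,
    i.e. `count` fields of `width` characters each) into floats."""
    return [float(line[i * width:(i + 1) * width]) for i in range(count)]

# Three E16.8 fields, 16 characters each.
line = "  1.23456789E+00 -4.56000000E-02  0.00000000E+00"
fields = read_fixed_fields(line, width=16, count=3)
```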

  6. TOAD Editor

    NASA Technical Reports Server (NTRS)

    Bingle, Bradford D.; Shea, Anne L.; Hofler, Alicia S.

    1993-01-01

Transferable Output ASCII Data (TOAD) computer program (LAR-13755), implements format designed to facilitate transfer of data across communication networks and dissimilar host computer systems. Any data file conforming to TOAD format standard called TOAD file. TOAD Editor is interactive software tool for manipulating contents of TOAD files. Commonly used to extract filtered subsets of data for visualization of results of computation. Also offers such user-oriented features as on-line help, clear English error messages, startup file, macroinstructions defined by user, command history, user variables, UNDO features, and full complement of mathematical, statistical, and conversion functions. Companion program, TOAD Gateway (LAR-14484), converts data files from variety of other file formats to that of TOAD. TOAD Editor written in FORTRAN 77.

  7. A model for a PC-based, universal-format, multimedia digitization system: moving beyond the scanner.

    PubMed

    McEachen, James C; Cusack, Thomas J; McEachen, John C

    2003-08-01

Digitizing images for use in case presentations based on hardcopy films, slides, photographs, negatives, books, and videos can present a challenging task. Scanners and digital cameras have become standard tools of the trade. Unfortunately, use of these devices to digitize multiple images in many different media formats can be a time-consuming and in some cases unachievable process. The authors' goal was to create a PC-based solution for digitizing multiple media formats in a timely fashion while maintaining adequate image presentation quality. The authors' PC-based solution makes use of off-the-shelf hardware, including a digital document camera (DDC), a VHS video player, and a video-editing kit. With the assistance of five staff radiologists, the authors examined the quality of multiple image types digitized with this equipment. The authors also quantified the speed of digitization of various types of media using the DDC and video-editing kit. With regard to image quality, the five staff radiologists rated the digitized angiography, CT, and MR images as adequate to excellent for use in teaching files and case presentations. With regard to digitized plain films, the average rating was adequate. As for performance, the authors recognized a 68% improvement in the time required to digitize hardcopy films using the DDC instead of a professional quality scanner. The PC-based solution provides a means for digitizing multiple images from many different types of media in a timely fashion while maintaining adequate image presentation quality.

  8. PixelLearn

    NASA Technical Reports Server (NTRS)

    Mazzoni, Dominic; Wagstaff, Kiri; Bornstein, Benjamin; Tang, Nghia; Roden, Joseph

    2006-01-01

PixelLearn is an integrated user-interface computer program for classifying pixels in scientific images. Heretofore, training a machine-learning algorithm to classify pixels in images has been tedious and difficult. PixelLearn provides a graphical user interface that makes it faster and more intuitive, leading to more interactive exploration of image data sets. PixelLearn also provides image-enhancement controls to make it easier to see subtle details in images. PixelLearn opens images or sets of images in a variety of common scientific file formats and enables the user to interact with several supervised or unsupervised machine-learning pixel-classifying algorithms while the user continues to browse through the images. The machine-learning algorithms in PixelLearn use advanced clustering and classification methods that enable accuracy much higher than is achievable by most other software previously available for this purpose. PixelLearn is written in portable C++ and runs natively on computers running Linux, Windows, or Mac OS X.
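Unsupervised pixel classification of the kind the abstract mentions can be illustrated with a tiny 1-D k-means over scalar pixel values. This is a toy stand-in for PixelLearn's (unspecified) clustering methods, with a deterministic initialization and k >= 2 assumed:

```python
def kmeans_1d(values, k, iters=20):
    """Cluster scalar pixel values into k groups (minimal k-means).

    Deterministic init: centers spread evenly across the value range
    (requires k >= 2). Returns per-value labels and final centers."""
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assign each value to its nearest center ...
        labels = [min(range(k), key=lambda c: abs(v - centers[c]))
                  for v in values]
        # ... then move each center to the mean of its members.
        for c in range(k):
            members = [v for v, lb in zip(values, labels) if lb == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels, centers
```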

  9. Total Petroleum Systems and Geologic Assessment of Oil and Gas Resources in the Powder River Basin Province, Wyoming and Montana

    USGS Publications Warehouse

    Anna, L.O.

    2009-01-01

The U.S. Geological Survey completed an assessment of the undiscovered oil and gas potential of the Powder River Basin in 2006. The assessment of undiscovered oil and gas used the total petroleum system concept, which includes mapping the distribution of potential source rocks and known petroleum accumulations and determining the timing of petroleum generation and migration. Geologically based, it focuses on source and reservoir rock stratigraphy, timing of tectonic events and the configuration of resulting structures, formation of traps and seals, and burial history modeling. The total petroleum system is subdivided into assessment units based on similar geologic characteristics and accumulation and petroleum type. In chapter 1 of this report, five total petroleum systems, eight conventional assessment units, and three continuous assessment units were defined and the undiscovered oil and gas resources within each assessment unit quantitatively estimated. Chapter 2 describes data used in support of the process being applied by the U.S. Geological Survey (USGS) National Oil and Gas Assessment (NOGA) project. Digital tabular data used in this report and archival data that permit the user to perform further analyses are available elsewhere on this CD-ROM, allowing the data to be imported into software directly rather than transcribed by the reader from the Portable Document Format (.pdf) files of the text. Because of the number and variety of platforms and software available, graphical images are provided as .pdf files and tabular data are provided in a raw form as tab-delimited text files (.tab files).

  10. Displaying Composite and Archived Soundings in the Advanced Weather Interactive Processing System

    NASA Technical Reports Server (NTRS)

    Barrett, Joe H., III; Volkmer, Matthew R.; Blottman, Peter F.; Sharp, David W.

    2008-01-01

In a previous task, the Applied Meteorology Unit (AMU) developed spatial and temporal climatologies of lightning occurrence based on eight atmospheric flow regimes. The AMU created climatological, or composite, soundings of wind speed and direction, temperature, and dew point temperature at four rawinsonde observation stations at Jacksonville, Tampa, Miami, and Cape Canaveral Air Force Station, for each of the eight flow regimes. The composite soundings were delivered to the National Weather Service (NWS) Melbourne (MLB) office for display using the National version of the Skew-T Hodograph analysis and Research Program (NSHARP) software program. The NWS MLB requested the AMU make the composite soundings available for display in the Advanced Weather Interactive Processing System (AWIPS), so they could be overlaid on current observed soundings. This will allow the forecasters to compare the current state of the atmosphere with climatology. This presentation describes how the AMU converted the composite soundings from NSHARP Archive format to Network Common Data Form (NetCDF) format, so that the soundings could be displayed in AWIPS. The NetCDF is a set of data formats, programming interfaces, and software libraries used to read and write scientific data files. In AWIPS, each meteorological data type, such as soundings or surface observations, has a unique NetCDF format. Each format is described by a NetCDF template file. Although NetCDF files are in binary format, they can be converted to a text format called network Common data form Description Language (CDL). A software utility called ncgen is used to create a NetCDF file from a CDL file, while the ncdump utility is used to create a CDL file from a NetCDF file. AWIPS receives soundings in Binary Universal Form for the Representation of Meteorological data (BUFR) format (http://dss.ucar.edu/docs/formats/bufr/), and then decodes them into NetCDF format. Only two sounding files are generated in AWIPS per day. 
One file contains all of the soundings received worldwide between 0000 UTC and 1200 UTC, and the other includes all soundings between 1200 UTC and 0000 UTC. In order to add the composite soundings into AWIPS, a procedure was created to configure, or localize, AWIPS. This involved modifying and creating several configuration text files. A unique four-character site identifier was created for each of the 32 soundings so each could be viewed separately. The first three characters were based on the site identifier of the observed sounding, while the last character was based on the flow regime. While researching the localization process for soundings, the AMU discovered a method of archiving soundings so old soundings would not get purged automatically by AWIPS. This method could provide an alternative way of localizing AWIPS for composite soundings. In addition, this would allow forecasters to use archived soundings in AWIPS for case studies. A test sounding file in NetCDF format was written in order to verify the correct format for soundings in AWIPS. After the file was viewed successfully in AWIPS, the AMU wrote a software program in the Tool Command Language/Tool Kit (Tcl/Tk) language to convert the 32 composite soundings from NSHARP Archive to CDL format. The ncgen utility was then used to convert the CDL file to a NetCDF file. The NetCDF file could then be read and displayed in AWIPS.
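The identifier scheme described above (three characters from the station, one from the flow regime, 4 stations × 8 regimes = 32 soundings) can be sketched as follows. The station codes and regime letters below are illustrative assumptions, not the AMU's actual values.

```python
# Hypothetical sketch of the four-character site-identifier scheme:
# first three characters from the observed-sounding station, last
# character from the flow regime. Labels below are assumed examples.
STATIONS = ["JAX", "TBW", "MFL", "XMR"]   # assumed 3-char station IDs
FLOW_REGIMES = "ABCDEFGH"                 # assumed one letter per regime

def composite_ids(stations, regimes):
    """One unique 4-character ID per (station, flow regime) pair."""
    return [s + r for s in stations for r in regimes]

ids = composite_ids(STATIONS, FLOW_REGIMES)
# 4 stations x 8 regimes = 32 uniquely viewable soundings
```

The IDs stay unique as long as no two stations share the same three-character prefix, which is why each sounding can be displayed separately in AWIPS.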

  11. Tools for Requirements Management: A Comparison of Telelogic DOORS and the HiVe

    DTIC Science & Technology

    2006-07-01

    types DOORS deals with are text files, spreadsheets, FrameMaker, rich text, Microsoft Word and Microsoft Project. 2.5.1 Predefined file formats DOORS...during the export. DOORS exports FrameMaker files in an incomplete format, meaning DOORS exported files will have to be opened in FrameMaker and saved

  12. Software for hyperspectral, joint photographic experts group (.JPG), portable network graphics (.PNG) and tagged image file format (.TIFF) segmentation

    NASA Astrophysics Data System (ADS)

    Bruno, L. S.; Rodrigo, B. P.; Lucio, A. de C. Jorge

    2016-10-01

    This paper presents a system based on a Multilayer Perceptron neural network for segmenting agricultural images acquired by drone. The application allows a user to train, in supervised fashion, the classes that will later be interpreted by the neural network. These classes are generated manually from pre-selected attributes in the application. After attribute selection, a segmentation process extracts the relevant information from different types of images, RGB or hyperspectral. The application can extract geographical coordinates from the image metadata, georeferencing every pixel in the image. Despite the excessive memory consumption of hyperspectral regions of interest, segmentation is still possible using bands chosen by the user, which can be combined in different ways to obtain different results.

  13. An open-source solution for advanced imaging flow cytometry data analysis using machine learning.

    PubMed

    Hennig, Holger; Rees, Paul; Blasi, Thomas; Kamentsky, Lee; Hung, Jane; Dao, David; Carpenter, Anne E; Filby, Andrew

    2017-01-01

    Imaging flow cytometry (IFC) enables the high throughput collection of morphological and spatial information from hundreds of thousands of single cells. This high content, information rich image data can in theory resolve important biological differences among complex, often heterogeneous biological samples. However, data analysis is often performed in a highly manual and subjective manner using very limited image analysis techniques in combination with conventional flow cytometry gating strategies. This approach is not scalable to the hundreds of available image-based features per cell and thus makes use of only a fraction of the spatial and morphometric information. As a result, the quality, reproducibility and rigour of results are limited by the skill, experience and ingenuity of the data analyst. Here, we describe a pipeline using open-source software that leverages the rich information in digital imagery using machine learning algorithms. Compensated and corrected raw image file (.rif) data from an imaging flow cytometer (the proprietary .cif file format) are imported into the open-source software CellProfiler, where an image processing pipeline identifies cells and subcellular compartments allowing hundreds of morphological features to be measured. This high-dimensional data can then be analysed using cutting-edge machine learning and clustering approaches using "user-friendly" platforms such as CellProfiler Analyst. Researchers can train an automated cell classifier to recognize different cell types, cell cycle phases, drug treatment/control conditions, etc., using supervised machine learning. This workflow should enable the scientific community to leverage the full analytical power of IFC-derived data sets. It will help to reveal otherwise unappreciated populations of cells based on features that may be hidden to the human eye that include subtle measured differences in label free detection channels such as bright-field and dark-field imagery. 
Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  14. A Systematic Approach for Assessing Workforce Readiness

    DTIC Science & Technology

    2014-08-01

    goals: (1) to collect data from designated in- formation technology equipment and (2) to analyze the collected data. The two goals are typically...office, home, or information technology (IT) department. The purpose of data collection is to gather images of disks, file servers, etc., from a site...naissance to identify which technologies are deployed at a site and determine which data need to be collected from the organization’s systems and

  15. Interactive visualization tools for the structural biologist.

    PubMed

    Porebski, Benjamin T; Ho, Bosco K; Buckle, Ashley M

    2013-10-01

    In structural biology, management of a large number of Protein Data Bank (PDB) files and raw X-ray diffraction images often presents a major organizational problem. Existing software packages that manipulate these file types were not designed for these kinds of file-management tasks. This is typically encountered when browsing through a folder of hundreds of X-ray images, with the aim of rapidly inspecting the diffraction quality of a data set. To solve this problem, a useful functionality of the Macintosh operating system (OSX) has been exploited that allows custom visualization plugins to be attached to certain file types. Software plugins have been developed for diffraction images and PDB files, which in many scenarios can save considerable time and effort. The direct visualization of diffraction images and PDB structures in the file browser can be used to identify key files of interest simply by scrolling through a list of files.

  16. The digital geologic map of Colorado in ARC/INFO format, Part A. Documentation

    USGS Publications Warehouse

    Green, Gregory N.

    1992-01-01

    This geologic map was prepared as a part of a study of digital methods and techniques as applied to complex geologic maps. The geologic map was digitized from the original scribe sheets used to prepare the published Geologic Map of Colorado (Tweto 1979). Consequently the digital version is at 1:500,000 scale using the Lambert Conformal Conic map projection parameters of the state base map. Stable base contact prints of the scribe sheets were scanned on a Tektronix 4991 digital scanner. The scanner automatically converts the scanned image to an ASCII vector format. These vectors were transferred to a VAX minicomputer, where they were then loaded into ARC/INFO. Each vector and polygon was given attributes derived from the original 1979 geologic map. This database was developed on a MicroVAX computer system using VAX V 5.4 and ARC/INFO 5.0 software. UPDATE: April 1995. The update was done solely for the purpose of adding the ability to plot to an HP650C plotter. Two new ARC/INFO plot AMLs along with a lineset and shadeset for the HP650C DesignJet printer have been included. These new files are COLORADO.650, INDEX.650, TWETOLIN.E00 and TWETOSHD.E00. These files were created on a UNIX platform with ARC/INFO 6.1.2. Updated versions of INDEX.E00, CONTACT.E00, LINE.E00, DECO.E00 and BORDER.E00 files that include the newly defined HP650C items are also included. * Any use of trade, product, or firm names is for descriptive purposes only and does not imply endorsement by the U.S. Government. Descriptors: The Digital Geologic Map of Colorado in ARC/INFO Format, Open-File Report 92-050

  17. The prevalence of encoded digital trace evidence in the nonfile space of computer media(,) (.).

    PubMed

    Garfinkel, Simson L

    2014-09-01

    Forensically significant digital trace evidence is frequently present in sectors of digital media not associated with allocated or deleted files. Modern digital forensic tools generally do not decompress such data unless a specific file with a recognized file type is first identified, potentially resulting in missed evidence. Email addresses are encoded differently in different file formats. As a result, trace evidence can be categorized as Plain in File (PF), Encoded in File (EF), Plain Not in File (PNF), or Encoded Not in File (ENF). The tool bulk_extractor finds all of these formats, but other forensic tools do not. A study of 961 storage devices purchased on the secondary market shows that 474 contained encoded email addresses that were not in files (ENF). Different encoding formats are the result of different application programs that processed different kinds of digital trace evidence. Specific encoding formats explored include BASE64, GZIP, PDF, HIBER, and ZIP. Published 2014. This article is a U.S. Government work and is in the public domain in the USA. Journal of Forensic Sciences published by Wiley Periodicals, Inc. on behalf of American Academy of Forensic Sciences.
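Why encoded trace evidence is missed can be shown with a minimal sketch (this is not bulk_extractor's implementation): a plain regex scan finds "Plain" email addresses, while a BASE64-encoded copy only surfaces after decoding candidate character runs. The buffer contents are fabricated.

```python
# Illustrative sketch: find plain email addresses with a regex, then
# recover "Encoded" ones by decoding BASE64-looking runs and rescanning.
import base64
import re

EMAIL = re.compile(rb"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}")
B64RUN = re.compile(rb"[A-Za-z0-9+/=]{16,}")  # candidate BASE64 runs

def find_emails(buf: bytes):
    """Return (plain, encoded) sets of email addresses found in a raw buffer."""
    plain = set(EMAIL.findall(buf))
    encoded = set()
    for run in B64RUN.findall(buf):
        try:
            decoded = base64.b64decode(run, validate=True)
        except Exception:
            continue  # not valid BASE64 after all
        encoded.update(EMAIL.findall(decoded))
    return plain, encoded

# A fabricated "sector": one plain address, one BASE64-encoded address.
sector = b"junk alice@example.com junk " + base64.b64encode(b"mail bob@example.org end")
plain, encoded = find_emails(sector)
```

A tool that stops at the plain scan reports only the PF/PNF addresses; the second pass is what surfaces the EF/ENF category the study counts.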

  18. A GUI visualization system for airborne lidar image data to reconstruct 3D city model

    NASA Astrophysics Data System (ADS)

    Kawata, Yoshiyuki; Koizumi, Kohei

    2015-10-01

    A visualization toolbox system with graphical user interfaces (GUIs) was developed for the analysis of LiDAR point cloud data, as a compound object-oriented widget application in IDL (Interactive Data Language). The main features of our system include file input and output abilities, data conversion from ASCII-formatted LiDAR point cloud data to LiDAR image data whose pixel values correspond to the altitude measured by LiDAR, visualization of 2D/3D images in the various processing steps, and automatic reconstruction of a 3D city model. The performance and advantages of our GUI visualization system for LiDAR data are demonstrated.
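The ASCII-to-image conversion step can be sketched roughly as follows (the toolbox itself is written in IDL; the "x y z" record layout, the max-height rule, and the cell size are assumptions for illustration):

```python
# Minimal sketch: rasterize "x y z" ASCII LiDAR records into a grid whose
# cell value is the highest return (altitude) falling in that cell.
def points_to_grid(lines, cell=1.0):
    """Map each point to a (col, row) cell; keep the maximum z per cell."""
    grid = {}
    for line in lines:
        x, y, z = map(float, line.split())
        key = (int(x // cell), int(y // cell))
        grid[key] = max(grid.get(key, float("-inf")), z)
    return grid

# Three fabricated returns; two fall in the same 1 m cell.
cloud = ["0.2 0.7 12.5", "0.9 0.1 14.0", "1.5 0.5 3.2"]
grid = points_to_grid(cloud)
# cell (0, 0) keeps the higher of its two returns
```

A real pipeline would then scale these cell values to pixel intensities; the point is simply that pixel value encodes measured altitude.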

  19. Highway Safety Information System guidebook for the Minnesota state data files. Volume 1 : SAS file formats

    DOT National Transportation Integrated Search

    2001-02-01

    The Minnesota data system includes the following basic files: Accident data (Accident File, Vehicle File, Occupant File); Roadlog File; Reference Post File; Traffic File; Intersection File; Bridge (Structures) File; and RR Grade Crossing File. For ea...

  20. PDB explorer -- a web based algorithm for protein annotation viewer and 3D visualization.

    PubMed

    Nayarisseri, Anuraj; Shardiwal, Rakesh Kumar; Yadav, Mukesh; Kanungo, Neha; Singh, Pooja; Shah, Pratik; Ahmed, Sheaza

    2014-12-01

    The PDB file format is a text format characterizing the three-dimensional structures of macromolecules available in the Protein Data Bank (PDB). Determined protein structures are often found in association with other molecules such as nucleic acids, water, ions, and drug molecules, which can therefore also be described in the PDB format and have been deposited in the PDB database. A PDB file is machine generated and is not in a human-readable format; a computational tool is needed to interpret it. The objective of our present study is to develop free online software for the retrieval, visualization and reading of the annotation of a protein 3D structure available in the PDB database. The main aim is to present the PDB file in a human-readable format, i.e., to convert the information in the PDB file into readable sentences. The software displays all possible information from a PDB file, including the 3D structure of that file. Programming and scripting languages like Perl, CSS, JavaScript, Ajax, and HTML have been used for the development of PDB Explorer. The PDB Explorer directly parses the PDB file, calling methods for each parsed element: secondary structure elements, atoms, coordinates, etc. PDB Explorer is freely available at http://www.pdbexplorer.eminentbio.com/home with no requirement of log-in.
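The fixed-column parsing any PDB reader performs can be sketched as follows (PDB Explorer itself uses Perl and JavaScript; this Python sketch is illustrative, and the two ATOM records are fabricated but follow the wwPDB column layout):

```python
# Sketch of PDB ATOM-record parsing: the format is fixed-column, so fields
# are taken by slice, not by splitting on whitespace.
# Columns (1-based): 13-16 atom name, 31-38 x, 39-46 y, 47-54 z.
def parse_atoms(pdb_text):
    """Yield (atom_name, x, y, z) for each ATOM record."""
    for line in pdb_text.splitlines():
        if line.startswith("ATOM"):
            yield (line[12:16].strip(),
                   float(line[30:38]), float(line[38:46]), float(line[46:54]))

sample = (
    "ATOM      1  N   MET A   1      11.104   6.134  -6.504  1.00  0.00           N\n"
    "ATOM      2  CA  MET A   1      11.639   6.071  -5.147  1.00  0.00           C\n"
)
atoms = list(parse_atoms(sample))
```

Slicing by column is what makes PDB "machine readable but not human readable": the meaning of each field lives in the specification, not in the file.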

  1. NoSQL: collection document and cloud by using a dynamic web query form

    NASA Astrophysics Data System (ADS)

    Abdalla, Hemn B.; Lin, Jinzhao; Li, Guoquan

    2015-07-01

    MongoDB (from "humongous") is an open-source document database and the leading NoSQL database. NoSQL ("Not Only SQL") refers to a next generation of databases that are non-relational, distributed, open-source and horizontally scalable, providing a mechanism for the storage and retrieval of documents. Previously, we stored and retrieved data using SQL queries; here we use MongoDB, so MySQL and SQL queries are not involved. Documents are imported directly into a drive folder using an IO BufferReader, and retrieved from that folder using a BufferWriter, without applying SQL queries. Security is provided for the stored files: if the documents were stored in an ordinary local folder, anyone could view or modify them. To prevent this, the original document files are converted to another format; in this paper, a binary format is used. After conversion to binary, the documents are stored in a folder, and the storage service provides a private key for accessing each file. If any other user tries to open the document files, the data appear only in binary format; the file's owner alone can view the original format using the personal key received as a secret key from the cloud.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sublet, J.-Ch.; Koning, A.J.; Forrest, R.A.

    The reasons for the conversion of the European Activation File, EAF, into ENDF-6 format are threefold. First, it significantly enhances the JEFF-3.0 release by the addition of an activation file. Second, it considerably increases its usage by using a recognized, official file format, allowing existing plug-in processes to be effective. Third, it moves towards a universal nuclear data file, in contrast to the current separate general and special-purpose files. The format chosen for the JEFF-3.0/A file uses reaction cross sections (MF-3), cross sections (MF-10), and multiplicities (MF-9). Having the data in ENDF-6 format allows the ENDF suite of utilities and checker codes to be used alongside many other utility, visualizing, and processing codes. It is based on the EAF activation file used for many applications from fission to fusion, including dosimetry, inventories, depletion-transmutation, and geophysics. JEFF-3.0/A takes advantage of four generations of EAF files. Extensive benchmarking activities on these files provide feedback and validation with integral measurements. These, in parallel with a detailed graphical analysis based on EXFOR, have been applied, stimulating new measurements and significantly increasing the quality of this activation file. The next step is to include the EAF uncertainty data for all channels into JEFF-3.0/A.

  3. BOREAS TE-18, 30-m, Radiometrically Rectified Landsat TM Imagery

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Knapp, David

    2000-01-01

    The BOREAS TE-18 team used a radiometric rectification process to produce standardized DN values for a series of Landsat TM images of the BOREAS SSA and NSA in order to compare images that were collected under different atmospheric conditions. The images for each study area were referenced to an image that had very clear atmospheric qualities. The reference image for the SSA was collected on 02-Sep-1994, while the reference image for the NSA was collected on 21-Jun-1995. The 23 rectified images cover the period of 07-Jul-1985 to 18-Sep-1994 in the SSA and from 22-Jun-1984 to 09-Jun-1994 in the NSA. Each of the reference scenes had coincident atmospheric optical thickness measurements made by RSS-11. The radiometric rectification process is described in more detail by Hall et al. (1991). The original Landsat TM data were received from CCRS for use in the BOREAS project. The data are stored in binary image-format files. Due to the nature of the radiometric rectification process and copyright issues, these full-resolution images may not be publicly distributed. However, a spatially degraded 60-m resolution version of the images is available on the BOREAS CD-ROM series. See Sections 15 and 16 for information about how to possibly acquire the full-resolution data. Information about the full-resolution images is provided in an inventory listing on the CD-ROMs. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).

  4. FRS Geospatial Return File Format

    EPA Pesticide Factsheets

    The Geospatial Return File Format describes the format that must be used to submit latitude and longitude coordinates for use in Envirofacts mapping applications. These coordinates are stored in the Geospatial Reference Tables.

  5. SEDIMENT DATA - COMMENCEMENT BAY HYLEBOS WATERWAY - TACOMA, WA - PRE-REMEDIAL DESIGN PROGRAM

    EPA Science Inventory

    Event 1A/1B Data Files URL address: http://www.epa.gov/r10earth/datalib/superfund/hybos1ab.htm. Sediment Chemistry Data (Database Format): HYBOS1AB.EXE is a self-extracting file which expands to the single-value per record .DBF format database file HYBOS1AB.DBF. This file contai...

  6. 76 FR 5431 - Released Rates of Motor Common Carriers of Household Goods

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-31

    ... may be submitted either via the Board's e-filing format or in traditional paper format. Any person using e-filing should attach a document and otherwise comply with the instructions at the E- FILING link on the Board's website at http://www.stb.dot.gov . Any person submitting a filing in the traditional...

  7. 75 FR 52054 - Assessment of Mediation and Arbitration Procedures

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-24

    ...: Comments may be submitted either via the Board's e-filing format or in the traditional paper format. Any person using e-filing should attach a document and otherwise comply with the instructions at the E-FILING link on the Board's Web site, at http://www.stb.dot.gov . Any person submitting a filing in the...

  8. Method for measuring anterior chamber volume by image analysis

    NASA Astrophysics Data System (ADS)

    Zhai, Gaoshou; Zhang, Junhong; Wang, Ruichang; Wang, Bingsong; Wang, Ningli

    2007-12-01

    Anterior chamber volume (ACV) is very important for an oculist making a rational pathological diagnosis for patients who have optic diseases such as glaucoma, yet it is always difficult to measure accurately. In this paper, a method is devised to measure anterior chamber volumes based on JPEG-formatted image files that have been converted from medical images acquired using the anterior chamber optical coherence tomographer (AC-OCT) and the corresponding image-processing software. The corresponding algorithms for image analysis and ACV calculation are implemented in VC++, and a series of anterior chamber images of typical patients are analyzed. The calculated anterior chamber volumes are verified to be in accord with clinical observation. The results show that the measurement method is effective and feasible and has the potential to improve the accuracy of ACV calculation. Meanwhile, some measures should be taken to simplify the manual preprocessing of the images.

  9. [A solution for display and processing of DICOM images in web PACS].

    PubMed

    Xue, Wei-jing; Lu, Wen; Wang, Hai-yang; Meng, Jian

    2009-03-01

    Java Applet technology is used to support the display of DICOM images in an ordinary Web browser, thereby extending the browser's medical image processing functions. First, the DICOM file format is analyzed and a class is designed to acquire the pixel data; then two Applet classes are designed, one to process the DICOM image and the other to display the image processed by the first. Both are embedded in the view page and communicate through an AppletContext object. The method designed in this paper lets users display and process DICOM images directly in an ordinary Web browser, giving a Web PACS the advantages of both the B/S and the C/S models. Java Applet is the key to extending the Web browser's functions in a Web PACS, and this work provides a guideline for the sharing of medical images.
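A minimal sketch of the first step such a reader performs when "analyzing the format of a DICOM file" (the paper's code is a Java Applet; this Python sketch only checks the DICOM Part 10 signature, and the sample bytes are fabricated):

```python
# Sketch: a DICOM Part 10 file starts with a 128-byte preamble followed
# by the 4-byte magic "DICM"; a reader checks this before parsing elements.
def is_dicom(data: bytes) -> bool:
    """True if the buffer carries the DICM magic at offset 128."""
    return len(data) >= 132 and data[128:132] == b"DICM"

# Fabricated file image: empty preamble, magic, then the (omitted) data set.
fake = bytes(128) + b"DICM" + b"rest-of-dataset"
```

Only after this check does a reader go on to parse the data elements (tags, value representations, and finally the pixel data the Applet displays).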

  10. Photon-HDF5: An Open File Format for Timestamp-Based Single-Molecule Fluorescence Experiments.

    PubMed

    Ingargiola, Antonino; Laurence, Ted; Boutelle, Robert; Weiss, Shimon; Michalet, Xavier

    2016-01-05

    We introduce Photon-HDF5, an open and efficient file format to simplify exchange and long-term accessibility of data from single-molecule fluorescence experiments based on photon-counting detectors such as single-photon avalanche diode, photomultiplier tube, or arrays of such detectors. The format is based on HDF5, a widely used platform- and language-independent hierarchical file format for which user-friendly viewers are available. Photon-HDF5 can store raw photon data (timestamp, channel number, etc.) from any acquisition hardware, but also setup and sample description, information on provenance, authorship and other metadata, and is flexible enough to include any kind of custom data. The format specifications are hosted on a public website, which is open to contributions by the biophysics community. As an initial resource, the website provides code examples to read Photon-HDF5 files in several programming languages and a reference Python library (phconvert), to create new Photon-HDF5 files and convert several existing file formats into Photon-HDF5. To encourage adoption by the academic and commercial communities, all software is released under the MIT open source license. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  11. Photon-HDF5: An Open File Format for Timestamp-Based Single-Molecule Fluorescence Experiments

    PubMed Central

    Ingargiola, Antonino; Laurence, Ted; Boutelle, Robert; Weiss, Shimon; Michalet, Xavier

    2016-01-01

    We introduce Photon-HDF5, an open and efficient file format to simplify exchange and long-term accessibility of data from single-molecule fluorescence experiments based on photon-counting detectors such as single-photon avalanche diode, photomultiplier tube, or arrays of such detectors. The format is based on HDF5, a widely used platform- and language-independent hierarchical file format for which user-friendly viewers are available. Photon-HDF5 can store raw photon data (timestamp, channel number, etc.) from any acquisition hardware, but also setup and sample description, information on provenance, authorship and other metadata, and is flexible enough to include any kind of custom data. The format specifications are hosted on a public website, which is open to contributions by the biophysics community. As an initial resource, the website provides code examples to read Photon-HDF5 files in several programming languages and a reference Python library (phconvert), to create new Photon-HDF5 files and convert several existing file formats into Photon-HDF5. To encourage adoption by the academic and commercial communities, all software is released under the MIT open source license. PMID:26745406

  12. Photon-HDF5: an open file format for single-molecule fluorescence experiments using photon-counting detectors

    DOE PAGES

    Ingargiola, A.; Laurence, T. A.; Boutelle, R.; ...

    2015-12-23

    We introduce Photon-HDF5, an open and efficient file format to simplify exchange and long-term accessibility of data from single-molecule fluorescence experiments based on photon-counting detectors such as single-photon avalanche diode (SPAD), photomultiplier tube (PMT) or arrays of such detectors. The format is based on HDF5, a widely used platform- and language-independent hierarchical file format for which user-friendly viewers are available. Photon-HDF5 can store raw photon data (timestamp, channel number, etc.) from any acquisition hardware, but also setup and sample description, information on provenance, authorship and other metadata, and is flexible enough to include any kind of custom data. The format specifications are hosted on a public website, which is open to contributions by the biophysics community. As an initial resource, the website provides code examples to read Photon-HDF5 files in several programming languages and a reference Python library (phconvert), to create new Photon-HDF5 files and convert several existing file formats into Photon-HDF5. To encourage adoption by the academic and commercial communities, all software is released under the MIT open source license.

  13. Single mimivirus particles intercepted and imaged with an X-ray laser (CXIDB ID 1)

    DOE Data Explorer

    Seibert, M. Marvin; Ekeberg, Tomas; Maia, Filipe R.N.C.

    2011-02-02

    These are the files used to reconstruct the images in the paper "Single Mimivirus particles intercepted and imaged with an X-ray laser". Besides the diffracted intensities, the Hawk configuration files used for the reconstructions are also provided. The files from CXIDB ID 1 are the pattern and configuration files for the pattern showed in Figure 2a in the paper.

  14. Single mimivirus particles intercepted and imaged with an X-ray laser (CXIDB ID 2)

    DOE Data Explorer

    Seibert, M. Marvin; Ekeberg, Tomas

    2011-02-02

    These are the files used to reconstruct the images in the paper "Single Mimivirus particles intercepted and imaged with an X-ray laser". Besides the diffracted intensities, the Hawk configuration files used for the reconstructions are also provided. The files from CXIDB ID 2 are the pattern and configuration files for the pattern showed in Figure 2b in the paper.

  15. Data Flow for the TERRA-REF project

    NASA Astrophysics Data System (ADS)

    Kooper, R.; Burnette, M.; Maloney, J.; LeBauer, D.

    2017-12-01

    The Transportation Energy Resources from Renewable Agriculture Phenotyping Reference Platform (TERRA-REF) program aims to identify crop traits that are best suited to producing high-energy sustainable biofuels and match those plant characteristics to their genes to speed the plant breeding process. One tool used to achieve this goal is a high-throughput phenotyping robot outfitted with sensors and cameras to monitor the growth of 1.25 acres of sorghum. Data types range from hyperspectral imaging to 3D reconstructions and thermal profiles, all at 1mm resolution. This system produces thousands of daily measurements with high spatiotemporal resolution. The team at NCSA processes, annotates, organizes and stores the massive amounts of data produced by this system - up to 5 TB per day. Data from the sensors is streamed to a local gantry-cache server. The standardized sensor raw data stream is automatically and securely delivered to NCSA using Globus Connect service. Once files have been successfully received by the Globus endpoint, the files are removed from the gantry-cache server. As each dataset arrives or is created the Clowder system automatically triggers different software tools to analyze each file, extract information, and convert files to a common format. Other tools can be triggered to run after all required data is uploaded. For example, a stitched image of the entire field is created after all images of the field become available. Some of these tools were developed by external collaborators based on predictive models and algorithms, others were developed as part of other projects and could be leveraged by the TERRA project. Data will be stored for the lifetime of the project and is estimated to reach 10 PB over 3 years. The Clowder system, BETY and other systems will allow users to easily find data by browsing or searching the extracted information.

  16. Streamlined, Inexpensive 3D Printing of the Brain and Skull.

    PubMed

    Naftulin, Jason S; Kimchi, Eyal Y; Cash, Sydney S

    2015-01-01

    Neuroimaging technologies such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) collect three-dimensional data (3D) that is typically viewed on two-dimensional (2D) screens. Actual 3D models, however, allow interaction with real objects such as implantable electrode grids, potentially improving patient specific neurosurgical planning and personalized clinical education. Desktop 3D printers can now produce relatively inexpensive, good quality prints. We describe our process for reliably generating life-sized 3D brain prints from MRIs and 3D skull prints from CTs. We have integrated a standardized, primarily open-source process for 3D printing brains and skulls. We describe how to convert clinical neuroimaging Digital Imaging and Communications in Medicine (DICOM) images to stereolithography (STL) files, a common 3D object file format that can be sent to 3D printing services. We additionally share how to convert these STL files to machine instruction gcode files, for reliable in-house printing on desktop, open-source 3D printers. We have successfully printed over 19 patient brain hemispheres from 7 patients on two different open-source desktop 3D printers. Each brain hemisphere costs approximately $3-4 in consumable plastic filament as described, and the total process takes 14-17 hours, almost all of which is unsupervised (preprocessing = 4-6 hr; printing = 9-11 hr, post-processing = <30 min). Printing a matching portion of a skull costs $1-5 in consumable plastic filament and takes less than 14 hr, in total. We have developed a streamlined, cost-effective process for 3D printing brain and skull models. We surveyed healthcare providers and patients who confirmed that rapid-prototype patient specific 3D models may help interdisciplinary surgical planning and patient education. The methods we describe can be applied for other clinical, research, and educational purposes.
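The binary STL layout that the DICOM-to-STL conversion ultimately targets can be sketched as follows (a single fabricated triangle, not actual brain geometry): an 80-byte header, a uint32 triangle count, then 50 bytes per facet.

```python
# Sketch of the binary STL layout: 80-byte header, uint32 facet count,
# then per facet: normal + three vertices (12 float32) + uint16 attribute.
import struct

def write_stl(triangles):
    """Serialize [(normal, v1, v2, v3), ...] to binary STL bytes."""
    out = [b"\x00" * 80, struct.pack("<I", len(triangles))]
    for normal, v1, v2, v3 in triangles:
        for vec in (normal, v1, v2, v3):
            out.append(struct.pack("<3f", *vec))
        out.append(struct.pack("<H", 0))  # attribute byte count
    return b"".join(out)

# One made-up facet: unit-z normal, right triangle in the xy plane.
tri = ((0, 0, 1), (0, 0, 0), (1, 0, 0), (0, 1, 0))
blob = write_stl([tri])
# 80 + 4 + 50 bytes for a single facet
```

A brain surface mesh is simply hundreds of thousands of such facets, which is why STL files from neuroimaging segmentations get large quickly.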

  17. Log ASCII Standard (LAS) Files for Geophysical Wireline Well Logs and Their Application to Geologic Cross Sections Through the Central Appalachian Basin

    USGS Publications Warehouse

    Crangle, Robert D.

    2007-01-01

    Introduction The U.S. Geological Survey (USGS) uses geophysical wireline well logs for a variety of purposes, including stratigraphic correlation (Hettinger, 2001; Ryder, 2002), petroleum reservoir analyses (Nelson and Bird, 2005), aquifer studies (Balch, 1988), and synthetic seismic profiles (Kulander and Ryder, 2005). Commonly, well logs are easier to visualize, manipulate, and interpret when available in a digital format. In recent geologic cross sections E-E' and D-D', constructed through the central Appalachian basin (Ryder, Swezey, and others, in press; Ryder, Crangle, and others, in press), gamma ray well log traces and lithologic logs were used to correlate key stratigraphic intervals (Fig. 1). The stratigraphy and structure of the cross sections are illustrated through the use of graphical software applications (e.g., Adobe Illustrator). The gamma ray traces were digitized in Neuralog (proprietary software) from paper well logs and converted to a Log ASCII Standard (LAS) format. Once converted, the LAS files were transformed to images through an LAS-reader application (e.g., GeoGraphix Prizm) and then overlain in positions adjacent to well locations, used for stratigraphic control, on each cross section. This report summarizes the procedures used to convert paper logs to a digital LAS format using a third-party software application, Neuralog. Included in this report are LAS files for sixteen wells used in geologic cross section E-E' (Table 1) and thirteen wells used in geologic cross section D-D' (Table 2).
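The LAS layout being produced here can be sketched roughly as follows (conventions follow the CWLS LAS 2.0 style: a "~" line starts a section, and curve lines read "MNEM.UNIT data : description"; the two-curve example is fabricated):

```python
# Rough sketch of LAS parsing: walk the file, track which "~" section we
# are in, and collect curve mnemonics from the ~Curve section.
def parse_curves(las_text):
    """Return the curve mnemonics listed in the ~Curve section."""
    curves, in_curves = [], False
    for line in las_text.splitlines():
        line = line.strip()
        if line.startswith("~"):
            in_curves = line.upper().startswith("~C")
        elif in_curves and line and not line.startswith("#"):
            curves.append(line.split(".", 1)[0].strip())
    return curves

sample = """~Version
VERS.   2.0 : CWLS log ASCII standard
~Curve
DEPT.FT     : Depth
GR  .GAPI   : Gamma Ray
~ASCII
100.0  55.2"""
curves = parse_curves(sample)
```

The ~Curve section tells a reader (such as the LAS-display application mentioned above) how to label the columns of the ~ASCII data block that follows.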

  18. ELAS - SCIENCE & TECHNOLOGY LABORATORY APPLICATIONS SOFTWARE (SILICON GRAPHICS VERSION)

    NASA Technical Reports Server (NTRS)

    Walters, D.

    1994-01-01

    The Science and Technology Laboratory Applications Software (ELAS) was originally designed to analyze and process digital imagery data, specifically remotely-sensed scanner data. This capability includes the processing of Landsat multispectral data; aircraft-acquired scanner data; digitized topographic data; and numerous other ancillary data, such as soil types and rainfall information, that can be stored in digitized form. ELAS has the subsequent capability to geographically reference this data to dozens of standard, as well as user created projections. As an integrated image processing system, ELAS offers the user of remotely-sensed data a wide range of capabilities in the areas of land cover analysis and general purpose image analysis. ELAS is designed for flexible use and operation and includes its own FORTRAN operating subsystem and an expandable set of FORTRAN application modules. Because all of ELAS resides in one "logical" FORTRAN program, data inputs and outputs, directives, and module switching are convenient for the user. There are over 230 modules presently available to aid the user in performing a wide range of land cover analyses and manipulation. The file management modules enable the user to allocate, define, access, and specify usage for all types of files (ELAS files, subfiles, external files etc.). Various other modules convert specific types of satellite, aircraft, and vector-polygon data into files that can be used by other ELAS modules. The user also has many module options which aid in displaying image data, such as magnification/reduction of the display; true color display; and several memory functions. Additional modules allow for the building and manipulation of polygonal areas of the image data. Finally, there are modules which allow the user to select and classify the image data. An important feature of the ELAS subsystem is that its structure allows new applications modules to be easily integrated in the future. 
ELAS has as a standard the flexibility to process data elements exceeding 8 bits in length, including floating point (noninteger) elements and 16 or 32 bit integers. Thus it is able to analyze and process "non-standard" nonimage data. The VAX (ERL-10017) and Concurrent (ERL-10013) versions of ELAS 9.0 are written in FORTRAN and ASSEMBLER for DEC VAX series computers running VMS and Concurrent computers running MTM. The Sun (SSC-00019), Masscomp (SSC-00020), and Silicon Graphics (SSC-00021) versions of ELAS 9.0 are written in FORTRAN 77 and C-LANGUAGE for Sun4 series computers running SunOS, Masscomp computers running UNIX, and Silicon Graphics IRIS computers running IRIX. The Concurrent version requires at least 15 bit addressing and a direct memory access channel. The VAX and Concurrent versions of ELAS both require floating-point hardware, at least 1Mb of RAM, and approximately 70Mb of disk space. Both versions also require a COMTAL display device in order to display images. For the Sun, Masscomp, and Silicon Graphics versions of ELAS, the disk storage required is approximately 115Mb, and a minimum of 8Mb of RAM is required for execution. The Sun version of ELAS requires either the X-Window System Version 11 Revision 4 or Sun OpenWindows Version 2. The Masscomp version requires a GA1000 display device and the associated "gp" library. The Silicon Graphics version requires Silicon Graphics' GL library. ELAS display functions will not work with a monochrome monitor. The standard distribution medium for the VAX version (ERL10017) is a set of two 9-track 1600 BPI magnetic tapes in DEC VAX BACKUP format. This version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. The standard distribution medium for the Concurrent version (ERL-10013) is a set of two 9-track 1600 BPI magnetic tapes in Concurrent BACKUP format. The standard distribution medium for the Sun version (SSC-00019) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. 
The standard distribution medium for the Masscomp version (SSC-00020) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Silicon Graphics version (SSC-00021) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Version 9.0 was released in 1991. Sun4, SunOS, and OpenWindows are trademarks of Sun Microsystems, Inc. MIT X Window System is licensed by the Massachusetts Institute of Technology.

  19. ELAS - SCIENCE & TECHNOLOGY LABORATORY APPLICATIONS SOFTWARE (CONCURRENT VERSION)

    NASA Technical Reports Server (NTRS)

    Pearson, R. W.

    1994-01-01

    The Science and Technology Laboratory Applications Software (ELAS) was originally designed to analyze and process digital imagery data, specifically remotely-sensed scanner data. This capability includes the processing of Landsat multispectral data; aircraft-acquired scanner data; digitized topographic data; and numerous other ancillary data, such as soil types and rainfall information, that can be stored in digitized form. ELAS has the subsequent capability to geographically reference this data to dozens of standard, as well as user created projections. As an integrated image processing system, ELAS offers the user of remotely-sensed data a wide range of capabilities in the areas of land cover analysis and general purpose image analysis. ELAS is designed for flexible use and operation and includes its own FORTRAN operating subsystem and an expandable set of FORTRAN application modules. Because all of ELAS resides in one "logical" FORTRAN program, data inputs and outputs, directives, and module switching are convenient for the user. There are over 230 modules presently available to aid the user in performing a wide range of land cover analyses and manipulation. The file management modules enable the user to allocate, define, access, and specify usage for all types of files (ELAS files, subfiles, external files etc.). Various other modules convert specific types of satellite, aircraft, and vector-polygon data into files that can be used by other ELAS modules. The user also has many module options which aid in displaying image data, such as magnification/reduction of the display; true color display; and several memory functions. Additional modules allow for the building and manipulation of polygonal areas of the image data. Finally, there are modules which allow the user to select and classify the image data. An important feature of the ELAS subsystem is that its structure allows new applications modules to be easily integrated in the future. 
ELAS has as a standard the flexibility to process data elements exceeding 8 bits in length, including floating point (noninteger) elements and 16 or 32 bit integers. Thus it is able to analyze and process "non-standard" nonimage data. The VAX (ERL-10017) and Concurrent (ERL-10013) versions of ELAS 9.0 are written in FORTRAN and ASSEMBLER for DEC VAX series computers running VMS and Concurrent computers running MTM. The Sun (SSC-00019), Masscomp (SSC-00020), and Silicon Graphics (SSC-00021) versions of ELAS 9.0 are written in FORTRAN 77 and C-LANGUAGE for Sun4 series computers running SunOS, Masscomp computers running UNIX, and Silicon Graphics IRIS computers running IRIX. The Concurrent version requires at least 15 bit addressing and a direct memory access channel. The VAX and Concurrent versions of ELAS both require floating-point hardware, at least 1Mb of RAM, and approximately 70Mb of disk space. Both versions also require a COMTAL display device in order to display images. For the Sun, Masscomp, and Silicon Graphics versions of ELAS, the disk storage required is approximately 115Mb, and a minimum of 8Mb of RAM is required for execution. The Sun version of ELAS requires either the X-Window System Version 11 Revision 4 or Sun OpenWindows Version 2. The Masscomp version requires a GA1000 display device and the associated "gp" library. The Silicon Graphics version requires Silicon Graphics' GL library. ELAS display functions will not work with a monochrome monitor. The standard distribution medium for the VAX version (ERL10017) is a set of two 9-track 1600 BPI magnetic tapes in DEC VAX BACKUP format. This version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. The standard distribution medium for the Concurrent version (ERL-10013) is a set of two 9-track 1600 BPI magnetic tapes in Concurrent BACKUP format. The standard distribution medium for the Sun version (SSC-00019) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. 
The standard distribution medium for the Masscomp version (SSC-00020) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Silicon Graphics version (SSC-00021) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Version 9.0 was released in 1991. Sun4, SunOS, and OpenWindows are trademarks of Sun Microsystems, Inc. MIT X Window System is licensed by the Massachusetts Institute of Technology.

  20. ELAS - SCIENCE & TECHNOLOGY LABORATORY APPLICATIONS SOFTWARE (SUN VERSION)

    NASA Technical Reports Server (NTRS)

    Walters, D.

    1994-01-01

    The Science and Technology Laboratory Applications Software (ELAS) was originally designed to analyze and process digital imagery data, specifically remotely-sensed scanner data. This capability includes the processing of Landsat multispectral data; aircraft-acquired scanner data; digitized topographic data; and numerous other ancillary data, such as soil types and rainfall information, that can be stored in digitized form. ELAS has the subsequent capability to geographically reference this data to dozens of standard, as well as user created projections. As an integrated image processing system, ELAS offers the user of remotely-sensed data a wide range of capabilities in the areas of land cover analysis and general purpose image analysis. ELAS is designed for flexible use and operation and includes its own FORTRAN operating subsystem and an expandable set of FORTRAN application modules. Because all of ELAS resides in one "logical" FORTRAN program, data inputs and outputs, directives, and module switching are convenient for the user. There are over 230 modules presently available to aid the user in performing a wide range of land cover analyses and manipulation. The file management modules enable the user to allocate, define, access, and specify usage for all types of files (ELAS files, subfiles, external files etc.). Various other modules convert specific types of satellite, aircraft, and vector-polygon data into files that can be used by other ELAS modules. The user also has many module options which aid in displaying image data, such as magnification/reduction of the display; true color display; and several memory functions. Additional modules allow for the building and manipulation of polygonal areas of the image data. Finally, there are modules which allow the user to select and classify the image data. An important feature of the ELAS subsystem is that its structure allows new applications modules to be easily integrated in the future. 
ELAS has as a standard the flexibility to process data elements exceeding 8 bits in length, including floating point (noninteger) elements and 16 or 32 bit integers. Thus it is able to analyze and process "non-standard" nonimage data. The VAX (ERL-10017) and Concurrent (ERL-10013) versions of ELAS 9.0 are written in FORTRAN and ASSEMBLER for DEC VAX series computers running VMS and Concurrent computers running MTM. The Sun (SSC-00019), Masscomp (SSC-00020), and Silicon Graphics (SSC-00021) versions of ELAS 9.0 are written in FORTRAN 77 and C-LANGUAGE for Sun4 series computers running SunOS, Masscomp computers running UNIX, and Silicon Graphics IRIS computers running IRIX. The Concurrent version requires at least 15 bit addressing and a direct memory access channel. The VAX and Concurrent versions of ELAS both require floating-point hardware, at least 1Mb of RAM, and approximately 70Mb of disk space. Both versions also require a COMTAL display device in order to display images. For the Sun, Masscomp, and Silicon Graphics versions of ELAS, the disk storage required is approximately 115Mb, and a minimum of 8Mb of RAM is required for execution. The Sun version of ELAS requires either the X-Window System Version 11 Revision 4 or Sun OpenWindows Version 2. The Masscomp version requires a GA1000 display device and the associated "gp" library. The Silicon Graphics version requires Silicon Graphics' GL library. ELAS display functions will not work with a monochrome monitor. The standard distribution medium for the VAX version (ERL10017) is a set of two 9-track 1600 BPI magnetic tapes in DEC VAX BACKUP format. This version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. The standard distribution medium for the Concurrent version (ERL-10013) is a set of two 9-track 1600 BPI magnetic tapes in Concurrent BACKUP format. The standard distribution medium for the Sun version (SSC-00019) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. 
The standard distribution medium for the Masscomp version (SSC-00020) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Silicon Graphics version (SSC-00021) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Version 9.0 was released in 1991. Sun4, SunOS, and OpenWindows are trademarks of Sun Microsystems, Inc. MIT X Window System is licensed by the Massachusetts Institute of Technology.

  1. ELAS - SCIENCE & TECHNOLOGY LABORATORY APPLICATIONS SOFTWARE (MASSCOMP VERSION)

    NASA Technical Reports Server (NTRS)

    Walters, D.

    1994-01-01

    The Science and Technology Laboratory Applications Software (ELAS) was originally designed to analyze and process digital imagery data, specifically remotely-sensed scanner data. This capability includes the processing of Landsat multispectral data; aircraft-acquired scanner data; digitized topographic data; and numerous other ancillary data, such as soil types and rainfall information, that can be stored in digitized form. ELAS has the subsequent capability to geographically reference this data to dozens of standard, as well as user created projections. As an integrated image processing system, ELAS offers the user of remotely-sensed data a wide range of capabilities in the areas of land cover analysis and general purpose image analysis. ELAS is designed for flexible use and operation and includes its own FORTRAN operating subsystem and an expandable set of FORTRAN application modules. Because all of ELAS resides in one "logical" FORTRAN program, data inputs and outputs, directives, and module switching are convenient for the user. There are over 230 modules presently available to aid the user in performing a wide range of land cover analyses and manipulation. The file management modules enable the user to allocate, define, access, and specify usage for all types of files (ELAS files, subfiles, external files etc.). Various other modules convert specific types of satellite, aircraft, and vector-polygon data into files that can be used by other ELAS modules. The user also has many module options which aid in displaying image data, such as magnification/reduction of the display; true color display; and several memory functions. Additional modules allow for the building and manipulation of polygonal areas of the image data. Finally, there are modules which allow the user to select and classify the image data. An important feature of the ELAS subsystem is that its structure allows new applications modules to be easily integrated in the future. 
ELAS has as a standard the flexibility to process data elements exceeding 8 bits in length, including floating point (noninteger) elements and 16 or 32 bit integers. Thus it is able to analyze and process "non-standard" nonimage data. The VAX (ERL-10017) and Concurrent (ERL-10013) versions of ELAS 9.0 are written in FORTRAN and ASSEMBLER for DEC VAX series computers running VMS and Concurrent computers running MTM. The Sun (SSC-00019), Masscomp (SSC-00020), and Silicon Graphics (SSC-00021) versions of ELAS 9.0 are written in FORTRAN 77 and C-LANGUAGE for Sun4 series computers running SunOS, Masscomp computers running UNIX, and Silicon Graphics IRIS computers running IRIX. The Concurrent version requires at least 15 bit addressing and a direct memory access channel. The VAX and Concurrent versions of ELAS both require floating-point hardware, at least 1Mb of RAM, and approximately 70Mb of disk space. Both versions also require a COMTAL display device in order to display images. For the Sun, Masscomp, and Silicon Graphics versions of ELAS, the disk storage required is approximately 115Mb, and a minimum of 8Mb of RAM is required for execution. The Sun version of ELAS requires either the X-Window System Version 11 Revision 4 or Sun OpenWindows Version 2. The Masscomp version requires a GA1000 display device and the associated "gp" library. The Silicon Graphics version requires Silicon Graphics' GL library. ELAS display functions will not work with a monochrome monitor. The standard distribution medium for the VAX version (ERL10017) is a set of two 9-track 1600 BPI magnetic tapes in DEC VAX BACKUP format. This version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. The standard distribution medium for the Concurrent version (ERL-10013) is a set of two 9-track 1600 BPI magnetic tapes in Concurrent BACKUP format. The standard distribution medium for the Sun version (SSC-00019) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. 
The standard distribution medium for the Masscomp version (SSC-00020) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Silicon Graphics version (SSC-00021) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Version 9.0 was released in 1991. Sun4, SunOS, and OpenWindows are trademarks of Sun Microsystems, Inc. MIT X Window System is licensed by the Massachusetts Institute of Technology.

  2. ELAS - SCIENCE & TECHNOLOGY LABORATORY APPLICATIONS SOFTWARE (DEC VAX VERSION)

    NASA Technical Reports Server (NTRS)

    Junkin, B. G.

    1994-01-01

    The Science and Technology Laboratory Applications Software (ELAS) was originally designed to analyze and process digital imagery data, specifically remotely-sensed scanner data. This capability includes the processing of Landsat multispectral data; aircraft-acquired scanner data; digitized topographic data; and numerous other ancillary data, such as soil types and rainfall information, that can be stored in digitized form. ELAS has the subsequent capability to geographically reference this data to dozens of standard, as well as user created projections. As an integrated image processing system, ELAS offers the user of remotely-sensed data a wide range of capabilities in the areas of land cover analysis and general purpose image analysis. ELAS is designed for flexible use and operation and includes its own FORTRAN operating subsystem and an expandable set of FORTRAN application modules. Because all of ELAS resides in one "logical" FORTRAN program, data inputs and outputs, directives, and module switching are convenient for the user. There are over 230 modules presently available to aid the user in performing a wide range of land cover analyses and manipulation. The file management modules enable the user to allocate, define, access, and specify usage for all types of files (ELAS files, subfiles, external files etc.). Various other modules convert specific types of satellite, aircraft, and vector-polygon data into files that can be used by other ELAS modules. The user also has many module options which aid in displaying image data, such as magnification/reduction of the display; true color display; and several memory functions. Additional modules allow for the building and manipulation of polygonal areas of the image data. Finally, there are modules which allow the user to select and classify the image data. An important feature of the ELAS subsystem is that its structure allows new applications modules to be easily integrated in the future. 
ELAS has as a standard the flexibility to process data elements exceeding 8 bits in length, including floating point (noninteger) elements and 16 or 32 bit integers. Thus it is able to analyze and process "non-standard" nonimage data. The VAX (ERL-10017) and Concurrent (ERL-10013) versions of ELAS 9.0 are written in FORTRAN and ASSEMBLER for DEC VAX series computers running VMS and Concurrent computers running MTM. The Sun (SSC-00019), Masscomp (SSC-00020), and Silicon Graphics (SSC-00021) versions of ELAS 9.0 are written in FORTRAN 77 and C-LANGUAGE for Sun4 series computers running SunOS, Masscomp computers running UNIX, and Silicon Graphics IRIS computers running IRIX. The Concurrent version requires at least 15 bit addressing and a direct memory access channel. The VAX and Concurrent versions of ELAS both require floating-point hardware, at least 1Mb of RAM, and approximately 70Mb of disk space. Both versions also require a COMTAL display device in order to display images. For the Sun, Masscomp, and Silicon Graphics versions of ELAS, the disk storage required is approximately 115Mb, and a minimum of 8Mb of RAM is required for execution. The Sun version of ELAS requires either the X-Window System Version 11 Revision 4 or Sun OpenWindows Version 2. The Masscomp version requires a GA1000 display device and the associated "gp" library. The Silicon Graphics version requires Silicon Graphics' GL library. ELAS display functions will not work with a monochrome monitor. The standard distribution medium for the VAX version (ERL10017) is a set of two 9-track 1600 BPI magnetic tapes in DEC VAX BACKUP format. This version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. The standard distribution medium for the Concurrent version (ERL-10013) is a set of two 9-track 1600 BPI magnetic tapes in Concurrent BACKUP format. The standard distribution medium for the Sun version (SSC-00019) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. 
The standard distribution medium for the Masscomp version (SSC-00020) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Silicon Graphics version (SSC-00021) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Version 9.0 was released in 1991. Sun4, SunOS, and OpenWindows are trademarks of Sun Microsystems, Inc. MIT X Window System is licensed by the Massachusetts Institute of Technology.

  3. Keemei: cloud-based validation of tabular bioinformatics file formats in Google Sheets.

    PubMed

    Rideout, Jai Ram; Chase, John H; Bolyen, Evan; Ackermann, Gail; González, Antonio; Knight, Rob; Caporaso, J Gregory

    2016-06-13

    Bioinformatics software often requires human-generated tabular text files as input and has specific requirements for how those data are formatted. Users frequently manage these data in spreadsheet programs, which is convenient for researchers who are compiling the requisite information because the spreadsheet programs can easily be used on different platforms including laptops and tablets, and because they provide a familiar interface. It is increasingly common for many different researchers to be involved in compiling these data, including study coordinators, clinicians, lab technicians and bioinformaticians. As a result, many research groups are shifting toward using cloud-based spreadsheet programs, such as Google Sheets, which support the concurrent editing of a single spreadsheet by different users working on different platforms. Most of the researchers who enter data are not familiar with the formatting requirements of the bioinformatics programs that will be used, so validating and correcting file formats is often a bottleneck prior to beginning bioinformatics analysis. We present Keemei, a Google Sheets Add-on, for validating tabular files used in bioinformatics analyses. Keemei is available free of charge from Google's Chrome Web Store. Keemei can be installed and run on any web browser supported by Google Sheets. Keemei currently supports the validation of two widely used tabular bioinformatics formats, the Quantitative Insights into Microbial Ecology (QIIME) sample metadata mapping file format and the Spatially Referenced Genetic Data (SRGD) format, but is designed to easily support the addition of others. Keemei will save researchers time and frustration by providing a convenient interface for tabular bioinformatics file format validation. 
By allowing everyone involved with data entry for a project to easily validate their data, it will reduce the validation and formatting bottlenecks that are commonly encountered when human-generated data files are first used with a bioinformatics system. Simplifying the validation of essential tabular data files, such as sample metadata, will reduce common errors and thereby improve the quality and reliability of research outcomes.
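
    Validation of the kind Keemei performs can be sketched for a QIIME-style mapping file. The rules below (a leading #SampleID header cell, unique non-empty sample IDs) are a small, assumed subset chosen for illustration; the add-on itself runs inside Google Sheets and enforces many more rules.

```python
def validate_mapping(tsv_text):
    """Toy validator in the spirit of Keemei: checks a QIIME-style
    tab-separated mapping file for the required #SampleID leading
    header and for duplicate or empty sample IDs. Illustrative only."""
    errors = []
    lines = [l.split("\t") for l in tsv_text.rstrip("\n").split("\n")]
    header, rows = lines[0], lines[1:]
    if not header or header[0] != "#SampleID":
        errors.append("first header cell must be #SampleID")
    seen = set()
    for i, row in enumerate(rows, start=2):  # spreadsheet-style row numbers
        sid = row[0].strip() if row else ""
        if not sid:
            errors.append(f"row {i}: empty sample ID")
        elif sid in seen:
            errors.append(f"row {i}: duplicate sample ID {sid!r}")
        seen.add(sid)
    return errors
```

    Returning a list of row-addressed messages, rather than failing on the first problem, mirrors how spreadsheet-based validators report everything a data-entry user needs to fix in one pass.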

  4. Fpack and Funpack Utilities for FITS Image Compression and Uncompression

    NASA Technical Reports Server (NTRS)

    Pence, W.

    2008-01-01

    Fpack is a utility program for optimally compressing images in the FITS (Flexible Image Transport System) data format (see http://fits.gsfc.nasa.gov). The associated funpack program restores the compressed image file back to its original state (as long as a lossless compression algorithm is used). These programs may be run from the host operating system command line and are analogous to the gzip and gunzip utility programs, except that they are optimized for FITS format images and offer a wider choice of compression algorithms. Fpack stores the compressed image using the FITS tiled image compression convention (see http://fits.gsfc.nasa.gov/fits_registry.html). Under this convention, the image is first divided into a user-configurable grid of rectangular tiles, and then each tile is individually compressed and stored in a variable-length array column in a FITS binary table. By default, fpack adopts a row-by-row tiling pattern. The FITS image header keywords remain uncompressed for fast access by FITS reading and writing software. The tiled image compression convention can in principle support any number of different compression algorithms. The fpack and funpack utilities call on routines in the CFITSIO library (http://heasarc.gsfc.nasa.gov/fitsio) to perform the actual compression and uncompression of the FITS images; the library currently supports the GZIP, Rice, H-compress, and PLIO IRAF pixel list compression algorithms.
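
    The tiling idea can be illustrated with a toy analog: split an image into row tiles and compress each independently, so a reader can decompress only the rows it needs. Here zlib stands in for the GZIP/Rice/H-compress/PLIO codecs; this is not the CFITSIO implementation, and the function names are invented for the sketch.

```python
import zlib

def compress_tiles(pixels, width, tile_rows=1):
    """Split a flat list of 8-bit pixels into row tiles and compress
    each tile independently, loosely mimicking the FITS tiled-image
    convention's default row-by-row tiling."""
    height = len(pixels) // width
    tiles = []
    for r0 in range(0, height, tile_rows):
        tile = pixels[r0 * width:(r0 + tile_rows) * width]
        tiles.append(zlib.compress(bytes(tile)))
    return tiles

def decompress_tiles(tiles):
    """Reassemble the full pixel list from the independent tiles."""
    out = []
    for blob in tiles:
        out.extend(zlib.decompress(blob))
    return out
```

    Because each tile is self-contained, random access to a row range costs one small decompression rather than inflating the whole image.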

  5. A Python library for FAIRer access and deposition to the Metabolomics Workbench Data Repository.

    PubMed

    Smelter, Andrey; Moseley, Hunter N B

    2018-01-01

    The Metabolomics Workbench Data Repository is a public repository of mass spectrometry and nuclear magnetic resonance data and metadata derived from a wide variety of metabolomics studies. The data and metadata for each study are deposited, stored, and accessed via files in the domain-specific 'mwTab' flat file format. In order to improve the accessibility, reusability, and interoperability of the data and metadata stored in 'mwTab' formatted files, we implemented a Python library and package. This Python package, named 'mwtab', is a parser for the domain-specific 'mwTab' flat file format, which provides facilities for reading, accessing, and writing 'mwTab' formatted files. Furthermore, the package provides facilities to validate both the format and required metadata elements of a given 'mwTab' formatted file. To develop the 'mwtab' package, we used the official 'mwTab' format specification. We used Git version control along with the Python unit-testing framework and a continuous integration service to run those tests on multiple versions of Python. Package documentation was developed using the Sphinx documentation generator. The 'mwtab' package provides both Python programmatic library interfaces and command-line interfaces for reading, writing, and validating 'mwTab' formatted files. Data and associated metadata are stored within Python dictionary- and list-based data structures, enabling straightforward, 'pythonic' access and manipulation of data and metadata. Also, the package provides facilities to convert 'mwTab' files into a JSON formatted equivalent, enabling easy reusability of the data by all modern programming languages that implement JSON parsers. The 'mwtab' package implements its metadata validation functionality based on a pre-defined JSON schema that can be easily specialized for specific types of metabolomics studies.
The library also provides a command-line interface for interconversion between 'mwTab' and JSONized formats in raw text and a variety of compressed binary file formats. The 'mwtab' package is an easy-to-use Python package that provides FAIRer utilization of the Metabolomics Workbench Data Repository. The source code is freely available on GitHub and via the Python Package Index. Documentation includes a 'User Guide', 'Tutorial', and 'API Reference'. The GitHub repository also provides 'mwtab' package unit-tests via a continuous integration service.
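
    The flat-file-to-JSON conversion described above can be sketched on a toy key-value layout. The SECTION:KEY<tab>value lines below are an invented stand-in for illustration, not the actual mwTab specification, and this code is not the 'mwtab' package.

```python
import json

def flatfile_to_json(text):
    """Parse a toy flat file of SECTION:KEY<tab>value lines into a
    nested dict and serialize it as JSON. Illustrates the general
    flat-file-to-JSON idea only; the real mwTab layout is richer."""
    data = {}
    for line in text.splitlines():
        if not line.strip() or "\t" not in line:
            continue  # skip blanks and lines without a value field
        key, value = line.split("\t", 1)
        section, field = key.split(":", 1)
        data.setdefault(section, {})[field] = value
    return json.dumps(data, indent=2)
```

    Once in JSON, the same metadata is readable from any language with a JSON parser, which is the interoperability point the abstract makes.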

  6. Accelerating Malware Detection via a Graphics Processing Unit

    DTIC Science & Technology

    2010-09-01

    Processing Unit ... PE (Portable Executable) ... COFF (Common Object File Format) ... operating systems for the future [Szo05]. The PE format is an updated version of the Common Object File Format (COFF) [Mic06]. Microsoft released a new ... [NAs02]. These alerts can be costly in terms of time and resources for individuals and organizations to investigate each misidentified file [YWL07] [Vak10

  7. BOREAS TGB-12 Soil Carbon and Flux Data of NSA-MSA in Raster Format

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Knapp, David E. (Editor); Rapalee, Gloria; Davidson, Eric; Harden, Jennifer W.; Trumbore, Susan E.; Veldhuis, Hugo

    2000-01-01

    The BOREAS TGB-12 team made measurements of soil carbon inventories, carbon concentration in soil gases, and rates of soil respiration at several sites. This data set provides: (1) estimates of soil carbon stocks by horizon based on soil survey data and analyses of data from individual soil profiles; (2) estimates of soil carbon fluxes based on stocks, fire history, drainage, and soil carbon inputs and decomposition constants based on field work using radiocarbon analyses; (3) fire history data estimating age ranges of time since last fire; and (4) a raster image and an associated soils table file from which area-weighted maps of soil carbon and fluxes and fire history may be generated. This data set was created from raster files, soil polygon data files, and detailed lab analysis of soils data that were received from Dr. Hugo Veldhuis, who did the original mapping in the field during 1994. Also used were soils data from Susan Trumbore and Jennifer Harden (BOREAS TGB-12). The binary raster file covers a 733-km² area within the NSA-MSA.
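
    The pairing of a classed raster with an attribute table, from which area-weighted maps are generated, can be sketched as a simple lookup. The class codes, carbon values, and function name below are invented for the example and are not the TGB-12 data.

```python
def area_weighted_carbon(raster, carbon_by_class, cell_area_km2=1.0):
    """Combine a classed raster (nested lists of integer class codes)
    with a per-class soil-carbon table to produce an area-weighted
    mean carbon density. Sketch of the raster-plus-table idea only."""
    total_area = 0.0
    total_carbon = 0.0
    for row in raster:
        for code in row:
            total_area += cell_area_km2
            total_carbon += carbon_by_class[code] * cell_area_km2
    return total_carbon / total_area  # mean carbon per unit area
```

    Replacing the mean with a per-cell lookup would instead render the area-weighted map itself, one value per raster cell.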

  8. The shaping effects of three nickel-titanium rotary instruments in simulated S-shaped canals.

    PubMed

    Yoshimine, Y; Ono, M; Akamine, A

    2005-05-01

    The purpose of this study was to compare the shaping effects of three nickel-titanium rotary instruments, ProTaper, K3, and RaCe, with emphasis on canal transportation. Simulated canals with an S-shaped curvature in clear resin blocks were prepared with a torque-controlled, low-speed engine. Canals were prepared using the crown-down technique to size #30. Canal aberrations were assessed by comparing the pre- and postinstrumentation images under a stereomicroscope. ProTaper instruments caused greater widening of canals compared to K3 or RaCe. Furthermore, ProTaper files showed a tendency toward ledge or zip formation at the end-point of preparation. These canal aberrations may be caused by the ProTaper finishing files, which appear to be less flexible than other files of the same tip size because of their greater taper. These results suggest that nickel-titanium file systems that include less tapered, more flexible instruments, like K3 and RaCe, should be used in the apical preparation of canals with a complicated curvature.

  9. Computer printing and filing of microbiology reports. 1. Description of the system.

    PubMed Central

    Goodwin, C S; Smith, B C

    1976-01-01

    From March 1974 all reports from this microbiology department have been computer printed and filed. The system was designed to include every medically important microorganism and test. Technicians at the laboratory bench made their results computer-readable using Port-a-punch cards, and specimen details were recorded on paper-tape, allowing the full description of each specimen to appear on the report. A summary form of each microbiology phrase enabled copies of reports to be printed on wide paper with 12 to 18 reports per sheet; such copies, in alphabetical order for one day and cumulatively for one week, were used by staff answering enquiries to the office. This format could also be used for printing all the reports for one patient. Retrieval of results from the files was easily performed and was useful to medical and laboratory staff and for control-of-infection purposes. The system was written in COBOL and was designed to be as cost-effective as possible without sacrificing accuracy; the cost of a report and its filing was 17.97 pence. PMID: 939809

  10. Facilitating Analysis of Multiple Partial Data Streams

    NASA Technical Reports Server (NTRS)

    Maimone, Mark W.; Liebersbach, Robert R.

    2008-01-01

    Robotic Operations Automation: Mechanisms, Imaging, Navigation report Generation (ROAMING) is a set of computer programs that facilitates and accelerates both tactical and strategic analysis of time-sampled data, especially the disparate and often incomplete streams of Mars Exploration Rover (MER) telemetry data described in the immediately preceding article. As used here, tactical refers to activities over a relatively short time (one Martian day in the original MER application) and strategic refers to a longer time (the entire multi-year MER missions in the original application). Prior to installation, ROAMING must be configured with the types of data of interest, and parsers must be modified to understand the format of the input data (many example parsers are provided, including one for general CSV files). Thereafter, new data from multiple disparate sources are automatically resampled into a single common annotated spreadsheet stored in a readable space-separated format, and these data can be processed or plotted at any time scale. Such processing or plotting makes it possible to study not only the details of a particular activity spanning only a few seconds, but also longer-term trends. ROAMING makes it possible to generate mission-wide plots of multiple engineering quantities [e.g., vehicle tilt as in Figure 1(a), motor current, numbers of images] that heretofore could be found only in thousands of separate files. ROAMING also supports automatic annotation of both images and graphs. In the MER application, labels given to terrain features by rover scientists and engineers are automatically plotted in all received images based on their associated camera models (see Figure 2), times measured in seconds are mapped to Mars local time, and command names or arbitrary time-labeled events can be used to label engineering plots, as in Figure 1(b).

  11. Automated extraction of chemical structure information from digital raster images

    PubMed Central

    Park, Jungkap; Rosania, Gus R; Shedden, Kerby A; Nguyen, Mandee; Lyu, Naesung; Saitou, Kazuhiro

    2009-01-01

    Background To search for chemical structures in research articles, diagrams or text representing molecules need to be translated to a standard chemical file format compatible with cheminformatic search engines. Nevertheless, chemical information contained in research articles is often referenced as analog diagrams of chemical structures embedded in digital raster images. To automate analog-to-digital conversion of chemical structure diagrams in scientific research articles, several software systems have been developed, but their algorithmic performance and utility in cheminformatic research have not been investigated. Results This paper aims to provide critical reviews of these systems and also reports our recent development of ChemReader, a fully automated tool for extracting chemical structure diagrams in research articles and converting them into standard, searchable chemical file formats. Basic algorithms for recognizing lines and letters representing bonds and atoms in chemical structure diagrams can be independently run in sequence from a graphical user interface (and the algorithm parameters can be readily changed) to facilitate additional development specifically tailored to a chemical database annotation scheme. Compared with existing software programs such as OSRA, Kekule, and CLiDE, our results indicate that ChemReader outperforms other software systems on several sets of sample images from diverse sources in terms of the rate of correct outputs and the accuracy of extracting molecular substructure patterns. Conclusion The availability of ChemReader as a cheminformatic tool for extracting chemical structure information from digital raster images allows research and development groups to enrich their chemical structure databases by annotating the entries with published research articles.
Based on its stable performance and high accuracy, ChemReader may be sufficiently accurate for annotating the chemical database with links to scientific research articles. PMID:19196483

  12. SSM/OOM - SSM WITH OOM MANIPULATION CODE

    NASA Technical Reports Server (NTRS)

    Goza, S. P.

    1994-01-01

    Creating, animating, and recording solid-shaded and wireframe three-dimensional geometric models can be of great assistance in the research and design phases of product development, in project planning, and in engineering analyses. SSM and OOM are application programs which together allow for interactive construction and manipulation of three-dimensional models of real-world objects as simple as boxes or as complex as Space Station Freedom. The output of SSM, in the form of binary files defining geometric three dimensional models, is used as input to OOM. Animation in OOM is done using 3D models from SSM as well as cameras and light sources. The animated results of OOM can be output to videotape recorders, film recorders, color printers and disk files. SSM and OOM are also available separately as MSC-21914 and MSC-22263, respectively. The Solid Surface Modeler (SSM) is an interactive graphics software application for solid-shaded and wireframe three-dimensional geometric modeling. The program has a versatile user interface that, in many cases, allows mouse input for intuitive operation or keyboard input when accuracy is critical. SSM can be used as a stand-alone model generation and display program and offers high-fidelity still image rendering. Models created in SSM can also be loaded into the Object Orientation Manipulator for animation or engineering simulation. The Object Orientation Manipulator (OOM) is an application program for creating, rendering, and recording three-dimensional computer-generated still and animated images. This is done using geometrically defined 3D models, cameras, and light sources, referred to collectively as animation elements. OOM does not provide the tools necessary to construct 3D models; instead, it imports binary format model files generated by the Solid Surface Modeler (SSM). Model files stored in other formats must be converted to the SSM binary format before they can be used in OOM. 
    SSM is available as MSC-21914 or as part of the SSM/OOM bundle, COS-10047. Among OOM's features are collision detection (with visual and audio feedback), the capability to define and manipulate hierarchical relationships between animation elements, stereographic display, and ray-traced rendering. OOM uses Euler angle transformations for calculating the results of translation and rotation operations. OOM and SSM are written in C-language for implementation on SGI IRIS 4D series workstations running the IRIX operating system. A minimum of 8Mb of RAM is recommended for each program. The standard distribution medium for this program package is a .25 inch streaming magnetic IRIX tape cartridge in UNIX tar format. These versions of OOM and SSM were released in 1993.

  13. Performance of the JPEG Estimated Spectrum Adaptive Postfilter (JPEG-ESAP) for Low Bit Rates

    NASA Technical Reports Server (NTRS)

    Linares, Irving (Inventor)

    2016-01-01

    Frequency-based, pixel-adaptive filtering using the JPEG-ESAP algorithm may allow low bit rate JPEG formatted color images to be compressed further, to a smaller file size or bitrate, while maintaining equivalent quality. For RGB, an image is decomposed into three color bands: red, green, and blue. The JPEG-ESAP algorithm is then applied to each band (e.g., once for red, once for green, and once for blue) and the output of each application of the algorithm is rebuilt as a single color image. The ESAP algorithm may be repeatedly applied to MPEG-2 video frames to reduce their bit rate by a factor of 2 or 3 while maintaining equivalent video quality, both perceptually and objectively, as recorded in the computed PSNR values.
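
    The per-band processing flow described above can be sketched as follows. This is a minimal illustration of the split-filter-recombine structure only; the `postfilter` function is a placeholder (identity here), not the actual JPEG-ESAP algorithm.

```python
def split_bands(pixels):
    """Split a list of (r, g, b) tuples into three separate band lists."""
    reds = [p[0] for p in pixels]
    greens = [p[1] for p in pixels]
    blues = [p[2] for p in pixels]
    return reds, greens, blues

def postfilter(band):
    """Placeholder for the pixel-adaptive post-filter (identity here)."""
    return list(band)

def process_rgb(pixels):
    """Apply the filter once per band, then rebuild a single color image."""
    bands = [postfilter(b) for b in split_bands(pixels)]
    return list(zip(*bands))

image = [(10, 20, 30), (40, 50, 60)]
# With the identity placeholder, the rebuilt image equals the input.
assert process_rgb(image) == image
```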

  14. Cycle time reduction by Html report in mask checking flow

    NASA Astrophysics Data System (ADS)

    Chen, Jian-Cheng; Lu, Min-Ying; Fang, Xiang; Shen, Ming-Feng; Ma, Shou-Yuan; Yang, Chuen-Huei; Tsai, Joe; Lee, Rachel; Deng, Erwin; Lin, Ling-Chieh; Liao, Hung-Yueh; Tsai, Jenny; Bowhill, Amanda; Vu, Hien; Russell, Gordon

    2017-07-01

    The Mask Data Correctness Check (MDCC) is a reticle-level, multi-layer DRC-like check evolved from mask rule check (MRC). The MDCC uses extended job deck (EJB) to achieve mask composition and to perform a detailed check for positioning and integrity of each component of the reticle. Different design patterns on the mask will be mapped to different layers. Therefore, users may be able to review the whole reticle and check the interactions between different designs before the final mask pattern file is available. However, many types of MDCC check results, such as errors from overlapping patterns, usually have very large and complex-shaped highlighted areas covering the boundary of the design. Users have to load the result OASIS file, overlay it on the original database that was assembled in the MDCC process in a layout viewer, and then search for the details of the check results. We introduce a quick result-reviewing method based on an html format report generated by Calibre® RVE. In the report generation process, we analyze and extract the essential part of the result OASIS file to a result database (RDB) file by standard verification rule format (SVRF) commands. Calibre® RVE automatically loads the assembled reticle pattern and generates screenshots of these check results. The entire process is triggered automatically when the MDCC run finishes. Users simply open the html report to find the information they need: for example, the check summary, captured images of results, and their coordinates.

  15. A fast and efficient python library for interfacing with the Biological Magnetic Resonance Data Bank.

    PubMed

    Smelter, Andrey; Astra, Morgan; Moseley, Hunter N B

    2017-03-17

    The Biological Magnetic Resonance Data Bank (BMRB) is a public repository of Nuclear Magnetic Resonance (NMR) spectroscopic data of biological macromolecules. It is an important resource for many researchers using NMR to study structural, biophysical, and biochemical properties of biological macromolecules. It is primarily maintained and accessed in a flat file ASCII format known as NMR-STAR. While the format is human readable, the size of most BMRB entries makes computer readability and explicit representation a practical requirement for almost any rigorous systematic analysis. To aid in the use of this public resource, we have developed a package called nmrstarlib in the popular open-source programming language Python. The nmrstarlib's implementation is very efficient, both in design and execution. The library has facilities for reading and writing both NMR-STAR version 2.1 and 3.1 formatted files, parsing them into usable Python dictionary- and list-based data structures, making access and manipulation of the experimental data very natural within Python programs (i.e. "saveframe" and "loop" records represented as individual Python dictionary data structures). Another major advantage of this design is that data stored in original NMR-STAR can be easily converted into its equivalent JavaScript Object Notation (JSON) format, a lightweight data interchange format, facilitating data access and manipulation using Python and any other programming language that implements a JSON parser/generator (i.e., all popular programming languages). We have also developed tools to visualize assigned chemical shift values and to convert between NMR-STAR and JSONized NMR-STAR formatted files. Full API Reference Documentation, User Guide and Tutorial with code examples are also available. We have tested this new library on all current BMRB entries: 100% of all entries are parsed without any errors for both NMR-STAR version 2.1 and version 3.1 formatted files. 
We also compared our software to three currently available Python libraries for parsing NMR-STAR formatted files: PyStarLib, NMRPyStar, and PyNMRSTAR. The nmrstarlib package is a simple, fast, and efficient library for accessing data from the BMRB. The library provides an intuitive dictionary-based interface with which Python programs can read, edit, and write NMR-STAR formatted files and their equivalent JSONized NMR-STAR files. The nmrstarlib package can be used as a library for accessing and manipulating data stored in NMR-STAR files and as a command-line tool to convert from NMR-STAR file format into its equivalent JSON file format and vice versa, and to visualize chemical shift values. Furthermore, the nmrstarlib implementation provides a guide for effectively JSONizing other older scientific formats, improving the FAIRness of data in these formats.
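
    The "JSONized NMR-STAR" idea described above can be sketched with the standard library alone: saveframe and loop records represented as plain Python dictionaries and lists serialize directly to JSON and back without loss. The key names below are hypothetical illustrations, not the library's exact schema.

```python
import json

# A saveframe as a dictionary; a loop as a list of row dictionaries.
# These names are illustrative only, not nmrstarlib's actual layout.
saveframe = {
    "save_entry_information": {
        "Entry.ID": "15000",
        "loop_0": [
            {"Entry_author.Given_name": "A.", "Entry_author.Family_name": "B."},
        ],
    }
}

text = json.dumps(saveframe, indent=2)   # dict-based NMR-STAR -> JSON text
roundtrip = json.loads(text)             # JSON text -> dict, losslessly
assert roundtrip == saveframe
```

    Because the intermediate representation is ordinary dictionaries and lists, any language with a JSON parser can consume the converted files, which is the interoperability point the abstract makes.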

  16. Archive of digital CHIRP seismic reflection data collected during USGS cruise 06FSH01 offshore of Siesta Key, Florida, May 2006

    USGS Publications Warehouse

    Harrison, Arnell S.; Dadisman, Shawn V.; Flocks, James G.; Wiese, Dana S.; Robbins, Lisa L.

    2007-01-01

    In May of 2006, the U.S. Geological Survey conducted geophysical surveys offshore of Siesta Key, Florida. This report serves as an archive of unprocessed digital chirp seismic reflection data, trackline maps, navigation files, GIS information, Field Activity Collection System (FACS) logs, observer's logbook, and formal FGDC metadata. Gained digital images of the seismic profiles are also provided. The archived trace data are in standard Society of Exploration Geophysicists (SEG) SEG-Y format (Barry and others, 1975) and may be downloaded and processed with commercial or public domain software such as Seismic Unix (SU). Example SU processing scripts and USGS software for viewing the SEG-Y files (Zihlman, 1992) are also provided.
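
    As a sketch of what reading such archived trace data involves, the following parses a few fields from a SEG-Y binary file header (the 400 bytes following the 3200-byte textual header) using the standard layout: sample interval at bytes 3217-3218, samples per trace at 3221-3222, and data format code at 3225-3226, all big-endian 16-bit integers. This is a format illustration, not the USGS software referenced above.

```python
import struct

def read_binary_header(data):
    """Parse three standard fields from a raw SEG-Y file's leading bytes."""
    binhdr = data[3200:3600]                       # 400-byte binary header
    interval, = struct.unpack(">H", binhdr[16:18]) # file bytes 3217-3218
    nsamples, = struct.unpack(">H", binhdr[20:22]) # file bytes 3221-3222
    fmt, = struct.unpack(">H", binhdr[24:26])      # file bytes 3225-3226
    return {"sample_interval_us": interval,
            "samples_per_trace": nsamples,
            "format_code": fmt}

# Synthetic header for demonstration: interval = 2000 us, 1500 samples,
# format code 1 (4-byte IBM floating point).
synthetic = bytearray(3600)
synthetic[3216:3218] = struct.pack(">H", 2000)
synthetic[3220:3222] = struct.pack(">H", 1500)
synthetic[3224:3226] = struct.pack(">H", 1)
hdr = read_binary_header(bytes(synthetic))
assert hdr == {"sample_interval_us": 2000,
               "samples_per_trace": 1500,
               "format_code": 1}
```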

  17. Archive of digital CHIRP seismic reflection data collected during USGS cruise 06SCC01 offshore of Isles Dernieres, Louisiana, June 2006

    USGS Publications Warehouse

    Harrison, Arnell S.; Dadisman, Shawn V.; Ferina, Nick F.; Wiese, Dana S.; Flocks, James G.

    2007-01-01

    In June of 2006, the U.S. Geological Survey conducted a geophysical survey offshore of Isles Dernieres, Louisiana. This report serves as an archive of unprocessed digital CHIRP seismic reflection data, trackline maps, navigation files, GIS information, Field Activity Collection System (FACS) logs, observer's logbook, and formal FGDC metadata. Gained digital images of the seismic profiles are also provided. The archived trace data are in standard Society of Exploration Geophysicists (SEG) SEG-Y format (Barry and others, 1975) and may be downloaded and processed with commercial or public domain software such as Seismic Unix (SU). Example SU processing scripts and USGS software for viewing the SEG-Y files (Zihlman, 1992) are also provided.

  18. An integrated software system for geometric correction of LANDSAT MSS imagery

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Esilva, A. J. F. M.; Camara-Neto, G.; Serra, P. R. M.; Desousa, R. C. M.; Mitsuo, Fernando Augusta, II

    1984-01-01

    A system for geometrically correcting LANDSAT MSS imagery includes all phases of processing, from receiving a raw computer compatible tape (CCT) to the generation of a corrected CCT (or UTM mosaic). The system comprises modules for: (1) control of the processing flow; (2) calculation of satellite ephemeris and attitude parameters; (3) generation of uncorrected files from raw CCT data; (4) creation, management, and maintenance of a ground control point library; (5) determination of the image correction equations, using attitude and ephemeris parameters and existing ground control points; (6) generation of the corrected LANDSAT file, using the equations determined beforehand; (7) union of LANDSAT scenes to produce a UTM mosaic; and (8) generation of the output tape, in super-structure format.

  19. Effect of combined digital imaging parameters on endodontic file measurements.

    PubMed

    de Oliveira, Matheus Lima; Pinto, Geraldo Camilo de Souza; Ambrosano, Glaucia Maria Bovi; Tosoni, Guilherme Monteiro

    2012-10-01

    This study assessed the effect of the combination of a dedicated endodontic filter, spatial resolution, and contrast resolution on the determination of endodontic file lengths. Forty extracted single-rooted teeth were x-rayed with K-files (ISO size 10 and 15) in the root canals. Images were acquired using the VistaScan system (Dürr Dental, Beitigheim-Bissingen, Germany) under different combinations of spatial resolution (10 and 25 line pairs per millimeter [lp/mm]) and contrast resolution (8- and 16-bit depths). Subsequently, a dedicated endodontic filter was applied to the 16-bit images, creating 2 additional parameters. Six observers measured the length of the endodontic files in the root canals using the software that accompanies the system. The mean values of the actual file lengths and the measurements of the radiographic images were submitted to 1-way analysis of variance and the Tukey test at a level of significance of 5%. The intraobserver reproducibility was assessed by the intraclass correlation coefficient. All combined image parameters showed excellent intraobserver agreement, with intraclass correlation coefficient means higher than 0.98. The imaging parameter of 25 lp/mm and 16 bit associated with the use of the endodontic filter did not differ significantly from the actual file lengths when both file sizes were analyzed together or separately (P > .05). When the size 15 file was evaluated separately, only 8-bit images differed significantly from the actual file lengths (P ≤ .05). The combination of an endodontic filter with high spatial resolution and high contrast resolution is recommended for the determination of file lengths when using storage phosphor plates. Copyright © 2012 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  20. Online molecular image repository and analysis system: A multicenter collaborative open-source infrastructure for molecular imaging research and application.

    PubMed

    Rahman, Mahabubur; Watabe, Hiroshi

    2018-05-01

    Molecular imaging serves as an important tool for researchers and clinicians to visualize and investigate complex biochemical phenomena using specialized instruments; these instruments are either used individually or in combination with targeted imaging agents to obtain images related to specific diseases with high sensitivity, specificity, and signal-to-noise ratios. However, molecular imaging, which is a multidisciplinary research field, faces several challenges, including the integration of imaging informatics with bioinformatics and medical informatics, requirement of reliable and robust image analysis algorithms, effective quality control of imaging facilities, and those related to individualized disease mapping, data sharing, software architecture, and knowledge management. As a cost-effective and open-source approach to address these challenges related to molecular imaging, we develop a flexible, transparent, and secure infrastructure, named MIRA, which stands for Molecular Imaging Repository and Analysis, primarily using the Python programming language, and a MySQL relational database system deployed on a Linux server. MIRA is designed with a centralized image archiving infrastructure and information database so that a multicenter collaborative informatics platform can be built. The capability of dealing with metadata, image file format normalization, and storing and viewing different types of documents and multimedia files make MIRA considerably flexible. With features like logging, auditing, commenting, sharing, and searching, MIRA is useful as an Electronic Laboratory Notebook for effective knowledge management. In addition, the centralized approach for MIRA facilitates on-the-fly access to all its features remotely through any web browser. Furthermore, the open-source approach provides the opportunity for sustainable continued development. 
    MIRA offers an infrastructure that can be used as a cross-boundary collaborative molecular imaging research platform, supporting rapid advances in cancer diagnosis and therapeutics. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. HUBBLE SHOWS EXPANSION OF ETA CARINAE DEBRIS

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The furious expansion of a huge, billowing pair of gas and dust clouds is captured in this NASA Hubble Space Telescope comparison image of the supermassive star Eta Carinae. To create the picture, astronomers aligned and subtracted two images of Eta Carinae taken 17 months apart (April 1994, September 1995). Black represents where the material was located in the older image, and white represents its more recent location. (The light and dark streaks that make an 'X' pattern are instrumental artifacts caused by the extreme brightness of the central star. The bright white region at the center of the image results from the star and its immediate surroundings being 'saturated' in one of the images.) Photo Credit: Jon Morse (University of Colorado), Kris Davidson (University of Minnesota), and NASA. Image files in GIF and JPEG format and captions may be accessed on the Internet via anonymous ftp from oposite.stsci.edu in /pubinfo.

  2. High-contrast multilayer imaging of biological organisms through dark-field digital refocusing.

    PubMed

    Faridian, Ahmad; Pedrini, Giancarlo; Osten, Wolfgang

    2013-08-01

    We have developed an imaging system to extract high contrast images from different layers of biological organisms. Utilizing a digital holographic approach, the system works without scanning through layers of the specimen. In dark-field illumination, scattered light has the main contribution in image formation, but in the case of coherent illumination, this creates a strong speckle noise that reduces the image quality. To remove this restriction, the specimen has been illuminated with various speckle-fields and a hologram has been recorded for each speckle-field. Each hologram has been analyzed separately and the corresponding intensity image has been reconstructed. The final image has been derived by averaging over the reconstructed images. A correlation approach has been utilized to determine the number of speckle-fields required to achieve a desired contrast and image quality. The reconstructed intensity images in different object layers are shown for different sea urchin larvae. Two multimedia files are attached to illustrate the process of digital focusing.

  3. Reversible watermarking for knowledge digest embedding and reliability control in medical images.

    PubMed

    Coatrieux, Gouenou; Le Guillou, Clara; Cauvin, Jean-Michel; Roux, Christian

    2009-03-01

    To improve medical image sharing in applications such as e-learning or remote diagnosis aid, we propose to make the image more usable by watermarking it with a digest of its associated knowledge. Such a knowledge digest (KD) is intended to be used for retrieving similar images with either the same findings or differential diagnoses. It summarizes the symbolic descriptions of the image, the symbolic descriptions of the findings semiology, and the similarity rules that contribute to balancing the importance of previous descriptors when comparing images. Instead of modifying the image file format by adding some extra header information, watermarking is used to embed the KD in the pixel gray-level values of the corresponding images. When shared through open networks, watermarking also helps to convey reliability proofs (integrity and authenticity) of an image and its KD. The interest of these new image functionalities is illustrated in the updating of the distributed users' databases within the framework of an e-learning application demonstrator of endoscopic semiology.

  4. Digital geologic map of the Butler Peak 7.5' quadrangle, San Bernardino County, California

    USGS Publications Warehouse

    Miller, Fred K.; Matti, Jonathan C.; Brown, Howard J.; digital preparation by Cossette, P. M.

    2000-01-01

    Open-File Report 00-145 is a digital geologic map database of the Butler Peak 7.5' quadrangle that includes (1) ARC/INFO (Environmental Systems Research Institute) version 7.2.1 Patch 1 coverages, and associated tables, (2) a Portable Document Format (.pdf) file of the Description of Map Units, Correlation of Map Units chart, and an explanation of symbols used on the map, btlrpk_dcmu.pdf, (3) a Portable Document Format file of this Readme, btlrpk_rme.pdf (the Readme is also included as an ascii file in the data package), and (4) a PostScript plot file of the map, Correlation of Map Units, and Description of Map Units on a single sheet, btlrpk.ps. No paper map is included in the Open-File report, but the PostScript plot file (number 4 above) can be used to produce one. The PostScript plot file generates a map, peripheral text, and diagrams in the editorial format of USGS Geologic Investigation Series (I-series) maps.

  5. MXA: a customizable HDF5-based data format for multi-dimensional data sets

    NASA Astrophysics Data System (ADS)

    Jackson, M.; Simmons, J. P.; De Graef, M.

    2010-09-01

    A new digital file format is proposed for the long-term archival storage of experimental data sets generated by serial sectioning instruments. The format is known as the multi-dimensional eXtensible Archive (MXA) format and is based on the public domain Hierarchical Data Format (HDF5). The MXA data model, its description by means of an eXtensible Markup Language (XML) file with associated Document Type Definition (DTD) are described in detail. The public domain MXA package is available through a dedicated web site (mxa.web.cmu.edu), along with implementation details and example data files.
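
    The idea of describing a hierarchical, HDF5-style data model in an XML file, as MXA does, can be sketched with the standard library. The element and attribute names below are hypothetical illustrations, not the actual MXA data model or its DTD.

```python
import xml.etree.ElementTree as ET

# A hypothetical XML description of a hierarchical data model: groups
# containing datasets, each dataset carrying its dimensions and type.
model_xml = """
<data_model version="1.0">
  <group name="experiment">
    <dataset name="section_042" dims="1024 1024" dtype="uint16"/>
  </group>
</data_model>
"""

root = ET.fromstring(model_xml)
# Walk the model and collect every dataset's declared dimensions.
datasets = {ds.get("name"): ds.get("dims") for ds in root.iter("dataset")}
assert datasets == {"section_042": "1024 1024"}
```

    Keeping the model description in XML (validated against a DTD) separate from the HDF5 payload is what makes the archive format self-describing and extensible.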

  6. Clementine High Resolution Camera Mosaicking Project

    NASA Technical Reports Server (NTRS)

    1998-01-01

    This report constitutes the final report for NASA Contract NASW-5054. This project processed Clementine I high resolution images of the Moon, mosaicked these images together, and created a 22-disk set of compact disk read-only memory (CD-ROM) volumes. The mosaics were produced through semi-automated registration and calibration of the high resolution (HiRes) camera's data against the geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic produced by the US Geological Survey (USGS). The HiRes mosaics were compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution nadir-looking observations. The images were spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel for sub-polar mosaics (below 80 deg. latitude) and using the stereographic projection at a scale of 30 m/pixel for polar mosaics. Only images with emission angles less than approximately 50° were used. Images from non-mapping cross-track slews, which tended to have large SPICE errors, were generally omitted. The locations of the resulting image population were found to be offset from the UV/Vis basemap by up to 13 km (0.4 deg.). Geometric control was taken from the 100 m/pixel global and 150 m/pixel polar USGS Clementine Basemap Mosaics compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Radiometric calibration was achieved by removing the image nonuniformity dominated by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap, that approximately transform the 8-bit HiRes data to photometric units. The sub-polar mosaics are divided into tiles that cover approximately 1.75 deg. of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude.
Polar mosaics are tiled into squares 2250 pixels on a side, which spans approximately 2.2 deg. Two mosaics are provided for each pole: one corresponding to data acquired while periapsis was in the south, the other while periapsis was in the north. The CD-ROMs also contain ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files.
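
    The sinusoidal equal-area projection used for the sub-polar mosaics has a simple closed form: x = R·(lon − lon0)·cos(lat), y = R·lat, with angles in radians. The sketch below assumes a mean lunar radius of about 1737.4 km and a hypothetical central meridian lon0; it illustrates the map equations only, not the actual mosaicking pipeline.

```python
import math

LUNAR_RADIUS_M = 1_737_400.0  # assumed mean lunar radius in meters

def sinusoidal(lat_deg, lon_deg, lon0_deg=0.0, radius=LUNAR_RADIUS_M):
    """Forward sinusoidal equal-area projection; returns (x, y) in meters."""
    lat = math.radians(lat_deg)
    dlon = math.radians(lon_deg - lon0_deg)
    return radius * dlon * math.cos(lat), radius * lat

# At the origin of the projection, both coordinates are zero.
assert sinusoidal(0.0, 0.0) == (0.0, 0.0)
# Meridians converge poleward: the same 1 deg of longitude spans less
# ground distance at 60 deg latitude than at the equator.
assert sinusoidal(60.0, 1.0)[0] < sinusoidal(0.0, 1.0)[0]
```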

  7. Tele-transmission of stereoscopic images of the optic nerve head in glaucoma via Internet.

    PubMed

    Bergua, Antonio; Mardin, Christian Y; Horn, Folkert K

    2009-06-01

    The objective was to describe an inexpensive system to visualize stereoscopic photographs of the optic nerve head on computer displays and to transmit such images via the Internet for collaborative research or remote clinical diagnosis in glaucoma. Stereoscopic images of glaucoma patients were digitized and stored in a file format (joint photographic stereoimage [jps]) containing all three-dimensional information for both eyes on an Internet Web site (www.trizax.com). The size of jps files was between 0.4 and 1.4 MB (corresponding to a diagonal stereo image size between 900 and 1400 pixels), suitable for Internet protocols. A conventional personal computer system equipped with wireless stereoscopic LCD shutter glasses and a CRT-monitor with high refresh rate (120 Hz) can be used to obtain flicker-free stereo visualization of true-color images with high resolution. Modern thin-film transistor-LCD displays in combination with inexpensive red-cyan goggles achieve stereoscopic visualization with the same resolution but reduced color quality and contrast. The primary aim of our study, to transmit stereoscopic images via the Internet, was met. Additionally, we found that with both stereoscopic visualization techniques, cup depth, neuroretinal rim shape, and slope of the inner wall of the optic nerve head can be qualitatively better perceived and interpreted than with monoscopic images. This study demonstrates high-quality and low-cost Internet transmission of stereoscopic images of the optic nerve head from glaucoma patients. The technique allows exchange of stereoscopic images and can be applied to tele-diagnostic and glaucoma research.
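
    The red-cyan viewing technique mentioned above rests on a simple pixel-level composition: the red channel comes from the left-eye image and the green/blue (cyan) channels from the right-eye image. The sketch below shows that composition on lists of (r, g, b) tuples; it is an illustration of the anaglyph principle, not the jps file format or the study's software.

```python
def anaglyph(left, right):
    """Combine a stereo pair into one red-cyan image, pixel by pixel."""
    # Red from the left eye; green and blue (cyan) from the right eye.
    return [(l[0], r[1], r[2]) for l, r in zip(left, right)]

left_eye = [(200, 10, 10), (180, 20, 20)]
right_eye = [(50, 120, 130), (60, 110, 140)]
assert anaglyph(left_eye, right_eye) == [(200, 120, 130), (180, 110, 140)]
```

    This channel mixing is why the abstract notes reduced color quality relative to shutter glasses: each eye receives only part of the original color information.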

  8. Training system for digital mammographic diagnoses of breast cancer

    NASA Astrophysics Data System (ADS)

    Thomaz, R. L.; Nirschl Crozara, M. G.; Patrocinio, A. C.

    2013-03-01

As technology evolves, analog mammography systems are being replaced by digital systems. The digital system uses video monitors to display mammographic images instead of the screen-film and negatoscope previously used for analog images. This change in the way mammographic images are visualized may require a different approach to training health care professionals to diagnose breast cancer with digital mammography. Thus, this paper presents a computational approach to train health care professionals, providing a smooth transition between analog and digital technology while also training them to use the advantages of digital image processing tools to diagnose breast cancer. This computational approach consists of software in which it is possible to open, process, and diagnose a full mammogram case from a database containing the digital images of each of the mammographic views. The software communicates with a gold-standard digital mammogram case database. This database contains the digital images in Tagged Image File Format (TIFF) and the respective diagnoses according to BI-RADS™; these files are read by the software and shown to the user as needed. There are also digital image processing tools that can be used to provide better visualization of each single image. The software was built on a minimalist, user-friendly interface concept intended to help in the smooth transition. It also has an interface for inputting diagnoses from the professional being trained, providing result feedback. The system has been completed but has not yet been applied to professional training.

  9. Classifications for Coastal Wetlands Planning, Protection and Restoration Act (CWPPRA) site-specific projects: 2010

    USGS Publications Warehouse

    Jones, William R.; Garber, Adrienne

    2013-01-01

The Coastal Wetlands Planning, Protection and Restoration Act (CWPPRA) funds over 100 wetland restoration projects across Louisiana. Integral to the success of CWPPRA is its long-term monitoring program, which enables State and Federal agencies to determine the effectiveness of each restoration effort. One component of this monitoring program is the classification of high-resolution, color-infrared aerial photography at the U.S. Geological Survey’s National Wetlands Research Center in Lafayette, Louisiana. Color-infrared aerial photography (9- by 9-inch) is obtained before project construction and several times after construction. Each frame is scanned on a photogrammetric scanner that produces a high-resolution image in Tagged Image File Format (TIFF). By using image-processing software, these TIFF files are then orthorectified and mosaicked to produce a seamless image of a project area and its associated reference area (a control site near the project that has common environmental features, such as marsh type, soil types, and water salinities). The project and reference areas are then classified according to pixel value into two distinct classes, land and water. After initial land and water ratios have been established by using photography obtained before and after project construction, subsequent comparisons can be made over time to determine land-water change.
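The land-water classification step can be illustrated with a minimal sketch. The fixed threshold here is a hypothetical stand-in for the center's actual classification procedure, which works on calibrated, orthorectified imagery:

```python
import numpy as np

def land_water_ratio(image, threshold):
    """Classify a single-band image into land (pixel value >= threshold)
    and water (pixel value < threshold) and return the land:water ratio.
    The fixed cutoff is illustrative only; operational classification is
    more involved."""
    land = np.count_nonzero(image >= threshold)
    water = image.size - land
    if water == 0:
        raise ValueError("no water pixels at this threshold")
    return land / water
```

Comparing this ratio between pre- and post-construction imagery of the same project area gives the land-water change the monitoring program tracks.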

  10. Hear it, See it, Explore it: Visualizations and Sonifications of Seismic Signals

    NASA Astrophysics Data System (ADS)

    Fisher, M.; Peng, Z.; Simpson, D. W.; Kilb, D. L.

    2010-12-01

Sonification of seismic data is an innovative way to represent seismic data in the audible range (Simpson, 2005). Seismic waves with different frequency and temporal characteristics, such as those from teleseismic earthquakes, deep “non-volcanic” tremor and local earthquakes, can be easily discriminated when time-compressed to the audio range. Hence, sonification is particularly useful for presenting complicated seismic signals with multiple sources, such as aftershocks within the coda of large earthquakes, and remote triggering of earthquakes and tremor by large teleseismic earthquakes. Previous studies mostly focused on converting the seismic data into audio files by simple time compression or frequency modulation (Simpson et al., 2009). Here we generate animations of the seismic data together with the sounds. We first read seismic data in the SAC format into Matlab, and generate a sequence of image files and an associated WAV sound file. Next, we use a third-party video editor, such as QuickTime Pro, to combine the image sequences and the sound file into an animation. We have applied this simple procedure to generate animations of remotely triggered earthquakes, tremor and low-frequency earthquakes in California, and mainshock-aftershock sequences in Japan and California. These animations clearly demonstrate the interactions of earthquake sequences and the richness of the seismic data. The tool developed in this study can be easily adapted for use in other research applications and to create sonifications/animations of seismic data for education and outreach purposes.
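The time-compression idea behind sonification can be sketched with Python's standard `wave` module instead of Matlab: writing the samples out at a playback rate many times the native sampling rate shifts the signal into the audible band. A minimal sketch, not the authors' code:

```python
import wave

import numpy as np

def sonify(trace, native_rate_hz, speedup, path):
    """Write a seismic trace (1-D float array sampled at native_rate_hz)
    as a 16-bit mono WAV file. Time compression is achieved simply by
    declaring the playback rate to be `speedup` times the native
    sampling rate, so a 100 Hz trace played 200x faster becomes a
    20 kHz audio signal."""
    # Normalize to the int16 range before writing.
    peak = float(np.max(np.abs(trace))) or 1.0
    samples = (np.asarray(trace) / peak * 32767).astype(np.int16)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)  # 16-bit samples
        w.setframerate(int(native_rate_hz * speedup))
        w.writeframes(samples.tobytes())
```

Pairing the resulting WAV with a rendered image sequence in a video editor then yields the kind of animation described above.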

  11. Electronic Photography at the NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Holm, Jack; Judge, Nancianne

    1995-01-01

    An electronic photography facility has been established in the Imaging & Photographic Technology Section, Visual Imaging Branch, at the NASA Langley Research Center (LaRC). The purpose of this facility is to provide the LaRC community with access to digital imaging technology. In particular, capabilities have been established for image scanning, direct image capture, optimized image processing for storage, image enhancement, and optimized device dependent image processing for output. Unique approaches include: evaluation and extraction of the entire film information content through scanning; standardization of image file tone reproduction characteristics for optimal bit utilization and viewing; education of digital imaging personnel on the effects of sampling and quantization to minimize image processing related information loss; investigation of the use of small kernel optimal filters for image restoration; characterization of a large array of output devices and development of image processing protocols for standardized output. Currently, the laboratory has a large collection of digital image files which contain essentially all the information present on the original films. These files are stored at 8-bits per color, but the initial image processing was done at higher bit depths and/or resolutions so that the full 8-bits are used in the stored files. The tone reproduction of these files has also been optimized so the available levels are distributed according to visual perceptibility. Look up tables are available which modify these files for standardized output on various devices, although color reproduction has been allowed to float to some extent to allow for full utilization of output device gamut.

  12. Adding and Deleting Images

    EPA Pesticide Factsheets

    Images are added via the Drupal WebCMS Editor. Once an image is uploaded onto a page, it is available via the Library and your files. You can edit the metadata, delete the image permanently, and/or replace images on the Files tab.

  13. A Summary of Proposed Changes to the Current ICARTT Format Standards and their Implications to Future Airborne Studies

    NASA Astrophysics Data System (ADS)

    Northup, E. A.; Kusterer, J.; Quam, B.; Chen, G.; Early, A. B.; Beach, A. L., III

    2015-12-01

The current ICARTT file format standards were developed to fulfill the data management needs of the International Consortium for Atmospheric Research on Transport and Transformation (ICARTT) campaign in 2004. The goal of the ICARTT file format was to establish a common, simple-to-use data file format to promote data exchange and collaboration among science teams with similar science objectives. ICARTT has been the NASA standard since 2010, and is widely used by NOAA, NSF, and international partners (DLR, FAAM). Despite its level of acceptance, there are a number of issues with the current ICARTT format, especially concerning machine readability. To enhance usability, the ICARTT Refresh Earth Science Data Systems Working Group (ESDSWG) was established to provide a platform for atmospheric science data producers, users (e.g. modelers) and data managers to collaborate on developing criteria for this file format. Ultimately, this is a cross-agency effort to improve and aggregate the metadata records being produced. After conducting a survey to identify deficiencies in the current format, we determined which deficiencies the various communities consider most important. Numerous recommendations were made to improve upon the file format while maintaining backward compatibility. The recommendations made to date and their advantages and limitations will be discussed.

  14. EPA Remote Sensing Information Gateway

    NASA Astrophysics Data System (ADS)

    Paulsen, H. K.; Szykman, J. J.; Plessel, T.; Freeman, M.; Dimmick, F.

    2009-12-01

The Remote Sensing Information Gateway was developed by the U.S. Environmental Protection Agency (EPA) to assist researchers in easily obtaining and combining a variety of environmental datasets related to air quality research. Current datasets available include, but are not limited to, surface PM2.5 and O3 data, satellite-derived aerosol optical depth, and 3-dimensional output from the U.S. EPA's Models-3/Community Multi-scale Air Quality (CMAQ) modeling system. The presentation will include a demonstration that illustrates several scenarios of how researchers use the tool to help them visualize and obtain data for their work, with a particular focus on episode analysis related to biomass burning impacts on air quality. The presentation will provide an overview of how RSIG works and how the code has been (and can be) adapted for other projects. One example is the Virtual Estuary, which focuses on automating the retrieval and pre-processing of a variety of data needed for estuarine research. RSIG’s source codes are freely available to researchers with permission from the EPA principal investigator, Dr. Jim Szykman. RSIG is available to the community and can be accessed online at http://www.epa.gov/rsig. Once the Java policy file is configured on your computer, you can run the RSIG applet and connect to the RSIG server to visualize and retrieve available data sets. The applet allows the user to specify the temporal/spatial areas of interest and the types of data to retrieve. The applet then communicates with RSIG subsetter codes located on the data owners’ remote servers; the subsetter codes assemble and transfer via ordinary Internet protocols only the specified data to the researcher’s computer. This is much faster than the usual method of transferring large files via FTP and greatly reduces network traffic.
The RSIG applet then visualizes the transferred data on a latitude-longitude map, automatically locating the data in the correct geographic position. Images, animations, and aggregated data can be saved or exported in a variety of data formats: Binary External Data Representation (XDR) format (simple, efficient), NetCDF-COARDS format, NetCDF-IOAPI format (regridding the data to a CMAQ grid), HDF (unsubsetted satellite files), ASCII tab-delimited spreadsheet, MCMC (used for input into the HB model), PNG images, MPG movies, KMZ movies (for display in Google Earth and similar applications), GeoTIFF RGB format and 32-bit float format. Contacts for obtaining the RSIG source code are available at the RSIG website.

  15. Marker-free registration for the accurate integration of CT images and the subject's anatomy during navigation surgery of the maxillary sinus

    PubMed Central

    Kang, S-H; Kim, M-K; Kim, J-H; Park, H-K; Park, W

    2012-01-01

Objective This study compared three marker-free registration methods that are applicable to a navigation system that can be used for maxillary sinus surgery, and evaluated the associated errors, with the aim of determining which registration method is the most applicable for operations that require accurate navigation. Methods The CT digital imaging and communications in medicine (DICOM) data of ten maxillary models were converted into stereolithography (STL) file format. All ten maxillofacial models were scanned three-dimensionally using a light-based three-dimensional scanner. The registration methods applied to the maxillofacial models utilized the tooth cusps, bony landmarks, and the maxillary sinus anterior wall area. The errors during registration were compared between the groups. Results There were differences between the three registration methods in the zygoma, sinus posterior wall, molar alveolar, premolar alveolar, lateral nasal aperture and the infraorbital areas. The error was smallest using the overlay method for the anterior wall of the maxillary sinus, and the difference was statistically significant. Conclusion The navigation error can be minimized by conducting registration using the anterior wall of the maxillary sinus during image-guided surgery of the maxillary sinus. PMID:22499127

  16. BOREAS RSS-10 TOMS Circumpolar One-Degree PAR Images

    NASA Technical Reports Server (NTRS)

    Dye, Dennis G.; Holben, Brent; Nickeson, Jaime (Editor); Hall, Forrest G. (Editor); Smith, David E. (Technical Monitor)

    2000-01-01

    The Boreal Ecosystem-Atmosphere Study (BOREAS) Remote Sensing Science (RSS)-10 team investigated the magnitude of daily, seasonal, and yearly variations of Photosynthetically Active Radiation (PAR) from ground and satellite observations. This data set contains satellite estimates of surface-incident PAR (400-700 nm, MJ/sq m) at one-degree spatial resolution. The spatial coverage is circumpolar from latitudes of 41 to 66 degrees north. The temporal coverage is from May through September for years 1979 through 1989. Eleven-year statistics are also provided: (1) mean, (2) standard deviation, and (3) coefficient of variation for 1979-89. The PAR estimates were derived from the global gridded ultraviolet reflectivity data product (average of 360, 380 nm) from the Nimbus-7 Total Ozone Mapping Spectrometer (TOMS). Image mask data are provided for identifying the boreal forest zone, and ocean/land and snow/ice-covered areas. The data are available as binary image format data files. The PAR data are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). The data files are available on a CD-ROM (see document number 20010000884).

  17. NASA Standard for Airborne Data: ICARTT Format ESDS-RFC-019

    NASA Astrophysics Data System (ADS)

    Thornhill, A.; Brown, C.; Aknan, A.; Crawford, J. H.; Chen, G.; Williams, E. J.

    2011-12-01

Airborne field studies generate a plethora of data products in the effort to study atmospheric composition and processes. Data file formats for airborne field campaigns are designed to present data in an understandable and organized way to support collaboration and to document relevant and important metadata. The ICARTT file format was created to facilitate data management during the International Consortium for Atmospheric Research on Transport and Transformation (ICARTT) campaign in 2004, which involved government agency and university participants from five countries. Since this mission, the ICARTT format has been used in subsequent field campaigns such as the Polar Study Using Aircraft, Remote Sensing, Surface Measurements and Models of Climate, Chemistry, Aerosols, and Transport (POLARCAT) and the first phase of Deriving Information on Surface Conditions from COlumn and VERtically Resolved Observations Relevant to Air Quality (DISCOVER-AQ). The ICARTT file format was endorsed as a standard format for airborne data by the Standards Process Group (SPG), one of the Earth Science Data Systems Working Groups (ESDSWG), in 2010. The detailed description of the ICARTT format can be found at http://www-air.larc.nasa.gov/missions/etc/ESDS-RFC-019-v1.00.pdf. The ICARTT data format is an ASCII, comma-delimited format based on the NASA Ames and GTE file formats. The file header is detailed enough to fully describe the data for users outside of the instrument group and includes a description of the metadata. The ICARTT scanning tools, format structure, implementations, and examples will be presented.
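A minimal reader for the comma-delimited layout can be sketched as follows. This handles only the simplest case (file format index 1001, where the first header line gives the header line count and format index) and skips the many required header fields that ESDS-RFC-019 specifies, so treat it as an illustration of the structure rather than a conforming parser:

```python
def read_icartt_header(lines):
    """Sketch of reading an ICARTT (FFI 1001) file given its lines.
    The first line is 'number_of_header_lines, file_format_index';
    the header line count includes that first line, and comma-delimited
    numeric data records follow the header."""
    n_header, ffi = (int(v) for v in lines[0].split(","))
    if ffi != 1001:
        raise ValueError("this sketch only handles FFI 1001")
    header = lines[:n_header]
    data = [[float(v) for v in row.split(",")] for row in lines[n_header:]]
    return header, data
```

A real ICARTT reader would additionally interpret the PI, mission, date, variable-definition, and special-comment lines that make the header self-describing.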

  18. Viewing Files | Smokefree 60+

    Cancer.gov

In addition to standard HTML webpages, our website contains files in other formats. You may need additional software or browser plug-ins to view some of these files. The following list shows each format along with links to the corresponding freely available plug-ins or viewers. Documents: Adobe Acrobat Reader (.pdf)

  19. Visual tool for estimating the fractal dimension of images

    NASA Astrophysics Data System (ADS)

    Grossu, I. V.; Besliu, C.; Rusu, M. V.; Jipa, Al.; Bordeianu, C. C.; Felea, D.

    2009-10-01

This work presents a new Visual Basic 6.0 application for estimating the fractal dimension of images, based on an optimized version of the box-counting algorithm. In an attempt to separate the real information from "noise", we also considered the family of all band-pass filters with the same bandwidth (specified as a parameter). The fractal dimension can thus be represented as a function of the pixel color code. The program was used for the study of cracks in paintings, as an additional tool that can help a critic decide whether an artistic work is original. Program summary: Program title: Fractal Analysis v01 Catalogue identifier: AEEG_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 29 690 No. of bytes in distributed program, including test data, etc.: 4 967 319 Distribution format: tar.gz Programming language: MS Visual Basic 6.0 Computer: PC Operating system: MS Windows 98 or later RAM: 30M Classification: 14 Nature of problem: Estimating the fractal dimension of images. Solution method: Optimized implementation of the box-counting algorithm. Use of a band-pass filter for separating the real information from "noise". User-friendly graphical interface. Restrictions: Although various file types can be used, the application was mainly conceived for the 8-bit grayscale Windows bitmap file format. Running time: In a first approximation, the algorithm is linear.
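The box-counting method the program optimizes can be sketched in a few lines (a plain, unoptimized version, not the distributed VB6 code):

```python
import numpy as np

def box_counting_dimension(image, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting (fractal) dimension of a binary image:
    for each box size s, count the s x s boxes containing at least one
    nonzero pixel, then fit the slope of log N(s) vs. log(1/s)."""
    counts = []
    for s in sizes:
        # Trim to a multiple of s, then group pixels into s x s blocks.
        h, w = (image.shape[0] // s) * s, (image.shape[1] // s) * s
        blocks = image[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```

For a filled square the estimate is 2, the Euclidean dimension; fractal boundaries such as crack networks fall between 1 and 2.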

  20. VizieR Online Data Catalog: BzJK observations around radio galaxies (Galametz+, 2009)

    NASA Astrophysics Data System (ADS)

    Galametz, A.; De Breuck, C.; Vernet, J.; Stern, D.; Rettura, A.; Marmo, C.; Omont, A.; Allen, M.; Seymour, N.

    2010-02-01

    We imaged the two targets using the Bessel B-band filter of the Large Format Camera (LFC) on the Palomar 5m Hale Telescope. We imaged the radio galaxy fields using the z-band filter of Palomar/LFC. In February 2005, we observed 7C 1751+6809 for 60-min under photometric conditions. In August 2005, we observed 7C 1756+6520 for 135-min but in non-photometric conditions. The tables provide the B, z, J and Ks magnitudes and coordinates of the pBzK* galaxies (red passively evolving candidates selected by BzK=(z-K)-(B-z)<-0.2 and (z-K)>2.2) for both fields. The B and z bands were obtained using the Large Format Camera (LFC) on the Palomar 5m Hale Telescope, and the J and Ks bands using Wide-field Infrared Camera (WIRCAM) of the Canada-France-Hawaii Telescope (CFHT). (2 data files).

  1. PMG: online generation of high-quality molecular pictures and storyboarded animations

    PubMed Central

    Autin, Ludovic; Tufféry, Pierre

    2007-01-01

    The Protein Movie Generator (PMG) is an online service able to generate high-quality pictures and animations for which one can then define simple storyboards. The PMG can therefore efficiently illustrate concepts such as molecular motion or formation/dissociation of complexes. Emphasis is put on the simplicity of animation generation. Rendering is achieved using Dino coupled to POV-Ray. In order to produce highly informative images, the PMG includes capabilities of using different molecular representations at the same time to highlight particular molecular features. Moreover, sophisticated rendering concepts including scene definition, as well as modeling light and materials are available. The PMG accepts Protein Data Bank (PDB) files as input, which may include series of models or molecular dynamics trajectories and produces images or movies under various formats. PMG can be accessed at http://bioserv.rpbs.jussieu.fr/PMG.html. PMID:17478496

  2. Prototype of Partial Cutting Tool of Geological Map Images Distributed by Geological Web Map Service

    NASA Astrophysics Data System (ADS)

    Nonogaki, S.; Nemoto, T.

    2014-12-01

Geological maps and topographical maps play an important role in disaster assessment, resource management, and environmental preservation. This map information has recently been distributed in accordance with Web service standards such as Web Map Service (WMS) and Web Map Tile Service (WMTS). In this study, a partial cutting tool for geological map images distributed by geological WMTS was implemented with Free and Open Source Software. The tool mainly consists of two functions: a display function and a cutting function. The former was implemented using OpenLayers. The latter was implemented using the Geospatial Data Abstraction Library (GDAL). All other small functions were implemented in PHP and Python. As a result, this tool allows not only displaying a WMTS layer in a web browser but also generating a geological map image of the intended area and zoom level. At this moment, available WMTS layers are limited to those distributed for the Seamless Digital Geological Map of Japan. The geological map image can be saved in GeoTIFF format and WebGL format. GeoTIFF is one of the georeferenced raster formats available in many kinds of Geographical Information Systems. WebGL is useful for confirming the relationship between geology and geography in 3D. In conclusion, the partial cutting tool developed in this study should help create better conditions for promoting the utilization of geological information. Future work is to increase the number of available WMTS layers and the types of output file formats.
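Cutting a map image out of a tiled service starts with mapping geographic coordinates to tile indices. The sketch below uses the common Web Mercator ("slippy map") tiling convention; a real WMTS layer declares its own tile matrix set, so this is an illustrative assumption rather than the tool's actual logic:

```python
import math

def tile_indices(lat_deg, lon_deg, zoom):
    """Map a WGS84 coordinate to (x, y) tile indices at a given zoom
    level under the Web Mercator XYZ tiling scheme, where the world is
    a 2^zoom by 2^zoom grid of tiles."""
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat)) / math.pi) / 2.0 * n)
    return x, y
```

Given the tile range covering the requested bounding box, a cutting tool can fetch those tiles, mosaic them, and crop the result to the exact extent before writing a georeferenced output such as GeoTIFF.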

  3. Archive of digital chirp subbottom profile data collected during USGS cruise 12BIM03 offshore of the Chandeleur Islands, Louisiana, July 2012

    USGS Publications Warehouse

    Forde, Arnell S.; Miselis, Jennifer L.; Wiese, Dana S.

    2014-01-01

    From July 23 - 31, 2012, the U.S. Geological Survey conducted geophysical surveys to investigate the geologic controls on barrier island framework and long-term sediment transport along the oil spill mitigation sand berm constructed at the north end and just offshore of the Chandeleur Islands, La. (figure 1). This effort is part of a broader USGS study, which seeks to better understand barrier island evolution over medium time scales (months to years). This report serves as an archive of unprocessed digital chirp subbottom data, trackline maps, navigation files, Geographic Information System (GIS) files, Field Activity Collection System (FACS) logs, and formal Federal Geographic Data Committee (FGDC) metadata. Gained (showing a relative increase in signal amplitude) digital images of the seismic profiles are also provided. Refer to the Abbreviations page for expansions of acronyms and abbreviations used in this report. The USGS St. Petersburg Coastal and Marine Science Center (SPCMSC) assigns a unique identifier to each cruise or field activity. For example, 12BIM03 tells us the data were collected in 2012 during the third field activity for that project in that calendar year and BIM is a generic code, which represents efforts related to Barrier Island Mapping. Refer to http://walrus.wr.usgs.gov/infobank/programs/html/definition/activity.html for a detailed description of the method used to assign the field activity ID. All chirp systems use a signal of continuously varying frequency; the EdgeTech SB-424 system used during this survey produces high-resolution, shallow-penetration (typically less than 50 milliseconds (ms)) profile images of sub-seafloor stratigraphy. The towfish contains a transducer that transmits and receives acoustic energy and is typically towed 1 - 2 m below the sea surface. 
As transmitted acoustic energy intersects density boundaries, such as the seafloor or sub-surface sediment layers, energy is reflected back toward the transducer, received, and recorded by a PC-based seismic acquisition system. This process is repeated at regular time intervals (for example, 0.125 seconds (s)) and returned energy is recorded for a specific duration (for example, 50 ms). In this way, a two-dimensional (2-D) vertical image of the shallow geologic structure beneath the ship track is produced. Figure 2 displays the acquisition geometry. Refer to table 1 for a summary of acquisition parameters and table 2 for trackline statistics. The archived trace data are in standard Society of Exploration Geophysicists (SEG) SEG Y rev. 0 format (Barry and others, 1975); the first 3,200 bytes of the card image header are in ASCII format instead of EBCDIC format. The SEG Y files may be downloaded and processed with commercial or public domain software such as Seismic Unix (SU) (Cohen and Stockwell, 2010). See the How To Download SEG Y Data page for download instructions. The web version of this archive does not contain the SEG Y trace files. These files are very large and would require extremely long download times. To obtain the complete DVD archive, contact USGS Information Services at 1-888-ASK-USGS or infoservices@usgs.gov. The printable profiles provided here are GIF images that were processed and gained using SU software and can be viewed from the Profiles page or from links located on the trackline maps; refer to the Software page for links to example SU processing scripts. The SEG Y files are available on the DVD version of this report or on the Web, downloadable via the USGS Coastal and Marine Geoscience Data System (http://cmgds.marine.usgs.gov). The data are also available for viewing using GeoMapApp (http://www.geomapapp.org) and Virtual Ocean (http://www.virtualocean.org) multi-platform open source software. 
Detailed information about the navigation system used can be found in table 1 and the Field Activity Collection System (FACS) logs. To view the trackline maps and navigation files, and for more information about these items, see the Navigation page.
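The note above that the first 3,200 bytes of the card image header are ASCII rather than EBCDIC can be checked programmatically. A simple heuristic sketch, not a full SEG Y validator:

```python
def textual_header_is_ascii(header_bytes):
    """Heuristic check of a SEG Y textual ('card image') header: the
    3,200-byte block is EBCDIC in standard rev. 0 files but ASCII in
    some archives. EBCDIC letters fall at or above 0x80 (e.g. the 'C'
    beginning each card encodes as 0xC3), while ASCII text stays
    strictly below 0x80."""
    if len(header_bytes) != 3200:
        raise ValueError("textual header must be 3,200 bytes")
    return all(b < 0x80 for b in header_bytes)
```

Running this on the first 3,200 bytes of a trace file tells downstream software whether the header can be read directly or must first be transcoded from EBCDIC.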

  4. VizieR Online Data Catalog: Dwarf galaxies surface brightness profiles. II. (Herrmann+, 2016)

    NASA Astrophysics Data System (ADS)

    Herrmann, K. A.; Hunter, D. A.; Elmegreen, B. G.

    2016-07-01

Our galaxy sample (see Table1) is derived from the survey of nearby (<30Mpc) late-type galaxies conducted by Hunter & Elmegreen 2006 (cat. J/ApJS/162/49). The full survey includes 94 dwarf Irregulars (dIms), 26 Blue Compact Dwarfs (BCDs), and 20 Magellanic-type spirals (Sms). The 141 dwarf sample presented in the first paper of the present series (Paper I; Herrmann et al. 2013, Cat. J/AJ/146/104) contains one fewer Sm galaxy and two additional dIm systems than the original survey. A multi-wavelength data set has been assembled for these galaxies. The data include Hα images (129 galaxies with detections) to trace star formation over the past 10Myr (Hunter & Elmegreen 2004, Cat. J/AJ/128/2170) and satellite UV images (61 galaxies observed) obtained with the Galaxy Evolution Explorer (GALEX) to trace star formation over the past ~200Myr. The GALEX data include images from two passbands with effective wavelengths of 1516Å (FUV) and 2267Å (NUV) and resolutions of 4'' and 5.6'', respectively. Three of the galaxies in our sample with NUV data do not have FUV data. To trace older stars we have UBV images, which are sensitive to stars formed over the past 1Gyr for on-going star formation, and images in at least one band of JHK for 40 galaxies in the sample, which integrates the star formation over the galaxy's lifetime. Note that nine dwarfs are missing UB data and three more are missing U-band data. In addition we made use of 3.6μm images (39 galaxies) obtained with the Infrared Array Camera (IRAC) in the Spitzer archives also to probe old stars. (3 data files).

  5. Effect of various digital processing algorithms on the measurement accuracy of endodontic file length.

    PubMed

    Kal, Betül Ilhan; Baksi, B Güniz; Dündar, Nesrin; Sen, Bilge Hakan

    2007-02-01

    The aim of this study was to compare the accuracy of endodontic file lengths after application of various image enhancement modalities. Endodontic files of three different ISO sizes were inserted in 20 single-rooted extracted permanent mandibular premolar teeth and standardized images were obtained. Original digital images were then enhanced using five processing algorithms. Six evaluators measured the length of each file on each image. The measurements from each processing algorithm and each file size were compared using repeated measures ANOVA and Bonferroni tests (P = 0.05). Paired t test was performed to compare the measurements with the true lengths of the files (P = 0.05). All of the processing algorithms provided significantly shorter measurements than the true length of each file size (P < 0.05). The threshold enhancement modality produced significantly higher mean error values (P < 0.05), while there was no significant difference among the other enhancement modalities (P > 0.05). Decrease in mean error value was observed with increasing file size (P < 0.05). Invert, contrast/brightness and edge enhancement algorithms may be recommended for accurate file length measurements when utilizing storage phosphor plates.

  6. Platform for Postprocessing Waveform-Based NDE

    NASA Technical Reports Server (NTRS)

    Roth, Don

    2008-01-01

    Taking advantage of the similarities that exist among all waveform-based non-destructive evaluation (NDE) methods, a common software platform has been developed containing multiple- signal and image-processing techniques for waveforms and images. The NASA NDE Signal and Image Processing software has been developed using the latest versions of LabVIEW, and its associated Advanced Signal Processing and Vision Toolkits. The software is useable on a PC with Windows XP and Windows Vista. The software has been designed with a commercial grade interface in which two main windows, Waveform Window and Image Window, are displayed if the user chooses a waveform file to display. Within these two main windows, most actions are chosen through logically conceived run-time menus. The Waveform Window has plots for both the raw time-domain waves and their frequency- domain transformations (fast Fourier transform and power spectral density). The Image Window shows the C-scan image formed from information of the time-domain waveform (such as peak amplitude) or its frequency-domain transformation at each scan location. The user also has the ability to open an image, or series of images, or a simple set of X-Y paired data set in text format. Each of the Waveform and Image Windows contains menus from which to perform many user actions. An option exists to use raw waves obtained directly from scan, or waves after deconvolution if system wave response is provided. Two types of deconvolution, time-based subtraction or inverse-filter, can be performed to arrive at a deconvolved wave set. Additionally, the menu on the Waveform Window allows preprocessing of waveforms prior to image formation, scaling and display of waveforms, formation of different types of images (including non-standard types such as velocity), gating of portions of waves prior to image formation, and several other miscellaneous and specialized operations. 
The menu available on the Image Window allows many further image processing and analysis operations, some of which are found in commercially-available image-processing software programs (such as Adobe Photoshop), and some that are not (removing outliers, Bscan information, region-of-interest analysis, line profiles, and precision feature measurements).
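The image-formation and frequency-domain steps described above (a C-scan built from each waveform's peak amplitude, plus FFT/PSD plots) can be sketched in a few lines of NumPy. This is an illustrative sketch only; the helper names are hypothetical and not part of the NASA software:

```python
import numpy as np

def peak_amplitude_cscan(waveforms):
    """Form a C-scan image from the peak amplitude of the time-domain
    waveform at every scan location; `waveforms` has shape (ny, nx, nt)."""
    return np.abs(waveforms).max(axis=-1)

def power_spectral_density(wave, dt):
    """FFT of a single waveform sampled at interval dt, returned as
    frequencies and a one-sided power spectral density estimate."""
    spectrum = np.fft.rfft(wave)
    freqs = np.fft.rfftfreq(wave.size, d=dt)
    psd = (np.abs(spectrum) ** 2) * dt / wave.size
    return freqs, psd
```

For example, a grid of waveforms with a single echo at one scan point yields a C-scan that is bright only at that point, and a pure 50 Hz tone produces a PSD peaked at the 50 Hz bin.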

  7. Demonstration of Inexact Computing Implemented in the JPEG Compression Algorithm using Probabilistic Boolean Logic applied to CMOS Components

    DTIC Science & Technology

    2015-12-24

...sources can create errors in digital circuits. These effects can be simulated using Simulation Program with Integrated Circuit Emphasis (SPICE) or...compute summary statistics. 4.1 Circuit Simulations Noisy analog circuits can be simulated in SPICE or Cadence Spectre software via noisy voltage

  8. Surface mineral maps of Afghanistan derived from HyMap imaging spectrometer data, version 2

    USGS Publications Warehouse

    Kokaly, Raymond F.; King, Trude V.V.; Hoefen, Todd M.

    2013-01-01

    This report presents a new version of surface mineral maps derived from HyMap imaging spectrometer data collected over Afghanistan in the fall of 2007. This report also describes the processing steps applied to the imaging spectrometer data. The 218 individual flight lines composing the Afghanistan dataset, covering more than 438,000 square kilometers, were georeferenced to a mosaic of orthorectified Landsat images. The HyMap data were converted from radiance to reflectance using a radiative transfer program in combination with ground-calibration sites and a network of cross-cutting calibration flight lines. The U.S. Geological Survey Material Identification and Characterization Algorithm (MICA) was used to generate two thematic maps of surface minerals: a map of iron-bearing minerals and other materials, which have their primary absorption features at the shorter wavelengths of the reflected solar wavelength range, and a map of carbonates, phyllosilicates, sulfates, altered minerals, and other materials, which have their primary absorption features at the longer wavelengths of the reflected solar wavelength range. In contrast to the original version, version 2 of these maps is provided at full resolution of 23-meter pixel size. The thematic maps, MICA summary images, and the material fit and depth images are distributed in digital files linked to this report, in a format readable by remote sensing software and Geographic Information Systems (GIS). The digital files can be downloaded from http://pubs.usgs.gov/ds/787/downloads/.
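The absorption-feature mapping described above rests on continuum removal and band-depth measurement of reflectance spectra. A minimal sketch of that generic technique follows (not the MICA code itself; NumPy assumed, function name hypothetical):

```python
import numpy as np

def band_depth(wavelengths, reflectance, left, right):
    """Continuum-removed absorption band depth between two shoulder
    wavelengths. The continuum is a straight line joining the
    reflectance at the two shoulders; depth = 1 - R/continuum at the
    band minimum. A generic illustration, not the MICA algorithm."""
    i_l = int(np.argmin(np.abs(wavelengths - left)))
    i_r = int(np.argmin(np.abs(wavelengths - right)))
    seg_w = wavelengths[i_l:i_r + 1]
    seg_r = reflectance[i_l:i_r + 1]
    continuum = np.interp(seg_w,
                          [wavelengths[i_l], wavelengths[i_r]],
                          [reflectance[i_l], reflectance[i_r]])
    return 1.0 - (seg_r / continuum).min()
```

A flat spectrum with a single dip to half the continuum level, for instance, yields a band depth of 0.5; per-pixel maps of such depths are what the "material fit and depth images" summarize.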

  9. CSAM Metrology Software Tool

    NASA Technical Reports Server (NTRS)

    Vu, Duc; Sandor, Michael; Agarwal, Shri

    2005-01-01

CSAM Metrology Software Tool (CMeST) is a computer program for analysis of false-color CSAM images of plastic-encapsulated microcircuits. (CSAM signifies C-mode scanning acoustic microscopy.) The colors in the images indicate areas of delamination within the plastic packages. Heretofore, the images have been interpreted by human examiners, so interpretations have not been entirely consistent and objective. CMeST processes the color information in image-data files to detect areas of delamination without incurring the inconsistencies of subjective judgement. CMeST can be used to create a database of baseline images of packages acquired at given times for comparison with images of the same packages acquired at later times. Any area within an image can be selected for analysis, which can include examination of different delamination types by location. CMeST can also be used to perform statistical analyses of image data. Results of analyses are available in a spreadsheet format for further processing. The results can be exported to any database-processing software.
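Detecting delamination in a false-color image amounts to counting pixels whose color falls inside a target range. A minimal sketch of that idea (illustrative only; CMeST's actual color thresholds and API are not published here):

```python
import numpy as np

def delamination_fraction(image, lo, hi):
    """Fraction of pixels whose RGB values all fall inside [lo, hi].
    `image` has shape (h, w, 3); `lo` and `hi` are length-3 sequences.
    Hypothetical helper, not part of CMeST."""
    lo = np.asarray(lo)
    hi = np.asarray(hi)
    mask = np.all((image >= lo) & (image <= hi), axis=-1)
    return float(mask.mean())
```

Running the same threshold over baseline and later images of a package gives the kind of objective, repeatable comparison the abstract describes.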

  10. Dynamic Torsional and Cyclic Fracture Behavior of ProFile Rotary Instruments at Continuous or Reciprocating Rotation as Visualized with High-speed Digital Video Imaging.

    PubMed

    Tokita, Daisuke; Ebihara, Arata; Miyara, Kana; Okiji, Takashi

    2017-08-01

    This study examined the dynamic fracture behavior of nickel-titanium rotary instruments in torsional or cyclic loading at continuous or reciprocating rotation by means of high-speed digital video imaging. The ProFile instruments (size 30, 0.06 taper; Dentsply Maillefer, Ballaigues, Switzerland) were categorized into 4 groups (n = 7 in each group) as follows: torsional/continuous (TC), torsional/reciprocating (TR), cyclic/continuous (CC), and cyclic/reciprocating (CR). Torsional loading was performed by rotating the instruments by holding the tip with a vise. For cyclic loading, a custom-made device with a 38° curvature was used. Dynamic fracture behavior was observed with a high-speed camera. The time to fracture was recorded, and the fractured surface was examined with scanning electron microscopy. The TC group initially exhibited necking of the file followed by the development of an initial crack line. The TR group demonstrated opening and closing of a crack according to its rotation in the cutting and noncutting directions, respectively. The CC group separated without any detectable signs of deformation. In the CR group, initial crack formation was recognized in 5 of 7 samples. The reciprocating rotation exhibited a longer time to fracture in both torsional and cyclic fatigue testing (P < .05). The scanning electron microscopic images showed a severely deformed surface in the TR group. The dynamic fracture behavior of NiTi rotary instruments, as visualized with high-speed digital video imaging, varied between the different modes of rotation and different fatigue testing. Reciprocating rotation induced a slower crack propagation and conferred higher fatigue resistance than continuous rotation in both torsional and cyclic loads. Copyright © 2017 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  11. Rapid Prototyping Integrated With Nondestructive Evaluation and Finite Element Analysis

    NASA Technical Reports Server (NTRS)

    Abdul-Aziz, Ali; Baaklini, George Y.

    2001-01-01

    Most reverse engineering approaches involve imaging or digitizing an object then creating a computerized reconstruction that can be integrated, in three dimensions, into a particular design environment. Rapid prototyping (RP) refers to the practical ability to build high-quality physical prototypes directly from computer aided design (CAD) files. Using rapid prototyping, full-scale models or patterns can be built using a variety of materials in a fraction of the time required by more traditional prototyping techniques (refs. 1 and 2). Many software packages have been developed and are being designed to tackle the reverse engineering and rapid prototyping issues just mentioned. For example, image processing and three-dimensional reconstruction visualization software such as Velocity2 (ref. 3) are being used to carry out the construction process of three-dimensional volume models and the subsequent generation of a stereolithography file that is suitable for CAD applications. Producing three-dimensional models of objects from computed tomography (CT) scans is becoming a valuable nondestructive evaluation methodology (ref. 4). Real components can be rendered and subjected to temperature and stress tests using structural engineering software codes. For this to be achieved, accurate high-resolution images have to be obtained via CT scans and then processed, converted into a traditional file format, and translated into finite element models. Prototyping a three-dimensional volume of a composite structure by reading in a series of two-dimensional images generated via CT and by using and integrating commercial software (e.g. Velocity2, MSC/PATRAN (ref. 5), and Hypermesh (ref. 6)) is being applied successfully at the NASA Glenn Research Center. The building process from structural modeling to the analysis level is outlined in reference 7. 
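The volume-building step (reading a series of two-dimensional CT images into a three-dimensional model) can be sketched as follows, assuming NumPy; the function name is hypothetical and the thresholding is a generic stand-in for the commercial tools named above:

```python
import numpy as np

def stack_and_threshold(slices, threshold):
    """Stack 2-D CT slice arrays into a (z, y, x) volume and threshold
    it into a binary solid/void mask, the usual first step before
    surface reconstruction (e.g., marching cubes) and STL export."""
    volume = np.stack(slices, axis=0)
    return volume, volume >= threshold
```

The binary mask is what a stereolithography-file generator or finite element mesher would consume downstream.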
Subsequently, a stress analysis of a composite cooling panel under combined thermomechanical loading conditions was performed to validate this process.

  12. Transported Geothermal Energy Technoeconomic Screening Tool - Calculation Engine

    DOE Data Explorer

    Liu, Xiaobing

    2016-09-21

This calculation engine estimates the technoeconomic feasibility of transported geothermal energy projects. The TGE screening tool (geotool.exe) takes input from an input file (input.txt) and lists its results in an output file (output.txt); both files reside in the same folder as geotool.exe. To use the tool, prepare the input file containing the required case information in the format explained below and place it in the same folder as geotool.exe. Running geotool.exe then generates an output.txt file in the same folder containing all key calculation results. The format and content of the output file are explained below as well.

  13. 15 CFR 995.26 - Conversion of NOAA ENC ® files to other formats.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...) Conversion of NOAA ENC files to other formats—(1) Content. CEVAD may provide NOAA ENC data in forms other... data files without degradation to positional accuracy or informational content. (2) Software certification. Conversion of NOAA ENC data to other formats must be accomplished within the constraints of IHO...

  14. BOREAS TE-18, 60-m, Radiometrically Rectified Landsat TM Imagery

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Knapp, David

    2000-01-01

The BOREAS TE-18 team used a radiometric rectification process to produce standardized DN values for a series of Landsat TM images of the BOREAS SSA and NSA in order to compare images that were collected under different atmospheric conditions. The images for each study area were referenced to an image that had very clear atmospheric qualities. The reference image for the SSA was collected on 02-Sep-1994, while the reference image for the NSA was collected on 21-Jun-1995. The 23 rectified images cover the period of 07-Jul-1985 to 18-Sep-1994 in the SSA and 22-Jun-1984 to 09-Jun-1994 in the NSA. Each of the reference scenes had coincident atmospheric optical thickness measurements made by RSS-11. The radiometric rectification process is described in more detail by Hall et al. (1991). The original Landsat TM data were received from CCRS for use in the BOREAS project. Due to the nature of the radiometric rectification process and copyright issues, the full-resolution (30-m) images may not be publicly distributed. However, this spatially degraded 60-m resolution version of the images may be openly distributed and is available on the BOREAS CD-ROM series. After the radiometric rectification processing, the original data were degraded to a 60-m pixel size from the original 30-m pixel size by averaging the data over a 2- by 2-pixel window. The data are stored in binary image-format files. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).
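The spatial degradation step described above is a plain 2-by-2 block average. A minimal NumPy sketch (function name hypothetical):

```python
import numpy as np

def degrade_2x2(image):
    """Double the pixel size (e.g., 30 m -> 60 m) by averaging
    non-overlapping 2x2 windows; row and column counts must be even."""
    h, w = image.shape
    return image.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```

Each output pixel is the mean of the four input pixels it covers, which preserves the radiometrically rectified DN values on average while halving the resolution.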

  15. Accessible and informative sectioned images, color-coded images, and surface models of the ear.

    PubMed

    Park, Hyo Seok; Chung, Min Suk; Shin, Dong Sun; Jung, Yong Wook; Park, Jin Seo

    2013-08-01

In our previous research, we created state-of-the-art sectioned images, color-coded images, and surface models of the human ear. Our ear data would be more beneficial and informative if they were more easily accessible. Therefore, the purpose of this study was to distribute the browsing software and the PDF file in which ear images can be readily obtained and freely explored. Another goal was to inform other researchers of our methods for establishing the browsing software and the PDF file. To achieve this, sectioned images and color-coded images of the ear were prepared (voxel size 0.1 mm). In the color-coded images, structures related to hearing and equilibrium, as well as structures originating from the first and second pharyngeal arches, were segmented supplementarily. The sectioned and color-coded images of the right ear were added to the browsing software, which displayed the images serially along with structure names. The surface models were reconstructed and combined into the PDF file, where they could be freely manipulated. Using the browsing software and PDF file, sectional and three-dimensional shapes of ear structures could be comprehended in detail. Furthermore, using the PDF file, clinical knowledge could be acquired through virtual otoscopy. Therefore, the presented educational tools will be helpful to medical students and otologists by improving their knowledge of ear anatomy. The browsing software and PDF file can be downloaded without charge and registration at our homepage (http://anatomy.dongguk.ac.kr/ear/). Copyright © 2013 Wiley Periodicals, Inc.

  16. The AstroHDF Effort

    NASA Astrophysics Data System (ADS)

    Masters, J.; Alexov, A.; Folk, M.; Hanisch, R.; Heber, G.; Wise, M.

    2012-09-01

    Here we update the astronomy community on our effort to deal with the demands of ever-increasing astronomical data size and complexity, using the Hierarchical Data Format, version 5 (HDF5) format (Wise et al. 2011). NRAO, LOFAR and VAO have joined forces with The HDF Group to write an NSF grant, requesting funding to assist in the effort. This paper briefly summarizes our motivation for the proposed project, an outline of the project itself, and some of the material discussed at the ADASS Birds of a Feather (BoF) discussion. Topics of discussion included: community experiences with HDF5 and other file formats; toolsets which exist and/or can be adapted for HDF5; a call for development towards visualizing large (> 1 TB) image cubes; and, general lessons learned from working with large and complex data.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ingargiola, A.; Laurence, T. A.; Boutelle, R.

We introduce Photon-HDF5, an open and efficient file format to simplify exchange and long-term accessibility of data from single-molecule fluorescence experiments based on photon-counting detectors such as single-photon avalanche diodes (SPADs), photomultiplier tubes (PMTs), or arrays of such detectors. The format is based on HDF5, a widely used platform- and language-independent hierarchical file format for which user-friendly viewers are available. Photon-HDF5 can store raw photon data (timestamps, channel numbers, etc.) from any acquisition hardware, as well as setup and sample descriptions, information on provenance, authorship, and other metadata, and is flexible enough to include any kind of custom data. The format specifications are hosted on a public website, which is open to contributions by the biophysics community. As an initial resource, the website provides code examples to read Photon-HDF5 files in several programming languages and a reference Python library (phconvert) to create new Photon-HDF5 files and convert several existing file formats into Photon-HDF5. To encourage adoption by the academic and commercial communities, all software is released under the MIT open source license.

  18. NOTE: MMCTP: a radiotherapy research environment for Monte Carlo and patient-specific treatment planning

    NASA Astrophysics Data System (ADS)

    Alexander, A.; DeBlois, F.; Stroian, G.; Al-Yahya, K.; Heath, E.; Seuntjens, J.

    2007-07-01

Radiotherapy research lacks a flexible computational research environment for Monte Carlo (MC) and patient-specific treatment planning. The purpose of this study was to develop a flexible software package on low-cost hardware with the aim of integrating new patient-specific treatment planning with MC dose calculations suitable for large-scale prospective and retrospective treatment planning studies. We designed the software package 'McGill Monte Carlo treatment planning' (MMCTP) for the research development of MC and patient-specific treatment planning. The MMCTP design consists of a graphical user interface (GUI), which runs on a simple workstation connected through standard secure-shell protocol to a cluster for lengthy MC calculations. Treatment planning information (e.g., images, structures, beam geometry properties and dose distributions) is converted into a convenient MMCTP local file storage format designated the McGill RT format. MMCTP features include (a) DICOM_RT, RTOG and CADPlan CART format imports; (b) 2D and 3D visualization views for images, structure contours, and dose distributions; (c) contouring tools; (d) DVH analysis and dose matrix comparison tools; (e) external beam editing; (f) MC transport calculation from beam source to patient geometry for photon and electron beams. The MC input files, which are prepared from the beam geometry properties and patient information (e.g., images and structure contours), are uploaded and run on a cluster using shell commands controlled from the MMCTP GUI. The visualization, dose matrix operation and DVH tools offer extensive options for plan analysis and comparison between MC plans and plans imported from commercial treatment planning systems. The MMCTP GUI provides a flexible research platform for the development of patient-specific MC treatment planning for photon and electron external beam radiation therapy.
The impact of this tool lies in the fact that it allows for systematic, platform-independent, large-scale MC treatment planning for different treatment sites. Patient recalculations were performed to validate the software and ensure proper functionality.
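Among the plan-analysis features listed, DVH (dose-volume histogram) analysis is easy to illustrate: for each dose level, compute the fraction of a structure's volume receiving at least that dose. A generic sketch assuming NumPy (not MMCTP's own code; names are hypothetical):

```python
import numpy as np

def cumulative_dvh(dose, mask, bin_width=0.1):
    """Cumulative dose-volume histogram for voxels inside `mask`:
    for each dose level d, the fraction of the structure volume
    receiving at least d. `dose` and `mask` are same-shape arrays."""
    d = dose[mask]
    edges = np.arange(0.0, d.max() + bin_width, bin_width)
    volume_fraction = np.array([(d >= e).mean() for e in edges])
    return edges, volume_fraction
```

Comparing such curves voxel-for-voxel between an MC recalculation and an imported commercial plan is the kind of analysis the DVH and dose-matrix tools support.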

  19. Archive of digital boomer and CHIRP seismic reflection data collected during USGS cruise 06FSH03 offshore of Fort Lauderdale, Florida, September 2006

    USGS Publications Warehouse

    Harrison, Arnell S.; Dadisman, Shawn V.; Reich, Christopher D.; Wiese, Dana S.; Greenwood, Jason W.; Swarzenski, Peter W.

    2007-01-01

    In September of 2006, the U.S. Geological Survey conducted geophysical surveys offshore of Fort Lauderdale, FL. This report serves as an archive of unprocessed digital boomer and CHIRP seismic reflection data, trackline maps, navigation files, GIS information, Field Activity Collection System (FACS) logs, observer's logbook, and formal FGDC metadata. Filtered and gained digital images of the seismic profiles are also provided. The archived trace data are in standard Society of Exploration Geophysicists (SEG) SEG-Y format (Barry and others, 1975) and may be downloaded and processed with commercial or public domain software such as Seismic Unix (SU). Example SU processing scripts and USGS software for viewing the SEG-Y files (Zihlman, 1992) are also provided.

  20. Multimedial data base and management system for self-education and testing the students' knowledge on pathomorphology.

    PubMed

    Szymaś, J; Gawroński, M

    1993-01-01

This paper summarizes our experience in creating and using a multimedia database of examination questions and its management system, which is used for self-education and for testing students' knowledge of pathomorphology. The system is implemented on IBM PC-compatible microcomputers and runs on a Novell NetWare 3.11 network. The database currently contains more than 2,000 test questions. The packet consists of two functionally separate programs: ASSISTANT, the administrator for the databases, and EXAMINATOR, the executive program. The system makes it possible to use text files and to add images to each question, adjusted for display on standard graphics devices (VGA). The standard format of the notation files makes it possible to process the results in order to estimate the distribution of answers and to find correlations between the results.
