Sample records for image file format

  1. Main image file tape description

    USGS Publications Warehouse

    Warriner, Howard W.

    1980-01-01

    This Main Image File Tape document defines the data content and file structure of the Main Image File Tape (MIFT) produced by the EROS Data Center (EDC). This document also defines an INQUIRY tape, which is just a subset of the MIFT. The format of the INQUIRY tape is identical to the MIFT except for two records; therefore, with the exception of these two records (described elsewhere in this document), every remark made about the MIFT is true for the INQUIRY tape.

  2. Extracting the Data From the LCM vk4 Formatted Output File

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendelberger, James G.

    These are slides about extracting the data from the LCM vk4 formatted output file. The following is covered: vk4 file produced by Keyence VK Software, custom analysis, no off-the-shelf way to read the file, reading the binary data in a vk4 file, various offsets in decimal lines, finding the height image data, directly in MATLAB, binary output beginning of height image data, color image information, color image binary data, color image decimal and binary data, MATLAB code to read vk4 file (choose a file, read the file, compute offsets, read optical image, laser optical image, read and compute laser intensity image, read height image, timing, display height image, display laser intensity image, display RGB laser optical images, display RGB optical images, display beginning data and save images to workspace, gamma correction subroutine), reading intensity from the vk4 file, linear in the low range, linear in the high range, gamma correction for vk4 files, computing the gamma intensity correction, observations.
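
    The slides describe locating image data by following byte offsets read from the file itself. A minimal Python sketch of that offset-based reading pattern follows; every offset and field position here is a hypothetical placeholder for illustration, not the documented vk4 layout.

      import struct

      def read_height_image(path, offset_table_pos=0x28):   # header position: assumed
          """Follow an offset stored in the header to a height-image block."""
          with open(path, "rb") as f:
              f.seek(offset_table_pos)
              (height_offset,) = struct.unpack("<I", f.read(4))   # little-endian uint32
              f.seek(height_offset)
              width, height = struct.unpack("<II", f.read(8))     # dimension fields: assumed
              pixels = struct.unpack(f"<{width * height}I", f.read(4 * width * height))
          return width, height, pixels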

  3. PCF File Format.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thoreson, Gregory G

    PCF files are binary files designed to contain gamma spectra and neutron count rates from radiation sensors. It is the native format for the GAmma Detector Response and Analysis Software (GADRAS) package [1]. It can contain multiple spectra and information about each spectrum such as energy calibration. This document outlines the format of the file that would allow one to write a computer program to parse and write such files.
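
    As a sketch of what parsing such a binary spectrum record involves, the Python fragment below reads one spectrum using an invented field layout (calibration terms, times, then channel counts); the real layout is defined by the GADRAS PCF specification cited in the record.

      import struct

      def read_spectrum(f, n_channels=1024):                  # channel count: assumed
          cal = struct.unpack("<3f", f.read(12))              # calibration terms (assumed: 3 floats)
          live_time, real_time = struct.unpack("<2f", f.read(8))
          counts = struct.unpack(f"<{n_channels}f", f.read(4 * n_channels))
          return {"calibration": cal, "live_time": live_time,
                  "real_time": real_time, "counts": counts}

      # energy of channel ch under a typical polynomial calibration:
      # E(ch) = cal[0] + cal[1]*ch + cal[2]*ch**2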

  4. Representation of thermal infrared imaging data in the DICOM using XML configuration files.

    PubMed

    Ruminski, Jacek

    2007-01-01

    The DICOM standard has become a widely accepted and implemented format for the exchange and storage of medical imaging data. Different imaging modalities are supported; however, there is no dedicated solution for thermal infrared imaging in medicine. In this article we propose new ideas and improvements to the final proposal of the new DICOM Thermal Infrared Imaging structures and services. Additionally, we designed, implemented and tested software packages for universal conversion of existing thermal imaging files to the DICOM format using XML configuration files. The proposed solution works fast and requires a minimal number of user interactions. The XML configuration file makes it possible to compose a set of attributes for any source file format of a thermal imaging camera.

  5. Efficient stereoscopic contents file format on the basis of ISO base media file format

    NASA Astrophysics Data System (ADS)

    Kim, Kyuheon; Lee, Jangwon; Suh, Doug Young; Park, Gwang Hoon

    2009-02-01

    Much 3D content has been widely used for multimedia services; however, real 3D video content has been adopted only in limited applications such as specially designed 3D cinemas, because of the difficulty of capturing real 3D video content and the limitations of the display devices available on the market. Recently, however, diverse types of display devices for stereoscopic video content have been released. In particular, mobile phones with stereoscopic cameras have come to market, allowing a user, as a consumer, to have more realistic experiences without glasses and, as a content creator, to take stereoscopic images or record stereoscopic video. However, users can store and display such acquired stereoscopic content only on their own devices, because no common file format exists for it. This limitation keeps users from sharing their content with other users, which makes it difficult for the market for stereoscopic content to expand. Therefore, this paper proposes a common file format, based on the ISO base media file format, for stereoscopic content, which enables users to store and exchange pure stereoscopic content. This technology is also currently under development as an international MPEG standard called the stereoscopic video application format.

  6. Cloud Optimized Image Format and Compression

    NASA Astrophysics Data System (ADS)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud-based image storage and processing require a re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud-based elastic storage and computation environments. This paper provides details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced against the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include a simple-to-implement algorithm that enables it to be efficiently accessed using JavaScript. Combining this new cloud-based image storage format and compression will help resolve some of the challenges of big image data on the internet.

  7. Interoperability format translation and transformation between IFC architectural design file and simulation file formats

    DOEpatents

    Chao, Tian-Jy; Kim, Younghun

    2015-02-03

    Automatically translating a building architecture file format (Industry Foundation Class) to a simulation file, in one aspect, may extract data and metadata used by a target simulation tool from a building architecture file. Interoperability data objects may be created and the extracted data is stored in the interoperability data objects. A model translation procedure may be prepared to identify a mapping from a Model View Definition to a translation and transformation function. The extracted data may be transformed using the data stored in the interoperability data objects, an input Model View Definition template, and the translation and transformation function to convert the extracted data to correct geometric values needed for a target simulation file format used by the target simulation tool. The simulation file in the target simulation file format may be generated.
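
    A schematic, self-contained toy of the pipeline this abstract describes (extract, wrap in interoperability data objects, apply a mapping-driven transformation, emit) is sketched below in Python; all names and the trivial unit-scaling "transformation" are invented for illustration, not taken from the patent.

      from dataclasses import dataclass

      @dataclass
      class InteropObject:                      # "interoperability data object" holding extracted data
          name: str
          raw_coords: tuple

      def extract(building_model):              # stand-in for parsing the IFC file
          return [InteropObject(n, c) for n, c in building_model]

      def transform(obj, scale):                # stand-in for the MVD-driven transformation function
          return {"name": obj.name, "coords": tuple(scale * v for v in obj.raw_coords)}

      model = [("wall-1", (0.0, 0.0, 3.0)), ("wall-2", (5.0, 0.0, 3.0))]
      mvd_scale = 1000.0                        # e.g., metres to millimetres (assumed mapping)
      simulation_input = [transform(o, mvd_scale) for o in extract(model)]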

  8. Interoperability format translation and transformation between IFC architectural design file and simulation file formats

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chao, Tian-Jy; Kim, Younghun

    Automatically translating a building architecture file format (Industry Foundation Class) to a simulation file, in one aspect, may extract data and metadata used by a target simulation tool from a building architecture file. Interoperability data objects may be created and the extracted data is stored in the interoperability data objects. A model translation procedure may be prepared to identify a mapping from a Model View Definition to a translation and transformation function. The extracted data may be transformed using the data stored in the interoperability data objects, an input Model View Definition template, and the translation and transformation function to convert the extracted data to correct geometric values needed for a target simulation file format used by the target simulation tool. The simulation file in the target simulation file format may be generated.

  9. Sensitivity Data File Formats

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T.

    2016-04-01

    The format of the TSUNAMI-A sensitivity data file produced by SAMS for cases with deterministic transport solutions is given in Table 6.3.A.1. The occurrence of each entry in the data file is followed by an identification of the data contained on each line of the file and the FORTRAN edit descriptor denoting the format of each line. A brief description of each line is also presented. A sample of the TSUNAMI-A data file for the Flattop-25 sample problem is provided in Figure 6.3.A.1. Here, only two profiles out of the 130 computed are shown.
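
    Because each line's layout is fixed by a FORTRAN edit descriptor, such files are parsed by slicing fixed-width fields rather than splitting on whitespace (fields can abut when a sign consumes the separating blank). A small Python illustration follows; the 16-character field width is an example descriptor (E16.8), not the one from Table 6.3.A.1.

      def parse_fixed(line, width=16):
          """Slice a line written with a fixed-width FORTRAN descriptor such as E16.8."""
          body = line.rstrip("\n")
          return [float(body[i:i + width]) for i in range(0, len(body), width)]

      line = "  1.23456789E-02  4.00000000E+00 -5.60000000E-01"
      print(parse_fixed(line))   # [0.0123456789, 4.0, -0.56]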

  10. imzML: Imaging Mass Spectrometry Markup Language: A common data format for mass spectrometry imaging.

    PubMed

    Römpp, Andreas; Schramm, Thorsten; Hester, Alfons; Klinkert, Ivo; Both, Jean-Pierre; Heeren, Ron M A; Stöckli, Markus; Spengler, Bernhard

    2011-01-01

    Imaging mass spectrometry is the method of scanning a sample of interest and generating an "image" of the intensity distribution of a specific analyte. The data sets consist of a large number of mass spectra which are usually acquired with identical settings. Existing data formats are not sufficient to describe an MS imaging experiment completely. The data format imzML was developed to allow the flexible and efficient exchange of MS imaging data between different instruments and data analysis software. For this purpose, the MS imaging data is divided into two separate files. The mass spectral data is stored in a binary file to ensure efficient storage. All metadata (e.g., instrumental parameters, sample details) are stored in an XML file which is based on the standard data format mzML developed by HUPO-PSI. The original mzML controlled vocabulary was extended to include specific parameters of imaging mass spectrometry (such as x/y position and spatial resolution). The two files (XML and binary) are connected by offset values in the XML file and are unambiguously linked by a universally unique identifier. The resulting datasets are comparable in size to the raw data, and the separate metadata file allows flexible handling of large datasets. Several imaging MS software tools already support imzML. This allows choosing from a (growing) number of processing tools. One is no longer limited to proprietary software, but is able to use the processing software which is best suited for a specific question or application. On the other hand, measurements from different instruments can be compared within one software application using identical settings for data processing. All necessary information for evaluating and implementing imzML can be found at http://www.imzML.org.
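
    The offset-linking idea is easy to see in miniature: the XML metadata records where each spectrum's arrays live in the binary file. The Python toy below uses simplified element and attribute names, not the actual imzML controlled-vocabulary terms.

      import io
      import struct
      import xml.etree.ElementTree as ET

      binary = io.BytesIO(struct.pack("<4f", 100.1, 200.2, 300.3, 400.4))  # stand-in binary file

      meta = ET.fromstring(
          '<spectrum x="3" y="7">'
          '<mzArray offset="0" length="2"/>'
          '<intensityArray offset="8" length="2"/>'
          '</spectrum>')

      def read_array(node):
          off, n = int(node.get("offset")), int(node.get("length"))
          binary.seek(off)
          return struct.unpack(f"<{n}f", binary.read(4 * n))   # little-endian float32

      mz = read_array(meta.find("mzArray"))                # ~(100.1, 200.2)
      intensity = read_array(meta.find("intensityArray"))  # ~(300.3, 400.4)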

  11. Migration of the digital interactive breast-imaging teaching file

    NASA Astrophysics Data System (ADS)

    Cao, Fei; Sickles, Edward A.; Huang, H. K.; Zhou, Xiaoqiang

    1998-06-01

    The digital breast imaging teaching file developed during the last two years in our laboratory has been used successfully at UCSF (University of California, San Francisco) as a routine teaching tool for training radiology residents and fellows in mammography. Building on this success, we have ported the teaching file from an old Pixar imaging/Sun SPARC 470 display system to our newly designed telemammography display workstation (Ultra SPARC 2 platform with two DOME Md5/SBX display boards). The old Pixar/Sun 470 system, although adequate for fast and high-resolution image display, is 4-year-old technology, expensive to maintain and difficult to upgrade. The new display workstation is more cost-effective and is also compatible with the digital image format from a full-field direct digital mammography system. The digital teaching file is built on a sophisticated computer-aided instruction (CAI) model, which simulates the management sequences used in imaging interpretation and work-up. Each user can be prompted to respond by making his/her own observations, assessments, and work-up decisions as well as the marking of image abnormalities. This effectively replaces the traditional 'show-and-tell' teaching file experience with an interactive, response-driven type of instruction.

  12. Mass spectrometer output file format mzML.

    PubMed

    Deutsch, Eric W

    2010-01-01

    Mass spectrometry is an important technique for analyzing proteins and other biomolecular compounds in biological samples. Each of the vendors of these mass spectrometers uses a different proprietary binary output file format, which has hindered data sharing and the development of open source software for downstream analysis. The solution has been to develop, with the full participation of academic researchers as well as software and hardware vendors, an open XML-based format for encoding mass spectrometer output files, and then to write software to use this format for archiving, sharing, and processing. This chapter presents the various components and information available for this format, mzML. In addition to the XML schema that defines the file structure, a controlled vocabulary provides clear terms and definitions for the spectral metadata, and a semantic validation rules mapping file allows the mzML semantic validator to ensure that an mzML document complies with one of several levels of requirements. Complete documentation and example files ensure that the format may be uniformly implemented. At the time of release, there already existed several implementations of the format and vendors have committed to supporting the format in their products.

  13. 76 FR 43679 - Filing via the Internet; Notice of Additional File Formats for efiling

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-21

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. RM07-16-000] Filing via the Internet; Notice of Additional File Formats for efiling Take notice that the Commission has added to its list of acceptable file formats the four-character file extensions for Microsoft Office 2007/2010...

  14. MINC 2.0: A Flexible Format for Multi-Modal Images.

    PubMed

    Vincent, Robert D; Neelin, Peter; Khalili-Mahani, Najmeh; Janke, Andrew L; Fonov, Vladimir S; Robbins, Steven M; Baghdadi, Leila; Lerch, Jason; Sled, John G; Adalat, Reza; MacDonald, David; Zijdenbos, Alex P; Collins, D Louis; Evans, Alan C

    2016-01-01

    It is often useful for an imaging data format to afford rich metadata, be flexible, scale to very large file sizes, support multi-modal data, and have strong inbuilt mechanisms for data provenance. Beginning in 1992, MINC was developed as a system for flexible, self-documenting representation of neuroscientific imaging data with arbitrary orientation and dimensionality. The MINC system incorporates three broad components: a file format specification, a programming library, and a growing set of tools. In the early 2000s the MINC developers created MINC 2.0, which added support for 64-bit file sizes, internal compression, and a number of other modern features. Because of its extensible design, it has been easy to incorporate details of provenance in the header metadata, including an explicit processing history, unique identifiers, and vendor-specific scanner settings. This makes MINC ideal for use in large-scale imaging studies and databases. It also makes it easy to adapt to new scanning sequences and modalities.

  15. FRS Geospatial Return File Format

    EPA Pesticide Factsheets

    The Geospatial Return File Format describes the format that must be used to submit latitude and longitude coordinates for use in Envirofacts mapping applications. These coordinates are stored in the Geospatial Reference Tables.

  16. Optimal Compression Methods for Floating-point Format Images

    NASA Technical Reports Server (NTRS)

    Pence, W. D.; White, R. L.; Seaman, R.

    2009-01-01

    We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2 - 1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers which are compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced; however, all the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel values can significantly improve the photometric and astrometric precision in the stellar images in the compressed file without adding additional noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.
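
    The quantize-with-dither step can be sketched in a few lines: each pixel is scaled by a quantization step q, and a reproducible uniform dither is added before rounding and subtracted again on restoration, so the quantization error stays bounded by q/2 without a systematic bias. This sketch omits the per-tile scaling and Rice coding that the FITS tiled-image convention specifies.

      import random

      def quantize(pixels, q, seed=0):
          rng = random.Random(seed)                 # seeded so the dither is reproducible
          dither = [rng.random() - 0.5 for _ in pixels]
          return [round(p / q + d) for p, d in zip(pixels, dither)], dither

      def restore(ints, dither, q):
          return [(i - d) * q for i, d in zip(ints, dither)]

      vals = [1.234, 1.237, 1.241]
      ints, d = quantize(vals, q=0.01)
      print(restore(ints, d, q=0.01))   # each value within q/2 of the original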

  17. Effect of Instrumentation Length and Instrumentation Systems: Hand Versus Rotary Files on Apical Crack Formation - An In vitro Study.

    PubMed

    Devale, Madhuri R; Mahesh, M C; Bhandary, Shreetha

    2017-01-01

    Stresses generated during root canal instrumentation have been reported to cause apical cracks. The smaller, less pronounced defects like cracks can later propagate into vertical root fracture when the tooth is subjected to repeated stresses from endodontic or restorative procedures. This study evaluated the occurrence of apical cracks with stainless steel hand files and rotary NiTi RaCe and K3 files at two different instrumentation lengths. In the present in vitro study, 60 mandibular premolars were mounted in resin blocks with simulated periodontal ligament. The apical 3 mm of the root surfaces were exposed and stained using India ink. Preoperative images of root apices were obtained at 100x using a stereomicroscope. The teeth were divided into six groups of 10 each. The first two groups were instrumented with stainless steel files, the next two groups with rotary NiTi RaCe files and the last two groups with rotary NiTi K3 files. The instrumentation was carried out till the apical foramen (Working Length-WL) and 1 mm short of the apical foramen (WL-1) with each file system. After root canal instrumentation, postoperative images of root apices were obtained. Preoperative and postoperative images were compared and the occurrence of cracks was recorded. Descriptive statistical analysis and Chi-square tests were used to analyze the results. Apical root cracks were seen in 30%, 35% and 20% of teeth instrumented with K-files, RaCe files and K3 files respectively. There was no statistical significance among the three instrumentation systems in the formation of apical cracks (p=0.563). Apical cracks were seen in 40% and 20% of teeth instrumented with K-files; 60% and 10% of teeth with RaCe files and 40% and 0% of teeth with K3 files at WL and WL-1 respectively. For groups instrumented with hand files there was no statistical significance in the number of cracks at WL and WL-1 (p=0.628). But for teeth instrumented with RaCe files and K3 files, significantly more cracks were seen at WL than at WL-1.

  18. Informatics in radiology (infoRAD): multimedia extension of medical imaging resource center teaching files.

    PubMed

    Yang, Guo Liang; Aziz, Aamer; Narayanaswami, Banukumar; Anand, Ananthasubramaniam; Lim, C C Tchoyoson; Nowinski, Wieslaw Lucjan

    2005-01-01

    A new method has been developed for multimedia enhancement of electronic teaching files created by using the standard protocols and formats offered by the Medical Imaging Resource Center (MIRC) project of the Radiological Society of North America. The typical MIRC electronic teaching file consists of static pages only; with the new method, audio and visual content may be added to the MIRC electronic teaching file so that the entire image interpretation process can be recorded for teaching purposes. With an efficient system for encoding the audiovisual record of on-screen manipulation of radiologic images, the multimedia teaching files generated are small enough to be transmitted via the Internet with acceptable resolution. Students may respond with the addition of new audio and visual content and thereby participate in a discussion about a particular case. MIRC electronic teaching files with multimedia enhancement have the potential to augment the effectiveness of diagnostic radiology teaching. RSNA, 2005.

  19. PSTOOLS - FOUR PROGRAMS THAT INTERPRET/FORMAT POSTSCRIPT FILES

    NASA Technical Reports Server (NTRS)

    Choi, D.

    1994-01-01

    PSTOOLS is a package of four programs that operate on files written in the page description language, PostScript. The programs include a PostScript previewer for the IRIS workstation, a PostScript driver for the Matrix QCRZ film recorder, a PostScript driver for the Tektronix 4693D printer, and a PostScript code beautifier that formats PostScript files to be more legible. The three programs PSIRIS, PSMATRIX, and PSTEK are similar in that they all interpret the PostScript language and output the graphical results to a device, and they support color PostScript images. The common code which is shared by these three programs is included as a library of routines. PSPRETTY formats a PostScript file by appropriately indenting procedures and code delimited by "saves" and "restores." PSTOOLS does not use Adobe fonts. PSTOOLS is written in C for implementation on SGI IRIS 4D series workstations running IRIX 3.2 or later. A README file and UNIX man pages provide information regarding the installation and use of the PSTOOLS programs. A six-page manual which provides slightly more detailed information may be purchased separately. The standard distribution medium for this package is one .25 inch streaming magnetic tape cartridge in UNIX tar format. PSIRIS (the largest program) requires 1.2Mb of main memory. PSMATRIX requires the "gpib" board (IEEE 488) available from Silicon Graphics, Inc. The programs with graphical interfaces require that the IRIS have at least 24 bit planes. This package was developed in 1990 and updated in 1991. SGI, IRIS 4D, and IRIX are trademarks of Silicon Graphics, Inc. Matrix QCRZ is a registered trademark of the AGFA Group. Tektronix 4693D is a trademark of Tektronix, Inc. Adobe is a trademark of Adobe Systems Incorporated. PostScript is a registered trademark of Adobe Systems Incorporated. UNIX is a registered trademark of AT&T Bell Laboratories.

  20. Distributed file management for remote clinical image-viewing stations

    NASA Astrophysics Data System (ADS)

    Ligier, Yves; Ratib, Osman M.; Girard, Christian; Logean, Marianne; Trayser, Gerhard

    1996-05-01

    The Geneva PACS is based on a distributed architecture, with different archive servers used to store all the image files produced by digital imaging modalities. Images can then be visualized on different display stations with the Osiris software. Image visualization requires the image file to be physically present on the local station. Thus, images must be transferred from archive servers to local display stations in an acceptable way, meaning fast and user-friendly, with the notion of a file hidden from users. The transfer of image files is done according to different schemes, including prefetching and direct image selection. Prefetching allows the retrieval of previous studies of a patient in advance. Direct image selection is also provided in order to retrieve images on request. When images are transferred locally to the display station, they are stored in Papyrus files, each file containing a set of images. File names are used by the Osiris viewing software to open image sequences. But file names alone are not explicit enough to properly describe the content of a file, so a specific utility has been developed to present a list of patients and, for each patient, a list of exams which can be selected and automatically displayed. The system has been successfully tested in different clinical environments and will soon be extended hospital-wide.

  1. Effect of Instrumentation Length and Instrumentation Systems: Hand Versus Rotary Files on Apical Crack Formation – An In vitro Study

    PubMed Central

    Mahesh, MC; Bhandary, Shreetha

    2017-01-01

    Introduction Stresses generated during root canal instrumentation have been reported to cause apical cracks. The smaller, less pronounced defects like cracks can later propagate into vertical root fracture when the tooth is subjected to repeated stresses from endodontic or restorative procedures. Aim This study evaluated the occurrence of apical cracks with stainless steel hand files and rotary NiTi RaCe and K3 files at two different instrumentation lengths. Materials and Methods In the present in vitro study, 60 mandibular premolars were mounted in resin blocks with simulated periodontal ligament. The apical 3 mm of the root surfaces were exposed and stained using India ink. Preoperative images of root apices were obtained at 100x using a stereomicroscope. The teeth were divided into six groups of 10 each. The first two groups were instrumented with stainless steel files, the next two groups with rotary NiTi RaCe files and the last two groups with rotary NiTi K3 files. The instrumentation was carried out till the apical foramen (Working Length-WL) and 1 mm short of the apical foramen (WL-1) with each file system. After root canal instrumentation, postoperative images of root apices were obtained. Preoperative and postoperative images were compared and the occurrence of cracks was recorded. Descriptive statistical analysis and Chi-square tests were used to analyze the results. Results Apical root cracks were seen in 30%, 35% and 20% of teeth instrumented with K-files, RaCe files and K3 files respectively. There was no statistical significance among the three instrumentation systems in the formation of apical cracks (p=0.563). Apical cracks were seen in 40% and 20% of teeth instrumented with K-files; 60% and 10% of teeth with RaCe files and 40% and 0% of teeth with K3 files at WL and WL-1 respectively. For groups instrumented with hand files there was no statistical significance in the number of cracks at WL and WL-1 (p=0.628). But for teeth instrumented with RaCe files and K3 files, significantly more cracks were seen at WL than at WL-1.

  2. Digital camera with apparatus for authentication of images produced from an image file

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L. (Inventor)

    1993-01-01

    A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely based upon the private key that digital data encrypted with the private key by the processor may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating at any time the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match, since even one bit change in the image hash will cause the image hash to be totally different from the secure hash.
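
    The hash-sign-verify flow the patent describes can be sketched with the Python "cryptography" package; the patent does not name a particular algorithm, so RSA below is only a stand-in for "private key inside the camera, matching public key outside".

      from cryptography.exceptions import InvalidSignature
      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric import padding, rsa

      camera_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

      image_file = b"...raw image bytes..."        # placeholder contents
      signature = camera_key.sign(image_file, padding.PKCS1v15(), hashes.SHA256())

      try:    # anyone holding the public key can check the file is unaltered
          camera_key.public_key().verify(signature, image_file,
                                         padding.PKCS1v15(), hashes.SHA256())
          print("image authentic")
      except InvalidSignature:
          print("image altered")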

  3. Do you also have problems with the file format syndrome?

    PubMed

    De Cuyper, B; Nyssen, E; Christophe, Y; Cornelis, J

    1991-11-01

    In a biomedical data processing environment, an essential requirement is the ability to integrate a large class of standard modules for the acquisition, processing and display of the (image) data. Our approach to the management and manipulation of the different data formats is based on the specification of a common standard for the representation of data formats, called 'data nature descriptions' to emphasise that this representation not only specifies the structure but also the contents of data objects (files). The idea behind this concept is to associate each hardware and software component that produces or uses medical data with a description of the data objects manipulated by that component. In our approach a special software module (a format convertor generator) takes care of the appropriate data format conversions, required when two or more components of the system exchange data.

  4. Digital Camera with Apparatus for Authentication of Images Produced from an Image File

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L. (Inventor)

    1996-01-01

    A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely related to the private key that digital data encrypted with the private key may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The authenticating apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match. Other techniques to address time-honored methods of deception, such as attaching false captions or inducing forced perspectives, are included.

  5. MSiReader: an open-source interface to view and analyze high resolving power MS imaging files on Matlab platform.

    PubMed

    Robichaud, Guillaume; Garrard, Kenneth P; Barry, Jeremy A; Muddiman, David C

    2013-05-01

    During the past decade, the field of mass spectrometry imaging (MSI) has greatly evolved, to a point where it has now been fully integrated by most vendors as an optional or dedicated platform that can be purchased with their instruments. However, the technology is not mature, and multiple research groups in both academia and industry are still very actively studying the fundamentals of imaging techniques, adapting the technology to new ionization sources, and developing new applications. As a result, there are many varieties of data file formats used to store mass spectrometry imaging data and, concurrently with the development of MSI, collaborative efforts have been undertaken to introduce common imaging data file formats. However, few free software packages to read and analyze files of these different formats are readily available. We introduce here MSiReader, a free open-source application to read and analyze high-resolution MSI data from the most common MSI data formats. The application is built on the Matlab platform (Mathworks, Natick, MA, USA) and includes a large selection of data analysis tools and features. People who are unfamiliar with the Matlab language will have little difficulty navigating the user-friendly interface, and users with Matlab programming experience can adapt and customize MSiReader for their own needs.

  6. MSiReader: An Open-Source Interface to View and Analyze High Resolving Power MS Imaging Files on Matlab Platform

    NASA Astrophysics Data System (ADS)

    Robichaud, Guillaume; Garrard, Kenneth P.; Barry, Jeremy A.; Muddiman, David C.

    2013-05-01

    During the past decade, the field of mass spectrometry imaging (MSI) has greatly evolved, to a point where it has now been fully integrated by most vendors as an optional or dedicated platform that can be purchased with their instruments. However, the technology is not mature, and multiple research groups in both academia and industry are still very actively studying the fundamentals of imaging techniques, adapting the technology to new ionization sources, and developing new applications. As a result, there are many varieties of data file formats used to store mass spectrometry imaging data and, concurrently with the development of MSI, collaborative efforts have been undertaken to introduce common imaging data file formats. However, few free software packages to read and analyze files of these different formats are readily available. We introduce here MSiReader, a free open-source application to read and analyze high-resolution MSI data from the most common MSI data formats. The application is built on the Matlab platform (Mathworks, Natick, MA, USA) and includes a large selection of data analysis tools and features. People who are unfamiliar with the Matlab language will have little difficulty navigating the user-friendly interface, and users with Matlab programming experience can adapt and customize MSiReader for their own needs.

  7. File formats commonly used in mass spectrometry proteomics.

    PubMed

    Deutsch, Eric W

    2012-12-01

    The application of mass spectrometry (MS) to the analysis of proteomes has enabled the high-throughput identification and abundance measurement of hundreds to thousands of proteins per experiment. However, the formidable informatics challenge associated with analyzing MS data has required a wide variety of data file formats to encode the complex data types associated with MS workflows. These formats encompass the encoding of input instruction for instruments, output products of the instruments, and several levels of information and results used by and produced by the informatics analysis tools. A brief overview of the most common file formats in use today is presented here, along with a discussion of related topics.

  8. Image Size Variation Influence on Corrupted and Non-viewable BMP Image

    NASA Astrophysics Data System (ADS)

    Azmi, Tengku Norsuhaila T.; Azma Abdullah, Nurul; Rahman, Nurul Hidayah Ab; Hamid, Isredza Rahmi A.; Chai Wen, Chuah

    2017-08-01

    Images are one of the evidence components sought in digital forensics. The Joint Photographic Experts Group (JPEG) format is the most popular on the Internet because JPEG compression is lossy and produces small files, which speeds up transmission. However, corrupted JPEG images are hard to recover because of the complexity of determining the corruption point. Bitmap (BMP) images, by contrast, are preferred in image processing because a BMP file contains all the image information in a simple format. Therefore, in order to investigate the corruption point, the file is first converted into BMP format. Nevertheless, many things can corrupt a BMP image, such as changes to the image size, which can make the file non-viewable. In this paper, the experiment indicates that the size of a BMP file influences the image itself under three conditions: deletion, replacement, and insertion. From the experiment, we learned that correcting the file size can produce an at least partially viewable file, which can then be investigated further to identify the corruption point.
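
    The "correct the file size" repair is concrete because a BMP header stores the total file size as a little-endian uint32 at bytes 2-5. A hedged Python sketch of patching that field to match the bytes actually on disk:

      import os
      import struct

      def fix_bmp_size(path):
          actual = os.path.getsize(path)
          with open(path, "r+b") as f:
              if f.read(2) != b"BM":                         # BMP magic number
                  raise ValueError("not a BMP file")
              (declared,) = struct.unpack("<I", f.read(4))   # bfSize field
              if declared != actual:
                  f.seek(2)
                  f.write(struct.pack("<I", actual))         # patch bfSize to match reality
          return declared, actual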

  9. 18 CFR 50.3 - Applications/pre-filing; rules and format.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... filings must be signed in compliance with § 385.2005 of this chapter. (e) The Commission will conduct a... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Applications/pre-filing... INTERSTATE ELECTRIC TRANSMISSION FACILITIES § 50.3 Applications/pre-filing; rules and format. (a) Filings are...

  10. File Formats Commonly Used in Mass Spectrometry Proteomics*

    PubMed Central

    Deutsch, Eric W.

    2012-01-01

    The application of mass spectrometry (MS) to the analysis of proteomes has enabled the high-throughput identification and abundance measurement of hundreds to thousands of proteins per experiment. However, the formidable informatics challenge associated with analyzing MS data has required a wide variety of data file formats to encode the complex data types associated with MS workflows. These formats encompass the encoding of input instruction for instruments, output products of the instruments, and several levels of information and results used by and produced by the informatics analysis tools. A brief overview of the most common file formats in use today is presented here, along with a discussion of related topics. PMID:22956731

  11. Mapping DICOM to OpenDocument format

    NASA Astrophysics Data System (ADS)

    Yu, Cong; Yao, Zhihong

    2009-02-01

    In order to enhance the readability, extensibility and sharing of DICOM files, we have introduced XML into the DICOM file system (SPIE Volume 5748)[1] and the multilayer tree structure into DICOM (SPIE Volume 6145)[2]. In this paper, we propose mapping DICOM to ODF (OpenDocument Format), for it is also based on XML. As a result, the new format realizes the separation of content (including text content and images) from display style. Meanwhile, since OpenDocument files take the format of a ZIP compressed archive, the new kind of DICOM files can benefit from ZIP's lossless compression to reduce file size. Moreover, this open format can also guarantee long-term access to data without legal or technical barriers, making medical images accessible to various fields.

  12. Keemei: cloud-based validation of tabular bioinformatics file formats in Google Sheets.

    PubMed

    Rideout, Jai Ram; Chase, John H; Bolyen, Evan; Ackermann, Gail; González, Antonio; Knight, Rob; Caporaso, J Gregory

    2016-06-13

    Bioinformatics software often requires human-generated tabular text files as input and has specific requirements for how those data are formatted. Users frequently manage these data in spreadsheet programs, which is convenient for researchers who are compiling the requisite information because the spreadsheet programs can easily be used on different platforms including laptops and tablets, and because they provide a familiar interface. It is increasingly common for many different researchers to be involved in compiling these data, including study coordinators, clinicians, lab technicians and bioinformaticians. As a result, many research groups are shifting toward using cloud-based spreadsheet programs, such as Google Sheets, which support the concurrent editing of a single spreadsheet by different users working on different platforms. Most of the researchers who enter data are not familiar with the formatting requirements of the bioinformatics programs that will be used, so validating and correcting file formats is often a bottleneck prior to beginning bioinformatics analysis. We present Keemei, a Google Sheets Add-on, for validating tabular files used in bioinformatics analyses. Keemei is available free of charge from Google's Chrome Web Store. Keemei can be installed and run on any web browser supported by Google Sheets. Keemei currently supports the validation of two widely used tabular bioinformatics formats, the Quantitative Insights into Microbial Ecology (QIIME) sample metadata mapping file format and the Spatially Referenced Genetic Data (SRGD) format, but is designed to easily support the addition of others. Keemei will save researchers time and frustration by providing a convenient interface for tabular bioinformatics file format validation. By allowing everyone involved with data entry for a project to easily validate their data, it will reduce the validation and formatting bottlenecks that are commonly encountered when human-generated data files are used as input to bioinformatics analyses.
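
    A minimal sketch of the kind of check such a validator performs on a tab-separated mapping file is shown below; the two rules (required columns present, sample IDs unique) are illustrative only, not Keemei's actual rule set, and the column names are assumptions.

      import csv

      REQUIRED = ["#SampleID", "BarcodeSequence", "Description"]   # assumed column names

      def validate(path):
          errors = []
          with open(path, newline="") as f:
              rows = list(csv.reader(f, delimiter="\t"))
          header = rows[0] if rows else []
          errors += [f"missing required column {c!r}" for c in REQUIRED if c not in header]
          ids = [r[0] for r in rows[1:] if r]
          if len(ids) != len(set(ids)):
              errors.append("duplicate sample IDs")
          return errors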

  13. BOREAS Level-2 MAS Surface Reflectance and Temperature Images in BSQ Format

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Newcomer, Jeffrey (Editor); Lobitz, Brad; Spanner, Michael; Strub, Richard

    2000-01-01

    The BOReal Ecosystem-Atmosphere Study (BOREAS) Staff Science Aircraft Data Acquisition Program focused on providing the research teams with the remotely sensed aircraft data products they needed to compare and spatially extend point results. The MODIS Airborne Simulator (MAS) images, along with other remotely sensed data, were collected to provide spatially extensive information over the primary study areas. This information includes biophysical parameter maps such as surface reflectance and temperature. Collection of the MAS images occurred over the study areas during the 1994 field campaigns. The level-2 MAS data cover the dates of 21-Jul-1994, 24-Jul-1994, 04-Aug-1994, and 08-Aug-1994. The data are not geographically/geometrically corrected; however, files of relative X and Y coordinates for each image pixel were derived by using the C130 navigation data in a MAS scan model. The data are provided in binary image format files.
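
    BSQ (band-sequential) layout stores all of band 1, then all of band 2, and so on, so a flat binary read reshapes directly to (bands, rows, cols). A hedged numpy sketch follows; the file name, dimensions, data type, and band meaning are placeholders to be taken from the per-file BOREAS documentation.

      import numpy as np

      bands, rows, cols = 2, 1000, 716                     # assumed image geometry
      data = np.fromfile("mas_level2.bsq", dtype="<i2")    # assumed 16-bit little-endian samples
      image = data.reshape(bands, rows, cols)              # BSQ: band varies slowest
      surface_temperature = image[1]                       # band assignment: assumed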

  14. 5 CFR 1201.14 - Electronic filing procedures.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... (PDF), and image files (files created by scanning). A list of formats allowed can be found at e-Appeal..., or by uploading the supporting documents in the form of one or more PDF files in which each...

  15. Effect of combined digital imaging parameters on endodontic file measurements.

    PubMed

    de Oliveira, Matheus Lima; Pinto, Geraldo Camilo de Souza; Ambrosano, Glaucia Maria Bovi; Tosoni, Guilherme Monteiro

    2012-10-01

    This study assessed the effect of the combination of a dedicated endodontic filter, spatial resolution, and contrast resolution on the determination of endodontic file lengths. Forty extracted single-rooted teeth were x-rayed with K-files (ISO size 10 and 15) in the root canals. Images were acquired using the VistaScan system (Dürr Dental, Bietigheim-Bissingen, Germany) under different combined parameters of spatial resolution (10 and 25 line pairs per millimeter [lp/mm]) and contrast resolution (8- and 16-bit depths). Subsequently, a dedicated endodontic filter was applied to the 16-bit images, creating 2 additional parameters. Six observers measured the length of the endodontic files in the root canals using the software that accompanies the system. The mean values of the actual file lengths and the measurements of the radiographic images were submitted to 1-way analysis of variance and the Tukey test at a level of significance of 5%. The intraobserver reproducibility was assessed by the intraclass correlation coefficient. All combined image parameters showed excellent intraobserver agreement with intraclass correlation coefficient means higher than 0.98. The imaging parameter of 25 lp/mm and 16 bit associated with the use of the endodontic filter did not differ significantly from the actual file lengths when both file sizes were analyzed together or separately (P > .05). When the size 15 file was evaluated separately, only 8-bit images differed significantly from the actual file lengths (P ≤ .05). The combination of an endodontic filter with high spatial resolution and high contrast resolution is recommended for the determination of file lengths when using storage phosphor plates.

  16. Photon-HDF5: An Open File Format for Timestamp-Based Single-Molecule Fluorescence Experiments.

    PubMed

    Ingargiola, Antonino; Laurence, Ted; Boutelle, Robert; Weiss, Shimon; Michalet, Xavier

    2016-01-05

    We introduce Photon-HDF5, an open and efficient file format to simplify exchange and long-term accessibility of data from single-molecule fluorescence experiments based on photon-counting detectors such as single-photon avalanche diode, photomultiplier tube, or arrays of such detectors. The format is based on HDF5, a widely used platform- and language-independent hierarchical file format for which user-friendly viewers are available. Photon-HDF5 can store raw photon data (timestamp, channel number, etc.) from any acquisition hardware, but also setup and sample description, information on provenance, authorship and other metadata, and is flexible enough to include any kind of custom data. The format specifications are hosted on a public website, which is open to contributions by the biophysics community. As an initial resource, the website provides code examples to read Photon-HDF5 files in several programming languages and a reference Python library (phconvert), to create new Photon-HDF5 files and convert several existing file formats into Photon-HDF5. To encourage adoption by the academic and commercial communities, all software is released under the MIT open source license.
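
    The core idea, photon timestamps plus descriptive metadata in one HDF5 container, can be sketched with h5py. The group and attribute names below follow the spirit of the format but are assumptions; the authoritative layout is the Photon-HDF5 specification and the phconvert reference library mentioned above.

      import h5py
      import numpy as np

      timestamps = np.sort(np.random.randint(0, 10**9, size=1000))

      with h5py.File("photons.h5", "w") as f:
          f["photon_data/timestamps"] = timestamps
          f["photon_data/timestamps"].attrs["clock_seconds"] = 10e-9   # assumed 10 ns clock
          f["setup/num_spots"] = 1
          f["identity/author"] = "example author"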

  17. Photon-HDF5: An Open File Format for Timestamp-Based Single-Molecule Fluorescence Experiments

    PubMed Central

    Ingargiola, Antonino; Laurence, Ted; Boutelle, Robert; Weiss, Shimon; Michalet, Xavier

    2016-01-01

    We introduce Photon-HDF5, an open and efficient file format to simplify exchange and long-term accessibility of data from single-molecule fluorescence experiments based on photon-counting detectors such as single-photon avalanche diode, photomultiplier tube, or arrays of such detectors. The format is based on HDF5, a widely used platform- and language-independent hierarchical file format for which user-friendly viewers are available. Photon-HDF5 can store raw photon data (timestamp, channel number, etc.) from any acquisition hardware, but also setup and sample description, information on provenance, authorship and other metadata, and is flexible enough to include any kind of custom data. The format specifications are hosted on a public website, which is open to contributions by the biophysics community. As an initial resource, the website provides code examples to read Photon-HDF5 files in several programming languages and a reference Python library (phconvert), to create new Photon-HDF5 files and convert several existing file formats into Photon-HDF5. To encourage adoption by the academic and commercial communities, all software is released under the MIT open source license. PMID:26745406

  18. An Efficient Format for Nearly Constant-Time Access to Arbitrary Time Intervals in Large Trace Files

    DOE PAGES

    Chan, Anthony; Gropp, William; Lusk, Ewing

    2008-01-01

    A powerful method to aid in understanding the performance of parallel applications uses log or trace files containing time-stamped events and states (pairs of events). These trace files can be very large, often hundreds or even thousands of megabytes. Because of the cost of accessing and displaying such files, other methods are often used that reduce the size of the tracefiles at the cost of sacrificing detail or other information. This paper describes a hierarchical trace file format that provides for display of an arbitrary time window in a time independent of the total size of the file and roughly proportional to the number of events within the time window. This format eliminates the need to sacrifice data to achieve a smaller trace file size (since storage is inexpensive, it is necessary only to make efficient use of bandwidth to that storage). The format can be used to organize a trace file or to create a separate file of annotations that may be used with conventional trace files. We present an analysis of the time to access all of the events relevant to an interval of time and we describe experiments demonstrating the performance of this file format.
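
    The access-cost claim is easiest to see with an index of (start_time, file_offset) pairs over fixed frames: a binary search finds the window, and only the overlapping frames are read. The paper's format uses a hierarchical tree; the flat sorted index below is a simplification that conveys only the cost idea.

      import bisect

      frame_index = [(0.0, 0), (1.0, 4096), (2.0, 8192), (3.0, 12288)]   # toy (time, offset) index
      starts = [t for t, _ in frame_index]

      def frames_for_window(t0, t1):
          lo = max(bisect.bisect_right(starts, t0) - 1, 0)   # frame containing t0
          hi = bisect.bisect_left(starts, t1)                # first frame at or after t1
          return frame_index[lo:hi]

      print(frames_for_window(0.5, 2.5))   # frames overlapping [0.5, 2.5)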

  19. 5 CFR 1201.14 - Electronic filing procedures.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...-Appeal Online, in which case service is governed by paragraph (j) of this section, or by non-electronic... (PDF), and image files (files created by scanning). A list of formats allowed can be found at e-Appeal... representatives of the appeals in which they were filed. (j) Service of electronic pleadings and MSPB documents...

  20. 5 CFR 1201.14 - Electronic filing procedures.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...-Appeal Online, in which case service is governed by paragraph (j) of this section, or by non-electronic... (PDF), and image files (files created by scanning). A list of formats allowed can be found at e-Appeal... representatives of the appeals in which they were filed. (j) Service of electronic pleadings and MSPB documents...

  1. 5 CFR 1201.14 - Electronic filing procedures.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...-Appeal Online, in which case service is governed by paragraph (j) of this section, or by non-electronic... (PDF), and image files (files created by scanning). A list of formats allowed can be found at e-Appeal... representatives of the appeals in which they were filed. (j) Service of electronic pleadings and MSPB documents...

  2. 5 CFR 1201.14 - Electronic filing procedures.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...-Appeal Online, in which case service is governed by paragraph (j) of this section, or by non-electronic... (PDF), and image files (files created by scanning). A list of formats allowed can be found at e-Appeal... representatives of the appeals in which they were filed. (j) Service of electronic pleadings and MSPB documents...

  3. Pre-Launch Algorithm and Data Format for the Level 1 Calibration Products for the EOS AM-1 Moderate Resolution Imaging Spectroradiometer (MODIS)

    NASA Technical Reports Server (NTRS)

    Guenther, Bruce W.; Godden, Gerald D.; Xiong, Xiao-Xiong; Knight, Edward J.; Qiu, Shi-Yue; Montgomery, Harry; Hopkins, M. M.; Khayat, Mohammad G.; Hao, Zhi-Dong; Smith, David E. (Technical Monitor)

    2000-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) radiometric calibration product is described for the thermal emissive and the reflective solar bands. Specific sensor design characteristics are identified to assist in understanding how the calibration algorithm software product is designed. Both reflective solar band software products, radiance and reflectance factor, are described. The product file format is summarized and the MODIS Characterization Support Team (MCST) Homepage location for the current file format is provided.

  4. The digital geologic map of Colorado in ARC/INFO format, Part B. Common files

    USGS Publications Warehouse

    Green, Gregory N.

    1992-01-01

    This geologic map was prepared as a part of a study of digital methods and techniques as applied to complex geologic maps. The geologic map was digitized from the original scribe sheets used to prepare the published Geologic Map of Colorado (Tweto 1979). Consequently, the digital version is at 1:500,000 scale using the Lambert Conformal Conic map projection parameters of the state base map. Stable base contact prints of the scribe sheets were scanned on a Tektronix 4991 digital scanner. The scanner automatically converts the scanned image to an ASCII vector format. These vectors were transferred to a VAX minicomputer, where they were then loaded into ARC/INFO. Each vector and polygon was given attributes derived from the original 1979 geologic map. This database was developed on a MicroVAX computer system using VAX V5.4 and ARC/INFO 5.0 software. UPDATE: April 1995. The update was done solely for the purpose of adding the ability to plot to an HP650c plotter. Two new ARC/INFO plot AMLs along with a lineset and shadeset for the HP650C DesignJet printer have been included. These new files are COLORADO.650, INDEX.650, TWETOLIN.E00 and TWETOSHD.E00. These files were created on a UNIX platform with ARC/INFO 6.1.2. Updated versions of the INDEX.E00, CONTACT.E00, LINE.E00, DECO.E00 and BORDER.E00 files that include the newly defined HP650c items are also included. * Any use of trade, product, or firm names is for descriptive purposes only and does not imply endorsement by the U.S. Government. Descriptors: The Digital Geologic Map of Colorado in ARC/INFO Format, Open-File Report 92-050

  5. 78 FR 17233 - Notice of Opportunity To File Amicus Briefs

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-20

    .... Any commonly-used word processing format or PDF format is acceptable; text formats are preferable to image formats. Briefs may also be filed with the Office of the Clerk of the Board, Merit Systems...

  6. Portable document format file showing the surface models of cadaver whole body.

    PubMed

    Shin, Dong Sun; Chung, Min Suk; Park, Jin Seo; Park, Hyung Seon; Lee, Sangho; Moon, Young Lae; Jang, Hae Gwon

    2012-08-01

    In the Visible Korean project, 642 three-dimensional (3D) surface models have been built from the sectioned images of a male cadaver. It was recently discovered that the popular PDF format enables users to browse these numerous surface models conveniently in Adobe Reader. The purpose of this study was to present a PDF file containing systematized surface models of the human body as beneficial content. To that end, suitable software packages were employed in accordance with the procedures. Two-dimensional (2D) surface models including the original sectioned images were embedded into the 3D surface models. The surface models were categorized into systems and then groups. The adjusted surface models were inserted into a PDF file, where relevant multimedia data were added. The finalized PDF file containing comprehensive data of a whole body can be explored in various manners. The PDF file, downloadable freely from the homepage (http://anatomy.co.kr), is expected to be used as a satisfactory self-learning tool for anatomy. Raw data of the surface models can be extracted from the PDF file and employed for various simulations in clinical practice. The technique used to organize the surface models will be applied to the manufacture of other PDF files containing various multimedia contents.

  7. Photon-HDF5: an open file format for single-molecule fluorescence experiments using photon-counting detectors

    DOE PAGES

    Ingargiola, A.; Laurence, T. A.; Boutelle, R.; ...

    2015-12-23

    We introduce Photon-HDF5, an open and efficient file format to simplify exchange and long-term accessibility of data from single-molecule fluorescence experiments based on photon-counting detectors such as single-photon avalanche diode (SPAD), photomultiplier tube (PMT) or arrays of such detectors. The format is based on HDF5, a widely used platform- and language-independent hierarchical file format for which user-friendly viewers are available. Photon-HDF5 can store raw photon data (timestamp, channel number, etc.) from any acquisition hardware, but also setup and sample description, information on provenance, authorship and other metadata, and is flexible enough to include any kind of custom data. The format specifications are hosted on a public website, which is open to contributions by the biophysics community. As an initial resource, the website provides code examples to read Photon-HDF5 files in several programming languages and a reference Python library (phconvert), to create new Photon-HDF5 files and convert several existing file formats into Photon-HDF5. To encourage adoption by the academic and commercial communities, all software is released under the MIT open source license.

  8. An easy and effective approach to manage radiologic portable document format (PDF) files using iTunes.

    PubMed

    Qian, Li Jun; Zhou, Mi; Xu, Jian Rong

    2008-07-01

    The objective of this article is to explain an easy and effective approach for managing radiologic files in portable document format (PDF) using iTunes. PDF files are widely used as a standard file format for electronic publications as well as for medical online documents. Unfortunately, there is a lack of powerful software to manage numerous PDF documents. In this article, we explain how to use the hidden function of iTunes (Apple Computer) to manage PDF documents as easily as managing music files.

  9. Hybrid cryptosystem for image file using elgamal and double playfair cipher algorithm

    NASA Astrophysics Data System (ADS)

    Hardi, S. M.; Tarigan, J. T.; Safrina, N.

    2018-03-01

    In this paper, we present an implementation of image file encryption using hybrid cryptography. We chose the ElGamal algorithm for the asymmetric encryption and Double Playfair for the symmetric encryption. Our objective is to show that these algorithms are capable of encrypting an image file with acceptable running time and encrypted file size while maintaining the level of security. The application was built in the C# programming language and runs as a stand-alone desktop application under the Windows operating system. Our tests show that the system can encrypt an image with a resolution of 500×500 to a size of 976 kilobytes within an acceptable running time.

  10. Advanced Forensic Format: an Open Extensible Format for Disk Imaging

    NASA Astrophysics Data System (ADS)

    Garfinkel, Simson; Malan, David; Dubec, Karl-Alexander; Stevens, Christopher; Pham, Cecile

    This paper describes the Advanced Forensic Format (AFF), which is designed as an alternative to current proprietary disk image formats. AFF offers two significant benefits. First, it is more flexible because it allows extensive metadata to be stored with images. Second, AFF images consume less disk space than images in other formats (e.g., EnCase images). This paper also describes the Advanced Disk Imager, a new program for acquiring disk images that compares favorably with existing alternatives.

  11. MMTF-An efficient file format for the transmission, visualization, and analysis of macromolecular structures.

    PubMed

    Bradley, Anthony R; Rose, Alexander S; Pavelka, Antonín; Valasatava, Yana; Duarte, Jose M; Prlić, Andreas; Rose, Peter W

    2017-06-01

    Recent advances in experimental techniques have led to a rapid growth in the complexity, size, and number of macromolecular structures made available through the Protein Data Bank. This creates a challenge for macromolecular visualization and analysis. Macromolecular structure files, such as PDB or PDBx/mmCIF files, can be slow to transfer and parse, and hard to incorporate into third-party software tools. Here, we present a new binary and compressed data representation, the MacroMolecular Transmission Format (MMTF), as well as software implementations in several languages that have been developed around it, which address these issues. We describe the new format and its APIs and demonstrate that it is several times faster to parse, and about a quarter of the file size of, the current standard format, PDBx/mmCIF. As a consequence of the new data representation, it is now possible to visualize structures with millions of atoms in a web browser, and to keep the whole PDB archive in memory or parse it within a few minutes on average computers, which opens up a new way of thinking about how to design and implement efficient algorithms in structural bioinformatics. The PDB archive is available in MMTF file format through web services, with data updated on a weekly basis.

  12. MMTF—An efficient file format for the transmission, visualization, and analysis of macromolecular structures

    PubMed Central

    Pavelka, Antonín; Valasatava, Yana; Prlić, Andreas

    2017-01-01

    Recent advances in experimental techniques have led to a rapid growth in the complexity, size, and number of macromolecular structures made available through the Protein Data Bank. This creates a challenge for macromolecular visualization and analysis. Macromolecular structure files, such as PDB or PDBx/mmCIF files, can be slow to transfer and parse, and hard to incorporate into third-party software tools. Here, we present a new binary and compressed data representation, the MacroMolecular Transmission Format (MMTF), as well as software implementations in several languages that have been developed around it, which address these issues. We describe the new format and its APIs and demonstrate that it is several times faster to parse, and about a quarter of the file size of, the current standard format, PDBx/mmCIF. As a consequence of the new data representation, it is now possible to visualize structures with millions of atoms in a web browser, and to keep the whole PDB archive in memory or parse it within a few minutes on average computers, which opens up a new way of thinking about how to design and implement efficient algorithms in structural bioinformatics. The PDB archive is available in MMTF file format through web services, with data updated on a weekly basis. PMID:28574982
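
    For illustration, a minimal sketch using the mmtf-python reference library (assuming the package is installed and the PDB MMTF web service is reachable; the entry ID is just an example):

    ```python
    from mmtf import fetch

    # Download and decode the MMTF-encoded PDB entry 4HHB (hemoglobin).
    structure = fetch("4HHB")
    print(structure.num_atoms, structure.num_chains, structure.num_models)
    ```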

  13. Transforming Dermatologic Imaging for the Digital Era: Metadata and Standards.

    PubMed

    Caffery, Liam J; Clunie, David; Curiel-Lewandrowski, Clara; Malvehy, Josep; Soyer, H Peter; Halpern, Allan C

    2018-01-17

    Imaging is increasingly being used in dermatology for documentation, diagnosis, and management of cutaneous disease. The lack of standards for dermatologic imaging is an impediment to clinical uptake. Standardization can occur in image acquisition, terminology, interoperability, and metadata. This paper presents the International Skin Imaging Collaboration position on standardization of metadata for dermatologic imaging. Metadata are essential to ensure that dermatologic images are properly managed and interpreted. There are two standards-based approaches to recording and storing metadata in dermatologic imaging. The first uses standard consumer image file formats; the second is the file format and metadata model developed for the Digital Imaging and Communications in Medicine (DICOM) standard. DICOM would appear to provide an advantage over consumer image file formats for metadata, as it includes all the patient, study, and technical metadata necessary to use images clinically. Consumer image file formats, by contrast, include only technical metadata and must be used in conjunction with another actor, for example an electronic medical record, to supply the patient and study metadata. The use of DICOM may have ancillary benefits in dermatologic imaging, including leveraging DICOM network and workflow services, interoperability of images and metadata, leveraging existing enterprise imaging infrastructure, greater patient safety, and better compliance with legislative requirements for image retention.
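
    To make the metadata distinction concrete, here is a small sketch using the pydicom library (not part of the paper; the file name is hypothetical):

    ```python
    import pydicom

    ds = pydicom.dcmread("skin_lesion.dcm")
    # Patient and study metadata, which consumer formats lack ...
    print(ds.PatientName, ds.StudyDate, ds.Modality)
    # ... alongside the technical metadata that consumer formats do carry.
    print(ds.Rows, ds.Columns, ds.PhotometricInterpretation)
    ```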

  14. 15 CFR 995.26 - Conversion of NOAA ENC ® files to other formats.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...) Conversion of NOAA ENC files to other formats—(1) Content. CEVAD may provide NOAA ENC data in forms other... data files without degradation to positional accuracy or informational content. (2) Software certification. Conversion of NOAA ENC data to other formats must be accomplished within the constraints of IHO...

  15. [Intranet-based integrated information system of radiotherapy-related images and diagnostic reports].

    PubMed

    Nakamura, R; Sasaki, M; Oikawa, H; Harada, S; Tamakawa, Y

    2000-03-01

    An intranet technique was used to develop an information system that simultaneously supports both diagnostic reports and radiotherapy planning images. Using a file server as the gateway, a radiation oncology LAN was connected to an already operational RIS LAN. Dose-distribution images were saved in tagged image file format by way of a screen dump to the file server. X-ray simulator images and portal images were saved in encapsulated PostScript format on the file server and automatically converted to portable document format. The files on the file server were automatically registered to the Web server by the search engine and were available for searching and browsing using a Web browser. Registering planning images took less than a minute; for clients, searching and browsing a file took less than 3 seconds. Over 150,000 reports and 4,000 images from a six-month period were accessible. Because the intranet technique was used, construction and maintenance were completed without specialist expertise. Prompt access to essential information about radiotherapy has been made possible by this system. It promotes open access to radiotherapy planning information, which may improve the quality of treatment.

  16. Experimental Directory Structure (Exdir): An Alternative to HDF5 Without Introducing a New File Format

    PubMed Central

    Dragly, Svenn-Arne; Hobbi Mobarhan, Milad; Lepperød, Mikkel E.; Tennøe, Simen; Fyhn, Marianne; Hafting, Torkel; Malthe-Sørenssen, Anders

    2018-01-01

    Natural sciences generate an increasing amount of data in a wide range of formats developed by different research groups and commercial companies. At the same time, there is a growing desire to share data along with publications in order to enable reproducible research. Open formats have publicly available specifications, which facilitates data sharing and reproducible research. Hierarchical Data Format 5 (HDF5) is a popular open format widely used in neuroscience, often as a foundation for other, more specialized formats. However, drawbacks related to HDF5's complex specification have prompted discussion of an improved replacement. We propose a novel alternative, the Experimental Directory Structure (Exdir), an open specification for data storage in experimental pipelines that addresses the drawbacks associated with HDF5 while retaining its advantages. HDF5 stores data and metadata in a hierarchy within a complex binary file which, among other things, is not human-readable, is not optimal for version control systems, and lacks support for easy access to raw data from external applications. Exdir, on the other hand, uses file system directories to represent the hierarchy, with metadata stored in human-readable YAML files, datasets stored in binary NumPy files, and raw data stored directly in subdirectories. Furthermore, storing data in multiple files makes the data easier for version control systems to track. Exdir is not a file format in itself, but a specification for organizing files in a directory structure. Exdir uses the same abstractions as HDF5 and is compatible with the HDF5 Abstract Data Model. Several research groups are already using data stored in a directory hierarchy as an alternative to HDF5, but no common standard exists. This complicates and limits the opportunity for data sharing and development of common tools for reading, writing, and analyzing data. Exdir facilitates improved data storage, data sharing, reproducible research, and novel insight from

  17. Experimental Directory Structure (Exdir): An Alternative to HDF5 Without Introducing a New File Format.

    PubMed

    Dragly, Svenn-Arne; Hobbi Mobarhan, Milad; Lepperød, Mikkel E; Tennøe, Simen; Fyhn, Marianne; Hafting, Torkel; Malthe-Sørenssen, Anders

    2018-01-01

    Natural sciences generate an increasing amount of data in a wide range of formats developed by different research groups and commercial companies. At the same time, there is a growing desire to share data along with publications in order to enable reproducible research. Open formats have publicly available specifications, which facilitates data sharing and reproducible research. Hierarchical Data Format 5 (HDF5) is a popular open format widely used in neuroscience, often as a foundation for other, more specialized formats. However, drawbacks related to HDF5's complex specification have prompted discussion of an improved replacement. We propose a novel alternative, the Experimental Directory Structure (Exdir), an open specification for data storage in experimental pipelines that addresses the drawbacks associated with HDF5 while retaining its advantages. HDF5 stores data and metadata in a hierarchy within a complex binary file which, among other things, is not human-readable, is not optimal for version control systems, and lacks support for easy access to raw data from external applications. Exdir, on the other hand, uses file system directories to represent the hierarchy, with metadata stored in human-readable YAML files, datasets stored in binary NumPy files, and raw data stored directly in subdirectories. Furthermore, storing data in multiple files makes the data easier for version control systems to track. Exdir is not a file format in itself, but a specification for organizing files in a directory structure. Exdir uses the same abstractions as HDF5 and is compatible with the HDF5 Abstract Data Model. Several research groups are already using data stored in a directory hierarchy as an alternative to HDF5, but no common standard exists. This complicates and limits the opportunity for data sharing and development of common tools for reading, writing, and analyzing data. Exdir facilitates improved data storage, data sharing, reproducible research, and novel insight from
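
    Because Exdir mirrors the HDF5 abstractions, its reference Python API is close to h5py's. A hedged sketch, assuming the exdir package and with all names illustrative:

    ```python
    import numpy as np
    import exdir

    # The "file" is an ordinary directory on disk, not a binary container.
    f = exdir.File("experiment.exdir")
    session = f.require_group("session_01")  # becomes a subdirectory
    # Datasets are stored as binary NumPy (.npy) files.
    spikes = session.create_dataset("spike_times", data=np.array([0.01, 0.53, 1.2]))
    # Attributes land in human-readable YAML metadata files.
    session.attrs["animal_id"] = "rat_42"
    ```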

  18. The Open Microscopy Environment: open image informatics for the biological sciences

    NASA Astrophysics Data System (ADS)

    Blackburn, Colin; Allan, Chris; Besson, Sébastien; Burel, Jean-Marie; Carroll, Mark; Ferguson, Richard K.; Flynn, Helen; Gault, David; Gillen, Kenneth; Leigh, Roger; Leo, Simone; Li, Simon; Lindner, Dominik; Linkert, Melissa; Moore, Josh; Moore, William J.; Ramalingam, Balaji; Rozbicki, Emil; Rustici, Gabriella; Tarkowska, Aleksandra; Walczysko, Petr; Williams, Eleanor; Swedlow, Jason R.

    2016-07-01

    Despite significant advances in biological imaging and analysis, major informatics challenges remain unsolved: file formats are proprietary, storage and analysis facilities are lacking, as are standards for sharing image data and results. While the open FITS file format is ubiquitous in astronomy, astronomical imaging shares many challenges with biological imaging, including the need to share large image sets using secure, cross-platform APIs, and the need for scalable applications for processing and visualization. The Open Microscopy Environment (OME) is an open-source software framework developed to address these challenges. OME tools include: an open data model for multidimensional imaging (OME Data Model); an open file format (OME-TIFF) and library (Bio-Formats) enabling free access to images (5D+) written in more than 145 formats from many imaging domains, including FITS; and a data management server (OMERO). The Java-based OMERO client-server platform comprises an image metadata store, an image repository, visualization and analysis by remote access, allowing sharing and publishing of image data. OMERO provides a means to manage the data through a multi-platform API. OMERO's model-based architecture has enabled its extension into a range of imaging domains, including light and electron microscopy, high content screening, digital pathology and recently into applications using non-image data from clinical and genomic studies. This is made possible using the Bio-Formats library. The current release includes a single mechanism for accessing image data of all types, regardless of original file format, via Java, C/C++ and Python and a variety of applications and environments (e.g. ImageJ, Matlab and R).

  19. Java Library for Input and Output of Image Data and Metadata

    NASA Technical Reports Server (NTRS)

    Deen, Robert; Levoe, Steven

    2003-01-01

    A Java-language library supports input and output (I/O) of image data and metadata (label data) in the format of the Video Image Communication and Retrieval (VICAR) image-processing software and in several similar formats, including a subset of the Planetary Data System (PDS) image file format. The library does the following: It provides a low-level, direct-access layer, enabling an application subprogram to read and write specific image files, lines, or pixels, and to manipulate metadata directly. Two coding/decoding subprograms ("codecs" for short) based on the Java Advanced Imaging (JAI) software provide access to VICAR and PDS images in a file-format-independent manner. The VICAR and PDS codecs enable any program that conforms to the specification of the JAI codec to use VICAR or PDS images automatically, without specific knowledge of the VICAR or PDS format. The library also includes Image I/O plug-in subprograms for the VICAR and PDS formats. Application programs that conform to the Image I/O specification of Java version 1.4 can utilize any image format for which such a plug-in subprogram exists, without specific knowledge of the format itself. Like the aforementioned codecs, the VICAR and PDS Image I/O plug-in subprograms support reading and writing of metadata.

  20. Image tools for UNIX

    NASA Technical Reports Server (NTRS)

    Banks, David C.

    1994-01-01

    This talk features two simple and useful tools for digital image processing in the UNIX environment: xv and pbmplus. The xv image viewer, which runs under the X Window System, reads images in a number of different file formats and writes them out in different formats. The view area supports a pop-up control panel, and the 'algorithms' menu lets you blur an image. The xv control panel also activates the color editor, which displays the image's color map (if one exists). The xv image viewer is available through the Internet. The pbmplus package is a set of tools designed to perform image processing from within a UNIX shell; the acronym 'pbm' stands for portable bitmap. Like xv, the pbmplus tools can convert images to and from many different file formats. The source code and manual pages for pbmplus are also available through the Internet. This software is in the public domain.

  1. File format for normalizing radiological concentration exposure rate and dose rate data for the effects of radioactive decay and weathering processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kraus, Terrence D.

    2017-04-01

    This report specifies the electronic file format agreed upon for the normalized radiological data produced by the software tool developed under this TI project. The NA-84 Technology Integration (TI) Program project (SNL17-CM-635, Normalizing Radiological Data for Analysis and Integration into Models) investigators held a teleconference on December 7, 2017 to discuss the tasks to be completed under the TI program project. During this teleconference, the TI project investigators determined that the comma-separated values (CSV) file format is the most suitable file format for the normalized radiological data that will be output from the normalizing tool developed under this TI project. The CSV file format was selected because it provides the requisite flexibility to manage different types of radiological data (i.e., activity concentration, exposure rate, dose rate) from various sources [e.g., Radiological Assessment and Monitoring System (RAMS), Aerial Measuring System (AMS), monitoring and sampling]. The CSV file format is also suitable because the normalized data can then be ingested by other software [e.g., RAMS, Visual Sampling Plan (VSP)] used by NA-84's Consequence Management Program.
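
    The report excerpt does not reproduce the agreed column layout, but as a sketch of the general idea, writing normalized records to CSV with Python's standard library might look like this (all header names and values are hypothetical):

    ```python
    import csv

    # One normalized measurement per row; the schema shown here is invented.
    rows = [
        {"latitude": 35.0442, "longitude": -106.6729, "quantity": "exposure_rate",
         "value": 1.2e-2, "units": "R/h", "timestamp": "2017-04-01T12:00:00Z"},
    ]
    with open("normalized_radiological_data.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
    ```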

  2. 9 CFR 124.30 - Filing, format, and content of petitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... RESTORATION Due Diligence Petitions § 124.30 Filing, format, and content of petitions. (a) Any interested... diligence in seeking APHIS approval of the product during the regulatory review period. (b) The petition... subpart. (c) The petition must allege that the applicant failed to act with due diligence sometime during...

  3. Is HDF5 a Good Format to Replace UVFITS?

    NASA Astrophysics Data System (ADS)

    Price, D. C.; Barsdell, B. R.; Greenhill, L. J.

    2015-09-01

    The FITS (Flexible Image Transport System) data format was developed in the late 1970s for storage and exchange of astronomy-related image data. Since then, it has become a standard file format not only for images, but also for radio interferometer data (e.g. UVFITS, FITS-IDI). But is FITS the right format for next-generation telescopes to adopt? The newer Hierarchical Data Format (HDF5) offers considerable advantages over FITS, but has yet to gain widespread adoption within radio astronomy. One of the major obstacles is that HDF5 is not well supported by data reduction software packages. Here, we present a comparison of FITS, HDF5, and the MeasurementSet (MS) format for storage of interferometric data. In addition, we present a tool for converting between formats. We show that the underlying data model of FITS can be ported to HDF5, a first step toward achieving wider HDF5 support.
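
    As a toy illustration of porting the FITS data model to HDF5 (not the paper's converter), one can copy an HDU's data into an HDF5 dataset and its header cards into attributes, using astropy and h5py; the file names are hypothetical:

    ```python
    from astropy.io import fits
    import h5py

    with fits.open("uv_data.fits") as hdul, h5py.File("uv_data.h5", "w") as h5:
        hdu = hdul[0]
        dset = h5.create_dataset("PRIMARY", data=hdu.data)
        # FITS keyword/value pairs map naturally onto HDF5 attributes.
        for card in hdu.header.cards:
            if card.keyword not in ("", "COMMENT", "HISTORY") and card.value is not None:
                dset.attrs[card.keyword] = card.value
    ```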

  4. OMERO and Bio-Formats 5: flexible access to large bioimaging datasets at scale

    NASA Astrophysics Data System (ADS)

    Moore, Josh; Linkert, Melissa; Blackburn, Colin; Carroll, Mark; Ferguson, Richard K.; Flynn, Helen; Gillen, Kenneth; Leigh, Roger; Li, Simon; Lindner, Dominik; Moore, William J.; Patterson, Andrew J.; Pindelski, Blazej; Ramalingam, Balaji; Rozbicki, Emil; Tarkowska, Aleksandra; Walczysko, Petr; Allan, Chris; Burel, Jean-Marie; Swedlow, Jason

    2015-03-01

    The Open Microscopy Environment (OME) has built and released, under open source licenses, Bio-Formats, a Java-based proprietary file format conversion tool, and OMERO, an enterprise data management platform. In this report, we describe new versions of Bio-Formats and OMERO that are specifically designed to support large, multi-gigabyte or terabyte scale datasets that are routinely collected across most domains of biological and biomedical research. Bio-Formats reads image data directly from native proprietary formats, bypassing the need for conversion into a standard format. It implements the concept of a file set, a container that defines the contents of multi-dimensional data comprising many files. OMERO uses Bio-Formats to read files natively, and provides a flexible access mechanism that supports several different storage and access strategies. These new capabilities of OMERO and Bio-Formats make them especially useful in imaging applications like digital pathology, high content screening and light sheet microscopy that routinely create large datasets that must be managed and analyzed.

  5. 19 CFR 351.303 - Filing, document identification, format, translation, service, and certification of documents.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... submit a public version of a database in pdf format. The public version of the database must be publicly... interested party that files with the Department a request for an expedited antidumping review, an..., whichever is later. If the interested party that files the request is unable to locate a particular exporter...

  6. Incidence of Apical Crack Initiation during Canal Preparation using Hand Stainless Steel (K-File) and Hand NiTi (Protaper) Files.

    PubMed

    Soni, Dileep; Raisingani, Deepak; Mathur, Rachit; Madan, Nidha; Visnoi, Suchita

    2016-01-01

    To evaluate the incidence of apical crack initiation during canal preparation with stainless steel K-files and hand ProTaper files (in vitro study). Sixty extracted mandibular premolar teeth were randomly selected and embedded in an acrylic tube filled with autopolymerizing resin. A baseline image of the apical surface of each specimen was recorded under a digital microscope (80×). The cervical and middle thirds of all samples were flared with #2 and #1 Gates-Glidden (GG) drills, and a second image was recorded. The teeth were randomly divided into four groups of 15 teeth each according to the file type (hand K-file and hand ProTaper) and working length (WL) (instrumented at WL and 1 mm less than WL). A final image after dye penetration and a photomicrograph of the apical root surface were digitally recorded. The maximum number of cracks was observed with hand ProTaper files compared with hand K-files, at the WL and 1 mm short of WL. Chi-square testing revealed a highly significant effect of WL on crack formation at WL and 1 mm short of WL (p = 0.000). The minimum number of cracks at WL and 1 mm short of WL was observed with the hand K-file and the maximum with hand ProTaper files. Soni D, Raisingani D, Mathur R, Madan N, Visnoi S. Incidence of Apical Crack Initiation during Canal Preparation using Hand Stainless Steel (K-File) and Hand NiTi (Protaper) Files. Int J Clin Pediatr Dent 2016;9(4):303-307.

  7. Comparison of two methods of digital imaging technology for small diameter K-file length determination.

    PubMed

    Maryam, Ehsani; Farida, Abesi; Farhad, Akbarzade; Soraya, Khafri

    2013-11-01

    Obtaining the proper working length in endodontic treatment is essential. The aim of this study was to compare working length (WL) assessment of small-diameter K-files using two different digital imaging methods. The samples for this in-vitro experimental study consisted of 40 extracted single-rooted premolars. After access cavity preparation, ISO no. 6, 8, and 10 stainless steel K-files were inserted into the canals at three different lengths, evaluated in a blinded manner: at the level of the apical foramen (actual), 1 mm short of the apical foramen, and 2 mm short of the apical foramen. A digital caliper was used to measure the length of the files, which was considered the gold standard. Five observers (two oral and maxillofacial radiologists and three endodontists) examined the digital radiographs, which were obtained using PSP and CCD digital imaging sensors. The collected data were analyzed with SPSS 17 and repeated-measures paired t-tests. In WL assessment of small-diameter K-files, a statistically significant relationship was seen among the observers of the two digital imaging techniques (P<0.001). However, no significant difference was observed between the two digital techniques in WL assessment of small-diameter K-files (P>0.05). PSP and CCD digital imaging techniques were similar in WL assessment of canals using no. 6, 8, and 10 K-files.

  8. TM digital image products for applications. [computer compatible tapes

    NASA Technical Reports Server (NTRS)

    Barker, J. L.; Gunther, F. J.; Abrams, R. B.; Ball, D.

    1984-01-01

    The image characteristics of digital data generated by the LANDSAT 4 thematic mapper (TM) are discussed. Digital data from the TM reside in tape files at various stages of image processing. Within each image data file, the image lines are blocked by a factor of 5 for a computer compatible tape CCT-BT, or by a factor of 4 for a CCT-AT and CCT-PT; the image file has a different format in each case. Nominal geometric corrections, which provide proper geodetic relationships between different parts of the image, are available only for the CCT-PT. It is concluded that detector 3 of band 5 on the TM does not respond; this channel of data needs replacement. The empty-bin phenomenon in CCT-AT images results from integer truncation of mixed-mode arithmetic operations.

  9. Effect of reciprocating file motion on microcrack formation in root canals: an SEM study.

    PubMed

    Ashwinkumar, V; Krithikadatta, J; Surendran, S; Velmurugan, N

    2014-07-01

    To compare dentinal microcrack formation whilst using Ni-Ti hand K-files, ProTaper hand and rotary files and the WaveOne reciprocating file. One hundred and fifty mandibular first molars were selected. Thirty teeth were left unprepared and served as controls, and the remaining 120 teeth were divided into four groups. Ni-Ti hand K-files, ProTaper hand files, ProTaper rotary files and WaveOne Primary reciprocating files were used to prepare the mesial canals. Roots were then sectioned 3, 6 and 9 mm from the apex, and the cut surface was observed under scanning electron microscope (SEM) and checked for the presence of dentinal microcracks. The control and Ni-Ti hand K-files groups were not associated with microcracks. In roots prepared with ProTaper hand files, ProTaper rotary files and WaveOne Primary reciprocating files, dentinal microcracks were present. There was a significant difference between control/Ni-Ti hand K-files group and ProTaper hand files/ProTaper rotary files/WaveOne Primary reciprocating file group (P < 0.001) with ProTaper rotary files producing the most microcracks. No significant difference was observed between teeth prepared with ProTaper hand files and WaveOne Primary reciprocating files. ProTaper rotary files were associated with significantly more microcracks than ProTaper hand files and WaveOne Primary reciprocating files. Ni-Ti hand K-files did not produce microcracks at any levels inside the root canals. © 2013 International Endodontic Journal. Published by John Wiley & Sons Ltd.

  10. Incidence of Apical Crack Initiation during Canal Preparation using Hand Stainless Steel (K-File) and Hand NiTi (Protaper) Files

    PubMed Central

    Raisingani, Deepak; Mathur, Rachit; Madan, Nidha; Visnoi, Suchita

    2016-01-01

    Aim To evaluate the incidence of apical crack initiation during canal preparation with stainless steel K-files and hand ProTaper files (in vitro study). Materials and methods Sixty extracted mandibular premolar teeth were randomly selected and embedded in an acrylic tube filled with autopolymerizing resin. A baseline image of the apical surface of each specimen was recorded under a digital microscope (80×). The cervical and middle thirds of all samples were flared with #2 and #1 Gates-Glidden (GG) drills, and a second image was recorded. The teeth were randomly divided into four groups of 15 teeth each according to the file type (hand K-file and hand ProTaper) and working length (WL) (instrumented at WL and 1 mm less than WL). A final image after dye penetration and a photomicrograph of the apical root surface were digitally recorded. Results The maximum number of cracks was observed with hand ProTaper files compared with hand K-files, at the WL and 1 mm short of WL. Chi-square testing revealed a highly significant effect of WL on crack formation at WL and 1 mm short of WL (p = 0.000). Conclusion The minimum number of cracks at WL and 1 mm short of WL was observed with the hand K-file and the maximum with hand ProTaper files. How to cite this article Soni D, Raisingani D, Mathur R, Madan N, Visnoi S. Incidence of Apical Crack Initiation during Canal Preparation using Hand Stainless Steel (K-File) and Hand NiTi (Protaper) Files. Int J Clin Pediatr Dent 2016;9(4):303-307. PMID:28127160

  11. Covariance Data File Formats for Whisper-1.0 & Whisper-1.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.; Rising, Michael Evan

    2017-01-09

    Whisper is a statistical analysis package developed in 2014 to support nuclear criticality safety (NCS) validation. It uses the sensitivity profile data for an application, as computed by MCNP6, along with covariance files for the nuclear data to determine a baseline upper subcritical limit (USL) for the application. Whisper version 1.0 was first developed and used at LANL in 2014. During 2015-2016, Whisper was updated to version 1.1 and is to be included with the upcoming release of MCNP6.2. This report describes the file formats used for the covariance data in both Whisper-1.0 and Whisper-1.1.

  12. Cambio : a file format translation and analysis application for the nuclear response emergency community.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lasche, George P.

    2009-10-01

    Cambio is an application intended to automatically read and display any spectrum file, of any format in the world, that the nuclear emergency response community might encounter. Cambio also provides an analysis capability suitable for HPGe spectra when the detector response and scattering environment are not well known. Why is Cambio needed? (1) Cambio solves the following problem: with over 50 types of formats from instruments used in the field and new format variations appearing frequently, it is impractical for every responder to have current versions of the manufacturer's software for every instrument used in the field. (2) Cambio converts field spectra to any one of several common formats that are used for analysis, saving valuable time in an emergency situation. (3) Cambio provides basic tools for comparing spectra, calibrating spectra, and isotope identification, with analysis suited especially to HPGe spectra. (4) Cambio has a batch processing capability to automatically translate a large number of archival spectral files of any format to one of several common formats, such as the IAEA SPE or the DHS N42. Currently over 540 analysts and members of the nuclear emergency response community worldwide are on the distribution list for updates to Cambio. Cambio users come from all levels of government, university, and commercial partners around the world that support efforts to counter terrorist nuclear activities. Cambio is Unclassified Unlimited Release (UUR) and distributed by internet download, with email notifications whenever a new build of Cambio provides new formats, bug fixes, or new or improved capabilities. Cambio is also provided as a DLL to the Karlsruhe Institute for Transuranium Elements so that Cambio's automatic file-reading capability can be included at the Nucleonica web site.

  13. Simplified generation of biomedical 3D surface model data for embedding into 3D portable document format (PDF) files for publication and education.

    PubMed

    Newe, Axel; Ganslandt, Thomas

    2013-01-01

    The usefulness of the 3D Portable Document Format (PDF) for clinical, educational, and research purposes has recently been shown. However, the lack of a simple tool for converting biomedical data into the model data in the necessary Universal 3D (U3D) file format is a drawback for the broad acceptance of this new technology. A new module for the image processing and rapid prototyping framework MeVisLab not only provides a platform-independent way to create surface meshes from biomedical/DICOM and other data and to export them to U3D; it also lets the user add metadata to these meshes to predefine colors and names that can be processed by PDF authoring software while generating 3D PDF files. Furthermore, the source code of the module is available and well documented, so that it can easily be modified for one's own purposes.

  14. GEWEX-RFA Data File Format and File Naming Convention

    Atmospheric Science Data Center

    2016-05-20

    ... documentation, will be stored for each data product. Each time data is added to, removed from, or modified in the file set for a product, ... including 29 days in leap-year Februaries. Time series files containing 15-minute data should start at the top of an hour to ...

  15. Painless File Extraction: The A(rc)--Z(oo) of Internet Archive Formats.

    ERIC Educational Resources Information Center

    Simmonds, Curtis

    1993-01-01

    Discusses extraction programs needed to postprocess software downloaded from the Internet that has been archived and compressed for the purposes of storage and file transfer. Archiving formats for DOS, Macintosh, and UNIX operating systems are described; and cross-platform compression utilities are explained. (LRW)

  16. ISMRM Raw data format: A proposed standard for MRI raw datasets.

    PubMed

    Inati, Souheil J; Naegele, Joseph D; Zwart, Nicholas R; Roopchansingh, Vinai; Lizak, Martin J; Hansen, David C; Liu, Chia-Ying; Atkinson, David; Kellman, Peter; Kozerke, Sebastian; Xue, Hui; Campbell-Washburn, Adrienne E; Sørensen, Thomas S; Hansen, Michael S

    2017-01-01

    This work proposes the ISMRM Raw Data format as a common MR raw data format, which promotes algorithm and data sharing. A file format consisting of a flexible header and tagged frames of k-space data was designed. Application Programming Interfaces were implemented in C/C++, MATLAB, and Python. Converters for Bruker, General Electric, Philips, and Siemens proprietary file formats were implemented in C++. Raw data were collected using magnetic resonance imaging scanners from four vendors, converted to ISMRM Raw Data format, and reconstructed using software implemented in three programming languages (C++, MATLAB, Python). Images were obtained by reconstructing the raw data from all vendors. The source code, raw data, and images comprising this work are shared online, serving as an example of an image reconstruction project following a paradigm of reproducible research. The proposed raw data format solves a practical problem for the magnetic resonance imaging community. It may serve as a foundation for reproducible research and collaborations. The ISMRM Raw Data format is a completely open and community-driven format, and the scientific community, including commercial vendors, is invited to participate either as users or as developers. Magn Reson Med 77:411-421, 2017. © 2016 Wiley Periodicals, Inc.
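
    As a sketch of what the Python API looks like in practice, based on the ismrmrd-python package (treat the exact calls and the file name as assumptions to check against the package documentation):

    ```python
    import ismrmrd

    # Open an ISMRM Raw Data file (HDF5-based) and walk its acquisitions.
    dset = ismrmrd.Dataset("scan.h5", "dataset", create_if_needed=False)
    header_xml = dset.read_xml_header()        # the flexible header, as XML
    for i in range(dset.number_of_acquisitions()):
        acq = dset.read_acquisition(i)         # one tagged frame of k-space data
        print(i, acq.data.shape)               # coils x samples, complex values
    ```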

  17. ImageJ: Image processing and analysis in Java

    NASA Astrophysics Data System (ADS)

    Rasband, W. S.

    2012-06-01

    ImageJ is a public domain Java image processing program inspired by NIH Image. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations.

  18. Image Viewer using Digital Imaging and Communications in Medicine (DICOM)

    NASA Astrophysics Data System (ADS)

    Baraskar, Trupti N.

    2010-11-01

    Digital Imaging and Communications in Medicine (DICOM) is a standard for handling, storing, printing, and transmitting information in medical imaging. The National Electrical Manufacturers Association holds the copyright to this standard, which was developed by the DICOM Standards Committee. Other image viewers cannot store the image details together with the patient's information, so the image may become separated from those details; the DICOM file format stores both the patient's information and the image details. The main objective is to develop a DICOM image viewer. The image viewer opens .dcm (DICOM) image files and also offers additional features such as zoom in, zoom out, magnifier, blur, black-and-white inversion, horizontal and vertical flipping, sharpening, contrast, brightness, and .gif conversion.
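
    The core of such a viewer reduces to a few lines; here is a hedged stand-in using pydicom and matplotlib (not the author's implementation; the file name is hypothetical), including the black-and-white inversion feature:

    ```python
    import pydicom
    import matplotlib.pyplot as plt

    ds = pydicom.dcmread("study.dcm")
    img = ds.pixel_array                  # the image details
    inverted = img.max() - img            # black-and-white inversion
    plt.imshow(inverted, cmap="gray")
    plt.title(str(ds.PatientName))        # patient information travels with the image
    plt.show()
    ```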

  19. File Management In Space

    NASA Technical Reports Server (NTRS)

    Critchfield, Anna R.; Zepp, Robert H.

    2000-01-01

    We propose that the user interact with the spacecraft as if the spacecraft were a file server, so that the user can select and receive data as files in standard formats (e.g., tables, or images such as JPEG) via the Internet. Internet technology will be used end-to-end, from the spacecraft to authorized users such as the flight operations team and project scientists. The proposed solution includes a ground system and spacecraft architecture, mission operations scenarios, and an implementation roadmap showing migration from current practice to a future where distributed users request and receive files of spacecraft data from archives or spacecraft with equal ease. This solution will give ground support personnel and scientists easy, direct, secure access to their authorized data without cumbersome processing, and can be extended to support autonomous communications with the spacecraft.

  20. Theory of Remote Image Formation

    NASA Astrophysics Data System (ADS)

    Blahut, Richard E.

    2004-11-01

    In many applications, images, such as ultrasonic or X-ray signals, are recorded and then analyzed with digital or optical processors in order to extract information. Such processing requires the development of algorithms of great precision and sophistication. This book presents a unified treatment of the mathematical methods that underpin the various algorithms used in remote image formation. The author begins with a review of transform and filter theory. He then discusses two- and three-dimensional Fourier transform theory, the ambiguity function, image construction and reconstruction, tomography, baseband surveillance systems, and passive systems (where the signal source might be an earthquake or a galaxy). Information-theoretic methods in image formation are also covered, as are phase errors and phase noise. Throughout the book, practical applications illustrate theoretical concepts, and there are many homework problems. The book is aimed at graduate students of electrical engineering and computer science, and practitioners in industry. It presents a unified treatment of the mathematical methods that underpin the algorithms used in remote image formation; illustrates theoretical concepts with reference to practical applications; and provides insights into the design parameters of real systems.

  1. The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments.

    PubMed

    Gorgolewski, Krzysztof J; Auer, Tibor; Calhoun, Vince D; Craddock, R Cameron; Das, Samir; Duff, Eugene P; Flandin, Guillaume; Ghosh, Satrajit S; Glatard, Tristan; Halchenko, Yaroslav O; Handwerker, Daniel A; Hanke, Michael; Keator, David; Li, Xiangrui; Michael, Zachary; Maumet, Camille; Nichols, B Nolan; Nichols, Thomas E; Pellman, John; Poline, Jean-Baptiste; Rokem, Ariel; Schaefer, Gunnar; Sochat, Vanessa; Triplett, William; Turner, Jessica A; Varoquaux, Gaël; Poldrack, Russell A

    2016-06-21

    The development of magnetic resonance imaging (MRI) techniques has defined modern neuroimaging. Since its inception, tens of thousands of studies using techniques such as functional MRI and diffusion weighted imaging have allowed for the non-invasive study of the brain. Despite the fact that MRI is routinely used to obtain data for neuroscience research, there has been no widely adopted standard for organizing and describing the data collected in an imaging experiment. This renders sharing and reusing data (within or between labs) difficult if not impossible and unnecessarily complicates the application of automatic pipelines and quality assurance protocols. To solve this problem, we have developed the Brain Imaging Data Structure (BIDS), a standard for organizing and describing MRI datasets. The BIDS standard uses file formats compatible with existing software, unifies the majority of practices already common in the field, and captures the metadata necessary for most common data processing operations.
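
    To give a flavor of the standard, here is a minimal sketch of a BIDS-style layout for one subject, built with the Python standard library. The entity prefixes (sub-, task-) follow the BIDS specification; the study name and metadata values are invented:

    ```python
    from pathlib import Path

    root = Path("my_study")
    anat = root / "sub-01" / "anat"
    func = root / "sub-01" / "func"
    for d in (anat, func):
        d.mkdir(parents=True, exist_ok=True)

    # Each imaging file pairs with a JSON sidecar carrying acquisition metadata.
    (anat / "sub-01_T1w.nii.gz").touch()
    (anat / "sub-01_T1w.json").write_text('{"MagneticFieldStrength": 3}')
    (func / "sub-01_task-rest_bold.nii.gz").touch()
    ```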

  2. COMBINE archive and OMEX format: one file to share all information to reproduce a modeling project.

    PubMed

    Bergmann, Frank T; Adams, Richard; Moodie, Stuart; Cooper, Jonathan; Glont, Mihai; Golebiewski, Martin; Hucka, Michael; Laibe, Camille; Miller, Andrew K; Nickerson, David P; Olivier, Brett G; Rodriguez, Nicolas; Sauro, Herbert M; Scharm, Martin; Soiland-Reyes, Stian; Waltemath, Dagmar; Yvon, Florent; Le Novère, Nicolas

    2014-12-14

    With the ever increasing use of computational models in the biosciences, the need to share models and reproduce the results of published studies efficiently and easily is becoming more important. To this end, various standards have been proposed that can be used to describe models, simulations, data or other essential information in a consistent fashion. These constitute various separate components required to reproduce a given published scientific result. We describe the Open Modeling EXchange format (OMEX). Together with the use of other standard formats from the Computational Modeling in Biology Network (COMBINE), OMEX is the basis of the COMBINE Archive, a single file that supports the exchange of all the information necessary for a modeling and simulation experiment in biology. An OMEX file is a ZIP container that includes a manifest file, listing the content of the archive, an optional metadata file adding information about the archive and its content, and the files describing the model. The content of a COMBINE Archive consists of files encoded in COMBINE standards whenever possible, but may include additional files defined by an Internet Media Type. Several tools that support the COMBINE Archive are available, either as independent libraries or embedded in modeling software. The COMBINE Archive facilitates the reproduction of modeling and simulation experiments in biology by embedding all the relevant information in one file. Having all the information stored and exchanged at once also helps in building activity logs and audit trails. We anticipate that the COMBINE Archive will become a significant help for modellers, as the domain moves to larger, more complex experiments such as multi-scale models of organs, digital organisms, and bioengineering.
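
    A hedged sketch of assembling such an archive with Python's zipfile module (the manifest follows the OMEX schema as described above; the model file and format URIs are illustrative):

    ```python
    import zipfile

    MANIFEST = """<?xml version="1.0" encoding="UTF-8"?>
    <omexManifest xmlns="http://identifiers.org/combine.specifications/omex-manifest">
      <content location="." format="http://identifiers.org/combine.specifications/omex"/>
      <content location="./model.xml"
               format="http://identifiers.org/combine.specifications/sbml"/>
    </omexManifest>"""

    with zipfile.ZipFile("experiment.omex", "w") as omex:
        omex.writestr("manifest.xml", MANIFEST)   # lists the archive's contents
        omex.write("model.xml")                   # an SBML model assumed to exist locally
    ```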

  3. Desktop document delivery using portable document format (PDF) files and the Web.

    PubMed Central

    Shipman, J P; Gembala, W L; Reeder, J M; Zick, B A; Rainwater, M J

    1998-01-01

    Desktop access to electronic full-text literature was rated one of the most desirable services in a client survey conducted by the University of Washington Libraries. The University of Washington Health Sciences Libraries (UW HSL) conducted a ten-month pilot test from August 1996 to May 1997 to determine the feasibility of delivering electronic journal articles via the Internet to remote faculty. Articles were scanned into Adobe Acrobat Portable Document Format (PDF) files and delivered to individuals using Multipurpose Internet Mail Extensions (MIME) standard e-mail attachments and the Web. Participants retrieved scanned articles and used the Adobe Acrobat Reader software to view and print files. The pilot test required a special programming effort to automate the client notification and file deletion processes. Test participants were satisfied with the pilot test despite some technical difficulties. Desktop delivery is now offered as a routine delivery method from the UW HSL. PMID:9681165

  4. Desktop document delivery using portable document format (PDF) files and the Web.

    PubMed

    Shipman, J P; Gembala, W L; Reeder, J M; Zick, B A; Rainwater, M J

    1998-07-01

    Desktop access to electronic full-text literature was rated one of the most desirable services in a client survey conducted by the University of Washington Libraries. The University of Washington Health Sciences Libraries (UW HSL) conducted a ten-month pilot test from August 1996 to May 1997 to determine the feasibility of delivering electronic journal articles via the Internet to remote faculty. Articles were scanned into Adobe Acrobat Portable Document Format (PDF) files and delivered to individuals using Multipurpose Internet Mail Extensions (MIME) standard e-mail attachments and the Web. Participants retrieved scanned articles and used the Adobe Acrobat Reader software to view and print files. The pilot test required a special programming effort to automate the client notification and file deletion processes. Test participants were satisfied with the pilot test despite some technical difficulties. Desktop delivery is now offered as a routine delivery method from the UW HSL.

  5. Multistatic synthetic aperture radar image formation.

    PubMed

    Krishnan, V; Swoboda, J; Yarman, C E; Yazici, B

    2010-05-01

    In this paper, we consider a multistatic synthetic aperture radar (SAR) imaging scenario where a swarm of airborne antennas, some of which are transmitting, receiving or both, traverse arbitrary flight trajectories and transmit arbitrary waveforms without any form of multiplexing. The received signal at each receiving antenna may suffer interference from the signals scattered due to multiple transmitters and from additive thermal noise at the receiver. In this scenario, standard bistatic SAR image reconstruction algorithms produce artifacts in the reconstructed images due to these interferences. In this paper, we use microlocal analysis in a statistical setting to develop a filtered-backprojection (FBP) type analytic image formation method that suppresses artifacts due to interference while preserving the location and orientation of edges of the scene in the reconstructed image. Our FBP-type algorithm exploits the second-order statistics of the target and noise to suppress the artifacts due to interference in a mean-square sense. We present numerical simulations to compare the performance of our multistatic SAR image formation algorithm with the FBP-type bistatic SAR image reconstruction algorithm. While we mainly focus on radar applications, our image formation method is also applicable to other problems arising in fields such as acoustic, geophysical and medical imaging.

  6. Java Image I/O for VICAR, PDS, and ISIS

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.; Levoe, Steven R.

    2011-01-01

    This library, written in Java, supports input and output of images and metadata (labels) in the VICAR, PDS image, and ISIS-2 and ISIS-3 file formats. Three levels of access exist. The first level comprises the low-level, direct access to the file. This allows an application to read and write specific image tiles, lines, or pixels and to manipulate the label data directly. This layer is analogous to the C-language "VICAR Run-Time Library" (RTL), which is the image I/O library for the (C/C++/Fortran) VICAR image processing system from JPL MIPL (Multimission Image Processing Lab). This low-level library can also be used to read and write labeled, uncompressed images stored in formats similar to VICAR, such as ISIS-2 and -3, and a subset of PDS (image format). The second level of access involves two codecs based on Java Advanced Imaging (JAI) to provide access to VICAR and PDS images in a file-format-independent manner. JAI is supplied by Sun Microsystems as an extension to desktop Java, and has a number of codecs for formats such as GIF, TIFF, JPEG, etc. Although Sun has deprecated the codec mechanism (replaced by IIO), it is still used in many places. The VICAR and PDS codecs allow any program written using the JAI codec spec to use VICAR or PDS images automatically, with no specific knowledge of the VICAR or PDS formats. Support for metadata (labels) is included, but is format-dependent. The PDS codec, when processing PDS images with an embedded VIAR label ("dual-labeled images," such as used for MER), presents the VICAR label in a new way that is compatible with the VICAR codec. The third level of access involves VICAR, PDS, and ISIS Image I/O plugins. The Java core includes an "Image I/O" (IIO) package that is similar in concept to the JAI codec, but is newer and more capable. Applications written to the IIO specification can use any image format for which a plug-in exists, with no specific knowledge of the format itself.

  7. Informatics in radiology (infoRAD): Vendor-neutral case input into a server-based digital teaching file system.

    PubMed

    Kamauu, Aaron W C; DuVall, Scott L; Robison, Reid J; Liimatta, Andrew P; Wiggins, Richard H; Avrin, David E

    2006-01-01

    Although digital teaching files are important to radiology education, there are currently no satisfactory solutions for exporting Digital Imaging and Communications in Medicine (DICOM) images from picture archiving and communication systems (PACS) in a desktop publishing format. A vendor-neutral digital teaching file, the Radiology Interesting Case Server (RadICS), offers an efficient tool for harvesting interesting cases from PACS without requiring modification of the PACS configuration. Radiologists push imaging studies from PACS to RadICS via the standard DICOM Send process, and the RadICS server automatically converts the DICOM images into the Joint Photographic Experts Group (JPEG) format, a common desktop publishing format. They can then select key images and create an interesting case series at the PACS workstation. RadICS was tested successfully against multiple unmodified commercial PACS. Using RadICS, radiologists are able to harvest and author interesting cases at the point of clinical interpretation with minimal disruption to clinical workflow. RSNA, 2006

  8. The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments

    PubMed Central

    Gorgolewski, Krzysztof J.; Auer, Tibor; Calhoun, Vince D.; Craddock, R. Cameron; Das, Samir; Duff, Eugene P.; Flandin, Guillaume; Ghosh, Satrajit S.; Glatard, Tristan; Halchenko, Yaroslav O.; Handwerker, Daniel A.; Hanke, Michael; Keator, David; Li, Xiangrui; Michael, Zachary; Maumet, Camille; Nichols, B. Nolan; Nichols, Thomas E.; Pellman, John; Poline, Jean-Baptiste; Rokem, Ariel; Schaefer, Gunnar; Sochat, Vanessa; Triplett, William; Turner, Jessica A.; Varoquaux, Gaël; Poldrack, Russell A.

    2016-01-01

    The development of magnetic resonance imaging (MRI) techniques has defined modern neuroimaging. Since its inception, tens of thousands of studies using techniques such as functional MRI and diffusion weighted imaging have allowed for the non-invasive study of the brain. Despite the fact that MRI is routinely used to obtain data for neuroscience research, there has been no widely adopted standard for organizing and describing the data collected in an imaging experiment. This renders sharing and reusing data (within or between labs) difficult if not impossible and unnecessarily complicates the application of automatic pipelines and quality assurance protocols. To solve this problem, we have developed the Brain Imaging Data Structure (BIDS), a standard for organizing and describing MRI datasets. The BIDS standard uses file formats compatible with existing software, unifies the majority of practices already common in the field, and captures the metadata necessary for most common data processing operations. PMID:27326542

  9. ISMRM Raw Data Format: A Proposed Standard for MRI Raw Datasets

    PubMed Central

    Inati, Souheil J.; Naegele, Joseph D.; Zwart, Nicholas R.; Roopchansingh, Vinai; Lizak, Martin J.; Hansen, David C.; Liu, Chia-Ying; Atkinson, David; Kellman, Peter; Kozerke, Sebastian; Xue, Hui; Campbell-Washburn, Adrienne E.; Sørensen, Thomas S.; Hansen, Michael S.

    2015-01-01

    Purpose This work proposes the ISMRM Raw Data (ISMRMRD) format as a common MR raw data format, which promotes algorithm and data sharing. Methods A file format consisting of a flexible header and tagged frames of k-space data was designed. Application Programming Interfaces were implemented in C/C++, MATLAB, and Python. Converters for Bruker, General Electric, Philips, and Siemens proprietary file formats were implemented in C++. Raw data were collected using MRI scanners from four vendors, converted to ISMRMRD format, and reconstructed using software implemented in three programming languages (C++, MATLAB, Python). Results Images were obtained by reconstructing the raw data from all vendors. The source code, raw data, and images comprising this work are shared online, serving as an example of an image reconstruction project following a paradigm of reproducible research. Conclusion The proposed raw data format solves a practical problem for the MRI community. It may serve as a foundation for reproducible research and collaborations. The ISMRMRD format is a completely open and community-driven format, and the scientific community is invited (including commercial vendors) to participate either as users or developers. PMID:26822475

  10. Simbol-X Formation Flight and Image Reconstruction

    NASA Astrophysics Data System (ADS)

    Civitani, M.; Djalal, S.; Le Duigou, J. M.; La Marle, O.; Chipaux, R.

    2009-05-01

    Simbol-X is the first operational mission relying on two satellites flying in formation. The dynamics of the telescope, due to the formation flight concept, raises a variety of problems, such as image reconstruction, that can be better evaluated via simulation tools. We present here the first results obtained with Simulos, a simulation tool aimed at studying the relative navigation of the spacecraft and the weight of the different parameters in image reconstruction and telescope performance evaluation. The simulation relies on attitude and formation flight sensor models, formation flight dynamics and control, a mirror model and a focal plane model, while the image reconstruction is based on the Line of Sight (LOS) concept.

  11. Embedding and Publishing Interactive, 3-Dimensional, Scientific Figures in Portable Document Format (PDF) Files

    PubMed Central

    Barnes, David G.; Vidiassov, Michail; Ruthensteiner, Bernhard; Fluke, Christopher J.; Quayle, Michelle R.; McHenry, Colin R.

    2013-01-01

    With the latest release of the S2PLOT graphics library, embedding interactive, 3-dimensional (3-d) scientific figures in Adobe Portable Document Format (PDF) files is simple, and can be accomplished without commercial software. In this paper, we motivate the need for embedding 3-d figures in scholarly articles. We explain how 3-d figures can be created using the S2PLOT graphics library, exported to Product Representation Compact (PRC) format, and included as fully interactive, 3-d figures in PDF files using the movie15 LaTeX package. We present new examples of 3-d PDF figures, explain how they have been made, validate them, and comment on their advantages over traditional, static 2-dimensional (2-d) figures. With the judicious use of 3-d rather than 2-d figures, scientists can now publish, share and archive more useful, flexible and faithful representations of their study outcomes. The article you are reading does not have embedded 3-d figures. The full paper, with embedded 3-d figures, is recommended and is available as a supplementary download from PLoS ONE (File S2). PMID:24086243

  12. Embedding and publishing interactive, 3-dimensional, scientific figures in Portable Document Format (PDF) files.

    PubMed

    Barnes, David G; Vidiassov, Michail; Ruthensteiner, Bernhard; Fluke, Christopher J; Quayle, Michelle R; McHenry, Colin R

    2013-01-01

    With the latest release of the S2PLOT graphics library, embedding interactive, 3-dimensional (3-d) scientific figures in Adobe Portable Document Format (PDF) files is simple, and can be accomplished without commercial software. In this paper, we motivate the need for embedding 3-d figures in scholarly articles. We explain how 3-d figures can be created using the S2PLOT graphics library, exported to Product Representation Compact (PRC) format, and included as fully interactive, 3-d figures in PDF files using the movie15 LaTeX package. We present new examples of 3-d PDF figures, explain how they have been made, validate them, and comment on their advantages over traditional, static 2-dimensional (2-d) figures. With the judicious use of 3-d rather than 2-d figures, scientists can now publish, share and archive more useful, flexible and faithful representations of their study outcomes. The article you are reading does not have embedded 3-d figures. The full paper, with embedded 3-d figures, is recommended and is available as a supplementary download from PLoS ONE (File S2).

  13. Lossless data embedding for all image formats

    NASA Astrophysics Data System (ADS)

    Fridrich, Jessica; Goljan, Miroslav; Du, Rui

    2002-04-01

    Lossless data embedding has the property that the distortion due to embedding can be completely removed from the watermarked image without accessing any side channel. This can be a very important property whenever serious concerns over image quality and artifact visibility arise, such as for medical images, for military images, or, for legal reasons, for images used as evidence in court that may be viewed after enhancement and zooming. We formulate two general methodologies for lossless embedding that can be applied to images as well as any other digital objects, including video, audio, and other structures with redundancy. We use the general principles as guidelines for designing efficient, simple, and high-capacity lossless embedding methods for the three most common image format paradigms: raw, uncompressed formats (BMP); lossy or transform formats (JPEG); and palette formats (GIF, PNG). We close the paper with examples of how the concept of lossless data embedding can be used as a powerful tool to achieve a variety of non-trivial tasks, including elegant lossless authentication using fragile watermarks. Note on terminology: some authors coined the terms erasable, removable, reversible, invertible, and distortion-free for the same concept.
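    As a toy illustration of one of the general principles named above (making room for the payload by losslessly compressing a redundant part of the image), here is a sketch for a raw 8-bit grayscale image whose pixel count is a multiple of 8; it is not the authors' algorithm, only the compress-a-bit-plane idea, assuming numpy and zlib:

        import zlib
        import numpy as np

        def embed(img, payload):
            # Losslessly compress the least-significant-bit plane to make room.
            packed = zlib.compress(np.packbits(img & 1).tobytes())
            room = img.size // 8 - len(packed) - 4
            if len(payload) > room:
                raise ValueError("payload too large for this image")
            # New LSB stream: [4-byte length][compressed original LSBs][payload]
            stream = len(packed).to_bytes(4, "big") + packed + payload
            stream = stream.ljust(img.size // 8, b"\x00")
            bits = np.unpackbits(np.frombuffer(stream, dtype=np.uint8))
            return (img & 0xFE) | bits.reshape(img.shape)

        def extract(marked):
            # Recover the payload and restore the original image exactly.
            stream = np.packbits(marked & 1).tobytes()
            n = int.from_bytes(stream[:4], "big")
            lsb = np.unpackbits(
                np.frombuffer(zlib.decompress(stream[4:4 + n]), dtype=np.uint8))
            original = (marked & 0xFE) | lsb.reshape(marked.shape)
            # The tail holds the payload plus zero padding; a real scheme
            # would also record the payload length.
            return stream[4 + n:], original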

  14. 37 CFR 1.615 - Format of papers filed in a supplemental examination proceeding.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2013-07-01 2013-07-01 false Format of papers filed in a supplemental examination proceeding. 1.615 Section 1.615 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES...

  15. 37 CFR 1.615 - Format of papers filed in a supplemental examination proceeding.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2014-07-01 2014-07-01 false Format of papers filed in a supplemental examination proceeding. 1.615 Section 1.615 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES...

  16. 46 CFR 67.218 - Optional filing of instruments in portable document format as attachments to electronic mail.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... recording under § 67.200 may be submitted in portable document format (.pdf) as an attachment to electronic... submitted for filing in .pdf format pertains to a vessel that is not a currently documented vessel, a... with the National Vessel Documentation Center or must be submitted in .pdf format with the instrument...

  17. BOREAS RSS-14 Level-1a GOES-8 Visible, IR and Water Vapor Images

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Newcomer, Jeffrey A.; Faysash, David; Cooper, Harry J.; Smith, Eric A.

    2000-01-01

    The BOREAS RSS-14 team collected and processed several GOES-7 and GOES-8 image data sets that covered the BOREAS study region. The level-1a GOES-8 images were created by BORIS personnel from the level-1 images delivered by FSU personnel. The data cover 14-Jul-1995 to 21-Sep-1995 and 12-Feb-1996 to 03-Oct-1996. The data start out as three bands with 8-bit pixel values and end up as five bands with 10-bit pixel values. No major problems with the data have been identified. The differences between the level-1 and level-1a GOES-8 data are the formatting and packaging of the data. The images missing from the temporal series of level-1 GOES-8 images were zero-filled by BORIS staff to create files consistent in size and format. In addition, BORIS staff packaged all the images of a given type from a given day into a single file, removed the header information from the individual level-1 files, and placed it into a single descriptive ASCII header file. The data are contained in binary image format files. Due to the large size of the images, the level-1a GOES-8 data are not contained on the BOREAS CD-ROM set. An inventory listing file is supplied on the CD-ROM to inform users of what data were collected. The level-1a GOES-8 image data are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). See sections 15 and 16 for more information. The data files are available on a CD-ROM (see document number 20010000884).

  18. SCIFIO: an extensible framework to support scientific image formats.

    PubMed

    Hiner, Mark C; Rueden, Curtis T; Eliceiri, Kevin W

    2016-12-07

    No gold standard exists in the world of scientific image acquisition; a proliferation of instruments each with its own proprietary data format has made out-of-the-box sharing of that data nearly impossible. In the field of light microscopy, the Bio-Formats library was designed to translate such proprietary data formats to a common, open-source schema, enabling sharing and reproduction of scientific results. While Bio-Formats has proved successful for microscopy images, the greater scientific community was lacking a domain-independent framework for format translation. SCIFIO (SCientific Image Format Input and Output) is presented as a freely available, open-source library unifying the mechanisms of reading and writing image data. The core of SCIFIO is its modular definition of formats, the design of which clearly outlines the components of image I/O to encourage extensibility, facilitated by the dynamic discovery of the SciJava plugin framework. SCIFIO is structured to support coexistence of multiple domain-specific open exchange formats, such as Bio-Formats' OME-TIFF, within a unified environment. SCIFIO is a freely available software library developed to standardize the process of reading and writing scientific image formats.

  19. Highway Safety Information System guidebook for the Minnesota state data files. Volume 1 : SAS file formats

    DOT National Transportation Integrated Search

    2001-02-01

    The Minnesota data system includes the following basic files: Accident data (Accident File, Vehicle File, Occupant File); Roadlog File; Reference Post File; Traffic File; Intersection File; Bridge (Structures) File; and RR Grade Crossing File. For ea...

  20. PySE: Python Source Extractor for radio astronomical images

    NASA Astrophysics Data System (ADS)

    Spreeuw, Hanno; Swinbank, John; Molenaar, Gijs; Staley, Tim; Rol, Evert; Sanders, John; Scheers, Bart; Kuiack, Mark

    2018-05-01

    PySE finds and measures sources in radio telescope images. It is run with several options, such as the detection threshold (a multiple of the local noise), grid size, and the forced clean beam fit, followed by a list of input image files in standard FITS or CASA format. From these, PySE produces a list of detected sources; information such as the calculated background image, source lists in different formats (e.g., text, or region files importable in DS9), and other data may be saved. PySE can be integrated into a pipeline; it was originally written as part of the LOFAR Transient Detection Pipeline (TraP, ascl:1412.011).

  1. New Powder Diffraction File (PDF-4) in relational database format: advantages and data-mining capabilities.

    PubMed

    Kabekkodu, Soorya N; Faber, John; Fawcett, Tim

    2002-06-01

    The International Centre for Diffraction Data (ICDD) is responding to the changing needs in powder diffraction and materials analysis by developing the Powder Diffraction File (PDF) in a very flexible relational database (RDB) format. The PDF now contains 136,895 powder diffraction patterns. In this paper, an attempt is made to give an overview of the PDF-4, search/match methods and the advantages of having the PDF-4 in RDB format. Some case studies have been carried out to search for crystallization trends, properties, frequencies of space groups and prototype structures. These studies give a good understanding of the basic structural aspects of classes of compounds present in the database. The present paper also reports data-mining techniques and demonstrates the power of a relational database over the traditional (flat-file) database structures.
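    As a hypothetical illustration of the kind of query a relational layout makes trivial (the table and column names are invented for this sketch; the actual PDF-4 schema differs), using Python's built-in sqlite3 module against an assumed local extract:

        import sqlite3

        con = sqlite3.connect("pdf4_extract.db")  # hypothetical local extract
        # Frequency of space groups within one class of compounds -- awkward
        # against a flat file, a one-liner against a relational database.
        rows = con.execute(
            """
            SELECT space_group, COUNT(*) AS n
            FROM entries
            WHERE chemical_class = 'zeolite'
            GROUP BY space_group
            ORDER BY n DESC
            """
        ).fetchall()
        for space_group, n in rows:
            print(space_group, n)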

  2. Converting CSV Files to RKSML Files

    NASA Technical Reports Server (NTRS)

    Trebi-Ollennu, Ashitey; Liebersbach, Robert

    2009-01-01

    A computer program converts, into a format suitable for processing on Earth, files of downlinked telemetric data pertaining to the operation of the Instrument Deployment Device (IDD), which is a robot arm on either of the Mars Explorer Rovers (MERs). The raw downlinked data files are in comma-separated- value (CSV) format. The present program converts the files into Rover Kinematics State Markup Language (RKSML), which is an Extensible Markup Language (XML) format that facilitates representation of operations of the IDD and enables analysis of the operations by means of the Rover Sequencing Validation Program (RSVP), which is used to build sequences of commanded operations for the MERs. After conversion by means of the present program, the downlinked data can be processed by RSVP, enabling the MER downlink operations team to play back the actual IDD activity represented by the telemetric data against the planned IDD activity. Thus, the present program enhances the diagnosis of anomalies that manifest themselves as differences between actual and planned IDD activities.
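    The general pattern of such a conversion is straightforward; a minimal sketch with Python's standard csv and xml libraries, in which the column and element names are invented (the actual RKSML schema is not described here):

        import csv
        import xml.etree.ElementTree as ET

        # Convert rows of downlinked CSV telemetry into a simple XML document.
        root = ET.Element("ArmStateHistory")  # hypothetical element names
        with open("idd_telemetry.csv", newline="") as f:
            for row in csv.DictReader(f):
                state = ET.SubElement(root, "State", time=row["time"])
                for joint in ("joint1", "joint2", "joint3", "joint4", "joint5"):
                    ET.SubElement(state, "Angle", name=joint).text = row[joint]
        ET.ElementTree(root).write("idd_telemetry.xml")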

  3. Fast processing of digital imaging and communications in medicine (DICOM) metadata using multiseries DICOM format.

    PubMed

    Ismail, Mahmoud; Philbin, James

    2015-04-01

    The digital imaging and communications in medicine (DICOM) information model combines pixel data and its metadata in a single object. There are user scenarios that need only metadata manipulation, such as deidentification and study migration. Most picture archiving and communication systems use a database to store and update the metadata rather than updating the raw DICOM files themselves. The multiseries DICOM (MSD) format separates metadata from pixel data and eliminates duplicate attributes. This work promotes storing DICOM studies in MSD format to reduce the metadata processing time. A set of experiments is performed that updates the metadata of a set of DICOM studies for deidentification and migration. The studies are stored in both the traditional single frame DICOM (SFD) format and the MSD format. The results show that it is faster to update studies' metadata in MSD format than in SFD format because the bulk data is separated in MSD and is not retrieved from the storage system. In addition, it is space efficient to store the deidentified studies in MSD format as they share the same bulk data object with the original study. In summary, separation of metadata from pixel data using the MSD format provides fast metadata access and speeds up applications that process only the metadata.
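    The economy of skipping the bulk data can be reproduced in miniature with the pydicom library, which can stop parsing a file before the pixel data element; a sketch (file name assumed):

        import pydicom

        # Read only the metadata: parsing stops before the large pixel data
        # element, the same saving the MSD layout provides at the storage level.
        ds = pydicom.dcmread("study/img0001.dcm", stop_before_pixels=True)

        # Metadata-only operations such as deidentification touch no bulk data.
        ds.PatientName = "ANONYMOUS"
        ds.PatientID = "000000"
        print(ds.StudyInstanceUID)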

  4. Fast processing of digital imaging and communications in medicine (DICOM) metadata using multiseries DICOM format

    PubMed Central

    Ismail, Mahmoud; Philbin, James

    2015-01-01

    The digital imaging and communications in medicine (DICOM) information model combines pixel data and its metadata in a single object. There are user scenarios that need only metadata manipulation, such as deidentification and study migration. Most picture archiving and communication systems use a database to store and update the metadata rather than updating the raw DICOM files themselves. The multiseries DICOM (MSD) format separates metadata from pixel data and eliminates duplicate attributes. This work promotes storing DICOM studies in MSD format to reduce the metadata processing time. A set of experiments is performed that updates the metadata of a set of DICOM studies for deidentification and migration. The studies are stored in both the traditional single frame DICOM (SFD) format and the MSD format. The results show that it is faster to update studies' metadata in MSD format than in SFD format because the bulk data is separated in MSD and is not retrieved from the storage system. In addition, it is space efficient to store the deidentified studies in MSD format as they share the same bulk data object with the original study. In summary, separation of metadata from pixel data using the MSD format provides fast metadata access and speeds up applications that process only the metadata. PMID:26158117

  5. In vitro particle image velocity measurements in a model root canal: flow around a polymer rotary finishing file.

    PubMed

    Koch, Jon D; Smith, Nicholas A; Garces, Daniel; Gao, Luyang; Olsen, F Kris

    2014-03-01

    Root canal irrigation is vital to thorough debridement and disinfection, but the mechanisms that contribute to its effectiveness are complex and uncertain. Traditionally, studies in this area have relied on before-and-after static comparisons to assess effectiveness, but new in situ tools are being developed to provide real-time assessments of irrigation. The aim of this work was to measure a cross section of the velocity field in the fluid flow around a polymer rotary finishing file in a model root canal. Fluorescent microparticles were seeded into an optically accessible acrylic root canal model. A polymer rotary finishing file was activated in a static position. After laser excitation, fluorescence from the microparticles was imaged onto a frame-transfer camera. Two consecutive images were cross-correlated to provide a measurement of a projected, 2-dimensional velocity field. The method reveals that fluid velocities can be much higher than the velocity of the file because of the shape of the file. Furthermore, these high velocities are in the axial direction of the canal rather than only in the direction of motion of the file. Particle image velocimetry indicates that fluid velocities induced by the rotating file can be much larger than the speed of the file. Particle image velocimetry can provide qualitative insight and quantitative measurements that may be useful for validating computational fluid dynamic models and connecting clinical observations to physical explanations in dental research. Copyright © 2014 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
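    The core of the measurement is cross-correlating interrogation windows from two consecutive frames; a minimal sketch of that step with numpy FFTs (this is the generic technique, not the authors' processing chain):

        import numpy as np

        def displacement(win_a, win_b):
            # Estimate the pixel shift between two interrogation windows
            # via FFT-based cross-correlation.
            a = win_a - win_a.mean()
            b = win_b - win_b.mean()
            corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
            corr = np.fft.fftshift(corr)
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            return np.array(peak) - np.array(corr.shape) // 2  # (dy, dx)

        # Dividing (dy, dx) by the inter-frame time gives one velocity vector;
        # repeating over a grid of windows yields the 2-D velocity field.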

  6. Alview: Portable Software for Viewing Sequence Reads in BAM Formatted Files.

    PubMed

    Finney, Richard P; Chen, Qing-Rong; Nguyen, Cu V; Hsu, Chih Hao; Yan, Chunhua; Hu, Ying; Abawi, Massih; Bian, Xiaopeng; Meerzaman, Daoud M

    2015-01-01

    The name Alview is a contraction of the term Alignment Viewer. Alview is a software tool, compiled to native architecture, for visualizing the alignment of sequencing data. Inputs are files of short-read sequences aligned to a reference genome in the SAM/BAM format and files containing reference genome data. Outputs are visualizations of these aligned short reads. Alview is written in portable C with optional graphical user interface (GUI) code written in C, C++, and Objective-C. The application can run in three different ways: as a web server, as a command line tool, or as a native GUI program. Alview is compatible with Microsoft Windows, Linux, and Apple OS X. It is available as a web demo at https://cgwb.nci.nih.gov/cgi-bin/alview. The source code and Windows/Mac/Linux executables are available via https://github.com/NCIP/alview.

  7. Digitizing an Analog Radiography Teaching File Under Time Constraint: Trade-Offs in Efficiency and Image Quality.

    PubMed

    Loehfelm, Thomas W; Prater, Adam B; Debebe, Tequam; Sekhar, Aarti K

    2017-02-01

    We digitized the radiography teaching file at Black Lion Hospital (Addis Ababa, Ethiopia) during a recent trip, using a standard digital camera and a fluorescent light box. Our goal was to photograph every radiograph in the existing library while optimizing the final image size to the maximum resolution of a high quality tablet computer, preserving the contrast resolution of the radiographs, and minimizing total library file size. A secondary important goal was to minimize the cost and time required to take and process the images. Three workers were able to efficiently remove the radiographs from their storage folders, hang them on the light box, operate the camera, catalog the image, and repack the radiographs back to the storage folder. Zoom, focal length, and film speed were fixed, while aperture and shutter speed were manually adjusted for each image, allowing for efficiency and flexibility in image acquisition. Keeping zoom and focal length fixed, which kept the view box at the same relative position in all of the images acquired during a single photography session, allowed unused space to be batch-cropped, saving considerable time in post-processing, at the expense of final image resolution. We present an analysis of the trade-offs in workflow efficiency and final image quality, and demonstrate that a few people with minimal equipment can efficiently digitize a teaching file library.

  8. Retrieving high-resolution images over the Internet from an anatomical image database

    NASA Astrophysics Data System (ADS)

    Strupp-Adams, Annette; Henderson, Earl

    1999-12-01

    The Visible Human data set is an important contribution to the national collection of anatomical images. To enhance the availability of these images, the National Library of Medicine has supported the design and development of a prototype object-oriented image database which imports, stores, and distributes high-resolution anatomical images in both pixel and voxel formats. One of the key database modules is its client-server Internet interface. This Web interface provides a query engine with retrieval access to high-resolution anatomical images that range in size from 100 KB for browser-viewable rendered images to 1 GB for anatomical structures in voxel file formats. The Web query and retrieval client-server system is composed of applet GUIs, servlets, and RMI application modules which communicate with each other to allow users to query for specific anatomical structures and to retrieve image data as well as associated anatomical images from the database. Selected images can be downloaded individually as single files via HTTP or downloaded in batch mode over the Internet to the user's machine through an applet that uses Netscape's Object Signing mechanism. The image database uses ObjectDesign's object-oriented DBMS, ObjectStore, which has a Java interface. The query and retrieval system has been tested with a Java-CDE window system and on the x86 architecture using Windows NT 4.0. This paper describes the Java applet client search engine that queries the database; the Java client module that enables users to view anatomical images online; and the Java application server interface to the database, which organizes data returned to the user, together with its distribution engine, which allows users to download image files individually and/or in batch mode.

  9. Planetary image conversion task

    NASA Technical Reports Server (NTRS)

    Martin, M. D.; Stanley, C. L.; Laughlin, G.

    1985-01-01

    The Planetary Image Conversion Task group processed 12,500 magnetic tapes containing raw imaging data from JPL planetary missions and produced an image data base in consistent format on 1200 fully packed 6250-bpi tapes. The output tapes will remain at JPL. A copy of the entire tape set was delivered to the US Geological Survey, Flagstaff, Ariz. A secondary task converted computer datalogs, which had been stored in project-specific MARK IV File Management System data types and structures, to flat-file, text format that is processable on any modern computer system. The conversion processing took place at JPL's Image Processing Laboratory on an IBM 370-158 with existing software modified slightly to meet the needs of the conversion task. More than 99% of the original digital image data was successfully recovered by the conversion task. However, processing data tapes recorded before 1975 was destructive. This discovery is of critical importance to facilities responsible for maintaining digital archives, since normal periodic random sampling techniques would be unlikely to detect this phenomenon, and entire data sets could be wiped out in the act of generating seemingly positive sampling results. Recommended follow-on activities are also included.

  10. Image formation analysis and high resolution image reconstruction for plenoptic imaging systems.

    PubMed

    Shroff, Sapna A; Berkner, Kathrin

    2013-04-01

    Plenoptic imaging systems are often used for applications like refocusing, multimodal imaging, and multiview imaging. However, their resolution is limited to the number of lenslets. In this paper we investigate paraxial, incoherent, plenoptic image formation, and develop a method to recover some of the resolution for the case of a two-dimensional (2D) in-focus object. This enables the recovery of a conventional-resolution, 2D image from the data captured in a plenoptic system. We show simulation results for a plenoptic system with a known response and Gaussian sensor noise.

  11. Image editing with Adobe Photoshop 6.0.

    PubMed

    Caruso, Ronald D; Postel, Gregory C

    2002-01-01

    The authors introduce Photoshop 6.0 for radiologists and demonstrate basic techniques of editing gray-scale cross-sectional images intended for publication and for incorporation into computerized presentations. For basic editing of gray-scale cross-sectional images, the Tools palette and the History/Actions palette pair should be displayed. The History palette may be used to undo a step or series of steps. The Actions palette is a menu of user-defined macros that save time by automating an action or series of actions. Converting an image to 8-bit gray scale is the first editing function. Cropping is the next action. Both decrease file size. Use of the smallest file size necessary for the purpose at hand is recommended. Final file size for gray-scale cross-sectional neuroradiologic images (8-bit, single-layer TIFF [tagged image file format] at 300 pixels per inch) intended for publication varies from about 700 Kbytes to 3 Mbytes. Final file size for incorporation into computerized presentations is about 10-100 Kbytes (8-bit, single-layer, gray-scale, high-quality JPEG [Joint Photographic Experts Group]), depending on source and intended use. Editing and annotating images before they are inserted into presentation software is highly recommended, both for convenience and flexibility. Radiologists should find that image editing can be carried out very rapidly once the basic steps are learned and automated. Copyright RSNA, 2002
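    Once the basic steps are learned, they can also be automated outside Photoshop; a sketch of the same gray-scale conversion, crop, and dual-target export with the Pillow library (file names, crop box, and quality setting are illustrative):

        from PIL import Image

        img = Image.open("axial_ct.tif").convert("L")  # 8-bit gray scale
        img = img.crop((40, 40, 1000, 1000))           # crop to the anatomy

        # Publication target: single-layer TIFF at 300 pixels per inch.
        img.save("figure1.tif", dpi=(300, 300))

        # Presentation target: small, high-quality gray-scale JPEG.
        img.save("slide1.jpg", quality=90)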

  12. Selective document image data compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1998-05-19

    A method of storing information from filled-in form documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. Then a second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel. 10 figs.
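    A numpy sketch of the two-threshold binarization and combination step described above (threshold values are illustrative; the combination here takes a pixel as black if either binarization marks it black):

        import numpy as np

        def two_color(scanned, filled_edge, t1=128, t2=128):
            # Pixels darker than the threshold become black (0),
            # lighter pixels become white (255).
            first = np.where(scanned < t1, 0, 255).astype(np.uint8)
            second = np.where(filled_edge < t2, 0, 255).astype(np.uint8)
            # Combine the two two-color images (smoothing/filtering omitted).
            return np.minimum(first, second)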

  13. Selective document image data compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1998-01-01

    A method of storing information from filled-in form documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. Then a second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel.

  14. Report of the IAU Commission 4 Working Group on Standardizing Access to Ephemerides and File Format Specification

    DTIC Science & Technology

    2014-12-01

    format for the orientation of a body. It further recommends supporting data be stored in a text PCK. These formats are used by the SPICE system... INTRODUCTION: These file formats were developed for and are used by the SPICE system, developed by the Navigation and Ancillary Information Facility (NAIF) of NASA's Jet Propulsion Laboratory (JPL). Most users will want to use either the SPICE libraries or CALCEPH, developed by the Institut de mécanique

  15. Spatiotemporal matrix image formation for programmable ultrasound scanners

    NASA Astrophysics Data System (ADS)

    Berthon, Beatrice; Morichau-Beauchant, Pierre; Porée, Jonathan; Garofalakis, Anikitos; Tavitian, Bertrand; Tanter, Mickael; Provost, Jean

    2018-02-01

    As programmable ultrasound scanners become more common in research laboratories, it is increasingly important to develop robust software-based image formation algorithms that can be obtained in a straightforward fashion for different types of probes and sequences with a small risk of error during implementation. In this work, we argue that as the computational power keeps increasing, it is becoming practical to directly implement an approximation to the matrix operator linking reflector point targets to the corresponding radiofrequency signals via thoroughly validated and widely available simulations software. Once such a spatiotemporal forward-problem matrix is constructed, standard and thus highly optimized inversion procedures can be leveraged to achieve very high quality images in real time. Specifically, we show that spatiotemporal matrix image formation produces images of similar or enhanced quality when compared against standard delay-and-sum approaches in phantoms and in vivo, and show that this approach can be used to form images even when using non-conventional probe designs for which adapted image formation algorithms are not readily available.
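    A toy sketch of that idea: approximate the forward operator A whose column j is the (simulated) radiofrequency signal of a unit reflector at pixel j, then form the image by a standard, highly optimized inversion (plain least squares below; the matrix here is random only for brevity, where a real system would fill it from validated ultrasound simulations):

        import numpy as np

        rng = np.random.default_rng(0)
        n_pix, n_samp = 64, 256

        # Spatiotemporal forward-problem matrix (stand-in for simulated data).
        A = rng.standard_normal((n_samp, n_pix))

        # Ground-truth reflectivity and the RF signals it would produce.
        x_true = np.zeros(n_pix)
        x_true[[10, 30, 50]] = 1.0
        y = A @ x_true + 0.01 * rng.standard_normal(n_samp)

        # Image formation as inversion of the linear operator.
        x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)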

  16. A JPEG backward-compatible HDR image compression

    NASA Astrophysics Data System (ADS)

    Korshunov, Pavel; Ebrahimi, Touradj

    2012-10-01

    High Dynamic Range (HDR) imaging is expected to become one of the technologies that could shape the next generation of consumer digital photography. Manufacturers are rolling out cameras and displays capable of capturing and rendering HDR images. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for the evaluation of quality, file formats, and compression, as well as by a large legacy base of Low Dynamic Range (LDR) displays that are unable to render HDR. To facilitate widespread HDR usage, backward compatibility of HDR technology with commonly used legacy image storage, rendering, and compression is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR images from HDR content, there is no consensus on which algorithm to use and under which conditions. This paper, via a series of subjective evaluations, demonstrates the dependency of the perceived quality of tone-mapped LDR images on environmental parameters and image content. Based on the results of subjective tests, it proposes to extend the JPEG file format, as the most popular image format, in a backward-compatible manner to also handle HDR pictures. To this end, the paper provides an architecture to achieve such backward compatibility with JPEG and demonstrates the efficiency of a simple implementation of this framework when compared to state-of-the-art HDR image compression.
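    A sketch of the tone-mapping half of such a scheme, using the well-known Reinhard global operator to turn HDR radiance into a JPEG-encodable 8-bit image (the residual/metadata layer that makes such a scheme reversible is not reproduced here):

        import numpy as np

        def tone_map(hdr, a=0.18):
            # Reinhard-style global tone mapping: HDR radiance -> 8-bit LDR.
            eps = 1e-6
            log_avg = np.exp(np.mean(np.log(hdr + eps)))   # scene "key"
            scaled = a * hdr / log_avg
            ldr = scaled / (1.0 + scaled)                  # compress to [0, 1)
            return (255 * ldr ** (1 / 2.2)).astype(np.uint8)  # display gamma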

  17. Viewing Files | Smokefree 60+

    Cancer.gov

    In addition to standard HTML webpages, our website contains files in other formats. You may need additional software or browser plug-ins to view some of these files. The following list shows each format along with links to the corresponding freely available plug-ins or viewers. Documents: Adobe Acrobat Reader (.pdf)

  18. Informatics in radiology (infoRAD): free DICOM image viewing and processing software for the Macintosh computer: what's available and what it can do for you.

    PubMed

    Escott, Edward J; Rubinstein, David

    2004-01-01

    It is often necessary for radiologists to use digital images in presentations and conferences. Most imaging modalities produce images in the Digital Imaging and Communications in Medicine (DICOM) format. The image files tend to be large and thus cannot be directly imported into most presentation software, such as Microsoft PowerPoint; the large files also consume storage space. There are many free programs that allow viewing and processing of these files on a personal computer, including conversion to more common file formats such as the Joint Photographic Experts Group (JPEG) format. Free DICOM image viewing and processing software for computers running on the Microsoft Windows operating system has already been evaluated. However, many people use the Macintosh (Apple Computer) platform, and a number of programs are available for these users. The World Wide Web was searched for free DICOM image viewing or processing software that was designed for the Macintosh platform or is written in Java and is therefore platform independent. The features of these programs and their usability were evaluated. There are many free programs for the Macintosh platform that enable viewing and processing of DICOM images. (c) RSNA, 2004.

  19. Parallax Player: a stereoscopic format converter

    NASA Astrophysics Data System (ADS)

    Feldman, Mark H.; Lipton, Lenny

    2003-05-01

    The Parallax Player is a software application that is, in essence, a stereoscopic format converter: various formats may be input and output. In addition to taking any one of a wide variety of formats and playing them back on many different kinds of PCs and display screens, the Parallax Player has a built-in capability to produce ersatz stereo from a planar still or movie image. The player handles two basic forms of digital content - still images and movies. It is assumed that all data is digital, either created by means of a photographic film process and later digitized, or directly captured or authored in digital form. In its current implementation, running on a number of Windows operating systems, the Parallax Player reads in a broad selection of contemporary file formats.

  20. Towards an easier creation of three-dimensional data for embedding into scholarly 3D PDF (Portable Document Format) files

    PubMed Central

    2015-01-01

    The Portable Document Format (PDF) allows for embedding three-dimensional (3D) models and is therefore particularly suitable for communicating such data, especially as regards scholarly articles. The generation of the necessary model data, however, is still challenging, especially for inexperienced users. This prevents an unrestrained proliferation of 3D PDF usage in scholarly communication. This article introduces a new solution for the creation of three types of 3D geometry (point clouds, polylines, and triangle meshes) that is based on MeVisLab, a framework for biomedical image processing. This solution enables even novice users to generate the model data files without requiring programming skills and without the need for intensive training, by simply using it as a conversion tool. Advanced users can benefit from the full capability of MeVisLab to generate and export the model data as part of an overall processing chain. Although MeVisLab is primarily designed for handling biomedical image data, the new module is not restricted to this domain. It can be used for all scientific disciplines. PMID:25780759

  1. Towards an easier creation of three-dimensional data for embedding into scholarly 3D PDF (Portable Document Format) files.

    PubMed

    Newe, Axel

    2015-01-01

    The Portable Document Format (PDF) allows for embedding three-dimensional (3D) models and is therefore particularly suitable for communicating such data, especially as regards scholarly articles. The generation of the necessary model data, however, is still challenging, especially for inexperienced users. This prevents an unrestrained proliferation of 3D PDF usage in scholarly communication. This article introduces a new solution for the creation of three types of 3D geometry (point clouds, polylines, and triangle meshes) that is based on MeVisLab, a framework for biomedical image processing. This solution enables even novice users to generate the model data files without requiring programming skills and without the need for intensive training, by simply using it as a conversion tool. Advanced users can benefit from the full capability of MeVisLab to generate and export the model data as part of an overall processing chain. Although MeVisLab is primarily designed for handling biomedical image data, the new module is not restricted to this domain. It can be used for all scientific disciplines.

  2. Using compressed images in multimedia education

    NASA Astrophysics Data System (ADS)

    Guy, William L.; Hefner, Lance V.

    1996-04-01

    The classic radiologic teaching file consists of hundreds, if not thousands, of films of various ages, housed in paper jackets with brief descriptions written on the jackets. The development of a good teaching file has been both time consuming and voluminous. Also, any radiograph to be copied was unavailable during the reproduction interval, inconveniencing other medical professionals needing to view the images at that time. These factors hinder the motivation to copy films of interest. If a busy radiologist already has an adequate example of a radiological manifestation, it is unlikely that he or she will exert the effort to make a copy of another similar image even if a better example comes along. Digitized radiographs stored on CD-ROM offer marked improvement over copied-film teaching files. Our institution has several laser digitizers which are used to rapidly scan radiographs and produce high-quality digital images which can then be converted into standard microcomputer (IBM, Mac, etc.) image formats. These images can be stored on floppy disks, hard drives, rewritable optical disks, recordable CD-ROM disks, or removable cartridge media. Most hospital computer information systems include radiology reports in their database. We demonstrate that the reports for the images included in the user's teaching file can be copied and stored on the same storage media as the images. The radiographic or sonographic image and the corresponding dictated report can then be 'linked' together. The description of the finding or findings of interest on the digitized image is thus electronically tethered to the image. This obviates the need to write much additional detail concerning the radiograph, saving time. In addition, the text on this disk can be indexed such that all files with user-specified features can be instantly retrieved and combined in a single report, if desired. With the use of newer image compression techniques, hundreds of cases may be stored on a single CD

  3. MSL: Facilitating automatic and physical analysis of published scientific literature in PDF format.

    PubMed

    Ahmed, Zeeshan; Dandekar, Thomas

    2015-01-01

    Published scientific literature contains millions of figures, including information about the results obtained from different scientific experiments, e.g. PCR-ELISA data, microarray analysis, gel electrophoresis, mass spectrometry data, DNA/RNA sequencing, diagnostic imaging (CT/MRI and ultrasound scans), and medical imaging like electroencephalography (EEG), magnetoencephalography (MEG), electrocardiography (ECG), and positron-emission tomography (PET) images. The importance of biomedical figures has been widely recognized in the scientific and medical communities, as they play a vital role in providing major original data and experimental and computational results in concise form. One major challenge in implementing a system for scientific literature analysis is extracting and analyzing text and figures from published PDF files by physical and logical document analysis. Here we present 'Mining Scientific Literature (MSL)', a bioinformatics tool based on a product line architecture, which supports the extraction of text and images by interpreting all kinds of published PDF files using advanced data mining and image processing techniques. It provides modules for the marginalization of extracted text based on different coordinates and keywords, visualization of extracted figures, and extraction of embedded text from all kinds of biological and biomedical figures using applied Optical Character Recognition (OCR). Moreover, for further analysis and usage, it generates the system's output in different formats, including text, PDF, XML, and image files. Hence, MSL is an easy-to-install and easy-to-use analysis tool for interpreting published scientific literature in PDF format.

  4. Visualization of seismic tomography on Google Earth: Improvement of KML generator and its web application to accept the data file in European standard format

    NASA Astrophysics Data System (ADS)

    Yamagishi, Y.; Yanaka, H.; Tsuboi, S.

    2009-12-01

    We have developed a tool for converting seismic tomography data into KML, called the KML generator, and made it available on the web site (http://www.jamstec.go.jp/pacific21/google_earth). The KML generator enables us to display vertical and horizontal cross sections of a model on Google Earth in a three-dimensional manner, which is useful for understanding the Earth's interior. The previous generator accepts text files of grid-point data with longitude, latitude, and seismic velocity anomaly; each data file contains the data for one depth. Metadata, such as the bibliographic reference, grid-point interval, and depth, are described in a separate information file. We did not allow users to upload their own tomographic models to the web application, because there was no standard format for representing a tomographic model. Recently, the European seismology research project NEIRES (Network of Research Infrastructures for European Seismology) has advocated that seismic tomography data should be standardized. It proposes a new format for tomography based on JSON (JavaScript Object Notation), one of the standard data-interchange formats. This format consists of two parts: metadata and grid-point data values. The JSON format is well suited to handling and analyzing tomographic models, because the structure of the format is fully defined by JavaScript objects, so the elements are directly accessible by a script. In addition, JSON libraries exist for several programming languages. The International Federation of Digital Seismograph Networks (FDSN) adopted this format as an FDSN standard for seismic tomographic models, and it may come to be accepted not only by European seismologists but also as a world standard. Therefore, we have improved our KML generator for seismic tomography to also accept data files in this JSON format, and we have improved the web application of the generator so that users can upload their own tomographic models in the standard format.
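    A sketch of what such a two-part JSON file could look like and how a script can address its elements directly (the field names are illustrative, not the actual proposed schema):

        import json

        model = {
            "metadata": {
                "reference": "Example et al. (2009)",
                "grid_interval_deg": 2.0,
                "depth_km": 100.0,
            },
            "grid_points": [
                {"lon": 140.0, "lat": 35.0, "dvs_percent": -1.2},
                {"lon": 142.0, "lat": 35.0, "dvs_percent": 0.4},
            ],
        }
        with open("tomography_100km.json", "w") as f:
            json.dump(model, f, indent=2)

        # Elements are directly accessible by a script:
        with open("tomography_100km.json") as f:
            loaded = json.load(f)
        print(loaded["metadata"]["depth_km"], len(loaded["grid_points"]))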

  5. Software for browsing sectioned images of a dog body and generating a 3D model.

    PubMed

    Park, Jin Seo; Jung, Yong Wook

    2016-01-01

    The goals of this study were (1) to provide accessible and instructive browsing software for sectioned images and a portable document format (PDF) file that includes three-dimensional (3D) models of an entire dog body and (2) to develop techniques for segmentation and 3D modeling that would enable an investigator to perform these tasks without the aid of a computer engineer. To achieve these goals, relatively important or large structures in the sectioned images were outlined to generate segmented images. The sectioned and segmented images were then packaged into browsing software. In this software, structures in the sectioned images are shown in detail and in real color. After 3D models were made from the segmented images, the 3D models were exported into a PDF file, in which they can be manipulated freely. The browsing software and PDF file are available for study by students, for lectures by teachers, and for the training of clinicians. These files will be helpful for the anatomical study and clinical training of veterinary students and clinicians. Furthermore, these techniques will be useful for researchers who study two-dimensional images and 3D models. © 2015 Wiley Periodicals, Inc.

  6. OpenMSI: A High-Performance Web-Based Platform for Mass Spectrometry Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubel, Oliver; Greiner, Annette; Cholia, Shreyas

    Mass spectrometry imaging (MSI) enables researchers to probe endogenous molecules directly within the architecture of the biological matrix. Unfortunately, efficient access, management, and analysis of the data generated by MSI approaches remain major challenges to this rapidly developing field. Despite the availability of numerous dedicated file formats and software packages, it is a widely held viewpoint that the biggest challenge is simply opening, sharing, and analyzing a file without loss of information. Here we present OpenMSI, a software framework and platform that addresses these challenges via an advanced, high-performance, extensible file format and Web API for remote data access (http://openmsi.nersc.gov). The OpenMSI file format supports storage of raw MSI data, metadata, and derived analyses in a single, self-describing format based on HDF5 and is supported by a large range of analysis software (e.g., Matlab and R) and programming languages (e.g., C++, Fortran, and Python). Careful optimization of the storage layout of MSI data sets using chunking, compression, and data replication accelerates common, selective data access operations while minimizing data storage requirements, and is a critical enabler of rapid data I/O. The OpenMSI file format has been shown to provide a >2000-fold improvement for image access operations, enabling spectrum and image retrieval in less than 0.3 s across the Internet even for 50 GB MSI data sets. To make remote high-performance compute resources accessible for analysis and to facilitate data sharing and collaboration, we describe an easy-to-use yet powerful Web API, enabling fast and convenient access to MSI data, metadata, and derived analysis results stored remotely, to facilitate high-performance data analysis and enable implementation of Web-based data sharing, visualization, and analysis.
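    The storage-layout point can be illustrated with h5py, the Python binding for HDF5, on which an HDF5-based format like OpenMSI's rests (the array shape and chunk size are illustrative, not OpenMSI's actual layout):

        import numpy as np
        import h5py

        # A small MSI cube: x * y positions, one spectrum of m/z bins each.
        data = np.random.rand(64, 64, 1024).astype(np.float32)

        with h5py.File("msi_example.h5", "w") as f:
            # Chunking makes both single-spectrum reads (fix x, y) and
            # single-m/z image reads (fix the last axis) touch few chunks;
            # compression reduces the storage footprint.
            dset = f.create_dataset("msi", data=data,
                                    chunks=(8, 8, 256), compression="gzip")
            dset.attrs["instrument"] = "example"  # self-describing metadata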

  7. Effect of ProTaper Gold, Self-Adjusting File, and XP-endo Shaper Instruments on Dentinal Microcrack Formation: A Micro-computed Tomographic Study.

    PubMed

    Bayram, H Melike; Bayram, Emre; Ocak, Mert; Uygun, Ahmet Demirhan; Celik, Hakan Hamdi

    2017-07-01

    The aim of the present study was to evaluate the frequency of dentinal microcracks observed after root canal preparation with ProTaper Universal (PTU; Dentsply Tulsa Dental Specialties, Tulsa, OK), ProTaper Gold (PTG; Dentsply Tulsa Dental Specialties), Self-Adjusting File (SAF; ReDent Nova, Ra'anana, Israel), and XP-endo Shaper (XP; FKG Dentaire, La Chaux-de-Fonds, Switzerland) instruments using micro-computed tomographic (CT) analysis. Forty extracted human mandibular premolars having a single canal and a straight root were randomly assigned to 4 experimental groups (n = 10) according to the nickel-titanium system used for root canal preparation: PTU, PTG, SAF, or XP. In the SAF and XP groups, the canals were first prepared with a K-file up to #25 at the working length, and then the SAF or XP files were used. The specimens were scanned using high-resolution micro-computed tomographic imaging before and after root canal preparation. Afterward, preoperative and postoperative cross-sectional images of the teeth were screened to identify the presence of dentinal defects. For each group, the number of microcracks was determined as a percentage rate. The McNemar test was used to determine significant differences before and after instrumentation. The level of significance was set at P ≤ .05. The PTU system significantly increased the percentage rate of microcracks compared with preoperative specimens (P < .05). No new dentinal microcracks were observed in the PTG, SAF, or XP groups. Root canal preparations with the PTG, SAF, and XP systems did not induce the formation of new dentinal microcracks in straight root canals of mandibular premolars. Copyright © 2017 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  8. An analysis of image storage systems for scalable training of deep neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Seung-Hwan; Young, Steven R; Patton, Robert M

    This study presents a principled empirical evaluation of image storage systems for training deep neural networks. We employ the Caffe deep learning framework to train neural network models for three different data sets: MNIST, CIFAR-10, and ImageNet. While training the models, we evaluate five different options for retrieving training image data: (1) PNG-formatted image files on the local file system; (2) pushing pixel arrays from image files into a single HDF5 file on the local file system; (3) in-memory arrays holding the pixel arrays in Python and C++; (4) loading the training data into LevelDB, a key-value store based on a log-structured merge tree; and (5) loading the training data into LMDB, a key-value store based on a B+tree. The experimental results quantitatively highlight the disadvantage of using normal image files on local file systems to train deep neural networks and demonstrate reliable performance with key-value stores. When training a model on the ImageNet dataset, the image file option was more than 17 times slower than the key-value storage option. Along with measurements of training time, this study provides an in-depth analysis of the causes of the performance advantages and disadvantages of each back-end for training deep neural networks. We envision that the provided measurements and analysis will shed light on the optimal way to architect systems for training neural networks in a scalable manner.
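    A sketch of the LMDB option with the Python lmdb binding (the key scheme and the placeholder byte strings are illustrative):

        import lmdb

        # Write encoded training images into a B+tree-based key-value store...
        env = lmdb.open("train_lmdb", map_size=1 << 30)  # 1 GiB memory map
        with env.begin(write=True) as txn:
            for i, blob in enumerate([b"png bytes 0", b"png bytes 1"]):
                txn.put(f"{i:08d}".encode(), blob)

        # ...and stream them back in key order, as a training loop would.
        with env.begin() as txn:
            for key, value in txn.cursor():
                image_bytes = value  # decode and feed to the network here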

  9. 1995 Joseph E. Whitley, MD, Award. A World Wide Web gateway to the radiologic learning file.

    PubMed

    Channin, D S

    1995-12-01

    Computer networks in general, and the Internet specifically, are changing the way information is manipulated in the world at large and in radiology. The goal of this project was to develop a computer system in which images from the Radiologic Learning File, available previously only via a single-user laser disc, are made available over a generic, high-availability computer network to many potential users simultaneously. Using a networked workstation in our laboratory and freely available distributed hypertext software, we established a World Wide Web (WWW) information server for radiology. Images from the Radiologic Learning File are requested through the WWW client software, digitized from a single laser disc containing the entire teaching file and then transmitted over the network to the client. The text accompanying each image is incorporated into the transmitted document. The Radiologic Learning File is now on-line, and requests to view the cases result in the delivery of the text and images. Image digitization via a frame grabber takes 1/30th of a second. Conversion of the image to a standard computer graphic format takes 45-60 sec. Text and image transmission speed on a local area network varies between 200 and 400 kilobytes (KB) per second depending on the network load. We have made images from a laser disc of the Radiologic Learning File available through an Internet-based hypertext server. The images previously available through a single-user system located in a remote section of our department are now ubiquitously available throughout our department via the department's computer network. We have thus converted a single-user, limited functionality system into a multiuser, widely available resource.

  10. Converting Inhouse Subject Card Files to Electronic Keyword Files.

    ERIC Educational Resources Information Center

    Culmer, Carita M.

    The library at Phoenix College developed the Controversial Issues Files (CIF), a "home made" card file containing references pertinent to specific ongoing assignments. Although the CIF had proven itself to be an excellent resource tool for beginning researchers, it was cumbersome to maintain in the card format, and was limited to very…

  11. Chapter 6. Tabular data and graphical images in support of the U.S. Geological Survey National Oil and Gas Assessment-East Texas basin and Louisiana-Mississippi salt basins provinces, Jurassic Smackover interior salt basins total petroleum system (504902), Travis Peak and Hosston formations.

    USGS Publications Warehouse

    ,

    2006-01-01

    This chapter describes data used in support of the process being applied by the U.S. Geological Survey (USGS) National Oil and Gas Assessment (NOGA) project. Digital tabular data used in this report and archival data that permit the user to perform further analyses are available elsewhere on the CD-ROM. The data may be imported by computers and software without the reader having to transcribe them from the Portable Document Format (.pdf) files of the text. Because of the number and variety of platforms and software available, graphical images are provided as .pdf files and tabular data are provided in raw form as tab-delimited text files (.tab files).

  12. Single molecule image formation, reconstruction and processing: introduction.

    PubMed

    Ashok, Amit; Piestun, Rafael; Stallinga, Sjoerd

    2016-07-01

    The ability to image at the single molecule scale has revolutionized research in molecular biology. This feature issue presents a collection of articles that provides new insights into the fundamental limits of single molecule imaging and reports novel techniques for image formation and analysis.

  13. BOREAS Forest Cover Data Layers over the SSA-MSA in Raster Format

    NASA Technical Reports Server (NTRS)

    Nickeson, Jaime; Gruszka, F; Hall, F.

    2000-01-01

    This data set, originally provided as vector polygons with attributes, has been processed by BORIS staff to provide raster files that can be used for modeling or for comparison purposes. The original data were received as ARC/INFO coverages or as export files from SERM. The data include information on forest parameters for the BOREAS SSA-MSA. Most of the data used for this product were acquired by BORIS in 1993; the maps were produced from aerial photography taken as recently as 1988. The data are stored in binary, image format files.

  14. MSL: Facilitating automatic and physical analysis of published scientific literature in PDF format

    PubMed Central

    Ahmed, Zeeshan; Dandekar, Thomas

    2018-01-01

    Published scientific literature contains millions of figures, including information about the results obtained from different scientific experiments, e.g. PCR-ELISA data, microarray analysis, gel electrophoresis, mass spectrometry data, DNA/RNA sequencing, diagnostic imaging (CT/MRI and ultrasound scans), and medical imaging like electroencephalography (EEG), magnetoencephalography (MEG), electrocardiography (ECG), and positron-emission tomography (PET) images. The importance of biomedical figures has been widely recognized in the scientific and medical communities, as they play a vital role in providing major original data and experimental and computational results in concise form. One major challenge in implementing a system for scientific literature analysis is extracting and analyzing text and figures from published PDF files by physical and logical document analysis. Here we present 'Mining Scientific Literature (MSL)', a bioinformatics tool based on a product line architecture, which supports the extraction of text and images by interpreting all kinds of published PDF files using advanced data mining and image processing techniques. It provides modules for the marginalization of extracted text based on different coordinates and keywords, visualization of extracted figures, and extraction of embedded text from all kinds of biological and biomedical figures using applied Optical Character Recognition (OCR). Moreover, for further analysis and usage, it generates the system's output in different formats, including text, PDF, XML, and image files. Hence, MSL is an easy-to-install and easy-to-use analysis tool for interpreting published scientific literature in PDF format. PMID:29721305

  15. BOREAS RSS-14 Level-2 GOES-7 Shortwave and Longwave Radiation Images

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Gu, Jiujing; Smith, Eric A.

    2000-01-01

    The BOREAS RSS-14 team collected and processed several GOES-7 and GOES-8 image data sets that covered the BOREAS study region. This data set contains images of shortwave and longwave radiation at the surface and top of the atmosphere derived from collected GOES-7 data. The data cover the time period of 05-Feb-1994 to 20-Sep-1994. The images missing from the temporal series were zero-filled to create a consistent sequence of files. The data are stored in binary image format files. Due to the large size of the images, the level-1a GOES-7 data are not contained on the BOREAS CD-ROM set. An inventory listing file is supplied on the CD-ROM to inform users of what data were collected. The level-1a GOES-7 image data are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). See sections 15 and 16 for more information. The data files are available on a CD-ROM (see document number 20010000884).

  16. Computer image analysis in obtaining characteristics of images: greenhouse tomatoes in the process of generating learning sets of artificial neural networks

    NASA Astrophysics Data System (ADS)

    Zaborowicz, M.; Przybył, J.; Koszela, K.; Boniecki, P.; Mueller, W.; Raba, B.; Lewicki, A.; Przybył, K.

    2014-04-01

    The aim of the project was to develop software which, on the basis of an image of a greenhouse tomato, allows for the extraction of its characteristics. Data gathered during image analysis and processing were used to build learning sets for artificial neural networks. The program can process pictures in JPEG format, acquire statistical information from a picture, and export it to an external file. The software is intended to batch-analyze the collected research material, with the obtained information saved as a CSV file. The program allows analysis of 33 independent parameters that implicitly describe the tested image. The application is dedicated to the processing and image analysis of greenhouse tomatoes, but it can also be used for the analysis of other fruits and vegetables of a spherical shape.
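
    A minimal Python sketch of the same batch pattern, illustrative only: the paper's 33 parameters are not reproduced, and simple per-channel statistics stand in for them; the file names are hypothetical.

      import csv
      import numpy as np
      from PIL import Image

      def describe(path):
          """Compute a few simple per-channel statistics for one JPEG image."""
          rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
          stats = {"file": path}
          for i, band in enumerate("RGB"):
              stats[f"mean_{band}"] = rgb[..., i].mean()
              stats[f"std_{band}"] = rgb[..., i].std()
          return stats

      rows = [describe(p) for p in ["tomato_001.jpg", "tomato_002.jpg"]]
      with open("features.csv", "w", newline="") as fh:
          writer = csv.DictWriter(fh, fieldnames=list(rows[0]))
          writer.writeheader()
          writer.writerows(rows)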

  17. BOREAS RSS-20 POLDER Radiance Images From the NASA C-130

    NASA Technical Reports Server (NTRS)

    Leroy, M.; Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Smith, David E. (Technical Monitor)

    2000-01-01

    These Boreal Ecosystem-Atmosphere Study (BOREAS) Remote Sensing Science (RSS)-20 data are a subset of images collected by the Polarization and Directionality of Earth's Reflectance (POLDER) instrument over tower sites in the BOREAS study areas during the intensive field campaigns (IFCs) in 1994. The POLDER images presented here from the NASA ARC C-130 aircraft are made available for illustration purposes only. The data are stored in binary image-format files. The POLDER radiance images are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). The data files are available on a CD-ROM (see document number 20010000884).

  18. NCEP BUFR File Structure

    Science.gov Websites

    These tables may be defined within a separate ASCII text file (see Description and Format of BUFR Tables). At run time, the BUFR tables are usually read from an external ASCII text file (called /nwprod/fix/bufrtab.002 on the NCEP CCS machines), although it is also possible to define them within the data itself.

  19. Low-Speed Fingerprint Image Capture System User's Guide, June 1, 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitus, B.R.; Goddard, J.S.; Jatko, W.B.

    1993-06-01

    The Low-Speed Fingerprint Image Capture System (LS-FICS) uses a Sun workstation controlling a Lenzar ElectroOptics Opacity 1000 imaging system to digitize fingerprint card images to support the Federal Bureau of Investigation's (FBI's) Automated Fingerprint Identification System (AFIS) program. The system also supports the operations performed by the Oak Ridge National Laboratory- (ORNL-) developed Image Transmission Network (ITN) prototype card scanning system. The input to the system is a single FBI fingerprint card of the agreed-upon standard format and a user-specified identification number. The output is a file formatted to be compatible with the National Institute of Standards and Technology (NIST) draft standard for fingerprint data exchange dated June 10, 1992. These NIST-compatible files contain the required print and text images. The LS-FICS is designed to provide the FBI with the capability of scanning fingerprint cards into a digital format. The FBI will replicate the system to generate a data base of test images. The Host Workstation contains the image data paths and the compression algorithm. A local area network interface, disk storage, and tape drive are used for the image storage and retrieval, and the Lenzar Opacity 1000 scanner is used to acquire the image. The scanner is capable of resolving 500 pixels/in. in both x and y directions. The print images are maintained in full 8-bit gray scale and compressed with an FBI-approved wavelet-based compression algorithm. The text fields are downsampled to 250 pixels/in. and 2-bit gray scale. The text images are then compressed using a lossless Huffman coding scheme. The text fields retrieved from the output files are easily interpreted when displayed on the screen. Detailed procedures are provided for system calibration and operation. Software tools are provided to verify proper system operation.

  20. PROPOSED STANDARD TO GREATLY EXPAND PUBLIC ACCESS AND EXPLORATION OF TOXICITY DATA: EVALUATION OF STRUCTURE DATA FILE FORMAT

    EPA Science Inventory



    The ability to assess the potential toxicity of environmental, pharmaceutical, or industrial chemicals based on chemical structure in...

  1. Data files from the Grays Harbor Sediment Transport Experiment Spring 2001

    USGS Publications Warehouse

    Landerman, Laura A.; Sherwood, Christopher R.; Gelfenbaum, Guy; Lacy, Jessica; Ruggiero, Peter; Wilson, Douglas; Chisholm, Tom; Kurrus, Keith

    2005-01-01

    This publication consists of two DVD-ROMs, both of which are presented here. This report describes data collected during the Spring 2001 Grays Harbor Sediment Transport Experiment and provides additional information needed to interpret the data. Both DVDs contain documentation in HTML format that assists the user in navigating through the data. DVD-ROM-1 contains a digital version of this report in .pdf format, raw Aquatec acoustic backscatter (ABS) data in .zip format, sonar data files in .avi format, and coastal processes and morphology data in ASCII format. ASCII data files are provided in .zip format; bundled coastal processes ASCII files are separated by deployment and instrument; bundled morphology ASCII files are separated into monthly data collection efforts containing the beach profiles collected (or extracted from the surface map) at that time; weekly surface maps are also bundled together. DVD-ROM-2 contains a digital version of this report in .pdf format, the binary data files collected by the SonTek instrumentation, calibration files for the pressure sensors, and Matlab m-files for loading the ABS data into Matlab and cleaning up the optical backscatter (OBS) burst time-series data.

  2. Providing Internet Access to High-Resolution Lunar Images

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2008-01-01

    The OnMoon server is a computer program that provides Internet access to high-resolution Lunar images, maps, and elevation data, all suitable for use in geographical information system (GIS) software for generating images, maps, and computational models of the Moon. The OnMoon server implements the Open Geospatial Consortium (OGC) Web Map Service (WMS) server protocol and supports Moon-specific extensions. Unlike other Internet map servers that provide Lunar data using an Earth coordinate system, the OnMoon server supports encoding of data in Moon-specific coordinate systems. The OnMoon server offers access to most of the available high-resolution Lunar image and elevation data. This server can generate image and map files in the tagged image file format (TIFF) or the Joint Photographic Experts Group (JPEG), 8- or 16-bit Portable Network Graphics (PNG), or Keyhole Markup Language (KML) format. Image control is provided by use of the OGC Style Layer Descriptor (SLD) protocol. Full-precision spectral arithmetic processing is also available, by use of a custom SLD extension. This server can dynamically add shaded relief based on the Lunar elevation to any image layer. This server also implements tiled WMS protocol and super-overlay KML for high-performance client application programs.
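
    As an illustration of the protocol involved, the sketch below builds a standard OGC WMS 1.1.1 GetMap request in Python; the endpoint, layer name, and coordinate reference system are hypothetical, since the actual OnMoon values are not given here:

      from urllib.parse import urlencode

      BASE = "https://onmoon.example.gov/wms"  # hypothetical endpoint
      params = {
          "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
          "LAYERS": "lunar_mosaic",            # assumed layer name
          "SRS": "EPSG:4326",                  # a Moon-specific CRS would go here
          "BBOX": "-180,-90,180,90",
          "WIDTH": "1024", "HEIGHT": "512",
          "FORMAT": "image/jpeg",
      }
      print(BASE + "?" + urlencode(params))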

  3. Providing Internet Access to High-Resolution Mars Images

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2008-01-01

    The OnMars server is a computer program that provides Internet access to high-resolution Mars images, maps, and elevation data, all suitable for use in geographical information system (GIS) software for generating images, maps, and computational models of Mars. The OnMars server is an implementation of the Open Geospatial Consortium (OGC) Web Map Service (WMS) server. Unlike other Mars Internet map servers that provide Martian data using an Earth coordinate system, the OnMars WMS server supports encoding of data in Mars-specific coordinate systems. The OnMars server offers access to most of the available high-resolution Martian image and elevation data, including an 8-meter-per-pixel uncontrolled mosaic of most of the Mars Global Surveyor (MGS) Mars Observer Camera Narrow Angle (MOCNA) image collection, which is not available elsewhere. This server can generate image and map files in the tagged image file format (TIFF), Joint Photographic Experts Group (JPEG), 8- or 16-bit Portable Network Graphics (PNG), or Keyhole Markup Language (KML) format. Image control is provided by use of the OGC Style Layer Descriptor (SLD) protocol. The OnMars server also implements tiled WMS protocol and super-overlay KML for high-performance client application programs.

  4. 18 CFR 35.7 - Electronic filing requirements.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Electronic filing... § 35.7 Electronic filing requirements. (a) General rule. All filings made in proceedings initiated... declarations or statements and electronic signatures. (c) Format requirements for electronic filing. The...

  5. 18 CFR 35.7 - Electronic filing requirements.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 18 Conservation of Power and Water Resources 1 2012-04-01 2012-04-01 false Electronic filing... § 35.7 Electronic filing requirements. (a) General rule. All filings made in proceedings initiated... declarations or statements and electronic signatures. (c) Format requirements for electronic filing. The...

  6. 18 CFR 35.7 - Electronic filing requirements.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 18 Conservation of Power and Water Resources 1 2013-04-01 2013-04-01 false Electronic filing... § 35.7 Electronic filing requirements. (a) General rule. All filings made in proceedings initiated... declarations or statements and electronic signatures. (c) Format requirements for electronic filing. The...

  7. 18 CFR 35.7 - Electronic filing requirements.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 18 Conservation of Power and Water Resources 1 2014-04-01 2014-04-01 false Electronic filing... § 35.7 Electronic filing requirements. (a) General rule. All filings made in proceedings initiated... declarations or statements and electronic signatures. (c) Format requirements for electronic filing. The...

  8. Status of the Planet Formation Imager (PFI) concept

    NASA Astrophysics Data System (ADS)

    Ireland, Michael J.; Monnier, John D.; Kraus, Stefan; Isella, Andrea; Minardi, Stefano; Petrov, Romain; ten Brummelaar, Theo; Young, John; Vasisht, Gautam; Mozurkewich, David; Rinehart, Stephen; Michael, Ernest A.; van Belle, Gerard; Woillez, Julien

    2016-08-01

    The Planet Formation Imager (PFI) project aims to image the period of planet assembly directly, resolving structures as small as a giant planet's Hill sphere. These images will be required in order to determine the key mechanisms for planet formation at the time when processes of grain growth, protoplanet assembly, magnetic fields, disk/planet dynamical interactions and complex radiative transfer all interact - making some planetary systems habitable and others inhospitable. We will present the overall vision for the PFI concept, focusing on the key technologies and requirements that are needed to achieve the science goals. Based on these key requirements, we will define a cost envelope range for the design and highlight where the largest uncertainties lie at this conceptual stage.

  9. Kepler Data Validation Time Series File: Description of File Format and Content

    NASA Technical Reports Server (NTRS)

    Mullally, Susan E.

    2016-01-01

    The Kepler space mission searches its time series data for periodic, transit-like signatures. The ephemerides of these events, called Threshold Crossing Events (TCEs), are reported in the TCE tables at the NASA Exoplanet Archive (NExScI). Those TCEs are then further evaluated to create planet candidates and populate the Kepler Objects of Interest (KOI) table, also hosted at the Exoplanet Archive. The search, evaluation and export of TCEs is performed by two pipeline modules, TPS (Transit Planet Search) and DV (Data Validation). TPS searches for the strongest, believable signal and then sends that information to DV to fit a transit model, compute various statistics, and remove the transit events so that the light curve can be searched for other TCEs. More on how this search is done and on the creation of the TCE table can be found in Tenenbaum et al. (2012), Seader et al. (2015), Jenkins (2002). For each star with at least one TCE, the pipeline exports a file that contains the light curves used by TPS and DV to find and evaluate the TCE(s). This document describes the content of these DV time series files, and this introduction provides a bit of context for how the data in these files are used by the pipeline.
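
    For readers who want to inspect these products, the DV time series files are distributed as FITS, with one binary-table extension per TCE for a given target; a minimal astropy sketch follows (the file name is hypothetical, and the column names in the comment are indicative only):

      from astropy.io import fits

      with fits.open("kplr001234567-20160128150956_dvt.fits") as hdul:
          hdul.info()               # list the extensions (one table per TCE)
          tce = hdul[1].data        # light-curve table for the first TCE
          print(tce.columns.names)  # e.g. TIME, LC_INIT, LC_DETREND, ... (assumed)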

  10. IIPImage: Large-image visualization

    NASA Astrophysics Data System (ADS)

    Pillay, Ruven

    2014-08-01

    IIPImage is an advanced, high-performance, feature-rich image server system that enables online access to full-resolution floating point (as well as other bit depth) images at terabyte scales. Paired with the VisiOmatic (ascl:1408.010) celestial image viewer, the system can comfortably handle gigapixel-size images as well as advanced image features such as 8-, 16-, and 32-bit depths, CIELAB colorimetric images, and scientific imagery such as multispectral images. Streaming is tile-based, which enables viewing, navigating, and zooming in real time around gigapixel-size images. Source images can be in either TIFF or JPEG2000 format. Whole images or regions within images can also be rapidly and dynamically resized and exported by the server from a single source image, without the need to store multiple files in various sizes.

  11. Context-dependent JPEG backward-compatible high-dynamic range image compression

    NASA Astrophysics Data System (ADS)

    Korshunov, Pavel; Ebrahimi, Touradj

    2013-10-01

    High-dynamic range (HDR) imaging is expected, together with ultrahigh definition and high frame rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available in the market. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for the evaluation of quality, file formats, and compression, as well as by the large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate the widespread use of HDR, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. Via a series of subjective evaluations, we demonstrate the dependency of the perceptual quality of the tone-mapped LDR images on the context: environmental factors, display parameters, and the image content itself. Based on the results of the subjective tests, we propose to extend the JPEG file format, the most popular image format, in a backward-compatible manner to also handle HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with state-of-the-art HDR image compression.
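
    To make the backward-compatible idea concrete, here is a minimal, self-contained Python sketch, not the authors' codec: it tone-maps a synthetic HDR array with a Reinhard-style operator (an assumption; the paper evaluates several tone mappers), stores the result as an ordinary JPEG, and computes the log-ratio residual that a real implementation would compress and embed in a JPEG application marker.

      import io
      import numpy as np
      from PIL import Image

      # Synthetic HDR scene: linear radiance values spanning several decades.
      rng = np.random.default_rng(0)
      hdr = rng.uniform(0.01, 50.0, size=(64, 64, 3))

      # Global Reinhard-style tone mapping plus gamma gives a viewable LDR base.
      ldr = (hdr / (1.0 + hdr)) ** (1.0 / 2.2)
      buf = io.BytesIO()
      Image.fromarray((ldr * 255).astype(np.uint8)).save(buf, format="JPEG", quality=90)

      # Residual between the true HDR values and those recoverable from the JPEG
      # base; a real codec would compress this and embed it in an APP marker.
      buf.seek(0)
      decoded = np.asarray(Image.open(buf), dtype=np.float64) / 255.0
      base_linear = np.clip(decoded, 1e-4, 1.0) ** 2.2      # undo gamma
      recovered = base_linear / (1.0 - np.clip(base_linear, 0, 0.999))  # undo tone map
      residual = np.log(hdr / np.clip(recovered, 1e-6, None))
      print("JPEG base bytes:", buf.getbuffer().nbytes,
            "residual range:", residual.min(), residual.max())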

  12. Transmission of digital images within the NTSC analog format

    DOEpatents

    Nickel, George H.

    2004-06-15

    HDTV and NTSC compatible image communication is done in a single NTSC channel bandwidth. Luminance and chrominance image data of a scene to be transmitted is obtained. The image data is quantized and digitally encoded to form digital image data in HDTV transmission format having low-resolution terms and high-resolution terms. The low-resolution digital image data terms are transformed to a voltage signal corresponding to NTSC color subcarrier modulation with retrace blanking and color bursts to form a NTSC video signal. The NTSC video signal and the high-resolution digital image data terms are then transmitted in a composite NTSC video transmission. In a NTSC receiver, the NTSC video signal is processed directly to display the scene. In a HDTV receiver, the NTSC video signal is processed to invert the color subcarrier modulation to recover the low-resolution terms, where the recovered low-resolution terms are combined with the high-resolution terms to reconstruct the scene in a high definition format.

  13. Image Formation in Lenses and Mirrors, a Complete Representation

    ERIC Educational Resources Information Center

    Bartlett, Albert A.

    1976-01-01

    Provides tables and graphs that give a complete and simple picture of the relationships of image distance, object distance, and magnification in all formations of images by simple lenses and mirrors. (CP)
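
    The relationships tabulated in such a representation follow from the standard thin-lens (and mirror) equation, 1/f = 1/d_o + 1/d_i, with magnification m = -d_i/d_o; a short sketch evaluates a few object distances for a converging lens:

      def image_distance(f, d_o):
          """Thin-lens equation 1/f = 1/d_o + 1/d_i, solved for d_i."""
          return 1.0 / (1.0 / f - 1.0 / d_o)

      f = 10.0            # focal length in cm (converging lens)
      for d_o in (30.0, 20.0, 15.0, 12.0):
          d_i = image_distance(f, d_o)
          m = -d_i / d_o  # negative magnification means an inverted image
          print(f"d_o={d_o:5.1f}  d_i={d_i:6.2f}  m={m:6.2f}")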

  14. High-performance web viewer for cardiac images

    NASA Astrophysics Data System (ADS)

    dos Santos, Marcelo; Furuie, Sergio S.

    2004-04-01

    With the advent of digital devices for medical diagnosis, the use of regular film in radiology has decreased. Thus, the management and handling of medical images in digital format has become an important and critical task. In cardiology, for example, the main difficulty is displaying dynamic images with the appropriate color palette and frame rate used in the acquisition process by cath, angio, and echo systems. Another difficulty is handling large images in memory on any existing personal computer, including thin clients. In this work we present a web-based application that carries out these tasks with robustness and excellent performance, without burdening the server and network. This application provides near-diagnostic-quality display of cardiac images stored as DICOM 3.0 files via a web browser, and provides a set of resources for viewing still and dynamic images. It can access image files from local disks or over a network connection. Its features include real-time playback, dynamic thumbnail viewing during loading, access to patient database information, image processing tools, linear and angular measurements, on-screen annotations, image printing, and exporting DICOM images to other image formats, among others, all within a pleasant, user-friendly interface inside a web browser by means of a Java application. This approach offers some advantages over most medical image viewers, such as ease of installation, integration with other systems by means of public and standardized interfaces, platform independence, and efficient manipulation and display of medical images, all with high performance.
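
    As an illustration of the kind of access such a viewer needs, a minimal pydicom sketch (the file name is hypothetical, and the attributes are assumed to be present in the file) reads a multi-frame cardiac DICOM file and recovers the frame count and recommended display frame rate:

      import pydicom

      ds = pydicom.dcmread("angio_series.dcm")      # hypothetical multi-frame file
      frames = int(ds.get("NumberOfFrames", 1))
      rate = ds.get("RecommendedDisplayFrameRate")  # cine frame rate, if recorded
      pixels = ds.pixel_array                       # shape (frames, rows, cols[, 3])
      print(ds.Modality, frames, rate, pixels.shape)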

  15. Efficacy of ProTaper universal retreatment files in removing filling materials during root canal retreatment.

    PubMed

    Giuliani, Valentina; Cocchetti, Roberto; Pagavino, Gabriella

    2008-11-01

    The aim of this study was to evaluate the efficacy of the ProTaper Universal System rotary retreatment system, ProFile 0.06 instruments, and hand instruments (K-files) in the removal of root filling materials. Forty-two extracted single-rooted anterior teeth were selected. The root canals were enlarged with nickel-titanium (NiTi) rotary files, filled with gutta-percha and sealer, and randomly divided into 3 experimental groups. The filling materials were removed with solvent in conjunction with one of the following devices and techniques: the ProTaper Universal System for retreatment, ProFile 0.06, or hand instruments (K-files). The roots were longitudinally sectioned, and the root surfaces were photographed. The images were captured in JPEG format; the areas of the remaining filling materials and the time required for removing the gutta-percha and sealer were compared using the nonparametric one-way Kruskal-Wallis test and the Tukey-Kramer test, respectively. The ProTaper Universal System retreatment files showed the best results for removing filling materials, and the ProFile rotary instruments yielded better root canal cleanliness than the hand instruments, although the difference was not statistically significant. The ProTaper Universal System for retreatment and the ProFile rotary instruments worked significantly faster than the K-files. The ProTaper Universal System retreatment files left cleaner root canal walls than the K-file hand instruments and the ProFile rotary instruments, although none of the devices used guaranteed complete removal of the filling materials. The rotary NiTi systems proved to be faster than hand instruments in removing root filling materials.

  16. EROS main image file - A picture perfect database for Landsat imagery and aerial photography

    NASA Technical Reports Server (NTRS)

    Jack, R. F.

    1984-01-01

    The Earth Resources Observation System (EROS) Program was established by the U.S. Department of the Interior in 1966 under the administration of the Geological Survey. It is primarily concerned with the application of remote sensing techniques for the management of natural resources. The retrieval system employed to search the EROS database is called INORAC (Inquiry, Ordering, and Accounting). A description is given of the types of images identified in EROS, taking into account Landsat imagery, Skylab images, Gemini/Apollo photography, and NASA aerial photography. Attention is given to retrieval commands, geographic coordinate searching, refinement techniques, various online functions, and questions regarding the access to the EROS Main Image File.

  17. Incidence of apical crack formation and propagation during removal of root canal filling materials with different engine driven nickel-titanium instruments.

    PubMed

    Özyürek, Taha; Tek, Vildan; Yılmaz, Koray; Uslu, Gülşah

    2017-11-01

    To determine the incidence of crack formation and propagation in apical root dentin after retreatment procedures performed using ProTaper Universal Retreatment (PTR), Mtwo-R, ProTaper Next (PTN), and Twisted File Adaptive (TFA) systems. The study consisted of 120 extracted mandibular premolars. One millimeter from the apex of each tooth was ground perpendicular to the long axis of the tooth, and the apical surface was polished. Twenty teeth served as the negative control group. One hundred teeth were prepared, obturated, and then divided into 5 retreatment groups. The retreatment procedures were performed using the following files: PTR, Mtwo-R, PTN, TFA, and hand files. After filling material removal, apical enlargement was done using apical size 0.50 mm ProTaper Universal (PTU), Mtwo, PTN, TFA, and hand files. Digital images of the apical root surfaces were recorded before preparation, after preparation, after obturation, after filling removal, and after apical enlargement using a stereomicroscope. The images were then inspected for the presence of new apical cracks and crack propagation. Data were analyzed with χ² tests using SPSS 21.0 software. New cracks and crack propagation occurred in all the experimental groups during the retreatment process. Nickel-titanium rotary file systems caused significantly more apical crack formation and propagation than the hand files. The PTU system caused significantly more apical cracks than the other groups after the apical enlargement stage. This study showed that retreatment procedures and apical enlargement after the use of retreatment files can cause crack formation and propagation in apical dentin.

  18. Optimizing Cloud Based Image Storage, Dissemination and Processing Through Use of Mrf and Lerc

    NASA Astrophysics Data System (ADS)

    Becker, Peter; Plesea, Lucian; Maurer, Thomas

    2016-06-01

    The volume and number of geospatial images being collected continue to increase exponentially with the ever-increasing number of airborne and satellite imaging platforms and the increasing rate of data collection. As a result, the cost of the fast storage required to provide access to the imagery is a major cost factor in enterprise image management solutions that handle, process, and disseminate the imagery and the information extracted from it. Cloud-based object storage promises significantly lower-cost, elastic storage for this imagery, but it also brings disadvantages in terms of greater latency for data access and the lack of traditional file access. Although traditional file formats such as GeoTIFF, JPEG2000, and NITF can be downloaded from such object storage, their structure and available compression are not optimal and access performance is curtailed. This paper provides details on a solution that utilizes new open image formats for the storage of and access to geospatial imagery, optimized for cloud storage and processing. MRF (Meta Raster Format) is optimized for large collections of scenes such as those acquired from optical sensors. The format enables optimized data access from cloud storage, along with the use of new compression options which cannot easily be added to existing formats. The paper also provides an overview of LERC, a new image compression method that can be used with MRF and that provides very good lossless and controlled lossy compression.
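
    A sketch of how such a conversion can be scripted with GDAL, whose recent releases include an MRF driver with LERC compression; the file names and creation options shown are assumptions to be checked against the driver documentation:

      from osgeo import gdal

      # Assumes a GDAL build that includes the MRF driver with LERC support.
      gdal.Translate(
          "scene.mrf",
          "scene.tif",
          format="MRF",
          creationOptions=["COMPRESS=LERC", "OPTIONS=LERC_PREC=0.001", "BLOCKSIZE=512"],
      )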

  19. BOREAS Level-2 NS001 TMS Imagery: Reflectance and Temperature in BSQ Format

    NASA Technical Reports Server (NTRS)

    Lobitz, Brad; Spanner, Michael; Hall, Forrest G. (Editor); Newcomer, Jeffrey A. (Editor); Strub, Richard

    2000-01-01

    For BOREAS, the NS001 TMS images, along with the other remotely sensed data, were collected to provide spatially extensive information over the primary study areas. This information includes detailed land cover and biophysical parameter maps such as fPAR and LAI. Collection of the NS001 images occurred over the study areas during the 1994 field campaigns. The level-2 NS001 data are atmospherically corrected versions of some of the best original NS001 imagery and cover the dates of 19-Apr-1994, 07-Jun-1994, 21-Jul-1994, 08-Aug-1994, and 16-Sep-1994. The data are not geographically/geometrically corrected; however, files of relative X and Y coordinates for each image pixel were derived by using the C130 INS data in an NS001 scan model. The data are provided in binary image format files.

  20. Image processing techniques for digital orthophotoquad production

    USGS Publications Warehouse

    Hood, Joy J.; Ladner, L. J.; Champion, Richard A.

    1989-01-01

    Orthophotographs have long been recognized for their value as supplements or alternatives to standard maps. Recent trends toward digital cartography have resulted in efforts by the US Geological Survey to develop a digital orthophotoquad production system. Digital image files were created by scanning color infrared photographs on a microdensitometer. Rectification techniques were applied to remove tilt and relief displacement, thereby creating digital orthophotos. Image mosaicking software was then used to join the rectified images, producing digital orthophotos in quadrangle format.

  1. Simple Ontology Format (SOFT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorokine, Alexandre

    2011-10-01

    The Simple Ontology Format (SOFT) library and file format specification provide a set of simple tools for developing and maintaining ontologies. The library, implemented as a Perl module, supports parsing and verification of files in SOFT format, operations on ontologies (adding, removing, or filtering of entities), and conversion of ontologies into other formats. SOFT allows users to quickly create an ontology using only a basic text editor, verify it, and portray it in a graph layout system using customized styles.

  2. Target-Oriented High-Resolution SAR Image Formation via Semantic Information Guided Regularizations

    NASA Astrophysics Data System (ADS)

    Hou, Biao; Wen, Zaidao; Jiao, Licheng; Wu, Qian

    2018-04-01

    The sparsity-regularized synthetic aperture radar (SAR) imaging framework has shown remarkable performance in generating feature-enhanced high-resolution images, in which a sparsity-inducing regularizer is involved by exploiting the sparsity priors of some visual features in the underlying image. However, since simple priors on low-level features are insufficient to describe the different semantic contents in the image, this type of regularizer is incapable of distinguishing between the target of interest and unconcerned background clutter. As a consequence, the features belonging to the target and the clutter are affected simultaneously in the generated image, without regard to their underlying semantic labels. To address this problem, we propose a novel semantic-information-guided framework for target-oriented SAR image formation, which aims at enhancing the scatterers of the target of interest while suppressing the background clutter. First, we develop a new semantics-specific regularizer for image formation by exploiting the statistical properties of different semantic categories in a target-scene SAR image. In order to infer the semantic label for each pixel in an unsupervised way, we further introduce a novel high-level prior-driven regularizer and some semantic causal rules derived from prior knowledge. Finally, our regularized framework for image formation is derived as a simple iteratively reweighted ℓ1 minimization problem which can be conveniently solved by many off-the-shelf solvers. Experimental results demonstrate the effectiveness and superiority of our framework for SAR image formation in terms of target enhancement and clutter suppression, compared with the state of the art. Additionally, the proposed framework opens a new direction of devoting machine learning strategies to image formation, which can benefit subsequent decision-making tasks.
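
    The final reduction, iteratively reweighted ℓ1 minimization, is a generic technique; the self-contained numpy sketch below shows it on a toy sparse-recovery problem (a weighted ISTA inner loop with a reweighting outer loop), not the authors' SAR-specific formulation:

      import numpy as np

      def soft(v, t):
          # Soft-thresholding: the proximal operator of a (weighted) l1 penalty.
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      def irl1(A, y, lam=0.1, outer=5, inner=200, eps=1e-3):
          # Generic iteratively reweighted l1 recovery of a sparse x from y ~ A x.
          x = np.zeros(A.shape[1])
          w = np.ones_like(x)
          step = 1.0 / np.linalg.norm(A, 2) ** 2  # ISTA step from the Lipschitz constant
          for _ in range(outer):
              for _ in range(inner):
                  x = soft(x - step * A.T @ (A @ x - y), step * lam * w)
              w = 1.0 / (np.abs(x) + eps)  # small coefficients get penalized more
          return x

      rng = np.random.default_rng(1)
      A = rng.standard_normal((80, 200))
      x_true = np.zeros(200)
      x_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8) + 2.0
      y = A @ x_true + 0.01 * rng.standard_normal(80)
      x_hat = irl1(A, y)
      print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))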

  3. 47 CFR 1.10008 - What are IBFS file numbers?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Bureau Filing System § 1.10008 What are IBFS file numbers? (a) We assign file numbers to electronic... information, see The International Bureau Filing System File Number Format Public Notice, DA-04-568 (released... 47 Telecommunication 1 2010-10-01 2010-10-01 false What are IBFS file numbers? 1.10008 Section 1...

  4. 47 CFR 1.10008 - What are IBFS file numbers?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Bureau Filing System § 1.10008 What are IBFS file numbers? (a) We assign file numbers to electronic... information, see The International Bureau Filing System File Number Format Public Notice, DA-04-568 (released... 47 Telecommunication 1 2011-10-01 2011-10-01 false What are IBFS file numbers? 1.10008 Section 1...

  5. PeakML/mzMatch: a file format, Java library, R library, and tool-chain for mass spectrometry data analysis.

    PubMed

    Scheltema, Richard A; Jankevics, Andris; Jansen, Ritsert C; Swertz, Morris A; Breitling, Rainer

    2011-04-01

    The recent proliferation of high-resolution mass spectrometers has generated a wealth of new data analysis methods. However, flexible integration of these methods into configurations best suited to the research question is hampered by heterogeneous file formats and monolithic software development. The mzXML, mzData, and mzML file formats have enabled uniform access to unprocessed raw data. In this paper we present our efforts to produce an equally simple and powerful format, PeakML, to uniformly exchange processed intermediary and result data. To demonstrate the versatility of PeakML, we have developed an open source Java toolkit for processing, filtering, and annotating mass spectra in a customizable pipeline (mzMatch), as well as a user-friendly data visualization environment (PeakML Viewer). The PeakML format in particular enables the flexible exchange of processed data between software created by different groups or companies, as we illustrate by providing a PeakML-based integration of the widely used XCMS package with mzMatch data processing tools. As an added advantage, downstream analysis can benefit from direct access to the full mass trace information underlying summarized mass spectrometry results, providing the user with the means to rapidly verify results. The PeakML/mzMatch software is freely available at http://mzmatch.sourceforge.net, with documentation, tutorials, and a community forum.

  6. 41 CFR 301-52.3 - Am I required to file a travel claim in a specific format and must the claim be signed?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... that time, you must file your travel claim in the format prescribed by your agency. If the prescribed... travel claim in a specific format and must the claim be signed? 301-52.3 Section 301-52.3 Public Contracts and Property Management Federal Travel Regulation System TEMPORARY DUTY (TDY) TRAVEL ALLOWANCES...

  7. 41 CFR 301-52.3 - Am I required to file a travel claim in a specific format and must the claim be signed?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... that time, you must file your travel claim in the format prescribed by your agency. If the prescribed... travel claim in a specific format and must the claim be signed? 301-52.3 Section 301-52.3 Public Contracts and Property Management Federal Travel Regulation System TEMPORARY DUTY (TDY) TRAVEL ALLOWANCES...

  8. 41 CFR 301-52.3 - Am I required to file a travel claim in a specific format and must the claim be signed?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... that time, you must file your travel claim in the format prescribed by your agency. If the prescribed... travel claim in a specific format and must the claim be signed? 301-52.3 Section 301-52.3 Public Contracts and Property Management Federal Travel Regulation System TEMPORARY DUTY (TDY) TRAVEL ALLOWANCES...

  9. 41 CFR 301-52.3 - Am I required to file a travel claim in a specific format and must the claim be signed?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... that time, you must file your travel claim in the format prescribed by your agency. If the prescribed... travel claim in a specific format and must the claim be signed? 301-52.3 Section 301-52.3 Public Contracts and Property Management Federal Travel Regulation System TEMPORARY DUTY (TDY) TRAVEL ALLOWANCES...

  10. 41 CFR 301-52.3 - Am I required to file a travel claim in a specific format and must the claim be signed?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... that time, you must file your travel claim in the format prescribed by your agency. If the prescribed... travel claim in a specific format and must the claim be signed? 301-52.3 Section 301-52.3 Public Contracts and Property Management Federal Travel Regulation System TEMPORARY DUTY (TDY) TRAVEL ALLOWANCES...

  11. Challenges for data storage in medical imaging research.

    PubMed

    Langer, Steve G

    2011-04-01

    Researchers in medical imaging face multiple challenges in storing, indexing, maintaining the viability of, and sharing their data. Addressing all these concerns requires a constellation of tools, but not all of them need to be local to the site. In particular, the data storage challenges faced by researchers can begin to require professional information technology skills. With limited human resources and funds, the medical imaging researcher may be better served by an outsourcing strategy for some management aspects. This paper outlines an approach to manage the main objectives faced by medical imaging scientists whose work includes processing and data mining of non-standard file formats, and relating those files to their DICOM-standard descendants. The capacity of the approach scales as the researcher's needs grow by leveraging the on-demand provisioning ability of cloud computing.

  12. Development of a user-friendly system for image processing of electron microscopy by integrating a web browser and PIONE with Eos.

    PubMed

    Tsukamoto, Takafumi; Yasunaga, Takuo

    2014-11-01

    Eos (Extensible object-oriented system) is a powerful application for image processing of electron micrographs. Eos ordinarily works only with character user interfaces (CUI) under operating systems (OS) such as OS X or Linux, which is not user-friendly: users of Eos need to be expert at image processing of electron micrographs and to have some knowledge of computer science as well. However, not everyone who requires Eos is an expert in CUI. We therefore extended Eos to an OS-independent web system with graphical user interfaces (GUI) by integrating a web browser. The advantage of using a web browser is not only that it extends Eos with a GUI, but also that it allows Eos to work in a distributed computational environment. Using Ajax (Asynchronous JavaScript and XML) technology, we implemented a more comfortable user interface in the web browser. Eos has more than 400 commands related to image processing for electron microscopy, and the usage of each command differs from the others. Since the beginning of development, Eos has managed its user interfaces through an interface definition file, 'OptionControlFile', written in CSV (Comma-Separated Value) format; each command has an 'OptionControlFile' that holds the information needed for interface generation and describes its usage. The developed GUI system, called 'Zephyr' (Zone for Easy Processing of HYpermedia Resources), also accesses 'OptionControlFile' and produces a web user interface automatically, because this mechanism is mature and convenient. The basic client-side actions were implemented properly and support auto-generation of web forms, with functions for execution, image preview, and file upload to a web server. Thus the system can execute Eos commands with options unique to each command and perform image analysis. Problems remain concerning the image file format for visualization and the workspace for analysis: the image file format information is useful to check whether the input/output file is correct and we also

  13. Synthetic aperture radar target detection, feature extraction, and image formation techniques

    NASA Technical Reports Server (NTRS)

    Li, Jian

    1994-01-01

    This report presents new algorithms for target detection, feature extraction, and image formation with the synthetic aperture radar (SAR) technology. For target detection, we consider target detection with SAR and coherent subtraction. We also study how the image false alarm rates are related to the target template false alarm rates when target templates are used for target detection. For feature extraction from SAR images, we present a computationally efficient eigenstructure-based 2D-MODE algorithm for two-dimensional frequency estimation. For SAR image formation, we present a robust parametric data model for estimating high resolution range signatures of radar targets and for forming high resolution SAR images.

  14. Verification of respiratory-gated radiotherapy with new real-time tumour-tracking radiotherapy system using cine EPID images and a log file.

    PubMed

    Shiinoki, Takehiro; Hanazawa, Hideki; Yuasa, Yuki; Fujimoto, Koya; Uehara, Takuya; Shibuya, Keiko

    2017-02-21

    A combined system comprising the TrueBeam linear accelerator and a new real-time tumour-tracking radiotherapy system, SyncTraX, was installed at our institution. The objectives of this study were to develop a method for the verification of respiratory-gated radiotherapy with SyncTraX using cine electronic portal imaging device (EPID) images and a log file, and to verify this treatment in clinical cases. Respiratory-gated radiotherapy was performed using TrueBeam and the SyncTraX system. Cine EPID images and a log file were acquired for a phantom and three patients during the course of the treatment. Digitally reconstructed radiographs (DRRs) were created for each treatment beam using a planning CT set. The cine EPID images, log file, and DRRs were analysed using software developed in-house. For the phantom case, the accuracy of the proposed method was evaluated to verify the respiratory-gated radiotherapy. For the clinical cases, the intra- and inter-fractional variations of the fiducial marker used as an internal surrogate were calculated to evaluate the gating accuracy and set-up uncertainty in the superior-inferior (SI), anterior-posterior (AP), and left-right (LR) directions. The proposed method achieved high accuracy for the phantom verification. For the clinical cases, the intra- and inter-fractional variations of the fiducial marker were ⩽3 mm and ±3 mm in the SI, AP, and LR directions. We proposed a method for the verification of respiratory-gated radiotherapy with SyncTraX using cine EPID images and a log file, and showed that this treatment is performed with high accuracy in clinical cases.

  15. IMAGEP - A FORTRAN ALGORITHM FOR DIGITAL IMAGE PROCESSING

    NASA Technical Reports Server (NTRS)

    Roth, D. J.

    1994-01-01

    IMAGEP is a FORTRAN computer algorithm containing various image processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines. Within the subroutines are other routines, also selected via keyboard. Some of the functions performed by IMAGEP include digitization, storage, and retrieval of images; image enhancement by contrast expansion, addition and subtraction, magnification, inversion, and bit shifting; display and movement of the cursor; display of the grey level histogram of an image; and display of the variation of grey level intensity as a function of image position. This algorithm has possible scientific, industrial, and biomedical applications in material flaw studies, steel and ore analysis, and pathology, respectively. IMAGEP is written in VAX FORTRAN for DEC VAX series computers running VMS. The program requires the use of a Grinnell 274 image processor which can be obtained from Mark McCloud Associates, Campbell, CA. An object library of the required GMR series software is included on the distribution media. IMAGEP requires 1Mb of RAM for execution. The standard distribution medium for this program is a 1600 BPI 9-track magnetic tape in VAX FILES-11 format. It is also available on a TK50 tape cartridge in VAX FILES-11 format. This program was developed in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation.

  16. BOREAS TE-20 Soils Data Over the NSA-MSA and Tower Sites in Raster Format

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Veldhuis, Hugo; Knapp, David

    2000-01-01

    The BOREAS TE-20 team collected several data sets for use in developing and testing models of forest ecosystem dynamics. This data set was gridded from vector layers of soil maps that were received from Dr. Hugo Veldhuis, who did the original mapping in the field during 1994. The vector layers were gridded into raster files that cover the NSA-MSA and tower sites. The data are stored in binary, image format files. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).

  17. Cytoscape file of chemical networks

    EPA Pesticide Factsheets

    The maximum connectivity scores of pairwise chemical conditions summarized from Cmap results, in a file in Cytoscape format (http://www.cytoscape.org/). The figures in the publication were generated from this file, which is formed by importing the eight text files therein. This dataset is associated with the following publication: Wang, R., A. Biales, N. Garcia-Reyero, E. Perkins, D. Villeneuve, G. Ankley, and D. Bencic. Fish Connectivity Mapping: Linking Chemical Stressors by Their MOA-Driven Transcriptomic Profiles. BMC Genomics. BioMed Central Ltd, London, UK, 17(84): 1-20, (2016).

  18. Image formation in diffusion MRI: A review of recent technical developments

    PubMed Central

    Miller, Karla L.

    2017-01-01

    Diffusion magnetic resonance imaging (MRI) is a standard imaging tool in clinical neurology, and is becoming increasingly important for neuroscience studies due to its ability to depict complex neuroanatomy (eg, white matter connectivity). Single‐shot echo‐planar imaging is currently the predominant formation method for diffusion MRI, but suffers from blurring, distortion, and low spatial resolution. A number of methods have been proposed to address these limitations and improve diffusion MRI acquisition. Here, the recent technical developments for image formation in diffusion MRI are reviewed. We discuss three areas of advance in diffusion MRI: improving image fidelity, accelerating acquisition, and increasing the signal‐to‐noise ratio. Level of Evidence: 5 Technical Efficacy: Stage 1 J. MAGN. RESON. IMAGING 2017;46:646–662 PMID:28194821

  19. 77 FR 59692 - 2014 Diversity Immigrant Visa Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-28

    ... the E-DV system. The entry will not be accepted and must be resubmitted. Group or family photographs... must be in the Joint Photographic Experts Group (JPEG) format. Image File Size: The maximum file size...). Image File Format: The image must be in the Joint Photographic Experts Group (JPEG) format. Image File...

  20. The browse file of NASA/JPL quick-look radar images from the Loch Linnhe 1989 experiment

    NASA Technical Reports Server (NTRS)

    Brown, Walter E., Jr. (Editor)

    1989-01-01

    The Jet Propulsion Laboratory (JPL) Aircraft Synthetic Aperture Radar (AIRSAR) was deployed to Scotland to obtain radar imagery of ship wakes generated in Loch Linnhe. These observations were part of a joint US and UK experiment to study the internal waves generated by ships under partially controlled conditions. The AIRSAR was mounted on the NASA-Ames DC-8 aircraft. The data acquisition sequence consisted of 8 flights, each about 6 hours in duration, wherein 24 observations of the instrumented site were made on each flight. This Browse File provides the experimenters with a reference of the real-time imagery (approximately 100 images) obtained on the 38-deg track. These radar images are copies of those obtained at the time of observation and show the general geometry of the ship wake features. To speed up processing during this flight, the images were all processed around zero Doppler, and thus azimuth ambiguities often occur when the drift angle (yaw) exceeded a few degrees. However, even with the various shortcomings, it is believed that the experimenter will find the Browse File useful in establishing a basis for further investigations.

  1. CytometryML and other data formats

    NASA Astrophysics Data System (ADS)

    Leif, Robert C.

    2006-02-01

    Cytology automation and research will be enhanced by the creation of a common data format. This data format would provide the pathology and research communities with a uniform way for annotating and exchanging images, flow cytometry, and associated data. This specification and/or standard will include descriptions of the acquisition device, staining, the binary representations of the image and list-mode data, the measurements derived from the image and/or the list-mode data, and descriptors for clinical/pathology and research. An international, vendor-supported, non-proprietary specification will allow pathologists, researchers, and companies to develop and use image capture/analysis software, as well as list-mode analysis software, without worrying about incompatibilities between proprietary vendor formats. Presently, efforts to create specifications and/or descriptions of these formats include the Laboratory Digital Imaging Project (LDIP) Data Exchange Specification; extensions to the Digital Imaging and Communications in Medicine (DICOM); Open Microscopy Environment (OME); Flowcyt, an extension to the present Flow Cytometry Standard (FCS); and CytometryML. The feasibility of creating a common data specification for digital microscopy and flow cytometry in a manner consistent with its use for medical devices and interoperability with both hospital information and picture archiving systems has been demonstrated by the creation of the CytometryML schemas. The feasibility of creating a software system for digital microscopy has been demonstrated by the OME. CytometryML consists of schemas that describe instruments and their measurements. These instruments include digital microscopes and flow cytometers. Optical components including the instruments' excitation and emission parts are described. The description of the measurements made by these instruments includes the tagged molecule, data acquisition subsystem, and the format of the list-mode and/or image data. Many

  2. 14 CFR 221.195 - Requirement for filing printed material.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS TARIFFS Electronically Filed Tariffs § 221.195 Requirement for filing printed material. (a) Any tariff, or revision thereto, filed in paper format which accompanies....190(b). Further, such paper tariff, or revision thereto, shall be filed in accordance with the...

  3. Informatics in radiology: automated structured reporting of imaging findings using the AIM standard and XML.

    PubMed

    Zimmerman, Stefan L; Kim, Woojin; Boonn, William W

    2011-01-01

    Quantitative and descriptive imaging data are a vital component of the radiology report and are frequently of paramount importance to the ordering physician. Unfortunately, current methods of recording these data in the report are both inefficient and error prone. In addition, the free-text, unstructured format of a radiology report makes aggregate analysis of data from multiple reports difficult or even impossible without manual intervention. A structured reporting workflow has been developed that allows quantitative data created at an advanced imaging workstation to be seamlessly integrated into the radiology report with minimal radiologist intervention. As an intermediary step between the workstation and the reporting software, quantitative and descriptive data are converted into an Extensible Markup Language (XML) file in a standardized format specified by the Annotation and Image Markup (AIM) project of the National Institutes of Health Cancer Biomedical Informatics Grid. The AIM standard was created to allow image annotation data to be stored in a uniform machine-readable format. These XML files containing imaging data can also be stored in a local database for data mining and analysis. This structured workflow solution has the potential to improve radiologist efficiency, reduce errors, and facilitate storage of quantitative and descriptive imaging data for research. Copyright © RSNA, 2011.
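
    To illustrate the general idea of machine-readable annotation files, the small Python sketch below serializes one measurement into XML; the element and attribute names are invented for illustration and do NOT reproduce the actual AIM schema, which is far richer:

      import xml.etree.ElementTree as ET

      annotation = ET.Element("ImageAnnotation", uid="1.2.3.4", label="liver lesion")
      meas = ET.SubElement(annotation, "Measurement", name="longest diameter")
      ET.SubElement(meas, "Value", unit="mm").text = "23.4"
      ET.SubElement(annotation, "ImageReference",
                    sopInstanceUID="1.2.840.113619.2.55.0")  # hypothetical UID

      ET.ElementTree(annotation).write("annotation.xml", encoding="utf-8",
                                       xml_declaration=True)
      print(ET.tostring(annotation, encoding="unicode"))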

  4. Verification of respiratory-gated radiotherapy with new real-time tumour-tracking radiotherapy system using cine EPID images and a log file

    NASA Astrophysics Data System (ADS)

    Shiinoki, Takehiro; Hanazawa, Hideki; Yuasa, Yuki; Fujimoto, Koya; Uehara, Takuya; Shibuya, Keiko

    2017-02-01

    A combined system comprising the TrueBeam linear accelerator and a new real-time tumour-tracking radiotherapy system, SyncTraX, was installed at our institution. The objectives of this study were to develop a method for the verification of respiratory-gated radiotherapy with SyncTraX using cine electronic portal imaging device (EPID) images and a log file, and to verify this treatment in clinical cases. Respiratory-gated radiotherapy was performed using TrueBeam and the SyncTraX system. Cine EPID images and a log file were acquired for a phantom and three patients during the course of the treatment. Digitally reconstructed radiographs (DRRs) were created for each treatment beam using a planning CT set. The cine EPID images, log file, and DRRs were analysed using software developed in-house. For the phantom case, the accuracy of the proposed method was evaluated to verify the respiratory-gated radiotherapy. For the clinical cases, the intra- and inter-fractional variations of the fiducial marker used as an internal surrogate were calculated to evaluate the gating accuracy and set-up uncertainty in the superior-inferior (SI), anterior-posterior (AP), and left-right (LR) directions. The proposed method achieved high accuracy for the phantom verification. For the clinical cases, the intra- and inter-fractional variations of the fiducial marker were ⩽3 mm and ±3 mm in the SI, AP, and LR directions. We proposed a method for the verification of respiratory-gated radiotherapy with SyncTraX using cine EPID images and a log file, and showed that this treatment is performed with high accuracy in clinical cases. This work was partly presented at the 58th Annual Meeting of the American Association of Physicists in Medicine.

  5. BOREAS RSS-7 Landsat TM LAI Images of the SSA and NSA

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Chen, Jing; Cihlar, Josef

    2000-01-01

    The BOReal Ecosystem-Atmosphere Study Remote Sensing Science (BOREAS RSS-7) team used Landsat Thematic Mapper (TM) images processed at CCRS to produce images of Leaf Area Index (LAI) for the BOREAS study areas. Two images acquired on 06-Jun and 09-Aug-1991 were used for the SSA, and one image acquired on 09-Jun-1994 was used for the NSA. The LAI images are based on ground measurements and Landsat TM Reduced Simple Ratio (RSR) images. The data are stored in binary image-format files.

  6. Formation of the image on the receiver of thermal radiation

    NASA Astrophysics Data System (ADS)

    Akimenko, Tatiana A.

    2018-04-01

    The formation of the thermal picture of the observed scene, together with verification of the quality of the thermal images obtained, is one of the important stages of the technological process that determine the quality of a thermal imaging observation system. In this article we propose a model for the formation of a thermal picture of a scene that takes into account the features of the object of observation as the source of the signal, and the transmission of the signal through the physical elements of the thermal imaging system, which process the signal at the optical, photoelectronic, and electronic stages and thereby determine the final parameters of the signal and its compliance with the requirements for thermal information and measurement systems.

  7. BOREAS Regional Soils Data in Raster Format and AEAC Projection

    NASA Technical Reports Server (NTRS)

    Monette, Bryan; Knapp, David; Hall, Forrest G. (Editor); Nickeson, Jaime (Editor)

    2000-01-01

    This data set was gridded by BOREAS Information System (BORIS) Staff from a vector data set received from the Canadian Soil Information System (CanSIS). The original data came in two parts that covered Saskatchewan and Manitoba. The data were gridded and merged into one data set of 84 files covering the BOREAS region. The data were gridded into the AEAC projection. Because the mapping of the two provinces was done separately in the original vector data, there may be discontinuities in some of the soil layers because of different interpretations of certain soil properties. The data are stored in binary, image format files.

  8. A Study of NetCDF as an Approach for High Performance Medical Image Storage

    NASA Astrophysics Data System (ADS)

    Magnus, Marcone; Coelho Prado, Thiago; von Wangenhein, Aldo; de Macedo, Douglas D. J.; Dantas, M. A. R.

    2012-02-01

    The spread of telemedicine systems increases every day. Systems and PACS based on DICOM images have become common. This rise reflects the need to develop new storage systems that are more efficient and have lower computational costs. With this in mind, this article discusses a study of the NetCDF data format as the basic platform for the storage of DICOM images. The case study compares an ordinary database, HDF5, and NetCDF for storing the medical images. Empirical results, using a real set of images, indicate that the time to retrieve images from NetCDF for large-scale images has a higher latency compared to the other two methods. In addition, the latency is proportional to the file size, which represents a drawback for a telemedicine system that is characterized by a large number of large image files.
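
    As a rough illustration of the storage approach studied, the sketch below copies one DICOM image's pixel matrix (plus a couple of header fields) into a NetCDF-4 variable. It assumes the pydicom and netCDF4 Python packages; the file names and the choice of attributes to carry over are assumptions of this example, not details from the paper.

        import pydicom
        from netCDF4 import Dataset

        ds = pydicom.dcmread("slice0001.dcm")          # source DICOM object
        pixels = ds.pixel_array                        # 2-D array of stored values

        nc = Dataset("archive.nc", "w", format="NETCDF4")
        nc.createDimension("row", pixels.shape[0])
        nc.createDimension("col", pixels.shape[1])
        var = nc.createVariable("image", pixels.dtype, ("row", "col"), zlib=True)
        var[:, :] = pixels                             # write the pixel matrix
        # keep the header fields needed to reinterpret the data later
        var.setncattr("SOPInstanceUID", str(ds.SOPInstanceUID))
        var.setncattr("RescaleSlope", float(getattr(ds, "RescaleSlope", 1.0)))
        nc.close()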

  9. Data management in pattern recognition and image processing systems

    NASA Technical Reports Server (NTRS)

    Zobrist, A. L.; Bryant, N. A.

    1976-01-01

    Data management considerations are important to any system which handles large volumes of data or where the manipulation of data is technically sophisticated. A particular problem is the introduction of image-formatted files into the mainstream of data processing applications. This report describes a comprehensive system for the manipulation of image, tabular, and graphical data sets which involve conversions between the various data types. A key characteristic is the use of image processing technology to accomplish data management tasks. Because of this, the term 'image-based information system' has been adopted.

  10. Adding Data Management Services to Parallel File Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandt, Scott

    2015-03-04

    The objective of this project, called DAMASC for “Data Management in Scientific Computing”, is to coalesce data management with parallel file system management to present a declarative interface to scientists for managing, querying, and analyzing extremely large data sets efficiently and predictably. Managing extremely large data sets is a key challenge of exascale computing. The overhead, energy, and cost of moving massive volumes of data demand designs where computation is close to storage. In current architectures, compute/analysis clusters access data in a physically separate parallel file system and largely leave it to the scientist to reduce data movement. Over the past decades the high-end computing community has adopted middleware with multiple layers of abstractions and specialized file formats such as NetCDF-4 and HDF5. These abstractions provide a limited set of high-level data processing functions, but have inherent functionality and performance limitations: middleware that provides access to the highly structured contents of scientific data files stored in the (unstructured) file systems can only optimize to the extent that file system interfaces permit; the highly structured formats of these files often impede native file system performance optimizations. We are developing Damasc, an enhanced high-performance file system with native rich data management services. Damasc will enable efficient queries and updates over files stored in their native byte-stream format while retaining the inherent performance of file system data storage via declarative queries and updates over views of underlying files. Damasc has four key benefits for the development of data-intensive scientific code: (1) applications can use important data-management services, such as declarative queries, views, and provenance tracking, that are currently available only within database systems; (2) the use of these services becomes easier, as they are provided within a familiar file

  11. Format( )MEDIC( )Input

    NASA Astrophysics Data System (ADS)

    Foster, K.

    1994-09-01

    This document is a description of a computer program called Format( )MEDIC( )Input. The purpose of this program is to allow the user to quickly reformat wind velocity data in the Model Evaluation Database (MEDb) into a reasonable 'first cut' set of MEDIC input files (MEDIC.nml, StnLoc.Met, and Observ.Met). The user is cautioned that these resulting input files must be reviewed for correctness and completeness. This program will not format MEDb data into a Problem Station Library or Problem Metdata File. A description of how the program reformats the data is provided, along with a description of the required and optional user input and a description of the resulting output files. A description of the MEDb is not provided here but can be found in the RAS Division Model Evaluation Database Description document.

  12. mzDB: A File Format Using Multiple Indexing Strategies for the Efficient Analysis of Large LC-MS/MS and SWATH-MS Data Sets*

    PubMed Central

    Bouyssié, David; Dubois, Marc; Nasso, Sara; Gonzalez de Peredo, Anne; Burlet-Schiltz, Odile; Aebersold, Ruedi; Monsarrat, Bernard

    2015-01-01

    The analysis and management of MS data, especially those generated by data-independent MS acquisition, exemplified by SWATH-MS, pose significant challenges for proteomics bioinformatics. The large size and vast amount of information inherent to these data sets need to be properly structured to enable an efficient and straightforward extraction of the signals used to identify specific target peptides. Standard XML-based formats are not well suited to large MS data files, for example, those generated by SWATH-MS, and compromise high-throughput data processing and storing. We developed mzDB, an efficient file format for large MS data sets. It relies on the SQLite software library and consists of a standardized and portable server-less single-file database. An optimized 3D indexing approach is adopted, where the LC-MS coordinates (retention time and m/z), along with the precursor m/z for SWATH-MS data, are used to query the database for data extraction. In comparison with XML formats, mzDB saves ∼25% of storage space and improves access times by a factor of two up to 2000-fold, depending on the particular data access. Similarly, mzDB also shows slightly to significantly lower access times in comparison with other formats like mz5. Both C++ and Java implementations, converting raw or XML formats to mzDB and providing access methods, will be released under a permissive license. mzDB can be easily accessed by the SQLite C library and its drivers for all major languages, and browsed with existing dedicated GUIs. The mzDB described here can boost existing mass spectrometry data analysis pipelines, offering unprecedented performance in terms of efficiency, portability, compactness, and flexibility. PMID:25505153
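
    To make the 3D-indexing idea concrete, the sketch below shows the kind of range query a single-file SQLite database enables: select only the data blocks whose retention-time and m/z bounds overlap a small window, then decode just those. The table and column names here are hypothetical stand-ins, not the published mzDB schema.

        import sqlite3

        rt_min, rt_max = 1240.0, 1260.0    # retention-time window (s)
        mz_min, mz_max = 522.8, 523.3      # m/z window

        con = sqlite3.connect("run01.mzDB")
        rows = con.execute(
            "SELECT id, data FROM bounding_box "
            "WHERE first_time <= ? AND last_time >= ? "
            "AND min_mz <= ? AND max_mz >= ?",
            (rt_max, rt_min, mz_max, mz_min),
        ).fetchall()
        con.close()
        # each returned blob is a compact binary slice of spectra covering
        # only the queried (retention time, m/z) region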

  13. BOREAS TE-17 Production Efficiency Model Images

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G.; Papagno, Andrea (Editor); Goetz, Scott J.; Goward, Samual N.; Prince, Stephen D.; Czajkowski, Kevin; Dubayah, Ralph O.

    2000-01-01

    A Boreal Ecosystem-Atmospheric Study (BOREAS) version of the Global Production Efficiency Model (http://www.inform.umd.edu/glopem/) was developed by TE-17 (Terrestrial Ecology) to generate maps of gross and net primary production, autotrophic respiration, and light use efficiency for the BOREAS region. This document provides basic information on the model and how the maps were generated. The data generated by the model are stored in binary image-format files. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).

  14. Web-based document image processing

    NASA Astrophysics Data System (ADS)

    Walker, Frank L.; Thoma, George R.

    1999-12-01

    Increasing numbers of research libraries are turning to the Internet for electronic interlibrary loan and for document delivery to patrons. This has been made possible through the widespread adoption of software such as Ariel and DocView. Ariel, a product of the Research Libraries Group, converts paper-based documents to monochrome bitmapped images and delivers them over the Internet. The National Library of Medicine's DocView is primarily designed for library patrons to receive and view such documents. Although libraries and their patrons are beginning to reap the benefits of this new technology, barriers exist, e.g., differences in image file format, that lead to difficulties in the use of library document information. To research how to overcome such barriers, the Communications Engineering Branch of the Lister Hill National Center for Biomedical Communications, an R and D division of NLM, has developed a web site called the DocMorph Server. This is part of an ongoing intramural R and D program in document imaging that has spanned many aspects of electronic document conversion and preservation, Internet document transmission, and document usage. The DocMorph Server web site is designed to fill two roles. First, in a role that will benefit both libraries and their patrons, it allows Internet users to upload scanned image files for conversion to alternative formats, thereby enabling wider delivery and easier usage of library document information. Second, the DocMorph Server provides the design team an active test bed for evaluating the effectiveness and utility of new document image processing algorithms and functions, so that they may be considered for possible inclusion in other image processing software products being developed at NLM or elsewhere. This paper describes the design of the prototype DocMorph Server and the image processing functions being implemented on it.

  15. Geometric Constructions for Image Formation by a Converging Lens

    ERIC Educational Resources Information Center

    Zurcher, Ulrich

    2012-01-01

    Light rays emerge from an object in all directions. In introductory texts, three "special" rays are selected to draw the image produced by lenses and mirrors. This presentation may suggest to students that these three rays are necessary for the formation of an image. We discuss that the three rays attain their "special status" from the geometric…

  16. Preliminary Image Map of the 2007 Santiago Fire Perimeter, Orange Quadrangle, Orange County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  17. Preliminary Image Map of the 2007 Ranch Fire Perimeter, Fillmore Quadrangle, Ventura County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  18. Preliminary Image Map of the 2007 Ranch Fire Perimeter, Piru Quadrangle, Ventura County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  19. Preliminary Image Map of the 2007 Santiago Fire Perimeter, Tustin Quadrangle, Orange County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  20. ENDF-6 Formats Manual Data Formats and Procedures for the Evaluated Nuclear Data File ENDF/B-VI and ENDF/B-VII

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herman, M.; Members of the Cross Sections Evaluation Working Group

    2009-06-01

    In December 2006, the Cross Section Evaluation Working Group (CSEWG) of the United States released the new ENDF/B-VII.0 library. This represented a considerable achievement, as it was the first major release since 1990, when ENDF/B-VI was made publicly available. The two libraries have been released in the same format, ENDF-6, which was originally developed for the ENDF/B-VI library. In the early stage of work on the VII-th generation of the library, CSEWG made the important decision to use the same format. This decision was adopted even though it was argued that it would be timely to modernize the formats, and several interesting ideas were proposed. After careful deliberation, CSEWG concluded that actual implementation would require considerable resources to modify processing codes and to guarantee the high quality of the files processed by these codes. In view of this, the idea of format modernization was postponed and the ENDF-6 format was adopted for the new ENDF/B-VII library. In several other areas related to ENDF we did our best to move beyond established tradition and achieve maximum modernization. Thus, the 'Big Paper' on ENDF/B-VII.0 was published, also in December 2006, as a Special Issue of Nuclear Data Sheets 107 (2006) 2931-3060. The new web retrieval and plotting system for ENDF-6 formatted data, Sigma, was developed by the NNDC and released in 2007. An extensive paper was published on the advanced tool for nuclear reaction data evaluation, EMPIRE, in 2007. This effort was complemented by the release of an updated set of ENDF checking codes in 2009. As the final item on this list, a major revision of the ENDF-6 Formats Manual was made. This work started in 2006 and came to fruition in 2009, as documented in the present report.

  1. Technical Note: Development and validation of an open data format for CT projection data.

    PubMed

    Chen, Baiyu; Duan, Xinhui; Yu, Zhicong; Leng, Shuai; Yu, Lifeng; McCollough, Cynthia

    2015-12-01

    Lack of access to projection data from patient CT scans is a major limitation for development and validation of new reconstruction algorithms. To meet this critical need, this work developed and validated a vendor-neutral format for CT projection data, which will further be employed to build a library of patient projection data for public access. A digital imaging and communication in medicine (DICOM)-like format was created for CT projection data (CT-PD), named the DICOM-CT-PD format. The format stores attenuation information in the DICOM image data block and stores parameters necessary for reconstruction in the DICOM header under various tags (51 tags to store the geometry and scan parameters and 9 tags to store patient information). To validate the accuracy and completeness of the new format, CT projection data from helical scans of the ACR CT accreditation phantom were acquired from two clinical CT scanners (Somatom Definition Flash, Siemens Healthcare, Forchheim, Germany and Discovery CT750 HD, GE Healthcare, Waukesha, WI). After decoding (by the authors for Siemens, by the manufacturer for GE), the projection data were converted to the DICOM-CT-PD format. Off-line CT reconstructions were performed by internal and external reconstruction researchers using only the information stored in the DICOM-CT-PD files and the DICOM-CT-PD field definitions. Compared with the commercially reconstructed CT images, the off-line reconstructed images created using the DICOM-CT-PD format are similar in terms of CT numbers (differences of 5 HU for the bone insert and -9 HU for the air insert), image noise (±1 HU), and low contrast detectability (6 mm rods visible in both). Because of different reconstruction approaches, slightly different in-plane and cross-plane high contrast spatial resolution were obtained compared to those reconstructed on the scanners (axial plane: GE off-line, 7 lp/cm; GE commercial, 7 lp/cm; Siemens off-line, 8 lp/cm; Siemens commercial, 7 lp/cm. Coronal
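
    The description above implies that a reconstruction code needs nothing beyond the file itself: attenuation values come from the image data block and geometry from header tags. A minimal sketch of that access pattern with the pydicom package follows; the file name is a placeholder, and reading only the standard rescale attributes here is an assumption of this example, not the actual DICOM-CT-PD tag assignments, which the format defines under its own dedicated tags.

        import pydicom

        pd = pydicom.dcmread("projection_0001.dcm")

        # attenuation information lives in the image data block;
        # apply the usual DICOM rescale to recover physical values
        slope = float(getattr(pd, "RescaleSlope", 1.0))
        intercept = float(getattr(pd, "RescaleIntercept", 0.0))
        attenuation = pd.pixel_array * slope + intercept

        # geometry and scan parameters are read from header tags; a real
        # reader would look up the 51 DICOM-CT-PD geometry tags by address,
        # e.g. pd[0x7031, 0x1001].value (address shown is hypothetical)
        n_rows, n_cols = attenuation.shape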

  2. UFO (UnFold Operator) default data format

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kissel, L.; Biggs, F.; Marking, T.R.

    The default format for the storage of x,y data for use with the UFO code is described. The format assumes that the data stored in a file is a matrix of values; two columns of this matrix are selected to define a function of the form y = f(x). This format is specifically designed to allow for easy importation of data obtained from other sources, or easy entry of data using a text editor, with a minimum of reformatting. The format is flexible and extensible through the use of inline directives stored in the optional header of the file. A special extension of the format implements encoded data, which significantly reduces the storage required as compared with the unencoded form. UFO supports several extensions to the file specification that implement execute-time operations, such as transformation of the x and/or y values, selection of specific columns of the matrix for association with the x and y values, input of data directly from other formats (e.g., DAMP and PFF), and a simple type of library-structured file format. Several examples of the use of the format are given.
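
    The core idea, a file as a matrix of values with two columns picked out to define y = f(x), is easy to mimic. The sketch below does so with NumPy; the file name, the comment convention used to skip header lines, and the column indices are assumptions of this example, not part of the UFO specification.

        import numpy as np

        # load the whole file as a matrix of values, skipping header lines
        matrix = np.loadtxt("spectrum.dat", comments="#")

        # pick two columns of the matrix to define the function y = f(x)
        x = matrix[:, 0]
        y = matrix[:, 2]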

  3. Preservation of root canal anatomy using self-adjusting file instrumentation with glide path prepared by 20/0.02 hand files versus 20/0.04 rotary files

    PubMed Central

    Jain, Niharika; Pawar, Ajinkya M.; Ukey, Piyush D.; Jain, Prashant K.; Thakur, Bhagyashree; Gupta, Abhishek

    2017-01-01

    Objectives: To compare the relative axis modification and canal concentricity after glide path preparation with a 20/0.02 hand K-file (NITIFLEX®) or a 20/0.04 rotary file (HyFlex™ CM), with subsequent instrumentation with the 1.5 mm self-adjusting file (SAF). Materials and Methods: One hundred and twenty ISO 15, 0.02 taper, Endo Training Blocks (Dentsply Maillefer, Ballaigues, Switzerland) were acquired and randomly divided into the following two groups (n = 60): group 1, establishing a glide path up to a 20/0.02 hand K-file (NITIFLEX®) followed by instrumentation with the 1.5 mm SAF; and group 2, establishing a glide path up to a 20/0.04 rotary file (HyFlex™ CM) followed by instrumentation with the 1.5 mm SAF. Pre- and post-instrumentation digital images were processed with MATLAB R 2013 software to identify the central axis, and then superimposed using digital imaging software (Picasa 3.0 software, Google Inc., California, USA) taking five landmarks as reference points. Student's t-test for pairwise comparisons was applied with the level of significance set at 0.05. Results: Training blocks instrumented with the 20/0.04 rotary file and SAF were associated with less deviation in canal axis (at all five marked points), representing better canal concentricity compared to those in which the glide path was established by 20/0.02 hand K-files followed by SAF instrumentation. Conclusion: Canal geometry is better maintained after SAF instrumentation with a prior glide path established with a 20/0.04 rotary file. PMID:28855752

  4. The Planet Formation Imager (PFI) Project

    NASA Astrophysics Data System (ADS)

    Aarnio, Alicia; Monnier, John; Kraus, Stefan; Ireland, Michael

    2016-07-01

    Among the most fascinating and hotly debated areas in contemporary astrophysics are the means by which planetary systems are assembled from the large rotating disks of gas and dust which attend a stellar birth. Although important work is being done both in theory and observation, a full understanding of the physics of planet formation can only be achieved by opening observational windows able to directly witness the process in action. The key requirement is then to probe planet-forming systems at the natural spatial scales over which material is being assembled. By definition, this is the so-called Hill Sphere, which delineates the region of influence of a gravitating body within its surrounding environment. The Planet Formation Imager project has crystallized around this challenging goal: to deliver resolved images of Hill-Sphere-sized structures within candidate planet-hosting disks in the nearest star-forming regions. In this contribution I outline the primary science case of PFI, give an overview of the work of the PFI science and technical working groups, and present radiation-hydrodynamics simulations from which we derive preliminary specifications that guide the design of the facility. Finally, I give an overview of the technologies that we are investigating in order to meet the specifications.

  5. Imaging Planet Formation Inside the Diffraction Limit

    NASA Astrophysics Data System (ADS)

    Sallum, Stephanie Elise

    For decades, astronomers have used observations of mature planetary systems to constrain planet formation theories, beginning with our own solar system and now the thousands of known exoplanets. Recent advances in instrumentation have given us a direct view of some steps in the planet formation process, such as large-scale protostar and protoplanetary disk features and evolution. However, understanding the details of how planets accrete and interact with their environment requires direct observations of protoplanets themselves. Transition disks, protoplanetary disks with inner clearings that may be caused by forming planets, are the best targets for these studies. Their large distances, compared to the stars normally targeted for direct imaging of exoplanets, make protoplanet detection difficult and necessitate novel imaging techniques. In this dissertation, I describe the results of using non-redundant masking (NRM) to search for forming planets in transition disk clearings. I first present a data reduction pipeline that I wrote to this end, using example datasets and simulations to demonstrate reduction and imaging optimizations. I discuss two transition disk NRM case studies: T Cha and LkCa 15. In the case of T Cha, while we detect significant asymmetries, the data cannot be explained by orbiting companions. The fluxes and orbital motion of the LkCa 15 companion signals, however, can be naturally explained by protoplanets in the disk clearing. I use these datasets and simulated observations to illustrate the effects of scattered light from transition disk material on NRM protoplanet searches. I then demonstrate the utility of the dual-aperture Large Binocular Telescope Interferometer's NRM mode on the bright B[e] star MWC 349A. I discuss the implications of this work for planet formation studies as well as future prospects for NRM and related techniques on next generation instruments.

  6. 78 FR 59743 - Bureau of Consular Affairs; Registration for the Diversity Immigrant (DV-2015) Visa Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-27

    ... already a U.S. citizen or a Lawful Permanent Resident, but you will not be penalized if you do. Group... specifications: Image File Format: The image must be in the Joint Photographic Experts Group (JPEG) format. Image... in the Joint Photographic Experts Group (JPEG) format. Image File Size: The maximum image file size...

  7. BOREAS TE-18 Biomass Density Image of the SSA

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Knapp, David

    2000-01-01

    The BOREAS TE-18 team focused its efforts on using remotely sensed data to characterize the successional and disturbance dynamics of the boreal forest for use in carbon modeling. This biomass density image covers almost the entire BOREAS SSA. The pixels for which biomass density is computed include only areas in conifer land cover classes. The biomass density values represent the amount of overstory biomass (i.e., tree biomass only) per unit area. It is derived from a Landsat-5 TM image collected on 02-Sep-1994. The technique that was used to create this image is very similar to the technique that was used to create the physical classification of the SSA. The data are provided in a binary image file format. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).

  8. BOREAS RSS-16 AIRSAR CM Images: Integrated Processor Version 6.1 Level-3b

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Saatchi, Susan; Newcomer, Jeffrey A.; Strub, Richard; Irani, Fred

    2000-01-01

    The BOREAS RSS-16 team used satellite and aircraft SAR data in conjunction with various ground measurements to determine the moisture regime of the boreal forest. RSS-16 assisted with the acquisition and ordering of NASA JPL AIRSAR data collected from the NASA DC-8 aircraft. The NASA JPL AIRSAR is a side-looking imaging radar system that utilizes the SAR principle to obtain high resolution images that represent the radar backscatter of the imaged surface at different frequencies and polarizations. The information contained in each pixel of the AIRSAR data represents the radar backscatter for all possible combinations of horizontal and vertical transmit and receive polarizations (i.e., HH, HV, VH, and VV). Geographically, the data cover portions of the BOREAS SSA and NSA. Temporally, the data were acquired from 12-Aug-1993 to 31-Jul-1995. The level-3b AIRSAR CM data are in compressed Stokes matrix format, which has 10 bytes per pixel. From this data format, it is possible to synthesize a number of different radar backscatter measurements. The data are stored in binary image-format files. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).

  9. BOREAS Level-3b Landsat TM Imagery: At-sensor Radiances in BSQ Format

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nickeson, Jaime; Knapp, David; Newcomer, Jeffrey A.; Cihlar, Josef

    2000-01-01

    For BOREAS, the level-3b Landsat TM data, along with the other remotely sensed images, were collected in order to provide spatially extensive information over the primary study areas. This information includes radiant energy, detailed land cover, and biophysical parameter maps such as FPAR and LAI. Although very similar in content to the level-3a Landsat TM products, the level-3b images were created to provide users with a directly usable at-sensor radiance image. Geographically, the level-3b images cover the BOREAS NSA and SSA. Temporally, the images cover the period of 22-Jun-1984 to 09-Jul-1996. The images are available in binary, image format files.

  10. Casimage project: a digital teaching files authoring environment.

    PubMed

    Rosset, Antoine; Muller, Henning; Martins, Martina; Dfouni, Natalia; Vallée, Jean-Paul; Ratib, Osman

    2004-04-01

    The goal of the Casimage project is to offer an authoring and editing environment integrated with the Picture Archiving and Communication System (PACS) for creating image-based electronic teaching files. This software is based on a client/server architecture allowing remote access of users to a central database. This authoring environment allows radiologists to create reference databases and collections of digital images for teaching and research directly from clinical cases being reviewed on PACS diagnostic workstations. The environment includes all tools needed to create teaching files, including textual descriptions, annotations, and image manipulation. The software also allows users to generate stand-alone CD-ROMs and web-based teaching files to easily share their collections. The system includes a web server compatible with the Medical Imaging Resource Center standard (MIRC, http://mirc.rsna.org) to easily integrate collections into the RSNA web network dedicated to teaching files. This software can be installed on any PACS workstation to allow users to add new cases at any time and anywhere during clinical operations. Several image collections were created with this tool, including thoracic imaging, which was subsequently made available on a CD-ROM, on our web site, and through the MIRC network for public access.

  11. Format requirements of thermal neutron scattering data in a nuclear data format to succeed the ENDF format

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, D.

    2014-03-31

    In November 2012, the Working Party on Evaluation Cooperation Subgroup 38 (WPEC-SG38) began the task of developing a nuclear data format and supporting infrastructure to replace the now nearly 50-year-old ENDF format. The first step in this process is to develop requirements for the new format and infrastructure. In this talk, I will review the status of ENDF's Thermal Scattering Law (TSL) formats as well as support for these data in the GND format (from which the new format is expected to evolve). Finally, I hope to begin a dialog with members of the thermal neutron scattering community so that their data needs can be accurately and easily accommodated by the new format and tools, as captured by the requirements document. During this discussion, we must keep in mind that the new tools and format must: support what is in existing data files; support new things we want to put in data files; and be flexible enough for us to adapt to future unanticipated challenges.

  12. Image fusion in craniofacial virtual reality modeling based on CT and 3dMD photogrammetry.

    PubMed

    Xin, Pengfei; Yu, Hongbo; Cheng, Huanchong; Shen, Shunyao; Shen, Steve G F

    2013-09-01

    The aim of this study was to demonstrate the feasibility of building a craniofacial virtual reality model by image fusion of 3-dimensional (3D) CT models and a 3dMD stereophotogrammetric facial surface. A CT scan and stereophotography were performed. The 3D CT models were reconstructed with Materialise Mimics software, and the stereophotogrammetric facial surface was reconstructed with 3dMD patient software. All 3D CT models were exported in the Stereo Lithography file format, and the 3dMD model was exported in the Virtual Reality Modeling Language file format. Image registration and fusion were performed in Mimics software. A genetic algorithm was used for precise image fusion alignment with minimum error. The 3D CT models and the 3dMD stereophotogrammetric facial surface were finally merged into a single file and displayed using Deep Exploration software. Errors between the CT soft tissue model and the 3dMD facial surface were also analyzed. The virtual model based on CT-3dMD image fusion clearly showed the photorealistic face and bone structures. Image registration errors in the virtual face are mainly located in the bilateral cheeks and eyeballs, where the errors exceed 1.5 mm. However, the image fusion of the whole point cloud sets of CT and 3dMD is acceptable, with a minimum error of less than 1 mm. The ease of use and high reliability of CT-3dMD image fusion allows the 3D virtual head to be an accurate, realistic, and widespread tool, and is of great benefit to virtual face modeling.

  13. XML Files

    MedlinePlus

    MedlinePlus produces XML data sets that you are welcome to download and use. If you have questions about the MedlinePlus XML files, please contact us. For additional sources of MedlinePlus data in XML format, visit our Web service page.

  14. 17 CFR 232.14 - Paper filings not accepted without exemption.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 17 Commodity and Securities Exchanges 2 2011-04-01 2011-04-01 false Paper filings not accepted... COMMISSION REGULATION S-T-GENERAL RULES AND REGULATIONS FOR ELECTRONIC FILINGS General § 232.14 Paper filings not accepted without exemption. The Commission will not accept in paper format any filing required to...

  15. Preliminary Image Map of the 2007 Rice Fire Perimeter, Bonsall Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  16. Preliminary Image Map of the 2007 Harris Fire Perimeter, Tecate Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  17. Preliminary Image Map of the 2007 Witch Fire Perimeter, Escondido Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  18. Preliminary Image Map of the 2007 Witch Fire Perimeter, Ramona Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  19. Preliminary Image Map of the 2007 Santiago Fire Perimeter, Lake Forest Quadrangle, Orange County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  20. Preliminary Image Map of the 2007 Cajon Fire Perimeter, Devore Quadrangle, San Bernardino County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  1. Preliminary Image Map of the 2007 Harris Fire Perimeter, Dulzura Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  2. Preliminary Image Map of the 2007 Harris Fire Perimeter, Potrero Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  3. Preliminary Image Map of the 2007 Witch Fire Perimeter, Poway Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  4. Preliminary Image Map of the 2007 Poomacha Fire Perimeter, Pala Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  5. BOREAS Level-1B TIMS Imagery: At-sensor Radiance in BSQ Format

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Strub, Richard; Newcomer, Jeffrey A.; Chernobieff, Sonia

    2000-01-01

    The Boreal Ecosystem-Atmospheric Study (BOREAS) Staff Science Aircraft Data Acquisition Program focused on providing the research teams with the remotely sensed satellite data products they needed to compare and spatially extend point results. For BOREAS, the Thermal Infrared Multispectral Scanner (TIMS) imagery, along with other aircraft images, was collected to provide spatially extensive information over the primary study areas. The Level-1b TIMS images cover the time periods of 16 to 20 Apr 1994 and 06 to 17 Sep 1994. The system calibrated images are stored in binary image format files. The TIMS images are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).

  6. 76 FR 24467 - Combined Notice of Filings #1

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-02

    ... Schedule A Image to be effective 12/20/2010. Filed Date: 04/25/2011. Accession Number: 20110425-5169... per Order in ER11-2750-000 to resubmit the Schedule A Image to be effective 12/28/2010. Filed Date: 04... (toll free). For TTY, call (202) 502-8659. Dated: April 26, 2011. Nathaniel J. Davis, Sr., Deputy...

  7. Incorporating the APS Catalog of the POSS I and Image Archive in ADS

    NASA Technical Reports Server (NTRS)

    Humphreys, Roberta M.

    1998-01-01

    The primary purpose of this contract was to develop the software to both create and access an on-line database of images from digital scans of the Palomar Sky Survey. This required modifying our DBMS (called Star Base) to create an image database from the actual raw pixel data from the scans. The digitized images are processed into a set of coordinate-reference index and pixel files that are stored in run-length files, thus achieving an efficient lossless compression. For efficiency and ease of referencing, each digitized POSS I plate is then divided into 900 subplates. Our custom DBMS maps each query into the corresponding POSS plate(s) and subplate(s). All images from the appropriate subplates are retrieved from disk with byte-offsets taken from the index files. These are assembled on-the-fly into a GIF image file for browser display, and a FITS format image file for retrieval. The FITS images have a pixel size of 0.33 arcseconds. The FITS header contains astrometric and photometric information. This method keeps the disk requirements manageable while allowing for future improvements. When complete, the APS Image Database will contain over 130 GB of data. A set of web query forms is available on-line, as well as an on-line tutorial and documentation. The database is distributed to the Internet by a high-speed SGI server and a high-bandwidth disk system. The URL is http://aps.umn.edu/IDB/. The image database software is written in Perl and C and has been compiled on SGI computers with IRIX 5.3. A copy of the written documentation is included, and the software is on the accompanying exabyte tape.
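
    The retrieval path described, map a query to a subplate and then read pixel data at a byte-offset recorded in an index file, can be sketched in a few lines. Everything below (the 30 x 30 subplate grid implied by "900 subplates", the index layout, and the function names) is a hypothetical illustration, not the actual APS implementation.

        def subplate_of(x, y, plate_extent, grid=30):
            # map plate coordinates (same units as plate_extent) to one of
            # grid*grid subplates; 30*30 = 900 subplates per plate
            col = min(int(x / plate_extent * grid), grid - 1)
            row = min(int(y / plate_extent * grid), grid - 1)
            return row * grid + col

        def read_subplate_pixels(pixel_path, offset, length):
            # the index file stores a byte-offset and length for each
            # subplate's run-length-encoded pixel data
            with open(pixel_path, "rb") as f:
                f.seek(offset)
                return f.read(length)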

  8. Methods for identification of images acquired with digital cameras

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Bijhold, Jurrien; Kieft, Martijn; Kurosawa, Kenji; Kuroki, Kenro; Saitoh, Naoki

    2001-02-01

    From the court we were asked whether it is possible to determine if an image has been made with a specific digital camera. This question has to be answered in child pornography cases, where evidence is needed that a certain picture has been made with a specific camera. We have looked into different methods of examining the cameras to determine if a specific image has been made with a specific camera: defects in CCDs, the file formats that are used, noise introduced by the pixel arrays, and watermarking in images used by the camera manufacturer.

  9. Compressing images for the Internet

    NASA Astrophysics Data System (ADS)

    Beretta, Giordano B.

    1998-01-01

    The World Wide Web has rapidly become the hot new mass communications medium. Content creators are using similar design and layout styles as in printed magazines, i.e., with many color images and graphics. The information is transmitted over plain telephone lines, where the speed/price trade-off is much more severe than in the case of printed media. The standard design approach is to use palettized color and to limit as much as possible the number of colors used, so that the images can be encoded with a small number of bits per pixel using the Graphics Interchange Format (GIF) file format. The World Wide Web standards contemplate a second data encoding method (JPEG) that allows color fidelity but usually performs poorly on text, which is a critical element of information communicated on this medium. We analyze the spatial compression of color images and describe a methodology for using the JPEG method in a way that allows a compact representation while preserving full color fidelity.
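
    The palettized-GIF versus JPEG trade-off described above can be reproduced with any imaging library. The snippet below uses the Pillow Python package (an assumption of this example; the paper predates it) to encode the same source both ways: an adaptive palette with a deliberately small number of colors for GIF, and a quality-controlled JPEG.

        from PIL import Image

        img = Image.open("page.png").convert("RGB")

        # palettized encoding: limit the palette so the GIF stays small;
        # works well for flat graphics and text, poorly for photographs
        img.convert("P", palette=Image.ADAPTIVE, colors=64).save("page.gif")

        # JPEG keeps full color fidelity for photographs, but its lossy
        # transform tends to blur and ring around sharp text edges
        img.save("page.jpg", quality=75)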

  10. Dynamic Torsional and Cyclic Fracture Behavior of ProFile Rotary Instruments at Continuous or Reciprocating Rotation as Visualized with High-speed Digital Video Imaging.

    PubMed

    Tokita, Daisuke; Ebihara, Arata; Miyara, Kana; Okiji, Takashi

    2017-08-01

    This study examined the dynamic fracture behavior of nickel-titanium rotary instruments in torsional or cyclic loading at continuous or reciprocating rotation by means of high-speed digital video imaging. The ProFile instruments (size 30, 0.06 taper; Dentsply Maillefer, Ballaigues, Switzerland) were categorized into 4 groups (n = 7 in each group) as follows: torsional/continuous (TC), torsional/reciprocating (TR), cyclic/continuous (CC), and cyclic/reciprocating (CR). Torsional loading was performed by rotating the instruments by holding the tip with a vise. For cyclic loading, a custom-made device with a 38° curvature was used. Dynamic fracture behavior was observed with a high-speed camera. The time to fracture was recorded, and the fractured surface was examined with scanning electron microscopy. The TC group initially exhibited necking of the file followed by the development of an initial crack line. The TR group demonstrated opening and closing of a crack according to its rotation in the cutting and noncutting directions, respectively. The CC group separated without any detectable signs of deformation. In the CR group, initial crack formation was recognized in 5 of 7 samples. The reciprocating rotation exhibited a longer time to fracture in both torsional and cyclic fatigue testing (P < .05). The scanning electron microscopic images showed a severely deformed surface in the TR group. The dynamic fracture behavior of NiTi rotary instruments, as visualized with high-speed digital video imaging, varied between the different modes of rotation and different fatigue testing. Reciprocating rotation induced a slower crack propagation and conferred higher fatigue resistance than continuous rotation in both torsional and cyclic loads. Copyright © 2017 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  11. An Image Archive With The ACR/NEMA Message Formats

    NASA Astrophysics Data System (ADS)

    Seshadri, Sridhar B.; Khalsa, Satjeet; Arenson, Ronald L.; Brikman, Inna; Davey, Michael J.

    1988-06-01

    An image archive has been designed to manage and store radiologic images received from within the main hospital and from a suburban orthopedic clinic. Images are stored on both magnetic and optical media. Prior comparison examinations are combined with the current examination to generate a 'viewing folder' that is sent to the display station for primary diagnosis. An 'archive-manager' controls the database management, periodic optical disk backup, and 'viewing-folder' generation. Images are converted into the ACR/NEMA message format before being written to the optical disk. The software design of the 'archive-manager' and its associated modules is presented. Enhancements to the system are discussed.
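
    A hypothetical sketch of the 'viewing-folder' step, assuming exams are plain records with patient, id, and date fields (these names are ours, not the paper's):

      def build_viewing_folder(exams, patient_id, current_exam_id):
          # Pair the current examination with its prior comparison exams so
          # the display station receives them together as one folder.
          current = next(e for e in exams if e["id"] == current_exam_id)
          priors = [e for e in exams
                    if e["patient"] == patient_id and e["id"] != current_exam_id]
          priors.sort(key=lambda e: e["date"], reverse=True)
          return {"current": current, "priors": priors}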

  12. BOREAS Forest Cover Data Layers of the NSA in Raster Format

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Knapp, David; Tuinhoff, Manning

    2000-01-01

    This data set was processed by BORIS staff from the original vector data of species, crown closure, cutting class, and site classification/subtype into raster files. The original polygon data were received from Linnet Graphics, the distributor of data for MNR. In the case of the species layer, the percentages of species composition were removed. This reduced the amount of information contained in the species layer of the gridded product, but it was necessary in order to make the gridded product easier to use. The original maps were produced from 1:15,840-scale aerial photography collected in 1988 over an area of the BOREAS NSA MSA. The data are stored in binary, image format files and they are available from Oak Ridge National Laboratory. The data files are available on a CD-ROM (see document number 20010000884).
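
    Reading such a binary raster layer is a one-liner once the grid geometry is known; the file name, dimensions, and data type below are assumptions standing in for the values given in the data set's companion documentation:

      import numpy as np

      # Placeholder geometry; the real values come from the documentation.
      NROWS, NCOLS, DTYPE = 1000, 1000, np.uint8

      grid = np.fromfile("nsa_species.img", dtype=DTYPE).reshape(NROWS, NCOLS)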

  13. Adding and Deleting Images

    EPA Pesticide Factsheets

    Images are added via the Drupal WebCMS Editor. Once an image is uploaded onto a page, it is available via the Library and your files. You can edit the metadata, delete the image permanently, and/or replace images on the Files tab.

  14. BOREAS RSS-14 Level-1 GOES-8 Visible, IR and Water Vapor Images

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Faysash, David; Cooper, Harry J.; Smith, Eric A.; Newcomer, Jeffrey A.

    2000-01-01

    The BOREAS RSS-14 team collected and processed several GOES-7 and GOES-8 image data sets that covered the BOREAS study region. The level-1 BOREAS GOES-8 images are raw data values collected by RSS-14 personnel at FSU and delivered to BORIS. The data cover 14-Jul-1995 to 21-Sep-1995 and 01-Jan-1996 to 03-Oct-1996. The data start out containing three 8-bit spectral bands and end up containing five 10-bit spectral bands. No major problems with the data have been identified. The data are contained in binary image format files. Due to the large size of the images, the level-1 GOES-8 data are not contained on the BOREAS CD-ROM set. An inventory listing file is supplied on the CD-ROM to inform users of what data were collected. The level-1 GOES-8 image data are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). See sections 15 and 16 for more information. The data files are available on a CD-ROM (see document number 20010000884).

  15. Video multiple watermarking technique based on image interlacing using DWT.

    PubMed

    Ibrahim, Mohamed M; Abdel Kader, Neamat S; Zorkany, M

    2014-01-01

    Digital watermarking is one of the important techniques to secure digital media files in the domains of data authentication and copyright protection. In nonblind watermarking systems, the need for the original host file in the watermark recovery operation imposes an overhead on system resources, doubling the required memory capacity and communications bandwidth. In this paper, a robust video multiple watermarking technique is proposed to solve this problem. This technique is based on image interlacing. In this technique, a three-level discrete wavelet transform (DWT) is used as the watermark embedding/extracting domain, the Arnold transform is used as the watermark encryption/decryption method, and different types of media (gray image, color image, and video) are used as watermarks. The robustness of this technique is tested by applying different types of attacks, such as geometric, noising, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth.
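
    A condensed sketch of the embedding side under stated assumptions (Haar wavelet, a square gray-scale watermark at least as large as the approximation band, additive embedding into the level-3 approximation coefficients); the paper's full scheme also covers extraction, interlacing, and color/video watermarks:

      import numpy as np
      import pywt

      def arnold(img, iterations=1):
          # Arnold cat-map scrambling of an N x N watermark image,
          # used here as the encryption step.
          n = img.shape[0]
          out = img.copy()
          for _ in range(iterations):
              scrambled = np.empty_like(out)
              for x in range(n):
                  for y in range(n):
                      scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
              out = scrambled
          return out

      def embed(host, mark, alpha=0.05):
          # Three-level 2-D DWT of the host frame; add the scrambled
          # watermark into the coarsest approximation band.
          coeffs = pywt.wavedec2(host, "haar", level=3)
          approx = coeffs[0]
          scrambled = arnold(mark)
          coeffs[0] = approx + alpha * scrambled[:approx.shape[0], :approx.shape[1]]
          return pywt.waverec2(coeffs, "haar")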

  16. Preliminary Image Map of the 2007 Harris Fire Perimeter, Barrett Lake Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  17. Preliminary Image Map of the 2007 Buckweed Fire Perimeter, Green Valley Quadrangle, Los Angeles County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  18. Preliminary Image Map of the 2007 Witch Fire Perimeter, Warners Ranch Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  19. Preliminary Image Map of the 2007 Harris Fire Perimeter, Otay Mesa Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  20. Preliminary Image Map of the 2007 Buckweed Fire Perimeter, Agua Dulce Quadrangle, Los Angeles County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  1. Preliminary Image Map of the 2007 Witch Fire Perimeter, San Pasqual Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  2. Preliminary Image Map of the 2007 Buckweed Fire Perimeter, Mint Canyon Quadrangle, Los Angeles County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  3. Preliminary Image Map of the 2007 Poomacha Fire Perimeter, Boucher Hill Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  4. Preliminary Image Map of the 2007 Ammo Fire Perimeter, Margarita Peak Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  5. Preliminary Image Map of the 2007 Harris Fire Perimeter, Otay Mountain Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  6. Preliminary Image Map of the 2007 Poomacha Fire Perimeter, Palomar Observatory Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  7. Preliminary Image Map of the 2007 Witch Fire Perimeter, Santa Ysabel Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  8. Preliminary Image Map of the 2007 Harris Fire Perimeter, Jamul Mountains Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  9. Preliminary Image Map of the 2007 Slide Fire Perimeter, Butler Peak Quadrangle, San Bernardino County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  10. Preliminary Image Map of the 2007 Canyon Fire Perimeter, Malibu Beach Quadrangle, Los Angeles County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  11. Preliminary Image Map of the 2007 Witch Fire Perimeter, Valley Center Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  12. Preliminary Image Map of the 2007 Slide Fire Perimeter, Harrison Mountain Quadrangle, San Bernardino County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  13. Preliminary Image Map of the 2007 Buckweed Fire Perimeter, Sleepy Valley Quadrangle, Los Angeles County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  14. Preliminary Image Map of the 2007 Witch Fire Perimeter, Tule Springs Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  15. Preliminary Image Map of the 2007 Harris Fire Perimeter, Morena Reservoir Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  16. Preliminary Image Map of the 2007 Slide Fire Perimeter, Keller Peak Quadrangle, San Bernardino County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  17. Why can't I manage my digital images like MP3s? The evolution and intent of multimedia metadata

    NASA Astrophysics Data System (ADS)

    Goodrum, Abby; Howison, James

    2005-01-01

    This paper considers the deceptively simple question: Why can't digital images be managed in the simple and effective manner in which digital music files are managed? We make the case that the answer is different treatments of metadata in different domains with different goals. A central difference between the two formats stems from the fact that digital music metadata lookup services are collaborative and automate the movement from a digital file to the appropriate metadata, while image metadata services do not. To understand why this difference exists, we examine the divergent evolution of metadata standards for digital music and digital images and observe that the processes differ in interesting ways according to their intent. Specifically, music metadata was developed primarily for personal file management and community resource sharing, while the focus of image metadata has largely been on information retrieval. We argue that lessons from MP3 metadata can assist individuals facing their growing personal image management challenges. Our focus therefore is not on metadata for cultural heritage institutions or the publishing industry; it is limited to the personal libraries growing on our hard drives. This bottom-up approach to file management, combined with p2p distribution, radically altered the music landscape. Might such an approach have a similar impact on image publishing? This paper outlines plans for improving the personal management of digital images (doing image metadata and file management the MP3 way) and considers the likelihood of success.
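
    The asymmetry is easy to see in code. With mutagen and Pillow (library choices are ours, not the authors'), music files yield descriptive tags directly, while images mostly yield camera EXIF that says little about content:

      from mutagen.easyid3 import EasyID3
      from PIL import Image, ExifTags

      def music_tags(path):
          # ID3 tags travel with the MP3 and are filled in by collaborative
          # lookup services, so file management can key off them directly.
          return dict(EasyID3(path))

      def image_tags(path):
          # EXIF is written by the camera; descriptive metadata (who, what,
          # where) is usually absent, which is the gap the paper discusses.
          with Image.open(path) as img:
              exif = img.getexif()
              return {ExifTags.TAGS.get(k, k): v for k, v in exif.items()}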

  18. Why can't I manage my digital images like MP3s? The evolution and intent of multimedia metadata

    NASA Astrophysics Data System (ADS)

    Goodrum, Abby; Howison, James

    2004-12-01

    This paper considers the deceptively simple question: Why can't digital images be managed in the simple and effective manner in which digital music files are managed? We make the case that the answer is different treatments of metadata in different domains with different goals. A central difference between the two formats stems from the fact that digital music metadata lookup services are collaborative and automate the movement from a digital file to the appropriate metadata, while image metadata services do not. To understand why this difference exists, we examine the divergent evolution of metadata standards for digital music and digital images and observe that the processes differ in interesting ways according to their intent. Specifically, music metadata was developed primarily for personal file management and community resource sharing, while the focus of image metadata has largely been on information retrieval. We argue that lessons from MP3 metadata can assist individuals facing their growing personal image management challenges. Our focus therefore is not on metadata for cultural heritage institutions or the publishing industry; it is limited to the personal libraries growing on our hard drives. This bottom-up approach to file management, combined with p2p distribution, radically altered the music landscape. Might such an approach have a similar impact on image publishing? This paper outlines plans for improving the personal management of digital images (doing image metadata and file management the MP3 way) and considers the likelihood of success.

  19. BOREAS Regional DEM in Raster Format and AEAC Projection

    NASA Technical Reports Server (NTRS)

    Knapp, David; Verdin, Kristine; Hall, Forrest G. (Editor)

    2000-01-01

    This data set is based on the GTOPO30 Digital Elevation Model (DEM) produced by the United States Geological Survey EROS Data Center (USGS EDC). The BOReal Ecosystem-Atmosphere Study (BOREAS) region (1,000 km x 1,000 km) was extracted from the GTOPO30 data and reprojected by BOREAS staff into the Albers Equal-Area Conic (AEAC) projection. The pixel size of these data is 1 km. The data are stored in binary, image format files.
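
    Reprojection of coordinates into an AEAC grid can be sketched with pyproj; the standard parallels and origin below are placeholders, not the parameters actually used for the BOREAS grid:

      from pyproj import Transformer

      # Assumed AEAC parameters for illustration only.
      aeac = ("+proj=aea +lat_1=52.5 +lat_2=58.5 +lat_0=50 +lon_0=-100 "
              "+datum=WGS84 +units=m")
      to_aeac = Transformer.from_crs("EPSG:4326", aeac, always_xy=True)

      x, y = to_aeac.transform(-98.5, 55.9)  # lon, lat of a point in the region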

  20. BOREAS TE-18 Landsat TM Maximum Likelihood Classification Image of the NSA

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Knapp, David

    2000-01-01

    The BOREAS TE-18 team focused its efforts on using remotely sensed data to characterize the successional and disturbance dynamics of the boreal forest for use in carbon modeling. The objective of this classification is to provide the BOREAS investigators with a data product that characterizes the land cover of the NSA. A Landsat-5 TM image from 20-Aug-1988 was used to derive this classification. A standard supervised maximum likelihood classification approach was used to produce this classification. The data are provided in a binary image format file. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).
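
    A compact version of the standard supervised maximum likelihood classifier, assuming per-class training pixels have already been extracted (Gaussian class model, equal priors):

      import numpy as np

      def train(classes):
          # classes: {label: array of shape (n_pixels, n_bands)}
          stats = {}
          for label, X in classes.items():
              mu = X.mean(axis=0)
              cov = np.cov(X, rowvar=False)
              stats[label] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
          return stats

      def classify(pixels, stats):
          # pixels: (n, n_bands); returns the max-likelihood label per pixel.
          labels = list(stats)
          scores = np.empty((pixels.shape[0], len(labels)))
          for j, label in enumerate(labels):
              mu, icov, logdet = stats[label]
              d = pixels - mu
              scores[:, j] = -0.5 * (np.einsum("ni,ij,nj->n", d, icov, d) + logdet)
          return np.asarray(labels)[scores.argmax(axis=1)]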

  1. Data File Standard for Flow Cytometry, version FCS 3.1.

    PubMed

    Spidlen, Josef; Moore, Wayne; Parks, David; Goldberg, Michael; Bray, Chris; Bierre, Pierre; Gorombey, Peter; Hyun, Bill; Hubbard, Mark; Lange, Simon; Lefebvre, Ray; Leif, Robert; Novo, David; Ostruszka, Leo; Treister, Adam; Wood, James; Murphy, Robert F; Roederer, Mario; Sudar, Damir; Zigon, Robert; Brinkman, Ryan R

    2010-01-01

    The flow cytometry data file standard provides the specifications needed to completely describe flow cytometry data sets within the confines of the file containing the experimental data. In 1984, the first Flow Cytometry Standard format for data files was adopted as FCS 1.0. This standard was modified in 1990 as FCS 2.0 and again in 1997 as FCS 3.0. We report here on the next generation flow cytometry standard data file format. FCS 3.1 is a minor revision based on suggested improvements from the community. The unchanged goal of the standard is to provide a uniform file format that allows files created by one type of acquisition hardware and software to be analyzed by any other type. The FCS 3.1 standard retains the basic FCS file structure and most features of previous versions of the standard. Changes included in FCS 3.1 address potential ambiguities in the previous versions and provide a more robust standard. The major changes include simplified support for international characters and improved support for storing compensation. The major additions are support for preferred display scale, a standardized way of capturing the sample volume, information about originality of the data file, and support for plate and well identification in high throughput, plate based experiments. Please see the normative version of the FCS 3.1 specification in Supporting Information for this manuscript (or at http://www.isac-net.org/ in the Current standards section) for a complete list of changes.
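
    A minimal reader for the fixed-width FCS header and delimited TEXT segment illustrates the file structure the standard specifies; error handling and the DATA segment are omitted, so treat this as a sketch rather than a conformant parser:

      def read_fcs_header(path):
          # The header is ASCII: a 6-byte version string, then space-padded
          # byte offsets of the TEXT, DATA, and ANALYSIS segments.
          with open(path, "rb") as f:
              head = f.read(58)
              version = head[0:6].decode("ascii")
              offs = [int(head[10 + 8 * i:18 + 8 * i].strip() or b"0")
                      for i in range(6)]
              f.seek(offs[0])
              text = f.read(offs[1] - offs[0] + 1).decode("latin-1")
          # TEXT is keyword/value pairs separated by a self-declared delimiter.
          delim = text[0]
          items = text[1:].split(delim)
          keywords = dict(zip(items[::2], items[1::2]))
          return version, keywords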

  2. BOREAS TE-18 Landsat TM Physical Classification Image of the SSA

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Knapp, David

    2000-01-01

    The BOREAS TE-18 team focused its efforts on using remotely sensed data to characterize the successional and disturbance dynamics of the boreal forest for use in carbon modeling. The objective of this classification is to provide the BOREAS investigators with a data product that characterizes the land cover of the SSA. A Landsat-5 TM image from 02-Sep-1994 was used to derive the classification. A technique was implemented that uses reflectances of various land cover types along with a geometric optical canopy model to produce spectral trajectories. These trajectories are used as training data to classify the image into the different land cover classes. These data are provided in a binary image file format. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).

  3. Optimum Image Formation for Spaceborne Microwave Radiometer Products.

    PubMed

    Long, David G; Brodzik, Mary J

    2016-05-01

    This paper considers some of the issues of radiometer brightness image formation and reconstruction for use in the NASA-sponsored Calibrated Passive Microwave Daily Equal-Area Scalable Earth Grid 2.0 Brightness Temperature Earth System Data Record project, which generates a multisensor multidecadal time series of high-resolution radiometer products designed to support climate studies. Two primary reconstruction algorithms are considered: the Backus-Gilbert approach and the radiometer form of the scatterometer image reconstruction (SIR) algorithm. These are compared with the conventional drop-in-the-bucket (DIB) gridded image formation approach. Tradeoff study results for the various algorithm options are presented to select optimum values for the grid resolution, the number of SIR iterations, and the BG gamma parameter. We find that although both approaches are effective in improving the spatial resolution of the surface brightness temperature estimates compared to DIB, SIR requires significantly less computation. The sensitivity of the reconstruction to the accuracy of the measurement spatial response function (MRF) is explored. The partial reconstruction of the methods can tolerate errors in the description of the sensor measurement response function, which simplifies the processing of data from historic sensors, for which the MRF is not known as well as it is for modern sensors. Simulation tradeoff results are confirmed using actual data.
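
    For contrast with the reconstruction algorithms, the conventional drop-in-the-bucket estimate is just a per-cell average of the samples whose centers fall in each grid cell; a numpy sketch with assumed latitude/longitude bounds:

      import numpy as np

      def drop_in_bucket(lat, lon, tb, grid_shape, lat_rng, lon_rng):
          # Average every brightness-temperature sample whose center falls
          # inside a grid cell; cells with no samples come back as NaN.
          rows = ((lat - lat_rng[0]) / (lat_rng[1] - lat_rng[0])
                  * grid_shape[0]).astype(int)
          cols = ((lon - lon_rng[0]) / (lon_rng[1] - lon_rng[0])
                  * grid_shape[1]).astype(int)
          ok = ((rows >= 0) & (rows < grid_shape[0])
                & (cols >= 0) & (cols < grid_shape[1]))
          acc = np.zeros(grid_shape)
          cnt = np.zeros(grid_shape)
          np.add.at(acc, (rows[ok], cols[ok]), tb[ok])
          np.add.at(cnt, (rows[ok], cols[ok]), 1)
          with np.errstate(invalid="ignore"):
              return acc / cnt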

  4. FTOOLS: A general package of software to manipulate FITS files

    NASA Astrophysics Data System (ADS)

    Blackburn, J. K.; Shaw, R. A.; Payne, H. E.; Hayes, J. J. E.; Heasarc

    1999-12-01

    FTOOLS, a highly modular collection of utilities for processing and analyzing data in the FITS (Flexible Image Transport System) format, has been developed in support of the HEASARC (High Energy Astrophysics Research Archive Center) at NASA's Goddard Space Flight Center. The FTOOLS package contains many utility programs which perform modular tasks on any FITS image or table, as well as higher-level analysis programs designed specifically for data from current and past high energy astrophysics missions. The utility programs for FITS tables are especially rich and powerful, and provide functions for presentation of file contents, extraction of specific rows or columns, appending or merging tables, binning values in a column or selecting subsets of rows based on a boolean expression. Individual FTOOLS programs can easily be chained together in scripts to achieve more complex operations such as the generation and displaying of spectra or light curves. FTOOLS development began in 1991 and has produced the main set of data analysis software for the current ASCA and RXTE space missions and for other archival sets of X-ray and gamma-ray data. The FTOOLS software package is supported on most UNIX platforms and on Windows machines. The user interface is controlled by standard parameter files that are very similar to those used by IRAF. The package is self-documenting through a stand-alone help task called fhelp. Software is written in ANSI C and FORTRAN to provide portability across most computer systems. The data format dependencies between hardware platforms are isolated through the FITSIO library package.
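
    The table utilities map naturally onto astropy.io.fits, which shares the FITSIO lineage; a row-selection example in that spirit (this is not FTOOLS itself, and the file and column names are invented for illustration):

      from astropy.io import fits

      with fits.open("events.fits") as hdul:
          table = hdul[1].data
          # Select rows with a boolean expression, as fselect would.
          subset = table[(table["ENERGY"] > 2.0) & (table["ENERGY"] < 10.0)]
          fits.BinTableHDU(data=subset, header=hdul[1].header).writeto(
              "events_2to10keV.fits", overwrite=True)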

  5. Cloud Engineering Principles and Technology Enablers for Medical Image Processing-as-a-Service.

    PubMed

    Bao, Shunxing; Plassard, Andrew J; Landman, Bennett A; Gokhale, Aniruddha

    2017-04-01

    Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches since they can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. To that end, a cloud-based "medical image processing-as-a-service" offers promise in utilizing the ecosystem of Apache Hadoop, which is a flexible framework providing distributed, scalable, fault tolerant storage and parallel computational modules, and HBase, which is a NoSQL database built atop Hadoop's distributed file system. Despite this promise, HBase's load distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). This paper makes two contributions to address these concerns by describing key cloud engineering principles and technology enhancements we made to the Apache Hadoop ecosystem for medical imaging applications. First, we propose a row-key design for HBase, which is a necessary step that is driven by the hierarchical organization of imaging data. Second, we propose a novel data allocation policy within HBase to strongly enforce collocation of hierarchically related imaging data. The proposed enhancements accelerate data processing by minimizing network usage and localizing processing to machines where the data already exist. Moreover, our approach is amenable to the traditional scan, subject, and project-level analysis procedures, and is compatible with standard command line/scriptable image processing software. Experimental results for an illustrative sample of imaging data reveal that our new HBase policy results in a three-fold time improvement in conversion of classic DICOM to NiFTI file formats when compared with the default HBase region split policy.
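
    The flavor of the row-key idea, as a sketch rather than the paper's exact encoding: a fixed-width composite key makes lexicographic row order follow the imaging hierarchy, so hierarchically related records stay adjacent and collocate in the same region:

      def row_key(project, subject, session, scan, slice_idx):
          # Fixed-width fields, most significant first: all slices of a
          # scan (and all scans of a session, subject, and project) sort
          # together, which is what the collocation policy relies on.
          return "{:0>8}{:0>8}{:0>4}{:0>4}{:0>6}".format(
              project, subject, session, scan, slice_idx)

      # e.g. row_key("proj1", "subj42", 1, 3, 57) -> "000proj1\00subj42000100030000057"-style
      # adjacency: keys for the same scan differ only in the trailing field.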

  6. Cloud Engineering Principles and Technology Enablers for Medical Image Processing-as-a-Service

    PubMed Central

    Bao, Shunxing; Plassard, Andrew J.; Landman, Bennett A.; Gokhale, Aniruddha

    2017-01-01

    Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches since they can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. To that end, a cloud-based “medical image processing-as-a-service” offers promise in utilizing the ecosystem of Apache Hadoop, which is a flexible framework providing distributed, scalable, fault tolerant storage and parallel computational modules, and HBase, which is a NoSQL database built atop Hadoop’s distributed file system. Despite this promise, HBase’s load distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). This paper makes two contributions to address these concerns by describing key cloud engineering principles and technology enhancements we made to the Apache Hadoop ecosystem for medical imaging applications. First, we propose a row-key design for HBase, which is a necessary step that is driven by the hierarchical organization of imaging data. Second, we propose a novel data allocation policy within HBase to strongly enforce collocation of hierarchically related imaging data. The proposed enhancements accelerate data processing by minimizing network usage and localizing processing to machines where the data already exist. Moreover, our approach is amenable to the traditional scan, subject, and project-level analysis procedures, and is compatible with standard command line/scriptable image processing software. Experimental results for an illustrative sample of imaging data reveal that our new HBase policy results in a three-fold time improvement in conversion of classic DICOM to NiFTI file formats when compared with the default HBase region split policy.

  7. 75 FR 60846 - Bureau of Consular Affairs; Registration for the Diversity Immigrant (DV-2012) Visa Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-01

    ... need to submit a photo for a child who is already a U.S. citizen or a Legal Permanent Resident. Group... Joint Photographic Experts Group (JPEG) format; it must have a maximum image file size of two hundred... (dpi); the image file format in Joint Photographic Experts Group (JPEG) format; the maximum image file...
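
    A hedged sketch of checking the format-related rules with Pillow; the byte limit below is a placeholder because the snippet truncates the actual figure, and the notice lists further requirements (dimensions, dpi) that would be checked the same way:

      from pathlib import Path
      from PIL import Image

      MAX_BYTES = 240_000  # placeholder; the record elides the real limit

      def check_entry_photo(path):
          # Verify the JPEG-format and file-size requirements only.
          problems = []
          if Path(path).stat().st_size > MAX_BYTES:
              problems.append("file too large")
          with Image.open(path) as img:
              if img.format != "JPEG":
                  problems.append("not JPEG")
          return problems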

  8. Bistatic SAR: Signal Processing and Image Formation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wahl, Daniel E.; Yocky, David A.

    This report describes the significant processing steps that were used to take the raw recorded digitized signals from the bistatic synthetic aperture RADAR (SAR) hardware built for the NCNS Bistatic SAR project to a final bistatic SAR image. In general, the process steps herein are applicable to bistatic SAR signals that include the direct-path signal and the reflected signal. The steps include preprocessing, data extraction to form a phase history, and finally, image formation. Various plots and values will be shown at most steps to illustrate the processing for a bistatic COSMO-SkyMed collection gathered on June 10, 2013 on Kirtland Air Force Base, New Mexico.
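
    A deliberately simplified view of the final step, assuming the phase history has already been motion-compensated and resampled onto a rectangular grid (the report's actual chain also handles direct-path synchronization and other preprocessing): a windowed 2-D inverse FFT then forms the complex image:

      import numpy as np

      def form_image(phase_history):
          # Window to control sidelobes, then invert the 2-D spectrum;
          # the magnitude of the result is the displayed SAR image.
          win = np.outer(np.hanning(phase_history.shape[0]),
                         np.hanning(phase_history.shape[1]))
          return np.fft.fftshift(np.fft.ifft2(phase_history * win))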

  9. Preliminary Image Map of the 2007 Santiago Fire Perimeter, Santiago Peak Quadrangle, Orange and Riverside Counties, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  10. Preliminary Image Map of the 2007 Poomacha Fire Perimeter, Pechanga Quadrangle, Riverside and San Diego Counties, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  11. Preliminary Image Map of the 2007 Poomacha Fire Perimeter, Temecula Quadrangle, Riverside and San Diego Counties, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  12. Preliminary Image Map of the 2007 Ammo Fire Perimeter, San Onofre Bluff Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  13. Preliminary Image Map of the 2007 Witch Fire Perimeter, El Cajon Mountain Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  14. Preliminary Image Map of the 2007 Ammo Fire Perimeter, Las Pulgas Canyon Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  15. Preliminary Image Map of the 2007 Cajon Fire Perimeter, San Bernardino North Quadrangle, San Bernardino County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  16. Preliminary Image Map of the 2007 Witch Fire Perimeter, San Vicente Reservoir Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  17. Preliminary Image Map of the 2007 Magic and Buckweed Fire Perimeters, Newhall Quadrangle, Los Angeles County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  18. Preliminary Image Map of the 2007 Grass Valley Fire Perimeter, Lake Arrowhead Quadrangle, San Bernardino County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  19. Preliminary Image Map of the 2007 Buckweed Fire Perimeter, Warm Springs Mountain Quadrangle, Los Angeles County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  20. Preliminary Image Map of the 2007 Witch Fire Perimeter, Rancho Santa Fe Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  1. SU-E-T-261: Plan Quality Assurance of VMAT Using Fluence Images Reconstituted From Log-Files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katsuta, Y; Shimizu, E; Matsunaga, K

    2014-06-01

    Purpose: Successful VMAT plan delivery requires precise modulation of dose rate, gantry rotation, and multi-leaf collimator (MLC) shapes. One of the main problems in plan quality assurance is that dosimetric errors associated with leaf-positional errors are difficult to analyze, because they vary with delivered MU and leaf number. In this study, we calculated an integrated fluence error image (IFEI) from log files and evaluated plan quality over the areas scanned by all MLC leaves and by individual leaves. Methods: The log file reported the expected and actual positions of the inner 20 MLC leaves and the dose fraction every 0.25 seconds during prostate VMAT on an Elekta Synergy. These data were imported into in-house software developed to calculate expected and actual fluence images from the difference of opposing leaf trajectories and the dose fraction at each time point. The IFEI was obtained by summing the absolute differences between corresponding expected and actual fluence images. Results: In the area scanned by all MLC leaves, the average and root mean square (rms) errors in the IFEI were 2.5 and 3.6 MU, the fractions of the area with errors below 10, 5, and 3 MU were 98.5, 86.7, and 68.1%, and 95% of the area had errors below 7.1 MU. In the areas scanned by individual MLC leaves, the average and rms errors were 2.1-3.0 and 3.1-4.0 MU, the fractions of the area with errors below 10, 5, and 3 MU were 97.6-99.5, 81.7-89.5, and 51.2-72.8%, and 95% of the area had errors below 6.6-8.2 MU. Conclusion: Analysis of the IFEI reconstituted from log files provided detailed information about the delivery over the areas scanned by all and by individual MLC leaves.
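
    The core computation described in this abstract is easy to state: the IFEI accumulates the absolute difference between expected and actual fluence over every log-file sample. Below is a minimal sketch of that accumulation step (not the authors' code; the array shapes and stand-in data are illustrative assumptions):

      import numpy as np

      def integrated_fluence_error(expected_frames, actual_frames):
          """Sum of absolute per-pixel differences across all time samples.

          expected_frames, actual_frames: iterables of 2D arrays, one
          fluence image per log-file sample (e.g., every 0.25 s).
          """
          ifei = None
          for exp_f, act_f in zip(expected_frames, actual_frames):
              diff = np.abs(exp_f - act_f)
              ifei = diff if ifei is None else ifei + diff
          return ifei

      # Example with random stand-in data (not clinical values):
      rng = np.random.default_rng(0)
      expected = [rng.random((40, 40)) for _ in range(10)]
      actual = [f + rng.normal(0, 0.01, f.shape) for f in expected]
      ifei = integrated_fluence_error(expected, actual)
      print(ifei.mean(), (ifei < 3.0).mean())  # mean error; fraction below 3 MU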

  2. Computer printing and filing of microbiology reports. 1. Description of the system.

    PubMed Central

    Goodwin, C S; Smith, B C

    1976-01-01

    From March 1974 all reports from this microbiology department have been computer printed and filed. The system was designed to include every medically important microorganism and test. Technicians at the laboratory bench made their results computer-readable using Port-a-punch cards, and specimen details were recorded on paper tape, allowing the full description of each specimen to appear on the report. A summary form of each microbiology phrase enabled copies of reports to be printed on wide paper with 12 to 18 reports per sheet; such copies, in alphabetical order for one day and cumulatively for one week, were used by staff answering enquiries to the office. This format could also be used for printing all the reports for one patient. Retrieval of results from the files was easily performed and was useful to medical and laboratory staff and for control-of-infection purposes. The system was written in COBOL and was designed to be as cost-effective as possible without sacrificing accuracy; the cost of a report and its filing was 17.97 pence. PMID:939809

  3. Can ASCII data files be standardized for Earth Science?

    NASA Astrophysics Data System (ADS)

    Evans, K. D.; Chen, G.; Wilson, A.; Law, E.; Olding, S. W.; Krotkov, N. A.; Conover, H.

    2015-12-01

    NASA's Earth Science Data Systems Working Groups (ESDSWG) were created over 10 years ago. The role of the ESDSWG is to make recommendations relevant to NASA's Earth science data systems based on user experiences. Each group works independently, focusing on a unique topic. Participants in ESDSWG groups come from a variety of NASA-funded science and technology projects such as MEaSUREs, NASA information technology experts, affiliated contractor staff, and other interested community members from academia and industry. Recommendations from the ESDSWG groups will enhance NASA's efforts to develop long-term data products. Each year, the ESDSWG holds a face-to-face meeting to discuss recommendations and future efforts. Last year (2014), the ASCII for Science Data Working Group (ASCII WG) completed its goals and made recommendations on the minimum set of information needed to make ASCII files at least human readable and usable for the foreseeable future. The 2014 ASCII WG created a table of ASCII files and their components as a means of understanding what kinds of ASCII formats exist and what components they have in common. Using this table and adding information from other ASCII file formats, we will discuss the advantages and disadvantages of a standardized format. For instance, space geodesy scientists have used the same RINEX/SINEX ASCII formats for decades, and astronomers mostly archive their data in the FITS format, yet Earth scientists seem to have a slew of ASCII formats, such as ICARTT, netCDF (an ASCII dump), and the IceBridge ASCII format. The 2015 Working Group is focusing on promoting extensibility and machine readability of ASCII data. Questions have been posed, including: Can we have a standardized ASCII file format? Can it be machine readable and simultaneously human readable? We will present a summary of the currently used ASCII formats in terms of advantages and shortcomings, as well as potential improvements.

  4. Preliminary results of 3D dose calculations with MCNP-4B code from a SPECT image.

    PubMed

    Rodríguez Gual, M; Lima, F F; Sospedra Alfonso, R; González González, J; Calderón Marín, C

    2004-01-01

    Interface software was developed to generate the input file for running the Monte Carlo MCNP-4B code from a medical image in Interfile format version 3.3. The software was tested using a spherical phantom of tomography slices with a known cumulated activity distribution, in Interfile format, generated with the IMAGAMMA medical image processing system. The 3D dose calculation obtained with the Monte Carlo MCNP-4B code was compared with the voxel S factor method. The results show a relative error between the two methods of less than 1%.
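
    For context on the record above: an Interfile v3.3 data set pairs an ASCII header of "key := value" lines with a binary data file, so the first step of any such converter is a header parse. A minimal, hypothetical sketch (key names follow the Interfile convention; the MCNP input generation itself is omitted):

      def parse_interfile_header(path):
          """Parse Interfile 'key := value' header lines into a dict."""
          keys = {}
          with open(path, errors="ignore") as f:
              for line in f:
                  if ":=" in line:
                      key, _, value = line.partition(":=")
                      keys[key.strip().lstrip("!").lower()] = value.strip()
          return keys

      # hdr = parse_interfile_header("phantom.hdr")   # placeholder file name
      # shape = (int(hdr["matrix size [1]"]), int(hdr["matrix size [2]"]))
      # data_file = hdr["name of data file"]          # binary voxel data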

  5. Fundamental study of compression for movie files of coronary angiography

    NASA Astrophysics Data System (ADS)

    Ando, Takekazu; Tsuchiya, Yuichiro; Kodera, Yoshie

    2005-04-01

    When network distribution of movie files is considered, lossy-compressed movie files with small file sizes could be useful. We chose three kinds of coronary stricture movies with different motion speeds as examination objects: movies with slow, normal, and fast heart rates. MPEG-1, DivX5.11, WMV9 (Windows Media Video 9), and WMV9-VCM (Windows Media Video 9-Video Compression Manager) movies were made from three kinds of AVI-format movies with different motion speeds. Five kinds of movies, the four kinds of compressed movies and the uncompressed AVI (in place of the DICOM format), were evaluated by Thurstone's method. The evaluation factors were "sharpness, granularity, contrast, and comprehensive evaluation." In the virtual bradycardia movie, AVI received the best evaluation for all factors except granularity. In the virtual normal movie, a different compression technique excelled for each evaluation factor. In the virtual tachycardia movie, MPEG-1 received the best evaluation for all factors except contrast. Which compression format performs well depends on the speed of the movie because the compression algorithms differ; this is thought to reflect the influence of inter-frame compression. Movie compression algorithms use both inter-frame and intra-frame compression. As each compression method influences the image differently, it is necessary to examine the relation between the compression algorithm and our results.

  6. Extract and visualize geolocation from any text file

    NASA Astrophysics Data System (ADS)

    Boustani, M.

    2015-12-01

    There are a variety of text file formats, such as PDF and HTML, that contain words about locations (countries, cities, regions, and more). GeoParser was developed as a sub-project under DARPA Memex to help find geolocation information in crawled website data. It is a web application that leverages Apache Tika to extract locations from any text file format and visualize the geolocations on a map.
    https://github.com/MBoustani/GeoParser
    https://github.com/chrismattmann/tika-python
    http://www.darpa.mil/program/memex
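
    A minimal sketch of the GeoParser idea using the tika-python bindings linked above: extract text from an arbitrary file format with Apache Tika, then match location names against a gazetteer. The tiny inline gazetteer is a stand-in for a real one (e.g., GeoNames):

      from tika import parser  # pip install tika; runs a local Tika server (needs Java)

      GAZETTEER = {"Paris": (48.8566, 2.3522), "Tokyo": (35.6762, 139.6503)}

      def extract_locations(path):
          parsed = parser.from_file(path)        # works for PDF, HTML, DOCX, ...
          text = parsed.get("content") or ""
          return {name: coords for name, coords in GAZETTEER.items()
                  if name in text}

      # print(extract_locations("report.pdf"))  # placeholder file name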

  7. BOREAS TE-18 Landsat TM Maximum Likelihood Classification Image of the SSA

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Knapp, David

    2000-01-01

    The BOREAS TE-18 team focused its efforts on using remotely sensed data to characterize the successional and disturbance dynamics of the boreal forest for use in carbon modeling. The objective of this classification is to provide the BOREAS investigators with a data product that characterizes the land cover of the SSA. A Landsat-5 TM image from 02-Sep-1994 was used to derive the classification. A technique was implemented that uses reflectances of various land cover types along with a geometric optical canopy model to produce spectral trajectories. These trajectories are used as training data to classify the image into the different land cover classes. These data are provided in a binary image file format. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).

  8. Data File Standard for Flow Cytometry, Version FCS 3.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spidlen, Josef; Moore, Wayne; Parks, David

    2009-11-10

    The flow cytometry data file standard provides the specifications needed to completely describe flow cytometry data sets within the confines of the file containing the experimental data. In 1984, the first Flow Cytometry Standard format for data files was adopted as FCS 1.0. This standard was modified in 1990 as FCS 2.0 and again in 1997 as FCS 3.0. We report here on the next generation flow cytometry standard data file format. FCS 3.1 is a minor revision based on suggested improvements from the community. The unchanged goal of the standard is to provide a uniform file format that allows files created by one type of acquisition hardware and software to be analyzed by any other type. The FCS 3.1 standard retains the basic FCS file structure and most features of previous versions of the standard. Changes included in FCS 3.1 address potential ambiguities in the previous versions and provide a more robust standard. The major changes include simplified support for international characters and improved support for storing compensation. The major additions are support for preferred display scale, a standardized way of capturing the sample volume, information about originality of the data file, and support for plate and well identification in high throughput, plate based experiments. Please see the normative version of the FCS 3.1 specification in Supporting Information for this manuscript (or at http://www.isac-net.org/ in the Current standards section) for a complete list of changes.

  9. Metadata requirements for results of diagnostic imaging procedures: a BIIF profile to support user applications

    NASA Astrophysics Data System (ADS)

    Brown, Nicholas J.; Lloyd, David S.; Reynolds, Melvin I.; Plummer, David L.

    2002-05-01

    A visible digital image is rendered from a set of digital image data. Medical digital image data can be stored in either: (a) a pre-rendered format, corresponding to a photographic print, or (b) an un-rendered format, corresponding to a photographic negative. The appropriate image data storage format and associated header data (metadata) required by a user of the results of a diagnostic procedure recorded electronically depend on the task(s) to be performed. The DICOM standard provides a rich set of metadata that supports the needs of complex applications. Many end user applications, such as simple report text viewing and display of a selected image, are not so demanding, and generic image formats such as JPEG are sometimes used. However, these lack some basic identification requirements. In this paper we make specific proposals for minimal extensions to generic image metadata of value in various domains, which enable safe use in two simple healthcare end user scenarios: (a) viewing of text and a selected JPEG image activated by a hyperlink and (b) viewing of one or more JPEG images together with superimposed text and graphics annotation using a file specified by a profile of the ISO/IEC Basic Image Interchange Format (BIIF).

  10. Preliminary Image Map of the 2007 Ranch Fire Perimeter, Cobblestone Mountain Quadrangle, Los Angeles and Ventura Counties, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  11. Preliminary Image Map of the 2007 Witch and Poomacha Fire Perimeters, Rodriguez Mountain Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  12. Preliminary Image Map of the 2007 Ammo Fire Perimeter, San Clemente Quadrangle, Orange and San Diego Counties, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  13. Preliminary Image Map of the 2007 Witch and Poomacha Fire Perimeters, Mesa Grande Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  14. Preliminary Image Map of the 2007 Ranch Fire Perimeter, Whitaker Peak Quadrangle, Los Angeles and Ventura Counties, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  15. Preliminary Image Map of the 2007 Poomacha Fire Perimeter, Vail Lake Quadrangle, Riverside and San Diego Counties, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  16. Guide to GFS History File Change on May 1, 2007

    Science.gov Websites

    Guide to GFS History File Change on May 1, 2007 On May 1, 2007 12Z, the GFS had a major change. The change caused the internal binary GFS history file to change formats. The file is still in spectral space but now pressure is calculated in a different way. Sometime in the future, the GFS history file may be

  17. Pancreatic Cancer Detection Consortium (PCDC) | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"2256","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"A 3-dimensional image of a human torso highlighting the pancreas.","field_file_image_title_text[und][0][value]":false},"type":"media","field_deltas":{"1":{"format":"default","field_file_image_alt_text[und][0][value]":"A 3-dimensional image of a human torso

  18. SU-E-T-473: A Patient-Specific QC Paradigm Based On Trajectory Log Files and DICOM Plan Files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeMarco, J; McCloskey, S; Low, D

    Purpose: To evaluate a remote QC tool for monitoring treatment machine parameters and treatment workflow. Methods: The Varian TrueBeam linear accelerator is a digital machine that records machine axis parameters and MLC leaf positions as a function of delivered monitor units or control points. This information is saved to a binary trajectory log file for every treatment or imaging field in the patient treatment session. A MATLAB analysis routine was developed to parse the trajectory log files for a given patient, compare the expected versus actual machine and MLC positions, and perform a cross-comparison with the DICOM-RT plan file exported from the treatment planning system. The parsing routine sorts the trajectory log files based on the time and date stamp and generates a sequential report file listing treatment parameters, providing a match relative to the DICOM-RT plan file. Results: The trajectory log parsing routine was compared against a standard record-and-verify listing for patients undergoing initial IMRT dosimetry verification and weekly and final chart QC. The complete treatment course was independently verified for 10 patients of varying treatment site, and a total of 1267 treatment fields were evaluated, including pre-treatment imaging fields where applicable. In the context of IMRT plan verification, eight prostate SBRT plans with 4 arcs per plan were evaluated based on expected versus actual machine axis parameters. The average value of the maximum RMS MLC error was 0.067±0.001 mm and 0.066±0.002 mm for leaf banks A and B, respectively. Conclusion: A real-time QC analysis program was tested using trajectory log files and DICOM-RT plan files. The parsing routine is efficient and able to evaluate all relevant machine axis parameters during a patient treatment course, including MLC leaf positions and table positions at the time of image acquisition and during treatment.
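
    A sketch of the cross-comparison step described above, using pydicom to pull planned MLC positions from the DICOM-RT plan. This is not the authors' MATLAB routine; parsing the proprietary binary trajectory log is assumed to be done elsewhere, and `actual` below is a stand-in:

      import numpy as np
      import pydicom

      def planned_mlc_positions(rtplan_path, beam_index=0):
          """Collect planned MLC leaf positions per control point."""
          ds = pydicom.dcmread(rtplan_path)
          beam = ds.BeamSequence[beam_index]
          rows = []
          for cp in beam.ControlPointSequence:
              for dev in getattr(cp, "BeamLimitingDevicePositionSequence", []):
                  if dev.RTBeamLimitingDeviceType.startswith("MLC"):
                      rows.append(np.asarray(dev.LeafJawPositions, float))
          return np.vstack(rows)  # shape: (control points, leaves)

      # expected = planned_mlc_positions("rtplan.dcm")  # placeholder file name
      # actual = ...  # parsed elsewhere from the binary trajectory log
      # rms_per_leaf = np.sqrt(np.mean((expected - actual) ** 2, axis=0))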

  19. Effect of various digital processing algorithms on the measurement accuracy of endodontic file length.

    PubMed

    Kal, Betül Ilhan; Baksi, B Güniz; Dündar, Nesrin; Sen, Bilge Hakan

    2007-02-01

    The aim of this study was to compare the accuracy of endodontic file length measurements after application of various image enhancement modalities. Endodontic files of three different ISO sizes were inserted into 20 single-rooted extracted permanent mandibular premolar teeth, and standardized images were obtained. The original digital images were then enhanced using five processing algorithms. Six evaluators measured the length of each file on each image. The measurements from each processing algorithm and each file size were compared using repeated measures ANOVA and Bonferroni tests (P = 0.05). A paired t test was performed to compare the measurements with the true lengths of the files (P = 0.05). All of the processing algorithms provided significantly shorter measurements than the true length of each file size (P < 0.05). The threshold enhancement modality produced significantly higher mean error values (P < 0.05), while there was no significant difference among the other enhancement modalities (P > 0.05). A decrease in mean error value was observed with increasing file size (P < 0.05). Invert, contrast/brightness, and edge enhancement algorithms may be recommended for accurate file length measurements when utilizing storage phosphor plates.

  20. Network Configuration Analysis for Formation Flying Satellites

    NASA Technical Reports Server (NTRS)

    Knoblock, Eric J.; Wallett, Thomas M.; Konangi, Vijay K.; Bhasin, Kul B.

    2001-01-01

    The performance of two networks to support autonomous multi-spacecraft formation flying systems is presented. Both systems are comprised of a ten-satellite formation, with one of the satellites designated as the central or 'mother ship.' All data is routed through the mother ship to the terrestrial network. The first system uses a TCP/IP over ATM protocol architecture within the formation, and the second system uses the IEEE 802.11 protocol architecture within the formation. The simulations consist of file transfers using either the File Transfer Protocol (FTP) or the Simple Automatic File Exchange (SAFE) Protocol. The results compare the IP queuing delay, IP queue size, and IP processing delay at the mother ship as well as end-to-end delay for both systems. In all cases, using IEEE 802.11 within the formation yields less delay. Also, the throughput exhibited by SAFE is better than FTP.

  1. BOREAS TE-18 Landsat TM Physical Classification Image of the NSA

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Knapp, David

    2000-01-01

    The BOREAS TE-18 team focused its efforts on using remotely sensed data to characterize the successional and disturbance dynamics of the boreal forest for use in carbon modeling. The objective of this classification is to provide the BOREAS investigators with a data product that characterizes the land cover of the NSA. A Landsat-5 TM image from 21-Jun-1995 was used to derive the classification. A technique was implemented that uses reflectances of various land cover types along with a geometric optical canopy model to produce spectral trajectories. These trajectories are used in a way that is similar to training data to classify the image into the different land cover classes. The data are provided in a binary, image file format. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).

  2. Solid Hydrogen Experiments for Atomic Propellants: Particle Formation Energy and Imaging Analyses

    NASA Technical Reports Server (NTRS)

    Palaszewski, Bryan

    2002-01-01

    This paper presents particle formation energy balances and detailed analyses of the images from experiments that were conducted on the formation of solid hydrogen particles in liquid helium during the Phase II testing in 2001. Solid particles of hydrogen were frozen in liquid helium and observed with a video camera. The solid hydrogen particle sizes and the total mass of hydrogen particles were estimated. The particle formation efficiency is also estimated. Particle sizes from the Phase I testing in 1999 and the Phase II testing in 2001 were similar. Though the 2001 testing created similar particle sizes, many new particle formation phenomena were observed. These experiment image analyses are among the first steps toward visually characterizing these particles, and they allow designers to understand what issues must be addressed in atomic propellant feed system designs for future aerospace vehicles.

  3. Solid Hydrogen Experiments for Atomic Propellants: Particle Formation, Imaging, Observations, and Analyses

    NASA Technical Reports Server (NTRS)

    Palaszewski, Bryan

    2005-01-01

    This report presents particle formation observations and detailed analyses of the images from experiments that were conducted on the formation of solid hydrogen particles in liquid helium. Hydrogen was frozen into particles in liquid helium, and observed with a video camera. The solid hydrogen particle sizes and the total mass of hydrogen particles were estimated. These newly analyzed data are from the test series held on February 28, 2001. Particle sizes from previous testing in 1999 and the testing in 2001 were similar. Though the 2001 testing created similar particles sizes, many new particle formation phenomena were observed: microparticles and delayed particle formation. These experiment image analyses are some of the first steps toward visually characterizing these particles, and they allow designers to understand what issues must be addressed in atomic propellant feed system designs for future aerospace vehicles.

  4. BOREAS Soils Data over the SSA in Raster Format and AEAC Projection

    NASA Technical Reports Server (NTRS)

    Knapp, David; Rostad, Harold; Hall, Forrest G. (Editor)

    2000-01-01

    This data set consists of GIS layers that describe the soils of the BOREAS SSA. The original data were submitted as vector layers that were gridded by BOREAS staff to a 30-meter pixel size in the AEAC projection. These data layers include the soil code (which relates to the soil name), modifier (which also relates to the soil name), and extent (indicating the extent that this soil exists within the polygon). There are three sets of these layers representing the primary, secondary, and tertiary soil characteristics. Thus, there is a total of nine layers in this data set along with supporting files. The data are stored in binary, image format files.

  5. Personalization of structural PDB files.

    PubMed

    Woźniak, Tomasz; Adamiak, Ryszard W

    2013-01-01

    The PDB format is the format most commonly used by programs to define the three-dimensional structure of biomolecules. However, those programs often use different versions of the format, and thus far no comprehensive solution for unifying the PDB formats has been developed. Here we present an open-source, Python-based tool called PDBinout for processing and converting various versions of the PDB file format for biostructural applications. Moreover, PDBinout allows users to create their own PDB versions. PDBinout is freely available under the LGPL licence at http://pdbinout.ibch.poznan.pl.
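
    As a flavor of the normalization such a tool performs, here is a toy Python sketch that parses fixed-column ATOM records and re-emits them in one consistent layout; real PDB dialects differ in more ways than this handles:

      def parse_atom_line(line):
          """Extract fields from a fixed-column PDB ATOM/HETATM record."""
          return {
              "serial": int(line[6:11]),
              "name": line[12:16].strip(),
              "resName": line[17:20].strip(),
              "chain": line[21],
              "resSeq": int(line[22:26]),
              "x": float(line[30:38]),
              "y": float(line[38:46]),
              "z": float(line[46:54]),
          }

      def format_atom_line(a):
          """Re-emit the record in one consistent column layout."""
          return ("ATOM  {serial:>5} {name:<4} {resName:>3} {chain}"
                  "{resSeq:>4}    {x:8.3f}{y:8.3f}{z:8.3f}").format(**a)

      line = "ATOM      1  CA  ALA A   1      11.104   6.134   2.155"
      print(format_atom_line(parse_atom_line(line)))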

  6. Shuttle Data Center File-Processing Tool in Java

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.; Miller, Walter H.

    2006-01-01

    A Java-language computer program has been written to facilitate mining of data in files in the Shuttle Data Center (SDC) archives. This program can be executed on a variety of workstations or via Web-browser programs. This program is partly similar to prior C-language programs used for the same purpose, while differing from those programs in that it exploits the platform neutrality of Java in implementing several features that are important for analysis of large sets of time-series data. The program supports regular expression queries of SDC archive files, reads the files, interleaves the time-stamped samples, and then transforms the results into a chosen output format. A user can choose among a variety of output file formats that are useful for diverse purposes, including plotting, Markov modeling, multivariate density estimation, and wavelet multiresolution analysis, as well as for playback of data in support of simulation and testing.
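
    A minimal sketch of the core interleaving step, written in Python rather than the tool's Java; the "timestamp,value" line layout and the file names are illustrative assumptions:

      import csv
      import heapq

      def read_samples(path):
          """Yield (timestamp, source, value) from a 'timestamp,value' file."""
          with open(path, newline="") as f:
              for ts, value in csv.reader(f):
                  yield (float(ts), path, value)

      def interleave(paths):
          """Merge per-file streams by timestamp (each file already sorted)."""
          return heapq.merge(*(read_samples(p) for p in paths))

      # for ts, src, val in interleave(["param_a.csv", "param_b.csv"]):
      #     print(ts, src, val)  # one time-ordered stream across all files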

  7. E-submission chronic toxicology study supplemental files

    EPA Pesticide Factsheets

    The formats and instructions in these documents are designed to be used as an example or guide for registrants to format electronic files for submission of animal toxicology data to OPP for review in support of registration and reevaluation of pesticides.

  8. High Speed Large Format Photon Counting Microchannel Plate Imaging Sensors

    NASA Astrophysics Data System (ADS)

    Siegmund, O.; Ertley, C.; Vallerga, J.; Craven, C.; Popecki, M.; O'Mahony, A.; Minot, M.

    The development of a new class of microchannel plate technology, using atomic layer deposition (ALD) techniques applied to a borosilicate microcapillary array is enabling the implementation of larger, more stable detectors for Astronomy and remote sensing. Sealed tubes with MCPs with SuperGenII, bialkali, GaAs and GaN photocathodes have been developed to cover a wide range of optical/UV sensing applications. Formats of 18mm and 25mm circular, and 50mm (Planacon) and 20cm square have been constructed for uses from night time remote reconnaissance and biological single-molecule fluorescence lifetime imaging microscopy, to large area focal plane imagers for Astronomy, neutron detection and ring imaging Cherenkov detection. The large focal plane areas were previously unattainable, but the new developments in construction of ALD microchannel plates allow implementation of formats of 20cm or more. Continuing developments in ALD microchannel plates offer improved overall sealed tube lifetime and gain stability, and furthermore show reduced levels of radiation induced background. High time resolution astronomical and remote sensing applications can be addressed with microchannel plate based imaging, photon time tagging detector sealed tube schemes. Photon counting imaging readouts for these devices vary from cross strip (XS), cross delay line (XDL), to stripline anodes, and pad arrays depending on the intended application. The XS and XDL readouts have been implemented in formats from 22mm, and 50mm to 20cm. Both use MCP charge signals detected on two orthogonal layers of conductive fingers to encode event X-Y positions. XDL readout uses signal propagation delay to encode positions while XS readout uses charge cloud centroiding. Spatial resolution readout of XS detectors can be better than 20 microns FWHM, with good image linearity while using low gain (<10^6), allowing high local counting rates and longer overall tube lifetime. XS tubes with electronics can encode event

  9. Viewing Files — EDRN Public Portal

    Cancer.gov

    In addition to standard HTML Web pages, our web site contains other file formats. You may need additional software or browser plug-ins to view some of the information available on our site. This document lists each format, along with links to the corresponding freely available plug-ins or viewers.

  10. Rapid 3D bioprinting from medical images: an application to bone scaffolding

    NASA Astrophysics Data System (ADS)

    Lee, Daniel Z.; Peng, Matthew W.; Shinde, Rohit; Khalid, Arbab; Hong, Abigail; Pennacchi, Sara; Dawit, Abel; Sipzner, Daniel; Udupa, Jayaram K.; Rajapakse, Chamith S.

    2018-03-01

    Bioprinting of tissue has applications throughout medicine. Recent advances in medical imaging allow the generation of 3-dimensional models that can then be 3D printed. However, the conventional method of converting medical images to 3D-printable G-Code instructions has several limitations, namely significant processing time for large, high-resolution images, and the loss of microstructural surface information from limited surface resolution and subsequent reslicing. We have overcome these issues by creating a Java program that skips the intermediate triangulation and reslicing steps and directly converts binary DICOM images into G-Code. In this study, we tested the two methods of G-Code generation on the application of synthetic bone graft scaffold generation. We imaged human cadaveric proximal femurs at an isotropic resolution of 0.03 mm using a high-resolution peripheral quantitative computed tomography (HR-pQCT) scanner. These images, in the Digital Imaging and Communications in Medicine (DICOM) format, were then processed through two methods. In each method, slices and regions of print were selected, filtered to generate a smoothed image, and thresholded. In the conventional method, these processed images are converted to the STereoLithography (STL) format and then resliced to generate G-Code. In the new, direct method, these processed images are run through our Java program and directly converted to G-Code. File size, processing time, and print time were measured for each. We found that this new method produced a significant reduction in G-Code file size as well as processing time (92.23% reduction). This allows for more rapid 3D printing from medical images.
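
    The direct conversion amounts to walking each thresholded slice row by row and emitting a move per contiguous run of "on" pixels, skipping the STL round trip. A toy sketch under those assumptions (the pixel size, feed rate, and run-length strategy are illustrative, not the authors' exact algorithm):

      import numpy as np

      def mask_row_runs(row):
          """Yield (start, end) index pairs of contiguous True runs in a 1D mask."""
          edges = np.diff(np.concatenate(([0], row.astype(int), [0])))
          return zip(np.where(edges == 1)[0], np.where(edges == -1)[0])

      def slice_to_gcode(mask, z, pixel_mm=0.03, feed=1200):
          """Emit one G1 extrusion move per run of 'on' pixels in each row."""
          lines = [f"G1 Z{z:.3f} F{feed}"]
          for y, row in enumerate(mask):
              for x0, x1 in mask_row_runs(row):
                  lines.append(f"G0 X{x0 * pixel_mm:.3f} Y{y * pixel_mm:.3f}")
                  lines.append(f"G1 X{x1 * pixel_mm:.3f} Y{y * pixel_mm:.3f} E1")
          return "\n".join(lines)

      demo = np.zeros((4, 8), dtype=bool)   # stand-in for a thresholded slice
      demo[1:3, 2:6] = True
      print(slice_to_gcode(demo, z=0.03))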

  11. [Continuous observation of canal aberrations in S-shaped simulated root canal prepared by hand-used ProTaper files].

    PubMed

    Xia, Ling-yun; Leng, Wei-dong; Mao, Min; Yang, Guo-biao; Xiang, Yong-gang; Chen, Xin-mei

    2009-08-01

    To observe the formation of canal aberrations in S-shaped root canals prepared with each file of the hand-used ProTaper system. Fifteen S-shaped simulated resin root canals were selected. Each root canal was prepared with every file of the hand-used ProTaper system following the manufacturer's instructions. Images of canals prepared by S1, S2, F1, F2, and F3 were captured and stored, and divided into groups S1, S2, F1, F2, and F3. An image of each unprepared canal was superimposed with the images of the same root canal in these five groups to observe the types and number of canal aberrations, which included unprepared area, danger zone, ledge, elbow, zip, and perforation. The SPSS 12.0 software package was used for Fisher's exact probabilities in 2x2 tables. Unprepared area decreased after preparation with each ProTaper file, but it still existed when canal preparation was finished. The incidence of danger zone, elbow, and zip in group F1 was 15/15, 11/15, and 4/15, respectively, significantly higher than in group S2 (2/15, 0, 0) (P<0.001). Ledges appeared after preparation with F2 and increased sharply in group F3. No perforation was found in any group. The incidence of canal aberrations begins to increase after preparation with the ProTaper finishing files. The presence of unprepared area suggests that it is essential to rinse the canal abundantly during preparation of complicated canals and to apply canal antisepsis after preparation.

  12. BOREAS Level-3a Landsat TM Imagery: Scaled At-sensor Radiance in BSQ Format

    NASA Technical Reports Server (NTRS)

    Nickerson, Jaime; Hall, Forrest G. (Editor); Knapp, David; Newcomer, Jeffrey A.; Cihlar, Josef

    2000-01-01

    For BOREAS, the level-3a Landsat TM data, along with the other remotely sensed images, were collected in order to provide spatially extensive information over the primary study areas. This information includes radiant energy, detailed land cover, and biophysical parameter maps such as FPAR and LAI. Although very similar in content to the level-3s Landsat TM products, the level-3a images were created to provide users with a more usable BSQ format and to provide information that permitted direct determination of per-pixel latitude and longitude coordinates. Geographically, the level-3a images cover the BOREAS NSA and SSA. Temporally, the images cover the period of 22-Jun-1984 to 30-Jul-1996. The images are available in binary, image-format files. With permission from CCRS and RSI, several of the full-resolution images are included on the BOREAS CD-ROM series. Due to copyright issues, the images not included on the CD-ROM may not be publicly available. See Sections 15 and 16 for information about how to acquire the data. Information about the images not on the CD-ROMs is provided in an inventory listing on the CD-ROMs.

  13. Index files for Belle II - very small skim containers

    NASA Astrophysics Data System (ADS)

    Sevior, Martin; Bloomfield, Tristan; Kuhr, Thomas; Ueda, I.; Miyake, H.; Hara, T.

    2017-10-01

    The Belle II experiment [1] employs the ROOT file format [2] for recording data and is investigating the use of "index files" to reduce the size of data skims. These files contain pointers to the locations of interesting events within the total Belle II data set and reduce the size of data skims by two orders of magnitude. We implement this scheme on the Belle II grid by recording the parent file metadata and the event location within the parent file. While the scheme works, it is substantially slower than a normal sequential read of standard skim files using default ROOT file parameters. We investigate the performance of the scheme by adjusting the "splitLevel" and "autoflushsize" parameters of the ROOT files in the parent data files.
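
    A minimal illustration of the index-file idea, with JSON standing in for the ROOT-based metadata the experiment actually records; the file paths are placeholders:

      import json

      def write_index(path, selections):
          """selections: (parent_file, entry_number) pairs for selected events."""
          with open(path, "w") as f:
              json.dump([{"parent": lfn, "entry": n} for lfn, n in selections], f)

      def read_index(path):
          with open(path) as f:
              return [(rec["parent"], rec["entry"]) for rec in json.load(f)]

      write_index("skim.index.json", [("/belle2/data/file_0001.root", 42),
                                      ("/belle2/data/file_0007.root", 9001)])
      for parent, entry in read_index("skim.index.json"):
          print(parent, entry)  # open `parent` and read only event `entry`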

  14. Toward privacy-preserving JPEG image retrieval

    NASA Astrophysics Data System (ADS)

    Cheng, Hang; Wang, Jingyue; Wang, Meiqing; Zhong, Shangping

    2017-07-01

    This paper proposes a privacy-preserving retrieval scheme for JPEG images based on local variance. Three parties are involved in the scheme: the content owner, the server, and the authorized user. The content owner encrypts JPEG images for privacy protection by jointly using permutation cipher and stream cipher, and then, the encrypted versions are uploaded to the server. With an encrypted query image provided by an authorized user, the server may extract blockwise local variances in different directions without knowing the plaintext content. After that, it can calculate the similarity between the encrypted query image and each encrypted database image by a local variance-based feature comparison mechanism. The authorized user with the encryption key can decrypt the returned encrypted images with plaintext content similar to the query image. The experimental results show that the proposed scheme not only provides effective privacy-preserving retrieval service but also ensures both format compliance and file size preservation for encrypted JPEG images.
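
    A sketch of the feature the scheme relies on: blockwise local variance over 8x8 blocks (the JPEG block size). The ciphers and the directional variants are omitted; the block size and image shape here are assumptions:

      import numpy as np

      def blockwise_variance(img, block=8):
          """One local-variance value per non-overlapping block."""
          h, w = img.shape
          h, w = h - h % block, w - w % block          # crop to whole blocks
          blocks = img[:h, :w].reshape(h // block, block, w // block, block)
          return blocks.var(axis=(1, 3))

      rng = np.random.default_rng(1)
      img = rng.integers(0, 256, (64, 64)).astype(float)  # stand-in image
      feat = blockwise_variance(img)
      print(feat.shape)  # (8, 8) feature map for a 64x64 image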

  15. Preliminary Image Map of the 2007 Santiago Fire Perimeter, Black Star Canyon Quadrangle, Orange, Riverside, and San Bernardino Counties, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  16. Preliminary Image Map of the 2007 Ranch and Magic Fire Perimeters, Val Verde Quadrangle, Los Angeles and Ventura Counties, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  17. Analyzing microtomography data with Python and the scikit-image library.

    PubMed

    Gouillart, Emmanuelle; Nunez-Iglesias, Juan; van der Walt, Stéfan

    2017-01-01

    The exploration and processing of images is a vital aspect of the scientific workflows of many X-ray imaging modalities. Users require tools that combine interactivity, versatility, and performance. scikit-image is an open-source image processing toolkit for the Python language that supports a large variety of file formats and is compatible with 2D and 3D images. The toolkit exposes a simple programming interface, with thematic modules grouping functions according to their purpose, such as image restoration, segmentation, and measurements. scikit-image users benefit from a rich scientific Python ecosystem that contains many powerful libraries for tasks such as visualization or machine learning. scikit-image combines a gentle learning curve, versatile image processing capabilities, and the scalable performance required for the high-throughput analysis of X-ray imaging data.
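
    A short example in the style the abstract describes, with each step drawn from a thematic scikit-image module; the built-in sample image stands in for a real tomography slice:

      from skimage import data, filters, measure

      img = data.camera()                     # built-in 8-bit sample image
      thresh = filters.threshold_otsu(img)    # global Otsu threshold
      labels = measure.label(img > thresh)    # connected-component labeling
      regions = measure.regionprops(labels)   # per-region measurements
      print(len(regions), sorted(r.area for r in regions)[-3:])
      # skimage.io.imread("slice.tif") would load external data in the many
      # file formats the toolkit supports, e.g. real microtomography slices.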

  18. Short-Term File Reference Patterns in a UNIX Environment,

    DTIC Science & Technology

    1986-03-01

    accounts mentioned above. This includes major administrative and status files (for example, /etc/passwd), system libraries, system include files and so on... files are those appearing in / and /etc. Examples are /vmunix (the bootable kernel image) and /etc/passwd (passwords and other information on accounts... as /etc/passwd). The small size of opened files (55% are under 1024 bytes, a common block transfer size, and 75% are under 4096 bytes) suggests that

  19. Indexing and filing of pathological illustrations.

    PubMed Central

    Brown, R A; Fawkes, R S; Beck, J S

    1975-01-01

    An inexpensive feature card retrieval system has been combined with the Systematised Nomenclature of Pathology (SNOP) to provide a simple but efficient means of indexing and filing 2 in. x 2 in. transparencies within a department of pathology. Using this system, 2400 transparencies and the associated index cards can be conveniently stored in one drawer of a standard filing cabinet. PMID:1123438

  20. Networks for Autonomous Formation Flying Satellite Systems

    NASA Technical Reports Server (NTRS)

    Knoblock, Eric J.; Konangi, Vijay K.; Wallett, Thomas M.; Bhasin, Kul B.

    2001-01-01

    The performance of three communications networks to support autonomous multi-spacecraft formation flying systems is presented. All systems comprise a ten-satellite formation arranged in a star topology, with one of the satellites designated as the central or "mother ship." All data are routed through the mother ship to the terrestrial network. The first system uses a TCP/IP over ATM protocol architecture within the formation; the second system uses the IEEE 802.11 protocol architecture within the formation; and the last system uses both of the previous architectures with a constellation of geosynchronous satellites serving as an intermediate point of contact between the formation and the terrestrial network. The simulations consist of file transfers using either the File Transfer Protocol (FTP) or the Simple Automatic File Exchange (SAFE) protocol. The results compare the IP queuing delay and IP processing delay at the mother ship, as well as the application-level round-trip time, for the systems. In all cases, using IEEE 802.11 within the formation yields less delay. Also, the throughput exhibited by SAFE is better than that of FTP.

  1. Arkansas and Louisiana Aeromagnetic and Gravity Maps and Data - A Website for Distribution of Data

    USGS Publications Warehouse

    Bankey, Viki; Daniels, David L.

    2008-01-01

    This report contains digital data, image files, and text files describing data formats for aeromagnetic and gravity data used to compile the State aeromagnetic and gravity maps of Arkansas and Louisiana. The digital files include grids, images, ArcInfo, and Geosoft compatible files. In some of the data folders, ASCII files with the extension 'txt' describe the format and contents of the data files. Read the 'txt' files before using the data files.

  2. NASA Standard for Airborne Data: ICARTT Format ESDS-RFC-019

    NASA Astrophysics Data System (ADS)

    Thornhill, A.; Brown, C.; Aknan, A.; Crawford, J. H.; Chen, G.; Williams, E. J.

    2011-12-01

    Airborne field studies generate a plethora of data products in the effort to study atmospheric composition and processes. Data file formats for airborne field campaigns are designed to present data in an understandable and organized way to support collaboration and to document relevant and important metadata. The ICARTT file format was created to facilitate data management during the International Consortium for Atmospheric Research on Transport and Transformation (ICARTT) campaign in 2004, which involved government agencies and university participants from five countries. Since that mission, the ICARTT format has been used in subsequent field campaigns such as the Polar Study Using Aircraft, Remote Sensing, Surface Measurements and Models of Climate, Chemistry, Aerosols, and Transport (POLARCAT) and the first phase of Deriving Information on Surface Conditions from COlumn and VERtically Resolved Observations Relevant to Air Quality (DISCOVER-AQ). The ICARTT file format was endorsed in 2010 as a standard format for airborne data by the Standards Process Group (SPG), one of the Earth Science Data Systems Working Groups (ESDSWG). The detailed description of the ICARTT format can be found at http://www-air.larc.nasa.gov/missions/etc/ESDS-RFC-019-v1.00.pdf. The ICARTT data format is an ASCII, comma-delimited format that was based on the NASA Ames and GTE file formats. The file header is detailed enough to fully describe the data for users outside of the instrument group and includes a description of the metadata. The ICARTT scanning tools, format structure, implementations, and examples will be presented.
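    The self-describing header is worth illustrating. In the usual ICARTT convention, the first line of the file begins with the number of header lines, so a generic reader can skip straight to the data block; the hedged sketch below assumes that convention, and the file name is hypothetical:

      # Hedged sketch of reading an ICARTT-style ASCII file: the first line
      # is assumed to begin with the header line count.
      with open("DISCOVERAQ-EXAMPLE_20110701_R0.ict") as f:
          n_header = int(f.readline().split(",")[0])
          header = [f.readline() for _ in range(n_header - 1)]   # rest of the header
          rows = [line.strip().split(",") for line in f if line.strip()]
      print(len(header) + 1, "header lines;", len(rows), "data records")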

  3. BOREAS RSS-14 Level -3 Gridded Radiometer and Satellite Surface Radiation Images

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Hodges, Gary; Smith, Eric A.

    2000-01-01

    The BOREAS RSS-14 team collected and processed GOES-7 and -8 images of the BOREAS region as part of its effort to characterize the incoming, reflected, and emitted radiation at regional scales. This data set contains surface radiation parameters, such as net radiation and net solar radiation, that have been interpolated from GOES-7 images and AMS data onto the standard BOREAS mapping grid at a resolution of 5 km N-S and E-W. While some parameters are taken directly from the AMS data set, others have been corrected according to calibrations carried out during IFC-2 in 1994. The corrected values as well as the uncorrected values are included. For example, two values of net radiation are provided: an uncorrected value (Rn), and a value that has been corrected according to the calibrations (Rn-COR). The data are provided in binary image format data files. Some of the data files on the BOREAS CD-ROMs have been compressed using the Gzip program. See section 8.2 for details. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).

  4. CSAM: Compressed SAM format.

    PubMed

    Cánovas, Rodrigo; Moffat, Alistair; Turpin, Andrew

    2016-12-15

    Next generation sequencing machines produce vast amounts of genomic data. For the data to be useful, it is essential that it can be stored and manipulated efficiently. This work responds to the combined challenge of compressing genomic data, while providing fast access to regions of interest, without necessitating decompression of whole files. We describe CSAM (Compressed SAM format), a compression approach offering lossless and lossy compression for SAM files. The structures and techniques proposed are suitable for representing SAM files, as well as supporting fast access to the compressed information. They generate more compact lossless representations than BAM, which is currently the preferred lossless compressed SAM-equivalent format; and are self-contained, that is, they do not depend on any external resources to compress or decompress SAM files. An implementation is available at https://github.com/rcanovas/libCSAM. Contact: canovas-ba@lirmm.fr. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  5. Accessible and informative sectioned images, color-coded images, and surface models of the ear.

    PubMed

    Park, Hyo Seok; Chung, Min Suk; Shin, Dong Sun; Jung, Yong Wook; Park, Jin Seo

    2013-08-01

    In our previous research, we created state-of-the-art sectioned images, color-coded images, and surface models of the human ear. Our ear data would be more beneficial and informative if they were more easily accessible. Therefore, the purpose of this study was to distribute the browsing software and the PDF file in which ear images can be readily obtained and freely explored. Another goal was to inform other researchers of our methods for establishing the browsing software and the PDF file. To achieve this, sectioned images and color-coded images of the ear were prepared (voxel size 0.1 mm). In the color-coded images, structures related to hearing and equilibrium and structures originating from the first and second pharyngeal arches were segmented supplementarily. The sectioned and color-coded images of the right ear were added to the browsing software, which displays the images serially along with structure names. The surface models were reconstructed and combined into the PDF file, where they can be freely manipulated. Using the browsing software and PDF file, sectional and three-dimensional shapes of ear structures can be comprehended in detail. Furthermore, using the PDF file, clinical knowledge can be identified through virtual otoscopy. Therefore, the presented educational tools will be helpful to medical students and otologists by improving their knowledge of ear anatomy. The browsing software and PDF file can be downloaded without charge or registration at our homepage (http://anatomy.dongguk.ac.kr/ear/). Copyright © 2013 Wiley Periodicals, Inc.

  6. Direct imaging search for the "missing link" in giant planet formation

    NASA Astrophysics Data System (ADS)

    Ngo, Henry; Mawet, Dimitri; Ruane, Garreth; Xuan, Wenhao; Bowler, Brendan; Cook, Therese; Zawol, Zoe

    2018-01-01

    While transit and radial velocity detection techniques have probed giant planet populations at close separations (within a few au), current direct imaging surveys are finding giant planets at separations of tens to hundreds of au. Furthermore, these directly imaged planets are very massive, including some with masses above the deuterium burning limit. It is not certain whether these objects represent the high-mass end of planet formation scenarios or the low-mass end of star formation. We present a direct imaging survey to search for the "missing link" population between the close-in RV and transiting giant planets and the extremely distant directly imaged giant planets (i.e., giant planets between 5-10 au). Finding and characterizing this population allows for comparisons with the formation models of closer-in planets and connects directly imaged planets with closer-in planets in semi-major-axis phase space. In addition, microlensing surveys have suggested that a large reservoir of giant planets exists in this region. To find these "missing link" giant planets, our survey searches for giant planets around M-stars. The ubiquity of M-stars provides a large number of nearby targets, and their L-band contrast with planets allows for sensitivity to smaller planet masses than surveys conducted at shorter wavelengths. Along with careful target selection, we use Keck's L-band vector vortex coronagraph to enable sensitivities of a few Jupiter masses as close as 4 au to the host stars. We present our completed 2-year survey targeting 200 young (10-150 Myr), nearby M-stars and our ongoing work to follow up over 40 candidate objects.

  7. Analysis towards VMEM File of a Suspended Virtual Machine

    NASA Astrophysics Data System (ADS)

    Song, Zheng; Jin, Bo; Sun, Yongqing

    With the popularity of virtual machines, forensic investigators are challenged with more complicated situations, among which discovering evidence in virtualized environments is of significant importance. This paper mainly analyzes the file suffixed with .vmem in VMware Workstation, which stores all of a virtual machine's pseudo-physical memory in an image. The internal structure of the .vmem file is studied and disclosed. Key information about processes and threads of a suspended virtual machine is revealed. Further investigation into the Windows XP SP3 heap contents is conducted, and a proof-of-concept tool is provided. Different methods to obtain forensic memory images are introduced, with their advantages and limits analyzed. We conclude with an outlook.

  8. NASA Thesaurus Data File

    NASA Technical Reports Server (NTRS)

    2012-01-01

    The NASA Thesaurus contains the authorized NASA subject terms used to index and retrieve materials in the NASA Aeronautics and Space Database (NA&SD) and NASA Technical Reports Server (NTRS). The scope of this controlled vocabulary includes not only aerospace engineering, but all supporting areas of engineering and physics, the natural space sciences (astronomy, astrophysics, planetary science), Earth sciences, and the biological sciences. The NASA Thesaurus Data File contains all valid terms and hierarchical relationships, USE references, and related terms in machine-readable form. The Data File is available in the following formats: RDF/SKOS, RDF/OWL, ZThes-1.0, and CSV/TXT.

  9. Improving the interactivity and functionality of Web-based radiology teaching files with the Java programming language.

    PubMed

    Eng, J

    1997-01-01

    Java is a programming language that runs on a "virtual machine" built into World Wide Web (WWW)-browsing programs on multiple hardware platforms. Web pages were developed with Java to enable Web-browsing programs to overlay transparent graphics and text on displayed images so that the user could control the display of labels and annotations on the images, a key feature not available with standard Web pages. This feature was extended to include the presentation of normal radiologic anatomy. Java programming was also used to make Web browsers compatible with the Digital Imaging and Communications in Medicine (DICOM) file format. By enhancing the functionality of Web pages, Java technology should provide greater incentive for using a Web-based approach in the development of radiology teaching material.

  10. Catalog Descriptions Using VOTable Files

    NASA Astrophysics Data System (ADS)

    Thompson, R.; Levay, K.; Kimball, T.; White, R.

    2008-08-01

    Additional information is frequently required to describe database table contents and make them understandable to users. For this reason, the Multimission Archive at Space Telescope (MAST) creates "description files" for each table/catalog. After trying various XML and CSV formats, we finally chose VOTable. These files are easy to update via an HTML form, easily read using an XML parser such as (in our case) the PHP5 SimpleXML extension, and have found multiple uses in our data access/retrieval process.
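    For orientation, reading such a VOTable description file does not require PHP; any XML parser will do. A minimal sketch with Python's standard library (the file name is hypothetical, and matching on the tag suffix sidesteps differences between VOTable namespace versions):

      # Hedged sketch: list FIELD names and datatypes from a VOTable file.
      import xml.etree.ElementTree as ET

      tree = ET.parse("catalog_description.xml")   # hypothetical description file
      for elem in tree.getroot().iter():
          if elem.tag.endswith("FIELD"):           # tolerant of any VOTable namespace
              print(elem.get("name"), "-", elem.get("datatype"))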

  11. Data Science Bowl Launched to Improve Lung Cancer Screening | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"2078","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Data Science Bowl Logo","field_file_image_title_text[und][0][value]":"Data Science Bowl Logo","field_folder[und]":"76"},"type":"media","field_deltas":{"1":{"format":"default","field_file_image_alt_text[und][0][value]":"Data Science Bowl

  12. Evaluation of canal transportation after preparation with Reciproc single-file systems with or without glide path files.

    PubMed

    Aydin, Ugur; Karataslioglu, Emrah

    2017-01-01

    Canal transportation is a common sequel caused by rotary instruments. The purpose of the present study is to evaluate the degree of transportation after the use of Reciproc single-file instruments with or without glide path files. Thirty resin blocks with L-shaped canals were divided into three groups (n = 10). Group 1: canals were prepared with the Reciproc-25 file. Group 2: glide path file G1 was used before Reciproc. Group 3: glide path files G1 and G2 were used before Reciproc. Pre- and post-instrumentation images were superimposed under a microscope, and the resin removed from the inner and outer surfaces of the root canal was calculated at 10 points. Statistical analysis was performed with the Kruskal-Wallis test and post hoc Dunn test. For the coronal and middle one-thirds, there was no significant difference among groups (P > 0.05). For the apical section, transportation in Group 1 was significantly higher than in the other groups (P < 0.05). Using glide path files before the Reciproc single-file system reduced the degree of apical canal transportation.

  13. Cloud-based processing of multi-spectral imaging data

    NASA Astrophysics Data System (ADS)

    Bernat, Amir S.; Bolton, Frank J.; Weiser, Reuven; Levitz, David

    2017-03-01

    Multispectral imaging holds great promise as a non-contact tool for the assessment of tissue composition. Performing multi-spectral imaging on a handheld mobile device would bring this technology, and with it knowledge, to low-resource settings, providing state-of-the-art classification of tissue health. This modality, however, produces considerably larger data sets than white-light imaging and requires preliminary image analysis before it can be used. The data then need to be analyzed and logged without demanding too much of the end-point device's system resources, computation time, or battery. Cloud environments were designed to allow offloading of such problems by letting end-point devices (smartphones) hand off computationally hard tasks. To this end, we present a method in which a handheld device built around a smartphone captures a multi-spectral dataset in a movie file format (mp4), and we compare it to other image formats in size, noise, and correctness. We present the cloud configuration used for segmenting the images into frames that can later be used for further analysis.

  14. Software for Automated Reading of STEP Files by I-DEAS(trademark)

    NASA Technical Reports Server (NTRS)

    Pinedo, John

    2003-01-01

    A program called "readstep" enables the I-DEAS(tm) computer-aided-design (CAD) software to automatically read Standard for the Exchange of Product Model Data (STEP) files. (The STEP format is one of several used to transfer data between dissimilar CAD programs.) Prior to the development of "readstep," it was necessary to read STEP files into I-DEAS(tm) one at a time in a slow process that required repeated intervention by the user. In operation, "readstep" prompts the user for the location of the desired STEP files and the names of the I-DEAS(tm) project and model file, then generates an I-DEAS(tm) program file called "readstep.prg" and two Unix shell programs called "runner" and "controller." The program "runner" runs I-DEAS(tm) sessions that execute readstep.prg, while "controller" controls the execution of "runner" and edits readstep.prg if necessary. The user sets "runner" and "controller" into execution simultaneously, and then no further intervention by the user is required. When "runner" has finished, the user should see only parts from successfully read STEP files present in the model file. STEP files that could not be read successfully (e.g., because of format errors) should be regenerated before attempting to read them again.

  15. Chapter 2: Tabular Data and Graphical Images in Support of the U.S. Geological Survey National Oil and Gas Assessment - The Wind River Basin Province

    USGS Publications Warehouse

    Klett, T.R.; Le, P.A.

    2007-01-01

    This chapter describes data used in support of the process being applied by the U.S. Geological Survey (USGS) National Oil and Gas Assessment (NOGA) project. Digital tabular data used in this report and archival data that permit the user to perform further analyses are available elsewhere on this CD-ROM. Computers and software may import the data without the reader having to transcribe them from the Portable Document Format (.pdf) files of the text. Graphical images are provided as .pdf files, and tabular data are provided in raw form as tab-delimited text files (.tab files) because of the number and variety of platforms and software available.

  16. Real-Time Processing of Pressure-Sensitive Paint Images

    DTIC Science & Technology

    2006-12-01

    intermediate or final data to the hard disk in 3D grid format. In addition to the pressure or pressure coefficient at every grid point, the saved file may...occurs. Nevertheless, to achieve an accurate mapping between 2D image coordinates and 3D spatial coordinates, additional parameters must be introduced. A...improved mapping between the 2D and 3D coordinates. In a more sophisticated approach, additional terms corresponding to specific deformation modes

  17. Early Detection | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"171","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Early Detection Research Group Homepage Logo","field_file_image_title_text[und][0][value]":"Early Detection Research Group Homepage Logo","field_folder[und]":"15"},"type":"media","field_deltas":{"1":{"format":"default","field_file_image_alt_text[und][0][value]":"Early

  18. NIH Seeks Input on In-patient Clinical Research Areas | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"2476","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Aerial view of the National Institutes of Health Clinical Center (Building 10) in Bethesda, Maryland.","field_file_image_title_text[und][0][value]":false},"type":"media","field_deltas":{"1":{"format":"default","field_file_image_alt_text[und][0][value]":"Aerial view of

  19. Development of Software to Model AXAF-I Image Quality

    NASA Technical Reports Server (NTRS)

    Ahmad, Anees; Hawkins, Lamar

    1996-01-01

    This draft final report describes the work performed under delivery order number 145 from May 1995 through August 1996. The scope of work included a number of software development tasks for the performance modeling of AXAF-I. A number of new capabilities and functions have been added to the GT software, which is the command-mode version of the GRAZTRACE software originally developed by MSFC. A structural data interface has been developed for the EAL (formerly SPAR) finite element analysis (FEA) program, which is being used by the MSFC Structural Analysis group for the analysis of AXAF-I. This interface utility can read the structural deformation file from EAL and other finite element analysis programs such as NASTRAN and COSMOS/M, and convert the data to a format suitable for deformation ray-tracing to predict the image quality of a distorted mirror. The utility also provides for expanding the data from finite element models assuming 180-degree symmetry. It has been used to predict image characteristics for the AXAF-I HRMA when subjected to gravity effects in the horizontal x-ray ground test configuration. The development of the metrology data processing interface software has also been completed. It can read the HDOS FITS-format surface map files, manipulate and filter the metrology data, and produce a deformation file that can be used by GT for ray tracing of the mirror surface figure errors. This utility has been used to determine the optimum alignment (axial spacing and clocking) for the four pairs of AXAF-I mirrors. Based on this optimized alignment, the geometric images and effective focal lengths for the as-built mirrors were predicted to cross-check the results obtained by Kodak.

  20. A novel fuzzy logic-based image steganography method to ensure medical data security.

    PubMed

    Karakış, R; Güler, I; Çapraz, I; Bilir, E

    2015-12-01

    This study aims to secure medical data by combining them into one file format using steganographic methods. The electroencephalogram (EEG) is selected as the hidden data, and magnetic resonance (MR) images are used as the cover image. In addition to the EEG, the message is composed of the doctor's comments and patient information in the file header of the images. Two new image steganography methods based on fuzzy logic and similarity are proposed to select the non-sequential least significant bits (LSB) of image pixels. The similarity values of the gray levels in the pixels are used to hide the message. The message is secured against attacks by using lossless compression and symmetric encryption algorithms. The quality of the stego image is measured by mean square error (MSE), peak signal-to-noise ratio (PSNR), structural similarity measure (SSIM), universal quality index (UQI), and correlation coefficient (R). According to the obtained results, the proposed method ensures the confidentiality of the patient information and increases the data repository and transmission capacity of both the MR images and the EEG signals. Copyright © 2015 Elsevier Ltd. All rights reserved.
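    For readers unfamiliar with LSB embedding, the sketch below shows only the generic idea; the paper's actual methods select pixels non-sequentially via fuzzy logic and similarity and add lossless compression plus symmetric encryption, none of which is reproduced here:

      # Minimal, generic LSB embedding sketch (not the paper's fuzzy method).
      import numpy as np

      def embed_lsb(cover, bits):
          flat = cover.flatten()                               # copy of the pixels
          if bits.size > flat.size:
              raise ValueError("payload too large for cover image")
          flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite LSBs
          return flat.reshape(cover.shape)

      def extract_lsb(stego, n_bits):
          return stego.flatten()[:n_bits] & 1                  # read LSBs back

      cover = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
      bits = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
      stego = embed_lsb(cover, bits)
      assert (extract_lsb(stego, bits.size) == bits).all()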

  1. A new version of Visual tool for estimating the fractal dimension of images

    NASA Astrophysics Data System (ADS)

    Grossu, I. V.; Felea, D.; Besliu, C.; Jipa, Al.; Bordeianu, C. C.; Stan, E.; Esanu, T.

    2010-04-01

    This work presents a new version of a Visual Basic 6.0 application for estimating the fractal dimension of images (Grossu et al., 2009 [1]). The earlier version was limited to bi-dimensional sets of points stored in bitmap files. The application has been extended to work also with comma-separated-values files and with three-dimensional images.

    New version program summary:
    Program title: Fractal Analysis v02
    Catalogue identifier: AEEG_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 9999
    No. of bytes in distributed program, including test data, etc.: 4 366 783
    Distribution format: tar.gz
    Programming language: MS Visual Basic 6.0
    Computer: PC
    Operating system: MS Windows 98 or later
    RAM: 30 M
    Classification: 14
    Catalogue identifier of previous version: AEEG_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 1999
    Does the new version supersede the previous version?: Yes
    Nature of problem: Estimating the fractal dimension of 2D and 3D images.
    Solution method: Optimized implementation of the box-counting algorithm.
    Reasons for new version: The previous version was limited to bitmap image files. The new application was extended to work with objects stored in comma-separated-values (csv) files. The main advantages are: easier integration with other applications (csv is a widely used, simple text file format); fewer resources consumed and improved performance (only the information of interest, the "black points", is stored); higher resolution (the point coordinates are loaded into Visual Basic double variables [2]); and the possibility of storing three-dimensional objects (e.g. the 3D Sierpinski gasket). In this version the optimized box-counting algorithm [1] was extended to the three
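    The box-counting method named under "Solution method" is easy to sketch. The following lines are an illustration in Python, not the distributed Visual Basic code: count the boxes occupied by the point set at several scales, then fit the slope of log(count) against log(1/size):

      # Illustrative box-counting estimate of fractal dimension for 2D points.
      import numpy as np

      def box_count_dimension(points, box_sizes):
          points = np.asarray(points, dtype=float)
          counts = []
          for s in box_sizes:
              occupied = np.unique(np.floor(points / s), axis=0)  # occupied boxes
              counts.append(len(occupied))
          # Slope of log(count) vs log(1/s) approximates the dimension.
          slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
          return slope

      # Points along a diagonal segment should give a dimension near 1.
      t = np.linspace(0.0, 1.0, 10000)
      print(box_count_dimension(np.column_stack([t, t]), [0.1, 0.05, 0.02, 0.01]))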

  2. BOREAS Level-3s Landsat TM Imagery Scaled At-sensor Radiance in LGSOWG Format

    NASA Technical Reports Server (NTRS)

    Nickeson, Jaime; Knapp, David; Newcomer, Jeffrey A.; Cihlar, Josef; Hall, Forrest G. (Editor)

    2000-01-01

    For the BOReal Ecosystem-Atmosphere Study (BOREAS), the level-3s Landsat Thematic Mapper (TM) data, along with the other remotely sensed images, were collected in order to provide spatially extensive information over the primary study areas. This information includes radiant energy, detailed land cover, and biophysical parameter maps such as Fraction of Photosynthetically Active Radiation (FPAR) and Leaf Area Index (LAI). CCRS collected and supplied the level-3s images to BOREAS for use in the remote sensing research activities. Geographically, the bulk of the level-3s images cover the BOREAS Northern Study Area (NSA) and Southern Study Area (SSA), with a few images covering the area between the NSA and SSA. Temporally, the images cover the period of 22-Jun-1984 to 30-Jul-1996. The images are available in binary, image-format files.

  3. Batch Conversion of 1-D FITS Spectra to Common Graphical Display Files

    NASA Astrophysics Data System (ADS)

    MacConnell, Darrell J.; Patterson, A. P.; Wing, R. F.; Costa, E.; Jedrzejewski, R. I.

    2008-09-01

    Authors DJM, RFW, and EC have accumulated about 1000 spectra of cool stars from CTIO, ESO, and LCO over the interval 1985 to 1994 and processed them with the standard IRAF tasks into FITS files of normalized intensity vs. wavelength. With the growth of the Web as a means of exchanging and preserving scientific information, we desired to put the spectra into a Web-readable format. We have searched without success sites such as the Goddard FITS Image Viewer page, http://fits.gsfc.nasa.gov/fits_viewer.html, for a program to convert a large number of 1-d stellar spectra from FITS format into common formats such as PDF, PS, or PNG. Author APP has written a Python script to do this using the PyFITS module and plotting routines from Pylab. The program determines the wavelength calibration using header keywords and creates PNG plots with a legend read from a CSV file that may contain the star name, position, spectral type, etc. It could readily be adapted to perform almost any kind of simple batch processing of astronomical data. The program may be obtained from the first author (jack@stsci.edu). Support for DJM from the research program for CSC astronomers at STScI is gratefully acknowledged. The Space Telescope Science Institute is operated by the Association of Universities for Research in Astronomy Inc. under NASA contract NAS 5-26555.
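    Since the script itself is only available by e-mail, the sketch below shows the general shape of such a batch conversion using astropy.io.fits (the successor to PyFITS) and matplotlib; it assumes 1-D spectra with a linear wavelength solution in the standard CRVAL1/CRPIX1/CDELT1 keywords, which may not match the authors' actual headers:

      # Hedged sketch: batch-plot 1-D FITS spectra to PNG files.
      import glob
      import numpy as np
      import matplotlib.pyplot as plt
      from astropy.io import fits

      for path in glob.glob("*.fits"):                 # hypothetical input set
          with fits.open(path) as hdul:
              flux = hdul[0].data
              hdr = hdul[0].header
          pixels = np.arange(flux.size)
          wave = hdr["CRVAL1"] + (pixels + 1 - hdr.get("CRPIX1", 1)) * hdr["CDELT1"]
          plt.figure()
          plt.plot(wave, flux, lw=0.7)
          plt.xlabel("Wavelength")
          plt.ylabel("Normalized intensity")
          plt.title(path)
          plt.savefig(path.replace(".fits", ".png"), dpi=150)
          plt.close()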

  4. Clinical applications of modern imaging technology: stereo image formation and location of brain cancer

    NASA Astrophysics Data System (ADS)

    Wang, Dezong; Wang, Jinxiang

    1994-05-01

    It is very important to locate the tumor in a patient who has brain cancer. If the doctor has only X-CT or MRI pictures, he does not know the size, shape, and location of the tumor, or the relation between the tumor and other organs. This paper presents the formation of stereo images of the cancer on the basis of color coding and color 3D reconstruction. Stereo images of the tumor, brain, and encephalic truncus are formed. The stereo image of the tumor can be rotated about the X, Y, and Z coordinates to show its shape from different directions. In order to show the location of the tumor, stereo images of the tumor and encephalic truncus are provided at different angles. Cross-section pictures are also offered to indicate the relation of brain, tumor, and encephalic truncus on cross sections. The calculation of areas, volume, and the distance between the tumor and the side of the brain is also described.

  5. SIDS-to-ADF File Mapping Manual

    NASA Technical Reports Server (NTRS)

    McCarthy, Douglas; Smith, Matthew; Poirier, Diane; Smith, Charles A. (Technical Monitor)

    2002-01-01

    The "CFD General Notation System" (CGNS) consists of a collection of conventions, and conforming software, for the storage and retrieval of Computational Fluid Dynamics (CFD) data. It facilitates the exchange of data between sites and applications, and helps stabilize the archiving of aerodynamic data. This effort was initiated in order to streamline the procedures in exchanging data and software between NASA and its customers, but the goal is to develop CGNS into a National Standard for the exchange of aerodynamic data. The CGNS development team is comprised of members from Boeing Commercial Airplane Group, NASA-Ames, NASA-Langley, NASA-Lewis, McDonnell-Douglas Corporation (now Boeing-St. Louis), Air Force-Wright Lab., and ICEM-CFD Engineering. The elements of CGNS address all activities associated with the storage of data on external media and its movement to and from application programs. These elements include: 1) The Advanced Data Format (ADF) Database manager, consisting of both a file format specification and its I/O software, which handles the actual reading and writing of data from and to external storage media; 2) The Standard Interface Data Structures (SIDS), which specify the intellectual content of CFD data and the conventions governing naming and terminology; 3) The SIDS-to-ADF File Mapping conventions, which specify the exact location where the CFD data defined by the SIDS is to be stored within the ADF file(s); and 4) The CGNS Mid-level Library, which provides CFD-knowledgeable routines suitable for direct installation into application codes. The SIDS-toADF File Mapping Manual specifies the exact manner in which, under CGNS conventions, CFD data structures (the SIDS) are to be stored in (i.e., mapped onto) the file structure provided by the database manager (ADF). The result is a conforming CGNS database. Adherence to the mapping conventions guarantees uniform meaning and location of CFD data within ADF files, and thereby allows the construction of

  6. Influence of cervical preflaring on apical file size determination.

    PubMed

    Pecora, J D; Capelli, A; Guerisoli, D M Z; Spanó, J C E; Estrela, C

    2005-07-01

    To investigate the influence of cervical preflaring with different instruments (Gates-Glidden drills, Quantec Flare series instruments, and LA Axxess burs) on the first file that binds at working length (WL) in maxillary central incisors. Forty human maxillary central incisors with complete root formation were used. After standard access cavities, a size 06 K-file was inserted into each canal until the apical foramen was reached. The WL was set 1 mm short of the apical foramen. Group 1 received the initial apical instrument without previous preflaring of the cervical and middle thirds of the root canal. Group 2 had the cervical and middle portion of the root canals enlarged with Gates-Glidden drills sizes 90, 110, and 130. Group 3 had the cervical and middle thirds of the root canals enlarged with nickel-titanium Quantec Flare series instruments. Titanium-nitride-treated, stainless steel LA Axxess burs were used for preflaring the cervical and middle portions of root canals in group 4. Each canal was sized using manual K-files, starting with size 08 files with passive movements until the WL was reached. File sizes were increased until a binding sensation was felt at the WL, and the instrument size was recorded for each tooth. The apical region was then observed under a stereoscopic magnifier, images were recorded digitally, and the differences between root canal and maximum file diameters were evaluated for each sample. Significant differences were found between experimental groups regarding anatomical diameter at the WL and the first file to bind in the canal (P < 0.01, 95% confidence interval). The major discrepancy was found when no preflaring was performed (0.151 mm average). The LA Axxess burs produced the smallest differences between anatomical diameter and first file to bind (0.016 mm average). Gates-Glidden drills and Flare instruments were ranked in an intermediary position, with no statistically significant differences between them (0.093 mm average). The

  7. DICOM to print, 35-mm slides, web, and video projector: tutorial using Adobe Photoshop.

    PubMed

    Gurney, Jud W

    2002-10-01

    Preparing images for publication has traditionally dealt with film and the photographic process. With picture archiving and communication systems, many departments will no longer produce film, which will change how images are produced for publication. DICOM, the file format for radiographic images, has to be converted and then prepared for traditional print publication, 35-mm slides, the newest techniques of video projection, and the World Wide Web. Tagged Image File Format (TIFF) is the common format for traditional print publication, whereas Joint Photographic Experts Group (JPEG) is the current file format for the World Wide Web. Each medium has specific requirements that can be met with a common image-editing program such as Adobe Photoshop (Adobe Systems, San Jose, CA). High-resolution images are required for print, a process that requires interpolation, whereas the Internet requires images with a small file size for rapid transmission. The resolution of each output differs, and the image resolution must be optimized to match the output of the publishing medium.
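    As a rough modern analogue of the conversion step, here is a hedged sketch using pydicom and Pillow rather than Photoshop; the file names are hypothetical, and a real workflow would apply the stored window center/width instead of simple min-max scaling:

      # Hedged sketch: DICOM pixel data to 8-bit TIFF (print) and JPEG (Web).
      import numpy as np
      import pydicom
      from PIL import Image

      ds = pydicom.dcmread("chest.dcm")            # hypothetical input file
      arr = ds.pixel_array.astype(np.float64)
      lo, hi = arr.min(), arr.max()
      arr8 = ((arr - lo) / (hi - lo if hi > lo else 1.0) * 255).astype(np.uint8)
      im = Image.fromarray(arr8)
      im.save("figure.tif")                        # lossless, suitable for print
      im.save("figure.jpg", quality=85)            # compact, suitable for the Web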

  8. Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information

    NASA Technical Reports Server (NTRS)

    Pence, William D.; White, R. L.; Seaman, R.

    2010-01-01

    We describe a compression method for floating-point astronomical images that gives compression ratios of 6 - 10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
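    The quantize-then-dither idea is simple enough to sketch. The lines below illustrate the principle only, not fpack's implementation: pixels are divided by a step size tied to the noise, a uniform dither is added before rounding, and subtracting the same dither on restore keeps measurements unbiased on average:

      # Illustrative quantization with subtractive dithering.
      import numpy as np

      def quantize(image, sigma, q=4.0, seed=0):
          step = sigma / q                              # quantization step size
          rng = np.random.default_rng(seed)
          dither = rng.uniform(-0.5, 0.5, image.shape)  # per-pixel dither
          ints = np.round(image / step + dither).astype(np.int32)
          return ints, step, dither

      def dequantize(ints, step, dither):
          return (ints - dither) * step                 # subtractive restore

      img = np.random.normal(1000.0, 5.0, (64, 64))
      ints, step, dither = quantize(img, sigma=5.0)
      restored = dequantize(ints, step, dither)
      print(np.abs(restored - img).max() <= step / 2)   # error bounded by half a step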

  9. TOLNet Data Format for Lidar Ozone Profile & Surface Observations

    NASA Astrophysics Data System (ADS)

    Chen, G.; Aknan, A. A.; Newchurch, M.; Leblanc, T.

    2015-12-01

    The Tropospheric Ozone Lidar Network (TOLNet) is an interagency initiative started by NASA, NOAA, and EPA in 2011. TOLNet currently has six lidars and one ozonesonde station. TOLNet provides high-resolution spatio-temporal measurements of tropospheric (surface to tropopause) ozone and aerosol vertical profiles to address fundamental air-quality science questions. The TOLNet data format was developed by TOLNet members as a community standard for reporting ozone profile observations. The development of this new format was based primarily on the existing NDACC (Network for the Detection of Atmospheric Composition Change) format and the ICARTT (International Consortium for Atmospheric Research on Transport and Transformation) format. The main goal is to present the lidar observations in self-describing and easy-to-use data files. The TOLNet format is an ASCII format containing a general file header, individual profile headers, and the profile data; the last two components repeat for all profiles recorded in the file. The TOLNet format is both human and machine readable, as it adopts standard metadata entries and fixed variable names. In addition, software has been developed to check for format compliance. A detailed description of the TOLNet format protocol and scanning software will be presented.

  10. BOREAS RSS-17 1994 ERS-1 Level-3 Freeze/Thaw Backscatter Change Images

    NASA Technical Reports Server (NTRS)

    Rignot, Eric; Nickeson, Jaime (Editor); Hall, Forrest G. (Editor); Way, JoBea; McDonald, Kyle C.; Smith, David E. (Technical Monitor)

    2000-01-01

    The Boreal Ecosystem-Atmosphere Study (BOREAS) Remote Sensing Science (RSS)-17 team acquired and analyzed imaging radar data from the European Space Agency's (ESA's) European Remote Sensing Satellite (ERS)-1 over a complete annual cycle at the BOREAS sites in Canada in 1994 to detect shifts in radar backscatter related to varying environmental conditions. Two independent transitions corresponding to soil thaw and possible canopy thaw were revealed by the data. The results demonstrated that radar provides an ability to observe thaw transitions at the beginning of the growing season, which in turn helps constrain the length of the growing season. The data set presented here includes change maps derived from radar backscatter images that were mosaicked together to cover the southern BOREAS sites. The image values used for calculating the changes are given relative to the reference mosaic image. The data are stored in binary image format files. The imaging radar data are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). The data files are available on a CD-ROM (see document number 20010000884).

  11. To Image...or Not to Image?

    ERIC Educational Resources Information Center

    Bruley, Karina

    1996-01-01

    Provides a checklist of considerations for installing document image processing with an electronic document management system. Other topics include scanning; indexing; the image file life cycle; benefits of imaging; document-driven workflow; and planning for workplace changes like postsorting, creating a scanning room, redeveloping job tasks and…

  12. Distributed PACS using distributed file system with hierarchical meta data servers.

    PubMed

    Hiroyasu, Tomoyuki; Minamitani, Yoshiyuki; Miki, Mitsunori; Yokouchi, Hisatake; Yoshimi, Masato

    2012-01-01

    In this research, we propose a new distributed PACS (Picture Archiving and Communication System) that can integrate the several PACSs that exist in individual medical institutions. A conventional PACS stores DICOM files in a single database. In the proposed system, by contrast, each DICOM file is separated into meta data and image data, which are stored individually. Because a file's entire data need not be accessed, operations such as finding files and changing titles can be performed at high speed. At the same time, since a distributed file system is used, access to image files is also fast and highly fault tolerant. A further significant point of the proposed system is the simplicity of integration: only the meta data servers need to be integrated to construct the combined system. The system also scales file access with the number and size of files. On the other hand, because the meta data server is centralized, it is the weak point of the system. To address this defect, hierarchical meta data servers are introduced; this mechanism increases not only fault tolerance but also the scalability of file access. To evaluate the proposed system, a prototype using Gfarm was implemented, and the file search operation times of Gfarm and NFS were compared.

  13. AstroImageJ: Image Processing and Photometric Extraction for Ultra-precise Astronomical Light Curves

    NASA Astrophysics Data System (ADS)

    Collins, Karen A.; Kielkopf, John F.; Stassun, Keivan G.; Hessman, Frederic V.

    2017-02-01

    ImageJ is a graphical user interface (GUI) driven, public domain, Java-based, software package for general image processing traditionally used mainly in life sciences fields. The image processing capabilities of ImageJ are useful and extendable to other scientific fields. Here we present AstroImageJ (AIJ), which provides an astronomy specific image display environment and tools for astronomy specific image calibration and data reduction. Although AIJ maintains the general purpose image processing capabilities of ImageJ, AIJ is streamlined for time-series differential photometry, light curve detrending and fitting, and light curve plotting, especially for applications requiring ultra-precise light curves (e.g., exoplanet transits). AIJ reads and writes standard Flexible Image Transport System (FITS) files, as well as other common image formats, provides FITS header viewing and editing, and is World Coordinate System aware, including an automated interface to the astrometry.net web portal for plate solving images. AIJ provides research grade image calibration and analysis tools with a GUI driven approach, and easily installed cross-platform compatibility. It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data with one tightly integrated software package.

  14. A software platform for the analysis of dermatology images

    NASA Astrophysics Data System (ADS)

    Vlassi, Maria; Mavraganis, Vlasios; Asvestas, Panteleimon

    2017-11-01

    The purpose of this paper is to present a software platform, developed in the Python programming environment, that can be used for the processing and analysis of dermatology images. The platform provides the capability of reading a file that contains a dermatology image and supports image formats such as Windows bitmaps, JPEG, JPEG 2000, Portable Network Graphics, and TIFF. Furthermore, it provides suitable tools for selecting, either manually or automatically, a region of interest (ROI) on the image. The automated selection of a ROI includes filtering for smoothing the image and thresholding. The proposed software platform has a friendly and clear graphical user interface and could be a useful second-opinion tool for a dermatologist. Furthermore, it could be used to classify images of other anatomical parts, such as the breast or lung, after proper re-training of the classification algorithms.
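    A minimal sketch of the automated ROI step described above (an illustration, not the platform's own code), using scikit-image for the smoothing and thresholding:

      # Smooth, then threshold, to obtain a binary ROI mask.
      from skimage import data, filters

      image = data.camera()                               # stand-in for a dermatology image
      smoothed = filters.gaussian(image, sigma=2)         # smoothing suppresses noise
      mask = smoothed > filters.threshold_otsu(smoothed)  # automatic global threshold
      print(mask.sum(), "pixels selected as ROI")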

  15. Petroleum system modeling of the western Canada sedimentary basin - isopach grid files

    USGS Publications Warehouse

    Higley, Debra K.; Henry, Mitchell E.; Roberts, Laura N.R.

    2005-01-01

    This publication contains zmap-format grid files of isopach intervals that represent strata associated with Devonian to Holocene petroleum systems of the Western Canada Sedimentary Basin (WCSB) of Alberta, British Columbia, and Saskatchewan, Canada. Also included is one grid file that represents elevations relative to sea level of the top of the Lower Cretaceous Mannville Group. Vertical and lateral scales are in meters. The age range represented by the stratigraphic intervals comprising the grid files is 373 million years ago (Ma) to present day. File names, age ranges, formation intervals, and primary petroleum system elements are listed in table 1. Metadata associated with this publication include information on the study area and the zmap-format files. The digital files listed in table 1 were compiled as part of the Petroleum Processes Research Project being conducted by the Central Energy Resources Team of the U.S. Geological Survey, which focuses on modeling petroleum generation, migration, and accumulation through time for petroleum systems of the WCSB. The primary purposes of the WCSB study are to: construct the 1-D/2-D/3-D petroleum system models of the WCSB (actual boundaries of the study area are documented within the metadata; excluded are northern Alberta and eastern Saskatchewan, but fringing areas of the United States are included); publish the results of the research and the grid files generated for use in the 3-D model of the WCSB; and evaluate the use of petroleum system modeling in assessing undiscovered oil and gas resources for geologic provinces across the world.

  16. Segy-change: The swiss army knife for the SEG-Y files

    NASA Astrophysics Data System (ADS)

    Stanghellini, Giuseppe; Carrara, Gabriela

    Data collected during active and passive seismic surveys can be stored in many different, more or less standard, formats. One of the most popular is the SEG-Y format, developed since 1975 to store single-line seismic digital data on tapes, and now evolved to store them on hard disks and other media as well. Unfortunately, files that are claimed to be recorded in the SEG-Y format sometimes cannot be processed using available free or industrial packages. Aiming to solve this impasse, we present segy-change, a pre-processing software program to view, analyze, change, and fix errors present in SEG-Y data files. It is written in C, can also be used as a software library, and is compatible with most operating systems. Segy-change allows the user to display and optionally change the values inside all parts of a SEG-Y file: the file header, the trace headers, and the data blocks. In addition, it allows a quality check on the data by plotting the traces. We provide instructions and examples on how to use the software.
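    To make the file layout concrete: a SEG-Y file opens with a 3200-byte textual (EBCDIC) header followed by a 400-byte binary file header whose fields are big-endian integers at fixed offsets (SEG-Y rev 1 layout). A hedged sketch of peeking at three of those fields, with a hypothetical file name:

      # Read a few SEG-Y binary file header fields (rev 1 byte offsets assumed).
      import struct

      def segy_summary(path):
          with open(path, "rb") as f:
              f.read(3200)                  # skip the EBCDIC card-image header
              binary = f.read(400)          # binary file header
          interval, = struct.unpack(">h", binary[16:18])  # sample interval (us)
          nsamples, = struct.unpack(">h", binary[20:22])  # samples per trace
          fmt_code, = struct.unpack(">h", binary[24:26])  # data sample format code
          return interval, nsamples, fmt_code

      # print(segy_summary("line01.segy"))   # hypothetical file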

  17. Pore Formation and Mobility Investigation video images

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Video images sent to the ground allow scientists to watch the behavior of the bubbles as they control the melting and freezing of the material during the Pore Formation and Mobility Investigation (PFMI) in the Microgravity Science Glovebox aboard the International Space Station. While the investigation studies the way that metals behave at the microscopic scale on Earth -- and how voids form -- the experiment uses a transparent material called succinonitrile that behaves like a metal to study this problem. The bubbles do not float to the top of the material in microgravity, so they can study their interactions.

  18. Comparative study of root-canal shaping with stainless steel and rotary NiTi files performed by preclinical dental students.

    PubMed

    Alrahabi, Mothanna

    2015-01-01

    We evaluated the use of NiTi rotary and stainless steel endodontic instruments for canal shaping by undergraduate students. We also assessed the quality of root canal preparation as well as the occurrence of iatrogenic events during instrumentation. In total, 30 third-year dental students attending Taibah University Dental College prepared 180 simulated canals in resin blocks with NiTi rotary instruments and stainless steel hand files. Superimposed images were prepared to measure the removal of material at different levels from the apical termination using the GSA image analysis software. Preparation time, procedural accidents, and canal shape after preparation were analyzed using χ2 and t-tests. The statistical significance level was set at P < 0.05. There were significant differences in preparation time between NiTi instruments and stainless steel files; the former were associated with shorter preparation time, less ledge formation (1.1% vs. 14.4%), and greater instrument fracture (5.56% vs. 1.1%). These results indicate that NiTi rotary instruments result in better canal geometry and cause less canal transportation. Manual instrumentation using stainless steel files is safer than rotary instrumentation for inexperienced students. Intensive preclinical training is a prerequisite for using NiTi rotary instruments. These results prompted us to reconsider theoretical and practical coursework when teaching endodontics.

  19. SW New Mexico Oil Well Formation Tops

    DOE Data Explorer

    Shari Kelley

    2015-10-21

    Rock formation top picks from oil wells from southwestern New Mexico from scout cards and other sources. There are differing formation tops interpretations for some wells, so for those wells duplicate formation top data are presented in this file.

  20. MXA: a customizable HDF5-based data format for multi-dimensional data sets

    NASA Astrophysics Data System (ADS)

    Jackson, M.; Simmons, J. P.; De Graef, M.

    2010-09-01

    A new digital file format is proposed for the long-term archival storage of experimental data sets generated by serial sectioning instruments. The format is known as the multi-dimensional eXtensible Archive (MXA) format and is based on the public domain Hierarchical Data Format (HDF5). The MXA data model and its description by means of an eXtensible Markup Language (XML) file with an associated Document Type Definition (DTD) are described in detail. The public domain MXA package is available through a dedicated web site (mxa.web.cmu.edu), along with implementation details and example data files.
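    Since MXA layers a data model on plain HDF5, a generic illustration of that foundation may help. The sketch below is ordinary h5py, not the MXA data model itself, and all names in it are hypothetical:

      # Generic HDF5: hierarchical groups, attributes, and an n-D dataset.
      import numpy as np
      import h5py

      with h5py.File("sections_example.h5", "w") as f:
          grp = f.create_group("DataModel/SerialSections")
          grp.attrs["instrument"] = "example sectioning instrument"
          stack = np.zeros((10, 512, 512), dtype=np.uint16)   # 10 sections
          dset = grp.create_dataset("image_stack", data=stack,
                                    compression="gzip", chunks=(1, 512, 512))
          dset.attrs["voxel_size_um"] = (0.5, 0.1, 0.1)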

  1. ChemEngine: harvesting 3D chemical structures of supplementary data from PDF files.

    PubMed

    Karthikeyan, Muthukumarasamy; Vyas, Renu

    2016-01-01

    Digital access to chemical journals has resulted in a vast array of molecular information that is now available in supplementary material files in PDF format. However, extracting this molecular information, generally from a PDF document format, is a daunting task. Here we present an approach to harvest 3D molecular data from the supporting information of scientific research articles that are normally available from publishers' resources. In order to demonstrate the feasibility of extracting truly computable molecules from PDF file formats in a fast and efficient manner, we have developed a Java-based application, namely ChemEngine. This program recognizes textual patterns from the supplementary data and generates standard molecular structure data (bond matrix, atomic coordinates) that can be subjected to a multitude of computational processes automatically. The methodology has been demonstrated via several case studies on different formats of coordinate data stored in supplementary information files, wherein ChemEngine selectively harvested the atomic coordinates and interpreted them as molecules with high accuracy. The reusability of the extracted molecular coordinate data was demonstrated by computing single point energies that were in close agreement with the original computed data provided with the articles. It is envisaged that the methodology will enable large-scale conversion of molecular information from supplementary files available in the PDF format into a collection of ready-to-compute molecular data to create an automated workflow for advanced computational processes. Software along with source codes and instructions is available at https://sourceforge.net/projects/chemengine/files/?source=navbar.
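    The textual-pattern idea is easy to illustrate. ChemEngine itself is a Java application; the hedged Python sketch below merely shows the kind of "element x y z" line matching involved, on made-up text:

      # Illustrative regex harvest of Cartesian coordinate lines from text.
      import re

      text = """Optimized geometry (example):
      C   0.0000   1.3970   0.0000
      C   1.2098   0.6985   0.0000
      H   2.1510   1.2420   0.0000
      """
      pattern = re.compile(
          r"^\s*([A-Z][a-z]?)\s+(-?\d+\.\d+)\s+(-?\d+\.\d+)\s+(-?\d+\.\d+)\s*$",
          re.MULTILINE)
      atoms = [(m.group(1), float(m.group(2)), float(m.group(3)), float(m.group(4)))
               for m in pattern.finditer(text)]
      print(len(atoms), "atoms parsed; first:", atoms[0])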

  2. BOREAS RSS-16 Level-3b DC-8 AIRSAR SY Images

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Saatchi, Sasan; Newcomer, Jeffrey A.; Strub, Richard; Irani, Fred

    2000-01-01

    The BOREAS RSS-16 team used satellite and aircraft SAR data in conjunction with various ground measurements to determine the moisture regime of the boreal forest. RSS-16 assisted with the acquisition and ordering of NASA JPL AIRSAR data collected from the NASA DC-8 aircraft. The NASA JPL AIRSAR is a side-looking imaging radar system that utilizes the SAR principle to obtain high-resolution images that represent the radar backscatter of the imaged surface at different frequencies and polarizations. The information contained in each pixel of the AIRSAR data represents the radar backscatter for all possible combinations of horizontal and vertical transmit and receive polarizations (i.e., HH, HV, VH, and VV). Geographically, the data cover portions of the BOREAS SSA and NSA. Temporally, the data were acquired from 12-Aug-1993 to 31-Jul-1995. The level-3b AIRSAR SY data are the JPL synoptic product and contain 3 of the 12 total frequency and polarization combinations that are possible. The data are stored in binary image format files. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).

  3. Effect of satellite formations and imaging modes on global albedo estimation

    NASA Astrophysics Data System (ADS)

    Nag, Sreeja; Gatebe, Charles K.; Miller, David W.; de Weck, Olivier L.

    2016-05-01

    We confirm the applicability of using small satellite formation flight for multi-angular earth observation to retrieve global, narrow band, narrow field-of-view albedo. The value of formation flight is assessed using a coupled systems engineering and science evaluation model, driven by Model Based Systems Engineering and Observing System Simulation Experiments. Albedo errors are calculated against bi-directional reflectance data obtained from NASA airborne campaigns made by the Cloud Absorption Radiometer for the seven major surface types, binned using MODIS' land cover map: water, forest, cropland, grassland, snow, desert, and cities. A full tradespace of architectures with three to eight satellites, maintainable orbits, and imaging modes (collective payload pointing strategies) is assessed. For an arbitrary 4-satellite formation, changing the reference, nadir-pointing satellite dynamically reduces the average albedo error to 0.003, from 0.006 found in the static reference case. Tracking pre-selected waypoints with all the satellites reduces the average error further to 0.001, allows better polar imaging, and permits continued operations even with a broken formation. An albedo error of 0.001 translates to 1.36 W/m2, or 0.4%, in Earth's outgoing radiation error. Estimation errors are found to be independent of the satellites' altitude and inclination if the nadir-looking satellite is changed dynamically. The formation satellites are restricted to differ only in right ascension of planes and mean anomalies within slotted bounds. Three satellites in some specific formations show average albedo errors of less than 2% with respect to airborne, ground data, and seven satellites in any slotted formation outperform the monolithic error of 3.6%. In fact, the maximum possible albedo error, purely based on angular sampling, of 12% for monoliths is outperformed by a five-satellite formation in any slotted arrangement, and an eight-satellite formation can bring that error down four fold to 3%. More than

  4. Dagik: A Quick Look System of the Geospace Data in KML format

    NASA Astrophysics Data System (ADS)

    Yoshida, D.; Saito, A.

    2007-12-01

    Dagik (Daily Geospace data in KML) is a quick look plot sharing system using Google Earth as a data browser. It provides daily data lists that contain network links to the KML/KMZ files of various geospace data. KML is a markup language to display data on Google Earth, and KMZ is a compressed file of KML. Users can browse the KML/KMZ files with the following procedures: 1) download "dagik.kml" from the Dagik homepage (http://www-step.kugi.kyoto-u.ac.jp/dagik/) and open it with Google Earth, 2) select a date, 3) select the data type to browse. Dagik is a collection of network links to KML/KMZ files. The daily Dagik files are available since 1957, though they contain only the geomagnetic index data in the early periods. There are three activities of Dagik. The first one is the generation of the daily data lists, the second is to provide several useful tools, such as observatory lists, and the third is to assist researchers in making KML/KMZ data plots. To make plot browsing easy, there are three rules for the Dagik plot format: 1) one file contains one UT day of data, 2) use a common plot panel size, 3) share the data list. There are three steps to join Dagik as a plot provider: 1) make KML/KMZ files of the data, 2) put the KML/KMZ files on the Web, 3) notify the Dagik group of the URL address and description of the files. The KML/KMZ files will be included in the Dagik data list. As of September 2007, quick looks of several geospace data, such as GPS total electron content data, ionosonde data, magnetometer data, FUV imaging data by a satellite, ground-based airglow data, and satellite footprint data, are available. The system of Dagik is introduced in the presentation.
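
    Since a daily Dagik list is essentially a KML document of network links, a plot provider's contribution can be generated with a few lines of code. The Python sketch below is a minimal illustration with hypothetical URLs and file names; it is not the Dagik toolchain itself.

    ```python
    # Minimal sketch: write a Dagik-style daily list as a KML file whose
    # NetworkLink entries point at externally hosted KMZ plots.
    # The URLs and file names below are hypothetical placeholders.

    DAILY_PLOTS = [
        ("GPS total electron content", "https://example.org/dagik/2007/tec_20070915.kmz"),
        ("Magnetometer network", "https://example.org/dagik/2007/mag_20070915.kmz"),
    ]

    KML_HEADER = ('<?xml version="1.0" encoding="UTF-8"?>\n'
                  '<kml xmlns="http://www.opengis.net/kml/2.2">\n<Document>\n')
    KML_FOOTER = '</Document>\n</kml>\n'

    with open("dagik_20070915.kml", "w") as f:
        f.write(KML_HEADER)
        for name, url in DAILY_PLOTS:
            f.write("  <NetworkLink>\n")
            f.write(f"    <name>{name}</name>\n")
            f.write(f"    <Link><href>{url}</href></Link>\n")
            f.write("  </NetworkLink>\n")
        f.write(KML_FOOTER)
    ```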

  5. BOREAS TGB-5 Fire History of Manitoba 1980 to 1991 in Raster Format

    NASA Technical Reports Server (NTRS)

    Stocks, Brian J.; Zepp, Richard; Knapp, David; Hall, Forrest G. (Editor); Conrad, Sara K. (Editor)

    2000-01-01

    The BOReal Ecosystem-Atmosphere Study Trace Gas Biogeochemistry (BOREAS TGB-5) team collected several data sets related to the effects of fire on the exchange of trace gases between the surface and the atmosphere. This raster format data set covers the province of Manitoba between 1980 and 1991. The data were gridded into the Albers Equal-Area Conic (AEAC) projection from the original vector data. The original vector data were produced by Forestry Canada from hand-drawn boundaries of fires on photocopies of 1:250,000-scale maps. The locational accuracy of the data is considered fair to poor. When the locations of some fire boundaries were compared to Landsat TM images, they were found to be off by as much as a few kilometers. This problem should be kept in mind when using these data. The data are stored in binary, image format files.

  6. South African Learners' Conceptual Understanding about Image Formation by Lenses

    ERIC Educational Resources Information Center

    John, Merlin; Molepo, Jacob Maisha; Chirwa, Max

    2017-01-01

    The purpose of this research was to explore South African Grade 11 learners' conceptual understanding of "image formation by lenses". The participants for this study were 70 Grade 11 learners from a selected senior secondary school in Mthatha, Eastern Cape Province, South Africa. The qualitative approach employed in the study made use of…

  7. Some utilities to help produce Rich Text Files from Stata.

    PubMed

    Gillman, Matthew S

    Producing RTF files from Stata can be difficult and somewhat cryptic. Utilities are introduced to simplify this process; one builds up a table row-by-row, another inserts a PNG image file into an RTF document, and the others start and finish the RTF document.
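
    To make the row-by-row idea concrete, the following minimal Python sketch emits the raw RTF control words involved (\trowd begins a row definition, \cellx sets each column's right edge in twips, \cell and \row close cells and rows). It illustrates the mechanism only and is not the Stata utilities themselves.

    ```python
    # Minimal sketch of building an RTF document one table row at a time.

    def rtf_row(cells, col_width=2500):
        parts = [r"\trowd"]
        for i in range(len(cells)):
            parts.append(r"\cellx%d" % ((i + 1) * col_width))  # column edges in twips
        for text in cells:
            parts.append("%s\\cell" % text)                    # cell contents
        parts.append(r"\row")                                  # end of row
        return " ".join(parts)

    rows = [["Variable", "Mean"], ["age", "42.1"]]
    body = "\n".join(rtf_row(r) for r in rows)
    with open("table.rtf", "w") as f:
        f.write(r"{\rtf1\ansi " + "\n" + body + "\n}")
    ```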

  8. Snake River Plain Geothermal Play Fairway Analysis - Phase 1 KMZ files

    DOE Data Explorer

    John Shervais

    2015-10-10

    This dataset contains raw data files in KMZ format (Google Earth georeferenced format). These files include volcanic vent locations and ages, the distribution of fine-grained lacustrine sediments (which act as both a seal and an insulating layer for hydrothermal fluids), and post-Miocene faults compiled from the Idaho Geological Survey, the USGS Quaternary Fault database, and unpublished mapping. It also contains the Composite Common Risk Segment Map created during Phase 1 studies, as well as a file with locations of select deep wells used to interrogate the subsurface.

  9. TiConverter: A training image converting tool for multiple-point geostatistics

    NASA Astrophysics Data System (ADS)

    Fadlelmula F., Mohamed M.; Killough, John; Fraim, Michael

    2016-11-01

    TiConverter is a tool developed to ease the application of multiple-point geostatistics, whether by the open-source Stanford Geostatistical Modeling Software (SGeMS) or other available commercial software. TiConverter has a user-friendly interface and allows the conversion of 2D training images into numerical representations in four different file formats without the need for additional code writing. These are the ASCII (.txt), the geostatistical software library (GSLIB) (.txt), the Isatis (.dat), and the VTK formats. It performs the conversion based on the RGB color system. In addition, TiConverter offers several useful tools, including image resizing, smoothing, and segmenting tools. The purpose of this study is to introduce TiConverter and to demonstrate its application and advantages with several examples from the literature.
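
    As an illustration of the RGB-based conversion such a tool performs, this hedged Python sketch maps an assumed two-color training image to integer facies codes and writes a GSLIB-style ASCII grid. The palette and file names are placeholders; the real TiConverter handles this through its GUI.

    ```python
    # Sketch: convert a 2D RGB training image to GSLIB ASCII format
    # (title line, number of variables, variable names, then one value
    # per line with x cycling fastest).
    from PIL import Image
    import numpy as np

    COLOR_TO_FACIES = {(0, 0, 255): 0, (255, 255, 0): 1}  # assumed palette

    img = np.array(Image.open("training_image.png").convert("RGB"))
    ny, nx = img.shape[:2]

    with open("ti.gslib", "w") as f:
        f.write(f"training image ({nx} x {ny})\n1\nfacies\n")
        for row in img[::-1]:          # GSLIB convention: y increases upward
            for pixel in row:
                f.write("%d\n" % COLOR_TO_FACIES.get(tuple(pixel), 0))
    ```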

  10. Image compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels, in which each pixel corresponding to an edge pixel has a value equal to the value of an image-array pixel selected in response to that edge pixel, and each pixel not corresponding to an edge pixel has a value that is a weighted average of the values of the surrounding filled-edge-array pixels that do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.
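
    The decomposition described above can be sketched in a few lines. The toy Python example below pins edge-pixel values and fills the remainder by relaxing Laplace's equation with plain Jacobi iterations (standing in for the patent's multi-grid solver), then forms the difference array. It is a simplified illustration, not the patented implementation.

    ```python
    # Toy sketch of the edge/difference decomposition: keep edge-pixel
    # values, fill everything else by Laplace relaxation, subtract.
    import numpy as np

    def filled_edge_array(image, edge_mask, n_iter=500):
        filled = np.where(edge_mask, image, image.mean()).astype(float)
        for _ in range(n_iter):
            # Jacobi step: average of the four neighbors (wrapping at borders,
            # acceptable for this toy example).
            smoothed = 0.25 * (np.roll(filled, 1, 0) + np.roll(filled, -1, 0) +
                               np.roll(filled, 1, 1) + np.roll(filled, -1, 1))
            filled = np.where(edge_mask, image, smoothed)  # pin edge pixels
        return filled

    image = np.random.rand(64, 64)
    edges = np.zeros_like(image, dtype=bool)
    edges[32, :] = True                      # toy edge set
    difference = image - filled_edge_array(image, edges)
    # The edge data and 'difference' would now be compressed separately;
    # reconstruction is the filled array plus the decompressed difference.
    ```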

  11. Multispectral Snapshot Imagers Onboard Small Satellite Formations for Multi-Angular Remote Sensing

    NASA Technical Reports Server (NTRS)

    Nag, Sreeja; Hewagama, Tilak; Georgiev, Georgi; Pasquale, Bert; Aslam, Shahid; Gatebe, Charles K.

    2017-01-01

    Multispectral snapshot imagers are capable of producing 2D spatial images with a single exposure at selected, numerous wavelengths using the same camera, and therefore operate differently from push broom or whiskbroom imagers. They are payloads of choice in multi-angular, multi-spectral imaging missions that use small satellites flying in controlled formation to retrieve Earth science measurements dependent on the target's Bidirectional Reflectance Distribution Function (BRDF). Narrow fields of view are needed to capture images with moderate spatial resolution. This paper quantifies the dependencies of the imager's optical system, spectral elements and camera on the requirements of the formation mission and their impact on performance metrics such as spectral range, swath and signal-to-noise ratio (SNR). All variables and metrics have been generated from a comprehensive payload design tool. The baseline optical parameters selected (diameter 7 cm, focal length 10.5 cm, pixel size 20 micron, field of view 1.15 deg) are achievable with available snapshot imaging technologies. The spectral components shortlisted were waveguide spectrometers, acousto-optic tunable filters (AOTF), electronically actuated Fabry-Perot interferometers, and integral field spectrographs. Qualitative evaluation favored AOTFs because of their low weight, small size, and flight heritage. Quantitative analysis showed that waveguide spectrometers perform better in terms of achievable swath (10-90 km) and SNR (greater than 20) for 86 wavebands, but the data volume generated will need very high bandwidth communication to downlink. AOTFs meet the external data volume caps as well as the minimum spectral (wavebands) and radiometric (SNR) requirements, and are therefore found to be currently feasible in spite of lower swath and SNR.

  12. An Approach Using Parallel Architecture to Storage DICOM Images in Distributed File System

    NASA Astrophysics Data System (ADS)

    Soares, Tiago S.; Prado, Thiago C.; Dantas, M. A. R.; de Macedo, Douglas D. J.; Bauer, Michael A.

    2012-02-01

    Telemedicine is a very important area in the medical field that is expanding daily, motivated by many researchers interested in improving medical applications. In Brazil, a server called the CyclopsDCMServer was developed in the State of Santa Catarina starting in 2005, with the purpose of embracing HDF for the manipulation of medical images (DICOM) using a distributed file system. Since then, several research efforts have been initiated in order to seek better performance. Our approach for this server adds a parallel implementation of I/O operations, since HDF version 5 supports parallel I/O based upon the MPI paradigm, a feature essential for our work. Early experiments using four parallel nodes provide good performance when compared to the serial HDF implementation in the CyclopsDCMServer.

  13. QX MAN: Q and X file manipulation

    NASA Technical Reports Server (NTRS)

    Krein, Mark A.

    1992-01-01

    QX MAN is a grid and solution file manipulation program written primarily for the PARC code and the GRIDGEN family of grid generation codes. QX MAN combines many of the features frequently encountered in grid generation, grid refinement, the setting-up of initial conditions, and post processing. QX MAN allows the user to manipulate single block and multi-block grids (and their accompanying solution files) by splitting, concatenating, rotating, translating, re-scaling, and stripping or adding points. In addition, QX MAN can be used to generate an initial solution file for the PARC code. The code was written to provide several formats for input and output in order for it to be useful in a broad spectrum of applications.

  14. Online molecular image repository and analysis system: A multicenter collaborative open-source infrastructure for molecular imaging research and application.

    PubMed

    Rahman, Mahabubur; Watabe, Hiroshi

    2018-05-01

    Molecular imaging serves as an important tool for researchers and clinicians to visualize and investigate complex biochemical phenomena using specialized instruments; these instruments are either used individually or in combination with targeted imaging agents to obtain images related to specific diseases with high sensitivity, specificity, and signal-to-noise ratios. However, molecular imaging, which is a multidisciplinary research field, faces several challenges, including the integration of imaging informatics with bioinformatics and medical informatics, the requirement of reliable and robust image analysis algorithms, effective quality control of imaging facilities, and those related to individualized disease mapping, data sharing, software architecture, and knowledge management. As a cost-effective and open-source approach to address these challenges related to molecular imaging, we develop a flexible, transparent, and secure infrastructure, named MIRA, which stands for Molecular Imaging Repository and Analysis, primarily using the Python programming language, and a MySQL relational database system deployed on a Linux server. MIRA is designed with a centralized image archiving infrastructure and information database so that a multicenter collaborative informatics platform can be built. The capability of dealing with metadata, image file format normalization, and storing and viewing different types of documents and multimedia files makes MIRA considerably flexible. With features like logging, auditing, commenting, sharing, and searching, MIRA is useful as an Electronic Laboratory Notebook for effective knowledge management. In addition, the centralized approach for MIRA facilitates on-the-fly access to all its features remotely through any web browser. Furthermore, the open-source approach provides the opportunity for sustainable continued development. MIRA offers an infrastructure that can be used as a cross-boundary collaborative MI research platform for the rapid

  15. BOREAS RSS-7 Regional LAI and FPAR Images From 10-Day AVHRR-LAC Composites

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Chen, Jing; Cihlar, Josef

    2000-01-01

    The BOReal Ecosystem-Atmosphere Study Remote Sensing Science (BOREAS RSS-7) team collected various data sets to develop and validate an algorithm to allow the retrieval of the spatial distribution of Leaf Area Index (LAI) from remotely sensed images. Advanced Very High Resolution Radiometer (AVHRR) level-4c 10-day composite Normalized Difference Vegetation Index (NDVI) images produced at CCRS were used to produce images of LAI and the Fraction of Photosynthetically Active Radiation (FPAR) absorbed by plant canopies for the three summer IFCs in 1994 across the BOREAS region. The algorithms were developed based on ground measurements and Landsat Thematic Mapper (TM) images. The data are stored in binary image format files.

  16. Evaluation of a new filing system's ability to maintain canal morphology.

    PubMed

    Thompson, Matthew; Sidow, Stephanie J; Lindsey, Kimberly; Chuang, Augustine; McPherson, James C

    2014-06-01

    The manufacturer of the Hyflex CM endodontic files claims the files remain centered within the canal, and if unwound during treatment, they will regain their original shape after sterilization. The purpose of this study was to evaluate and compare the canal centering ability of the Hyflex CM and the ProFile ISO filing systems after repeated uses in simulated canals, followed by autoclaving. Sixty acrylic blocks with a canal curvature of 45° were stained with methylene blue, photographed, and divided into 2 groups, H (Hyflex CM) and P (ProFile ISO). The groups were further subdivided into 3 subgroups: H1, H2, H3; P1, P2, P3 (n = 10). Groups H1 and P1 were instrumented to 40 (.04) with the respective file system. Used files were autoclaved for 26 minutes at 126°C. After sterilization, the files were used to instrument groups H2 and P2. The same sterilization and instrumentation procedure was repeated for groups H3 and P3. Post-instrumentation digital images were taken and superimposed over the pre-instrumentation images. Changes in the location of the center of the canal at predetermined reference points were recorded and compared within subgroups and between filing systems. Statistical differences in intergroup and intragroup transportation measures were analyzed by using the Kruskal-Wallis analysis of variance of ranks with the Bonferroni post hoc test. There was a difference between Hyflex CM and ProFile ISO groups, although it was not statistically significant. Intragroup differences for both Hyflex CM and ProFile ISO groups were not significant (P < .05). The Hyflex CM and ProFile ISO files equally maintained the original canal's morphology after 2 sterilization cycles. Published by Elsevier Inc.

  17. SAR image formation with azimuth interpolation after azimuth transform

    DOEpatents

    Doerry, Armin W.; Martin, Grant D.; Holzrichter, Michael W. [Albuquerque, NM]

    2008-07-08

    Two-dimensional SAR data can be processed into a rectangular grid format by subjecting the SAR data to a Fourier transform operation, and thereafter to a corresponding interpolation operation. Because the interpolation operation follows the Fourier transform operation, the interpolation operation can be simplified, and the effect of interpolation errors can be diminished. This provides for the possibility of both reducing the re-grid processing time, and improving the image quality.

  18. Textured digital elevation model formation from low-cost UAV LADAR/digital image data

    NASA Astrophysics Data System (ADS)

    Bybee, Taylor C.; Budge, Scott E.

    2015-05-01

    Textured digital elevation models (TDEMs) have valuable use in precision agriculture, situational awareness, and disaster response. However, scientific-quality models are expensive to obtain using conventional aircraft-based methods. The cost of creating an accurate textured terrain model can be reduced by using a low-cost (<$20k) UAV system fitted with ladar and electro-optical (EO) sensors. A texel camera fuses calibrated ladar and EO data upon simultaneous capture, creating a texel image. This eliminates the problem of fusing the data in a post-processing step and enables both 2D- and 3D-image registration techniques to be used. This paper describes formation of TDEMs using simulated data from a small UAV gathering swaths of texel images of the terrain below. Being a low-cost UAV, only a coarse knowledge of position and attitude is known, and thus both 2D- and 3D-image registration techniques must be used to register adjacent swaths of texel imagery to create a TDEM. The process of creating an aggregate texel image (a TDEM) from many smaller texel image swaths is described. The algorithm is seeded with the rough estimate of position and attitude of each capture. Details such as the required amount of texel image overlap, registration models, simulated flight patterns (level and turbulent), and texture image formation are presented. In addition, examples of such TDEMs are shown and analyzed for accuracy.

  19. Auto Draw from Excel Input Files

    NASA Technical Reports Server (NTRS)

    Strauss, Karl F.; Goullioud, Renaud; Cox, Brian; Grimes, James M.

    2011-01-01

    The design process often involves the use of Excel files during project development. To facilitate communication of the information in the Excel files, drawings are often generated. During the design process, the Excel files are updated often to reflect new input. The problem is that the drawings often lag behind the updates, leading to confusion about the current state of the design. The use of this program allows visualization of complex data in a format that is more easily understandable than pages of numbers. Because the graphical output can be updated automatically, the manual labor of diagram drawing can be eliminated. The more frequent update of system diagrams can reduce confusion and errors, and is likely to uncover systemic problems earlier in the design cycle, thus reducing rework and redesign.

  20. Image formation of thick three-dimensional objects in differential-interference-contrast microscopy.

    PubMed

    Trattner, Sigal; Kashdan, Eugene; Feigin, Micha; Sochen, Nir

    2014-05-01

    The differential-interference-contrast (DIC) microscope is of widespread use in life sciences as it enables noninvasive visualization of transparent objects. The goal of this work is to model the image formation process of thick three-dimensional objects in DIC microscopy. The model is based on the principles of electromagnetic wave propagation and scattering. It simulates light propagation through the components of the DIC microscope to the image plane using a combined geometrical and physical optics approach and replicates the DIC image of the illuminated object. The model is evaluated by comparing simulated images of three-dimensional spherical objects with the recorded images of polystyrene microspheres. Our computer simulations confirm that the model captures the major DIC image characteristics of the simulated object, and it is sensitive to the defocusing effects.

  1. Chapter 3: Tabular Data and Graphical Images in Support of the U.S. Geological Survey National Oil and Gas Assessment - Western Gulf Province, Smackover-Austin-Eagle Ford Composite Total Petroleum System (504702)

    USGS Publications Warehouse

    Klett, T.R.; Le, P.A.

    2006-01-01

    This chapter describes data used in support of the process being applied by the U.S. Geological Survey (USGS) National Oil and Gas Assessment (NOGA) project. Digital tabular data used in this report and archival data that permit the user to perform further analyses are available elsewhere on this CD-ROM. Computers and software may import the data without the reader having to transcribe them from the Portable Document Format (.pdf) files of the text. Because of the number and variety of platforms and software available, graphical images are provided as .pdf files and tabular data are provided in raw form as tab-delimited text files (.tab files).

  2. Some utilities to help produce Rich Text Files from Stata

    PubMed Central

    Gillman, Matthew S.

    2018-01-01

    Producing RTF files from Stata can be difficult and somewhat cryptic. Utilities are introduced to simplify this process; one builds up a table row-by-row, another inserts a PNG image file into an RTF document, and the others start and finish the RTF document. PMID:29731697

  3. 76 FR 39757 - Filing Procedures

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-06

    ... an optical character recognition process, such a document may contain recognition errors. CAUTION... network speed e-filing of these documents may be difficult. Pursuant to section II(C) above, the Secretary... optical scan format or a typed ``electronic signature,'' e.g., ``/s/Jane Doe.'' (3) In the case of a...

  4. The design of a fast Fourier filter for enhancing diagnostically relevant structures - endodontic files.

    PubMed

    Bruellmann, Dan; Sander, Steven; Schmidtmann, Irene

    2016-05-01

    The endodontic working length is commonly determined by electronic apex locators and intraoral periapical radiographs. No algorithms for the automatic detection of endodontic files in dental radiographs have been described in the recent literature. Teeth from the mandibles of pig cadavers were accessed, and digital radiographs of these specimens were obtained using an optical bench. The specimens were then recorded in identical positions and settings after the insertion of endodontic files of known sizes (ISO sizes 10-15). The frequency bands generated by the endodontic files were determined using fast Fourier transforms (FFTs) to convert the resulting images into frequency spectra. The detected frequencies were used to design a pre-segmentation filter, which was programmed using Delphi XE RAD Studio software (Embarcadero Technologies, San Francisco, USA) and tested on 20 radiographs. For performance evaluation purposes, the gauged lengths (measured with a caliper) of visible endodontic files were measured in the native and filtered images. The software was able to segment the endodontic files in both the samples and similar dental radiographs. We observed median length differences of 0.52 mm (SD: 2.76 mm) and 0.46 mm (SD: 2.33 mm) in the native and post-segmentation images, respectively. Pearson's correlation test revealed a significant correlation of 0.915 between the true length and the measured length in the native images; the corresponding correlation for the filtered images was 0.97 (p=0.0001). The algorithm can be used to automatically detect and measure the lengths of endodontic files in digital dental radiographs. Copyright © 2016 Elsevier Ltd. All rights reserved.
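
    A minimal sketch of this kind of frequency-band filtering follows. The band limits are placeholders rather than the frequencies measured for the ISO 10-15 files, and the study's filter was built in Delphi rather than Python.

    ```python
    # Hedged sketch of a pre-segmentation band filter: keep only a chosen
    # radial frequency band of an image via forward/inverse FFT.
    import numpy as np

    def bandpass(image, r_lo, r_hi):
        F = np.fft.fftshift(np.fft.fft2(image))
        h, w = image.shape
        yy, xx = np.ogrid[:h, :w]
        r = np.hypot(yy - h / 2, xx - w / 2)   # radial frequency per pixel
        F[(r < r_lo) | (r > r_hi)] = 0         # zero everything outside the band
        return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

    radiograph = np.random.rand(256, 256)      # stand-in for a dental image
    enhanced = bandpass(radiograph, r_lo=20, r_hi=60)  # placeholder band limits
    ```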

  5. Software to Compare NPP HDF5 Data Files

    NASA Technical Reports Server (NTRS)

    Wiegand, Chiu P.; LeMoigne-Stewart, Jacqueline; Ruley, LaMont T.

    2013-01-01

    This software was developed for the NPOESS (National Polar-orbiting Operational Environmental Satellite System) Preparatory Project (NPP) Science Data Segment. The purpose of this software is to compare HDF5 (Hierarchical Data Format) files specific to NPP and report whether the HDF5 files are identical. If the HDF5 files are different, users have the option of printing out the list of differences in the HDF5 data files. The user provides paths to two directories containing a list of HDF5 files to compare. The tool selects matching HDF5 file names from the two directories and runs the comparison on each file. The user can also select from three levels of detail. Level 0 is the basic level, which simply states whether the files match or not. Level 1 is the intermediate level, which lists the differences between the files. Level 2 lists all the details regarding the comparison, such as which objects were compared, and how and where they are different. The HDF5 tool is written specifically for the NPP project. As such, it ignores certain attributes (such as creation_date, creation_time, etc.) in the HDF5 files. This is because even though two HDF5 files could represent exactly the same granule, if they are created at different times, the creation date and time would be different. This tool is smart enough to ignore differences that are not relevant to NPP users.
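
    A stripped-down version of such a comparison can be written with h5py, as sketched below. The ignored attribute names follow the abstract, while the traversal and reporting are simplified assumptions rather than the NPP tool's actual logic.

    ```python
    # Sketch: compare datasets and attributes of two HDF5 files,
    # skipping attributes that legitimately differ between granules.
    import h5py
    import numpy as np

    IGNORED_ATTRS = {"creation_date", "creation_time"}

    def compare(path_a, path_b):
        diffs = []
        with h5py.File(path_a, "r") as fa, h5py.File(path_b, "r") as fb:
            def visit(name, obj):
                if not isinstance(obj, h5py.Dataset):
                    return
                other = fb.get(name)
                if other is None:
                    diffs.append(f"{name}: missing in second file")
                    return
                if not np.array_equal(obj[()], other[()]):
                    diffs.append(f"{name}: data differs")
                for key in set(obj.attrs) - IGNORED_ATTRS:
                    if not np.array_equal(obj.attrs[key], other.attrs.get(key)):
                        diffs.append(f"{name}@{key}: attribute differs")
            fa.visititems(visit)
        return diffs
    ```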

  6. A convertor and user interface to import CAD files into WorldToolKit virtual reality systems

    NASA Technical Reports Server (NTRS)

    Wang, Peter Hor-Ching

    1996-01-01

    Virtual Reality (VR) is a rapidly developing human-to-computer interface technology. VR can be considered as a three-dimensional computer-generated Virtual World (VW) which can sense particular aspects of a user's behavior, allow the user to manipulate the objects interactively, and render the VW in real time accordingly. The user is totally immersed in the virtual world and feels a sense of being transported into that VW. NASA/MSFC Computer Application Virtual Environments (CAVE) has been developing space-related VR applications since 1990. The VR systems in the CAVE lab are based on the VPL RB2 system, which consists of a VPL RB2 control tower, an LX eyephone, an Isotrak Polhemus sensor, two Fastrak Polhemus sensors, a Flock of Birds sensor, and two VPL DG2 DataGloves. A dynamics animator called Body Electric from VPL is used as the control system to interface with all the input/output devices and to provide the network communications as well as the VR programming environment. The RB2 Swivel 3D is used as the modelling program to construct the VWs. A severe limitation of the VPL VR system is the use of RB2 Swivel 3D, which restricts the files to a maximum of 1020 objects and lacks advanced graphics texture mapping. The other limitation is that the VPL VR system is a turn-key system which does not provide the flexibility for the user to add new sensors or a C language interface. Recently, the NASA/MSFC CAVE lab has provided VR systems built on Sense8 WorldToolKit (WTK), which is a C library for creating VR development environments. WTK provides device drivers for most of the sensors and eyephones available on the VR market. WTK accepts several CAD file formats, such as the Sense8 Neutral File Format, AutoCAD DXF and 3D Studio file formats, the Wavefront OBJ file format, the VideoScape GEO file format, and the Intergraph EMS and CATIA stereolithography STL file formats. WTK functions are object-oriented in their naming convention, are grouped into classes, and provide easy C

  7. 18 CFR 270.304 - Tight formation gas.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... determination that natural gas is tight formation gas must file with the jurisdictional agency an application... formation; (d) A complete copy of the well log, including the log heading identifying the designated tight...

  8. WE-DE-206-03: MRI Image Formation - Slice Selection, Phase Encoding, Frequency Encoding, K-Space, SNR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, C.

    Magnetic resonance imaging (MRI) has become an essential part of clinical imaging due to its ability to render high soft tissue contrast. Instead of ionizing radiation, MRI uses strong magnetic fields, radio frequency waves and field gradients to create diagnostically useful images. It can be used to image the anatomy and also functional and physiological activities within the human body. Knowledge of the basic physical principles underlying MRI acquisition is vitally important to successful image production and proper image interpretation. This lecture will give an overview of the spin physics, the imaging principle of MRI, the hardware of the MRI scanner, and various pulse sequences and their applications. It aims to provide a conceptual foundation to understand the image formation process of a clinical MRI scanner. Learning Objectives: Understand the origin of the MR signal and contrast from the spin physics level. Understand the main hardware components of a MRI scanner and their purposes. Understand steps for MR image formation including spatial encoding and image reconstruction. Understand the main kinds of MR pulse sequences and their characteristics.

  9. Planet Formation Imager (PFI): science vision and key requirements

    NASA Astrophysics Data System (ADS)

    Kraus, Stefan; Monnier, John D.; Ireland, Michael J.; Duchêne, Gaspard; Espaillat, Catherine; Hönig, Sebastian; Juhasz, Attila; Mordasini, Chris; Olofsson, Johan; Paladini, Claudia; Stassun, Keivan; Turner, Neal; Vasisht, Gautam; Harries, Tim J.; Bate, Matthew R.; Gonzalez, Jean-François; Matter, Alexis; Zhu, Zhaohuan; Panic, Olja; Regaly, Zsolt; Morbidelli, Alessandro; Meru, Farzana; Wolf, Sebastian; Ilee, John; Berger, Jean-Philippe; Zhao, Ming; Kral, Quentin; Morlok, Andreas; Bonsor, Amy; Ciardi, David; Kane, Stephen R.; Kratter, Kaitlin; Laughlin, Greg; Pepper, Joshua; Raymond, Sean; Labadie, Lucas; Nelson, Richard P.; Weigelt, Gerd; ten Brummelaar, Theo; Pierens, Arnaud; Oudmaijer, Rene; Kley, Wilhelm; Pope, Benjamin; Jensen, Eric L. N.; Bayo, Amelia; Smith, Michael; Boyajian, Tabetha; Quiroga-Nuñez, Luis Henry; Millan-Gabet, Rafael; Chiavassa, Andrea; Gallenne, Alexandre; Reynolds, Mark; de Wit, Willem-Jan; Wittkowski, Markus; Millour, Florentin; Gandhi, Poshak; Ramos Almeida, Cristina; Alonso Herrero, Almudena; Packham, Chris; Kishimoto, Makoto; Tristram, Konrad R. W.; Pott, Jörg-Uwe; Surdej, Jean; Buscher, David; Haniff, Chris; Lacour, Sylvestre; Petrov, Romain; Ridgway, Steve; Tuthill, Peter; van Belle, Gerard; Armitage, Phil; Baruteau, Clement; Benisty, Myriam; Bitsch, Bertram; Paardekooper, Sijme-Jan; Pinte, Christophe; Masset, Frederic; Rosotti, Giovanni

    2016-08-01

    The Planet Formation Imager (PFI) project aims to provide a strong scientific vision for ground-based optical astronomy beyond the upcoming generation of Extremely Large Telescopes. We make the case that a breakthrough in angular resolution imaging capabilities is required in order to unravel the processes involved in planet formation. PFI will be optimised to provide a complete census of the protoplanet population at all stellocentric radii and over the age range from 0.1 to 100 Myr. Within this age period, planetary systems undergo dramatic changes and the final architecture of planetary systems is determined. Our goal is to study the planetary birth on the natural spatial scale where the material is assembled, which is the "Hill Sphere" of the forming planet, and to characterise the protoplanetary cores by measuring their masses and physical properties. Our science working group has investigated the observational characteristics of these young protoplanets as well as the migration mechanisms that might alter the system architecture. We simulated the imprints that the planets leave in the disk and study how PFI could revolutionise areas ranging from exoplanet to extragalactic science. In this contribution we outline the key science drivers of PFI and discuss the requirements that will guide the technology choices, the site selection, and potential science/technology tradeoffs.

  10. Formation Flying and the Stellar Imager Mission Concept

    NASA Technical Reports Server (NTRS)

    Carpenter, Kenneth G.

    2003-01-01

    The Stellar Imager (SI) is envisioned as a space-based, UV-optical interferometer composed of 10 or more one-meter class elements distributed with a maximum baseline of 0.5 km. It is designed to image stars and binaries with sufficient resolution to enable long-term studies of stellar magnetic activity patterns, for comparison with those on the sun. It will also support asteroseismology (acoustic imaging) to probe stellar internal structure, differential rotation, and large-scale circulations. SI will enable us to understand the various effects of the magnetic fields of stars, the dynamos that generate these fields, and the internal structure and dynamics of the stars. The ultimate goal of the mission is to achieve the best-possible forecasting of solar activity as a driver of climate and space weather on time scales ranging from months up to decades, and an understanding of the impact of stellar magnetic activity on life in the Universe. In this paper we briefly describe the scientific goals of the mission, the performance requirements needed to address these goals, and the "enabling technology" development efforts required, with specific attention for this meeting to the formation-flying aspects.

  11. CBEFF Common Biometric Exchange File Format

    DTIC Science & Technology

    2001-01-03

    and systems. Points of contact for CBEFF and liaisons to other organizations can be found in Appendix F. 2. Purpose The purpose of CBEFF is... 0x40 Signature Dynamics, 0x80 Keystroke Dynamics, 0x100 Lip Movement, 0x200 Thermal Face Image, 0x400 Thermal Hand Image, 0x800 Gait, 0x1000 Body... this process is negligible from the Biometric Objects point of view, unless the process creating the livescan sample to compare against the

  12. A high-speed network for cardiac image review.

    PubMed

    Elion, J L; Petrocelli, R R

    1994-01-01

    A high-speed fiber-based network for the transmission and display of digitized full-motion cardiac images has been developed. Based on Asynchronous Transfer Mode (ATM), the network is scaleable, meaning that the same software and hardware is used for a small local area network or for a large multi-institutional network. The system can handle uncompressed digital angiographic images, considered to be at the "high-end" of the bandwidth requirements. Along with the networking, a general-purpose multi-modality review station has been implemented without specialized hardware. This station can store a full injection sequence in "loop RAM" in a 512 x 512 format, then interpolate to 1024 x 1024 while displaying at 30 frames per second. The network and review stations connect to a central file server that uses a virtual file system to make a large high-speed RAID storage disk and associated off-line storage tapes and cartridges all appear as a single large file system to the software. In addition to supporting archival storage and review, the system can also digitize live video using high-speed Direct Memory Access (DMA) from the frame grabber to present uncompressed data to the network. Fully functional prototypes have provided the proof of concept, with full deployment in the institution planned as the next stage.

  13. A high-speed network for cardiac image review.

    PubMed Central

    Elion, J. L.; Petrocelli, R. R.

    1994-01-01

    A high-speed fiber-based network for the transmission and display of digitized full-motion cardiac images has been developed. Based on Asynchronous Transfer Mode (ATM), the network is scaleable, meaning that the same software and hardware is used for a small local area network or for a large multi-institutional network. The system can handle uncompressed digital angiographic images, considered to be at the "high-end" of the bandwidth requirements. Along with the networking, a general-purpose multi-modality review station has been implemented without specialized hardware. This station can store a full injection sequence in "loop RAM" in a 512 x 512 format, then interpolate to 1024 x 1024 while displaying at 30 frames per second. The network and review stations connect to a central file server that uses a virtual file system to make a large high-speed RAID storage disk and associated off-line storage tapes and cartridges all appear as a single large file system to the software. In addition to supporting archival storage and review, the system can also digitize live video using high-speed Direct Memory Access (DMA) from the frame grabber to present uncompressed data to the network. Fully functional prototypes have provided the proof of concept, with full deployment in the institution planned as the next stage. PMID:7949964

  14. Medusae Fossae Formation - High Resolution Image

    NASA Technical Reports Server (NTRS)

    1998-01-01

    An exotic terrain of wind-eroded ridges and residual smooth surfaces is seen in one of the highest resolution images ever taken of Mars from orbit. The Medusae Fossae formation is believed to be formed of the fragmental ejecta of huge explosive volcanic eruptions. When subjected to intense wind-blasting over hundreds of millions of years, this material erodes easily once the uppermost, tougher crust is breached. The crust, or cap rock, can be seen in the upper right part of the picture. The finely spaced ridges are similar to features on Earth called yardangs, which are formed by intense winds plucking individual grains from, and by wind-driven sand blasting particles off, sedimentary deposits.

    The image was taken on October 30, 1997 at 11:05 AM PST, shortly after the Mars Global Surveyor spacecraft's 31st closest approach to Mars. The image covers an area 3.6 X 21.5 km (2.2 X 13.4 miles) at 3.6 m (12 feet) per picture element--craters only 11 m (36 feet, about the size of a swimming pool) across can be seen. The best Viking view of the area (VO 1 387S34) has a resolution of 240 m/pixel, or 67 times lower resolution than the MOC frame.

    Malin Space Science Systems (MSSS) and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.

  15. Feasibility of the optical imaging of thrombus formation in a rotary blood pump by near-infrared light.

    PubMed

    Sakota, Daisuke; Murashige, Tomotaka; Kosaka, Ryo; Nishida, Masahiro; Maruyama, Osamu

    2014-09-01

    Blood coagulation is one of the primary concerns when using mechanical circulatory support devices such as blood pumps. Noninvasive detection and imaging of thrombus formation is useful not only for the development of more hemocompatible devices but also for the management of blood coagulation to avoid risk of infarction. The objective of this study is to investigate the use of near-infrared light for imaging of thrombus formation in a rotary blood pump. The optical properties of a thrombus at wavelengths ranging from 600 to 750 nm were analyzed using a hyperspectral imaging (HSI) system. A specially designed hydrodynamically levitated centrifugal blood pump with a visible bottom area was used. In vitro antithrombogenic testing was conducted five times with the pump using bovine whole blood in which the activated blood clotting time was adjusted to 200 s prior to the experiment. Two halogen lights were used for the light sources. The forward scattering through the pump and backward scattering on the pump bottom area were imaged using the HSI system. HSI showed an increase in forward scattering at wavelengths ranging from 670 to 750 nm in the location of thrombus formation. The time at which the thrombus began to form in the impeller rotating at 2780 rpm could be detected. The spectral difference between the whole blood and the thrombus was utilized to image thrombus formation. The results indicate the feasibility of dynamically detecting and imaging thrombus formation in a rotary blood pump. Copyright © 2014 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  16. Study of carbonate concretions using imaging spectroscopy in the Frontier Formation, Wyoming

    NASA Astrophysics Data System (ADS)

    de Linaje, Virginia Alonso; Khan, Shuhab D.; Bhattacharya, Janok

    2018-04-01

    Imaging spectroscopy is applied to study diagenetic processes of the Wall Creek Member of the Cretaceous Frontier Formation, Wyoming. Visible Near-Infrared and Shortwave-Infrared hyperspectral cameras were used to scan near vertical and well-exposed outcrop walls to analyze lateral and vertical geochemical variations. Reflectance spectra were analyzed and compared with high-resolution laboratory spectral and hyperspectral imaging data. Spectral Angle Mapper (SAM) and Mixture Tuned Matched Filtering (MTMF) classification algorithms were applied to quantify facies and mineral abundances in the Frontier Formation. MTMF is the most effective and reliable technique when studying spectrally similar materials. Classification results show that calcite cement in concretions associated with the channel facies is homogeneously distributed, whereas the bar facies was shown to be interbedded with layers of non-calcite-cemented sandstone.
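
    For reference, the SAM rule itself is compact: each pixel spectrum is assigned to the endmember whose spectral angle arccos(a·b / |a||b|) is smallest. The Python sketch below uses random placeholder spectra purely for illustration; it is not the study's processing chain.

    ```python
    # Sketch of Spectral Angle Mapper (SAM) classification.
    import numpy as np

    def spectral_angle(pixels, ref):
        # pixels: (n_pixels, n_bands); ref: (n_bands,)
        cos = pixels @ ref / (np.linalg.norm(pixels, axis=1) * np.linalg.norm(ref))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    cube = np.random.rand(100, 120)                 # 100 pixels, 120 bands (placeholder)
    endmembers = {"calcite": np.random.rand(120),   # placeholder reference spectra
                  "sandstone": np.random.rand(120)}
    angles = np.stack([spectral_angle(cube, e) for e in endmembers.values()])
    labels = np.array(list(endmembers))[np.argmin(angles, axis=0)]  # per-pixel class
    ```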

  17. BOREAS Level-3s SPOT Imagery: Scaled At-sensor Radiance in LGSOWG Format

    NASA Technical Reports Server (NTRS)

    Strub, Richard; Nickeson, Jaime; Newcomer, Jeffrey A.; Hall, Forrest G. (Editor); Cihlar, Josef

    2000-01-01

    For BOReal Ecosystem-Atmosphere Study (BOREAS), the level-3s Satellite Pour l'Observation de la Terre (SPOT) data, along with the other remotely sensed images, were collected in order to provide spatially extensive information over the primary study areas. This information includes radiant energy, detailed land cover, and biophysical parameter maps such as Fraction of Photosynthetically Active Radiation (FPAR) and Leaf Area Index (LAI). The SPOT images acquired for the BOREAS project were selected primarily to fill temporal gaps in the Landsat Thematic Mapper (TM) image data collection. CCRS collected and supplied the level-3s images to BOREAS Information System (BORIS) for use in the remote sensing research activities. Spatially, the level-3s images cover 60- by 60-km portions of the BOREAS Northern Study Area (NSA) and Southern Study Area (SSA). Temporally, the images cover the period of 17-Apr-1994 to 30-Aug-1996. The images are available in binary image format files. Due to copyright issues, the SPOT images may not be publicly available.

  18. Development of a web-based DICOM-SR viewer for CAD data of multiple sclerosis lesions in an imaging informatics-based efolder

    NASA Astrophysics Data System (ADS)

    Ma, Kevin; Wong, Jonathan; Zhong, Mark; Zhang, Jeff; Liu, Brent

    2014-03-01

    In the past, we have presented an imaging-informatics based eFolder system for managing and analyzing imaging and lesion data of multiple sclerosis (MS) patients, which allows for data storage, data analysis, and data mining in clinical and research settings. The system integrates the patient's clinical data with imaging studies and a computer-aided detection (CAD) algorithm for quantifying MS lesion volume, lesion contour, locations, and sizes in brain MRI studies. For compliance with IHE integration protocols, long-term storage in PACS, and data query and display in a DICOM compliant clinical setting, CAD results need to be converted into DICOM-Structured Report (SR) format. Open-source dcmtk and customized XML templates are used to convert quantitative MS CAD results from MATLAB to DICOM-SR format. A web-based GUI based on our existing web-accessible DICOM object (WADO) image viewer has been designed to display the CAD results from generated SR files. The GUI is able to parse DICOM-SR files and extract SR document data, then display lesion volume, location, and brain matter volume along with the referenced DICOM imaging study. In addition, the GUI supports lesion contour overlay, which matches a detected MS lesion with its corresponding DICOM-SR data when a user selects either the lesion or the data. The methodology of converting CAD data in native MATLAB format to DICOM-SR and displaying the tabulated DICOM-SR along with the patient's clinical information, and relevant study images in the GUI will be demonstrated. The developed SR conversion model and GUI support aim to further demonstrate how to incorporate CAD post-processing components in a PACS and imaging informatics-based environment.
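
    As an illustration of what consuming such DICOM-SR output involves, the hedged Python sketch below walks an SR content tree with pydicom and prints numeric measurements. The file name is hypothetical, and the eFolder itself performs its conversion with dcmtk and XML templates rather than pydicom.

    ```python
    # Sketch: traverse a DICOM-SR content tree and print numeric values
    # (e.g., lesion volumes) with their units.
    import pydicom

    ds = pydicom.dcmread("ms_cad_sr.dcm")  # hypothetical SR file

    def walk(items, depth=0):
        for item in items:
            name = (item.ConceptNameCodeSequence[0].CodeMeaning
                    if "ConceptNameCodeSequence" in item else "(unnamed)")
            if item.ValueType == "NUM":
                mv = item.MeasuredValueSequence[0]
                unit = mv.MeasurementUnitsCodeSequence[0].CodeValue
                print("  " * depth + f"{name}: {mv.NumericValue} {unit}")
            if "ContentSequence" in item:   # recurse into container items
                walk(item.ContentSequence, depth + 1)

    walk(ds.ContentSequence)
    ```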

  19. ListingAnalyst: A program for analyzing the main output file from MODFLOW

    USGS Publications Warehouse

    Winston, Richard B.; Paulinski, Scott

    2014-01-01

    ListingAnalyst is a Windows® program for viewing the main output file from MODFLOW-2005, MODFLOW-NWT, or MODFLOW-LGR. It organizes and displays large files quickly without using excessive memory. The sections and subsections of the file are displayed in a tree-view control, which allows the user to navigate quickly to desired locations in the files. ListingAnalyst gathers error and warning messages scattered throughout the main output file and displays them all together in an error and a warning tab. A grid view displays tables in a readable format and allows the user to copy the table into a spreadsheet. The user can also search the file for terms of interest.
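
    The message-gathering step can be approximated in a few lines of Python, as below. Matching on the literal words ERROR and WARNING is an assumption about the listing file's conventions; ListingAnalyst's own parsing is more structured.

    ```python
    # Sketch: stream through a MODFLOW listing file and collect flagged
    # lines without loading the whole (possibly very large) file into memory.
    errors, warnings = [], []
    with open("model.lst") as f:
        for lineno, line in enumerate(f, start=1):
            upper = line.upper()
            if "ERROR" in upper:
                errors.append((lineno, line.rstrip()))
            elif "WARNING" in upper:
                warnings.append((lineno, line.rstrip()))
    print(f"{len(errors)} errors, {len(warnings)} warnings")
    ```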

  20. Radiology teaching file cases on the World Wide Web.

    PubMed

    Scalzetti, E M

    1997-08-01

    The presentation of a radiographic teaching file on the World Wide Web can be enhanced by attending to principles of web design. Chief among these are appropriate control of page layout, minimization of the time required to download a page from the remote server, and provision for navigation within and among the web pages that constitute the site. Page layout is easily accomplished by the use of tables; column widths can be fixed to maintain an acceptable line length for text. Downloading time is minimized by rigorous editing and by optimal compression of image files; beyond this, techniques like preloading of images and specification of image width and height are also helpful. Navigation controls should be clear, consistent, and readily available.

  1. X-ray imaging physics for nuclear medicine technologists. Part 2: X-ray interactions and image formation.

    PubMed

    Seibert, J Anthony; Boone, John M

    2005-03-01

    The purpose is to review in a 4-part series: (i) the basic principles of x-ray production, (ii) x-ray interactions and data capture/conversion, (iii) acquisition/creation of the CT image, and (iv) operational details of a modern multislice CT scanner integrated with a PET scanner. In part 1, the production and characteristics of x-rays were reviewed. In this article, the principles of x-ray interactions and image formation are discussed, in preparation for a general review of CT (part 3) and a more detailed investigation of PET/CT scanners in part 4.

  2. Fpack and Funpack Utilities for FITS Image Compression and Uncompression

    NASA Technical Reports Server (NTRS)

    Pence, W.

    2008-01-01

    Fpack is a utility program for optimally compressing images in the FITS (Flexible Image Transport System) data format (see http://fits.gsfc.nasa.gov). The associated funpack program restores the compressed image file back to its original state (as long as a lossless compression algorithm is used). These programs may be run from the host operating system command line and are analogous to the gzip and gunzip utility programs except that they are optimized for FITS format images and offer a wider choice of compression algorithms. Fpack stores the compressed image using the FITS tiled image compression convention (see http://fits.gsfc.nasa.gov/fits_registry.html). Under this convention, the image is first divided into a user-configurable grid of rectangular tiles, and then each tile is individually compressed and stored in a variable-length array column in a FITS binary table. By default, fpack usually adopts a row-by-row tiling pattern. The FITS image header keywords remain uncompressed for fast access by FITS reading and writing software. The tiled image compression convention can in principle support any number of different compression algorithms. The fpack and funpack utilities call on routines in the CFITSIO library (http://heasarc.gsfc.nasa.gov/fitsio) to perform the actual compression and uncompression of the FITS images, which currently supports the GZIP, Rice, H-compress, and PLIO IRAF pixel list compression algorithms.
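
    The same tiled-compression convention is reachable from Python via astropy, which gives a sketch-level equivalent of fpack's default behavior; the file names here are placeholders.

    ```python
    # Minimal sketch: write a FITS image using the tiled image compression
    # convention (image stored as Rice-compressed tiles in a binary table),
    # roughly what fpack does with its default settings.
    import numpy as np
    from astropy.io import fits

    image = np.random.poisson(100, size=(2048, 2048)).astype(np.int32)
    hdu = fits.CompImageHDU(data=image, compression_type="RICE_1")
    hdu.writeto("image_compressed.fits", overwrite=True)

    # funpack's counterpart: reading transparently decompresses the tiles.
    restored = fits.getdata("image_compressed.fits")
    assert np.array_equal(image, restored)  # Rice on integer data is lossless
    ```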

  3. Image compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-03-25

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels, in which each pixel corresponding to an edge pixel has a value equal to the value of an image-array pixel selected in response to that edge pixel, and each pixel not corresponding to an edge pixel has a value that is a weighted average of the values of the surrounding filled-edge-array pixels that do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.

  4. Large-format InGaAs focal plane arrays for SWIR imaging

    NASA Astrophysics Data System (ADS)

    Hood, Andrew D.; MacDougal, Michael H.; Manzo, Juan; Follman, David; Geske, Jonathan C.

    2012-06-01

    FLIR Electro Optical Components will present our latest developments in large InGaAs focal plane arrays, which are used for low light level imaging in the short wavelength infrared (SWIR) regime. FLIR will present imaging from their latest small pitch (15 μm) focal plane arrays in VGA and High Definition (HD) formats. FLIR will present characterization of the FPA including dark current measurements as well as the use of correlated double sampling to reduce read noise. FLIR will show imagery as well as FPA-level characterization data.

  5. The Effects of Prior Knowledge and Instruction on Understanding Image Formation.

    ERIC Educational Resources Information Center

    Galili, Igal; And Others

    1993-01-01

    Reports a study (n=27) concerning the knowledge about image formation exhibited by students following instruction in geometrical optics in an activity-based college physics course for prospective elementary teachers. Student diagrams and verbal comments indicate their knowledge can be described as an intermediate state: a hybridization of…

  6. Radiotherapy supporting system based on the image database using IS&C magneto-optical disk

    NASA Astrophysics Data System (ADS)

    Ando, Yutaka; Tsukamoto, Nobuhiro; Kunieda, Etsuo; Kubo, Atsushi

    1994-05-01

    Since radiation oncologists make treatment plans based on prior experience, information about previous cases is helpful in planning radiation treatment. We have developed a supporting system for radiation therapy. The case-based reasoning method was implemented in order to search the treatments and images of past cases. This system evaluates similarities between the current case and all stored cases (the case base). The portal images of similar cases can be retrieved as reference images, as well as treatment records which show examples of the radiation treatment. With this system, radiotherapists can easily make suitable radiation therapy plans. The system is useful to prevent inaccurate planning due to preconceptions and/or lack of knowledge. Images were stored on magneto-optical disks and the demographic data were recorded on the hard disk of the personal computer. Images can be displayed quickly on the radiotherapist's demand. The radiation oncologist can refer to past cases recorded in the case base and decide the radiation treatment of the current case. The file and data format of the magneto-optical disk is the IS&C format. This format provides the interchangeability and reproducibility of medical information, including images and other demographic data.

  7. Secure Display of Space-Exploration Images

    NASA Technical Reports Server (NTRS)

    Cheng, Cecilia; Thornhill, Gillian; McAuley, Michael

    2006-01-01

    Java EDR Display Interface (JEDI) is software for either local display or secure Internet distribution, to authorized clients, of image data acquired from cameras aboard spacecraft engaged in exploration of remote planets. (EDR signifies experimental data record, which, in effect, signifies image data.) Processed at NASA's Multimission Image Processing Laboratory (MIPL), the data can be from either near-real-time processing streams or stored files. JEDI uses the Java Advanced Imaging application program interface, plus input/output packages that are parts of the Video Image Communication and Retrieval software of the MIPL, to display images. JEDI can be run as either a standalone application program or within a Web browser as a servlet with an applet front end. In either operating mode, JEDI communicates using the HTTP(s) protocol(s). In the Web-browser case, the user must provide a password to gain access. For each user and/or image data type, there is a configuration file, called a "personality file," containing parameters that control the layout of the displays and the information to be included in them. Once JEDI has accepted the user's password, it processes the requested EDR (provided that the user is authorized to receive the specific EDR) to create a display according to the user's personality file.

  8. BOREAS Level-4c AVHRR-LAC Ten-Day Composite Images: Surface Parameters

    NASA Technical Reports Server (NTRS)

    Cihlar, Josef; Chen, Jing; Huang, Fengting; Nickeson, Jaime; Newcomer, Jeffrey A.; Hall, Forrest G. (Editor)

    2000-01-01

    The BOReal Ecosystem-Atmosphere Study (BOREAS) Staff Science Satellite Data Acquisition Program focused on providing the research teams with the remotely sensed satellite data products they needed to compare and spatially extend point results. Manitoba Remote Sensing Center (MRSC) and BOREAS Information System (BORIS) personnel acquired, processed, and archived data from the Advanced Very High Resolution Radiometer (AVHRR) instruments on the NOAA-11 and -14 satellites. The AVHRR data were acquired by CCRS and were provided to BORIS for use by BOREAS researchers. These AVHRR level-4c data are gridded, 10-day composites of surface parameters produced from sets of single-day images. Temporally, the 10-day compositing periods begin 11-Apr-1994 and end 10-Sep-1994. Spatially, the data cover the entire BOREAS region. The data are stored in binary image format files. Note: Some of the data files on the BOREAS CD-ROMs have been compressed using the Gzip program.

  9. Method for measuring anterior chamber volume by image analysis

    NASA Astrophysics Data System (ADS)

    Zhai, Gaoshou; Zhang, Junhong; Wang, Ruichang; Wang, Bingsong; Wang, Ningli

    2007-12-01

    Anterior chamber volume (ACV) is very important for an oculist making a rational pathological diagnosis for patients who have optic diseases such as glaucoma, yet it is difficult to measure accurately. In this paper, a method is devised to measure anterior chamber volume based on JPEG-formatted image files that have been converted from medical images produced by the anterior-chamber optical coherence tomographer (AC-OCT) and its image-processing software. The corresponding algorithms for image analysis and ACV calculation are implemented in VC++, and a series of anterior chamber images of typical patients is analyzed; the calculated anterior chamber volumes are verified to be in accord with clinical observation. The results show that the measurement method is effective and feasible and has the potential to improve the accuracy of ACV calculation. Meanwhile, some measures should be taken to simplify the manual preprocessing of the images.
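
    As a rough illustration of the volume step, a slice-stacking estimate in Python (segmented cross-sectional area times slice thickness, summed over slices); the masks, pixel size, and slice spacing are assumptions, not the paper's actual algorithm:

        import numpy as np

        def chamber_volume(masks, pixel_area_mm2, slice_thickness_mm):
            """Estimate volume by slice stacking: segmented cross-sectional
            area times slice thickness, summed over all slices."""
            volume = 0.0
            for mask in masks:  # each mask: 2D boolean array, True inside the chamber
                volume += mask.sum() * pixel_area_mm2 * slice_thickness_mm
            return volume  # mm^3

        # Toy usage: three synthetic 100x100 masks containing filled discs.
        yy, xx = np.mgrid[:100, :100]
        masks = [(xx - 50) ** 2 + (yy - 50) ** 2 < r ** 2 for r in (30, 35, 30)]
        print(chamber_volume(masks, pixel_area_mm2=0.0009, slice_thickness_mm=0.1))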

  10. SEGY to ASCII Conversion and Plotting Program 2.0

    USGS Publications Warehouse

    Goldman, Mark R.

    2005-01-01

    INTRODUCTION: SEGY has long been a standard format for storing seismic data and header information. Almost every seismic processing package can read and write seismic data in SEGY format. In the data processing world, however, ASCII format is the 'universal' standard format. Very few general-purpose plotting or computation programs will accept data in SEGY format. The software presented in this report, referred to as SEGY to ASCII (SAC), converts seismic data written in SEGY format (Barry et al., 1975) to an ASCII data file, and then creates a PostScript file of the seismic data using a general plotting package (GMT, Wessel and Smith, 1995). The resulting PostScript file may be plotted by any standard PostScript plotting program. There are two versions of SAC: one version for plotting a SEGY file that contains a single gather, such as a stacked CDP or migrated section, and a second version for plotting multiple gathers from a SEGY file containing more than one gather, such as a collection of shot gathers. Note that if a SEGY file has multiple gathers, then each gather must have the same number of traces per gather, and each trace must have the same sample interval and number of samples per trace. SAC will read several common standards of SEGY data, including SEGY files with sample values written in either IBM or IEEE floating-point format. In addition, utility programs are provided to convert non-standard Seismic Unix (.sux) SEGY files and PASSCAL (.rsy) SEGY files to standard SEGY files. SAC allows complete user control over all plotting parameters, including label size and font, tick mark intervals, trace scaling, and the inclusion of a title and descriptive text. SAC shell scripts create a PostScript image of the seismic data in vector rather than bitmap format, using GMT's pswiggle command. Although this can produce a very large PostScript file, the image quality is generally superior to that of a bitmap image, and commercial programs such as Adobe Illustrator...
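
    The IBM floating-point format mentioned above differs from IEEE in using a base-16 exponent; a minimal Python converter for a single 4-byte sample, following the standard IBM System/360 encoding (independent of the SAC code itself):

        import struct

        def ibm32_to_float(b):
            """Convert 4 bytes of IBM System/360 single precision to a float:
            1 sign bit, 7-bit base-16 exponent (bias 64), 24-bit fraction."""
            (u,) = struct.unpack(">I", b)
            sign = -1.0 if u >> 31 else 1.0
            exponent = (u >> 24) & 0x7F
            fraction = (u & 0x00FFFFFF) / float(1 << 24)
            return sign * fraction * 16.0 ** (exponent - 64)

        # 0x42640000 encodes 100.0: fraction 0x640000/2^24 = 0.390625, times 16^2.
        print(ibm32_to_float(bytes.fromhex("42640000")))  # 100.0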

  11. Band registration of tuneable frame format hyperspectral UAV imagers in complex scenes

    NASA Astrophysics Data System (ADS)

    Honkavaara, Eija; Rosnell, Tomi; Oliveira, Raquel; Tommaselli, Antonio

    2017-12-01

    A recent revolution in miniaturised sensor technology has provided markets with novel hyperspectral imagers operating on the frame format principle. In the case of unmanned aerial vehicle (UAV) based remote sensing, the frame format technology is highly attractive in comparison to the commonly utilised pushbroom scanning technology, because it offers better stability and the possibility to capture stereoscopic data sets, bringing an opportunity for 3D hyperspectral object reconstruction. Tuneable filters are one of the approaches for capturing multi- or hyperspectral frame images. The individual bands are not aligned when operating a sensor based on tuneable filters from a mobile platform, such as a UAV, because the full spectrum recording is carried out on the time-sequential principle. The objective of this investigation was to study the aspects of band registration of an imager based on tuneable filters and to develop a rigorous and efficient approach for band registration in complex 3D scenes, such as forests. The method first determines the orientations of selected reference bands and reconstructs the 3D scene using structure-from-motion and dense image matching technologies. The bands without orientation are then matched to the oriented bands, accounting for the 3D scene, to provide exterior orientations; afterwards, hyperspectral orthomosaics, or hyperspectral point clouds, are calculated. The uncertainty aspects of the novel approach were studied. An empirical assessment was carried out in a forested environment using hyperspectral images captured with a hyperspectral 2D frame format camera, based on a tuneable Fabry-Pérot interferometer (FPI), on board a multicopter and supported by a high spatial resolution consumer colour camera. A theoretical assessment showed that the method was capable of providing band registration accuracy better than 0.5 pixel. The empirical assessment proved the performance and showed that, with the novel method, most parts of...

  12. Tabular data and graphical images in support of the U.S. Geological Survey National Oil and Gas Assessment -- San Joaquin Basin (5010): Chapter 28 in Petroleum systems and geologic assessment of oil and gas in the San Joaquin Basin Province, California

    USGS Publications Warehouse

    Klett, T.R.; Le, P.A.

    2007-01-01

    This chapter describes data used in support of the assessment process. Digital tabular data used in this report and archival data that permit the user to perform further analyses are available elsewhere on this CD-ROM. The data may be imported by computers and software without the reader having to transcribe them from the portable document format (.pdf) files of the text. Because of the number and variety of platforms and software available, graphical images are provided as .pdf files and tabular data are provided in a raw form as tab-delimited text files (.tab files).

  13. Standard interface files and procedures for reactor physics codes, version III

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carmichael, B.M.

    Standards and procedures for promoting the exchange of reactor physics codes are updated to Version-III status. Standards covering program structure, interface files, file handling subroutines, and card input format are included. The implementation status of the standards in codes and the extension of the standards to new code areas are summarized.

  14. Live-cell imaging for the assessment of the dynamics of autophagosome formation: focus on early steps.

    PubMed

    Karanasios, Eleftherios; Ktistakis, Nicholas T

    2015-03-01

    Autophagy is a cytosolic degradative pathway, which through a series of complicated membrane rearrangements leads to the formation of a unique double membrane vesicle, the autophagosome. The use of fluorescent proteins has allowed visualizing autophagosome formation in live cells and in real time, almost 40 years after electron microscopy studies observed these structures for the first time. In the last decade, live-cell imaging has been extensively used to study the dynamics of autophagosome formation in cultured mammalian cells. Here we discuss how live-cell imaging studies have tried to settle the debate about the origin of the autophagosome membrane and how they have described the way different autophagy proteins coordinate in space and time to drive autophagosome formation. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. The Use of an On-Board MV Imager for Plan Verification of Intensity Modulated Radiation Therapy and Volumetrically Modulated Arc Therapy

    NASA Astrophysics Data System (ADS)

    Walker, Justin A.

    The introduction of complex treatment modalities such as IMRT and VMAT has led to the development of many devices for plan verification. One such innovation in this field is the repurposing of the portal imager not only for tumor localization but for recording dose distributions as well. Several advantages make portal imagers attractive options for this purpose. Very high spatial resolution allows for better verification of small-field plans than may be possible with commercially available devices. Because the portal imager is attached to the gantry, setup is simpler than with any other available method, requiring no additional accessories, and often can be accomplished from outside the treatment room. Dose images captured by the portal imager are in digital format and make permanent records that can be analyzed immediately. Portal imaging suffers from a few limitations, however, that must be overcome. Captured images contain dose information, and a calibration must be maintained for image-to-dose conversion. Dose images can only be taken perpendicular to the treatment beam, allowing only for planar dose comparison. Planar dose files are themselves difficult to obtain for VMAT treatments, and an in-house script had to be developed to create such a file before analysis could be performed. Using the methods described in this study, excellent agreement between the planar dose files generated and the dose images taken was found. The average agreement for the IMRT fields analyzed was greater than 97% for non-normalized images at 3 mm and 3%. Comparable agreement was found for VMAT plans as well, with the average agreement being greater than 98%.

  16. Students' Attitudes to and Usage of Academic Feedback Provided via Audio Files

    ERIC Educational Resources Information Center

    Merry, Stephen; Orsmond, Paul

    2008-01-01

    This study explores students' attitudes to the provision of formative feedback on academic work using audio files together with the ways in which students implement such feedback within their learning. Fifteen students received audio file feedback on written work and were subsequently interviewed regarding their utilisation of that feedback within…

  17. Utilizing HDF4 File Content Maps for the Cloud

    NASA Technical Reports Server (NTRS)

    Lee, Hyokyung Joe

    2016-01-01

    We demonstrate in a prototype study that the HDF4 file content map can be used to organize data efficiently in a cloud object storage system to facilitate cloud computing. This approach can be extended to any binary data format and to any existing big data analytics solution powered by cloud computing, because the HDF4 file content map project started as a long-term preservation effort for NASA data that does not require HDF4 APIs to access the data.

  18. 76 FR 10405 - Federal Copyright Protection of Sound Recordings Fixed Before February 15, 1972

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-24

    ... file in either the Adobe Portable Document File (PDF) format that contains searchable, accessible text (not an image); Microsoft Word; WordPerfect; Rich Text Format (RTF); or ASCII text file format (not a..., comments may be delivered in hard copy. If hand delivered by a private party, an original...

  19. The key image and case log application: new radiology software for teaching file creation and case logging that incorporates elements of a social network.

    PubMed

    Rowe, Steven P; Siddiqui, Adeel; Bonekamp, David

    2014-07-01

    To create novel radiology key image software that is easy for novice users, incorporates elements adapted from social networking Web sites, facilitates resident and fellow education, and can serve as the engine for departmental sharing of interesting cases and follow-up studies. Using open-source programming languages and software, radiology key image software (the key image and case log application, KICLA) was developed. This system uses a lightweight interface with the institutional picture archiving and communication systems and enables the storage of key images, image series, and cine clips. It was designed to operate with minimal disruption to the radiologists' daily workflow. Many features of the user interface have been inspired by social networking Web sites, including image organization into private or public folders, flexible sharing with other users, and integration of departmental teaching files into the system. We also review the performance, usage, and acceptance of this novel system. KICLA was implemented at our institution and achieved widespread popularity among radiologists. A large number of key images have been transmitted to the system since it became available. After this early experience period, the most commonly encountered radiologic modalities are represented. A survey distributed to users revealed that most of the respondents found the system easy to use (89%) and fast at allowing them to record interesting cases (100%). One hundred percent of respondents also stated that they would recommend a system such as KICLA to their colleagues. The system described herein represents a significant upgrade to the Digital Imaging and Communications in Medicine teaching file paradigm; efforts were made to maximize its ease of use, and the characteristics inspired by social networking Web sites give it additional functionality such as individual case logging. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.

  20. Putting "Reference" in the Publications Reference File.

    ERIC Educational Resources Information Center

    Zink, Steven D.

    1980-01-01

    Argues for more widespread utilization of the U.S. Government Printing Office's Publications Reference File, a reference tool in microfiche format used to answer questions about current U.S. government documents and their availability. Ways to accomplish this task are suggested. (Author/JD)

  1. Development of an indexed integrated neuroradiology reports for teaching file creation

    NASA Astrophysics Data System (ADS)

    Tameem, Hussain Z.; Morioka, Craig; Bennett, David; El-Saden, Suzie; Sinha, Usha; Taira, Ricky; Bui, Alex; Kangarloo, Hooshang

    2007-03-01

    The decrease in reimbursement rates for radiology procedures has placed even more pressure on radiology departments to increase their clinical productivity. Clinical faculty have less time for teaching residents, but with the advent and prevalence of an electronic environment that includes PACS, RIS, and HIS, there is an opportunity to create electronic teaching files for fellows, residents, and medical students. Experienced clinicians create these teaching files, selecting the most appropriate radiographic images and the clinical information relevant to each patient. Important cases are selected based on the difficulty of determining the diagnosis or the manifestation of rare diseases. This manual process of teaching file creation is time consuming and may not be practical under the pressure of increased demands on the radiologist. The goal of this research is to streamline the process of teaching file creation: key images are selected manually, while key sections of clinical reports and laboratory results are extracted automatically. The text report is then processed for indexing to two standard nomenclatures, UMLS and RadLex. Interesting teaching files can then be queried based on specific anatomy and findings found within the clinical reports.

  2. The hippocampal formation participates in novel picture encoding: evidence from functional magnetic resonance imaging.

    PubMed Central

    Stern, C E; Corkin, S; González, R G; Guimaraes, A R; Baker, J R; Jennings, P J; Carr, C A; Sugiura, R M; Vedantham, V; Rosen, B R

    1996-01-01

    Considerable evidence exists to support the hypothesis that the hippocampus and related medial temporal lobe structures are crucial for the encoding and storage of information in long-term memory. Few human imaging studies, however, have successfully shown signal intensity changes in these areas during encoding or retrieval. Using functional magnetic resonance imaging (fMRI), we studied normal human subjects while they performed a novel picture encoding task. High-speed echo-planar imaging techniques evaluated fMRI signal changes throughout the brain. During the encoding of novel pictures, statistically significant increases in fMRI signal were observed bilaterally in the posterior hippocampal formation and parahippocampal gyrus and in the lingual and fusiform gyri. To our knowledge, this experiment is the first fMRI study to show robust signal changes in the human hippocampal region. It also provides evidence that the encoding of novel, complex pictures depends upon an interaction between ventral cortical regions, specialized for object vision, and the hippocampal formation and parahippocampal gyrus, specialized for long-term memory. PMID:8710927

  3. Testing the Forensic Interestingness of Image Files Based on Size and Type

    DTIC Science & Technology

    2017-09-01

    there were still a lot of uninteresting files that were marked as interesting. Also, the results do not show a correlation between the...

  4. What is meant by Format Version? Product Version? Collection?

    Atmospheric Science Data Center

    2017-10-12

    The Format Version is used to distinguish between software deliveries to ASDC that result in a product format change. The format version is given in the MISR data file name using the designator _Fnn_, where nn is the version number. ...

  5. Software for hyperspectral, joint photographic experts group (.JPG), portable network graphics (.PNG) and tagged image file format (.TIFF) segmentation

    NASA Astrophysics Data System (ADS)

    Bruno, L. S.; Rodrigo, B. P.; Lucio, A. de C. Jorge

    2016-10-01

    This paper presents a system that applies a Multilayer Perceptron neural network to the segmentation of agricultural images acquired by drone. The application allows a supervising user to train the classes that will later be interpreted by the neural network; these classes are generated manually with pre-selected attributes in the application. After attribute selection, a segmentation process is performed to allow the extraction of relevant information from different types of images, RGB or hyperspectral. The application can extract the geographical coordinates from the image metadata, georeferencing all pixels in the image. In spite of the excessive memory consumption of hyperspectral image regions of interest, it is possible to perform segmentation using bands chosen by the user, which can be combined in different ways to obtain different results.
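
    As a minimal sketch of the underlying idea, pixel-wise classification with a Multilayer Perceptron using scikit-learn; the three-band attribute vectors and class labels are synthetic placeholders, not the application's real training interface:

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        # Synthetic training samples: per-pixel attribute vectors (e.g. three
        # selected band values) labelled as crop (0), soil (1) or shadow (2).
        X_train = np.array([[0.8, 0.6, 0.2], [0.7, 0.5, 0.1],
                            [0.3, 0.3, 0.4], [0.2, 0.4, 0.5],
                            [0.1, 0.1, 0.1], [0.0, 0.1, 0.2]])
        y_train = np.array([0, 0, 1, 1, 2, 2])

        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        clf.fit(X_train, y_train)

        # Segment an image: flatten (rows, cols, bands) to a sample matrix.
        image = np.random.rand(4, 4, 3)
        labels = clf.predict(image.reshape(-1, 3)).reshape(4, 4)
        print(labels)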

  6. High-contrast multilayer imaging of biological organisms through dark-field digital refocusing.

    PubMed

    Faridian, Ahmad; Pedrini, Giancarlo; Osten, Wolfgang

    2013-08-01

    We have developed an imaging system to extract high contrast images from different layers of biological organisms. Utilizing a digital holographic approach, the system works without scanning through layers of the specimen. In dark-field illumination, scattered light has the main contribution in image formation, but in the case of coherent illumination, this creates a strong speckle noise that reduces the image quality. To remove this restriction, the specimen has been illuminated with various speckle-fields and a hologram has been recorded for each speckle-field. Each hologram has been analyzed separately and the corresponding intensity image has been reconstructed. The final image has been derived by averaging over the reconstructed images. A correlation approach has been utilized to determine the number of speckle-fields required to achieve a desired contrast and image quality. The reconstructed intensity images in different object layers are shown for different sea urchin larvae. Two multimedia files are attached to illustrate the process of digital focusing.

  7. Chapter 3. Tabular data and graphical images in support of the U.S. Geological Survey National Oil and Gas Assessment--East Texas basin and Louisiana-Mississippi salt basins provinces, Jurassic Smackover Interior salt basins total petroleum system (504902), Cotton Valley group.

    USGS Publications Warehouse

    Klett, T.R.; Le, P.A.

    2006-01-01

    This chapter describes data used in support of the process being applied by the U.S. Geological Survey (USGS) National Oil and Gas Assessment (NOGA) project. Digital tabular data used in this report and archival data that permit the user to perform further analyses are available elsewhere on the CD-ROM. The data may be imported by computers and software without the reader having to transcribe them from the Portable Document Format files (.pdf files) of the text. Because of the number and variety of platforms and software available, graphical images are provided as .pdf files and tabular data are provided in a raw form as tab-delimited text files (.tab files).

  8. Moderate Resolution Imaging Spectroradiometer (MODIS) Overview

    USGS Publications Warehouse

    ,

    2008-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) is an instrument that collects remotely sensed data used by scientists for monitoring, modeling, and assessing the effects of natural processes and human actions on the Earth's surface. The continual calibration of the MODIS instruments, the refinement of algorithms used to create higher-level products, and the ongoing product validation make MODIS images a valuable time series (2000-present) of geophysical and biophysical land-surface measurements. Carried on two National Aeronautics and Space Administration (NASA) Earth Observing System (EOS) satellites, MODIS acquires morning (EOS-Terra) and afternoon (EOS-Aqua) views almost daily. Terra data acquisitions began in February 2000 and Aqua data acquisitions began in July 2002. Land data are generated only as higher-level products, removing the burden of common types of data processing from the user community. MODIS-based products describing ecological dynamics, radiation budget, and land cover are projected onto a sinusoidal mapping grid and distributed as 10- by 10-degree tiles at 250-, 500-, or 1,000-meter spatial resolution. Some products are also created on a 0.05-degree geographic grid to support climate modeling studies. All MODIS products are distributed in the Hierarchical Data Format-Earth Observing System (HDF-EOS) file format and are available through file transfer protocol (FTP) or on digital video disc (DVD) media. Versions 4 and 5 of MODIS land data products are currently available and represent 'validated' collections defined in stages of accuracy that are based on the number of field sites and time periods for which the products have been validated. Version 5 collections incorporate the longest time series of both Terra and Aqua MODIS data products.
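
    The sinusoidal tile indices of such products can be computed directly from latitude and longitude; a short Python sketch using the published MODIS grid constants:

        import math

        R = 6371007.181          # earth radius of the MODIS sinusoidal grid (m)
        T = 1111950.5196666666   # tile size in projected metres (10 deg at equator)
        XMIN, YMAX = -20015109.354, 10007554.677  # grid upper-left corner

        def modis_tile(lat, lon):
            """Return the (h, v) sinusoidal tile indices containing lat/lon."""
            x = R * math.radians(lon) * math.cos(math.radians(lat))
            y = R * math.radians(lat)
            return int((x - XMIN) // T), int((YMAX - y) // T)

        print(modis_tile(45.5, -93.0))  # (11, 4), i.e. tile h11v04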

  9. High throughput imaging cytometer with acoustic focussing

    PubMed Central

    Zmijan, Robert; Jonnalagadda, Umesh S.; Carugo, Dario; Kochi, Yu; Lemm, Elizabeth; Packham, Graham; Hill, Martyn

    2015-01-01

    We demonstrate an imaging flow cytometer that uses acoustic levitation to assemble cells and other particles into a sheet structure. This technique enables a high resolution, low noise CMOS camera to capture images of thousands of cells with each frame. While ultrasonic focussing has previously been demonstrated for 1D cytometry systems, extending the technology to a planar, much higher throughput format and integrating imaging is non-trivial, and represents a significant jump forward in capability, leading to diagnostic possibilities not achievable with current systems. A galvo mirror is used to track the images of the moving cells permitting exposure times of 10 ms at frame rates of 50 fps with motion blur of only a few pixels. At 80 fps, we demonstrate a throughput of 208 000 beads per second. We investigate the factors affecting motion blur and throughput, and demonstrate the system with fluorescent beads, leukaemia cells and a chondrocyte cell line. Cells require more time to reach the acoustic focus than beads, resulting in lower throughputs; however a longer device would remove this constraint. PMID:29456838

  10. Mass-storage management for distributed image/video archives

    NASA Astrophysics Data System (ADS)

    Franchi, Santina; Guarda, Roberto; Prampolini, Franco

    1993-04-01

    The realization of an image/video database requires a specific design for both database structures and mass storage management. This issue was addressed in the project of the digital image/video database system designed at the IBM SEMEA Scientific & Technical Solution Center. Proper database structures have been defined to catalog image/video coding techniques with their related parameters and the description of image/video contents. User workstations and servers are distributed along a local area network. Image/video files are not managed directly by the DBMS server; because of their large size, they are stored outside the database on network devices. The database contains the pointers to the image/video files and the description of the storage devices. The system can use different kinds of storage media, organized in a hierarchical structure. Three levels of functions are available to manage the storage resources. The functions of the lower level provide media management: they allow cataloging devices and modifying device status and device network location. The medium level manages image/video files on a physical basis, handling file migration between high-capacity media and low-access-time media. The functions of the upper level work on image/video files on a logical basis, as they archive, move, and copy image/video data selected by user-defined queries. These functions are used to support the implementation of a storage management strategy. The database information about the characteristics of both storage devices and coding techniques is used by the third-level functions to meet delivery/visualization requirements and to reduce archiving costs.

  11. An open-source solution for advanced imaging flow cytometry data analysis using machine learning.

    PubMed

    Hennig, Holger; Rees, Paul; Blasi, Thomas; Kamentsky, Lee; Hung, Jane; Dao, David; Carpenter, Anne E; Filby, Andrew

    2017-01-01

    Imaging flow cytometry (IFC) enables the high throughput collection of morphological and spatial information from hundreds of thousands of single cells. This high content, information rich image data can in theory resolve important biological differences among complex, often heterogeneous biological samples. However, data analysis is often performed in a highly manual and subjective manner using very limited image analysis techniques in combination with conventional flow cytometry gating strategies. This approach is not scalable to the hundreds of available image-based features per cell and thus makes use of only a fraction of the spatial and morphometric information. As a result, the quality, reproducibility and rigour of results are limited by the skill, experience and ingenuity of the data analyst. Here, we describe a pipeline using open-source software that leverages the rich information in digital imagery using machine learning algorithms. Raw image files (.rif) from an imaging flow cytometer are compensated and corrected into the proprietary .cif file format and imported into the open-source software CellProfiler, where an image processing pipeline identifies cells and subcellular compartments, allowing hundreds of morphological features to be measured. This high-dimensional data can then be analysed using cutting-edge machine learning and clustering approaches using "user-friendly" platforms such as CellProfiler Analyst. Researchers can train an automated cell classifier to recognize different cell types, cell cycle phases, drug treatment/control conditions, etc., using supervised machine learning. This workflow should enable the scientific community to leverage the full analytical power of IFC-derived data sets. It will help to reveal otherwise unappreciated populations of cells based on features that may be hidden to the human eye, including subtle measured differences in label free detection channels such as bright-field and dark-field imagery.
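
    A minimal sketch of the supervised step of such a pipeline, in Python with scikit-learn; the per-cell feature names only mimic the style of CellProfiler output, and the data and labels are synthetic:

        import numpy as np
        import pandas as pd
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        # Synthetic per-cell feature table; column names only mimic the style
        # of CellProfiler output and are not its actual schema.
        rng = np.random.default_rng(0)
        df = pd.DataFrame({
            "AreaShape_Area": rng.normal(500, 50, 200),
            "Intensity_MeanIntensity_Brightfield": rng.random(200),
            "Texture_Contrast_Darkfield": rng.random(200),
        })
        # Pretend ground truth: class follows cell size, giving a learnable pattern.
        labels = (df["AreaShape_Area"] > 500).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(df, labels, random_state=0)
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))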

  12. Earth observation image data format

    NASA Technical Reports Server (NTRS)

    Sos, J. Y.

    1976-01-01

    A flexible format for computer compatible tape (CCT) containing multispectral earth observation sensor data is described. The driving functions which comprise the data format requirements are summarized and general data format guidelines are discussed.

  13. High resolution imaging studies into the formation of laser-induced periodic surface structures.

    PubMed

    Kerr, N C; Clark, S E; Emmony, D C

    1989-09-01

    We report the results of an investigation into the formation mechanism of laser-induced ripple structures based on obtaining direct images of a surface while the transient heating induced by a KrF excimer laser is still present. These images reveal transient but well-defined periodic heating patterns which, if enough subsequent excimer pulses are incident on the surface, become permanently induced ripple structures. It is evident from these transient images that the surface heating is confined to the induced structures, thus strongly supporting the idea that at low fluences the ripples are formed by localizing surface melting.

  14. New directions in the CernVM file system

    NASA Astrophysics Data System (ADS)

    Blomer, Jakob; Buncic, Predrag; Ganis, Gerardo; Hardi, Nikola; Meusel, Rene; Popescu, Radu

    2017-10-01

    The CernVM File System today is commonly used to host and distribute application software stacks. In addition to this core task, recent developments expand the scope of the file system into new areas. Firstly, CernVM-FS emerges as a good match for container engines to distribute the container image contents. Compared to native container image distribution (e.g. through the “Docker registry”), CernVM-FS massively reduces the network traffic for image distribution. This has been shown, for instance, by a prototype integration of CernVM-FS into Mesos developed by Mesosphere, Inc. We present a path for a smooth integration of CernVM-FS and Docker. Secondly, CernVM-FS recently raised new interest as an option for the distribution of experiment conditions data. Here, the focus is on improved versioning capabilities of CernVM-FS that allow linking the conditions data of a run period to the state of a CernVM-FS repository. Lastly, CernVM-FS has been extended to provide a name space for physics data for the LIGO and CMS collaborations. Searching through a data namespace is often done by a central, experiment-specific database service. A name space on CernVM-FS can particularly benefit from an existing, scalable infrastructure and from the POSIX file system interface.

  15. FORMATOMATIC: a program for converting diploid allelic data between common formats for population genetic analysis.

    PubMed

    Manoukis, Nicholas C

    2007-07-01

    There has been a great increase in both the number of population genetic analysis programs and the size of data sets being studied with them. Since the file formats required by the most popular and useful programs are variable, automated reformatting or conversion between them is desirable. formatomatic is an easy to use program that can read allelic data files in genepop, raw (csv) or convert formats and create data files in nine formats: raw (csv), arlequin, genepop, immanc/bayesass +, migrate, newhybrids, msvar, baps and structure. Use of formatomatic should greatly reduce time spent reformatting data sets and avoid unnecessary errors.

  16. Quantitative histogram analysis of images

    NASA Astrophysics Data System (ADS)

    Holub, Oliver; Ferreira, Sérgio T.

    2006-11-01

    A routine for histogram analysis of images has been written in the object-oriented, graphical development environment LabVIEW. The program converts an RGB bitmap image into an intensity-linear greyscale image according to selectable conversion coefficients. This greyscale image is subsequently analysed by plots of the intensity histogram and probability distribution of brightness, and by calculation of various parameters, including average brightness, standard deviation, variance, minimal and maximal brightness, mode, skewness and kurtosis of the histogram and the median of the probability distribution. The program allows interactive selection of specific regions of interest (ROI) in the image and definition of lower and upper threshold levels (e.g., to permit the removal of a constant background signal). The results of the analysis of multiple images can be conveniently saved and exported for plotting in other programs, which allows fast analysis of relatively large sets of image data. The program file accompanies this manuscript together with a detailed description of two application examples: the analysis of fluorescence microscopy images, specifically of tau-immunofluorescence in primary cultures of rat cortical and hippocampal neurons, and the quantification of protein bands by Western blot. The possibilities and limitations of this kind of analysis are discussed. Program summary: Title of program: HAWGC. Catalogue identifier: ADXG_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXG_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computers: Mobile Intel Pentium III, AMD Duron. Installations: no installation necessary; executable file together with necessary files for the LabVIEW Run-time engine. Operating systems or monitors under which the program has been tested: Windows ME/2000/XP. Programming language used: LabVIEW 7.0. Memory required to execute with typical data: ~16 MB for starting and ~160 MB used for...
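
    A compact Python analogue of the reported measurements (the original program is written in LabVIEW); the RGB conversion coefficients shown are the common Rec. 601 weights, standing in for the program's selectable coefficients:

        import numpy as np
        from scipy import stats

        def histogram_stats(grey):
            """Brightness statistics mirroring the parameters the program reports."""
            v = grey.ravel().astype(float)
            return {
                "mean": v.mean(), "std": v.std(), "variance": v.var(),
                "min": v.min(), "max": v.max(),
                "mode": stats.mode(np.round(v), keepdims=False).mode,
                "skewness": stats.skew(v), "kurtosis": stats.kurtosis(v),
                "median": np.median(v),
            }

        # RGB -> intensity-linear greyscale with selectable coefficients
        # (the common Rec. 601 weights are used here).
        rgb = np.random.randint(0, 256, (64, 64, 3))
        grey = rgb @ np.array([0.299, 0.587, 0.114])
        print(histogram_stats(grey))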

  17. bwtool: a tool for bigWig files

    PubMed Central

    Pohl, Andy; Beato, Miguel

    2014-01-01

    BigWig files are a compressed, indexed, binary format for genome-wide signal data for calculations (e.g. GC percent) or experiments (e.g. ChIP-seq/RNA-seq read depth). bwtool is a tool designed to read bigWig files rapidly and efficiently, providing functionality for extracting data and summarizing it in several ways, globally or at specific regions. Additionally, the tool enables the conversion of the positions of signal data from one genome assembly to another, also known as ‘lifting’. We believe bwtool can be useful for the analyst frequently working with bigWig data, which is becoming a standard format to represent functional signals along genomes. The article includes supplementary examples of running the software. Availability and implementation: The C source code is freely available under the GNU public license v3 at http://cromatina.crg.eu/bwtool. Contact: andrew.pohl@crg.eu, andypohl@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24489365
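
    For comparison, the same kinds of summary and extraction can be scripted in Python with the third-party pyBigWig module; the file name and chromosome are placeholders:

        import pyBigWig  # third-party module (pip install pyBigWig)

        bw = pyBigWig.open("signal.bw")  # placeholder file with a "chr1" entry

        # Region summary, analogous to `bwtool summary`.
        print("mean:", bw.stats("chr1", 0, 100000, type="mean")[0])
        print("max: ", bw.stats("chr1", 0, 100000, type="max")[0])

        # Per-base values for a small window, analogous to `bwtool extract`.
        print(bw.values("chr1", 1000, 1010))
        bw.close()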

  18. Gender Issues: An Activity File.

    ERIC Educational Resources Information Center

    Fountain, Susan

    This activity file grew out of research of an "Images of Women in Development" project of the Centre for Global Education at the University of York, England. The activities are intended for students in the 8- to 13-year-old range to learn more about gender issues. The activities are divided into four sections: (1) awareness-raising activities in…

  19. Image processing tool for automatic feature recognition and quantification

    DOEpatents

    Chen, Xing; Stoddard, Ryan J.

    2017-05-02

    A system for defining structures within an image is described. The system includes reading of an input file, preprocessing the input file while preserving metadata such as scale information and then detecting features of the input file. In one version the detection first uses an edge detector followed by identification of features using a Hough transform. The output of the process is identified elements within the image.
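
    A minimal sketch of the described two-stage detection (edge detector, then Hough transform) using OpenCV; the thresholds, file name, and line-based quantification are illustrative choices, not the patented implementation:

        import cv2
        import numpy as np

        image = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

        # Step 1: edge detection.
        edges = cv2.Canny(image, threshold1=50, threshold2=150)

        # Step 2: feature identification with a Hough transform.
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                                minLineLength=30, maxLineGap=5)

        # Step 3: quantification, e.g. segment count and total length.
        if lines is not None:
            lengths = [np.hypot(x2 - x1, y2 - y1) for x1, y1, x2, y2 in lines[:, 0]]
            print(len(lengths), "segments, total length", sum(lengths))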

  20. Quantitative Microbial Risk Assessment Tutorial: Publishing a Microbial Density Time Series as a Txt File

    EPA Science Inventory

    A SARA Timeseries Utility supports analysis and management of time-varying environmental data including listing, graphing, computing statistics, computing meteorological data and saving in a WDM or text file. File formats supported include WDM, HSPF Binary (.hbn), USGS RDB, and T...

  1. The digital geologic map of Colorado in ARC/INFO format, Part A. Documentation

    USGS Publications Warehouse

    Green, Gregory N.

    1992-01-01

    This geologic map was prepared as a part of a study of digital methods and techniques as applied to complex geologic maps. The geologic map was digitized from the original scribe sheets used to prepare the published Geologic Map of Colorado (Tweto 1979). Consequently the digital version is at 1:500,000 scale using the Lambert Conformal Conic map projection parameters of the state base map. Stable base contact prints of the scribe sheets were scanned on a Tektronix 4991 digital scanner. The scanner automatically converts the scanned image to an ASCII vector format. These vectors were transferred to a VAX minicomputer, where they were then loaded into ARC/INFO. Each vector and polygon was given attributes derived from the original 1979 geologic map. This database was developed on a MicroVAX computer system using VAX V 5.4 and ARC/INFO 5.0 software. UPDATE: April 1995. The update was done solely for the purpose of adding the ability to plot to an HP650c plotter. Two new ARC/INFO plot AMLs along with a lineset and shadeset for the HP650c DesignJet printer have been included. These new files are COLORADO.650, INDEX.650, TWETOLIN.E00 and TWETOSHD.E00. These files were created on a UNIX platform with ARC/INFO 6.1.2. Updated versions of INDEX.E00, CONTACT.E00, LINE.E00, DECO.E00 and BORDER.E00 files that include the newly defined HP650c items are also included. * Any use of trade, product, or firm names is for descriptive purposes only and does not imply endorsement by the U.S. Government. Descriptors: The Digital Geologic Map of Colorado in ARC/INFO Format, Open-File Report 92-050

  2. A Summary of Proposed Changes to the Current ICARTT Format Standards and their Implications to Future Airborne Studies

    NASA Astrophysics Data System (ADS)

    Northup, E. A.; Kusterer, J.; Quam, B.; Chen, G.; Early, A. B.; Beach, A. L., III

    2015-12-01

    The current ICARTT file format standards were developed to fulfill the data management needs of the International Consortium for Atmospheric Research on Transport and Transformation (ICARTT) campaign in 2004. The goal of the ICARTT file format was to establish a common and simple-to-use data file format to promote data exchange and collaboration among science teams with similar science objectives. ICARTT has been the NASA standard since 2010 and is widely used by NOAA, NSF, and international partners (DLR, FAAM). Despite its level of acceptance, there are a number of issues with the current ICARTT format, especially concerning machine readability. To enhance usability, the ICARTT Refresh Earth Science Data Systems Working Group (ESDSWG) was established to provide a platform for atmospheric science data producers, users (e.g. modelers), and data managers to collaborate on developing criteria for this file format. Ultimately, this is a cross-agency effort to improve and aggregate the metadata records being produced. After conducting a survey to identify deficiencies in the current format, we determined which are considered most important to the various communities. Numerous recommendations were made to improve the file format while maintaining backward compatibility. The recommendations made to date and their advantages and limitations will be discussed.
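
    For orientation, a minimal Python reader for the common ICARTT FFI 1001 layout, in which the first header line gives the header line count; this is a simplified sketch, not a complete implementation of the standard:

        import csv

        def read_icartt(path):
            """Minimal reader for an ICARTT (FFI 1001) file: the first line gives
            the header line count; data records follow as comma-separated values."""
            with open(path) as f:
                n_header = int(f.readline().split(",")[0])  # e.g. "37, 1001"
                header = [f.readline() for _ in range(n_header - 1)]
                names = [n.strip() for n in header[-1].split(",")]  # column names
                data = [[float(x) for x in row] for row in csv.reader(f) if row]
            return names, data

        # names, rows = read_icartt("MYDATA_L1_20040715_R0.ict")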

  3. BOREAS RSS-10 TOMS Circumpolar One-Degree PAR Images

    NASA Technical Reports Server (NTRS)

    Dye, Dennis G.; Holben, Brent; Nickeson, Jaime (Editor); Hall, Forrest G. (Editor); Smith, David E. (Technical Monitor)

    2000-01-01

    The Boreal Ecosystem-Atmosphere Study (BOREAS) Remote Sensing Science (RSS)-10 team investigated the magnitude of daily, seasonal, and yearly variations of Photosynthetically Active Radiation (PAR) from ground and satellite observations. This data set contains satellite estimates of surface-incident PAR (400-700 nm, MJ/sq m) at one-degree spatial resolution. The spatial coverage is circumpolar from latitudes of 41 to 66 degrees north. The temporal coverage is from May through September for years 1979 through 1989. Eleven-year statistics are also provided: (1) mean, (2) standard deviation, and (3) coefficient of variation for 1979-89. The PAR estimates were derived from the global gridded ultraviolet reflectivity data product (average of 360, 380 nm) from the Nimbus-7 Total Ozone Mapping Spectrometer (TOMS). Image mask data are provided for identifying the boreal forest zone, and ocean/land and snow/ice-covered areas. The data are available as binary image format data files. The PAR data are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). The data files are available on a CD-ROM (see document number 20010000884).
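
    Reading such a binary image file reduces to knowing its grid shape and sample type; a hedged Python sketch follows, assuming one compositing period stored as a 26 x 360 grid of big-endian 32-bit floats (the authoritative layout is given in the data set documentation):

        import numpy as np

        # Assumed layout, for illustration only: one compositing period stored as
        # a 26-row (66N band at row 0, down to 41N) x 360-column (0 deg E at
        # column 0) grid of big-endian 32-bit floats; see the data set
        # documentation for the authoritative record structure.
        grid = np.fromfile("boreas_par_composite.img", dtype=">f4").reshape(26, 360)

        # PAR (MJ/m^2) for the 55-56N band at 105W under those assumptions.
        print(grid[66 - 56, (-105) % 360])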

  4. Cardio-PACs: a new opportunity

    NASA Astrophysics Data System (ADS)

    Heupler, Frederick A., Jr.; Thomas, James D.; Blume, Hartwig R.; Cecil, Robert A.; Heisler, Mary

    2000-05-01

    It is now possible to replace film-based image management in the cardiac catheterization laboratory with a Cardiology Picture Archiving and Communication System (Cardio-PACS) based on digital imaging technology. The first step in the conversion process is installation of a digital image acquisition system that is capable of generating high-quality DICOM-compatible images. The next three steps, which are the subject of this presentation, involve image display, distribution, and storage. Clinical requirements and associated cost considerations for these three steps are listed below: Image display: (1) Image quality equal to film, with DICOM format, lossless compression, image processing, desktop PC-based with color monitor, and physician-friendly imaging software; (2) Performance specifications include: acquire 30 frames/sec; replay 15 frames/sec; access to file server 5 seconds, and to archive 5 minutes; (3) Compatibility of image file, transmission, and processing formats; (4) Image manipulation: brightness, contrast, gray scale, zoom, biplane display, and quantification; (5) User-friendly control of image review. Image distribution: (1) Standard IP-based network between cardiac catheterization laboratories, file server, long-term archive, review stations, and remote sites; (2) Non-proprietary formats; (3) Bidirectional distribution. Image storage: (1) CD-ROM vs disk vs tape; (2) Verification of data integrity; (3) User-designated storage capacity for catheterization laboratory, file server, long-term archive. Costs: (1) Image acquisition equipment, file server, long-term archive; (2) Network infrastructure; (3) Review stations and software; (4) Maintenance and administration; (5) Future upgrades and expansion; (6) Personnel.

  5. Integration of DICOM and openEHR standards

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Yao, Zhihong; Liu, Lei

    2011-03-01

    The standard format for medical imaging storage and transmission is DICOM. openEHR is an open standard specification in health informatics that describes the management, storage, retrieval, and exchange of health data in electronic health records. Considering that the integration of DICOM and openEHR is beneficial to information sharing, we developed, on the basis of the XML-based DICOM format, a method of creating a DICOM Imaging Archetype in openEHR to enable the integration of DICOM and openEHR. Each DICOM file contains abundant imaging information; however, because reading a DICOM file involves looking up the DICOM Data Dictionary, its readability is limited. openEHR has innovatively adopted a two-level modeling method, dividing clinical information into a lower level, the information model, and an upper level, archetypes and templates. One critical challenge posed to the development of openEHR, however, is information sharing, especially imaging information sharing; for example, some important imaging information cannot be displayed in an openEHR file. In this paper, to enhance the readability of a DICOM file and the semantic interoperability of an openEHR file, we developed a method of mapping a DICOM file to an openEHR file by adopting the archetype form defined in openEHR. Because an archetype has a tree structure, after mapping a DICOM file to an openEHR file, the converted information is structured in conformance with the openEHR format. This method enables the integration of DICOM and openEHR and data exchange between the two standards without losing imaging information.
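
    A minimal sketch of the DICOM-to-tree mapping idea, in Python with the third-party pydicom module, emitting XML element names in place of bare numeric tags; it illustrates the direction of the conversion, not the authors' archetype definition:

        import pydicom                      # third-party DICOM parser
        import xml.etree.ElementTree as ET

        def dicom_to_xml(path):
            """Map top-level DICOM attributes into a tree, using dictionary
            keywords as element names instead of bare (group, element) tags."""
            ds = pydicom.dcmread(path, stop_before_pixels=True)
            root = ET.Element("imaging_study")
            for elem in ds:
                if elem.VR == "SQ":
                    continue  # nested sequences omitted in this sketch
                child = ET.SubElement(root, elem.keyword or "tag_%08X" % int(elem.tag))
                child.text = str(elem.value)
            return ET.tostring(root, encoding="unicode")

        # print(dicom_to_xml("image.dcm"))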

  6. A model for a PC-based, universal-format, multimedia digitization system: moving beyond the scanner.

    PubMed

    McEachen, James C; Cusack, Thomas J; McEachen, John C

    2003-08-01

    Digitizing images for use in case presentations based on hardcopy films, slides, photographs, negatives, books, and videos can be a challenging task. Scanners and digital cameras have become standard tools of the trade. Unfortunately, using these devices to digitize multiple images in many different media formats can be time-consuming and in some cases unachievable. The authors' goal was to create a PC-based solution for digitizing multiple media formats in a timely fashion while maintaining adequate image presentation quality. The solution makes use of off-the-shelf hardware, including a digital document camera (DDC), a VHS video player, and a video-editing kit. With the assistance of five staff radiologists, the authors examined the quality of multiple image types digitized with this equipment. The authors also quantified the speed of digitization of various types of media using the DDC and the video-editing kit. With regard to image quality, the five staff radiologists rated the digitized angiography, CT, and MR images as adequate to excellent for use in teaching files and case presentations. With regard to digitized plain films, the average rating was adequate. As for performance, the authors recognized a 68% improvement in the time required to digitize hardcopy films using the DDC instead of a professional-quality scanner. The PC-based solution provides a means for digitizing multiple images from many different types of media in a timely fashion while maintaining adequate image presentation quality.

  7. PDB Editor: a user-friendly Java-based Protein Data Bank file editor with a GUI.

    PubMed

    Lee, Jonas; Kim, Sung Hou

    2009-04-01

    The Protein Data Bank file format is the format most widely used by protein crystallographers and biologists to disseminate and manipulate protein structures. Despite this, there are few user-friendly software packages available to efficiently edit and extract raw information from PDB files. This limitation often leads to many protein crystallographers wasting significant time manually editing PDB files. PDB Editor, written in Java Swing GUI, allows the user to selectively search, select, extract and edit information in parallel. Furthermore, the program is a stand-alone application written in Java which frees users from the hassles associated with platform/operating system-dependent installation and usage. PDB Editor can be downloaded from http://sourceforge.net/projects/pdbeditorjl/.
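
    Much of what such an editor manipulates is the fixed-column ATOM/HETATM record; a minimal Python parser following the PDB column layout:

        def read_atoms(path):
            """Parse ATOM/HETATM records using the fixed-column layout of the
            PDB format (column ranges per the wwPDB specification)."""
            atoms = []
            with open(path) as f:
                for line in f:
                    if line.startswith(("ATOM", "HETATM")):
                        atoms.append({
                            "serial": int(line[6:11]),
                            "name": line[12:16].strip(),
                            "res_name": line[17:20].strip(),
                            "chain": line[21],
                            "res_seq": int(line[22:26]),
                            "x": float(line[30:38]),
                            "y": float(line[38:46]),
                            "z": float(line[46:54]),
                        })
            return atoms

        # atoms = read_atoms("1abc.pdb"); print(len(atoms), "atoms")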

  8. Galileo SSI/Ida Radiometrically Calibrated Images V1.0

    NASA Astrophysics Data System (ADS)

    Domingue, D. L.

    2016-05-01

    This data set includes Galileo Orbiter SSI radiometrically calibrated images of the asteroid 243 Ida, created using ISIS software and assuming nadir pointing. This is an original delivery of radiometrically calibrated files, not an update to existing files. All images archived include the asteroid within the image frame. Calibration was performed in 2013-2014.

  9. 14 CFR 221.121 - How to prepare and file applications for Special Tariff Permission.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS TARIFFS Special Tariff Permission To... notice shall conform to the requirements of § 221.212 if filed electronically. (b) Number of paper copies and place of filing. For paper format applications, the original and one copy of each such application...

  10. Shaping ability of 4 different single-file systems in simulated S-shaped canals.

    PubMed

    Saleh, Abdulrahman Mohammed; Vakili Gilani, Pouyan; Tavanafar, Saeid; Schäfer, Edgar

    2015-04-01

    The aim of this study was to compare the shaping ability of 4 different single-file systems in simulated S-shaped canals. Sixty-four S-shaped canals in resin blocks were prepared to an apical size of 25 using Reciproc (VDW, Munich, Germany), WaveOne (Dentsply Maillefer, Ballaigues, Switzerland), OneShape (Micro Méga, Besançon, France), and F360 (Komet Brasseler, Lemgo, Germany) (n = 16 canals/group) systems. Composite images were made from the superimposition of pre- and postinstrumentation images. The amount of resin removed by each system was measured by using a digital template and image analysis software. Canal aberrations and the preparation time were also recorded. The data were statistically analyzed by using analysis of variance, Tukey, and chi-square tests. Canals prepared with the F360 and OneShape systems were better centered compared with the Reciproc and WaveOne systems. Reciproc and WaveOne files removed significantly greater amounts of resin from the inner side of both curvatures (P < .05). Instrumentation with OneShape and Reciproc files was significantly faster compared with WaveOne and F360 files (P < .05). No instrument fractured during canal preparation. Under the conditions of this study, all single-file instruments were safe to use and were able to prepare the canals efficiently. However, single-file systems that are less tapered seem to be more favorable when preparing S-shaped canals. Copyright © 2015 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  11. Filtering NetCDF Files by Using the EverVIEW Slice and Dice Tool

    USGS Publications Warehouse

    Conzelmann, Craig; Romañach, Stephanie S.

    2010-01-01

    Network Common Data Form (NetCDF) is a self-describing, machine-independent file format for storing array-oriented scientific data. It was created to provide a common interface between applications and real-time meteorological and other scientific data. Over the past few years, there has been a growing movement within the community of natural resource managers in The Everglades, Fla., to use NetCDF as the standard data container for datasets based on multidimensional arrays. As a consequence, a need surfaced for additional tools to view and manipulate NetCDF datasets, specifically to filter the files by creating subsets of large NetCDF files. The U.S. Geological Survey (USGS) and the Joint Ecosystem Modeling (JEM) group are working to address these needs with applications like the EverVIEW Slice and Dice Tool, which allows users to filter grid-based NetCDF files, thus targeting those data most important to them. The major functions of this tool are as follows: (1) to create subsets of NetCDF files temporally, spatially, and by data value; (2) to view the NetCDF data in table form; and (3) to export the filtered data to a comma-separated value (CSV) file format. The USGS and JEM will continue to work with scientists and natural resource managers across The Everglades to solve complex restoration problems through technological advances.
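
    A minimal Python sketch of the same subset-filter-export workflow using the netCDF4 module; the file, variable, and dimension names are hypothetical:

        from netCDF4 import Dataset   # third-party netCDF4-python module
        import numpy as np
        import csv

        # File, variable, and dimension names here are hypothetical placeholders.
        with Dataset("stage.nc") as nc:
            depth = nc.variables["water_depth"]                 # dims: (time, y, x)
            sub = np.asarray(depth[0:10, 20:40, 20:40], float)  # temporal/spatial subset

        with open("subset.csv", "w", newline="") as f:
            w = csv.writer(f)
            w.writerow(["time", "y", "x", "water_depth"])
            for (t, y, x), v in np.ndenumerate(sub):
                if v > 0.5:            # filter by data value
                    w.writerow([t, y, x, float(v)])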

  12. Analyzing huge pathology images with open source software.

    PubMed

    Deroulers, Christophe; Ameisen, David; Badoual, Mathilde; Gerin, Chloé; Granier, Alexandre; Lartaud, Marc

    2013-06-06

    Digital pathology images are increasingly used both for diagnosis and research, because slide scanners are nowadays broadly available and because the quantitative study of these images yields new insights in systems biology. However, such virtual slides pose a technical challenge, since the images often occupy several gigabytes and cannot be fully opened in a computer's memory. Moreover, there is no standard format. Therefore, most common open source tools such as ImageJ fail at treating them, and the others require expensive hardware while still being prohibitively slow. We have developed several cross-platform open source software tools to overcome these limitations. The NDPITools provide a way to transform microscopy images initially in the loosely supported NDPI format into one or several standard TIFF files, and to create mosaics (division of huge images into small ones, with or without overlap) in various TIFF and JPEG formats. They can be driven through ImageJ plugins. The LargeTIFFTools achieve similar functionality for huge TIFF images which do not fit into RAM. We test the performance of these tools on several digital slides and compare them, when applicable, to standard software. A statistical study of the cells in a tissue sample from an oligodendroglioma was performed on an average laptop computer to demonstrate the efficiency of the tools. Our open source software enables dealing with huge images with standard software on average computers. They are cross-platform, independent of proprietary libraries and very modular, allowing them to be used in other open source projects. They have excellent performance in terms of execution speed and RAM requirements. They open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre who do image analysis of many slides on a computer cluster. The virtual slide(s) for this article can be found here

  13. Analyzing huge pathology images with open source software

    PubMed Central

    2013-01-01

    Background Digital pathology images are increasingly used both for diagnosis and research, because slide scanners are nowadays broadly available and because the quantitative study of these images yields new insights in systems biology. However, such virtual slides pose a technical challenge, since the images often occupy several gigabytes and cannot be fully opened in a computer's memory. Moreover, there is no standard format. Therefore, most common open source tools such as ImageJ fail at treating them, and the others require expensive hardware while still being prohibitively slow. Results We have developed several cross-platform open source software tools to overcome these limitations. The NDPITools provide a way to transform microscopy images initially in the loosely supported NDPI format into one or several standard TIFF files, and to create mosaics (division of huge images into small ones, with or without overlap) in various TIFF and JPEG formats. They can be driven through ImageJ plugins. The LargeTIFFTools achieve similar functionality for huge TIFF images which do not fit into RAM. We test the performance of these tools on several digital slides and compare them, when applicable, to standard software. A statistical study of the cells in a tissue sample from an oligodendroglioma was performed on an average laptop computer to demonstrate the efficiency of the tools. Conclusions Our open source software enables dealing with huge images with standard software on average computers. They are cross-platform, independent of proprietary libraries and very modular, allowing them to be used in other open source projects. They have excellent performance in terms of execution speed and RAM requirements. They open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre who do image analysis of many slides on a computer cluster. Virtual slides The virtual slide(s) for this article can be found here: http

  14. ISA-TAB-Nano: a specification for sharing nanomaterial research data in spreadsheet-based format.

    PubMed

    Thomas, Dennis G; Gaheen, Sharon; Harper, Stacey L; Fritts, Martin; Klaessig, Fred; Hahn-Dantona, Elizabeth; Paik, David; Pan, Sue; Stafford, Grace A; Freund, Elaine T; Klemm, Juli D; Baker, Nathan A

    2013-01-14

    The high-throughput genomics communities have been successfully using standardized spreadsheet-based formats to capture and share data within labs and among public repositories. The nanomedicine community has yet to adopt similar standards to share the diverse and multi-dimensional types of data (including metadata) pertaining to the description and characterization of nanomaterials. Owing to the lack of standardization in representing and sharing nanomaterial data, most of the data currently shared via publications and data resources are incomplete, poorly integrated, and not suitable for meaningful interpretation and re-use. Specifically, in its current state, the data cannot be effectively utilized for the development of predictive models that will inform the rational design of nanomaterials. We have developed a specification called ISA-TAB-Nano, which comprises four spreadsheet-based file formats for representing and integrating various types of nanomaterial data. Three file formats (Investigation, Study, and Assay files) have been adapted from the established ISA-TAB specification, while the Material file format was developed de novo to more readily describe the complexity of nanomaterials and associated small molecules. In this paper, we discuss the main features of each file format and how to use them for sharing nanomaterial descriptions and assay metadata. The ISA-TAB-Nano file formats provide a general and flexible framework to record and integrate nanomaterial descriptions, assay data (metadata and endpoint measurements) and protocol information. Like ISA-TAB, ISA-TAB-Nano supports the use of ontology terms to promote standardized descriptions and to facilitate search and integration of the data. The ISA-TAB-Nano specification has been submitted as an ASTM work item to obtain community feedback and to provide a nanotechnology data-sharing standard for public development and adoption.
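
    Since ISA-TAB-Nano files are tab-delimited text, they can be read with nothing more than the standard library. The sketch below loads one such table into dictionaries keyed by column header; the file name and column names are hypothetical, and the specification itself defines the authoritative headers.

        # Sketch: read an ISA-TAB-Nano style tab-delimited table.
        # File and column names are hypothetical examples.
        import csv

        def read_isatab_table(path):
            with open(path, newline="", encoding="utf-8") as fh:
                return list(csv.DictReader(fh, delimiter="\t"))

        for row in read_isatab_table("m_nanomaterial.txt"):
            print(row.get("Material Name"), row.get("Material Type"))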

  15. ISA-TAB-Nano: A Specification for Sharing Nanomaterial Research Data in Spreadsheet-based Format

    PubMed Central

    2013-01-01

    Background and motivation The high-throughput genomics communities have been successfully using standardized spreadsheet-based formats to capture and share data within labs and among public repositories. The nanomedicine community has yet to adopt similar standards to share the diverse and multi-dimensional types of data (including metadata) pertaining to the description and characterization of nanomaterials. Owing to the lack of standardization in representing and sharing nanomaterial data, most of the data currently shared via publications and data resources are incomplete, poorly integrated, and not suitable for meaningful interpretation and re-use. Specifically, in its current state, the data cannot be effectively utilized for the development of predictive models that will inform the rational design of nanomaterials. Results We have developed a specification called ISA-TAB-Nano, which comprises four spreadsheet-based file formats for representing and integrating various types of nanomaterial data. Three file formats (Investigation, Study, and Assay files) have been adapted from the established ISA-TAB specification, while the Material file format was developed de novo to more readily describe the complexity of nanomaterials and associated small molecules. In this paper, we discuss the main features of each file format and how to use them for sharing nanomaterial descriptions and assay metadata. Conclusion The ISA-TAB-Nano file formats provide a general and flexible framework to record and integrate nanomaterial descriptions, assay data (metadata and endpoint measurements) and protocol information. Like ISA-TAB, ISA-TAB-Nano supports the use of ontology terms to promote standardized descriptions and to facilitate search and integration of the data. The ISA-TAB-Nano specification has been submitted as an ASTM work item to obtain community feedback and to provide a nanotechnology data-sharing standard for public development and adoption. PMID

  16. Investigating Image Formation with a Camera Obscura: a Study in Initial Primary Science Teacher Education

    NASA Astrophysics Data System (ADS)

    Muñoz-Franco, Granada; Criado, Ana María; García-Carmona, Antonio

    2018-04-01

    This article presents the results of a qualitative study aimed at determining the effectiveness of the camera obscura as a didactic tool to understand image formation (i.e., how it is possible to see objects and how their image is formed on the retina, and what the image formed on the retina is like compared to the object observed) in a context of scientific inquiry. The study involved 104 prospective primary teachers (PPTs) who were being trained in science teaching. To assess the effectiveness of this tool, an open questionnaire was applied before (pre-test) and after (post-test) the educational intervention. The data were analyzed by combining methods of inter- and intra-rater analysis. The results showed that more than half of the PPTs advanced in their ideas towards the desirable level of knowledge in relation to the phenomena studied. The conclusion reached is that the camera obscura, used in a context of scientific inquiry, is a useful tool for PPTs to improve their knowledge about image formation and experience in the first person an authentic scientific inquiry during their teacher training.

  17. Adolescent Girls' STEM Identity Formation and Media Images of STEM Professionals: Considering the Influence of Contextual Cues.

    PubMed

    Steinke, Jocelyn

    2017-01-01

    Popular media have played a crucial role in the construction, representation, reproduction, and transmission of stereotypes of science, technology, engineering, and mathematics (STEM) professionals, yet little is known about how these stereotypes influence STEM identity formation. Media images of STEM professionals may be important sources of information about STEM and may be particularly salient and relevant for girls during adolescence as they actively consider future personal and professional identities. This article describes gender-stereotyped media images of STEM professionals and examines theories to identify variables that explain the potential influence of these images on STEM identity formation. Understanding these variables is important for expanding current conceptual frameworks of science/STEM identity to better determine how and when cues in the broader sociocultural context may affect adolescent girls' STEM identity. This article emphasizes the importance of focusing on STEM identity relevant variables and STEM identity status to explain individual differences in STEM identity formation.

  18. Adolescent Girls’ STEM Identity Formation and Media Images of STEM Professionals: Considering the Influence of Contextual Cues

    PubMed Central

    Steinke, Jocelyn

    2017-01-01

    Popular media have played a crucial role in the construction, representation, reproduction, and transmission of stereotypes of science, technology, engineering, and mathematics (STEM) professionals, yet little is known about how these stereotypes influence STEM identity formation. Media images of STEM professionals may be important sources of information about STEM and may be particularly salient and relevant for girls during adolescence as they actively consider future personal and professional identities. This article describes gender-stereotyped media images of STEM professionals and examines theories to identify variables that explain the potential influence of these images on STEM identity formation. Understanding these variables is important for expanding current conceptual frameworks of science/STEM identity to better determine how and when cues in the broader sociocultural context may affect adolescent girls’ STEM identity. This article emphasizes the importance of focusing on STEM identity relevant variables and STEM identity status to explain individual differences in STEM identity formation. PMID:28603505

  19. A software to digital image processing to be used in the voxel phantom development.

    PubMed

    Vieira, J W; Lima, F R A

    2009-11-15

    Anthropomorphic models used in computational dosimetry, also called phantoms, are based on digital images recorded from scanning of real people by Computed Tomography (CT) or Magnetic Resonance Imaging (MRI). Voxel phantom construction requires computational processing for transformations of image formats, compaction of two-dimensional (2-D) images to form three-dimensional (3-D) matrices, image sampling and quantization, image enhancement, restoration and segmentation, among others. A researcher in computational dosimetry will rarely find all of these capabilities in a single software package, and this difficulty almost always slows the pace of research or forces the use, sometimes inadequate, of alternative tools. The need to integrate the several tasks mentioned above to obtain an image that can be used in an exposure computational model motivated the development of the Digital Image Processing (DIP) software, mainly to solve particular problems in dissertations and theses developed by members of the Grupo de Pesquisa em Dosimetria Numérica (GDN/CNPq). Because of this particular objective, the software uses the Portuguese language in its implementations and interfaces. This paper presents the second version of the DIP, whose main changes are the more formal organization of menus and menu items, and a menu for digital image segmentation. Currently, the DIP contains the menus Fundamentos, Visualizações, Domínio Espacial, Domínio de Frequências, Segmentações and Estudos. Each menu contains items and sub-items with functionalities that usually take an image as input and produce an image or an attribute as output. The DIP reads, edits and writes binary files containing the 3-D matrix corresponding to a stack of axial images from a given geometry, which can be a human body or another volume of interest. It can also read any type of computational image and make conversions. When the task involves only an output image
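
    The binary volume files described above are straightforward to load once the geometry is known. A sketch under assumed dimensions and data type (the real files encode geometry-specific values; this is not the DIP code):

        # Sketch: read a binary stack of axial slices into a 3-D NumPy matrix.
        # SLICES/ROWS/COLS, uint8 and the file name are assumptions for illustration.
        import numpy as np

        SLICES, ROWS, COLS = 220, 256, 256
        volume = np.fromfile("phantom.raw", dtype=np.uint8)
        volume = volume.reshape(SLICES, ROWS, COLS)   # slice-major layout assumed
        print(volume.shape, volume.max())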

  20. The Biological Observation Matrix (BIOM) format or: how I learned to stop worrying and love the ome-ome.

    PubMed

    McDonald, Daniel; Clemente, Jose C; Kuczynski, Justin; Rideout, Jai Ram; Stombaugh, Jesse; Wendel, Doug; Wilke, Andreas; Huse, Susan; Hufnagle, John; Meyer, Folker; Knight, Rob; Caporaso, J Gregory

    2012-07-12

    We present the Biological Observation Matrix (BIOM, pronounced "biome") format: a JSON-based file format for representing arbitrary observation-by-sample contingency tables with associated sample and observation metadata. As the number of categories of comparative omics data types (collectively, the "ome-ome") grows rapidly, a general format to represent and archive this data will facilitate the interoperability of existing bioinformatics tools and future meta-analyses. The BIOM file format is supported by an independent open-source software project (the biom-format project), which initially contains Python objects that support the use and manipulation of BIOM data in Python programs, and is intended to be an open development effort where developers can submit implementations of these objects in other programming languages. The BIOM file format and the biom-format project are steps toward reducing the "bioinformatics bottleneck" that is currently being experienced in diverse areas of biological sciences, and will help us move toward the next phase of comparative omics where basic science is translated into clinical and environmental applications. The BIOM file format is currently recognized as an Earth Microbiome Project Standard, and as a Candidate Standard by the Genomic Standards Consortium.
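
    Because BIOM 1.0 is JSON-based, a table can be produced with the standard library alone. The sketch below writes a minimal sparse table; the field names follow the BIOM 1.0 specification as best recalled here, and biom-format.org remains the authoritative schema.

        # Sketch: a minimal BIOM-style sparse table written as plain JSON.
        import json

        table = {
            "id": "example",
            "format": "Biological Observation Matrix 1.0.0",
            "type": "OTU table",
            "matrix_type": "sparse",
            "matrix_element_type": "int",
            "shape": [2, 3],                  # observations x samples
            "rows": [{"id": "OTU_1", "metadata": None},
                     {"id": "OTU_2", "metadata": None}],
            "columns": [{"id": "S1", "metadata": None},
                        {"id": "S2", "metadata": None},
                        {"id": "S3", "metadata": None}],
            "data": [[0, 2, 5], [1, 0, 3]],   # [row, column, value] triples
        }
        with open("table.biom", "w") as fh:
            json.dump(table, fh)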

  1. 47 CFR 1.913 - Application and notification forms; electronic and manual filing.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... notifications whenever possible. The files, other than the ASCII table of contents, should be in Adobe Acrobat... possible. The attachment should be uploaded via ULS in Adobe Acrobat Portable Document Format (PDF... the table of contents, should be in Adobe Acrobat Portable Document Format (PDF) whenever possible...

  2. [A solution for display and processing of DICOM images in web PACS].

    PubMed

    Xue, Wei-jing; Lu, Wen; Wang, Hai-yang; Meng, Jian

    2009-03-01

    The technique of Java applets is used to support DICOM images in an ordinary Web browser and thereby extend the processing of medical images. We first analyze the format of a DICOM file and design a class that acquires the pixel data, then design two Applet classes: one processes the DICOM image, and the other displays the DICOM image processed by the first. Both are embedded in the view page and communicate through the AppletContext object. The method designed in this paper lets users display and process DICOM images directly in an ordinary Web browser, which gives a Web PACS not only the advantages of the B/S model but also those of the C/S model. Java Applet is the key to expanding the Web browser's function in a Web PACS, which provides a guideline for the sharing of medical images.
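
    The paper implements the first step, parsing the DICOM file and acquiring its pixels, in Java applets; as a language-neutral illustration of that same step, here is a sketch using the Python package pydicom (file name illustrative):

        # Sketch: parse a DICOM file and acquire its pixel matrix with pydicom.
        import pydicom

        ds = pydicom.dcmread("image.dcm")   # file name illustrative
        pixels = ds.pixel_array             # NumPy array of stored pixel values
        print(ds.Rows, ds.Columns, pixels.dtype)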

  3. A malware detection scheme based on mining format information.

    PubMed

    Bai, Jinrong; Wang, Junfeng; Zou, Guozhong

    2014-01-01

    Malware has become one of the most serious threats to computer information systems, and current malware detection technology still has very significant limitations. In this paper, we propose a malware detection approach based on mining the format information of PE (portable executable) files. Based on an in-depth analysis of the static format information of PE files, we extracted 197 features and applied feature selection methods to reduce the dimensionality of the features while achieving acceptably high performance. When the selected features were trained using classification algorithms, the results of our experiments indicate that the accuracy of the top classification algorithm is 99.1% and the value of the AUC is 0.998. We designed three experiments to evaluate the performance of our detection scheme and its ability to detect unknown and new malware. Although the experimental results for identifying new malware are not perfect, our method is still able to identify 97.6% of new malware with a 1.3% false positive rate.
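
    A sketch of the kind of static PE-format features such a scheme can mine, using the Python package pefile. The abstract does not list the 197 features, so these are generic examples rather than the authors' feature set:

        # Sketch: extract a few static PE-format features with 'pefile'.
        import pefile

        pe = pefile.PE("sample.exe")        # file name illustrative
        features = {
            "num_sections": pe.FILE_HEADER.NumberOfSections,
            "timestamp": pe.FILE_HEADER.TimeDateStamp,
            "entry_point": pe.OPTIONAL_HEADER.AddressOfEntryPoint,
            "section_entropies": [s.get_entropy() for s in pe.sections],
        }
        print(features)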

  4. A Malware Detection Scheme Based on Mining Format Information

    PubMed Central

    Bai, Jinrong; Wang, Junfeng; Zou, Guozhong

    2014-01-01

    Malware has become one of the most serious threats to computer information systems, and current malware detection technology still has very significant limitations. In this paper, we propose a malware detection approach based on mining the format information of PE (portable executable) files. Based on an in-depth analysis of the static format information of PE files, we extracted 197 features and applied feature selection methods to reduce the dimensionality of the features while achieving acceptably high performance. When the selected features were trained using classification algorithms, the results of our experiments indicate that the accuracy of the top classification algorithm is 99.1% and the value of the AUC is 0.998. We designed three experiments to evaluate the performance of our detection scheme and its ability to detect unknown and new malware. Although the experimental results for identifying new malware are not perfect, our method is still able to identify 97.6% of new malware with a 1.3% false positive rate. PMID:24991639

  5. Endodontic complications of root canal therapy performed by dental students with stainless-steel K-files and nickel-titanium hand files.

    PubMed

    Pettiette, M T; Metzger, Z; Phillips, C; Trope, M

    1999-04-01

    Straightening of curved canals is one of the most common procedural errors in endodontic instrumentation. This problem is commonly encountered when dental students perform molar endodontics. The purpose of this study was to compare the effect of the type of instrument used by these students on the extent of straightening and on the incidence of other endodontic procedural errors. Nickel-titanium 0.02 taper hand files were compared with traditional stainless-steel 0.02 taper K-files. Sixty molar teeth, comprising maxillary and mandibular first and second molars, were treated by senior dental students. Instrumentation was with either nickel-titanium hand files or stainless-steel K-files. Preoperative and postoperative radiographs of each tooth were taken using an XCP precision instrument with a customized bite block to ensure accurate reproduction of radiographic angulation. The radiographs were scanned and the images stored as TIFF files. By superimposing tracings of the preoperative over the postoperative radiographs, the degree of deviation of the apical third of the root canal filling from the original canal was measured. The presence of other errors, such as strip perforation and instrument breakage, was established by examining the radiographs. In curved canals instrumented with stainless-steel K-files, the average deviation of the apical third of the canals was 14.44 degrees (+/- 10.33 degrees). When nickel-titanium hand files were used, the deviation was significantly reduced, to an average of 4.39 degrees (+/- 4.53 degrees). The incidence of other procedural errors was also significantly reduced by the use of nickel-titanium hand files.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Temple, Brian Allen; Armstrong, Jerawan Chudoung

    This document is a mid-year report on a deliverable for the PYTHON Radiography Analysis Tool (PyRAT) for project LANL12-RS-107J in FY15. The deliverable is number 2 in the work package and is titled “Add the ability to read in more types of image file formats in PyRAT”. Currently PyRAT can only read in uncompressed TIFF files. Expanding the file formats that PyRAT can read makes it easier to use in more situations. The added file formats include JPEG (jpeg/jpg), PNG and formatted ASCII files.
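
    A sketch of one way to cover the listed formats in Python (Pillow for the raster formats, NumPy for formatted ASCII grids); this is an illustration, not PyRAT's actual reader:

        # Sketch: load TIFF/JPEG/PNG via Pillow, formatted ASCII via NumPy.
        import numpy as np
        from PIL import Image

        def load_image(path):
            if path.lower().endswith((".tif", ".tiff", ".jpg", ".jpeg", ".png")):
                return np.asarray(Image.open(path))
            return np.loadtxt(path)          # whitespace-separated ASCII grid assumed

        arr = load_image("radiograph.png")   # file name illustrative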

  7. Characterization of E. coli biofilm formation on baby spinach leaf surfaces using hyperspectral fluorescence imaging

    NASA Astrophysics Data System (ADS)

    Cho, Hyunjeong; Baek, Insuck; Oh, Mirae; Kim, Sungyoun; Lee, Hoonsoo; Kim, Moon S.

    2017-05-01

    Bacterial biofilm formed by pathogens on fresh produce surfaces is a food safety concern because the complex extracellular matrix in the biofilm structure reduces the efficacy of washing and sanitizing processes such as chemical or irradiation treatments. Therefore, a rapid and nondestructive method to identify pathogenic biofilm on produce surfaces is needed to ensure safe consumption of fresh, raw produce. This research aimed to evaluate the feasibility of hyperspectral fluorescence imaging for detecting Escherichia coli (ATCC 25922) biofilms on baby spinach leaf surfaces. Samples of baby spinach leaves were immersed in and inoculated with five different levels (from 2.6x10^4 to 2.6x10^8 CFU/mL) of E. coli and stored at 4°C for 24 h and 48 h to induce biofilm formation. Following the two treatment days, individual leaves were gently washed to remove excess liquid inoculum from the leaf surfaces and imaged with a hyperspectral fluorescence imaging system equipped with UV-A (365 nm) and violet (405 nm) excitation sources to evaluate a spectral-image-based method for biofilm detection. The imaging results with the UV-A excitation showed that leaves even at early stages of biofilm formation could be differentiated from the control leaf surfaces. This preliminary investigation demonstrated the potential of fluorescence imaging techniques for detection of biofilms on leafy green surfaces.

  8. Influence of image compression on the interpretation of spectral-domain optical coherence tomography in exudative age-related macular degeneration

    PubMed Central

    Kim, J H; Kang, S W; Kim, J-r; Chang, Y S

    2014-01-01

    Purpose To evaluate the effect of image compression of spectral-domain optical coherence tomography (OCT) images in the examination of eyes with exudative age-related macular degeneration (AMD). Methods Thirty eyes from 30 patients who were diagnosed with exudative AMD were included in this retrospective observational case series. Horizontal OCT scans centered at the center of the fovea were conducted using spectral-domain OCT. The images were exported to Tagged Image File Format (TIFF) and to 100, 75, 50, 25 and 10% quality Joint Photographic Experts Group (JPEG) format. OCT images were taken before and after intravitreal ranibizumab injections, and after relapse. The prevalence of subretinal and intraretinal fluids was determined. Differences in choroidal thickness between the TIFF and JPEG images were compared with the intra-observer variability. Results The prevalence of subretinal and intraretinal fluids was comparable regardless of the degree of compression. However, the chorio-scleral interface was not clearly identified in many images with a high degree of compression. In images with 25 and 10% JPEG quality, the difference in choroidal thickness between the TIFF images and the respective JPEG images was significantly greater than the intra-observer variability of the TIFF images (P=0.029 and P=0.024, respectively). Conclusions In OCT images of eyes with AMD, 50% JPEG quality would be an optimal degree of compression for efficient data storage and transfer without sacrificing image quality. PMID:24788012
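
    The study's comparison is easy to reproduce in outline: re-encode a reference image at the same JPEG quality settings and measure file size and pixel error. A sketch with Pillow and NumPy (file name illustrative; this is not the study's analysis pipeline):

        # Sketch: file size vs. pixel error at several JPEG quality settings.
        import io
        import numpy as np
        from PIL import Image

        ref = Image.open("oct_scan.tif").convert("L")    # file name illustrative
        for quality in (100, 75, 50, 25, 10):
            buf = io.BytesIO()
            ref.save(buf, format="JPEG", quality=quality)
            size = buf.tell()
            buf.seek(0)
            comp = np.asarray(Image.open(buf).convert("L"), dtype=float)
            err = np.abs(np.asarray(ref, dtype=float) - comp).mean()
            print(f"quality {quality:3d}: {size:7d} bytes, mean abs error {err:.2f}")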

  9. The influence of stimulus format on drawing—a functional imaging study of decision making in portrait drawing

    PubMed Central

    Miall, R.C.; Nam, Se-Ho; Tchalenko, J.

    2014-01-01

    To copy a natural visual image as a line drawing, visual identification and extraction of features in the image must be guided by top-down decisions, and is usually influenced by prior knowledge. In parallel with other behavioral studies testing the relationship between eye and hand movements when drawing, we report here a functional brain imaging study in which we compared drawing of faces and abstract objects: the former can be strongly guided by prior knowledge, the latter less so. To manipulate the difficulty in extracting features to be drawn, each original image was presented in four formats including high contrast line drawings and silhouettes, and as high and low contrast photographic images. We confirmed the detailed eye–hand interaction measures reported in our other behavioral studies by using in-scanner eye-tracking and recording of pen movements with a touch screen. We also show that the brain activation pattern reflects the changes in presentation formats. In particular, by identifying the ventral and lateral occipital areas that were more highly activated during drawing of faces than abstract objects, we found a systematic increase in differential activation for the face-drawing condition, as the presentation format made the decisions more challenging. This study therefore supports theoretical models of how prior knowledge may influence perception in untrained participants, and lead to experience-driven perceptual modulation by trained artists. PMID:25128710

  10. Adobe acrobat: an alternative electronic teaching file construction methodology independent of HTML restrictions.

    PubMed

    Katzman, G L

    2001-03-01

    The goal of the project was to create a method by which an in-house digital teaching file could be constructed that was simple, inexpensive, independent of hypertext markup language (HTML) restrictions, and appears identical on multiple platforms. To accomplish this, Microsoft PowerPoint and Adobe Acrobat were used in succession to assemble digital teaching files in the Acrobat portable document file format. They were then verified to appear identically on computers running Windows, Macintosh Operating Systems (OS), and the Silicon Graphics Unix-based OS as either a free-standing file using Acrobat Reader software or from within a browser window using the Acrobat browser plug-in. This latter display method yields a file viewed through a browser window, yet remains independent of underlying HTML restrictions, which may confer an advantage over simple HTML teaching file construction. Thus, a hybrid of HTML-distributed Adobe Acrobat generated WWW documents may be a viable alternative for digital teaching file construction and distribution.

  11. Image-based characterization of thrombus formation in time-lapse DIC microscopy

    PubMed Central

    Brieu, Nicolas; Navab, Nassir; Serbanovic-Canic, Jovana; Ouwehand, Willem H.; Stemple, Derek L.; Cvejic, Ana; Groher, Martin

    2012-01-01

    The characterization of thrombus formation in time-lapse DIC microscopy is of increased interest for identifying genes which account for atherothrombosis and coronary artery diseases (CADs). In particular, we are interested in large-scale studies on zebrafish, which result in large amounts of data and require automatic processing. In this work, we present an image-based solution for the automated extraction of parameters quantifying the temporal development of thrombotic plugs. Our system is based on the joint segmentation of thrombotic and aortic regions over time. This task is made difficult by the low contrast and the highly dynamic conditions observed in in vivo DIC microscopic scenes. Our key idea is to perform this segmentation by distinguishing the different motion patterns in image time series rather than by solving standard image segmentation tasks in each image frame. Thus, we are able to compensate for the poor imaging conditions. We model motion patterns by energies based on the idea of dynamic textures, and regularize the model by two prior energies on the shape of the aortic region and on the topological relationship between the thrombus and the aorta. We demonstrate the performance of our segmentation algorithm by qualitative and quantitative experiments on synthetic examples as well as on real in vivo microscopic sequences. PMID:22482997

  12. Community Oncology and Prevention Trials | Division of Cancer Prevention

    Cancer.gov

  13. Reversible watermarking for knowledge digest embedding and reliability control in medical images.

    PubMed

    Coatrieux, Gouenou; Le Guillou, Clara; Cauvin, Jean-Michel; Roux, Christian

    2009-03-01

    To improve medical image sharing in applications such as e-learning or remote diagnosis aid, we propose to make the image more usable by watermarking it with a digest of its associated knowledge. The aim of such a knowledge digest (KD) is for it to be used for retrieving similar images with either the same findings or differential diagnoses. It summarizes the symbolic descriptions of the image, the symbolic descriptions of the findings semiology, and the similarity rules that contribute to balancing the importance of previous descriptors when comparing images. Instead of modifying the image file format by adding some extra header information, watermarking is used to embed the KD in the pixel gray-level values of the corresponding images. When shared through open networks, watermarking also helps to convey reliability proofs (integrity and authenticity) of an image and its KD. The interest of these new image functionalities is illustrated in the updating of the distributed users' databases within the framework of an e-learning application demonstrator of endoscopic semiology.
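
    To illustrate carrying a knowledge digest inside the pixel values rather than in the file header, here is a plain least-significant-bit substitution sketch. This simple scheme is not the reversible watermarking the paper proposes; it only shows the embedding idea:

        # Sketch: embed a digest byte string in pixel LSBs (NOT reversible,
        # unlike the paper's scheme; for illustration only).
        import numpy as np

        def embed_lsb(image, payload):
            bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
            flat = image.flatten()                    # copy of the carrier
            assert bits.size <= flat.size, "payload too large for carrier"
            flat[:bits.size] = (flat[:bits.size] & np.uint8(254)) | bits
            return flat.reshape(image.shape)

        carrier = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
        marked = embed_lsb(carrier, b'{"finding": "example"}')  # digest illustrative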

  14. Improving transmission efficiency of large sequence alignment/map (SAM) files.

    PubMed

    Sakib, Muhammad Nazmus; Tang, Jijun; Zheng, W Jim; Huang, Chin-Tser

    2011-01-01

    Research in bioinformatics primarily involves collection and analysis of a large volume of genomic data. Naturally, it demands efficient storage and transfer of this huge amount of data. In recent years, some research has been done to find efficient compression algorithms to reduce the size of various sequencing data. One way to improve the transmission time of large files is to apply a maximum lossless compression on them. In this paper, we present SAMZIP, a specialized encoding scheme, for sequence alignment data in SAM (Sequence Alignment/Map) format, which improves the compression ratio of existing compression tools available. In order to achieve this, we exploit the prior knowledge of the file format and specifications. Our experimental results show that our encoding scheme improves compression ratio, thereby reducing overall transmission time significantly.
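
    The general idea of exploiting the SAM layout can be sketched by compressing each tab-separated column as its own stream, so that similar values sit together. This illustrates format-aware encoding in general, not SAMZIP's actual algorithm:

        # Sketch: column-wise compression of a SAM file's alignment records.
        import gzip

        def compress_sam_columns(path, n_cols=11):
            columns = [[] for _ in range(n_cols)]
            with open(path) as fh:
                for line in fh:
                    if line.startswith("@"):          # header lines skipped here
                        continue
                    fields = line.rstrip("\n").split("\t")[:n_cols]
                    for i, field in enumerate(fields):
                        columns[i].append(field)
            return [gzip.compress("\n".join(col).encode()) for col in columns]

        blobs = compress_sam_columns("reads.sam")     # file name illustrative
        print([len(b) for b in blobs])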

  15. An Adaptable Seismic Data Format

    NASA Astrophysics Data System (ADS)

    Krischer, Lion; Smith, James; Lei, Wenjie; Lefebvre, Matthieu; Ruan, Youyi; de Andrade, Elliott Sales; Podhorszki, Norbert; Bozdağ, Ebru; Tromp, Jeroen

    2016-11-01

    We present ASDF, the Adaptable Seismic Data Format, a modern and practical data format for all branches of seismology and beyond. The growing volume of freely available data coupled with ever expanding computational power opens avenues to tackle larger and more complex problems. Current bottlenecks include inefficient resource usage and insufficient data organization. Properly scaling a problem requires the resolution of both these challenges, and existing data formats are no longer up to the task. ASDF stores any number of synthetic, processed or unaltered waveforms in a single file. A key improvement compared to existing formats is the inclusion of comprehensive meta information, such as event or station information, in the same file. Additionally, it is also usable for any non-waveform data, for example, cross-correlations, adjoint sources or receiver functions. Last but not least, full provenance information can be stored alongside each item of data, thereby enhancing reproducibility and accountability. Any data set in our proposed format is self-describing and can be readily exchanged with others, facilitating collaboration. The utilization of the HDF5 container format grants efficient and parallel I/O operations, integrated compression algorithms and check sums to guard against data corruption. To not reinvent the wheel and to build upon past developments, we use existing standards like QuakeML, StationXML, W3C PROV and HDF5 wherever feasible. Usability and tool support are crucial for any new format to gain acceptance. We developed mature C/Fortran and Python based APIs coupling ASDF to the widely used SPECFEM3D_GLOBE and ObsPy toolkits.
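
    The core idea, waveforms and their meta information in one HDF5 container, can be sketched with the Python package h5py. The group and attribute names below are illustrative, not the ASDF layout; the pyasdf library implements the real format:

        # Sketch: store a waveform plus its metadata in a single HDF5 file.
        import numpy as np
        import h5py

        with h5py.File("data.h5", "w") as fh:
            ds = fh.create_dataset("Waveforms/NET.STA/trace",
                                   data=np.random.randn(1000),
                                   compression="gzip")   # built-in compression
            ds.attrs["sampling_rate_hz"] = 100.0
            ds.attrs["starttime"] = "2016-01-01T00:00:00"
            fh.create_group("Provenance")                # provenance lives alongside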

  16. Accessing files in an Internet: The Jade file system

    NASA Technical Reports Server (NTRS)

    Peterson, Larry L.; Rao, Herman C.

    1991-01-01

    Jade is a new distributed file system that provides a uniform way to name and access files in an internet environment. It makes two important contributions. First, Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Jade is designed under the restriction that the underlying file system may not be modified. Second, rather than providing a global name space, Jade permits each user to define a private name space. These private name spaces support two novel features: they allow multiple file systems to be mounted under one directory, and they allow one logical name space to mount other logical name spaces. A prototype of the Jade File System was implemented on Sun Workstations running Unix. It consists of interfaces to the Unix file system, the Sun Network File System, the Andrew File System, and FTP. This paper motivates Jade's design, highlights several aspects of its implementation, and illustrates applications that can take advantage of its features.
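
    A sketch of the private name-space idea: a per-user mount table mapping logical path prefixes to underlying file systems, with longest-prefix matching so that name spaces can nest. Backends and paths here are hypothetical; Jade itself integrates Unix files, NFS, AFS, and FTP:

        # Sketch: a Jade-like private name space as a prefix mount table.
        class NameSpace:
            def __init__(self):
                self.mounts = {}                     # logical prefix -> backend

            def mount(self, prefix, backend):
                self.mounts[prefix] = backend

            def resolve(self, path):
                # longest matching prefix wins, so mounts can nest
                best = max((p for p in self.mounts if path.startswith(p)),
                           key=len, default=None)
                if best is None:
                    raise FileNotFoundError(path)
                return self.mounts[best], path[len(best):]

        ns = NameSpace()
        ns.mount("/home", "local-unix")
        ns.mount("/home/ftp", "ftp://archive.example.org")
        print(ns.resolve("/home/ftp/pub/readme"))    # ('ftp://...', '/pub/readme')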

  17. Accessing files in an internet - The Jade file system

    NASA Technical Reports Server (NTRS)

    Rao, Herman C.; Peterson, Larry L.

    1993-01-01

    Jade is a new distributed file system that provides a uniform way to name and access files in an internet environment. It makes two important contributions. First, Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Jade is designed under the restriction that the underlying file system may not be modified. Second, rather than providing a global name space, Jade permits each user to define a private name space. These private name spaces support two novel features: they allow multiple file systems to be mounted under one directory, and they allow one logical name space to mount other logical name spaces. A prototype of the Jade File System was implemented on Sun Workstations running Unix. It consists of interfaces to the Unix file system, the Sun Network File System, the Andrew File System, and FTP. This paper motivates Jade's design, highlights several aspects of its implementation, and illustrates applications that can take advantage of its features.

  18. One Quiz File, Several Modes of Delivery

    ERIC Educational Resources Information Center

    Herbert, John C.

    2012-01-01

    This report offers online course designers, particularly those keen on using Moodle CMSs, a means of diversifying accessibility to their educational materials via multiple modes of delivery that do not require the creation of numerous files and formats for just one activity. The author has made contributions to the development of an open source…

  19. Proposal for a Standard Format for Neurophysiology Data Recording and Exchange.

    PubMed

    Stead, Matt; Halford, Jonathan J

    2016-10-01

    The lack of interoperability between information networks is a significant source of cost in health care. Standardized data formats decrease health care cost, improve quality of care, and facilitate biomedical research. There is no common standard digital format for storing clinical neurophysiologic data. This review proposes a new standard file format for neurophysiology data (the bulk of which is video-electroencephalographic data), entitled the Multiscale Electrophysiology Format, version 3 (MEF3), which is designed to address many of the shortcomings of existing formats. MEF3 provides functionality that addresses many of the limitations of current formats. The proposed improvements include (1) hierarchical file structure with improved organization; (2) greater extensibility for big data applications requiring a large number of channels, signal types, and parallel processing; (3) efficient and flexible lossy or lossless data compression; (4) industry standard multilayered data encryption and time obfuscation that permits sharing of human data without the need for deidentification procedures; (5) resistance to file corruption; (6) facilitation of online and offline review and analysis; and (7) provision of full open source documentation. At this time, there is no other neurophysiology format that supports all of these features. MEF3 is currently gaining industry and academic community support. The authors propose the use of the MEF3 as a standard format for neurophysiology recording and data exchange. Collaboration between industry, professional organizations, research communities, and independent standards organizations is needed to move the project forward.

  20. BOREAS Level 3-b AVHRR-LAC Imagery: Scaled At-sensor Radiance in LGSOWG Format

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nickeson, Jaime; Newcomer, Jeffrey A.; Cihlar, Josef

    2000-01-01

    The BOREAS Staff Science Satellite Data Acquisition Program focused on providing the research teams with the remotely sensed satellite data products they needed to compare and spatially extend point results. Data acquired from the AVHRR instrument on the NOAA-9, -11, -12, and -14 satellites were processed and archived for the BOREAS region by the MRSC and BORIS. The data were acquired by CCRS and were provided for use by BOREAS researchers. A few winter acquisitions are available, but the archive contains primarily growing season imagery. These gridded, at-sensor radiance image data cover the period of 30-Jan-1994 to 18-Sep-1996. Geographically, the data cover the entire 1,000-km x 1,000-km BOREAS region. The data are stored in binary image format files.

  1. An EXCEL macro for importing log ASCII standard (LAS) files into EXCEL worksheets

    NASA Astrophysics Data System (ADS)

    Özkaya, Sait Ismail

    1996-02-01

    An EXCEL 5.0 macro is presented for converting a LAS text file into an EXCEL worksheet. Although EXCEL has commands for importing text files and parsing text lines, LAS files must be decoded line-by-line because three different delimiters are used to separate fields of differing length. The macro is intended to eliminate manual decoding of LAS version 2.0 files. LAS is a floppy-disk format for storage and transfer of log data as text files, proposed by the Canadian Well Logging Society. The present EXCEL macro decodes the different sections of a LAS file, separates the fields, and places them into different columns of an EXCEL worksheet. To import a LAS file into EXCEL without errors, the file must not contain any unrecognized symbols, and the data section must be the last section. The program does not check for the presence of mandatory sections or fields as required by LAS rules. Once a file is incorporated into EXCEL, mandatory sections and fields may be inspected visually.
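
    The decoding job the macro performs can be sketched compactly: LAS 2.0 sections begin with "~", and the rows of the ~A (data) section are split into fields. The sketch below handles only whitespace-delimited data and a hypothetical file name; real LAS files need the fuller delimiter handling the article describes:

        # Sketch: parse the ~A data section of a LAS 2.0 file into rows.
        def read_las_data(path):
            rows, in_data = [], False
            with open(path) as fh:
                for line in fh:
                    if line.startswith("~"):                 # section marker
                        in_data = line.upper().startswith("~A")
                        continue
                    if in_data and line.strip():
                        rows.append([float(v) for v in line.split()])
            return rows

        curves = read_las_data("well.las")                   # file name illustrative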

  2. Uvf - Unified Volume Format: A General System for Efficient Handling of Large Volumetric Datasets.

    PubMed

    Krüger, Jens; Potter, Kristin; Macleod, Rob S; Johnson, Christopher

    2008-01-01

    With the continual increase in computing power, volumetric datasets with sizes ranging from only a few megabytes to petascale are generated thousands of times per day. Such data may come from an ordinary source such as simple everyday medical imaging procedures, while larger datasets may be generated from cluster-based scientific simulations or measurements of large scale experiments. In computer science an incredible amount of work worldwide is put into the efficient visualization of these datasets. As researchers in the field of scientific visualization, we often have to face the task of handling very large data from various sources. This data usually comes in many different data formats. In medical imaging, the DICOM standard is well established, however, most research labs use their own data formats to store and process data. To simplify the task of reading the many different formats used with all of the different visualization programs, we present a system for the efficient handling of many types of large scientific datasets (see Figure 1 for just a few examples). While primarily targeted at structured volumetric data, UVF can store just about any type of structured and unstructured data. The system is composed of a file format specification with a reference implementation of a reader. It is not only a common, easy to implement format but also allows for efficient rendering of most datasets without the need to convert the data in memory.

  3. Single mimivirus particles intercepted and imaged with an X-ray laser (CXIDB ID 1)

    DOE Data Explorer

    Seibert, M. Marvin; Ekeberg, Tomas; Maia, Filipe R.N.C.

    2011-02-02

    These are the files used to reconstruct the images in the paper "Single Mimivirus particles intercepted and imaged with an X-ray laser". Besides the diffracted intensities, the Hawk configuration files used for the reconstructions are also provided. The files from CXIDB ID 1 are the pattern and configuration files for the pattern showed in Figure 2a in the paper.

  4. Single mimivirus particles intercepted and imaged with an X-ray laser (CXIDB ID 2)

    DOE Data Explorer

    Seibert, M. Marvin; Ekeberg, Tomas

    2011-02-02

    These are the files used to reconstruct the images in the paper "Single Mimivirus particles intercepted and imaged with an X-ray laser". Besides the diffracted intensities, the Hawk configuration files used for the reconstructions are also provided. The files from CXIDB ID 2 are the pattern and configuration files for the pattern showed in Figure 2b in the paper.

  5. Dependency Tree Annotation Software

    DTIC Science & Technology

    2015-11-01

    formats, and it provides numerous options for customizing how dependency trees are displayed. Built entirely in Java, it can run on a wide range of... tree can be saved as an image, .mxe (an mxGraph editing file), a .conll file, and several other file formats. DTE uses the open source Java version

  6. Automated extraction of chemical structure information from digital raster images

    PubMed Central

    Park, Jungkap; Rosania, Gus R; Shedden, Kerby A; Nguyen, Mandee; Lyu, Naesung; Saitou, Kazuhiro

    2009-01-01

    Background To search for chemical structures in research articles, diagrams or text representing molecules need to be translated to a standard chemical file format compatible with cheminformatic search engines. Nevertheless, chemical information contained in research articles is often referenced as analog diagrams of chemical structures embedded in digital raster images. To automate analog-to-digital conversion of chemical structure diagrams in scientific research articles, several software systems have been developed. But their algorithmic performance and utility in cheminformatic research have not been investigated. Results This paper aims to provide critical reviews for these systems and also report our recent development of ChemReader, a fully automated tool for extracting chemical structure diagrams in research articles and converting them into standard, searchable chemical file formats. Basic algorithms for recognizing lines and letters representing bonds and atoms in chemical structure diagrams can be independently run in sequence from a graphical user interface (and the algorithm parameters can be readily changed) to facilitate additional development specifically tailored to a chemical database annotation scheme. Compared with existing software programs such as OSRA, Kekule, and CLiDE, our results indicate that ChemReader outperforms other software systems on several sets of sample images from diverse sources in terms of the rate of correct outputs and the accuracy on extracting molecular substructure patterns. Conclusion The availability of ChemReader as a cheminformatic tool for extracting chemical structure information from digital raster images allows research and development groups to enrich their chemical structure databases by annotating the entries with published research articles. Based on its stable performance and high accuracy, ChemReader may be sufficiently accurate for annotating the chemical database with links to scientific research

  7. Tabular data and graphical images in support of the U.S. Geological Survey National Oil and Gas Assessment--San Juan Basin Province (5022): Chapter 7 in Total petroleum systems and geologic assessment of undiscovered oil and gas resources in the San Juan Basin Province, exclusive of Paleozoic rocks, New Mexico and Colorado

    USGS Publications Warehouse

    Klett, T.R.; Le, P.A.

    2013-01-01

    This chapter describes data used in support of the process being applied by the U.S. Geological Survey (USGS) National Oil and Gas Assessment (NOGA) project. Digital tabular data used in this report, and archival data that permit the user to perform further analyses, are available elsewhere on this CD-ROM. Computers and software may import the data without the reader having to transcribe it from the Portable Document Format (.pdf) files of the text. Because of the number and variety of platforms and software available, graphical images are provided as .pdf files and tabular data are provided in raw form as tab-delimited text files (.tab files).

  8. The Biological Observation Matrix (BIOM) format or: how I learned to stop worrying and love the ome-ome

    PubMed Central

    2012-01-01

    Background We present the Biological Observation Matrix (BIOM, pronounced “biome”) format: a JSON-based file format for representing arbitrary observation-by-sample contingency tables with associated sample and observation metadata. As the number of categories of comparative omics data types (collectively, the “ome-ome”) grows rapidly, a general format to represent and archive this data will facilitate the interoperability of existing bioinformatics tools and future meta-analyses. Findings The BIOM file format is supported by an independent open-source software project (the biom-format project), which initially contains Python objects that support the use and manipulation of BIOM data in Python programs, and is intended to be an open development effort where developers can submit implementations of these objects in other programming languages. Conclusions The BIOM file format and the biom-format project are steps toward reducing the “bioinformatics bottleneck” that is currently being experienced in diverse areas of biological sciences, and will help us move toward the next phase of comparative omics where basic science is translated into clinical and environmental applications. The BIOM file format is currently recognized as an Earth Microbiome Project Standard, and as a Candidate Standard by the Genomic Standards Consortium. PMID:23587224

  9. Star formation properties of Hickson Compact Groups based on deep Hα imaging

    NASA Astrophysics Data System (ADS)

    Eigenthaler, Paul; Ploeckinger, Sylvia; Verdugo, Miguel; Ziegler, Bodo

    2015-08-01

    We present deep Hα imaging of seven Hickson Compact Groups (HCGs) using the 4.1-m Southern Astrophysics Research (SOAR) Telescope. The high spatial resolution of the observations allows us to study both the integrated star formation properties of the main galaxies as well as the 2D distribution of star-forming knots in the faint tidal arms that form during interactions between the individual galaxies. We derive star formation rates and stellar masses for group members and discuss their position relative to the main sequence of star-forming galaxies. Despite the existence of tidal features within the galaxy groups, we do not find any indication for enhanced star formation in the selected sample of HCGs. We study azimuthally averaged Hα profiles of the galaxy discs and compare them with the g' and r' surface brightness profiles. We do not find any truncated galaxy discs but reveal that more massive galaxies show a higher light concentration in Hα than less massive ones. We also see that galaxies that show a high light concentration in r', show a systematic higher light concentration in Hα. Tidal dwarf galaxy (TDG) candidates have been previously detected in R-band images for two groups in our sample but we find that most of them are likely background objects as they do not show any emission in Hα. We present a new TDG candidate at the tip of the tidal tail in HCG 91.

  10. Tele-transmission of stereoscopic images of the optic nerve head in glaucoma via Internet.

    PubMed

    Bergua, Antonio; Mardin, Christian Y; Horn, Folkert K

    2009-06-01

    The objective was to describe an inexpensive system to visualize stereoscopic photographs of the optic nerve head on computer displays and to transmit such images via the Internet for collaborative research or remote clinical diagnosis in glaucoma. Stereoscopic images of glaucoma patients were digitized and stored in a file format (joint photographic stereoimage [jps]) containing all three-dimensional information for both eyes on an Internet Web site (www.trizax.com). The size of the jps files was between 0.4 and 1.4 MB (corresponding to a diagonal stereo image size between 900 and 1400 pixels), suitable for Internet protocols. A conventional personal computer equipped with wireless stereoscopic LCD shutter glasses and a CRT monitor with a high refresh rate (120 Hz) can be used to obtain flicker-free stereo visualization of true-color images with high resolution. Modern thin-film transistor-LCD displays in combination with inexpensive red-cyan goggles achieve stereoscopic visualization with the same resolution but reduced color quality and contrast. The primary aim of our study, to transmit stereoscopic images via the Internet, was met. Additionally, we found that with both stereoscopic visualization techniques, cup depth, neuroretinal rim shape, and the slope of the inner wall of the optic nerve head can be qualitatively better perceived and interpreted than with monoscopic images. This study demonstrates high-quality and low-cost Internet transmission of stereoscopic images of the optic nerve head from glaucoma patients. The technique allows exchange of stereoscopic images and can be applied to tele-diagnostics and glaucoma research.
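
    A jps file is essentially an ordinary JPEG holding the two half-images side by side, so a pair can be assembled with Pillow. The right-image-first (cross-eyed) ordering below is the usual jps convention, stated here as an assumption, and the file names are illustrative:

        # Sketch: build a side-by-side stereo pair and save it as .jps (JPEG).
        from PIL import Image

        left = Image.open("disc_left.jpg")       # file names illustrative
        right = Image.open("disc_right.jpg")
        pair = Image.new("RGB", (left.width + right.width, left.height))
        pair.paste(right, (0, 0))                # right half first (cross-eyed)
        pair.paste(left, (right.width, 0))
        pair.save("optic_nerve.jps", format="JPEG", quality=90)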

  11. CAPRICE positively regulates stomatal formation in the Arabidopsis hypocotyl

    PubMed Central

    2008-01-01

    In the Arabidopsis hypocotyl, stomata develop only from a set of epidermal cell files. Previous studies have identified several negative regulators of stomata formation. Such regulators also trigger non-hair cell fate in the root. Here, it is shown that TOO MANY MOUTHS (TMM) positively regulates CAPRICE (CPC) expression in differentiating stomaless-forming cell files, and that the CPC protein might move to the nucleus of neighbouring stoma-forming cells, where it promotes stomata formation in a redundant manner with TRIPTYCHON (TRY). Unexpectedly, the CPC protein was also localized in the nucleus and peripheral cytoplasm of hypocotyl fully differentiated epidermal cells, suggesting that CPC plays an additional role to those related to stomata formation. These results identify CPC and TRY as positive regulators of stomata formation in the embryonic stem, which increases the similarity between the genetic control of root hair and stoma cell fate determination. PMID:19513241

  12. Real-time magnetic resonance imaging-guided radiofrequency atrial ablation and visualization of lesion formation at 3 Tesla.

    PubMed

    Vergara, Gaston R; Vijayakumar, Sathya; Kholmovski, Eugene G; Blauer, Joshua J E; Guttman, Mike A; Gloschat, Christopher; Payne, Gene; Vij, Kamal; Akoum, Nazem W; Daccarett, Marcos; McGann, Christopher J; Macleod, Rob S; Marrouche, Nassir F

    2011-02-01

    Magnetic resonance imaging (MRI) allows visualization of the location and extent of radiofrequency (RF) ablation lesions and myocardial scar formation, and real-time (RT) assessment of lesion formation. In this study, we report a novel 3-Tesla RT-MRI-based porcine RF ablation model and visualization of lesion formation in the atrium during RF energy delivery. The purpose of this study was to develop a 3-Tesla RT MRI-based catheter ablation and lesion visualization system. RF energy was delivered to six pigs under RT MRI guidance. A novel MRI-compatible mapping and ablation catheter was used. Under RT MRI, this catheter was safely guided and positioned within either the left or right atrium. Unipolar and bipolar electrograms were recorded. The catheter tip-tissue interface was visualized with a T1-weighted gradient echo sequence. RF energy was then delivered in a power-controlled fashion. Myocardial changes and lesion formation were visualized with a T2-weighted (T2W) half-Fourier acquisition single-shot turbo spin echo (HASTE) sequence during ablation. RT visualization of lesion formation was achieved in 30% of the ablations performed. In the other cases, either the lesion was formed outside the imaged region (25%) or the lesion was not created (45%), presumably due to poor tissue-catheter tip contact. The presence of lesions was confirmed by late gadolinium enhancement MRI and macroscopic tissue examination. MRI-compatible catheters can be navigated and RF energy safely delivered under 3-Tesla RT MRI guidance. Recording electrograms during RT imaging is also feasible. RT visualization of a lesion as it forms during RF energy delivery is possible and was demonstrated using T2W HASTE imaging. Copyright © 2011 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.

  13. TOASTing Your Images With Montage

    NASA Astrophysics Data System (ADS)

    Berriman, G. Bruce; Good, John

    2017-01-01

    The Montage image mosaic engine is a scalable toolkit for creating science-grade mosaics of FITS files, according to the user's specifications of coordinates, projection, sampling, and image rotation. It is written in ANSI-C and runs on all common *nix-based platforms. The code is freely available and is released with a BSD 3-clause license. Version 5 is a major upgrade to Montage, and provides support for creating images that can be consumed by the World Wide Telescope (WWT). Montage treats the TOAST sky tessellation scheme, used by the WWT, as a spherical projection like those in the WCStools library. Thus images in any projection can be converted to the TOAST projection by Montage's reprojection services. These reprojections can be performed at scale on high-performance platforms and on desktops. WWT consumes PNG or JPEG files, organized according to WWT's tiling and naming scheme. Montage therefore provides a set of dedicated modules to create the required files from FITS images that contain the TOAST projection. There are two other major features of Version 5. It supports processing of HEALPix files to any projection in the WCStools library. And it can be built as a library that can be called from other languages, primarily Python. Web site: http://montage.ipac.caltech.edu. GitHub download page: https://github.com/Caltech-IPAC/Montage. ASCL record: ascl:1010.036. DOI: dx.doi.org/10.5281/zenodo.49418. Montage is funded by the National Science Foundation under Grant Number ACI-1440620.

  14. Collecting and Animating Online Satellite Images.

    ERIC Educational Resources Information Center

    Irons, Ralph

    1995-01-01

    Describes how to generate automated classroom resources from the Internet. Topics covered include viewing animated satellite weather images using file transfer protocol (FTP); sources of images on the Internet; shareware available for viewing images; software for automating image retrieval; procedures for animating satellite images; and storing…

  15. Methods and apparatus for capture and storage of semantic information with sub-files in a parallel computing system

    DOEpatents

    Faibish, Sorin; Bent, John M; Tzelnic, Percy; Grider, Gary; Torres, Aaron

    2015-02-03

    Techniques are provided for storing files in a parallel computing system using sub-files with semantically meaningful boundaries. A method is provided for storing at least one file generated by a distributed application in a parallel computing system. The file comprises one or more of a complete file and a plurality of sub-files. The method comprises the steps of obtaining a user specification of semantic information related to the file; providing the semantic information as a data structure description to a data formatting library write function; and storing the semantic information related to the file with one or more of the sub-files in one or more storage nodes of the parallel computing system. The semantic information provides a description of data in the file. The sub-files can be replicated based on semantically meaningful boundaries.

  16. Imaging initial formation processes of nanobubbles at the graphite-water interface through high-speed atomic force microscopy

    NASA Astrophysics Data System (ADS)

    Liao, Hsien-Shun; Yang, Chih-Wen; Ko, Hsien-Chen; Hwu, En-Te; Hwang, Ing-Shouh

    2018-03-01

    The initial formation process of nanobubbles at solid-water interfaces remains unclear because of the limitations of current imaging techniques. To directly observe the formation process, an astigmatic high-speed atomic force microscope (AFM) was modified to enable imaging in a liquid environment. A customized cantilever holder effectively enhanced the resonance of small cantilevers in water. The proposed high-speed imaging technique revealed highly dynamic quasi-two-dimensional (2D) gas structures (20-30 nm thick) in the initial stage at the graphite-water interface. The 2D structures were laterally mobile, mainly within certain areas, but occasionally a gas structure migrated over a longer distance and settled in a new area. The 2D structures were often confined by substrate step edges in one lateral dimension. Eventually, all quasi-2D gas structures were transformed into cap-shaped nanobubbles with greater height and reduced lateral dimensions. These nanobubbles were immobile and remained stable under continuous AFM imaging. This study demonstrated that nanobubbles could be stably imaged at a scan rate of 100 lines per second (640 μm/s).

  17. Centralized Accounting and Electronic Filing Provides Efficient Receivables Collection.

    ERIC Educational Resources Information Center

    School Business Affairs, 1983

    1983-01-01

    An electronic filing system makes financial control manageable at Bowling Green State University, Ohio. The system enables quick access to computer-stored consolidated account data and microfilm images of charges, statements, and other billing documents. (MLF)

  18. Informatics in Radiology (infoRAD): personal computer security: part 2. Software Configuration and file protection.

    PubMed

    Caruso, Ronald D

    2004-01-01

    Proper configuration of software security settings and proper file management are necessary and important elements of safe computer use. Unfortunately, the configuration of software security options is often not user friendly. Safe file management requires the use of several utilities, most of which are already installed on the computer or available as freeware. Among these file operations are setting passwords, defragmentation, deletion, wiping, removal of personal information, and encryption. For example, Digital Imaging and Communications in Medicine medical images need to be anonymized, or "scrubbed," to remove patient identifying information in the header section prior to their use in a public educational or research environment. The choices made with respect to computer security may affect the convenience of the computing process. Ultimately, the degree of inconvenience accepted will depend on the sensitivity of the files and communications to be protected and the tolerance of the user. Copyright RSNA, 2004
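
    The DICOM "scrubbing" step mentioned above can be illustrated with a short Python sketch using the pydicom library (an assumption; the article does not name a specific tool). Real anonymization must handle many more elements than shown; this only demonstrates the idea of blanking identifying header fields before public use.

        import pydicom

        def scrub(in_path, out_path):
            ds = pydicom.dcmread(in_path)
            # Blank the most obvious patient-identifying header elements.
            for keyword in ("PatientName", "PatientID", "PatientBirthDate"):
                if keyword in ds:
                    setattr(ds, keyword, "")
            # Private vendor tags can also carry identifying information.
            ds.remove_private_tags()
            ds.save_as(out_path)

        scrub("study.dcm", "study_anon.dcm")  # hypothetical file names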

  19. A GUI visualization system for airborne lidar image data to reconstruct 3D city model

    NASA Astrophysics Data System (ADS)

    Kawata, Yoshiyuki; Koizumi, Kohei

    2015-10-01

    A visualization toolbox system with graphical user interfaces (GUIs) was developed for the analysis of LiDAR point cloud data, as a compound object-oriented widget application in IDL (Interactive Data Language). The main features of the system include file input and output, conversion of ASCII-formatted LiDAR point cloud data to LiDAR image data whose pixel values correspond to the altitudes measured by LiDAR, visualization of 2D/3D images at various processing steps, and automatic reconstruction of a 3D city model. The performance and advantages of the GUI visualization system for LiDAR data are demonstrated.
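
    The point-cloud-to-image conversion the abstract describes can be sketched briefly. The following Python stand-in (the authors' system is in IDL) grids an ASCII point cloud, assumed to hold one x, y, z triple per line, into an image whose pixel values are altitude; the column layout and 1 m cell size are assumptions.

        import numpy as np

        def points_to_altitude_image(path, cell=1.0):
            pts = np.loadtxt(path)                      # columns: x, y, z
            x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
            cols = ((x - x.min()) / cell).astype(int)
            rows = ((y - y.min()) / cell).astype(int)
            img = np.full((rows.max() + 1, cols.max() + 1), np.nan)
            # Keep the highest return per cell, so rooftops dominate ground.
            for r, c, h in zip(rows, cols, z):
                if np.isnan(img[r, c]) or h > img[r, c]:
                    img[r, c] = h
            return img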

  20. A cloud-based multimodality case file for mobile devices.

    PubMed

    Balkman, Jason D; Loehfelm, Thomas W

    2014-01-01

    Recent improvements in Web and mobile technology, along with the widespread use of handheld devices in radiology education, provide unique opportunities for creating scalable, universally accessible, portable image-rich radiology case files. A cloud database and a Web-based application for radiologic images were developed to create a mobile case file with reasonable usability, download performance, and image quality for teaching purposes. A total of 75 radiology cases related to breast, thoracic, gastrointestinal, musculoskeletal, and neuroimaging subspecialties were included in the database. Breast imaging cases are the focus of this article, as they best demonstrate handheld display capabilities across a wide variety of modalities. This case subset also illustrates methods for adapting radiologic content to cloud platforms and mobile devices. Readers will gain practical knowledge about storage and retrieval of cloud-based imaging data, an awareness of techniques used to adapt scrollable and high-resolution imaging content for the Web, and an appreciation for optimizing images for handheld devices. The evaluation of this software demonstrates the feasibility of adapting images from most imaging modalities to mobile devices, even in cases of full-field digital mammograms, where high resolution is required to represent subtle pathologic features. The cloud platform allows cases to be added and modified in real time by using only a standard Web browser with no application-specific software. Challenges remain in developing efficient ways to generate, modify, and upload radiologic and supplementary teaching content to this cloud-based platform. Online supplemental material is available for this article. ©RSNA, 2014.