Sample records for large image files

  1. Electronic Photography at the NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Holm, Jack; Judge, Nancianne

    1995-01-01

    An electronic photography facility has been established in the Imaging & Photographic Technology Section, Visual Imaging Branch, at the NASA Langley Research Center (LaRC). The purpose of this facility is to provide the LaRC community with access to digital imaging technology. In particular, capabilities have been established for image scanning, direct image capture, optimized image processing for storage, image enhancement, and optimized device dependent image processing for output. Unique approaches include: evaluation and extraction of the entire film information content through scanning; standardization of image file tone reproduction characteristics for optimal bit utilization and viewing; education of digital imaging personnel on the effects of sampling and quantization to minimize image processing related information loss; investigation of the use of small kernel optimal filters for image restoration; characterization of a large array of output devices and development of image processing protocols for standardized output. Currently, the laboratory has a large collection of digital image files which contain essentially all the information present on the original films. These files are stored at 8-bits per color, but the initial image processing was done at higher bit depths and/or resolutions so that the full 8-bits are used in the stored files. The tone reproduction of these files has also been optimized so the available levels are distributed according to visual perceptibility. Look up tables are available which modify these files for standardized output on various devices, although color reproduction has been allowed to float to some extent to allow for full utilization of output device gamut.
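    A minimal sketch (not the Section's actual software) of the lookup-table step described above, assuming NumPy and a hypothetical gamma-style 256-entry LUT; real device LUTs would be measured per output device.

      import numpy as np

      def apply_lut(image_8bit, lut):
          """Map each 8-bit pixel through a 256-entry lookup table."""
          assert lut.shape == (256,)
          return lut[image_8bit]

      # Hypothetical LUT redistributing levels roughly by visual perceptibility.
      gamma = 2.2
      lut = (255.0 * (np.arange(256) / 255.0) ** (1.0 / gamma)).astype(np.uint8)
      image = np.random.randint(0, 256, size=(512, 512), dtype=np.uint8)  # stand-in image
      standardized = apply_lut(image, lut)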

  2. Interactive visualization tools for the structural biologist.

    PubMed

    Porebski, Benjamin T; Ho, Bosco K; Buckle, Ashley M

    2013-10-01

    In structural biology, management of a large number of Protein Data Bank (PDB) files and raw X-ray diffraction images often presents a major organizational problem. Existing software packages that manipulate these file types were not designed for these kinds of file-management tasks. This is typically encountered when browsing through a folder of hundreds of X-ray images, with the aim of rapidly inspecting the diffraction quality of a data set. To solve this problem, a useful functionality of the Macintosh operating system (OSX) has been exploited that allows custom visualization plugins to be attached to certain file types. Software plugins have been developed for diffraction images and PDB files, which in many scenarios can save considerable time and effort. The direct visualization of diffraction images and PDB structures in the file browser can be used to identify key files of interest simply by scrolling through a list of files.

  3. Open source tools for management and archiving of digital microscopy data to allow integration with patient pathology and treatment information.

    PubMed

    Khushi, Matloob; Edwards, Georgina; de Marcos, Diego Alonso; Carpenter, Jane E; Graham, J Dinny; Clarke, Christine L

    2013-02-12

    Virtual microscopy includes digitisation of histology slides and the use of computer technologies for complex investigation of diseases such as cancer. However, automated image analysis, or website publishing of such digital images, is hampered by their large file sizes. We have developed two Java based open source tools: Snapshot Creator and NDPI-Splitter. Snapshot Creator converts a portion of a large digital slide into a desired quality JPEG image. The image is linked to the patient's clinical and treatment information in a customised open source cancer data management software (Caisis) in use at the Australian Breast Cancer Tissue Bank (ABCTB) and then published on the ABCTB website (http://www.abctb.org.au) using Deep Zoom open source technology. Using the ABCTB online search engine, digital images can be searched by defining various criteria such as cancer type, or biomarkers expressed. NDPI-Splitter splits a large image file into smaller sections of TIFF images so that they can be easily analysed by image analysis software such as Metamorph or Matlab. NDPI-Splitter also has the capacity to filter out empty images. Snapshot Creator and NDPI-Splitter are novel open source Java tools. They convert digital slides into files of smaller size for further processing. In conjunction with other open source tools such as Deep Zoom and Caisis, this suite of tools is used for the management and archiving of digital microscopy images, enabling digitised images to be explored and zoomed online. Our online image repository also has the capacity to be used as a teaching resource. These tools also enable large files to be sectioned for image analysis. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5330903258483934.
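    A hedged sketch of the tiling idea behind NDPI-Splitter (not the tool's actual Java code), assuming Pillow and NumPy and a source image Pillow can open; the tile size and "empty" threshold are illustrative.

      import numpy as np
      from PIL import Image

      Image.MAX_IMAGE_PIXELS = None  # permit very large images

      def split_image(path, tile_size=2048, empty_fraction=0.99):
          """Cut an image into TIFF tiles, skipping tiles that are nearly all white."""
          img = Image.open(path).convert("RGB")
          width, height = img.size
          saved = 0
          for top in range(0, height, tile_size):
              for left in range(0, width, tile_size):
                  box = (left, top, min(left + tile_size, width), min(top + tile_size, height))
                  tile = img.crop(box)
                  pixels = np.asarray(tile)
                  if (pixels > 230).all(axis=-1).mean() > empty_fraction:
                      continue  # filter out essentially empty (background) tiles
                  tile.save(f"tile_{top}_{left}.tif")
                  saved += 1
          return saved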

  4. Informatics in radiology (infoRAD): free DICOM image viewing and processing software for the Macintosh computer: what's available and what it can do for you.

    PubMed

    Escott, Edward J; Rubinstein, David

    2004-01-01

    It is often necessary for radiologists to use digital images in presentations and conferences. Most imaging modalities produce images in the Digital Imaging and Communications in Medicine (DICOM) format. The image files tend to be large and thus cannot be directly imported into most presentation software, such as Microsoft PowerPoint; the large files also consume storage space. There are many free programs that allow viewing and processing of these files on a personal computer, including conversion to more common file formats such as the Joint Photographic Experts Group (JPEG) format. Free DICOM image viewing and processing software for computers running on the Microsoft Windows operating system has already been evaluated. However, many people use the Macintosh (Apple Computer) platform, and a number of programs are available for these users. The World Wide Web was searched for free DICOM image viewing or processing software that was designed for the Macintosh platform or is written in Java and is therefore platform independent. The features of these programs and their usability were evaluated. There are many free programs for the Macintosh platform that enable viewing and processing of DICOM images. (c) RSNA, 2004.
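    For illustration, a minimal DICOM-to-JPEG conversion in Python with pydicom and Pillow (not one of the surveyed Macintosh programs); real viewers would apply proper window/level and modality LUTs rather than this simple min-max rescale.

      import numpy as np
      import pydicom
      from PIL import Image

      def dicom_to_jpeg(dicom_path, jpeg_path):
          ds = pydicom.dcmread(dicom_path)
          pixels = ds.pixel_array.astype(np.float64)
          pixels -= pixels.min()                      # shift to zero
          if pixels.max() > 0:
              pixels *= 255.0 / pixels.max()          # scale into 8 bits
          Image.fromarray(pixels.astype(np.uint8)).save(jpeg_path, quality=90)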

  5. A Study of NetCDF as an Approach for High Performance Medical Image Storage

    NASA Astrophysics Data System (ADS)

    Magnus, Marcone; Coelho Prado, Thiago; von Wangenhein, Aldo; de Macedo, Douglas D. J.; Dantas, M. A. R.

    2012-02-01

    Telemedicine systems continue to spread, and systems and PACS based on DICOM images have become common. This growth creates a need for new storage systems that are more efficient and have lower computational costs. With this in mind, this article presents a study of the NetCDF data format as the basic platform for storage of DICOM images. The case study compares an ordinary database, HDF5, and NetCDF for storing the medical images. Empirical results, using a real set of images, indicate that the time to retrieve large-scale images from NetCDF has a higher latency compared to the other two methods. In addition, the latency is proportional to the file size, which represents a drawback for a telemedicine system that is characterized by a large number of large image files.
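    An illustrative sketch, assuming the netCDF4 Python package, of the kind of layout such a study compares: a stack of DICOM pixel arrays written to one NetCDF variable with internal compression. Variable and dimension names are placeholders.

      import numpy as np
      from netCDF4 import Dataset

      def store_images(images, path):
          """Write a list of equally sized 16-bit image arrays into a NetCDF file."""
          rows, cols = images[0].shape
          with Dataset(path, "w") as nc:
              nc.createDimension("image", len(images))
              nc.createDimension("row", rows)
              nc.createDimension("col", cols)
              var = nc.createVariable("pixels", "u2", ("image", "row", "col"), zlib=True)
              var[:] = np.stack(images)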

  6. Open source tools for management and archiving of digital microscopy data to allow integration with patient pathology and treatment information

    PubMed Central

    2013-01-01

    Background Virtual microscopy includes digitisation of histology slides and the use of computer technologies for complex investigation of diseases such as cancer. However, automated image analysis, or website publishing of such digital images, is hampered by their large file sizes. Results We have developed two Java based open source tools: Snapshot Creator and NDPI-Splitter. Snapshot Creator converts a portion of a large digital slide into a desired quality JPEG image. The image is linked to the patient’s clinical and treatment information in a customised open source cancer data management software (Caisis) in use at the Australian Breast Cancer Tissue Bank (ABCTB) and then published on the ABCTB website (http://www.abctb.org.au) using Deep Zoom open source technology. Using the ABCTB online search engine, digital images can be searched by defining various criteria such as cancer type, or biomarkers expressed. NDPI-Splitter splits a large image file into smaller sections of TIFF images so that they can be easily analysed by image analysis software such as Metamorph or Matlab. NDPI-Splitter also has the capacity to filter out empty images. Conclusions Snapshot Creator and NDPI-Splitter are novel open source Java tools. They convert digital slides into files of smaller size for further processing. In conjunction with other open source tools such as Deep Zoom and Caisis, this suite of tools is used for the management and archiving of digital microscopy images, enabling digitised images to be explored and zoomed online. Our online image repository also has the capacity to be used as a teaching resource. These tools also enable large files to be sectioned for image analysis. Virtual Slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5330903258483934 PMID:23402499

  7. imzML: Imaging Mass Spectrometry Markup Language: A common data format for mass spectrometry imaging.

    PubMed

    Römpp, Andreas; Schramm, Thorsten; Hester, Alfons; Klinkert, Ivo; Both, Jean-Pierre; Heeren, Ron M A; Stöckli, Markus; Spengler, Bernhard

    2011-01-01

    Imaging mass spectrometry is the method of scanning a sample of interest and generating an "image" of the intensity distribution of a specific analyte. The data sets consist of a large number of mass spectra which are usually acquired with identical settings. Existing data formats are not sufficient to describe an MS imaging experiment completely. The data format imzML was developed to allow the flexible and efficient exchange of MS imaging data between different instruments and data analysis software. For this purpose, the MS imaging data is divided into two separate files. The mass spectral data is stored in a binary file to ensure efficient storage. All metadata (e.g., instrumental parameters, sample details) are stored in an XML file which is based on the standard data format mzML developed by HUPO-PSI. The original mzML controlled vocabulary was extended to include specific parameters of imaging mass spectrometry (such as x/y position and spatial resolution). The two files (XML and binary) are connected by offset values in the XML file and are unambiguously linked by a universally unique identifier. The resulting datasets are comparable in size to the raw data, and the separate metadata file allows flexible handling of large datasets. Several imaging MS software tools already support imzML. This allows choosing from a (growing) number of processing tools. One is no longer limited to proprietary software, but is able to use the processing software which is best suited for a specific question or application. On the other hand, measurements from different instruments can be compared within one software application using identical settings for data processing. All necessary information for evaluating and implementing imzML can be found at http://www.imzML.org .
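    A hedged example of how the XML/binary split is typically consumed from Python, assuming the third-party pyimzML package (not part of the imzML specification itself): the parser reads the XML metadata and resolves the stored offsets into the accompanying binary (.ibd) file.

      from pyimzml.ImzMLParser import ImzMLParser

      parser = ImzMLParser("example.imzML")            # parses the XML metadata file
      for i, (x, y, z) in enumerate(parser.coordinates):
          mzs, intensities = parser.getspectrum(i)     # offsets resolve into the binary file
          # ... process the spectrum recorded at pixel (x, y) ...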

  8. OMERO and Bio-Formats 5: flexible access to large bioimaging datasets at scale

    NASA Astrophysics Data System (ADS)

    Moore, Josh; Linkert, Melissa; Blackburn, Colin; Carroll, Mark; Ferguson, Richard K.; Flynn, Helen; Gillen, Kenneth; Leigh, Roger; Li, Simon; Lindner, Dominik; Moore, William J.; Patterson, Andrew J.; Pindelski, Blazej; Ramalingam, Balaji; Rozbicki, Emil; Tarkowska, Aleksandra; Walczysko, Petr; Allan, Chris; Burel, Jean-Marie; Swedlow, Jason

    2015-03-01

    The Open Microscopy Environment (OME) has built and released, under open source licenses, Bio-Formats, a Java-based tool for converting proprietary file formats, and OMERO, an enterprise data management platform. In this report, we describe new versions of Bio-Formats and OMERO that are specifically designed to support large, multi-gigabyte or terabyte scale datasets that are routinely collected across most domains of biological and biomedical research. Bio-Formats reads image data directly from native proprietary formats, bypassing the need for conversion into a standard format. It implements the concept of a file set, a container that defines the contents of multi-dimensional data comprised of many files. OMERO uses Bio-Formats to read files natively and provides a flexible access mechanism that supports several different storage and access strategies. These new capabilities of OMERO and Bio-Formats make them especially useful in imaging applications such as digital pathology, high content screening, and light sheet microscopy that routinely create large datasets that must be managed and analyzed.

  9. MINC 2.0: A Flexible Format for Multi-Modal Images.

    PubMed

    Vincent, Robert D; Neelin, Peter; Khalili-Mahani, Najmeh; Janke, Andrew L; Fonov, Vladimir S; Robbins, Steven M; Baghdadi, Leila; Lerch, Jason; Sled, John G; Adalat, Reza; MacDonald, David; Zijdenbos, Alex P; Collins, D Louis; Evans, Alan C

    2016-01-01

    It is often useful that an imaging data format can afford rich metadata, be flexible, scale to very large file sizes, support multi-modal data, and have strong inbuilt mechanisms for data provenance. Beginning in 1992, MINC was developed as a system for flexible, self-documenting representation of neuroscientific imaging data with arbitrary orientation and dimensionality. The MINC system incorporates three broad components: a file format specification, a programming library, and a growing set of tools. In the early 2000's the MINC developers created MINC 2.0, which added support for 64-bit file sizes, internal compression, and a number of other modern features. Because of its extensible design, it has been easy to incorporate details of provenance in the header metadata, including an explicit processing history, unique identifiers, and vendor-specific scanner settings. This makes MINC ideal for use in large scale imaging studies and databases. It also makes it easy to adapt to new scanning sequences and modalities.
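    As a small hedged illustration, MINC 2.0 volumes can be read from Python through the nibabel package (one of several wrappers around the format); the file name is hypothetical.

      import nibabel as nib

      img = nib.load("subject01.mnc")      # nibabel dispatches to its MINC reader
      data = img.get_fdata()               # voxel data as a floating-point array
      print(img.shape, img.affine)         # dimensionality and world orientation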

  10. Cartography of irregularly shaped satellites

    NASA Technical Reports Server (NTRS)

    Batson, R. M.; Edwards, Kathleen

    1987-01-01

    Irregularly shaped satellites, such as Phobos and Amalthea, do not lend themselves to mapping by conventional methods because mathematical projections of their surfaces fail to convey an accurate visual impression of the landforms, and because large and irregular scale changes make their features difficult to measure on maps. A digital mapping technique has therefore been developed by which maps are compiled from digital topographic and spacecraft image files. The digital file is geometrically transformed as desired for human viewing, either on video screens or on hard copy. Digital files of this kind consist of digital images superimposed on another digital file representing the three-dimensional form of a body.

  11. BOREAS RSS-14 Level-2 GOES-7 Shortwave and Longwave Radiation Images

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Gu, Jiujing; Smith, Eric A.

    2000-01-01

    The BOREAS RSS-14 team collected and processed several GOES-7 and GOES-8 image data sets that covered the BOREAS study region. This data set contains images of shortwave and longwave radiation at the surface and top of the atmosphere derived from collected GOES-7 data. The data cover the time period of 05-Feb-1994 to 20-Sep-1994. The images missing from the temporal series were zero-filled to create a consistent sequence of files. The data are stored in binary image format files. Due to the large size of the images, the level-1a GOES-7 data are not contained on the BOREAS CD-ROM set. An inventory listing file is supplied on the CD-ROM to inform users of what data were collected. The level-1a GOES-7 image data are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). See sections 15 and 16 for more information. The data files are available on a CD-ROM (see document number 20010000884).

  12. BOREAS RSS-14 Level-1a GOES-8 Visible, IR and Water Vapor Images

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Newcomer, Jeffrey A.; Faysash, David; Cooper, Harry J.; Smith, Eric A.

    2000-01-01

    The BOREAS RSS-14 team collected and processed several GOES-7 and GOES-8 image data sets that covered the BOREAS study region. The level-1a GOES-8 images were created by BORIS personnel from the level-1 images delivered by FSU personnel. The data cover 14-Jul-1995 to 21-Sep-1995 and 12-Feb-1996 to 03-Oct-1996. The data start out as three bands with 8-bit pixel values and end up as five bands with 10-bit pixel values. No major problems with the data have been identified. The differences between the level-1 and level-1a GOES-8 data are the formatting and packaging of the data. The images missing from the temporal series of level-1 GOES-8 images were zero-filled by BORIS staff to create files consistent in size and format. In addition, BORIS staff packaged all the images of a given type from a given day into a single file, removed the header information from the individual level-1 files, and placed it into a single descriptive ASCII header file. The data are contained in binary image format files. Due to the large size of the images, the level-1a GOES-8 data are not contained on the BOREAS CD-ROM set. An inventory listing file is supplied on the CD-ROM to inform users of what data were collected. The level-1a GOES-8 image data are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). See sections 15 and 16 for more information. The data files are available on a CD-ROM (see document number 20010000884).

  13. Software for browsing sectioned images of a dog body and generating a 3D model.

    PubMed

    Park, Jin Seo; Jung, Yong Wook

    2016-01-01

    The goals of this study were (1) to provide accessible and instructive browsing software for sectioned images and a portable document format (PDF) file that includes three-dimensional (3D) models of an entire dog body and (2) to develop techniques for segmentation and 3D modeling that would enable an investigator to perform these tasks without the aid of a computer engineer. To achieve these goals, relatively important or large structures in the sectioned images were outlined to generate segmented images. The sectioned and segmented images were then packaged into browsing software. In this software, structures in the sectioned images are shown in detail and in real color. After 3D models were made from the segmented images, the 3D models were exported into a PDF file. In this format, the 3D models could be manipulated freely. The browsing software and PDF file are available for study by students, for lectures by teachers, and for training of clinicians. These files will be helpful for the anatomical study and clinical training of veterinary students and clinicians. Furthermore, these techniques will be useful for researchers who study two-dimensional images and 3D models. © 2015 Wiley Periodicals, Inc.

  14. Why can't I manage my digital images like MP3s? The evolution and intent of multimedia metadata

    NASA Astrophysics Data System (ADS)

    Goodrum, Abby; Howison, James

    2005-01-01

    This paper considers the deceptively simple question: Why can't digital images be managed in the simple and effective manner in which digital music files are managed? We make the case that the answer lies in different treatments of metadata in different domains with different goals. A central difference between the two formats stems from the fact that digital music metadata lookup services are collaborative and automate the movement from a digital file to the appropriate metadata, while image metadata services do not. To understand why this difference exists, we examine the divergent evolution of metadata standards for digital music and digital images and observe that the processes differ in interesting ways according to their intent. Specifically, music metadata was developed primarily for personal file management and community resource sharing, while the focus of image metadata has largely been on information retrieval. We argue that lessons from MP3 metadata can assist individuals facing their growing personal image management challenges. Our focus therefore is not on metadata for cultural heritage institutions or the publishing industry; it is limited to the personal libraries growing on our hard drives. This bottom-up approach to file management, combined with p2p distribution, radically altered the music landscape. Might such an approach have a similar impact on image publishing? This paper outlines plans for improving the personal management of digital images (doing image metadata and file management the MP3 way) and considers the likelihood of success.
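    To make the contrast concrete, a hedged sketch that reads the two kinds of metadata side by side, assuming the mutagen package for MP3 ID3 tags and Pillow for embedded EXIF data; file names are placeholders.

      from mutagen.easyid3 import EasyID3
      from PIL import Image
      from PIL.ExifTags import TAGS

      song = EasyID3("track.mp3")
      print(song.get("artist"), song.get("title"))     # descriptive, community-maintained tags

      photo = Image.open("photo.jpg")
      exif = {TAGS.get(tag, tag): value for tag, value in photo.getexif().items()}
      print(exif.get("Make"), exif.get("DateTime"))    # technical tags written by the camera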

  15. Why can't I manage my digital images like MP3s? The evolution and intent of multimedia metadata

    NASA Astrophysics Data System (ADS)

    Goodrum, Abby; Howison, James

    2004-12-01

    This paper considers the deceptively simple question: Why can't digital images be managed in the simple and effective manner in which digital music files are managed? We make the case that the answer lies in different treatments of metadata in different domains with different goals. A central difference between the two formats stems from the fact that digital music metadata lookup services are collaborative and automate the movement from a digital file to the appropriate metadata, while image metadata services do not. To understand why this difference exists, we examine the divergent evolution of metadata standards for digital music and digital images and observe that the processes differ in interesting ways according to their intent. Specifically, music metadata was developed primarily for personal file management and community resource sharing, while the focus of image metadata has largely been on information retrieval. We argue that lessons from MP3 metadata can assist individuals facing their growing personal image management challenges. Our focus therefore is not on metadata for cultural heritage institutions or the publishing industry; it is limited to the personal libraries growing on our hard drives. This bottom-up approach to file management, combined with p2p distribution, radically altered the music landscape. Might such an approach have a similar impact on image publishing? This paper outlines plans for improving the personal management of digital images (doing image metadata and file management the MP3 way) and considers the likelihood of success.

  16. High-grade video compression of echocardiographic studies: a multicenter validation study of selected motion pictures expert groups (MPEG)-4 algorithms.

    PubMed

    Barbier, Paolo; Alimento, Marina; Berna, Giovanni; Celeste, Fabrizio; Gentile, Francesco; Mantero, Antonio; Montericcio, Vincenzo; Muratori, Manuela

    2007-05-01

    Large files produced by standard compression algorithms slow down the spread of digital and tele-echocardiography. We validated echocardiographic video high-grade compression with the new Motion Pictures Expert Groups (MPEG)-4 algorithms in a multicenter study. Seven expert cardiologists blindly scored (5-point scale) 165 uncompressed and compressed 2-dimensional and color Doppler video clips, based on combined diagnostic content and image quality (uncompressed files served as references). One digital video and 3 MPEG-4 algorithms (WM9, MV2, and DivX) were used, the latter at 3 compression levels (0%, 35%, and 60%). Compressed file sizes decreased from 12-83 MB to 0.03-2.3 MB (reduction ratios of 1:1051 to 1:26). Mean SD of differences was 0.81 for intraobserver variability (uncompressed and digital video files). Compared with uncompressed files, only the DivX mean score at 35% (P = .04) and 60% (P = .001) compression was significantly reduced. On subcategory analysis, these differences were still significant for gray-scale and fundamental imaging but not for color or second harmonic tissue imaging. Original image quality, session sequence, compression grade, and bitrate were all independent determinants of mean score. Our study supports the use of MPEG-4 algorithms to greatly reduce echocardiographic file sizes, thus facilitating archiving and transmission. Quality evaluation studies should account for the many independent variables that affect image quality grading.

  17. 1995 Joseph E. Whitley, MD, Award. A World Wide Web gateway to the radiologic learning file.

    PubMed

    Channin, D S

    1995-12-01

    Computer networks in general, and the Internet specifically, are changing the way information is manipulated in the world at large and in radiology. The goal of this project was to develop a computer system in which images from the Radiologic Learning File, available previously only via a single-user laser disc, are made available over a generic, high-availability computer network to many potential users simultaneously. Using a networked workstation in our laboratory and freely available distributed hypertext software, we established a World Wide Web (WWW) information server for radiology. Images from the Radiologic Learning File are requested through the WWW client software, digitized from a single laser disc containing the entire teaching file and then transmitted over the network to the client. The text accompanying each image is incorporated into the transmitted document. The Radiologic Learning File is now on-line, and requests to view the cases result in the delivery of the text and images. Image digitization via a frame grabber takes 1/30th of a second. Conversion of the image to a standard computer graphic format takes 45-60 sec. Text and image transmission speed on a local area network varies between 200 and 400 kilobytes (KB) per second depending on the network load. We have made images from a laser disc of the Radiologic Learning File available through an Internet-based hypertext server. The images previously available through a single-user system located in a remote section of our department are now ubiquitously available throughout our department via the department's computer network. We have thus converted a single-user, limited functionality system into a multiuser, widely available resource.

  18. Toward a standard reference database for computer-aided mammography

    NASA Astrophysics Data System (ADS)

    Oliveira, Júlia E. E.; Gueld, Mark O.; de A. Araújo, Arnaldo; Ott, Bastian; Deserno, Thomas M.

    2008-03-01

    Because of the lack of mammography databases with a large amount of codified images and identified characteristics like pathology, type of breast tissue, and abnormality, there is a problem for the development of robust systems for computer-aided diagnosis. Integrated to the Image Retrieval in Medical Applications (IRMA) project, we present an available mammography database developed from the union of: The Mammographic Image Analysis Society Digital Mammogram Database (MIAS), The Digital Database for Screening Mammography (DDSM), the Lawrence Livermore National Laboratory (LLNL), and routine images from the Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen. Using the IRMA code, standardized coding of tissue type, tumor staging, and lesion description was developed according to the American College of Radiology (ACR) tissue codes and the ACR breast imaging reporting and data system (BI-RADS). The import was done automatically using scripts for image download, file format conversion, file name, web page and information file browsing. Disregarding the resolution, this resulted in a total of 10,509 reference images, and 6,767 images are associated with an IRMA contour information feature file. In accordance to the respective license agreements, the database will be made freely available for research purposes, and may be used for image based evaluation campaigns such as the Cross Language Evaluation Forum (CLEF). We have also shown that it can be extended easily with further cases imported from a picture archiving and communication system (PACS).
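    A hypothetical sketch of the import scripting described (download, convert, rename); the URL, naming scheme, and use of Pillow for conversion are placeholders rather than the IRMA pipeline, and the source image must be in a format Pillow can read.

      import urllib.request
      from PIL import Image

      def import_case(url, case_id):
          raw_path = f"{case_id}.download"
          urllib.request.urlretrieve(url, raw_path)    # image download
          out_path = f"{case_id}.png"
          Image.open(raw_path).save(out_path)          # file format conversion and renaming
          return out_path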

  19. The Open Microscopy Environment: open image informatics for the biological sciences

    NASA Astrophysics Data System (ADS)

    Blackburn, Colin; Allan, Chris; Besson, Sébastien; Burel, Jean-Marie; Carroll, Mark; Ferguson, Richard K.; Flynn, Helen; Gault, David; Gillen, Kenneth; Leigh, Roger; Leo, Simone; Li, Simon; Lindner, Dominik; Linkert, Melissa; Moore, Josh; Moore, William J.; Ramalingam, Balaji; Rozbicki, Emil; Rustici, Gabriella; Tarkowska, Aleksandra; Walczysko, Petr; Williams, Eleanor; Swedlow, Jason R.

    2016-07-01

    Despite significant advances in biological imaging and analysis, major informatics challenges remain unsolved: file formats are proprietary, storage and analysis facilities are lacking, as are standards for sharing image data and results. While the open FITS file format is ubiquitous in astronomy, astronomical imaging shares many challenges with biological imaging, including the need to share large image sets using secure, cross-platform APIs, and the need for scalable applications for processing and visualization. The Open Microscopy Environment (OME) is an open-source software framework developed to address these challenges. OME tools include: an open data model for multidimensional imaging (OME Data Model); an open file format (OME-TIFF) and library (Bio-Formats) enabling free access to images (5D+) written in more than 145 formats from many imaging domains, including FITS; and a data management server (OMERO). The Java-based OMERO client-server platform comprises an image metadata store, an image repository, visualization and analysis by remote access, allowing sharing and publishing of image data. OMERO provides a means to manage the data through a multi-platform API. OMERO's model-based architecture has enabled its extension into a range of imaging domains, including light and electron microscopy, high content screening, digital pathology and recently into applications using non-image data from clinical and genomic studies. This is made possible using the Bio-Formats library. The current release includes a single mechanism for accessing image data of all types, regardless of original file format, via Java, C/C++ and Python and a variety of applications and environments (e.g. ImageJ, Matlab and R).
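    As a stand-in for the Java Bio-Formats path described above, a small hedged sketch reads an OME-TIFF and its embedded OME-XML with the Python tifffile package; the file name is hypothetical.

      import tifffile

      with tifffile.TiffFile("example.ome.tif") as tif:
          pixels = tif.asarray()           # image planes as a NumPy array
          ome_xml = tif.ome_metadata       # OME-XML describing dimensions, channels, etc.
      print(pixels.shape)
      print(ome_xml[:200] if ome_xml else "no OME metadata found")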

  20. BSIFT: toward data-independent codebook for large scale image search.

    PubMed

    Zhou, Wengang; Li, Houqiang; Hong, Richang; Lu, Yijuan; Tian, Qi

    2015-03-01

    The Bag-of-Words (BoW) model based on the Scale Invariant Feature Transform (SIFT) has been widely used in large-scale image retrieval applications. Feature quantization by vector quantization plays a crucial role in the BoW model: it generates visual words from the high-dimensional SIFT features so that they fit the inverted file structure used for scalable retrieval. Traditional feature quantization approaches suffer from several issues, such as the necessity of visual codebook training, limited reliability, and update inefficiency. To avoid these problems, this paper proposes a novel feature quantization scheme that efficiently quantizes each SIFT descriptor to a descriptive and discriminative bit-vector, called binary SIFT (BSIFT). Our quantizer is independent of image collections. In addition, by taking the first 32 bits of BSIFT as a code word, the generated BSIFT naturally adapts to the classic inverted file structure for image indexing. Moreover, the quantization error is reduced by feature filtering, code word expansion, and query-sensitive mask shielding. Without any explicit codebook for quantization, our approach can be readily applied to image search in resource-limited scenarios. We evaluate the proposed algorithm for large-scale image search on two public image data sets. Experimental results demonstrate the index efficiency and retrieval accuracy of our approach.
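    A simplified sketch of the idea, not the exact published BSIFT scheme: threshold a 128-dimensional SIFT descriptor against its own median to obtain a bit-vector, then take the first 32 bits as the code word used to key the inverted file.

      import numpy as np

      def binarize_sift(descriptor):
          """Turn a 128-D SIFT descriptor into a 128-bit binary signature."""
          return (descriptor > np.median(descriptor)).astype(np.uint8)

      def code_word(bits):
          """Pack the first 32 bits into an integer key for the inverted file."""
          return int("".join(map(str, bits[:32])), 2)

      descriptor = np.random.rand(128)     # stand-in for a real SIFT descriptor
      print(code_word(binarize_sift(descriptor)))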

  1. A high-speed network for cardiac image review.

    PubMed

    Elion, J L; Petrocelli, R R

    1994-01-01

    A high-speed fiber-based network for the transmission and display of digitized full-motion cardiac images has been developed. Based on Asynchronous Transfer Mode (ATM), the network is scalable, meaning that the same software and hardware are used for a small local area network or for a large multi-institutional network. The system can handle uncompressed digital angiographic images, considered to be at the "high-end" of the bandwidth requirements. Along with the networking, a general-purpose multi-modality review station has been implemented without specialized hardware. This station can store a full injection sequence in "loop RAM" in a 512 x 512 format, then interpolate to 1024 x 1024 while displaying at 30 frames per second. The network and review stations connect to a central file server that uses a virtual file system to make a large high-speed RAID storage disk and associated off-line storage tapes and cartridges all appear as a single large file system to the software. In addition to supporting archival storage and review, the system can also digitize live video using high-speed Direct Memory Access (DMA) from the frame grabber to present uncompressed data to the network. Fully functional prototypes have provided the proof of concept, with full deployment in the institution planned as the next stage.
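    A back-of-the-envelope check of the uncompressed bandwidth implied above, assuming 8-bit monochrome pixels (the abstract does not state the bit depth):

      frame_bytes = 512 * 512 * 1            # one 512 x 512 frame at 1 byte per pixel
      rate_bytes = frame_bytes * 30          # 30 frames per second
      print(rate_bytes / 1e6, "MB/s")        # about 7.9 MB/s
      print(rate_bytes * 8 / 1e6, "Mbit/s")  # about 63 Mbit/s, within an OC-3 ATM link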

  2. A high-speed network for cardiac image review.

    PubMed Central

    Elion, J. L.; Petrocelli, R. R.

    1994-01-01

    A high-speed fiber-based network for the transmission and display of digitized full-motion cardiac images has been developed. Based on Asynchronous Transfer Mode (ATM), the network is scalable, meaning that the same software and hardware are used for a small local area network or for a large multi-institutional network. The system can handle uncompressed digital angiographic images, considered to be at the "high-end" of the bandwidth requirements. Along with the networking, a general-purpose multi-modality review station has been implemented without specialized hardware. This station can store a full injection sequence in "loop RAM" in a 512 x 512 format, then interpolate to 1024 x 1024 while displaying at 30 frames per second. The network and review stations connect to a central file server that uses a virtual file system to make a large high-speed RAID storage disk and associated off-line storage tapes and cartridges all appear as a single large file system to the software. In addition to supporting archival storage and review, the system can also digitize live video using high-speed Direct Memory Access (DMA) from the frame grabber to present uncompressed data to the network. Fully functional prototypes have provided the proof of concept, with full deployment in the institution planned as the next stage. PMID:7949964

  3. The virtual microscopy database-sharing digital microscope images for research and education.

    PubMed

    Lee, Lisa M J; Goldman, Haviva M; Hortsch, Michael

    2018-02-14

    Over the last 20 years, virtual microscopy has become the predominant modus of teaching the structural organization of cells, tissues, and organs, replacing the use of optical microscopes and glass slides in a traditional histology or pathology laboratory setting. Although virtual microscopy image files can easily be duplicated, creating them requires not only quality histological glass slides but also an expensive whole slide microscopic scanner and massive data storage devices. These resources are not available to all educators and researchers, especially at new institutions in developing countries. This leaves many schools without access to virtual microscopy resources. The Virtual Microscopy Database (VMD) is a new resource established to address this problem. It is a virtual image file-sharing website that allows researchers and educators easy access to a large repository of virtual histology and pathology image files. With support from the American Association of Anatomists (Bethesda, MD) and MBF Bioscience Inc. (Williston, VT), registration and use of the VMD are currently free of charge. However, the VMD site is restricted to faculty and staff of research and educational institutions. Virtual Microscopy Database users can upload their own collection of virtual slide files, as well as view and download image files deposited by other VMD clients for their own non-profit educational and research purposes. Anat Sci Educ. © 2018 American Association of Anatomists.

  4. Extracting the Data From the LCM vk4 Formatted Output File

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendelberger, James G.

    These are slides about extracting the data from the LCM vk4 formatted output file. The following is covered: vk4 file produced by Keyence VK Software, custom analysis, no off-the-shelf way to read the file, reading the binary data in a vk4 file, various offsets in decimal lines, finding the height image data, directly in MATLAB, binary output beginning of height image data, color image information, color image binary data, color image decimal and binary data, MATLAB code to read vk4 file (choose a file, read the file, compute offsets, read optical image, laser optical image, read and compute laser intensity image, read height image, timing, display height image, display laser intensity image, display RGB laser optical images, display RGB optical images, display beginning data and save images to workspace, gamma correction subroutine), reading intensity from the vk4 file, linear in the low range, linear in the high range, gamma correction for vk4 files, computing the gamma intensity correction, observations.
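    A hedged Python analogue of the MATLAB workflow outlined in the slides; the offset-table position and field layout below are placeholders rather than the documented VK4 structure.

      import struct
      import numpy as np

      def read_height_image(path, offset_table_pos=12):
          with open(path, "rb") as f:
              data = f.read()
          # Placeholder layout: a little-endian 32-bit offset to the height block,
          # followed there by width, height, and bit-depth fields before the pixels.
          (height_offset,) = struct.unpack_from("<I", data, offset_table_pos)
          width, height, bit_depth = struct.unpack_from("<III", data, height_offset)
          dtype = np.uint32 if bit_depth == 32 else np.uint16
          pixels = np.frombuffer(data, dtype=dtype, count=width * height,
                                 offset=height_offset + 12)
          return pixels.reshape(height, width)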

  5. Developing a radiology-based teaching approach for gross anatomy in the digital era.

    PubMed

    Marker, David R; Bansal, Anshuman K; Juluru, Krishna; Magid, Donna

    2010-08-01

    The purpose of this study was to assess the implementation of a digital anatomy lecture series based largely on annotated, radiographic images and the utility of the Radiological Society of North America-developed Medical Imaging Resource Center (MIRC) for providing an online educational resource. A series of digital teaching images were collected and organized to correspond to lecture and dissection topics. MIRC was used to provide the images in a Web-based educational format for incorporation into anatomy lectures and as a review resource. A survey assessed the impressions of the medical students regarding this educational format. MIRC teaching files were successfully used in our teaching approach. The lectures were interactive with questions to and from the medical student audience regarding the labeled images used in the presentation. Eighty-five of 120 students completed the survey. The majority of students (87%) indicated that the MIRC teaching files were "somewhat useful" to "very useful" when incorporated into the lecture. The students who used the MIRC files were most likely to access the material from home (82%) on an occasional basis (76%). With regard to areas for improvement, 63% of the students reported that they would have benefited from more teaching files, and only 9% of the students indicated that the online files were not user friendly. The combination of electronic radiology resources available in lecture format and on the Internet can provide multiple opportunities for medical students to learn and revisit first-year anatomy. MIRC provides a user-friendly format for presenting radiology education files for medical students. 2010 AUR. Published by Elsevier Inc. All rights reserved.

  6. Fast large-scale object retrieval with binary quantization

    NASA Astrophysics Data System (ADS)

    Zhou, Shifu; Zeng, Dan; Shen, Wei; Zhang, Zhijiang; Tian, Qi

    2015-11-01

    The objective of large-scale object retrieval systems is to search for images that contain the target object in an image database. Where state-of-the-art approaches rely on global image representations to conduct searches, we consider many boxes per image as candidates to search locally in a picture. In this paper, a feature quantization algorithm called binary quantization is proposed. In binary quantization, a scale-invariant feature transform (SIFT) feature is quantized into a descriptive and discriminative bit-vector, which allows itself to adapt to the classic inverted file structure for box indexing. The inverted file, which stores the bit-vector and box ID where the SIFT feature is located inside, is compact and can be loaded into the main memory for efficient box indexing. We evaluate our approach on available object retrieval datasets. Experimental results demonstrate that the proposed approach is fast and achieves excellent search quality. Therefore, the proposed approach is an improvement over state-of-the-art approaches for object retrieval.
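    A minimal sketch of the box-level inverted file described above (not the authors' implementation): each quantized feature's code word maps to postings of (image_id, box_id, bit_vector), which are scanned at query time.

      from collections import defaultdict

      inverted_file = defaultdict(list)

      def index_feature(word, image_id, box_id, bit_vector):
          """Add one quantized feature to the posting list for its code word."""
          inverted_file[word].append((image_id, box_id, bit_vector))

      def candidates(word):
          """Return the boxes whose features share the query feature's code word."""
          return inverted_file.get(word, [])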

  7. Digital Data from the Great Sand Dunes and Poncha Springs Aeromagnetic Surveys, South-Central Colorado

    USGS Publications Warehouse

    Drenth, B.J.; Grauch, V.J.S.; Bankey, Viki; New Sense Geophysics, Ltd.

    2009-01-01

    This report contains digital data, image files, and text files describing data formats and survey procedures for two high-resolution aeromagnetic surveys in south-central Colorado: one in the eastern San Luis Valley, Alamosa and Saguache Counties, and the other in the southern Upper Arkansas Valley, Chaffee County. In the San Luis Valley, the Great Sand Dunes survey covers a large part of Great Sand Dunes National Park and Preserve and extends south along the mountain front to the foot of Mount Blanca. In the Upper Arkansas Valley, the Poncha Springs survey covers the town of Poncha Springs and vicinity. The digital files include grids, images, and flight-line data. Several derivative products from these data are also presented as grids and images, including two grids of reduced-to-pole aeromagnetic data and data continued to a reference surface. Images are presented in various formats and are intended to be used as input to geographic information systems, standard graphics software, or map plotting packages.

  8. Clementine High Resolution Camera Mosaicking Project. Volume 14; CL 6014; 0 deg N to 80 deg N Latitude, 270 deg E to 300 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U.S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap that approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 80 degrees South to 80 degrees North; -80 to +80) within the longitude range 0-30 deg E. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. For more information on the contents and organization of the CD volume set refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".
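    For reference, the sinusoidal equal-area mapping used for these 20 m/pixel tiles reduces to x = R (lon - lon0) cos(lat), y = R lat; the sketch below assumes the mean lunar radius and an illustrative central longitude.

      import math

      MOON_RADIUS_M = 1_737_400.0      # mean lunar radius in meters
      SCALE_M_PER_PX = 20.0            # tile scale from the mosaic description

      def latlon_to_pixel(lat_deg, lon_deg, center_lon_deg=285.0):
          lat, lon, lon0 = map(math.radians, (lat_deg, lon_deg, center_lon_deg))
          x_m = MOON_RADIUS_M * (lon - lon0) * math.cos(lat)
          y_m = MOON_RADIUS_M * lat
          return x_m / SCALE_M_PER_PX, y_m / SCALE_M_PER_PX

      print(latlon_to_pixel(40.0, 290.0))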

  9. Clementine High Resolution Camera Mosaicking Project. Volume 17; CL 6017; 0 deg to 80 deg S Latitude, 330 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U.S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap that approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 80 degrees South to 80 degrees North; -80 to +80) within the longitude range 0-30 deg E. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. For more information on the contents and organization of the CD volume set refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  10. Clementine High Resolution Camera Mosaicking Project. Volume 15; CL 6015; 0 deg S to 80 deg S Latitude, 270 deg E to 300 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U. S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap that approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 80 degrees South to 80 degrees North; -80 to +80) within the longitude range 0-30 deg E. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. For more information on the contents and organization of the CD volume set refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  11. Clementine High Resolution Camera Mosaicking Project. Volume 13; CL 6013; 0 deg S to 80 deg S Latitude, 240 deg to 270 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U.S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap that approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 80 degrees South to 80 degrees North; -80 to +80) within the longitude range 0-30 deg E. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. For more information on the contents and organization of the CD volume set refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  12. Clementine High Resolution Camera Mosaicking Project. Volume 18; CL 6018; 80 deg N to 80 deg S Latitude, 330 deg E to 360 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U. S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap that approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 80 degrees South to 80 degrees North; -80 to +80) within the longitude range 0-30 deg E. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. For more information on the contents and organization of the CD volume set refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  13. Clementine High Resolution Camera Mosaicking Project. Volume 12; CL 6012; 0 deg N to 80 deg N Latitude, 240 deg to 270 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U.S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap that approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 80 degrees South to 80 degrees North; -80 to +80) within the longitude range 0-30 deg E. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. For more information on the contents and organization of the CD volume set refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  14. Clementine High Resolution Camera Mosaicking Project. Volume 10; CL 6010; 0 deg N to 80 deg N Latitude, 210 deg E to 240 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U.S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap that approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 80 degrees South to 80 degrees North; -80 to +80) within the longitude range 0-30 deg E. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. For more information on the contents and organization of the CD volume set refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  15. Clementine High Resolution Camera Mosaicking Project. Volume 16; CL 6016; 0 deg N to 80 deg N Latitude, 300 deg E to 330 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U.S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap that approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 80 degrees South to 80 degrees North; -80 to +80) within the longitude range 0-30 deg E. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. For more information on the contents and organization of the CD volume set refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  16. MSiReader: an open-source interface to view and analyze high resolving power MS imaging files on Matlab platform.

    PubMed

    Robichaud, Guillaume; Garrard, Kenneth P; Barry, Jeremy A; Muddiman, David C

    2013-05-01

    During the past decade, the field of mass spectrometry imaging (MSI) has greatly evolved, to a point where it has now been fully integrated by most vendors as an optional or dedicated platform that can be purchased with their instruments. However, the technology is not mature and multiple research groups in both academia and industry are still very actively studying the fundamentals of imaging techniques, adapting the technology to new ionization sources, and developing new applications. As a result, a wide variety of data file formats is used to store mass spectrometry imaging data and, concurrent with the development of MSI, collaborative efforts have been undertaken to introduce common imaging data file formats. However, few free software packages to read and analyze files of these different formats are readily available. We introduce here MSiReader, a free open source application to read and analyze high resolution MSI data from the most common MSI data formats. The application is built on the Matlab platform (Mathworks, Natick, MA, USA) and includes a large selection of data analysis tools and features. People who are unfamiliar with the Matlab language will have little difficulty navigating the user-friendly interface, and users with Matlab programming experience can adapt and customize MSiReader for their own needs.
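
    For orientation, the following Python sketch shows the core operation an MSI viewer such as MSiReader performs when rendering an ion image: summing intensity within a small m/z window at every pixel. The in-memory spectrum layout used here is an assumption made for the example, not MSiReader's internal format.

      # Illustrative sketch only: build a single ion image from already-parsed
      # MSI spectra, the basic operation behind MSI viewers.
      import numpy as np

      def ion_image(spectra, shape, target_mz, tol=0.01):
          """spectra: iterable of (x, y, mz_array, intensity_array); shape: (rows, cols)."""
          img = np.zeros(shape, dtype=np.float64)
          for x, y, mz, inten in spectra:
              mask = np.abs(mz - target_mz) <= tol
              img[y, x] = inten[mask].sum()          # summed intensity in the m/z window
          return img

      # Example: two synthetic pixels on a 1x2 grid.
      spectra = [(0, 0, np.array([100.0, 200.0]), np.array([5.0, 1.0])),
                 (1, 0, np.array([100.005, 300.0]), np.array([7.0, 2.0]))]
      print(ion_image(spectra, (1, 2), target_mz=100.0, tol=0.01))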

  17. MSiReader: An Open-Source Interface to View and Analyze High Resolving Power MS Imaging Files on Matlab Platform

    NASA Astrophysics Data System (ADS)

    Robichaud, Guillaume; Garrard, Kenneth P.; Barry, Jeremy A.; Muddiman, David C.

    2013-05-01

    During the past decade, the field of mass spectrometry imaging (MSI) has greatly evolved, to a point where it has now been fully integrated by most vendors as an optional or dedicated platform that can be purchased with their instruments. However, the technology is not mature and multiple research groups in both academia and industry are still very actively studying the fundamentals of imaging techniques, adapting the technology to new ionization sources, and developing new applications. As a result, a wide variety of data file formats is used to store mass spectrometry imaging data and, concurrent with the development of MSI, collaborative efforts have been undertaken to introduce common imaging data file formats. However, few free software packages to read and analyze files of these different formats are readily available. We introduce here MSiReader, a free open source application to read and analyze high resolution MSI data from the most common MSI data formats. The application is built on the Matlab platform (Mathworks, Natick, MA, USA) and includes a large selection of data analysis tools and features. People who are unfamiliar with the Matlab language will have little difficulty navigating the user-friendly interface, and users with Matlab programming experience can adapt and customize MSiReader for their own needs.

  18. Tooth labeling in cone-beam CT using deep convolutional neural network for forensic identification

    NASA Astrophysics Data System (ADS)

    Miki, Yuma; Muramatsu, Chisako; Hayashi, Tatsuro; Zhou, Xiangrong; Hara, Takeshi; Katsumata, Akitoshi; Fujita, Hiroshi

    2017-03-01

    In large disasters, dental records play an important role in forensic identification. However, filing dental charts for corpses is not an easy task for general dentists. Moreover, it is laborious and time-consuming work in cases of large scale disasters. We have been investigating a tooth labeling method on dental cone-beam CT images for the purpose of automatic filing of dental charts. In our method, individual teeth in CT images are detected and classified into seven tooth types using a deep convolutional neural network. We employed a fully convolutional network based on the AlexNet architecture for detecting each tooth and applied our previous method using regular AlexNet for classifying the detected teeth into 7 tooth types. From 52 CT volumes obtained by two imaging systems, five images each were randomly selected as test data, and the remaining 42 cases were used as training data. The results showed a tooth detection accuracy of 77.4% with an average of 5.8 false detections per image. These results indicate the potential utility of the proposed method for automatic recording of dental information.
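
    The sketch below is a minimal PyTorch stand-in for the tooth-type classification stage described above; the layer sizes, the 64x64 patch size, and the single-channel input are assumptions for illustration and do not reproduce the paper's AlexNet configuration.

      # Minimal sketch of a 7-class patch classifier in PyTorch, standing in for
      # the AlexNet-based tooth-type classification step described above.
      import torch
      import torch.nn as nn

      class ToothTypeNet(nn.Module):
          def __init__(self, num_classes: int = 7):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
              )
              self.classifier = nn.Linear(32 * 16 * 16, num_classes)

          def forward(self, x):
              x = self.features(x)                  # (N, 32, 16, 16) for 64x64 input
              return self.classifier(torch.flatten(x, 1))

      model = ToothTypeNet()
      patch = torch.randn(1, 1, 64, 64)             # one grayscale CT patch
      print(model(patch).shape)                      # torch.Size([1, 7])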

  19. Web based tools for visualizing imaging data and development of XNATView, a zero footprint image viewer

    PubMed Central

    Gutman, David A.; Dunn, William D.; Cobb, Jake; Stoner, Richard M.; Kalpathy-Cramer, Jayashree; Erickson, Bradley

    2014-01-01

    Advances in web technologies now allow direct visualization of imaging data sets without necessitating the download of large file sets or the installation of software. This allows centralization of file storage and facilitates image review and analysis. XNATView is a light framework recently developed in our lab to visualize DICOM images stored in The Extensible Neuroimaging Archive Toolkit (XNAT). It consists of a PyXNAT-based framework to wrap around the REST application programming interface (API) and query the data in XNAT. XNATView was developed to simplify quality assurance, help organize imaging data, and facilitate data sharing for intra- and inter-laboratory collaborations. Its zero-footprint design allows the user to connect to XNAT from a web browser, navigate through projects, experiments, and subjects, and view DICOM images with accompanying metadata all within a single viewing instance. PMID:24904399
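
    A minimal sketch of the kind of REST call that PyXNAT wraps for XNATView is shown below; the server URL and credentials are placeholders, and the JSON response shape assumed here follows typical XNAT deployments rather than anything stated in the abstract.

      # Sketch of a query against the standard XNAT REST route /data/projects:
      # list the projects visible to a user on an XNAT server.
      import requests

      XNAT_BASE = "https://xnat.example.org"        # placeholder server

      def list_projects(session: requests.Session):
          resp = session.get(f"{XNAT_BASE}/data/projects", params={"format": "json"})
          resp.raise_for_status()
          return [row["ID"] for row in resp.json()["ResultSet"]["Result"]]

      with requests.Session() as s:
          s.auth = ("username", "password")          # placeholder credentials
          print(list_projects(s))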

  20. VizieR Online Data Catalog: PN and HII regions of West and East of NGC 3109 (Pena+, 2007)

    NASA Astrophysics Data System (ADS)

    Pena, M.; Richer, M. G.; Stasińska, G.

    2007-03-01

    Six files (fits format, 16MB) containing images of the West (W) and East (E) zones of NGC 3109 are presented. The images are a combination of frames obtained with the ESO Very Large Telescope and the Focal Reducer Spectrograph FORS1. All the frames were obtained on 29 November and 1 December 2005, with air masses smaller than 1.16 and seeing better than 0.7 arcsec. They constitute the pre-imaging of the ESO program ID 076.B-0166(A). Central coordinates of images are: West side (images named NGC3109W-xxxx.fits) RA=10:02:54.5, DE=-26:09:22, equinox 2000. East side (images named NGC3109E-xxx.fits) RA=10:03:19.8, DE=-26:09:32, equinox 2000. The image size is 6.8x6.8arcmin2. (3 data files).

  1. A land-surface Testbed for EOSDIS

    NASA Technical Reports Server (NTRS)

    Emery, William; Kelley, Tim

    1994-01-01

    The main objective of the Testbed project was to deliver satellite images via the Internet to scientific and educational users free of charge. The main method of operation was to store satellite images on a low cost tape library system, visually browse the raw satellite data, access the raw data files, navigate the imagery through 'C' programming and X-Windows interface software, and deliver the finished image to the end user over the Internet by means of file transfer protocol methods. The conclusion is that the distribution of satellite imagery by means of the Internet is feasible, and the archiving of large data sets can be accomplished with low cost storage systems that allow multiple users.

  2. BOREAS RSS-14 Level-1 GOES-8 Visible, IR and Water Vapor Images

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Faysash, David; Cooper, Harry J.; Smith, Eric A.; Newcomer, Jeffrey A.

    2000-01-01

    The BOREAS RSS-14 team collected and processed several GOES-7 and GOES-8 image data sets that covered the BOREAS study region. The level-1 BOREAS GOES-8 images are raw data values collected by RSS-14 personnel at FSU and delivered to BORIS. The data cover 14-Jul-1995 to 21-Sep-1995 and 01-Jan-1996 to 03-Oct-1996. The data start out containing three 8-bit spectral bands and end up containing five 10-bit spectral bands. No major problems with the data have been identified. The data are contained in binary image format files. Due to the large size of the images, the level-1 GOES-8 data are not contained on the BOREAS CD-ROM set. An inventory listing file is supplied on the CD-ROM to inform users of what data were collected. The level-1 GOES-8 image data are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). See sections 15 and 16 for more information. The data files are available on a CD-ROM (see document number 20010000884).

  3. Parallel object-oriented data mining system

    DOEpatents

    Kamath, Chandrika; Cantu-Paz, Erick

    2004-01-06

    A data mining system uncovers patterns, associations, anomalies and other statistically significant structures in data. Data files are read and displayed. Objects in the data files are identified. Relevant features for the objects are extracted. Patterns among the objects are recognized based upon the features. Data from the Faint Images of the Radio Sky at Twenty Centimeters (FIRST) sky survey was used to search for bent doubles. This test was conducted on data acquired with the Very Large Array in New Mexico, as part of a survey that seeks to locate a special type of quasar (radio-emitting stellar object) called a bent double. The FIRST survey has generated more than 32,000 images of the sky to date. Each image is 7.1 megabytes, yielding more than 100 gigabytes of image data in the entire data set.

  4. SEGY to ASCII Conversion and Plotting Program 2.0

    USGS Publications Warehouse

    Goldman, Mark R.

    2005-01-01

    INTRODUCTION: SEGY has long been a standard format for storing seismic data and header information. Almost every seismic processing package can read and write seismic data in SEGY format. In the data processing world, however, ASCII format is the 'universal' standard format. Very few general-purpose plotting or computation programs will accept data in SEGY format. The software presented in this report, referred to as SEGY to ASCII (SAC), converts seismic data written in SEGY format (Barry et al., 1975) to an ASCII data file, and then creates a postscript file of the seismic data using a general plotting package (GMT, Wessel and Smith, 1995). The resulting postscript file may be plotted by any standard postscript plotting program. There are two versions of SAC: one version for plotting a SEGY file that contains a single gather, such as a stacked CDP or migrated section, and a second version for plotting multiple gathers from a SEGY file containing more than one gather, such as a collection of shot gathers. Note that if a SEGY file has multiple gathers, then each gather must have the same number of traces per gather, and each trace must have the same sample interval and number of samples per trace. SAC will read several common standards of SEGY data, including SEGY files with sample values written in either IBM or IEEE floating-point format. In addition, utility programs are present to convert non-standard Seismic Unix (.sux) SEGY files and PASSCAL (.rsy) SEGY files to standard SEGY files. SAC allows complete user control over all plotting parameters including label size and font, tick mark intervals, trace scaling, and the inclusion of a title and descriptive text. SAC shell scripts create a postscript image of the seismic data in vector rather than bitmap format, using GMT's pswiggle command. Although this can produce a very large postscript file, the image quality is generally superior to that of a bitmap image, and commercial programs such as Adobe Illustrator can manipulate the image more efficiently.
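
    As a rough illustration of the conversion step (not the SAC code itself), the Python sketch below dumps trace samples from a big-endian SEGY file to an ASCII matrix; it handles only data sample format code 5 (IEEE float32), whereas SAC also supports IBM floating point and other variants, and the file names are placeholders.

      # Stripped-down sketch of the SEGY-to-ASCII step: skip the 3200-byte
      # textual header, read the 400-byte binary header, then dump each trace.
      import struct
      import numpy as np

      def segy_to_ascii(segy_path: str, ascii_path: str) -> None:
          with open(segy_path, "rb") as f:
              f.seek(3200)                               # skip textual header
              binary_header = f.read(400)
              n_samples = struct.unpack(">H", binary_header[20:22])[0]
              fmt_code = struct.unpack(">H", binary_header[24:26])[0]
              if fmt_code != 5:
                  raise ValueError("sketch only handles IEEE float32 (format code 5)")
              traces = []
              while True:
                  trace_header = f.read(240)             # fixed-length trace header
                  data = f.read(4 * n_samples)
                  if len(trace_header) < 240 or len(data) < 4 * n_samples:
                      break
                  traces.append(np.frombuffer(data, dtype=">f4"))
          np.savetxt(ascii_path, np.column_stack(traces), fmt="%.6e")

      # segy_to_ascii("line1.sgy", "line1.txt")          # hypothetical file names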

  5. Distributed file management for remote clinical image-viewing stations

    NASA Astrophysics Data System (ADS)

    Ligier, Yves; Ratib, Osman M.; Girard, Christian; Logean, Marianne; Trayser, Gerhard

    1996-05-01

    The Geneva PACS is based on a distributed architecture, with different archive servers used to store all the image files produced by digital imaging modalities. Images can then be visualized on different display stations with the Osiris software. Image visualization requires the image file to be physically present on the local station. Thus, images must be transferred from archive servers to local display stations in a way that is fast and user friendly and that hides the notion of a file from users. The transfer of image files is done according to different schemes including prefetching and direct image selection. Prefetching allows the retrieval of previous studies of a patient in advance. Direct image selection is also provided in order to retrieve images on request. When images are transferred to the local display station, they are stored in Papyrus files, each file containing a set of images. File names are used by the Osiris viewing software to open image sequences. But file names alone are not explicit enough to properly describe the content of the file. A specific utility has been developed to present a list of patients, and for each patient a list of exams which can be selected and automatically displayed. The system has been successfully tested in different clinical environments. It will soon be extended hospital-wide.

  6. Distributed Storage Algorithm for Geospatial Image Data Based on Data Access Patterns.

    PubMed

    Pan, Shaoming; Li, Yongkai; Xu, Zhengquan; Chong, Yanwen

    2015-01-01

    Declustering techniques are widely used in distributed environments to reduce query response time through parallel I/O by splitting large files into several small blocks and then distributing those blocks among multiple storage nodes. Unfortunately, however, many small geospatial image data files cannot be further split for distributed storage. In this paper, we propose a complete theoretical system for the distributed storage of small geospatial image data files based on mining the access patterns of geospatial image data using their historical access log information. First, an algorithm is developed to construct an access correlation matrix based on the analysis of the log information, which reveals the patterns of access to the geospatial image data. Then, a practical heuristic algorithm is developed to determine a reasonable solution based on the access correlation matrix. Finally, a number of comparative experiments are presented, demonstrating that our algorithm displays a higher total parallel access probability than those of other algorithms by approximately 10-15% and that the performance can be further improved by more than 20% by simultaneously applying a copy storage strategy. These experiments show that the algorithm can be applied in distributed environments to help realize parallel I/O and thereby improve system performance.
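
    A toy Python sketch of the two stages described above follows: building an access-correlation matrix from a request log, then placing files on nodes with a greedy heuristic that separates frequently co-accessed files. The log format (one list of file ids per query) and the heuristic's tie-breaking are illustrative assumptions, not the paper's algorithm.

      # Toy sketch: access-correlation matrix from a log, then greedy placement
      # that keeps frequently co-accessed files on different storage nodes.
      import itertools
      import numpy as np

      def correlation_matrix(log, n_files):
          C = np.zeros((n_files, n_files), dtype=np.int64)
          for session in log:                               # files requested together
              for a, b in itertools.combinations(set(session), 2):
                  C[a, b] += 1
                  C[b, a] += 1
          return C

      def greedy_placement(C, n_nodes):
          placement = {node: [] for node in range(n_nodes)}
          order = np.argsort(-C.sum(axis=1))                # strongest files first
          for f in order:
              # Node with least correlation to files already there; ties by load.
              cost = lambda node: (sum(C[f, g] for g in placement[node]), len(placement[node]))
              best = min(range(n_nodes), key=cost)
              placement[best].append(int(f))
          return placement

      log = [[0, 1, 2], [0, 1], [2, 3], [1, 2, 3]]
      C = correlation_matrix(log, n_files=4)
      print(greedy_placement(C, n_nodes=2))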

  7. Volcanoes of the Wrangell Mountains and Cook Inlet region, Alaska: selected photographs

    USGS Publications Warehouse

    Neal, Christina A.; McGimsey, Robert G.; Diggles, Michael F.

    2001-01-01

    Alaska is home to more than 40 active volcanoes, many of which have erupted violently and repeatedly in the last 200 years. This CD-ROM contains 97 digitized color 35-mm images which represent a small fraction of thousands of photographs taken by Alaska Volcano Observatory scientists, other researchers, and private citizens. The photographs were selected to portray Alaska's volcanoes, to document recent eruptive activity, and to illustrate the range of volcanic phenomena observed in Alaska. These images are for use by the interested public, multimedia producers, desktop publishers, and the high-end printing industry. The digital images are stored in the 'images' folder and can be read across Macintosh, Windows, DOS, OS/2, SGI, and UNIX platforms with applications that can read JPG (JPEG - Joint Photographic Experts Group format) or PCD (Kodak's PhotoCD (YCC) format) files. Throughout this publication, the image numbers match among the file names, figure captions, thumbnail labels, and other references. Also included on this CD-ROM are Windows and Macintosh viewers and engines for keyword searches (Adobe Acrobat Reader with Search). At the time of this publication, Kodak's policy on the distribution of color-management files is still unresolved, and so none is included on this CD-ROM. However, using the Universal Ektachrome or Universal Kodachrome transforms found in your software will provide excellent color. In addition to PhotoCD (PCD) files, this CD-ROM contains large (14.2'x19.5') and small (4'x6') screen-resolution (72 dots per inch; dpi) images in JPEG format. These undergo downsizing and compression relative to the PhotoCD images.

  8. Large Scale Hierarchical K-Means Based Image Retrieval With MapReduce

    DTIC Science & Technology

    2014-03-27

    Only fragments of the report text are indexed for this record: bibliography entries on the Hadoop Distributed File System architecture and design (2007), G. Bradski (Dr. Dobb's Journal of Software Tools, 2000), and Terry Costlow on big data; a note that results for ... million images running on 20 virtual machines are shown; and the subject terms Image Retrieval, MapReduce, Hierarchical K-Means, Big Data, Hadoop.
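
    For orientation, the sketch below shows a tiny in-memory hierarchical k-means (vocabulary-tree) index built with scikit-learn; it only illustrates the clustering idea named in the title and is unrelated to the report's MapReduce/Hadoop implementation.

      # Illustration only: recursive (hierarchical) k-means over image descriptors.
      import numpy as np
      from sklearn.cluster import KMeans

      def build_tree(features, branch=4, depth=3):
          """Recursively cluster feature vectors; returns nested dicts of centroids."""
          if depth == 0 or len(features) < branch:
              return {"centroids": None, "children": []}
          km = KMeans(n_clusters=branch, n_init=10, random_state=0).fit(features)
          children = [build_tree(features[km.labels_ == i], branch, depth - 1)
                      for i in range(branch)]
          return {"centroids": km.cluster_centers_, "children": children}

      rng = np.random.default_rng(0)
      descriptors = rng.normal(size=(2000, 64))       # stand-in for image descriptors
      tree = build_tree(descriptors, branch=4, depth=3)
      print(len(tree["children"]))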

  9. LabVIEW 2010 Computer Vision Platform Based Virtual Instrument and Its Application for Pitting Corrosion Study.

    PubMed

    Ramos, Rogelio; Zlatev, Roumen; Valdez, Benjamin; Stoytcheva, Margarita; Carrillo, Mónica; García, Juan-Francisco

    2013-01-01

    A virtual instrumentation (VI) system called the VI localized corrosion image analyzer (LCIA), based on LabVIEW 2010, was developed, allowing rapid, automatic determination of the number of pits on large corroded specimens, free of subjective error. The VI LCIA synchronously controls digital microscope image acquisition and analysis, finally producing a map file containing the coordinates of the detected zones on the investigated specimen that probably contain pits. The pit area, traverse length, and density are also determined by the VI using binary large object (blob) analysis. The resulting map file can be used further by a scanning vibrating electrode technique (SVET) system for a rapid (one-pass) "true/false" SVET check of only the probable zones, passing through the pit centers and thus avoiding a scan of the entire specimen. A complete SVET scan over the zones already proved "true" can then determine the corrosion rate in any of those zones.
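
    A rough Python analogue of the blob-analysis step is sketched below: threshold a grayscale micrograph, label connected dark regions as probable pits, and write their centers and areas to a map file. The threshold value and file names are illustrative assumptions, not parameters from the paper.

      # Rough analogue of the blob analysis: dark connected regions become
      # probable pits; their centers and areas go into a map file.
      import numpy as np
      from scipy import ndimage

      def pit_map(gray: np.ndarray, threshold: int = 80):
          """Return (row, col, area) for each connected region darker than threshold."""
          mask = gray < threshold
          labels, n = ndimage.label(mask)
          centers = ndimage.center_of_mass(mask, labels, range(1, n + 1))
          areas = ndimage.sum(mask, labels, range(1, n + 1))
          return [(r, c, a) for (r, c), a in zip(centers, areas)]

      gray = np.random.randint(0, 255, size=(480, 640), dtype=np.uint8)
      rows = pit_map(gray)
      np.savetxt("pit_map.csv", np.array(rows), delimiter=",", header="row,col,area")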

  10. 77 FR 59692 - 2014 Diversity Immigrant Visa Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-28

    ... the E-DV system. The entry will not be accepted and must be resubmitted. Group or family photographs... must be in the Joint Photographic Experts Group (JPEG) format. Image File Size: The maximum file size...). Image File Format: The image must be in the Joint Photographic Experts Group (JPEG) format. Image File...

  11. Post-Flash Validation of the new ACS/WFC Subarrays

    NASA Astrophysics Data System (ADS)

    Bellini, A.; Grogin, N. A.; Lim, P. L.; Golimowski, D.

    2017-05-01

    We made use of the new ACS/WFC subarray images of CAL-14410, taken with a large range of flash exposure times (0.1-30 seconds), to probe the temporal stability of the reference flash file and to validate the current post-flash correction pipeline of CALACS and ACS DESTRIPE PLUS on the new subarray modes. No statistically-significant deviations are found between the new post-flashed subarray exposures and the flash reference file, indicating that the LED lamp used to post-flash ACS images has been stable over several years. The current calibration pipelines (both CALACS and ACS DESTRIPE PLUS) can be successfully used with the new subarray modes.

  12. Data management in pattern recognition and image processing systems

    NASA Technical Reports Server (NTRS)

    Zobrist, A. L.; Bryant, N. A.

    1976-01-01

    Data management considerations are important to any system which handles large volumes of data or where the manipulation of data is technically sophisticated. A particular problem is the introduction of image-formatted files into the mainstream of data processing applications. This report describes a comprehensive system for the manipulation of image, tabular, and graphical data sets which involve conversions between the various data types. A key characteristic is the use of image processing technology to accomplish data management tasks. Because of this, the term 'image-based information system' has been adopted.

  13. Digital Image Compression Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Serra-Ricart, M.; Garrido, L.; Gaitan, V.; Aloy, A.

    1993-01-01

    The problem of storing, transmitting, and manipulating digital images is considered. Because of the file sizes involved, large amounts of digitized image information are becoming common in modern projects. Our goal is to describe an image compression transform coder based on artificial neural networks techniques (NNCTC). A comparison of the compression results obtained from digital astronomical images by the NNCTC and the method used in the compression of the digitized sky survey from the Space Telescope Science Institute based on the H-transform is performed in order to assess the reliability of the NNCTC.

  14. Digital camera with apparatus for authentication of images produced from an image file

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L. (Inventor)

    1993-01-01

    A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely based upon the private key that digital data encrypted with the private key by the processor may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating at any time the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match, since even one bit change in the image hash will cause the image hash to be totally different from the secure hash.
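
    A present-day sketch of the hash-and-sign scheme described in the patent abstract is shown below, using the Python cryptography package with RSA-PSS; the patent does not prescribe these particular primitives, so this illustrates the idea rather than the camera's actual implementation.

      # Hash the image file, sign the hash with a private key, verify later with
      # the matching public key; a mismatch means the file has been altered.
      import hashlib
      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric import rsa, padding
      from cryptography.exceptions import InvalidSignature

      private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
      public_key = private_key.public_key()

      image_bytes = b"...raw image file contents..."            # placeholder payload
      image_hash = hashlib.sha256(image_bytes).digest()

      pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
      signature = private_key.sign(image_hash, pss, hashes.SHA256())  # "digital signature"

      try:
          public_key.verify(signature, image_hash, pss, hashes.SHA256())
          print("image file is unaltered")
      except InvalidSignature:
          print("image file has been modified")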

  15. OASIS: A Data Fusion System Optimized for Access to Distributed Archives

    NASA Astrophysics Data System (ADS)

    Berriman, G. B.; Kong, M.; Good, J. C.

    2002-05-01

    The On-Line Archive Science Information Services (OASIS) is accessible as a java applet through the NASA/IPAC Infrared Science Archive home page. It uses Geographical Information System (GIS) technology to provide data fusion and interaction services for astronomers. These services include the ability to process and display arbitrarily large image files, and user-controlled contouring, overlay regeneration and multi-table/image interactions. OASIS has been optimized for access to distributed archives and data sets. Its second release (June 2002) provides a mechanism that enables access to OASIS from "third-party" services and data providers. That is, any data provider who creates a query form to an archive containing a collection of data (images, catalogs, spectra) can direct the result files from the query into OASIS. Similarly, data providers who serve links to datasets or remote services on a web page can access all of these data with one instance of OASIS. In this way any data or service provider is given access to the full suite of capabilities of OASIS. We illustrate the "third-party" access feature with two examples: queries to the high-energy image datasets accessible from GSFC SkyView, and links to data that are returned from a target-based query to the NASA Extragalactic Database (NED). The second release of OASIS also includes a file-transfer manager that reports the status of multiple data downloads from remote sources to the client machine. It is a prototype for a request management system that will ultimately control and manage compute-intensive jobs submitted through OASIS to computing grids, such as requests for large-scale image mosaics and bulk statistical analysis.

  16. VizieR Online Data Catalog: gr photometry of Sextans A and Sextans B (Bellazzini+, 2014)

    NASA Astrophysics Data System (ADS)

    Bellazzini, M.; Beccari, G.; Fraternali, F.; Oosterloo, T. A.; Sollima, A.; Testa, V.; Galleti, S.; Perina, S.; Faccini, M.; Cusano, F.

    2014-04-01

    The tables present deep LBT/LBC g and r photometry of the stars having image quality parameters (provided by DAOPHOTII) CHI<=2 and SHARP within magnitude-dependent contours traced to include the bulk of stellar objects. The observations were obtained on the night of 2012-02-21 with the Large Binocular Camera at the Large Binocular Telescope in binocular mode; g images were acquired with the blue arm and r images with the red arm of the telescope/camera. The astrometry and the photometry were calibrated with stars in common with SDSS-DR9 (V/139). (2 data files).

  17. Digital Camera with Apparatus for Authentication of Images Produced from an Image File

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L. (Inventor)

    1996-01-01

    A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely related to the private key that digital data encrypted with the private key may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The authenticating apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match. Other techniques to address time-honored methods of deception, such as attaching false captions or inducing forced perspectives, are included.

  18. Outpatients flow management and ophthalmic electronic medical records system in university hospital using Yahgee Document View.

    PubMed

    Matsuo, Toshihiko; Gochi, Akira; Hirakawa, Tsuyoshi; Ito, Tadashi; Kohno, Yoshihisa

    2010-10-01

    General electronic medical records systems remain insufficient for ophthalmology outpatient clinics from the viewpoint of dealing with many ophthalmic examinations and images in a large number of patients. Filing systems for documents and images by Yahgee Document View (Yahgee, Inc.) were introduced on the platform of general electronic medical records system (Fujitsu, Inc.). Outpatients flow management system and electronic medical records system for ophthalmology were constructed. All images from ophthalmic appliances were transported to Yahgee Image by the MaxFile gateway system (P4 Medic, Inc.). The flow of outpatients going through examinations such as visual acuity testing were monitored by the list "Ophthalmology Outpatients List" by Yahgee Workflow in addition to the list "Patients Reception List" by Fujitsu. Patients' identification number was scanned with bar code readers attached to ophthalmic appliances. Dual monitors were placed in doctors' rooms to show Fujitsu Medical Records on the left-hand monitor and ophthalmic charts of Yahgee Document on the right-hand monitor. The data of manually-inputted visual acuity, automatically-exported autorefractometry and non-contact tonometry on a new template, MaxFile ED, were again automatically transported to designated boxes on ophthalmic charts of Yahgee Document. Images such as fundus photographs, fluorescein angiograms, optical coherence tomographic and ultrasound scans were viewed by Yahgee Image, and were copy-and-pasted to assigned boxes on the ophthalmic charts. Ordering such as appointments, drug prescription, fees and diagnoses input, central laboratory tests, surgical theater and ward room reservations were placed by functions of the Fujitsu electronic medical records system. The combination of the Fujitsu electronic medical records and Yahgee Document View systems enabled the University Hospital to examine the same number of outpatients as prior to the implementation of the computerized filing system.

  19. Interactive publications: creation and usage

    NASA Astrophysics Data System (ADS)

    Thoma, George R.; Ford, Glenn; Chung, Michael; Vasudevan, Kirankumar; Antani, Sameer

    2006-02-01

    As envisioned here, an "interactive publication" has similarities to multimedia documents that have been in existence for a decade or more, but possesses specific differentiating characteristics. In common usage, the latter refers to online entities that, in addition to text, consist of files of images and video clips residing separately in databases, rarely providing immediate context to the document text. While an interactive publication has many media objects as does the "traditional" multimedia document, it is a self-contained document, either as a single file with media files embedded within it, or as a "folder" containing tightly linked media files. The main characteristic that differentiates an interactive publication from a traditional multimedia document is that the reader would be able to reuse the media content for analysis and presentation, and to check the underlying data and possibly derive alternative conclusions leading, for example, to more in-depth peer reviews. We have created prototype publications containing paginated text and several media types encountered in the biomedical literature: 3D animations of anatomic structures; graphs, charts and tabular data; cell development images (video sequences); and clinical images such as CT, MRI and ultrasound in the DICOM format. This paper presents developments to date including: a tool to convert static tables or graphs into interactive entities, authoring procedures followed to create prototypes, and advantages and drawbacks of each of these platforms. It also outlines future work including meeting the challenge of network distribution for these large files.

  20. Image Size Variation Influence on Corrupted and Non-viewable BMP Image

    NASA Astrophysics Data System (ADS)

    Azmi, Tengku Norsuhaila T.; Azma Abdullah, Nurul; Rahman, Nurul Hidayah Ab; Hamid, Isredza Rahmi A.; Chai Wen, Chuah

    2017-08-01

    Images are one of the evidence components sought in digital forensics. The Joint Photographic Experts Group (JPEG) format is the most popular on the Internet because JPEG compression is lossy and produces small files, which speeds up transmission. However, corrupted JPEG images are hard to recover because of the complexity of determining the corruption point. Bitmap (BMP) images are preferred in image processing over other formats because a BMP image contains all the image information in a simple format. Therefore, in order to investigate the corruption point in a JPEG, the file must first be converted into BMP format. Nevertheless, many factors can corrupt a BMP image, such as changes to the image size that make the file non-viewable. In this paper, the experiments indicate that the size of a BMP file influences the image itself under three conditions: deletion, replacement, and insertion. From the experiments, we learned that by correcting the file size it is possible to produce a viewable, if partial, file, which can then be investigated further to identify the corruption point.
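
    The sketch below illustrates the header repair discussed in the abstract: the BMP header stores the total file size as a 4-byte little-endian field at offset 2, and rewriting that field to match the bytes actually on disk is one way to make a damaged file at least partially viewable. The file name is a placeholder.

      # Small sketch: patch the BMP header's file-size field in place.
      import struct

      def fix_bmp_size(path: str) -> None:
          with open(path, "r+b") as f:
              data = f.read()
              if data[:2] != b"BM":
                  raise ValueError("not a BMP file")
              stored = struct.unpack_from("<I", data, 2)[0]
              actual = len(data)
              if stored != actual:
                  f.seek(2)
                  f.write(struct.pack("<I", actual))     # correct the size field
                  print(f"size field corrected: {stored} -> {actual}")

      # fix_bmp_size("evidence_image.bmp")               # hypothetical file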

  1. Representation of thermal infrared imaging data in the DICOM using XML configuration files.

    PubMed

    Ruminski, Jacek

    2007-01-01

    The DICOM standard has become a widely accepted and implemented format for the exchange and storage of medical imaging data. Different imaging modalities are supported; however, there is no dedicated solution for thermal infrared imaging in medicine. In this article we propose new ideas and improvements to the final proposal of the new DICOM Thermal Infrared Imaging structures and services. Additionally, we designed, implemented and tested software packages for universal conversion of existing thermal imaging files to the DICOM format using XML configuration files. The proposed solution works fast and requires a minimal number of user interactions. The XML configuration file makes it possible to compose a set of attributes for any thermal imaging camera's source file format.
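
    As an illustration of the XML-driven mapping idea (not the article's actual structures), the Python sketch below reads attribute/value pairs from a small XML configuration and builds a pydicom Dataset whose pixel data is the thermal frame; the XML element names are invented, and writing a conformant DICOM file would additionally require file meta information, omitted here for brevity.

      # Sketch: DICOM attributes come from an XML mapping, pixel data from the
      # thermal frame itself.
      import numpy as np
      import xml.etree.ElementTree as ET
      from pydicom.dataset import Dataset

      XML_CONFIG = """
      <dicom_mapping>
        <attribute keyword="PatientName" value="THERMAL^PHANTOM"/>
        <attribute keyword="Modality" value="OT"/>
        <attribute keyword="SeriesDescription" value="Thermal infrared imaging"/>
      </dicom_mapping>
      """

      def thermal_to_dataset(frame: np.ndarray, xml_text: str) -> Dataset:
          ds = Dataset()
          for attr in ET.fromstring(xml_text).findall("attribute"):
              setattr(ds, attr.get("keyword"), attr.get("value"))   # attributes from XML
          ds.Rows, ds.Columns = frame.shape
          ds.SamplesPerPixel = 1
          ds.PhotometricInterpretation = "MONOCHROME2"
          ds.BitsAllocated = ds.BitsStored = 16
          ds.HighBit = 15
          ds.PixelRepresentation = 0
          ds.PixelData = frame.astype(np.uint16).tobytes()
          return ds

      print(thermal_to_dataset(np.zeros((240, 320), dtype=np.uint16), XML_CONFIG))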

  2. OpenMSI: A High-Performance Web-Based Platform for Mass Spectrometry Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubel, Oliver; Greiner, Annette; Cholia, Shreyas

    Mass spectrometry imaging (MSI) enables researchers to directly probe endogenous molecules within the architecture of the biological matrix. Unfortunately, efficient access, management, and analysis of the data generated by MSI approaches remain major challenges to this rapidly developing field. Despite the availability of numerous dedicated file formats and software packages, it is a widely held viewpoint that the biggest challenge is simply opening, sharing, and analyzing a file without loss of information. Here we present OpenMSI, a software framework and platform that addresses these challenges via an advanced, high-performance, extensible file format and Web API for remote data access (http://openmsi.nersc.gov). The OpenMSI file format supports storage of raw MSI data, metadata, and derived analyses in a single, self-describing format based on HDF5 and is supported by a large range of analysis software (e.g., Matlab and R) and programming languages (e.g., C++, Fortran, and Python). Careful optimization of the storage layout of MSI data sets using chunking, compression, and data replication accelerates common, selective data access operations while minimizing data storage requirements, and is a critical enabler of rapid data I/O. The OpenMSI file format has been shown to provide a >2000-fold improvement for image access operations, enabling spectrum and image retrieval in less than 0.3 s across the Internet even for 50 GB MSI data sets. To make remote high-performance compute resources accessible for analysis and to facilitate data sharing and collaboration, we describe an easy-to-use yet powerful Web API, enabling fast and convenient access to MSI data, metadata, and derived analysis results stored remotely, facilitating high-performance data analysis and enabling implementation of Web-based data sharing, visualization, and analysis.
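
    The h5py sketch below is not the OpenMSI format itself; it only illustrates the storage ideas the abstract names, namely chunking and compression of an MSI cube so that single spectra and single ion images can be read selectively. Dataset names and chunk sizes are arbitrary choices for the example.

      # Chunked, compressed HDF5 storage of an MSI cube with selective reads.
      import numpy as np
      import h5py

      cube = np.random.rand(64, 64, 1000).astype(np.float32)   # x, y, m/z bins

      with h5py.File("msi_demo.h5", "w") as f:
          dset = f.create_dataset(
              "msidata",
              data=cube,
              chunks=(8, 8, 256),             # chunking aligned with typical reads
              compression="gzip",
              compression_opts=4,
          )
          dset.attrs["instrument"] = "example"

      with h5py.File("msi_demo.h5", "r") as f:
          spectrum = f["msidata"][10, 20, :]   # one spectrum
          ion_img = f["msidata"][:, :, 500]    # one ion image (single m/z bin)
      print(spectrum.shape, ion_img.shape)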

  3. AstroVis: Visualizing astronomical data cubes

    NASA Astrophysics Data System (ADS)

    Finniss, Stephen; Tyler, Robin; Questiaux, Jacques

    2016-08-01

    AstroVis enables rapid visualization of large data files on platforms supporting the OpenGL rendering library. Radio astronomical observations are typically three dimensional and stored as data cubes. AstroVis implements a scalable approach to accessing these files using three components: a File Access Component (FAC) that reduces the impact of reading time, which speeds up access to the data; the Image Processing Component (IPC), which breaks up the data cube into smaller pieces that can be processed locally and gives a representation of the whole file; and Data Visualization, which implements an Overview + Detail approach to reduce the dimensions of the data being worked with and the amount of memory required to store it. The result is a 3D display paired with a 2D detail display that contains a small subsection of the original file in full resolution without reducing the data in any way.
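
    A NumPy-only sketch of the Overview + Detail idea follows: memory-map a raw cube (standing in for the radio cube AstroVis reads), build a coarse strided overview of the whole volume, and pull one full-resolution subcube on demand. The file name, dtype, and cube shape are assumptions for the example.

      # Overview + Detail over a memory-mapped cube; a real cube would be opened
      # read-only, here a small demo file is created so the snippet runs.
      import numpy as np

      SHAPE = (64, 256, 256)                    # (channels, y, x), assumed layout
      cube = np.memmap("cube.raw", dtype=np.float32, mode="w+", shape=SHAPE)

      def overview(step=8):
          """Coarse, low-memory view of the whole cube for the 3D display."""
          return np.asarray(cube[::step, ::step, ::step])

      def detail(ch, y, x, half=32):
          """Full-resolution subcube around a point of interest for the 2D display."""
          return np.asarray(cube[ch, y - half:y + half, x - half:x + half])

      print(overview().shape, detail(32, 128, 128).shape)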

  4. Medical imaging informatics based solutions for human performance analytics

    NASA Astrophysics Data System (ADS)

    Verma, Sneha; McNitt-Gray, Jill; Liu, Brent J.

    2018-03-01

    For human performance analysis, extensive experimental trials are often conducted to identify the underlying cause or long-term consequences of certain pathologies and to improve motor functions by examining the movement patterns of affected individuals. Data collected for human performance analysis includes high-speed video, surveys, spreadsheets, force data recordings from instrumented surfaces, etc. These datasets are recorded from various standalone sources and are therefore captured in different folder structures as well as in varying formats depending on the hardware configurations. Therefore, data integration and synchronization present a huge challenge when handling these multimedia datasets, especially large ones. Another challenge faced by researchers is querying large quantities of unstructured data and designing feedback/reporting tools for users who need to use the datasets at various levels. In the past, database server storage solutions have been introduced to securely store these datasets. However, to automate the process of uploading raw files, various file manipulation steps are required. In the current workflow, this file manipulation and structuring is done manually and is not feasible for large amounts of data. However, by attaching metadata files and data dictionaries to these raw datasets, they can provide the information and structure needed for automated server upload. We introduce one such system for metadata creation for unstructured multimedia data based on the DICOM data model design. We will discuss the design and implementation of this system and evaluate it with a data set collected for a movement analysis study. The broader aim of this paper is to present the solution space achievable with medical imaging informatics designs and methods for improving the human performance analysis workflow in a biomechanics research lab.

  5. Computer-aided meiotic maturation assay (CAMMA) of zebrafish (danio rerio) oocytes in vitro.

    PubMed

    Lessman, Charles A; Nathani, Ravikanth; Uddin, Rafique; Walker, Jamie; Liu, Jianxiong

    2007-01-01

    We have developed a new technique called Computer-Aided Meiotic Maturation Assay (CAMMA) for imaging large arrays of zebrafish oocytes and automatically collecting image files at regular intervals during meiotic maturation. This novel method uses a transparency scanner interfaced to a computer with macro programming that automatically scans and archives the image files. Images are stacked and analyzed with ImageJ to quantify changes in optical density characteristic of zebrafish oocyte maturation. Major advantages of CAMMA include (1) ability to image very large arrays of oocytes and follow individual cells over time, (2) simultaneously image many treatment groups, (3) digitized images may be stacked, animated, and analyzed in programs such as ImageJ, NIH-Image, or ScionImage, and (4) CAMMA system is inexpensive, costing less than most microscopes used in traditional assays. We have used CAMMA to determine the dose response and time course of oocyte maturation induced by 17alpha-hydroxyprogesterone (HP). Maximal decrease in optical density occurs around 5 hr after 0.1 micro g/ml HP (28.5 degrees C), approximately 3 hr after germinal vesicle migration (GVM) and dissolution (GVD). In addition to changes in optical density, GVD is accompanied by streaming of ooplasm to the animal pole to form a blastodisc. These dynamic changes are readily visualized by animating image stacks from CAMMA; thus, CAMMA provides a valuable source of time-lapse movies for those studying zebrafish oocyte maturation. The oocyte clearing documented by CAMMA is correlated to changes in size distribution of major yolk proteins upon SDS-PAGE, and, this in turn, is related to increased cyclin B(1) protein.

  6. Hear it, See it, Explore it: Visualizations and Sonifications of Seismic Signals

    NASA Astrophysics Data System (ADS)

    Fisher, M.; Peng, Z.; Simpson, D. W.; Kilb, D. L.

    2010-12-01

    Sonification of seismic data is an innovative way to represent seismic data in the audible range (Simpson, 2005). Seismic waves with different frequency and temporal characteristics, such as those from teleseismic earthquakes, deep “non-volcanic” tremor and local earthquakes, can be easily discriminated when time-compressed to the audio range. Hence, sonification is particularly useful for presenting complicated seismic signals with multiple sources, such as aftershocks within the coda of large earthquakes, and remote triggering of earthquakes and tremor by large teleseismic earthquakes. Previous studies mostly focused on converting the seismic data into audible files by simple time compression or frequency modulation (Simpson et al., 2009). Here we generate animations of the seismic data together with the sounds. We first read seismic data in the SAC format into Matlab, and generate a sequence of image files and an associated WAV sound file. Next, we use a third party video editor, such as QuickTime Pro, to combine the image sequences and the sound file into an animation. We have applied this simple procedure to generate animations of remotely triggered earthquakes, tremor and low-frequency earthquakes in California, and mainshock-aftershock sequences in Japan and California. These animations clearly demonstrate the interactions of earthquake sequences and the richness of the seismic data. The tool developed in this study can be easily adapted for use in other research applications and to create sonification/animation of seismic data for education and outreach purposes.
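
    A minimal Python sonification sketch in the spirit of the workflow above (the authors use SAC files and Matlab) is shown below: read a waveform with ObsPy, time-compress it by a playback speed-up factor, and write a WAV file. Called with no argument, obspy.read() loads its bundled example waveform; a real SAC file would be passed instead, and the speed-up factor is an arbitrary choice.

      # Time-compress a seismogram into the audible range and write it as WAV.
      import numpy as np
      from obspy import read
      from scipy.io import wavfile

      SPEEDUP = 200                                   # compress seismic band into audio band

      tr = read()[0]                                  # example stream; use read("event.sac") for real data
      data = tr.data.astype(np.float32)
      data /= np.abs(data).max()                      # normalize to [-1, 1]
      audio = (data * 32767).astype(np.int16)

      rate = int(tr.stats.sampling_rate * SPEEDUP)    # faster playback = higher pitch
      wavfile.write("seismic_audio.wav", rate, audio)
      print(f"wrote {len(audio)} samples at {rate} Hz")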

  7. PIXEL PUSHER

    NASA Technical Reports Server (NTRS)

    Stanfill, D. F.

    1994-01-01

    Pixel Pusher is a Macintosh application used for viewing and performing minor enhancements on imagery. It will read image files in JPL's two primary image formats- VICAR and PDS - as well as the Macintosh PICT format. VICAR (NPO-18076) handles an array of image processing capabilities which may be used for a variety of applications including biomedical image processing, cartography, earth resources, and geological exploration. Pixel Pusher can also import VICAR format color lookup tables for viewing images in pseudocolor (256 colors). This program currently supports only eight bit images but will work on monitors with any number of colors. Arbitrarily large image files may be viewed in a normal Macintosh window. Color and contrast enhancement can be performed with a graphical "stretch" editor (as in contrast stretch). In addition, VICAR images may be saved as Macintosh PICT files for exporting into other Macintosh programs, and individual pixels can be queried to determine their locations and actual data values. Pixel Pusher is written in Symantec's Think C and was developed for use on a Macintosh SE30, LC, or II series computer running System Software 6.0.3 or later and 32 bit QuickDraw. Pixel Pusher will only run on a Macintosh which supports color (whether a color monitor is being used or not). The standard distribution medium for this program is a set of three 3.5 inch Macintosh format diskettes. The program price includes documentation. Pixel Pusher was developed in 1991 and is a copyrighted work with all copyright vested in NASA. Think C is a trademark of Symantec Corporation. Macintosh is a registered trademark of Apple Computer, Inc.

  8. Programmed database system at the Chang Gung Craniofacial Center: part II--digitizing photographs.

    PubMed

    Chuang, Shiow-Shuh; Hung, Kai-Fong; de Villa, Glenda H; Chen, Philip K T; Lo, Lun-Jou; Chang, Sophia C N; Yu, Chung-Chih; Chen, Yu-Ray

    2003-07-01

    The archival tools used for digital images in advertising do not fulfill clinical requirements and are only beginning to be developed. Storing a large number of conventional photographic slides requires a lot of space and special conditions. In spite of special precautions, degradation of the slides still occurs; the most common form of degradation is the appearance of fungus flecks. With recent advances in digital technology, it is now possible to store voluminous numbers of photographs on a computer hard drive and keep them for a long time. A self-programmed interface has been developed to integrate a database and an image browser system that can build and locate the needed file archives in a matter of seconds with the click of a button. The system requires only commercially available hardware and software. There are 25,200 patients recorded in the database, involving 24,331 procedures. The image files cover 6,384 patients with 88,366 digital picture files. From 1999 through 2002, NT$400,000 was saved using the new system. Photographs can be managed with the integrated database and browser software for archiving, which allows labeling of the individual photographs with demographic information as well as browsing. Digitized images are not only more efficient and economical than conventional slide images, but they also facilitate clinical studies.

  9. [Intranet-based integrated information system of radiotherapy-related images and diagnostic reports].

    PubMed

    Nakamura, R; Sasaki, M; Oikawa, H; Harada, S; Tamakawa, Y

    2000-03-01

    To use an intranet technique to develop an information system that simultaneously supports both diagnostic reports and radiotherapy planning images. Using a file server as the gateway, a radiation oncology LAN was connected to an already operative RIS LAN. Dose-distribution images were saved in tagged-image-file format by way of a screen dump to the file server. X-ray simulator images and portal images were saved in encapsulated postscript format on the file server and automatically converted to portable document format. The files on the file server were automatically registered to the Web server by the search engine and were available for searching and browsing using the Web browser. It took less than a minute to register planning images. For clients, searching and browsing a file took less than 3 seconds. Over 150,000 reports and 4,000 images from a six-month period were accessible. Because the intranet technique was used, construction and maintenance were accomplished without specialist expertise. Prompt access to essential information about radiotherapy has been made possible by this system. It promotes broad access to radiotherapy planning information, which may improve the quality of treatment.

  10. LabVIEW 2010 Computer Vision Platform Based Virtual Instrument and Its Application for Pitting Corrosion Study

    PubMed Central

    Ramos, Rogelio; Zlatev, Roumen; Valdez, Benjamin; Stoytcheva, Margarita; Carrillo, Mónica; García, Juan-Francisco

    2013-01-01

    A virtual instrumentation (VI) system called the VI localized corrosion image analyzer (LCIA), based on LabVIEW 2010, was developed, allowing rapid, automatic determination of the number of pits on large corroded specimens, free of subjective error. The VI LCIA synchronously controls digital microscope image acquisition and analysis, finally producing a map file containing the coordinates of the detected zones on the investigated specimen that probably contain pits. The pit area, traverse length, and density are also determined by the VI using binary large object (blob) analysis. The resulting map file can be used further by a scanning vibrating electrode technique (SVET) system for a rapid (one-pass) “true/false” SVET check of only the probable zones, passing through the pit centers and thus avoiding a scan of the entire specimen. A complete SVET scan over the zones already proved “true” can then determine the corrosion rate in any of those zones. PMID:23691434

  11. 40 CFR 264.71 - Use of manifest system.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... revising paragraph (a)(2), and by adding paragraphs (f), (g), (h), (i), (j), and (k) to read as follows... image file of Page 1 of the manifest, or both a data string file and the image file corresponding to Page 1 of the manifest. Any data or image files transmitted to EPA under this paragraph must be...

  12. Cloud Optimized Image Format and Compression

    NASA Astrophysics Data System (ADS)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud based image storage and processing requires re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud based elastic storage and computation environments. This paper will provide details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced against the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include a simple-to-implement algorithm that allows it to be decoded efficiently even in JavaScript. Combining this new cloud based image storage format and compression will help resolve some of the challenges of big image data on the internet.

  13. A low-cost universal cumulative gating circuit for small and large animal clinical imaging

    NASA Astrophysics Data System (ADS)

    Gioux, Sylvain; Frangioni, John V.

    2008-02-01

    Image-assisted diagnosis and therapy is becoming more commonplace in medicine. However, most imaging techniques suffer from voluntary or involuntary motion artifacts, especially cardiac and respiratory motions, which degrade image quality. Current software solutions either induce computational overhead or reject out-of-focus images after acquisition. In this study we demonstrate a hardware-only gating circuit that accepts multiple, pseudo-periodic signals and produces a single TTL (0-5 V) imaging window of accurate phase and period. The electronic circuit Gerber files described in this article and the list of components are available online at www.frangionilab.org.

  14. Pancreatic Cancer Detection Consortium (PCDC) | Division of Cancer Prevention

    Cancer.gov

    [Image: A 3-dimensional image of a human torso highlighting the pancreas.]

  15. An analysis of image storage systems for scalable training of deep neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Seung-Hwan; Young, Steven R; Patton, Robert M

    This study presents a principled empirical evaluation of image storage systems for training deep neural networks. We employ the Caffe deep learning framework to train neural network models on three different data sets: MNIST, CIFAR-10, and ImageNet. While training the models, we evaluate five different options for retrieving training image data: (1) PNG-formatted image files on a local file system; (2) pushing pixel arrays from image files into a single HDF5 file on a local file system; (3) in-memory arrays holding the pixel arrays in Python and C++; (4) loading the training data into LevelDB, a log-structured merge-tree-based key-value store; and (5) loading the training data into LMDB, a B+tree-based key-value store. The experimental results quantitatively highlight the disadvantage of using normal image files on local file systems to train deep neural networks and demonstrate reliable performance with key-value storage backends. When training a model on the ImageNet dataset, the image file option was more than 17 times slower than the key-value storage option. Along with measurements of training time, this study provides an in-depth analysis of the causes of the performance advantages and disadvantages of each back-end for training deep neural networks. We envision that the provided measurements and analysis will shed light on the optimal way to architect systems for training neural networks in a scalable manner.
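
    A minimal sketch of the key-value option follows (illustrative only, not the paper's Caffe pipeline, which stores protobuf-encoded records): image arrays are serialized into an LMDB environment so that training reads become key lookups in one memory-mapped file instead of many small file-system opens. Dataset shapes, keys, and the map size are assumptions.

    ```python
    # Illustrative LMDB back-end: write image/label pairs once, read by key.
    import lmdb
    import numpy as np
    import pickle

    env = lmdb.open("train_lmdb", map_size=1 << 30)   # 1 GiB map, adjust as needed

    with env.begin(write=True) as txn:
        for i in range(1000):
            img = np.random.randint(0, 256, (3, 32, 32), dtype=np.uint8)  # stand-in for CIFAR-10
            label = i % 10
            txn.put(f"{i:08d}".encode(), pickle.dumps((img, label)))

    # Reading back: one key lookup replaces a small-file open per sample.
    with env.begin() as txn:
        img, label = pickle.loads(txn.get(b"00000042"))
        print(img.shape, label)
    ```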

  16. Community archiving of imaging studies

    NASA Astrophysics Data System (ADS)

    Fritz, Steven L.; Roys, Steven R.; Munjal, Sunita

    1996-05-01

    The quantity of image data created in a large radiology practice has long been a challenge for available archiving technology. Traditional methods of archiving the large quantity of films generated in radiology have relied on warehousing at remote sites, with courier delivery of film files for historical comparisons. A digital community archive, accessible via a wide area network, represents a feasible solution to the problem of archiving digital images from a busy practice. In addition, it affords a physician caring for a patient access to imaging studies performed at a variety of healthcare institutions without the need to repeat studies. Security problems include both network security issues in the WAN environment and access control for patient, physician, and imaging center. The key obstacle to developing a community archive is currently political. Reluctance to participate in a community archive can be reduced by appropriate design of the access mechanisms.

  17. Designing for Peta-Scale in the LSST Database

    NASA Astrophysics Data System (ADS)

    Kantor, J.; Axelrod, T.; Becla, J.; Cook, K.; Nikolaev, S.; Gray, J.; Plante, R.; Nieto-Santisteban, M.; Szalay, A.; Thakar, A.

    2007-10-01

    The Large Synoptic Survey Telescope (LSST), a proposed ground-based 8.4 m telescope with a 10 deg^2 field of view, will generate 15 TB of raw images every observing night. When calibration and processed data are added, the image archive, catalogs, and meta-data will grow by 15 PB yr^{-1} on average. The LSST Data Management System (DMS) must capture, process, store, index, replicate, and provide open access to this data. Alerts must be triggered within 30 s of data acquisition. Doing this in real time at these data volumes will require advances in data management, database, and file system techniques. This paper describes the design of the LSST DMS and emphasizes features for peta-scale data. The LSST DMS will employ a combination of distributed database and file systems, with schema, partitioning, and indexing oriented for parallel operations. Image files are stored in a distributed file system, with references to, and meta-data from, each file stored in the databases. The schema design supports pipeline processing, rapid ingest, and efficient query. Vertical partitioning reduces disk input/output requirements, while horizontal partitioning allows parallel data access using arrays of servers and disks. Indexing is extensive, utilizing both conventional RAM-resident indexes and column-narrow, row-deep tag tables/covering indices that are extracted from tables containing many more attributes. The DMS Data Access Framework is encapsulated in a middleware framework to provide a uniform service interface to all framework capabilities. This framework will provide the automated work-flow, replication, and data analysis capabilities necessary to make data processing and data quality analysis feasible at this scale.
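
    A hedged sketch of the "references in the database, pixels in the file system" pattern described above is given below, using SQLite stand-ins rather than the LSST schema: a relational table holds per-exposure metadata plus the file-system path, keyed by a coarse partitioning column so that queries touch only metadata and pixel data is fetched only for matching rows. All table, column, and path names are illustrative.

    ```python
    # Illustrative metadata catalog: file references in a database table,
    # pixel data left on the (distributed) file system.
    import sqlite3

    con = sqlite3.connect("exposures.db")
    con.execute("""
        CREATE TABLE IF NOT EXISTS exposure (
            exposure_id INTEGER PRIMARY KEY,
            region_id   INTEGER NOT NULL,      -- coarse horizontal-partitioning key
            mjd         REAL    NOT NULL,      -- observation epoch
            filter      TEXT    NOT NULL,
            file_path   TEXT    NOT NULL       -- reference into the file system
        )
    """)
    con.execute("CREATE INDEX IF NOT EXISTS ix_exposure_region ON exposure(region_id)")

    con.execute(
        "INSERT INTO exposure (region_id, mjd, filter, file_path) VALUES (?, ?, ?, ?)",
        (1137, 60321.25, "r", "/dfs/raw/2024-01/exp_0001137.fits"),
    )
    con.commit()

    # The query touches only metadata; pixels are fetched from the file system
    # only for the rows that survive the filter.
    for row in con.execute(
            "SELECT exposure_id, file_path FROM exposure WHERE region_id = ? AND filter = ?",
            (1137, "r")):
        print(row)
    ```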

  18. Preliminary Image Map of the 2007 Harris Fire Perimeter, Barrett Lake Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  19. Preliminary Image Map of the 2007 Santiago Fire Perimeter, Santiago Peak Quadrangle, Orange and Riverside Counties, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  20. Preliminary Image Map of the 2007 Buckweed Fire Perimeter, Green Valley Quadrangle, Los Angeles County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  1. Preliminary Image Map of the 2007 Witch Fire Perimeter, Warners Ranch Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  2. Preliminary Image Map of the 2007 Harris Fire Perimeter, Otay Mesa Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  3. Preliminary Image Map of the 2007 Rice Fire Perimeter, Bonsall Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  4. Preliminary Image Map of the 2007 Poomacha Fire Perimeter, Pechanga Quadrangle, Riverside and San Diego Counties, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  5. Preliminary Image Map of the 2007 Harris Fire Perimeter, Tecate Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  6. Preliminary Image Map of the 2007 Poomacha Fire Perimeter, Temecula Quadrangle, Riverside and San Diego Counties, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  7. Preliminary Image Map of the 2007 Buckweed Fire Perimeter, Agua Dulce Quadrangle, Los Angeles County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  8. Preliminary Image Map of the 2007 Witch Fire Perimeter, San Pasqual Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  9. Preliminary Image Map of the 2007 Buckweed Fire Perimeter, Mint Canyon Quadrangle, Los Angeles County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  10. Preliminary Image Map of the 2007 Witch Fire Perimeter, Escondido Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  11. Preliminary Image Map of the 2007 Poomacha Fire Perimeter, Boucher Hill Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  12. Preliminary Image Map of the 2007 Ammo Fire Perimeter, Margarita Peak Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  13. Preliminary Image Map of the 2007 Witch Fire Perimeter, Ramona Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  14. Preliminary Image Map of the 2007 Ammo Fire Perimeter, San Onofre Bluff Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  15. Preliminary Image Map of the 2007 Santiago Fire Perimeter, Orange Quadrangle, Orange County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  16. Preliminary Image Map of the 2007 Harris Fire Perimeter, Otay Mountain Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  17. Preliminary Image Map of the 2007 Ranch Fire Perimeter, Cobblestone Mountain Quadrangle, Los Angeles and Ventura Counties, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  18. Preliminary Image Map of the 2007 Poomacha Fire Perimeter, Palomar Observatory Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  19. Preliminary Image Map of the 2007 Witch Fire Perimeter, El Cajon Mountain Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  20. Preliminary Image Map of the 2007 Witch and Poomacha Fire Perimeters, Rodriguez Mountain Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  1. Preliminary Image Map of the 2007 Witch Fire Perimeter, Santa Ysabel Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  2. Preliminary Image Map of the 2007 Ammo Fire Perimeter, Las Pulgas Canyon Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  3. Preliminary Image Map of the 2007 Harris Fire Perimeter, Jamul Mountains Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  4. Preliminary Image Map of the 2007 Santiago Fire Perimeter, Lake Forest Quadrangle, Orange County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  5. Preliminary Image Map of the 2007 Cajon Fire Perimeter, San Bernardino North Quadrangle, San Bernardino County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  6. Preliminary Image Map of the 2007 Slide Fire Perimeter, Butler Peak Quadrangle, San Bernardino County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  7. Preliminary Image Map of the 2007 Witch Fire Perimeter, San Vicente Reservoir Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  8. Preliminary Image Map of the 2007 Ammo Fire Perimeter, San Clemente Quadrangle, Orange and San Diego Counties, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  9. Preliminary Image Map of the 2007 Cajon Fire Perimeter, Devore Quadrangle, San Bernardino County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  10. Preliminary Image Map of the 2007 Ranch Fire Perimeter, Fillmore Quadrangle, Ventura County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  11. Preliminary Image Map of the 2007 Ranch Fire Perimeter, Piru Quadrangle, Ventura County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  12. Preliminary Image Map of the 2007 Magic and Buckweed Fire Perimeters, Newhall Quadrangle, Los Angeles County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  13. Preliminary Image Map of the 2007 Harris Fire Perimeter, Dulzura Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  14. Preliminary Image Map of the 2007 Grass Valley Fire Perimeter, Lake Arrowhead Quadrangle, San Bernardino County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  15. Preliminary Image Map of the 2007 Harris Fire Perimeter, Potrero Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  16. Preliminary Image Map of the 2007 Witch and Poomacha Fire Perimeters, Mesa Grande Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  17. Preliminary Image Map of the 2007 Canyon Fire Perimeter, Malibu Beach Quadrangle, Los Angeles County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  18. Preliminary Image Map of the 2007 Santiago Fire Perimeter, Black Star Canyon Quadrangle, Orange, Riverside, and San Bernardino Counties, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  19. Preliminary Image Map of the 2007 Buckweed Fire Perimeter, Warm Springs Mountain Quadrangle, Los Angeles County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  20. Preliminary Image Map of the 2007 Ranch Fire Perimeter, Whitaker Peak Quadrangle, Los Angeles and Ventura Counties, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  1. Preliminary Image Map of the 2007 Poomacha Fire Perimeter, Vail Lake Quadrangle, Riverside and San Diego Counties, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  2. Preliminary Image Map of the 2007 Witch Fire Perimeter, Valley Center Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  3. Preliminary Image Map of the 2007 Santiago Fire Perimeter, Tustin Quadrangle, Orange County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  4. Preliminary Image Map of the 2007 Witch Fire Perimeter, Rancho Santa Fe Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  5. Preliminary Image Map of the 2007 Slide Fire Perimeter, Harrison Mountain Quadrangle, San Bernardino County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  6. Preliminary Image Map of the 2007 Buckweed Fire Perimeter, Sleepy Valley Quadrangle, Los Angeles County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  7. Preliminary Image Map of the 2007 Ranch and Magic Fire Perimeters, Val Verde Quadrangle, Los Angeles and Ventura Counties, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  8. Preliminary Image Map of the 2007 Witch Fire Perimeter, Poway Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  9. Preliminary Image Map of the 2007 Poomacha Fire Perimeter, Pala Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  10. Preliminary Image Map of the 2007 Witch Fire Perimeter, Tule Springs Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  11. Preliminary Image Map of the 2007 Harris Fire Perimeter, Morena Reservoir Quadrangle, San Diego County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  12. Preliminary Image Map of the 2007 Slide Fire Perimeter, Keller Peak Quadrangle, San Bernardino County, California

    USGS Publications Warehouse

    Clark, Perry S.; Scratch, Wendy S.; Bias, Gaylord W.; Stander, Gregory B.; Sexton, Jenne L.; Krawczak, Bridgette J.

    2008-01-01

    In the fall of 2007, wildfires burned out of control in southern California. The extent of these fires encompassed large geographic areas that included a variety of landscapes from urban to wilderness. The U.S. Geological Survey National Geospatial Technical Operations Center (NGTOC) is currently (2008) developing a quadrangle-based 1:24,000-scale image map product. One of the concepts behind the image map product is to provide an updated map in electronic format to assist with emergency response. This image map is one of 55 preliminary image map quadrangles covering the areas burned by the southern California wildfires. Each map is a layered, geo-registered Portable Document Format (.pdf) file. For more information about the layered geo-registered .pdf, see the readme file (http://pubs.usgs.gov/of/2008/1029/downloads/CA_Agua_Dulce_of2008-1029_README.txt). To view the areas affected and the quadrangles mapped in this preliminary project, see the map index (http://pubs.usgs.gov/of/2008/1029/downloads/CA_of2008_1029-1083_index.pdf) provided with this report.

  13. Toyz: A framework for scientific analysis of large datasets and astronomical images

    NASA Astrophysics Data System (ADS)

    Moolekamp, F.; Mamajek, E.

    2015-11-01

    As the size of images and data products derived from astronomical data continues to increase, new tools are needed to visualize and interact with that data in a meaningful way. Motivated by our own astronomical images taken with the Dark Energy Camera (DECam), we present Toyz, an open source Python package for viewing and analyzing images and data stored on a remote server or cluster. Users connect to the Toyz web application via a web browser, making it a convenient tool for students to visualize and interact with astronomical data without having to install any software on their local machines. In addition, it provides researchers with an easy-to-use tool that allows them to browse the files on a server, quickly view very large images (>2 GB) taken with DECam and other cameras with a large FOV, and create their own visualization tools that can be added as extensions to the default Toyz framework.

  14. Single mimivirus particles intercepted and imaged with an X-ray laser (CXIDB ID 1)

    DOE Data Explorer

    Seibert, M. Marvin; Ekeberg, Tomas; Maia, Filipe R.N.C.

    2011-02-02

    These are the files used to reconstruct the images in the paper "Single Mimivirus particles intercepted and imaged with an X-ray laser". Besides the diffracted intensities, the Hawk configuration files used for the reconstructions are also provided. The files from CXIDB ID 1 are the pattern and configuration files for the pattern shown in Figure 2a of the paper.

  15. Single mimivirus particles intercepted and imaged with an X-ray laser (CXIDB ID 2)

    DOE Data Explorer

    Seibert, M. Marvin; Ekeberg, Tomas

    2011-02-02

    These are the files used to reconstruct the images in the paper "Single Mimivirus particles intercepted and imaged with an X-ray laser". Besides the diffracted intensities, the Hawk configuration files used for the reconstructions are also provided. The files from CXIDB ID 2 are the pattern and configuration files for the pattern shown in Figure 2b of the paper.

  16. Arkansas and Louisiana Aeromagnetic and Gravity Maps and Data - A Website for Distribution of Data

    USGS Publications Warehouse

    Bankey, Viki; Daniels, David L.

    2008-01-01

    This report contains digital data, image files, and text files describing data formats for aeromagnetic and gravity data used to compile the State aeromagnetic and gravity maps of Arkansas and Louisiana. The digital files include grids, images, ArcInfo, and Geosoft compatible files. In some of the data folders, ASCII files with the extension 'txt' describe the format and contents of the data files. Read the 'txt' files before using the data files.

  17. A JPEG backward-compatible HDR image compression

    NASA Astrophysics Data System (ADS)

    Korshunov, Pavel; Ebrahimi, Touradj

    2012-10-01

    High Dynamic Range (HDR) imaging is expected to become one of the technologies that could shape the next generation of consumer digital photography. Manufacturers are rolling out cameras and displays capable of capturing and rendering HDR images. The popularity and full public adoption of HDR content is however hindered by the lack of standards in evaluation of quality, file formats, and compression, as well as a large legacy base of Low Dynamic Range (LDR) displays that are unable to render HDR. To facilitate widespread HDR usage, the backward compatibility of HDR technology with commonly used legacy image storage, rendering, and compression is necessary. Although many tone-mapping algorithms were developed for generating viewable LDR images from HDR content, there is no consensus on which algorithm to use and under which conditions. This paper, via a series of subjective evaluations, demonstrates the dependency of perceived quality of the tone-mapped LDR images on environmental parameters and image content. Based on the results of the subjective tests, it proposes to extend the JPEG file format, as the most popular image format, in a backward-compatible manner to also handle HDR pictures. To this end, the paper provides an architecture to achieve such backward compatibility with JPEG and demonstrates the efficiency of a simple implementation of this framework when compared to state-of-the-art HDR image compression.
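
    The layered idea described above can be sketched in a few lines: decompose an HDR image into a tone-mapped LDR base layer (what a legacy JPEG decoder would display) plus a residual layer that an HDR-aware decoder can use to recover the original values. This is a generic illustration only, not the architecture proposed in the paper; the Reinhard-style operator and the log-ratio residual are assumptions made for the sketch, and the actual JPEG encoding of the base layer is omitted.

```python
import numpy as np

def tonemap_reinhard(hdr, eps=1e-6):
    """Simple global Reinhard-style operator: maps [0, inf) into [0, 1)."""
    return hdr / (1.0 + hdr + eps)

def split_layers(hdr):
    """Split an HDR image into an LDR base layer plus a residual layer.

    The base layer is what a legacy JPEG decoder would show; the residual
    (here a log ratio) would be carried in an extra application segment so
    that an HDR-aware decoder can reconstruct the original values.
    """
    base = tonemap_reinhard(hdr)                       # LDR base, [0, 1)
    base_8bit = np.clip(np.round(base * 255), 1, 255)  # avoid division by zero
    residual = np.log((hdr + 1e-6) / (base_8bit / 255.0))
    return base_8bit.astype(np.uint8), residual

def reconstruct(base_8bit, residual):
    """Invert split_layers (up to the 8-bit quantization of the base)."""
    return (base_8bit / 255.0) * np.exp(residual)

# toy check on a synthetic HDR image with a ~10^4:1 dynamic range
hdr = np.random.uniform(1e-3, 10.0, size=(64, 64))
base, res = split_layers(hdr)
print(np.allclose(reconstruct(base, res), hdr, rtol=1e-3))
```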

  18. Spotlight-8 Image Analysis Software

    NASA Technical Reports Server (NTRS)

    Klimek, Robert; Wright, Ted

    2006-01-01

    Spotlight is a cross-platform GUI-based software package designed to perform image analysis on sequences of images generated by combustion and fluid physics experiments run in a microgravity environment. Spotlight can perform analysis on a single image in an interactive mode or perform analysis on a sequence of images in an automated fashion. Image processing operations can be employed to enhance the image before various statistics and measurement operations are performed. An arbitrarily large number of objects can be analyzed simultaneously with independent areas of interest. Spotlight saves results in a text file that can be imported into other programs for graphing or further analysis. Spotlight can be run on Microsoft Windows, Linux, and Apple OS X platforms.

  19. A Pyramid Scheme for Constructing Geologic Maps on Geobrowsers

    NASA Astrophysics Data System (ADS)

    Whitmeyer, S. J.; de Paor, D. G.; Daniels, J.; Jeremy, N.; Michael, R.; Santangelo, B.

    2008-12-01

    Hundreds of geologic maps have been draped onto Google Earth (GE) using the ground overlay tag of Keyhole Markup Language (KML) and dozens have been published on academic and survey web pages as downloadable KML or KMZ (zipped KML) files. The vast majority of these are small KML docs that link to single, large - often very large - image files (JPEGs, TIFFs, etc.). Files that exceed 50 MB in size defeat the purpose of GE as an interactive, responsive, and therefore fast, virtual terrain medium. KML supports super-overlays (a.k.a. image pyramids), which break large graphic files into manageable tiles that load only when they are in the visible region at a sufficient level of detail (LOD), and several automatic tile-generating applications have been written. The process of exporting map data from applications such as ArcGIS® to KML format is becoming more manageable but still poses challenges. Complications arise, for example, because of differences between grid-north at a point on a map and true north at the equivalent location on the virtual globe. In our recent field season, we devised ways of overcoming many of these obstacles in order to generate responsive, pannable, zoomable geologic maps in which data are layered in a pyramid structure similar to the image pyramid used for the default GE terrain. The structure of our KML code for each level of the pyramid is self-similar: (i) check whether the current tile is in the visible region, (ii) if so, render the current overlay, (iii) add the current data level, and (iv) using four network links, check the visibility and LOD of four nested tiles. By using this pyramid structure we provide the user with access to geologic and map data at multiple levels of observation. For example, when the viewpoint is distant, regional structures and stratigraphy (e.g. lithological groups and terrane boundaries) are visible. As the user zooms to lower elevations, formations and ultimately individual outcrops come into focus. The pyramid structure is ideally suited to geologic data, which tend to be unevenly exposed across the Earth's surface.
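
    A minimal sketch of the self-similar tile structure described above is shown below: each tile carries a Region with level-of-detail limits, a GroundOverlay for its own image, and four NetworkLinks that pull in the nested child tiles only when they become visible. The tile naming scheme, bounding boxes, and LOD pixel thresholds are placeholder assumptions, not the values used by the authors.

```python
def tile_kml(name, north, south, east, west, level, max_level, img_url_fmt):
    """Emit one level of a KML super-overlay: a Region with LOD limits,
    a GroundOverlay for this tile, and four NetworkLinks to the nested
    child tiles (the same structure repeated one level down)."""
    min_lod = 128 if level > 0 else 0          # tile becomes visible at this pixel size
    max_lod = -1 if level == max_level else 512
    kml = f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2"><Document>
  <Region>
    <LatLonAltBox><north>{north}</north><south>{south}</south>
      <east>{east}</east><west>{west}</west></LatLonAltBox>
    <Lod><minLodPixels>{min_lod}</minLodPixels><maxLodPixels>{max_lod}</maxLodPixels></Lod>
  </Region>
  <GroundOverlay>
    <Icon><href>{img_url_fmt.format(name=name)}</href></Icon>
    <LatLonBox><north>{north}</north><south>{south}</south>
      <east>{east}</east><west>{west}</west></LatLonBox>
  </GroundOverlay>
"""
    if level < max_level:
        mid_lat, mid_lon = (north + south) / 2, (east + west) / 2
        children = [("nw", north, mid_lat, mid_lon, west), ("ne", north, mid_lat, east, mid_lon),
                    ("sw", mid_lat, south, mid_lon, west), ("se", mid_lat, south, east, mid_lon)]
        for suffix, n, s, e, w in children:
            kml += f"""  <NetworkLink>
    <Region><LatLonAltBox><north>{n}</north><south>{s}</south>
      <east>{e}</east><west>{w}</west></LatLonAltBox>
      <Lod><minLodPixels>128</minLodPixels></Lod></Region>
    <Link><href>{name}_{suffix}.kml</href>
      <viewRefreshMode>onRegion</viewRefreshMode></Link>
  </NetworkLink>
"""
    return kml + "</Document></kml>\n"

print(tile_kml("tile_0", 38.5, 38.0, -78.0, -78.5, 0, 2, "{name}.png"))
```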

  20. Characteristics of Urbanization in Five Watersheds of Anchorage, Alaska: Geographic Information System Data

    USGS Publications Warehouse

    Moran, Edward H.

    2002-01-01

    The report contains environmental and urban geographic information system data for 14 sites in 5 watersheds in Anchorage, Alaska. These sites were examined during summer in 1999 and 2000 to determine effects of urbanization on water quality. The data sets are Environmental Systems Research Institute, Inc., shapefiles, coverages, and images. Also included are an elevation grid and a triangulated irregular network. Although the data are intended for users with advanced geographic information system capabilities, simple images of the data also are available. An ArcView 3.2 project, an ArcGIS project, and 16 ArcExplorer2 projects are linked to the PDF file-based report. Some of these coverages are large files over 10 MB. The largest coverage, impervious cover, is 208 MB.

  1. IIPImage: Large-image visualization

    NASA Astrophysics Data System (ADS)

    Pillay, Ruven

    2014-08-01

    IIPImage is an advanced, high-performance, feature-rich image server system that enables online access to full-resolution floating-point (as well as other bit depth) images at terabyte scales. Paired with the VisiOmatic (ascl:1408.010) celestial image viewer, the system can comfortably handle gigapixel-size images as well as advanced image features such as 8-, 16-, and 32-bit depths, CIELAB colorimetric images, and scientific imagery such as multispectral images. Streaming is tile-based, which enables viewing, navigating, and zooming in real time around gigapixel-size images. Source images can be in either TIFF or JPEG2000 format. Whole images or regions within images can also be rapidly and dynamically resized and exported by the server from a single source image without the need to store multiple files in various sizes.

  2. Automatic image database generation from CAD for 3D object recognition

    NASA Astrophysics Data System (ADS)

    Sardana, Harish K.; Daemi, Mohammad F.; Ibrahim, Mohammad K.

    1993-06-01

    The development and evaluation of Multiple-View 3-D object recognition systems is based on a large set of model images. Due to the various advantages of using CAD, it is becoming more and more practical to use existing CAD data in computer vision systems. Current PC-level CAD systems are capable of providing physical image modelling and rendering involving positional variations in cameras, light sources, etc. We have formulated a modular scheme for automatic generation of various aspects (views) of the objects in a model-based 3-D object recognition system. These views are generated at desired orientations on the unit Gaussian sphere. With a suitable network file sharing system (NFS), the images can be stored directly in a database located on a file server. This paper presents the image modelling solutions using CAD in relation to the multiple-view approach. Our modular scheme for data conversion and automatic image database storage for such a system is discussed. We have used this approach in 3-D polyhedron recognition. An overview of the results, the advantages and limitations of using CAD data, and conclusions from using such a scheme are also presented.
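
    One common way to place the desired view orientations on the unit Gaussian sphere is a Fibonacci lattice, which gives roughly uniform coverage; the sketch below generates such viewing directions and marks where a hypothetical CAD rendering call would produce each aspect image. This is an assumed sampling scheme for illustration, not necessarily the one used by the authors.

```python
import math

def fibonacci_sphere(n):
    """Return n roughly uniformly spaced unit vectors (viewing directions)
    on the Gaussian sphere using the Fibonacci-lattice construction."""
    golden = math.pi * (3.0 - math.sqrt(5.0))   # golden angle in radians
    points = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n           # evenly spaced in z
        r = math.sqrt(1.0 - z * z)
        theta = golden * i
        points.append((r * math.cos(theta), r * math.sin(theta), z))
    return points

# Each direction would be handed to the CAD renderer (hypothetical call)
# to produce one aspect image for the model database.
for view_id, (x, y, z) in enumerate(fibonacci_sphere(12)):
    filename = f"model_aspect_{view_id:03d}.png"   # stored on the NFS-mounted database
    # render_cad_model(camera_direction=(x, y, z), output=filename)  # placeholder
    print(view_id, round(x, 3), round(y, 3), round(z, 3), filename)
```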

  3. 78 FR 59743 - Bureau of Consular Affairs; Registration for the Diversity Immigrant (DV-2015) Visa Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-27

    ... already a U.S. citizen or a Lawful Permanent Resident, but you will not be penalized if you do. Group... specifications: Image File Format: The image must be in the Joint Photographic Experts Group (JPEG) format. Image... in the Joint Photographic Experts Group (JPEG) format. Image File Size: The maximum image file size...

  4. Effect of combined digital imaging parameters on endodontic file measurements.

    PubMed

    de Oliveira, Matheus Lima; Pinto, Geraldo Camilo de Souza; Ambrosano, Glaucia Maria Bovi; Tosoni, Guilherme Monteiro

    2012-10-01

    This study assessed the effect of the combination of a dedicated endodontic filter, spatial resolution, and contrast resolution on the determination of endodontic file lengths. Forty extracted single-rooted teeth were x-rayed with K-files (ISO size 10 and 15) in the root canals. Images were acquired using the VistaScan system (Dürr Dental, Beitigheim-Bissingen, Germany) under different combining parameters of spatial resolution (10 and 25 line pairs per millimeter [lp/mm]) and contrast resolution (8- and 16-bit depths). Subsequently, a dedicated endodontic filter was applied on the 16-bit images, creating 2 additional parameters. Six observers measured the length of the endodontic files in the root canals using the software that accompanies the system. The mean values of the actual file lengths and the measurements of the radiographic images were submitted to 1-way analysis of variance and the Tukey test at a level of significance of 5%. The intraobserver reproducibility was assessed by the intraclass correlation coefficient. All combined image parameters showed excellent intraobserver agreement with intraclass correlation coefficient means higher than 0.98. The imaging parameter of 25 lp/mm and 16 bit associated with the use of the endodontic filter did not differ significantly from the actual file lengths when both file sizes were analyzed together or separately (P > .05). When the size 15 file was evaluated separately, only 8-bit images differed significantly from the actual file lengths (P ≤ .05). The combination of an endodontic filter with high spatial resolution and high contrast resolution is recommended for the determination of file lengths when using storage phosphor plates. Copyright © 2012 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  5. MATtrack: A MATLAB-Based Quantitative Image Analysis Platform for Investigating Real-Time Photo-Converted Fluorescent Signals in Live Cells.

    PubMed

    Courtney, Jane; Woods, Elena; Scholz, Dimitri; Hall, William W; Gautier, Virginie W

    2015-01-01

    We introduce here MATtrack, an open source MATLAB-based computational platform developed to process multi-Tiff files produced by a photo-conversion time lapse protocol for live cell fluorescent microscopy. MATtrack automatically performs a series of steps required for image processing, including extraction and import of numerical values from Multi-Tiff files, red/green image classification using gating parameters, noise filtering, background extraction, contrast stretching and temporal smoothing. MATtrack also integrates a series of algorithms for quantitative image analysis enabling the construction of mean and standard deviation images, clustering and classification of subcellular regions and injection point approximation. In addition, MATtrack features a simple user interface, which enables monitoring of Fluorescent Signal Intensity in multiple Regions of Interest, over time. The latter encapsulates a region growing method to automatically delineate the contours of Regions of Interest selected by the user, and performs background and regional Average Fluorescence Tracking, and automatic plotting. Finally, MATtrack computes convenient visualization and exploration tools including a migration map, which provides an overview of the protein intracellular trajectories and accumulation areas. In conclusion, MATtrack is an open source MATLAB-based software package tailored to facilitate the analysis and visualization of large data files derived from real-time live cell fluorescent microscopy using photoconvertible proteins. It is flexible, user friendly, compatible with Windows, Mac, and Linux, and a wide range of data acquisition software. MATtrack is freely available for download at eleceng.dit.ie/courtney/MATtrack.zip.
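
    The core measurement MATtrack reports, a background-corrected mean fluorescence inside a region of interest tracked over time, can be sketched with plain NumPy on a synthetic image stack, as below. This is a simplified stand-in (fixed masks, no region growing, no temporal smoothing), not MATtrack's MATLAB implementation.

```python
import numpy as np

def roi_mean_trace(stack, roi_mask, background_mask):
    """Background-corrected mean fluorescence inside an ROI, per frame.

    stack:           (t, y, x) array of frames
    roi_mask:        boolean (y, x) mask of the region of interest
    background_mask: boolean (y, x) mask of a signal-free region
    """
    roi = stack[:, roi_mask].mean(axis=1)
    background = stack[:, background_mask].mean(axis=1)
    return roi - background

# synthetic time-lapse: a bright square whose signal decays over 50 frames
t, h, w = 50, 64, 64
rng = np.random.default_rng(0)
stack = rng.normal(100.0, 5.0, size=(t, h, w))             # camera background
decay = 500.0 * np.exp(-np.arange(t) / 20.0)
stack[:, 20:30, 20:30] += decay[:, None, None]             # photo-converted signal

roi = np.zeros((h, w), dtype=bool); roi[20:30, 20:30] = True
bg = np.zeros((h, w), dtype=bool);  bg[50:60, 50:60] = True

trace = roi_mean_trace(stack, roi, bg)
print(trace[:5].round(1))   # decaying, background-corrected intensities
```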

  6. MATtrack: A MATLAB-Based Quantitative Image Analysis Platform for Investigating Real-Time Photo-Converted Fluorescent Signals in Live Cells

    PubMed Central

    Courtney, Jane; Woods, Elena; Scholz, Dimitri; Hall, William W.; Gautier, Virginie W.

    2015-01-01

    We introduce here MATtrack, an open source MATLAB-based computational platform developed to process multi-Tiff files produced by a photo-conversion time lapse protocol for live cell fluorescent microscopy. MATtrack automatically performs a series of steps required for image processing, including extraction and import of numerical values from Multi-Tiff files, red/green image classification using gating parameters, noise filtering, background extraction, contrast stretching and temporal smoothing. MATtrack also integrates a series of algorithms for quantitative image analysis enabling the construction of mean and standard deviation images, clustering and classification of subcellular regions and injection point approximation. In addition, MATtrack features a simple user interface, which enables monitoring of Fluorescent Signal Intensity in multiple Regions of Interest, over time. The latter encapsulates a region growing method to automatically delineate the contours of Regions of Interest selected by the user, and performs background and regional Average Fluorescence Tracking, and automatic plotting. Finally, MATtrack computes convenient visualization and exploration tools including a migration map, which provides an overview of the protein intracellular trajectories and accumulation areas. In conclusion, MATtrack is an open source MATLAB-based software package tailored to facilitate the analysis and visualization of large data files derived from real-time live cell fluorescent microscopy using photoconvertible proteins. It is flexible, user friendly, compatible with Windows, Mac, and Linux, and a wide range of data acquisition software. MATtrack is freely available for download at eleceng.dit.ie/courtney/MATtrack.zip. PMID:26485569

  7. Mars Global Digital Dune Database: MC2-MC29

    USGS Publications Warehouse

    Hayward, Rosalyn K.; Mullins, Kevin F.; Fenton, L.K.; Hare, T.M.; Titus, T.N.; Bourke, M.C.; Colaprete, Anthony; Christensen, P.R.

    2007-01-01

    Introduction: The Mars Global Digital Dune Database presents data and describes the methodology used in creating the database. The database provides a comprehensive and quantitative view of the geographic distribution of moderate- to large-size dune fields from 65° N to 65° S latitude and encompasses ~550 dune fields. The database will be expanded to cover the entire planet in later versions. Although we have attempted to include all dune fields between 65° N and 65° S, some have likely been excluded for two reasons: 1) incomplete THEMIS IR (daytime) coverage may have caused us to exclude some moderate- to large-size dune fields, or 2) the resolution of THEMIS IR coverage (100 m/pixel) certainly caused us to exclude smaller dune fields. The smallest dune fields in the database are ~1 km2 in area. While the moderate to large dune fields are likely to constitute the largest compilation of sediment on the planet, smaller stores of dune sediment are likely to be found elsewhere via higher-resolution data. Thus, it should be noted that our database excludes all small dune fields and some moderate to large dune fields as well. Therefore the absence of mapped dune fields does not mean that such dune fields do not exist and is not intended to imply a lack of saltating sand in other areas. Where availability and quality of THEMIS visible (VIS) or Mars Orbiter Camera narrow angle (MOC NA) images allowed, we classified dunes and included dune slipface measurements, which were derived from gross dune morphology and represent the prevailing wind direction at the last time of significant dune modification. For dunes located within craters, the azimuth from crater centroid to dune field centroid was calculated. Output from a general circulation model (GCM) is also included. In addition to polygons locating dune fields, the database includes over 1800 selected Thermal Emission Imaging System (THEMIS) infrared (IR), THEMIS visible (VIS) and Mars Orbiter Camera Narrow Angle (MOC NA) images that were used to build the database. The database is presented in a variety of formats. It is presented as a series of ArcReader projects which can be opened using the free ArcReader software. The latest version of ArcReader can be downloaded at http://www.esri.com/software/arcgis/arcreader/download.html. The database is also presented in ArcMap projects. The ArcMap projects allow fuller use of the data, but require ESRI ArcMap software. Multiple projects were required to accommodate the large number of images needed. A fuller description of the projects can be found in the Dunes_ReadMe file and the ReadMe_GIS file in the Documentation folder. For users who prefer to create their own projects, the data is available in ESRI shapefile and geodatabase formats, as well as the open Geographic Markup Language (GML) format. A printable map of the dunes and craters in the database is available as a Portable Document Format (PDF) document. The map is also included as a JPEG file. ReadMe files are available in PDF and ASCII (.txt) files. Tables are available in both Excel (.xls) and ASCII formats.
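
    One of the derived quantities mentioned above, the azimuth from a crater centroid to a dune-field centroid, can be computed with the standard initial-bearing formula on a sphere; the sketch below shows that calculation with made-up coordinates. Whether the database used this spherical formula or a planar approximation is an assumption here.

```python
import math

def initial_bearing(lat1, lon1, lat2, lon2):
    """Azimuth (degrees clockwise from north) from point 1 to point 2
    on a sphere, using the standard initial-bearing formula."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

# hypothetical crater and dune-field centroids (planetocentric lat/lon)
print(round(initial_bearing(-42.0, 38.0, -42.3, 38.5), 1))
```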

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Donald F.; Schulz, Carl; Konijnenburg, Marco

    High-resolution Fourier transform ion cyclotron resonance (FT-ICR) mass spectrometry imaging enables the spatial mapping and identification of biomolecules from complex surfaces. The need for long time-domain transients, and thus large raw file sizes, results in a large amount of raw data (“big data”) that must be processed efficiently and rapidly. This can be compounded by large-area imaging and/or high-spatial-resolution imaging. For FT-ICR, data processing and data reduction must not compromise the high mass resolution afforded by the mass spectrometer. The continuous-mode “Mosaic Datacube” approach allows high mass resolution visualization (0.001 Da) of mass spectrometry imaging data, but requires additional processing as compared to feature-based processing. We describe the use of distributed computing for processing of FT-ICR MS imaging datasets with generation of continuous-mode Mosaic Datacubes for high mass resolution visualization. An eight-fold improvement in processing time is demonstrated using a Dutch nationally available cloud service.
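
    The record describes distributing the reduction of many long time-domain transients across workers. A minimal local analogue uses Python's multiprocessing pool, with a placeholder per-pixel routine (FFT magnitude plus a crude threshold) standing in for the real calibration and Mosaic Datacube generation; nothing below reflects the actual cloud deployment.

```python
import numpy as np
from multiprocessing import Pool

def process_transient(args):
    """Turn one time-domain transient into a reduced mass spectrum.

    Here the 'processing' is just an FFT magnitude plus a crude threshold;
    the real pipeline (apodization, calibration, datacube binning) is far
    more involved.
    """
    pixel_id, transient = args
    spectrum = np.abs(np.fft.rfft(transient))
    peaks = np.flatnonzero(spectrum > 5.0 * spectrum.mean())   # keep strong bins only
    return pixel_id, peaks, spectrum[peaks]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # one synthetic transient per imaging pixel
    jobs = [(i, rng.normal(size=4096) + np.sin(np.arange(4096) * 0.3 * (i + 1)))
            for i in range(64)]
    with Pool(processes=4) as pool:
        for pixel_id, peaks, heights in pool.map(process_transient, jobs):
            print(pixel_id, len(peaks))
```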

  9. Imaging mass spectrometry data reduction: automated feature identification and extraction.

    PubMed

    McDonnell, Liam A; van Remoortere, Alexandra; de Velde, Nico; van Zeijl, René J M; Deelder, André M

    2010-12-01

    Imaging MS now enables the parallel analysis of hundreds of biomolecules, spanning multiple molecular classes, which allows tissues to be described by their molecular content and distribution. When combined with advanced data analysis routines, tissues can be analyzed and classified based solely on their molecular content. Such molecular histology techniques have been used to distinguish regions with differential molecular signatures that could not be distinguished using established histologic tools. However, its potential to provide an independent, complementary analysis of clinical tissues has been limited by the very large file sizes and large number of discrete variables associated with imaging MS experiments. Here we demonstrate data reduction tools, based on automated feature identification and extraction, for peptide, protein, and lipid imaging MS, using multiple imaging MS technologies, that reduce data loads and the number of variables by >100×, and that highlight highly-localized features that can be missed using standard data analysis strategies. It is then demonstrated how these capabilities enable multivariate analysis on large imaging MS datasets spanning multiple tissues. Copyright © 2010 American Society for Mass Spectrometry. Published by Elsevier Inc. All rights reserved.
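
    The feature-identification-then-extraction idea can be illustrated compactly: pick peaks from a mean spectrum and retain only those channels for every pixel, which is where the large reduction factors come from. The sketch below uses scipy.signal.find_peaks on synthetic data; the prominence threshold and data shapes are arbitrary placeholders, not the authors' settings.

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(2)
n_pixels, n_channels = 500, 20000

# synthetic imaging MS dataset: noise plus a handful of shared peaks
data = rng.normal(0.0, 1.0, size=(n_pixels, n_channels))
for c in (1500, 4200, 9800, 15000):
    data[:, c - 2:c + 3] += rng.uniform(5, 20, size=(n_pixels, 5))

# feature identification on the dataset mean spectrum ...
mean_spectrum = data.mean(axis=0)
peaks, _ = find_peaks(mean_spectrum, prominence=3.0)

# ... then feature extraction: keep only the picked channels for every pixel
reduced = data[:, peaks]
print(f"kept {reduced.shape[1]} of {n_channels} channels "
      f"({n_channels / max(reduced.shape[1], 1):.0f}x reduction)")
```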

  10. 3D for Geosciences: Interactive Tangibles and Virtual Models

    NASA Astrophysics Data System (ADS)

    Pippin, J. E.; Matheney, M.; Kitsch, N.; Rosado, G.; Thompson, Z.; Pierce, S. A.

    2016-12-01

    Point cloud processing provides a method of studying and modelling geologic features relevant to geoscience systems and processes. Here, software including Skanect, MeshLab, Blender, PDAL, and PCL is used in conjunction with 3D scanning hardware, including a Structure scanner and a Kinect camera, to create and analyze point cloud images of small-scale topography, karst features, tunnels, and structures at high resolution. This project successfully scanned internal karst features ranging from small stalactites to large rooms, as well as an external waterfall feature. For comparison purposes, multiple scans of the same object were merged into single object files both automatically, using commercial software, and manually, using open source libraries and code. Files in .ply format were manually converted into numeric data sets and analyzed for similar regions between files in order to match them together. A numeric process can be assumed to be more powerful and efficient than the manual method; however, it could lack other useful features that GUIs may have. The digital models have applications in mining as an efficient means of replacing topographic functions such as measuring distances and areas. Additionally, it is possible to make simulation models such as drilling templates and calculations related to 3D spaces. Advantages of using the methods described here for these procedures include the relatively quick time to obtain data and the easy transport of the equipment. With regard to open-pit mining, georeferencing scan data to interactive maps would make precise 3D images of large surfaces a high-value tool. The digital 3D images obtained from scans may be saved as printable files to create tangible, physical models based on scientific information, as well as digital "worlds" that can be navigated virtually. The data, models, and algorithms explored here can be used to convey complex scientific ideas to a range of professionals and audiences.
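
    The manual route mentioned above, turning .ply files into plain numeric arrays, amounts to reading the ASCII header and vertex list; a minimal parser and a distance measurement are sketched below. It assumes an ASCII PLY whose vertex element lists x, y, z as its first three properties, and the file name is hypothetical.

```python
import numpy as np

def read_ascii_ply_vertices(path):
    """Read the vertex coordinates from a simple ASCII PLY file.

    Assumes the file is ASCII and that x, y, z are the first three
    properties of the 'vertex' element (true for many scanner exports,
    but not guaranteed in general).
    """
    with open(path) as f:
        n_vertices = 0
        for line in f:                        # parse the header
            if line.startswith("element vertex"):
                n_vertices = int(line.split()[-1])
            elif line.strip() == "end_header":
                break
        rows = [next(f).split()[:3] for _ in range(n_vertices)]
    return np.array(rows, dtype=float)

# e.g. measure the straight-line distance between two picked vertices
# verts = read_ascii_ply_vertices("scan.ply")          # hypothetical file
# print(np.linalg.norm(verts[120] - verts[4567]))
```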

  11. Analyzing microtomography data with Python and the scikit-image library.

    PubMed

    Gouillart, Emmanuelle; Nunez-Iglesias, Juan; van der Walt, Stéfan

    2017-01-01

    The exploration and processing of images is a vital aspect of the scientific workflows of many X-ray imaging modalities. Users require tools that combine interactivity, versatility, and performance. scikit-image is an open-source image processing toolkit for the Python language that supports a large variety of file formats and is compatible with 2D and 3D images. The toolkit exposes a simple programming interface, with thematic modules grouping functions according to their purpose, such as image restoration, segmentation, and measurements. scikit-image users benefit from a rich scientific Python ecosystem that contains many powerful libraries for tasks such as visualization or machine learning. scikit-image combines a gentle learning curve, versatile image processing capabilities, and the scalable performance required for the high-throughput analysis of X-ray imaging data.
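
    In the spirit of the record, a short scikit-image pipeline on a synthetic slice is sketched below: Otsu thresholding and small-object removal (segmentation), then labeling and region measurements. The synthetic image and the size threshold are placeholders, not values from the paper.

```python
import numpy as np
from skimage import filters, measure, morphology

# synthetic microtomography-like slice: bright grains on a dark matrix
rng = np.random.default_rng(3)
image = rng.normal(0.2, 0.05, size=(256, 256))
yy, xx = np.mgrid[0:256, 0:256]
for cy, cx, r in [(60, 70, 20), (150, 180, 30), (200, 60, 15)]:
    image[(yy - cy) ** 2 + (xx - cx) ** 2 < r ** 2] += 0.6

# segmentation and measurement, mirroring the toolkit's thematic modules
threshold = filters.threshold_otsu(image)
binary = morphology.remove_small_objects(image > threshold, min_size=50)
labels = measure.label(binary)
for region in measure.regionprops(labels):
    print(region.label, region.area, tuple(round(c, 1) for c in region.centroid))
```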

  12. NVST Data Archiving System Based On FastBit NoSQL Database

    NASA Astrophysics Data System (ADS)

    Liu, Ying-bo; Wang, Feng; Ji, Kai-fan; Deng, Hui; Dai, Wei; Liang, Bo

    2014-06-01

    The New Vacuum Solar Telescope (NVST) is a 1-meter vacuum solar telescope that aims to observe the fine structures of active regions on the Sun. The main tasks of the NVST are high resolution imaging and spectral observations, including measurements of the solar magnetic field. The NVST has collected more than 20 million FITS files since it began routine observations in 2012 and produces up to 120 thousand files in a day. Given the large number of files, the effective archiving and retrieval of files becomes a critical and urgent problem. In this study, we implement a new data archiving system for the NVST based on the FastBit Not Only Structured Query Language (NoSQL) database. Compared to a relational database (e.g., MySQL, My Structured Query Language), the FastBit database shows distinct advantages in indexing and querying performance. In a large-scale database of 40 million records, the multi-field combined query response time of the FastBit database is about 15 times faster, fully meeting the requirements of the NVST. Our study offers a new approach to massive astronomical data archiving and can contribute to the design of data management systems for other astronomical telescopes.

  13. Adding and Deleting Images

    EPA Pesticide Factsheets

    Images are added via the Drupal WebCMS Editor. Once an image is uploaded onto a page, it is available via the Library and your files. You can edit the metadata, delete the image permanently, and/or replace images on the Files tab.

  14. Early Detection | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"171","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Early Detection Research Group Homepage Logo","field_file_image_title_text[und][0][value]":"Early Detection Research Group Homepage Logo","field_folder[und]":"15"},"type":"media","field_deltas":{"1":{"format":"default","field_file_image_alt_text[und][0][value]":"Early

  15. Main image file tape description

    USGS Publications Warehouse

    Warriner, Howard W.

    1980-01-01

    This Main Image File Tape document defines the data content and file structure of the Main Image File Tape (MIFT) produced by the EROS Data Center (EDC). This document also defines an INQUIRY tape, which is just a subset of the MIFT. The format of the INQUIRY tape is identical to the MIFT except for two records; therefore, with the exception of these two records (described elsewhere in this document), every remark made about the MIFT is true for the INQUIRY tape.

  16. VizieR Online Data Catalog: H2CO production in HD 163296 (Carney+)

    NASA Astrophysics Data System (ADS)

    Carney, M. T.; Hogerheijde, M. R.; Loomis, R. A.; Salinas, V. N.; Oberg, K. I.; Qi, C.; Wilner, D. J.

    2017-07-01

    The FITS files contain data cubes for H2CO(303-202), H2CO(322-221), H2CO(321-220), and C18O(2-1), and a 2D image of the 1.3 mm continuum. The observations were taken with the Atacama Large Millimeter/submillimeter Array (ALMA). The formaldehyde and 1.3 mm continuum data have a spatial resolution of 0.5". The C18O(2-1) data are part of the ALMA Science Verification data set released for HD 163296, with an angular resolution of 0.8". (2 data files).

  17. "Fahrenheit 9/11" in the Classroom

    ERIC Educational Resources Information Center

    Dahlgren, Robert L.

    2009-01-01

    The polarized political mood engendered by the most sharply partisan Presidential election campaigns in recent memory has had an especially deleterious effect on the image of public education. This increased scrutiny has largely fallen on the shoulders of rank and file teachers who now face the most precarious moment in terms of job security since…

  18. Drainage Algorithm for Geospatial Knowledge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2006-08-15

    The Pacific Northwest National Laboratory (PNNL) has developed a prototype stream extraction algorithm that semi-automatically extracts and characterizes streams using a variety of multisensor imagery and digital terrain elevation data (DTED). The system is currently optimized for three types of single-band imagery: radar, visible, and thermal. Method of solution: DRAGON (1) classifies pixels into clumps of water objects based on the classification of water pixels by spectral signatures and neighborhood relationships, (2) uses morphology operations (erosion and dilation) to separate out large lakes (or embayments), isolated lakes, ponds, wide rivers, and narrow rivers, and (3) translates the river objects into vector objects. In detail, the process can be broken down into the following steps. A. Water pixels are initially identified using range and slope values (if an optional DEM file is available). B. Erode to the distance that defines a large water body and then dilate back. The resulting mask can be used to identify large lake and embayment objects, which are then removed from the image. Since this operation can be time consuming, it is only performed if a simple test (i.e., a large box containing only water pixels can be found somewhere in the image) indicates that a large water body is present. C. All water pixels are 'clumped' (in Imagine terminology, clumping is when touching pixels of a common classification are connected), and clumps that do not contain pure water pixels (e.g., dark cloud shadows) are removed. D. The resulting true water pixels are clumped, and water objects that are too small (e.g., ponds) or isolated lakes (i.e., isolated objects with a small compactness ratio) are removed. Note that at this point lakes have been identified as a byproduct of the filtering process and can be output as vector layers if needed. E. At this point only river pixels are left in the image. To separate out wide rivers, all objects in the image are eroded by the half width of narrow rivers. This removes all narrow rivers and leaves only the cores of wide rivers. These cores are dilated out by the same distance to create a mask that is used with the original river image to separate the rivers into two images of narrow and wide rivers. F. If the image that contains wide rivers has small, isolated, short segments (less than 300 meters if NGA criteria are used), these segments are transferred to the narrow river file in order to be treated as parts of single-line rivers. G. The narrow river file is optionally dilated and eroded. This 'closing' has the effect of removing small islands, filling small gaps, and smoothing the outline. H. The user also has the option of 'closing' objects in the wide river file; however, this depends on the degree to which the user wants to remove small islands in the large rivers. I. To make the translation from raster to a single vector easier, the objects in the narrow river image are reduced to a single center line (i.e., thinned) with binary morphology operations.
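
    The erosion/dilation trick used in step E to split wide from narrow rivers can be sketched with scipy.ndimage, as below; the structuring-element size and the toy water raster are placeholders rather than DRAGON's actual parameters.

```python
import numpy as np
from scipy import ndimage

def split_wide_narrow(water, half_width):
    """Separate a binary water mask into wide and narrow channels.

    Erode by the half width of a 'narrow' river: narrow channels vanish,
    only the cores of wide rivers survive; dilating the cores back gives
    a mask for the wide rivers, and the remainder is the narrow network.
    """
    structure = np.ones((2 * half_width + 1, 2 * half_width + 1), dtype=bool)
    core = ndimage.binary_erosion(water, structure=structure)
    wide = ndimage.binary_dilation(core, structure=structure) & water
    narrow = water & ~wide
    return wide, narrow

# toy raster: a 3-pixel-wide river crossed by a 15-pixel-wide one
water = np.zeros((100, 100), dtype=bool)
water[48:51, :] = True          # narrow river
water[:, 40:55] = True          # wide river
wide, narrow = split_wide_narrow(water, half_width=3)
print(wide.sum(), narrow.sum())
```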

  19. Effect of various digital processing algorithms on the measurement accuracy of endodontic file length.

    PubMed

    Kal, Betül Ilhan; Baksi, B Güniz; Dündar, Nesrin; Sen, Bilge Hakan

    2007-02-01

    The aim of this study was to compare the accuracy of endodontic file lengths after application of various image enhancement modalities. Endodontic files of three different ISO sizes were inserted in 20 single-rooted extracted permanent mandibular premolar teeth and standardized images were obtained. Original digital images were then enhanced using five processing algorithms. Six evaluators measured the length of each file on each image. The measurements from each processing algorithm and each file size were compared using repeated measures ANOVA and Bonferroni tests (P = 0.05). Paired t test was performed to compare the measurements with the true lengths of the files (P = 0.05). All of the processing algorithms provided significantly shorter measurements than the true length of each file size (P < 0.05). The threshold enhancement modality produced significantly higher mean error values (P < 0.05), while there was no significant difference among the other enhancement modalities (P > 0.05). Decrease in mean error value was observed with increasing file size (P < 0.05). Invert, contrast/brightness and edge enhancement algorithms may be recommended for accurate file length measurements when utilizing storage phosphor plates.

  20. Data Science Bowl Launched to Improve Lung Cancer Screening | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"2078","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Data Science Bowl Logo","field_file_image_title_text[und][0][value]":"Data Science Bowl Logo","field_folder[und]":"76"},"type":"media","field_deltas":{"1":{"format":"default","field_file_image_alt_text[und][0][value]":"Data Science Bowl

  1. Informatics in radiology (infoRAD): multimedia extension of medical imaging resource center teaching files.

    PubMed

    Yang, Guo Liang; Aziz, Aamer; Narayanaswami, Banukumar; Anand, Ananthasubramaniam; Lim, C C Tchoyoson; Nowinski, Wieslaw Lucjan

    2005-01-01

    A new method has been developed for multimedia enhancement of electronic teaching files created by using the standard protocols and formats offered by the Medical Imaging Resource Center (MIRC) project of the Radiological Society of North America. The typical MIRC electronic teaching file consists of static pages only; with the new method, audio and visual content may be added to the MIRC electronic teaching file so that the entire image interpretation process can be recorded for teaching purposes. With an efficient system for encoding the audiovisual record of on-screen manipulation of radiologic images, the multimedia teaching files generated are small enough to be transmitted via the Internet with acceptable resolution. Students may respond with the addition of new audio and visual content and thereby participate in a discussion about a particular case. MIRC electronic teaching files with multimedia enhancement have the potential to augment the effectiveness of diagnostic radiology teaching. RSNA, 2005.

  2. Efficient feature extraction from wide-area motion imagery by MapReduce in Hadoop

    NASA Astrophysics Data System (ADS)

    Cheng, Erkang; Ma, Liya; Blaisse, Adam; Blasch, Erik; Sheaff, Carolyn; Chen, Genshe; Wu, Jie; Ling, Haibin

    2014-06-01

    Wide-Area Motion Imagery (WAMI) feature extraction is important for applications such as target tracking, traffic management, and accident discovery. With the increasing volume of WAMI collections and of features extracted from the data, a scalable framework is needed to handle the large amount of information. Cloud computing is one of the approaches recently applied to large-scale and big-data problems. In this paper, MapReduce in Hadoop is investigated for large-scale feature extraction tasks for WAMI. Specifically, a large dataset of WAMI images is divided into several splits, each holding a small subset of the images. Feature extraction for the WAMI images in each split is distributed to slave nodes in the Hadoop system, and feature extraction for each image is performed individually on the assigned slave node. Finally, the feature extraction results are sent to the Hadoop Distributed File System (HDFS) to aggregate the feature information over the collected imagery. Experiments of feature extraction with and without MapReduce are conducted to illustrate the effectiveness of our proposed Cloud-Enabled WAMI Exploitation (CAWE) approach.
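
    The abstract does not give the CAWE implementation itself; the following is a minimal sketch of the map phase under assumed details: each input line holds the path of one WAMI frame, the "feature" is just a grayscale histogram, and mrjob stands in for whatever Hadoop job wrapper the authors used.

    ```python
    from mrjob.job import MRJob
    from PIL import Image
    import numpy as np

    class WamiFeatureExtraction(MRJob):
        """Map-only job: each input line is the path of one WAMI image;
        the mapper loads it and emits a small feature vector."""

        def mapper(self, _, line):
            path = line.strip()
            # Load the image and compute a 32-bin grayscale histogram as a
            # stand-in for the (unspecified) feature extractor.
            img = np.asarray(Image.open(path).convert("L"))
            hist, _edges = np.histogram(img, bins=32, range=(0, 255))
            yield path, hist.tolist()

    if __name__ == "__main__":
        WamiFeatureExtraction.run()
    ```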

  3. BOREAS RSS-14 Level 1a GOES-7 Visible, IR, and Water Vapor Images

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Newcomer, Jeffrey A.; Faysash, David; Cooper, Harry J.; Smith, Eric A.

    2000-01-01

    The BOREAS RSS-14 team collected and processed GOES-7 and -8 images of the BOREAS region as part of its effort to characterize the incoming, reflected, and emitted radiation at regional scales. The level-1a BOREAS GOES-7 image data were collected by RSS-14 personnel at FSU and processed to level-1a products by BORIS personnel. The data cover the period of 01-Jan-1994 through 08-Jul-1995, with partial to complete coverage on the majority of the days. The data include three bands with eight-bit pixel values. No major problems with the data have been identified. Due to the large size of the images, the level-1a GOES-7 data are not contained on the BOREAS CD-ROM set. An inventory listing file is supplied on the CD-ROM to inform users of what data were collected. The level-1a GOES-7 image data are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). See sections 15 and 16 for more information. The data files are available on a CD-ROM (see document number 20010000884).

  4. BOREAS RSS-14 Level-1 GOES-7 Visible, IR and Water Vapor Images

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Faysash, David; Cooper, Harry J.; Smith, Eric A.; Newcomer, Jeffrey A.

    2000-01-01

    The BOREAS RSS-14 team collected and processed GOES-7 and -8 images of the BOREAS region as part of its effort to characterize the incoming, reflected, and emitted radiation at regional scales. The level-1 BOREAS GOES-7 image data were collected by RSS-14 personnel at FSU and delivered to BORIS. The data cover the period of 01-Jan-1994 through 08-Jul-1995, with partial to complete coverage on the majority of the days. The data include three bands with eight-bit pixel values. No major problems with the data have been identified. Due to the large size of the images, the level-1 GOES-7 data are not contained on the BOREAS CD-ROM set. An inventory listing file is supplied on the CD-ROM to inform users of what data were collected. The level-1 GOES-7 image data are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). See sections 15 and 16 for more information. The data files are available on a CD-ROM (see document number 20010000884).

  5. Clementine High Resolution Camera Mosaicking Project. Volume 21; CL 6021; 80 deg S to 90 deg S Latitude, North Periapsis; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Clementine I high resolution (HiRes) camera lunar image mosaics developed by Malin Space Science Systems (MSSS). These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. The geometric control is provided by the U. S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap that approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD are compiled from polar data (latitudes greater than 80 degrees), and are presented in the stereographic projection at a scale of 30 m/pixel at the pole, a resolution 5 times greater than that (150 m/pixel) of the corresponding UV/Vis polar basemap. This 5:1 scale ratio is in keeping with the sub-polar mosaic, in which the HiRes and UV/Vis mosaics had scales of 20 m/pixel and 100 m/pixel, respectively. The equal-area property of the stereographic projection made this preferable for the HiRes polar mosaic rather than the basemap's orthographic projection. Thus, a necessary first step in constructing the mosaic was the reprojection of the UV/Vis basemap to the stereographic projection. The HiRes polar data can be naturally grouped according to the orbital periapsis, which was in the south during the first half of the mapping mission and in the north during the second half. Images in each group have generally uniform intrinsic resolution, illumination, exposure and gain. Rather than mingle data from the two periapsis epochs, separate mosaics are provided for each, a total of 4 polar mosaics. The mosaics are divided into 100 square tiles of 2250 pixels (approximately 2.2 deg near the pole) on a side. Not all squares of this grid contain HiRes mosaic data, some inevitably since a square is not a perfect representation of a (latitude) circle, others due to the lack of HiRes data. This CD also contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. For more information on the contents and organization of the CD volume set refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  6. Integration of DICOM and openEHR standards

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Yao, Zhihong; Liu, Lei

    2011-03-01

    The standard format for medical imaging storage and transmission is DICOM. openEHR is an open standard specification in health informatics that describes the management, storage, retrieval, and exchange of health data in electronic health records. Considering that the integration of DICOM and openEHR is beneficial to information sharing, we developed, on the basis of the XML-based DICOM format, a method of creating a DICOM imaging archetype in openEHR to enable the integration of the two standards. Each DICOM file contains abundant imaging information; however, because reading a DICOM file involves looking up the DICOM Data Dictionary, its readability is limited. openEHR has adopted a two-level modeling method that divides clinical information into a lower level, the information model, and an upper level, archetypes and templates. One critical challenge posed to the development of openEHR is information sharing, especially the sharing of imaging information; for example, some important imaging information cannot be displayed in an openEHR file. In this paper, to enhance the readability of a DICOM file and the semantic interoperability of an openEHR file, we developed a method of mapping a DICOM file to an openEHR file by adopting the archetype form defined in openEHR. Because an archetype has a tree structure, after mapping a DICOM file to an openEHR file, the converted information is structured in conformance with the openEHR format. This method enables the integration of DICOM and openEHR and data exchange between the two standards without loss of imaging information.
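
    As an illustration of the kind of mapping described, here is a minimal sketch that reads a DICOM file with pydicom and rearranges a few standard tags into a nested tree; the node names are hypothetical and do not follow the authors' actual openEHR archetype definitions.

    ```python
    import pydicom

    def dicom_to_archetype_like_tree(path):
        """Read a DICOM file and rearrange a few standard tags into a nested,
        human-readable tree; node names are illustrative, not real openEHR
        archetype identifiers."""
        ds = pydicom.dcmread(path)
        return {
            "imaging_examination": {
                "subject": {
                    "name": str(ds.get("PatientName", "")),
                    "id": ds.get("PatientID", ""),
                },
                "study": {
                    "date": ds.get("StudyDate", ""),
                    "modality": ds.get("Modality", ""),
                    "description": ds.get("StudyDescription", ""),
                },
                "image": {
                    "rows": int(ds.get("Rows", 0)),
                    "columns": int(ds.get("Columns", 0)),
                    "bits_stored": int(ds.get("BitsStored", 0)),
                },
            }
        }

    if __name__ == "__main__":
        import json, sys
        print(json.dumps(dicom_to_archetype_like_tree(sys.argv[1]), indent=2))
    ```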

  7. Accessible and informative sectioned images, color-coded images, and surface models of the ear.

    PubMed

    Park, Hyo Seok; Chung, Min Suk; Shin, Dong Sun; Jung, Yong Wook; Park, Jin Seo

    2013-08-01

    In our previous research, we created state-of-the-art sectioned images, color-coded images, and surface models of the human ear. Our ear data would be more beneficial and informative if they were more easily accessible. Therefore, the purpose of this study was to distribute the browsing software and the PDF file in which the ear images can be readily obtained and freely explored. Another goal was to inform other researchers of our methods for establishing the browsing software and the PDF file. To achieve this, sectioned images and color-coded images of the ear were prepared (voxel size 0.1 mm). In the color-coded images, structures related to hearing and equilibrium, as well as structures originating from the first and second pharyngeal arches, were additionally segmented. The sectioned and color-coded images of the right ear were added to the browsing software, which displayed the images serially along with structure names. The surface models were reconstructed and combined into the PDF file, where they could be freely manipulated. Using the browsing software and PDF file, sectional and three-dimensional shapes of ear structures could be comprehended in detail. Furthermore, using the PDF file, clinical knowledge could be identified through virtual otoscopy. Therefore, the presented educational tools will be helpful to medical students and otologists by improving their knowledge of ear anatomy. The browsing software and PDF file can be downloaded without charge and registration at our homepage (http://anatomy.dongguk.ac.kr/ear/). Copyright © 2013 Wiley Periodicals, Inc.

  8. High throughput imaging cytometer with acoustic focussing

    PubMed Central

    Zmijan, Robert; Jonnalagadda, Umesh S.; Carugo, Dario; Kochi, Yu; Lemm, Elizabeth; Packham, Graham; Hill, Martyn

    2015-01-01

    We demonstrate an imaging flow cytometer that uses acoustic levitation to assemble cells and other particles into a sheet structure. This technique enables a high resolution, low noise CMOS camera to capture images of thousands of cells with each frame. While ultrasonic focussing has previously been demonstrated for 1D cytometry systems, extending the technology to a planar, much higher throughput format and integrating imaging is non-trivial, and represents a significant jump forward in capability, leading to diagnostic possibilities not achievable with current systems. A galvo mirror is used to track the images of the moving cells permitting exposure times of 10 ms at frame rates of 50 fps with motion blur of only a few pixels. At 80 fps, we demonstrate a throughput of 208 000 beads per second. We investigate the factors affecting motion blur and throughput, and demonstrate the system with fluorescent beads, leukaemia cells and a chondrocyte cell line. Cells require more time to reach the acoustic focus than beads, resulting in lower throughputs; however a longer device would remove this constraint. PMID:29456838

  9. Interactive brain shift compensation using GPU based programming

    NASA Astrophysics Data System (ADS)

    van der Steen, Sander; Noordmans, Herke Jan; Verdaasdonk, Rudolf

    2009-02-01

    Processing large image files or real-time video streams requires intense computational power. Driven by the gaming industry, the processing power of graphics processing units (GPUs) has increased significantly. With pixel shader model 4.0, the GPU can be used for image processing roughly 10x faster than the CPU. Dedicated software was developed to deform 3D MR and CT image sets for real-time brain shift correction during navigated neurosurgery, using landmarks or cortical surface traces defined by the navigation pointer. Feedback was given using orthogonal slices and an interactively ray-traced 3D brain image. GPU-based programming enables real-time processing of high-definition image datasets, and various applications can be developed in medicine, optics, and the image sciences.

  10. 75 FR 60846 - Bureau of Consular Affairs; Registration for the Diversity Immigrant (DV-2012) Visa Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-01

    ... need to submit a photo for a child who is already a U.S. citizen or a Legal Permanent Resident. Group... Joint Photographic Experts Group (JPEG) format; it must have a maximum image file size of two hundred... (dpi); the image file format in Joint Photographic Experts Group (JPEG) format; the maximum image file...

  11. NIH Seeks Input on In-patient Clinical Research Areas | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"2476","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Aerial view of the National Institutes of Health Clinical Center (Building 10) in Bethesda, Maryland.","field_file_image_title_text[und][0][value]":false},"type":"media","field_deltas":{"1":{"format":"default","field_file_image_alt_text[und][0][value]":"Aerial view of

  12. 76 FR 59114 - Request for Comments on Establishment of a One-Year Retention Period for Trademark-Related Papers...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-23

    ... application image files and most registration image files. The USPTO is nearing completion of a multi-year... images are stored in TICRS. To date in Fiscal Year 2011, almost 99% of applications were filed...: September 19, 2011. David J. Kappos, Under Secretary of Commerce for Intellectual Property and Director of...

  13. Tracker: Image-Processing and Object-Tracking System Developed

    NASA Technical Reports Server (NTRS)

    Klimek, Robert B.; Wright, Theodore W.

    1999-01-01

    Tracker is an object-tracking and image-processing program designed and developed at the NASA Lewis Research Center to help with the analysis of images generated by microgravity combustion and fluid physics experiments. Experiments are often recorded on film or videotape for analysis later. Tracker automates the process of examining each frame of the recorded experiment, performing image-processing operations to bring out the desired detail, and recording the positions of the objects of interest. It can load sequences of images from disk files or acquire images (via a frame grabber) from film transports, videotape, laser disks, or a live camera. Tracker controls the image source to automatically advance to the next frame. It can employ a large array of image-processing operations to enhance the detail of the acquired images and can analyze an arbitrarily large number of objects simultaneously. Several different tracking algorithms are available, including conventional threshold and correlation-based techniques, and more esoteric procedures such as "snake" tracking and automated recognition of character data in the image. The Tracker software was written to be operated by researchers; thus, every attempt was made to make the software as user friendly and self-explanatory as possible. Tracker is used by most of the microgravity combustion and fluid physics experiments performed by Lewis, and by visiting researchers. This includes experiments performed on the space shuttles, Mir, sounding rockets, zero-g research airplanes, drop towers, and ground-based laboratories. This software automates the analysis of the flame's or liquid's physical parameters such as position, velocity, acceleration, size, shape, intensity characteristics, color, and centroid, as well as a number of other measurements. It can perform these operations on multiple objects simultaneously. Another key feature of Tracker is that it performs optical character recognition (OCR). This feature is useful in extracting numerical instrumentation data that are embedded in images. All the results are saved in files for further data reduction and graphing. There are currently three Tracking Systems (workstations) operating near the laboratories and offices of Lewis Microgravity Science Division researchers. These systems are used independently by students, scientists, and university-based principal investigators. The researchers bring their tapes or films to the workstation and perform the tracking analysis. The resultant data files generated by the tracking process can then be analyzed on the spot, although most of the time researchers prefer to transfer them via the network to their offices for further analysis or plotting. In addition, many researchers have installed Tracker on computers in their office for desktop analysis of digital image sequences, which can be digitized by the Tracking System or some other means. Tracker has not only provided a capability to efficiently and automatically analyze large volumes of data, saving many hours of tedious work, but has also provided new capabilities to extract valuable information and phenomena that were heretofore undetected and unexploited.
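
    Tracker's own algorithms are not published in this abstract; the sketch below illustrates only the conventional threshold-based tracking it mentions, assuming grayscale frames supplied as NumPy arrays, with illustrative threshold and minimum-area parameters.

    ```python
    import numpy as np
    from scipy import ndimage

    def track_bright_objects(frames, threshold=128, min_area=20):
        """Toy threshold-based tracker: for each grayscale frame, threshold,
        label connected components, and record each object's centroid."""
        trajectories = []
        for t, frame in enumerate(frames):
            mask = frame >= threshold
            labels, n = ndimage.label(mask)
            for i in range(1, n + 1):
                blob = labels == i
                area = int(np.sum(blob))
                if area < min_area:
                    continue  # ignore small noise blobs
                cy, cx = ndimage.center_of_mass(blob)
                trajectories.append({"frame": t, "object": i,
                                     "x": float(cx), "y": float(cy),
                                     "area": area})
        return trajectories
    ```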

  14. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    PubMed Central

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  15. Cardio-PACs: a new opportunity

    NASA Astrophysics Data System (ADS)

    Heupler, Frederick A., Jr.; Thomas, James D.; Blume, Hartwig R.; Cecil, Robert A.; Heisler, Mary

    2000-05-01

    It is now possible to replace film-based image management in the cardiac catheterization laboratory with a Cardiology Picture Archiving and Communication System (Cardio-PACS) based on digital imaging technology. The first step in the conversion process is installation of a digital image acquisition system that is capable of generating high-quality DICOM-compatible images. The next three steps, which are the subject of this presentation, involve image display, distribution, and storage. Clinical requirements and associated cost considerations for these three steps are listed below: Image display: (1) Image quality equal to film, with DICOM format, lossless compression, image processing, desktop PC-based with color monitor, and physician-friendly imaging software; (2) Performance specifications include: acquire 30 frames/sec; replay 15 frames/sec; access to file server 5 seconds, and to archive 5 minutes; (3) Compatibility of image file, transmission, and processing formats; (4) Image manipulation: brightness, contrast, gray scale, zoom, biplane display, and quantification; (5) User-friendly control of image review. Image distribution: (1) Standard IP-based network between cardiac catheterization laboratories, file server, long-term archive, review stations, and remote sites; (2) Non-proprietary formats; (3) Bidirectional distribution. Image storage: (1) CD-ROM vs disk vs tape; (2) Verification of data integrity; (3) User-designated storage capacity for catheterization laboratory, file server, long-term archive. Costs: (1) Image acquisition equipment, file server, long-term archive; (2) Network infrastructure; (3) Review stations and software; (4) Maintenance and administration; (5) Future upgrades and expansion; (6) Personnel.

  16. The key image and case log application: new radiology software for teaching file creation and case logging that incorporates elements of a social network.

    PubMed

    Rowe, Steven P; Siddiqui, Adeel; Bonekamp, David

    2014-07-01

    To create novel radiology key image software that is easy to use for novice users, incorporates elements adapted from social networking Web sites, facilitates resident and fellow education, and can serve as the engine for departmental sharing of interesting cases and follow-up studies. Using open-source programming languages and software, radiology key image software (the key image and case log application, KICLA) was developed. This system uses a lightweight interface with the institutional picture archiving and communications systems and enables the storage of key images, image series, and cine clips. It was designed to operate with minimal disruption to the radiologists' daily workflow. Many features of the user interface have been inspired by social networking Web sites, including image organization into private or public folders, flexible sharing with other users, and integration of departmental teaching files into the system. We also review the performance, usage, and acceptance of this novel system. KICLA was implemented at our institution and achieved widespread popularity among radiologists. A large number of key images have been transmitted to the system since it became available. After this early experience period, the most commonly encountered radiologic modalities are represented. A survey distributed to users revealed that most of the respondents found the system easy to use (89%) and fast at allowing them to record interesting cases (100%). One hundred percent of respondents also stated that they would recommend a system such as KICLA to their colleagues. The system described herein represents a significant upgrade to the Digital Imaging and Communications in Medicine teaching file paradigm, with efforts made to maximize its ease of use and the inclusion of characteristics inspired by social networking Web sites that give the system additional functionality, such as individual case logging. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.

  17. Using compressed images in multimedia education

    NASA Astrophysics Data System (ADS)

    Guy, William L.; Hefner, Lance V.

    1996-04-01

    The classic radiologic teaching file consists of hundreds, if not thousands, of films of various ages, housed in paper jackets with brief descriptions written on the jackets. The development of a good teaching file has been both time consuming and voluminous. Also, any radiograph to be copied was unavailable during the reproduction interval, inconveniencing other medical professionals needing to view the images at that time. These factors hinder motivation to copy films of interest. If a busy radiologist already has an adequate example of a radiological manifestation, it is unlikely that he or she will exert the effort to make a copy of another similar image even if a better example comes along. Digitized radiographs stored on CD-ROM offer marked improvement over the copied film teaching files. Our institution has several laser digitizers which are used to rapidly scan radiographs and produce high quality digital images which can then be converted into standard microcomputer (IBM, Mac, etc.) image format. These images can be stored on floppy disks, hard drives, rewritable optical disks, recordable CD-ROM disks, or removable cartridge media. Most hospital computer information systems include radiology reports in their database. We demonstrate that the reports for the images included in the user's teaching file can be copied and stored on the same storage media as the images. The radiographic or sonographic image and the corresponding dictated report can then be 'linked' together. The description of the finding or findings of interest on the digitized image is thus electronically tethered to the image. This obviates the need to write much additional detail concerning the radiograph, saving time. In addition, the text on this disk can be indexed such that all files with user-specified features can be instantly retrieved and combined in a single report, if desired. With the use of newer image compression techniques, hundreds of cases may be stored on a single CD-ROM depending on the quality of image required for the finding in question. This reduces the weight of a teaching file from that of a baby elephant to that of a single CD-ROM disc. Thus, with this method of teaching file preparation and storage the following advantages are realized: (1) Technically easier and less time consuming image reproduction. (2) Considerably less unwieldy and substantially more portable teaching files. (3) Novel ability to index files and then retrieve specific cases of choice based on descriptive text.

  18. Software for Viewing Landsat Mosaic Images

    NASA Technical Reports Server (NTRS)

    Watts, Zack; Farve, Catharine L.; Harvey, Craig

    2003-01-01

    A Windows-based computer program has been written to enable novice users (especially educators and students) to view images of large areas of the Earth (e.g., the continental United States) generated from image data acquired in the Landsat observations performed circa the year 1990. The large-area images are constructed as mosaics from the original Landsat images, which were acquired in several wavelength bands and each of which spans an area (in effect, one tile of a mosaic) of 0.5 degrees of latitude by 0.6 degrees of longitude. Whereas the original Landsat data are registered on a universal transverse Mercator (UTM) grid, the program converts the UTM coordinates of a mouse pointer in the image to latitude and longitude, which are continuously updated and displayed as the pointer is moved. The mosaic image currently on display can be exported as a Windows bitmap file. Other images (e.g., of state boundaries or interstate highways) can be overlaid on Landsat mosaics. The program interacts with the user via standard toolbar, keyboard, and mouse user interfaces. The program is supplied on a compact disk along with tutorial and educational information.
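
    As an illustration of the UTM-to-geographic conversion the viewer performs as the pointer moves, here is a minimal sketch using pyproj; the UTM zone (EPSG:32614, zone 14N) is an assumption for a point over the central United States, not something the program is documented to use.

    ```python
    from pyproj import Transformer

    # Assume the mosaic is registered to UTM zone 14N (EPSG:32614); the actual
    # zone depends on which part of the mosaic the pointer is over.
    to_geographic = Transformer.from_crs("EPSG:32614", "EPSG:4326", always_xy=True)

    def pointer_to_lat_lon(easting_m, northing_m):
        """Convert a UTM easting/northing (metres) under the mouse pointer
        to latitude/longitude in degrees."""
        lon, lat = to_geographic.transform(easting_m, northing_m)
        return lat, lon

    if __name__ == "__main__":
        # Point on the zone's central meridian: roughly 42 N, 99 W.
        print(pointer_to_lat_lon(500000.0, 4649776.0))
    ```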

  19. Comparison of two methods of digital imaging technology for small diameter K-file length determination.

    PubMed

    Maryam, Ehsani; Farida, Abesi; Farhad, Akbarzade; Soraya, Khafri

    2013-11-01

    Obtaining the proper working length in endodontic treatment is essential. The aim of this study was to compare the working length (WL) assessment of small diameter K-files using two different digital imaging methods. The samples for this in-vitro experimental study consisted of 40 extracted single-rooted premolars. After access cavity preparation, ISO size 6, 8, and 10 stainless steel K-files were inserted in the canals at three different lengths, to be evaluated in a blinded manner: at the level of the apical foramen (actual), 1 mm short of the apical foramen, and 2 mm short of the apical foramen. A digital caliper was used to measure the length of the files, which was considered the gold standard. Five observers (two oral and maxillofacial radiologists and three endodontists) examined the digital radiographs, which were obtained using PSP and CCD digital imaging sensors. The collected data were analyzed with SPSS 17 using repeated-measures and paired t-tests. In WL assessment of small diameter K-files, a significant statistical relationship was seen among the observers of the two digital imaging techniques (P<0.001). However, no significant difference was observed between the two digital techniques in WL assessment of small diameter K-files (P<0.05). PSP and CCD digital imaging techniques were similar in WL assessment of canals using no. 6, 8, and 10 K-files.

  20. Data Processing Factory for the Sloan Digital Sky Survey

    NASA Astrophysics Data System (ADS)

    Stoughton, Christopher; Adelman, Jennifer; Annis, James T.; Hendry, John; Inkmann, John; Jester, Sebastian; Kent, Steven M.; Kuropatkin, Nickolai; Lee, Brian; Lin, Huan; Peoples, John, Jr.; Sparks, Robert; Tucker, Douglas; Vanden Berk, Dan; Yanny, Brian; Yocum, Dan

    2002-12-01

    The Sloan Digital Sky Survey (SDSS) data handling presents two challenges: large data volume and timely production of spectroscopic plates from imaging data. A data processing factory, using technologies both old and new, handles this flow. Distribution to end users is via disk farms, to serve corrected images and calibrated spectra, and a database, to efficiently process catalog queries. For distribution of modest amounts of data from Apache Point Observatory to Fermilab, scripts use rsync to update files, while larger data transfers are accomplished by shipping magnetic tapes commercially. All data processing pipelines are wrapped in scripts to address consecutive phases: preparation, submission, checking, and quality control. We constructed the factory by chaining these pipelines together while using an operational database to hold processed imaging catalogs. The science database catalogs all imaging and spectroscopic objects, with pointers to the various external files associated with them. Diverse computing systems address particular processing phases. UNIX computers handle tape reading and writing, as well as calibration steps that require access to a large amount of data with relatively modest computational demands. Commodity CPUs process steps that require access to a limited amount of data with more demanding computational requirements. Disk servers optimized for cost per Gbyte serve terabytes of processed data, while servers optimized for disk read speed run SQLServer software to process queries on the catalogs. This factory produced data for the SDSS Early Data Release in June 2001, and it is currently producing Data Release One, scheduled for January 2003.

  1. DICOM to print, 35-mm slides, web, and video projector: tutorial using Adobe Photoshop.

    PubMed

    Gurney, Jud W

    2002-10-01

    Preparing images for publication has traditionally dealt with film and the photographic process. With picture archiving and communications systems, many departments will no longer produce film, and this will change how images are produced for publication. DICOM, the file format for radiographic images, has to be converted and then prepared for traditional publication, 35-mm slides, the newer techniques of video projection, and the World Wide Web. Tagged Image File Format (TIFF) is the common format for traditional print publication, whereas Joint Photographic Experts Group (JPEG) is the current file format for the World Wide Web. Each medium has specific requirements that can be met with a common image-editing program such as Adobe Photoshop (Adobe Systems, San Jose, CA). High-resolution images are required for print, a process that requires interpolation. However, the Internet requires images with a small file size for rapid transmission. The resolution of each output differs, and the image resolution must be optimized to match the output of the publishing medium.

  2. Hybrid cryptosystem for image file using elgamal and double playfair cipher algorithm

    NASA Astrophysics Data System (ADS)

    Hardi, S. M.; Tarigan, J. T.; Safrina, N.

    2018-03-01

    In this paper, we present an implementation of image file encryption using hybrid cryptography. We chose the ElGamal algorithm for the asymmetric encryption and Double Playfair for the symmetric encryption. Our objective is to show that these algorithms are capable of encrypting an image file with an acceptable running time and encrypted file size while maintaining the level of security. The application was built using the C# programming language and runs as a stand-alone desktop application under the Windows operating system. Our tests show that the system is capable of encrypting an image with a resolution of 500×500 to a size of 976 kilobytes with an acceptable running time.
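
    The paper's ElGamal and Double Playfair code is not given in the abstract; the sketch below shows only the general hybrid pattern (a one-time symmetric key encrypts the image bytes, an asymmetric key encrypts that key), using RSA-OAEP and AES-GCM from the cryptography package as stand-ins for ElGamal and Double Playfair.

    ```python
    import os
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def hybrid_encrypt_image(image_path, public_key):
        """Hybrid encryption sketch: AES-GCM encrypts the image bytes,
        RSA-OAEP encrypts the one-time AES key (stand-ins for the paper's
        Double Playfair and ElGamal, respectively)."""
        data = open(image_path, "rb").read()
        aes_key = AESGCM.generate_key(bit_length=256)
        nonce = os.urandom(12)
        ciphertext = AESGCM(aes_key).encrypt(nonce, data, None)
        wrapped_key = public_key.encrypt(
            aes_key,
            padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                         algorithm=hashes.SHA256(), label=None))
        return wrapped_key, nonce, ciphertext

    if __name__ == "__main__":
        private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        wrapped, nonce, ct = hybrid_encrypt_image("photo.png", private_key.public_key())
        print(len(wrapped), len(ct))
    ```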

  3. High-performance web viewer for cardiac images

    NASA Astrophysics Data System (ADS)

    dos Santos, Marcelo; Furuie, Sergio S.

    2004-04-01

    With the advent of digital devices for medical diagnosis, the use of regular film in radiology has decreased, and the management and handling of medical images in digital format has become an important and critical task. In Cardiology, for example, the main difficulty is displaying dynamic images with the appropriate color palette and frame rate used during acquisition by Cath, Angio and Echo systems. Another difficulty is handling large images in the memory of any existing personal computer, including thin clients. In this work we present a web-based application that carries out these tasks with robustness and excellent performance, without burdening the server and network. This application provides near-diagnostic quality display of cardiac images stored as DICOM 3.0 files via a web browser and provides a set of resources that allows the viewing of still and dynamic images. It can access image files from local disks or over a network connection. Its features include real-time playback, dynamic thumbnail viewing during loading, access to patient database information, image processing tools, linear and angular measurements, on-screen annotations, image printing, and exporting DICOM images to other image formats, among many others, all characterized by a pleasant, user-friendly interface inside a Web browser by means of a Java application. This approach offers some advantages over most medical image viewers, such as ease of installation, integration with other systems by means of public and standardized interfaces, platform independence, and efficient manipulation and display of medical images, all with high performance.

  4. Community Oncology and Prevention Trials | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"168","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Early Detection Research Group Homepage Image","field_file_image_title_text[und][0][value]":"Early Detection Research Group Homepage Image","field_folder[und]":"15"},"type":"media","attributes":{"alt":"Early Detection Research Group Homepage Image","title":"Early

  5. Transforming Dermatologic Imaging for the Digital Era: Metadata and Standards.

    PubMed

    Caffery, Liam J; Clunie, David; Curiel-Lewandrowski, Clara; Malvehy, Josep; Soyer, H Peter; Halpern, Allan C

    2018-01-17

    Imaging is increasingly being used in dermatology for documentation, diagnosis, and management of cutaneous disease. The lack of standards for dermatologic imaging is an impediment to clinical uptake. Standardization can occur in image acquisition, terminology, interoperability, and metadata. This paper presents the International Skin Imaging Collaboration position on standardization of metadata for dermatologic imaging. Metadata is essential to ensure that dermatologic images are properly managed and interpreted. There are two standards-based approaches to recording and storing metadata in dermatologic imaging. The first uses standard consumer image file formats; the second is the file format and metadata model developed for the Digital Imaging and Communications in Medicine (DICOM) standard. DICOM would appear to provide an advantage over consumer image file formats for metadata, as it includes all the patient, study, and technical metadata necessary to use images clinically. Consumer image file formats, by contrast, include only technical metadata and need to be used in conjunction with another actor, for example an electronic medical record, to supply the patient and study metadata. The use of DICOM may have some ancillary benefits in dermatologic imaging, including leveraging DICOM network and workflow services, interoperability of images and metadata, leveraging existing enterprise imaging infrastructure, greater patient safety, and better compliance with legislative requirements for image retention.

  6. Road Damage Extraction from Post-Earthquake Uav Images Assisted by Vector Data

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Dou, A.

    2018-04-01

    Extraction of road damage information after an earthquake is regarded as an urgent mission. To collect information about stricken areas, unmanned aerial vehicles (UAVs) can be used to obtain images rapidly. This paper puts forward a novel method to detect road damage and proposes a coefficient to assess road accessibility. With the assistance of vector road data, image data from the Jiuzhaigou Ms 7.0 earthquake are tested. First, the image is clipped according to a buffer around the vector road data. Then a large-scale segmentation is applied to remove irrelevant objects. Third, statistics of road features are analysed and damage information is extracted. Combined with the field investigation, the extraction result proves effective.
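
    A minimal sketch of the first step, clipping the georeferenced UAV image to a buffer around the vector road data, using geopandas and rasterio; the file names and the 20 m buffer width are assumptions, not values from the paper.

    ```python
    import geopandas as gpd
    import rasterio
    from rasterio.mask import mask

    def clip_image_to_road_buffer(image_path, roads_path, buffer_m=20.0,
                                  out_path="roads_clip.tif"):
        """Clip a georeferenced UAV image to a buffer around vector road data."""
        with rasterio.open(image_path) as src:
            roads = gpd.read_file(roads_path).to_crs(src.crs)
            buffered = roads.geometry.buffer(buffer_m)
            clipped, transform = mask(src, buffered, crop=True)
            profile = src.profile.copy()
            profile.update(height=clipped.shape[1], width=clipped.shape[2],
                           transform=transform)
        with rasterio.open(out_path, "w", **profile) as dst:
            dst.write(clipped)
        return out_path
    ```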

  7. Image processing tool for automatic feature recognition and quantification

    DOEpatents

    Chen, Xing; Stoddard, Ryan J.

    2017-05-02

    A system for defining structures within an image is described. The system includes reading of an input file, preprocessing the input file while preserving metadata such as scale information and then detecting features of the input file. In one version the detection first uses an edge detector followed by identification of features using a Hough transform. The output of the process is identified elements within the image.
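
    As an illustration of the edge-detection-followed-by-Hough-transform sequence named in the patent abstract, here is a minimal OpenCV sketch; the thresholds and line-length parameters are illustrative, not the patent's values.

    ```python
    import cv2
    import numpy as np

    def detect_line_features(image_path):
        """Edge detection followed by a probabilistic Hough transform,
        mirroring the detect-edges-then-Hough sequence described above."""
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        edges = cv2.Canny(gray, threshold1=50, threshold2=150)
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                                threshold=80, minLineLength=30, maxLineGap=5)
        # Each detected line is returned as (x1, y1, x2, y2) endpoints.
        return [] if lines is None else [tuple(l[0]) for l in lines]
    ```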

  8. a Hadoop-Based Distributed Framework for Efficient Managing and Processing Big Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Wang, C.; Hu, F.; Hu, X.; Zhao, S.; Wen, W.; Yang, C.

    2015-07-01

    Various sensors on airborne and satellite platforms are producing large volumes of remote sensing images for mapping, environmental monitoring, disaster management, military intelligence, and other applications. However, it is challenging to efficiently store, query, and process such big data because the tasks are both data- and computing-intensive. In this paper, a Hadoop-based framework is proposed to manage and process big remote sensing data in a distributed and parallel manner. In particular, remote sensing data can be directly fetched from other data platforms into the Hadoop Distributed File System (HDFS). The Orfeo toolbox, a ready-to-use tool for large image processing, is integrated into MapReduce to provide a rich set of image processing operations. With the integration of HDFS, the Orfeo toolbox, and MapReduce, these remote sensing images can be processed directly and in parallel in a scalable computing environment. The experiment results show that the proposed framework can efficiently manage and process such big remote sensing data.

  9. Migration of the digital interactive breast-imaging teaching file

    NASA Astrophysics Data System (ADS)

    Cao, Fei; Sickles, Edward A.; Huang, H. K.; Zhou, Xiaoqiang

    1998-06-01

    The digital breast imaging teaching file developed during the last two years in our laboratory has been used successfully at UCSF (University of California, San Francisco) as a routine teaching tool for training radiology residents and fellows in mammography. Building on this success, we have ported the teaching file from an old Pixar imaging/Sun SPARC 470 display system to our newly designed telemammography display workstation (Ultra SPARC 2 platform with two DOME Md5/SBX display boards). The old Pixar/Sun 470 system, although adequate for fast and high-resolution image display, is 4-year-old technology, expensive to maintain and difficult to upgrade. The new display workstation is more cost-effective and is also compatible with the digital image format from a full-field direct digital mammography system. The digital teaching file is built on a sophisticated computer-aided instruction (CAI) model, which simulates the management sequences used in imaging interpretation and work-up. Each user can be prompted to respond by making his/her own observations, assessments, and work-up decisions as well as by marking image abnormalities. This effectively replaces the traditional 'show-and-tell' teaching file experience with an interactive, response-driven type of instruction.

  10. The comparative effectiveness of conventional and digital image libraries.

    PubMed

    McColl, R I; Johnson, A

    2001-03-01

    Before introducing a hospital-wide image database to improve access, navigation and retrieval speed, a comparative study between a conventional slide library and a matching image database was undertaken to assess its relative benefits. Paired time trials and personal questionnaires revealed faster retrieval rates, higher image quality, and easier viewing for the pilot digital image database. Analysis of confidentiality, copyright and data protection exposed similar issues for both systems, thus concluding that the digital image database is a more effective library system. The authors suggest that in the future, medical images will be stored on large, professionally administered, centrally located file servers, allowing specialist image libraries to be tailored locally for individual users. The further integration of the database with web technology will enable cheap and efficient remote access for a wide range of users.

  11. Context-dependent JPEG backward-compatible high-dynamic range image compression

    NASA Astrophysics Data System (ADS)

    Korshunov, Pavel; Ebrahimi, Touradj

    2013-10-01

    High-dynamic range (HDR) imaging is expected, together with ultrahigh definition and high frame rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available in the market. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for quality evaluation, file formats, and compression, as well as by the large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate the widespread use of HDR, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. Via a series of subjective evaluations, we demonstrate the dependency of the perceptual quality of tone-mapped LDR images on the context: environmental factors, display parameters, and the image content itself. Based on the results of the subjective tests, we propose to extend the JPEG file format, the most popular image format, in a backward-compatible manner to also handle HDR images. An architecture to achieve such backward compatibility with JPEG is proposed, and a simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with state-of-the-art HDR image compression.
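
    The paper evaluates existing tone-mapping operators rather than defining a new one; as an illustration of what such an operator does, here is a minimal global Reinhard-style sketch that maps a linear-light HDR array to an 8-bit LDR image (the key value and gamma are illustrative).

    ```python
    import numpy as np

    def reinhard_tonemap(hdr, key=0.18, gamma=2.2):
        """Global Reinhard-style tone mapping: scale luminance to a target key,
        compress with L/(1+L), then gamma-encode to an 8-bit LDR image."""
        luminance = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
        log_avg = np.exp(np.mean(np.log(luminance + 1e-6)))
        scaled = (key / log_avg) * luminance
        compressed = scaled / (1.0 + scaled)
        # Rescale the RGB channels by the luminance compression ratio.
        ratio = compressed / (luminance + 1e-6)
        ldr_linear = np.clip(hdr * ratio[..., None], 0.0, 1.0)
        return (255.0 * ldr_linear ** (1.0 / gamma)).astype(np.uint8)
    ```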

  12. Hierarchical storage of large volume of multidetector CT data using distributed servers

    NASA Astrophysics Data System (ADS)

    Ratib, Osman; Rosset, Antoine; Heuberger, Joris; Bandon, David

    2006-03-01

    Multidetector scanners and hybrid multimodality scanners have the ability to generate large numbers of high-resolution images, resulting in very large data sets. In most cases, these datasets are generated for the sole purpose of producing secondary processed images and 3D rendered images as well as oblique and curved multiplanar reformatted images. It is therefore not essential to archive the original images after they have been processed. We have developed an architecture of distributed archive servers for temporary storage of large image datasets for 3D rendering and image processing without the need for long-term storage in a PACS archive. With the relatively low cost of storage devices, it is possible to configure these servers to hold several months or even years of data, long enough to allow subsequent re-processing if required by specific clinical situations. We tested the latest generation of RAID servers provided by Apple Computer with a capacity of 5 TBytes. We implemented peer-to-peer data access software based on our open-source image management software OsiriX, allowing remote workstations to directly access DICOM image files located on the server through a technology called Bonjour. This architecture offers a seamless integration of multiple servers and workstations without the need for a central database or complex workflow management tools. It allows efficient access to image data from multiple workstations for image analysis and visualization without the need for image data transfer. It provides a convenient alternative to a centralized PACS architecture while avoiding complex and time-consuming data transfer and storage.

  13. Rapid 3D bioprinting from medical images: an application to bone scaffolding

    NASA Astrophysics Data System (ADS)

    Lee, Daniel Z.; Peng, Matthew W.; Shinde, Rohit; Khalid, Arbab; Hong, Abigail; Pennacchi, Sara; Dawit, Abel; Sipzner, Daniel; Udupa, Jayaram K.; Rajapakse, Chamith S.

    2018-03-01

    Bioprinting of tissue has applications throughout medicine. Recent advances in medical imaging allow the generation of 3-dimensional models that can then be 3D printed. However, the conventional method of converting medical images to 3D-printable G-Code instructions has several limitations, namely significant processing time for large, high resolution images and the loss of microstructural surface information due to limited surface resolution and subsequent reslicing. We have overcome these issues by creating a JAVA program that skips the intermediate triangulation and reslicing steps and directly converts binary DICOM images into G-Code. In this study, we tested the two methods of G-Code generation on the application of synthetic bone graft scaffold generation. We imaged human cadaveric proximal femurs at an isotropic resolution of 0.03 mm using a high-resolution peripheral quantitative computed tomography (HR-pQCT) scanner. These images, in the Digital Imaging and Communications in Medicine (DICOM) format, were then processed through two methods. In each method, slices and regions of print were selected, filtered to generate a smoothed image, and thresholded. In the conventional method, these processed images are converted to the STereoLithography (STL) format and then resliced to generate G-Code. In the new, direct method, the processed images are run through our JAVA program and directly converted to G-Code. File size, processing time, and print time were measured for each. We found that the new method produced a significant reduction in G-Code file size as well as processing time (92.23% reduction). This allows for more rapid 3D printing from medical images.
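
    The authors' JAVA converter is not reproduced in the abstract; the sketch below only illustrates the idea of emitting G-Code directly from one binary slice, turning each horizontal run of filled pixels into an extrusion move; the pixel size, feed rate, and extrusion amounts are assumptions.

    ```python
    import numpy as np

    def slice_to_gcode(mask, pixel_mm=0.03, z_mm=0.0, feed=1200):
        """Emit G-Code extrusion moves for one binary slice: each horizontal
        run of True pixels becomes a single G1 move along X at constant Y."""
        lines = [f"G1 Z{z_mm:.3f} F{feed}"]
        for row in range(mask.shape[0]):
            cols = np.flatnonzero(mask[row])
            if cols.size == 0:
                continue
            # Split the row into contiguous runs of filled pixels.
            breaks = np.where(np.diff(cols) > 1)[0]
            starts = np.concatenate(([cols[0]], cols[breaks + 1]))
            ends = np.concatenate((cols[breaks], [cols[-1]]))
            y = row * pixel_mm
            for s, e in zip(starts, ends):
                lines.append(f"G0 X{s * pixel_mm:.3f} Y{y:.3f}")  # travel move
                lines.append(f"G1 X{(e + 1) * pixel_mm:.3f} Y{y:.3f} "
                             f"E{(e - s + 1) * pixel_mm:.3f}")    # extrusion move
        return "\n".join(lines)
    ```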

  14. Goddard high resolution spectrograph science verification and data analysis

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The data analysis performed to support the Orbital Verification (OV) and Science Verification (SV) of the GHRS was in the areas of the Digicon detector's performance and stability, wavelength calibration, and geomagnetically induced image motion. The results of the analyses are briefly described; detailed results are given in the form of attachments. Specialized software was developed for the analyses. Calibration files were formatted according to the specifications in a Space Telescope Science report. IRAS images of the Large Magellanic Cloud were restored using a blocked iterative algorithm. The algorithm works with the raw data scans without regridding or interpolating the data onto an equally spaced image grid.

  15. Printed products for digital cameras and mobile devices

    NASA Astrophysics Data System (ADS)

    Fageth, Reiner; Schmidt-Sacht, Wulf

    2005-01-01

    Digital photography is no longer simply a successor to film. The digital market is now driven by additional devices such as mobile phones with camera and video functions (camphones) as well as innovative products derived from digital files. A large number of consumers do not print their images and non-printing has become the major enemy of wholesale printers, home printing suppliers and retailers. This paper addresses the challenge facing our industry, namely how to encourage the consumer to print images easily and conveniently from all types of digital media.

  16. Development of a user-centered radiology teaching file system

    NASA Astrophysics Data System (ADS)

    dos Santos, Marcelo; Fujino, Asa

    2011-03-01

    Learning radiology requires systematic and comprehensive study of a large knowledge base of medical images. In this work we present the development of a digital radiology teaching file system. The proposed system has been created in order to offer a set of customized services with regard to users' contexts and their informational needs. This has been done by means of an electronic infrastructure that provides easy and integrated access to all relevant patient data at the time of image interpretation, so that radiologists and researchers can examine all available data to reach well-informed conclusions, while protecting patient data privacy and security. The system is presented as an environment which implements a distributed clinical database, including medical images, authoring tools, a repository for multimedia documents, and a peer-reviewed model which assures dataset quality. The current implementation has shown that creating clinical data repositories on networked computer environments is a good solution for reviewing information management practices in electronic environments and for creating customized, context-based tools for users connected to the system through electronic interfaces.

  17. Determining the Completeness of the Nimbus Meteorological Data Archive

    NASA Technical Reports Server (NTRS)

    Johnson, James; Moses, John; Kempler, Steven; Zamkoff, Emily; Al-Jazrawi, Atheer; Gerasimov, Irina; Trivedi, Bhagirath

    2011-01-01

    NASA launched the Nimbus series of meteorological satellites in the 1960s and 70s. These satellites carried instruments for making observations of the Earth in the visible, infrared, ultraviolet, and microwave wavelengths. The original data archive consisted of a combination of digital data written to 7-track computer tapes and on various film media. Many of these data sets are now being migrated from the old media to the GES DISC modern online archive. The process involves recovering the digital data files from tape as well as scanning images of the data from film strips. Some of the challenges of archiving the Nimbus data include the lack of any metadata from these old data sets. Metadata standards and self-describing data files did not exist at that time, and files were written on now obsolete hardware systems and outdated file formats. This requires creating metadata by reading the contents of the old data files. Some digital data files were corrupted over time, or were possibly improperly copied at the time of creation. Thus there are data gaps in the collections. The film strips were stored in boxes and are now being scanned as JPEG-2000 images. The only information describing these images is what was written on them when they were originally created, and sometimes this information is incomplete or missing. We have the ability to cross-reference the scanned images against the digital data files to determine which of these best represents the data set from the various missions, or to see how complete the data sets are. In this presentation we compared data files and scanned images from the Nimbus-2 High-Resolution Infrared Radiometer (HRIR) for September 1966 to determine whether the data and images are properly archived with correct metadata.

  18. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    PubMed

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable to processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest was developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.
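
    The metadata-extraction idea mentioned above can be illustrated with a small sketch that walks a structured image folder and writes a metadata table; the "<experiment>/<date>/<camera>/<plant_id>.png" layout is an assumption made for illustration and is not IH's actual API or file organization.

      # Illustrative sketch only: pull experiment metadata out of a structured file layout.
      import csv
      from pathlib import Path

      def harvest_metadata(root):
          rows = []
          for img in Path(root).glob("*/*/*/*.png"):
              experiment, date, camera = img.parts[-4:-1]
              rows.append({"experiment": experiment, "date": date,
                           "camera": camera, "plant_id": img.stem, "path": str(img)})
          return rows

      if __name__ == "__main__":
          rows = harvest_metadata("phenomics_images")   # hypothetical root folder
          with open("metadata.csv", "w", newline="") as fh:
              writer = csv.DictWriter(fh, fieldnames=["experiment", "date", "camera",
                                                      "plant_id", "path"])
              writer.writeheader()
              writer.writerows(rows)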

  19. TM digital image products for applications. [computer compatible tapes

    NASA Technical Reports Server (NTRS)

    Barker, J. L.; Gunther, F. J.; Abrams, R. B.; Ball, D.

    1984-01-01

The image characteristics of digital data generated by the LANDSAT 4 thematic mapper (TM) are discussed. Digital data from the TM resides in tape files at various stages of image processing. Within each image data file, the image lines are blocked by a factor of 5 for a computer compatible tape in BT format (CCT-BT), or by a factor of 4 for a CCT-AT or CCT-PT; the layout of the image file differs in each format. Nominal geometric corrections, which provide proper geodetic relationships between different parts of the image, are available only for the CCT-PT. It is concluded that detector 3 of band 5 on the TM does not respond; this channel of data needs replacement. The empty bin phenomenon in CCT-AT images results from integer truncation in mixed-mode arithmetic operations.

  20. Galileo SSI/Ida Radiometrically Calibrated Images V1.0

    NASA Astrophysics Data System (ADS)

    Domingue, D. L.

    2016-05-01

    This data set includes Galileo Orbiter SSI radiometrically calibrated images of the asteroid 243 Ida, created using ISIS software and assuming nadir pointing. This is an original delivery of radiometrically calibrated files, not an update to existing files. All images archived include the asteroid within the image frame. Calibration was performed in 2013-2014.

  1. Casimage project: a digital teaching files authoring environment.

    PubMed

    Rosset, Antoine; Muller, Henning; Martins, Martina; Dfouni, Natalia; Vallée, Jean-Paul; Ratib, Osman

    2004-04-01

The goal of the Casimage project is to offer an authoring and editing environment integrated with the Picture Archiving and Communication Systems (PACS) for creating image-based electronic teaching files. This software is based on a client/server architecture allowing users remote access to a central database. This authoring environment allows radiologists to create reference databases and collections of digital images for teaching and research directly from clinical cases being reviewed on PACS diagnostic workstations. The environment includes all tools to create teaching files, including textual description, annotations, and image manipulation. The software also allows users to generate stand-alone CD-ROMs and web-based teaching files to easily share their collections. The system includes a web server compatible with the Medical Imaging Resource Center standard (MIRC, http://mirc.rsna.org) to easily integrate collections into the RSNA web network dedicated to teaching files. This software could be installed on any PACS workstation to allow users to add new cases at any time and anywhere during clinical operations. Several image collections were created with this tool, including a thoracic imaging collection that was subsequently made available on CD-ROM, on our web site, and through the MIRC network for public access.

  2. Clementine High Resolution Camera Mosaicking Project

    NASA Technical Reports Server (NTRS)

    1998-01-01

This report constitutes the final report for NASA Contract NASW-5054. This project processed Clementine I high resolution images of the Moon, mosaicked these images together, and created a 22-disk set of compact disk read-only memory (CD-ROM) volumes. The mosaics were produced through semi-automated registration and calibration of the high resolution (HiRes) camera's data against the geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic produced by the US Geological Survey (USGS). The HiRes mosaics were compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution nadir-looking observations. The images were spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel for sub-polar mosaics (below 80 deg. latitude) and using the stereographic projection at a scale of 30 m/pixel for polar mosaics. Only images with emission angles less than approximately 50 deg. were used. Images from non-mapping cross-track slews, which tended to have large SPICE errors, were generally omitted. The locations of the resulting image population were found to be offset from the UV/Vis basemap by up to 13 km (0.4 deg.). Geometric control was taken from the 100 m/pixel global and 150 m/pixel polar USGS Clementine Basemap Mosaics compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Radiometric calibration was achieved by removing the image nonuniformity dominated by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap, that approximately transform the 8-bit HiRes data to photometric units. The sub-polar mosaics are divided into tiles that cover approximately 1.75 deg. of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. Polar mosaics are tiled into squares 2250 pixels on a side, which spans approximately 2.2 deg. Two mosaics are provided for each pole: one corresponding to data acquired while periapsis was in the south, the other while periapsis was in the north. The CD-ROMs also contain ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files.

  3. Optimizing Cloud Based Image Storage, Dissemination and Processing Through Use of Mrf and Lerc

    NASA Astrophysics Data System (ADS)

    Becker, Peter; Plesea, Lucian; Maurer, Thomas

    2016-06-01

The volume and number of geospatial images being collected continue to increase exponentially with the ever-growing number of airborne and satellite imaging platforms and the increasing rate of data collection. As a result, the fast storage required to provide access to the imagery is a major cost factor in enterprise image management solutions that handle, process, and disseminate the imagery and the information extracted from it. Cloud-based object storage provides significantly cheaper and more elastic storage for this imagery, but it also brings disadvantages: greater latency for data access and the lack of traditional file access. Although traditional file formats such as GeoTIFF, JPEG2000, and NITF can be downloaded from such object storage, their structure and available compression are not optimal and access performance suffers. This paper details a solution that uses a new open image format for storage and access to geospatial imagery, optimized for cloud storage and processing. MRF (Meta Raster Format) is optimized for large collections of scenes such as those acquired from optical sensors. The format enables optimized data access from cloud storage, along with the use of new compression options that cannot easily be added to existing formats. The paper also provides an overview of LERC, a new image compression method that can be used with MRF and that provides very good lossless and controlled lossy compression.
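
    A minimal sketch of the access pattern that cloud-optimized tiled formats such as MRF enable is shown below: fetch only the bytes of a single tile from object storage with an HTTP Range request. The URL and the (offset, length) pair are placeholders; a real MRF reader would obtain them from the format's own index file.

      # Conceptual sketch: read one tile from object storage via an HTTP Range request.
      import urllib.request

      def fetch_tile(url, offset, length):
          req = urllib.request.Request(
              url, headers={"Range": f"bytes={offset}-{offset + length - 1}"})
          with urllib.request.urlopen(req) as resp:
              return resp.read()   # compressed tile payload (e.g. LERC or JPEG)

      if __name__ == "__main__":
          # Placeholder object URL and index entry, for illustration only.
          tile_bytes = fetch_tile("https://example-bucket.s3.amazonaws.com/scene.mrf_data",
                                  offset=1048576, length=65536)
          print(f"retrieved {len(tile_bytes)} bytes for one tile")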

  4. In vitro particle image velocity measurements in a model root canal: flow around a polymer rotary finishing file.

    PubMed

    Koch, Jon D; Smith, Nicholas A; Garces, Daniel; Gao, Luyang; Olsen, F Kris

    2014-03-01

Root canal irrigation is vital to thorough debridement and disinfection, but the mechanisms that contribute to its effectiveness are complex and uncertain. Traditionally, studies in this area have relied on before-and-after static comparisons to assess effectiveness, but new in situ tools are being developed to provide real-time assessments of irrigation. The aim in this work was to measure a cross section of the velocity field in the fluid flow around a polymer rotary finishing file in a model root canal. Fluorescent microparticles were seeded into an optically accessible acrylic root canal model. A polymer rotary finishing file was activated in a static position. After laser excitation, fluorescence from the microparticles was imaged onto a frame-transfer camera. Two consecutive images were cross-correlated to provide a measurement of a projected, 2-dimensional velocity field. The method reveals that fluid velocities can be much higher than the velocity of the file because of the shape of the file. Furthermore, these high velocities are in the axial direction of the canal rather than only in the direction of motion of the file. Particle image velocimetry indicates that fluid velocities induced by the rotating file can be much larger than the speed of the file. Particle image velocimetry can provide qualitative insight and quantitative measurements that may be useful for validating computational fluid dynamic models and connecting clinical observations to physical explanations in dental research. Copyright © 2014 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
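
    The core of the particle image velocimetry step described above, cross-correlating two consecutive frames and taking the correlation peak as the displacement, can be sketched as follows, with synthetic data standing in for the real image pair; this is a conceptual illustration, not the authors' processing code.

      import numpy as np

      def displacement(frame_a, frame_b):
          """Estimate the (rows, cols) shift of frame_b relative to frame_a from the
          peak of their FFT-based cross-correlation."""
          a = frame_a - frame_a.mean()
          b = frame_b - frame_b.mean()
          corr = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real
          peak = np.unravel_index(np.argmax(corr), corr.shape)
          # Unwrap peaks that fall in the "negative" half of the periodic correlation plane.
          return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          frame_a = rng.random((64, 64))                           # synthetic particle image
          frame_b = np.roll(frame_a, shift=(3, -2), axis=(0, 1))   # known displacement
          print(displacement(frame_a, frame_b))                    # expect (3, -2)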

  5. Image editing with Adobe Photoshop 6.0.

    PubMed

    Caruso, Ronald D; Postel, Gregory C

    2002-01-01

    The authors introduce Photoshop 6.0 for radiologists and demonstrate basic techniques of editing gray-scale cross-sectional images intended for publication and for incorporation into computerized presentations. For basic editing of gray-scale cross-sectional images, the Tools palette and the History/Actions palette pair should be displayed. The History palette may be used to undo a step or series of steps. The Actions palette is a menu of user-defined macros that save time by automating an action or series of actions. Converting an image to 8-bit gray scale is the first editing function. Cropping is the next action. Both decrease file size. Use of the smallest file size necessary for the purpose at hand is recommended. Final file size for gray-scale cross-sectional neuroradiologic images (8-bit, single-layer TIFF [tagged image file format] at 300 pixels per inch) intended for publication varies from about 700 Kbytes to 3 Mbytes. Final file size for incorporation into computerized presentations is about 10-100 Kbytes (8-bit, single-layer, gray-scale, high-quality JPEG [Joint Photographic Experts Group]), depending on source and intended use. Editing and annotating images before they are inserted into presentation software is highly recommended, both for convenience and flexibility. Radiologists should find that image editing can be carried out very rapidly once the basic steps are learned and automated. Copyright RSNA, 2002
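
    The manual steps described above (convert to 8-bit gray scale, crop, then save a large TIFF for publication and a small JPEG for presentations) can also be scripted for batch work. The sketch below uses the Pillow library as a stand-in; it is not the article's workflow or tooling, and the file names and crop box are placeholders.

      from PIL import Image

      def prepare(src, box=None):
          img = Image.open(src).convert("L")               # 8-bit gray scale
          if box:
              img = img.crop(box)                          # (left, upper, right, lower)
          img.save("for_publication.tif", dpi=(300, 300))  # single-layer TIFF at 300 ppi
          img.save("for_presentation.jpg", quality=85)     # much smaller high-quality JPEG

      if __name__ == "__main__":
          prepare("axial_slice.png", box=(40, 40, 480, 480))   # placeholder file and crop box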

  6. 76 FR 24467 - Combined Notice of Filings #1

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-02

    ... Schedule A Image to be effective 12/20/2010. Filed Date: 04/25/2011. Accession Number: 20110425-5169... per Order in ER11-2750-000 to resubmit the Schedule A Image to be effective 12/28/2010. Filed Date: 04... (toll free). For TTY, call (202) 502-8659. Dated: April 26, 2011. Nathaniel J. Davis, Sr., Deputy...

  7. ARCUS Internet Media Archive (IMA): A Window Into the Arctic - An Online Resource for Education and Outreach

    NASA Astrophysics Data System (ADS)

    Buxbaum, T. M.; Warnick, W. K.; Polly, B.; Hueffer, L. J.; Behr, S. A.

    2006-12-01

    The ARCUS Internet Media Archive (IMA) is a collection of photos, graphics, videos, and presentations about the Arctic that are shared through the Internet. It provides the arctic research community and the public at large with a centralized location where images and video pertaining to polar research can be browsed and retrieved for a variety of uses. The IMA currently contains almost 5,000 publicly accessible photos, including 3,000 photos from the National Science Foundation funded Teachers and Researchers Exploring and Collaborating (TREC) program, an educational research experience in which K-12 teachers participate in arctic research as a pathway to improving science education. The IMA also includes 360 video files, 260 audio files, and approximately 8,000 additional resources that are being prepared for public access. The contents of this archive are organized by file type, contributor's name, event, or by organization, with each photo or file accompanied by information on content, contributor source, and usage requirements. All the files are keyworded and all information, including file name and description, is completely searchable. ARCUS plans to continue to improve and expand the IMA with a particular focus on providing graphics depicting key arctic research results and findings as well as edited video archives of relevant scientific community meetings.

  8. Workload Characterization and Performance Implications of Large-Scale Blog Servers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeon, Myeongjae; Kim, Youngjae; Hwang, Jeaho

With the ever-increasing popularity of social network services (SNSs), an understanding of the characteristics of these services and their effects on the behavior of their host servers is critical. However, there has been a lack of research on the workload characterization of servers running SNS applications such as blog services. To fill this void, we empirically characterized real-world web server logs collected from one of the largest South Korean blog hosting sites for 12 consecutive days. The logs consist of more than 96 million HTTP requests and 4.7 TB of network traffic. Our analysis reveals the following: (i) the transfer size of non-multimedia files and blog articles can be modeled using a truncated Pareto distribution and a log-normal distribution, respectively; (ii) user access to blog articles does not show temporal locality, but is strongly biased towards articles posted with image or audio files. We additionally discuss the potential performance improvement through clustering of small files on a blog page into contiguous disk blocks, which benefits from the observed file access patterns. Trace-driven simulations show that, on average, the suggested approach achieves 60.6% better system throughput and reduces the processing time for file access by 30.8% compared to the best performance of the Ext4 file system.
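
    As an illustration of the distribution fitting mentioned in point (i), the sketch below fits a two-parameter log-normal model to a sample of transfer sizes and checks the fit with a Kolmogorov-Smirnov test; the sizes here are synthetic, whereas the paper fits its own server logs.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      transfer_sizes = rng.lognormal(mean=9.0, sigma=1.2, size=10_000)   # bytes, synthetic

      # Two-parameter log-normal fit (location fixed at zero).
      shape, loc, scale = stats.lognorm.fit(transfer_sizes, floc=0)
      print(f"fitted sigma = {shape:.2f}, fitted median = {scale:.0f} bytes")

      # Goodness of fit via a Kolmogorov-Smirnov test against the fitted model.
      ks = stats.kstest(transfer_sizes, "lognorm", args=(shape, loc, scale))
      print(f"KS statistic = {ks.statistic:.3f}")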

  9. Compression Algorithm Analysis of In-Situ (S)TEM Video: Towards Automatic Event Detection and Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teuton, Jeremy R.; Griswold, Richard L.; Mehdi, Beata L.

Precise analysis of both (S)TEM images and video is a time- and labor-intensive process. As an example, determining when crystal growth and shrinkage occur during the dynamic process of Li dendrite deposition and stripping involves manually scanning through each frame in the video to extract a specific set of frames/images. For large numbers of images, this process can be very time consuming, so a fast and accurate automated method is desirable. Given this need, we developed software that uses analysis of video compression statistics for detecting and characterizing events in large data sets. This software works by converting the data into a series of images, which it compresses into an MPEG-2 video using the open source "avconv" utility [1]. The software does not use the video itself, but rather analyzes the video statistics from the first pass of the video encoding that avconv records in the log file. This file contains statistics for each frame of the video, including the frame quality, intra-texture and predicted texture bits, and forward and backward motion vector resolution, among others. In all, avconv records 15 statistics for each frame. By combining different statistics, we have been able to detect events in various types of data. We have developed an interactive tool for exploring the data and the statistics that aids the analyst in selecting useful statistics for each analysis. Going forward, an algorithm for detecting and possibly describing events automatically can be written based on the statistic(s) for each data type.
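
    A rough sketch of this approach is shown below: run a first-pass MPEG-2 encode over an image sequence and read back per-frame statistics from the encoder's pass log. Exact option names, the pass-log file name, and the log line format differ between avconv/ffmpeg builds, so the details below are assumptions rather than the authors' exact pipeline.

      import re
      import subprocess

      # First pass: encode the image sequence to MPEG-2 and write per-frame statistics
      # to a pass log ("stats-0.log" is the assumed name produced by "-passlogfile stats").
      subprocess.run(
          ["avconv", "-y", "-i", "frames/frame_%05d.png",
           "-c:v", "mpeg2video", "-pass", "1", "-passlogfile", "stats",
           "-f", "null", "/dev/null"],
          check=True,
      )

      frames = []
      with open("stats-0.log") as log:
          for line in log:
              # Parse whatever "key:value" tokens the encoder wrote for this frame.
              stats = {k: float(v) for k, v in re.findall(r"(\w[\w-]*):([-\d.]+)", line)}
              if stats:
                  frames.append(stats)

      # Example heuristic: flag frames whose intra-texture bits jump well above average,
      # a possible sign that something changed in the field of view.
      if frames and "itex" in frames[0]:
          mean_itex = sum(f["itex"] for f in frames) / len(frames)
          events = [i for i, f in enumerate(frames) if f["itex"] > 2 * mean_itex]
          print("candidate event frames:", events)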

  10. Design and fabrication of multispectral optics using expanded glass map

    NASA Astrophysics Data System (ADS)

    Bayya, Shyam; Gibson, Daniel; Nguyen, Vinh; Sanghera, Jasbinder; Kotov, Mikhail; Drake, Gryphon; Deegan, John; Lindberg, George

    2015-06-01

    As the desire to have compact multispectral imagers in various DoD platforms is growing, the dearth of multispectral optics is widely felt. With the limited number of material choices for optics, these multispectral imagers are often very bulky and impractical on several weight sensitive platforms. To address this issue, NRL has developed a large set of unique infrared glasses that transmit from 0.9 to > 14 μm in wavelength and expand the glass map for multispectral optics with refractive indices from 2.38 to 3.17. They show a large spread in dispersion (Abbe number) and offer some unique solutions for multispectral optics designs. The new NRL glasses can be easily molded and also fused together to make bonded doublets. A Zemax compatible glass file has been created and is available upon request. In this paper we present some designs, optics fabrication and imaging, all using NRL materials.

  11. Development of a user-friendly system for image processing of electron microscopy by integrating a web browser and PIONE with Eos.

    PubMed

    Tsukamoto, Takafumi; Yasunaga, Takuo

    2014-11-01

Eos (Extensible object-oriented system) is a powerful application suite for image processing of electron micrographs. Eos normally works only through character user interfaces (CUIs) on operating systems such as OS X or Linux, which is not user-friendly: its users must be experts in image processing of electron micrographs and must also know some computer science, yet not everyone who needs Eos is comfortable with a CUI. We therefore extended Eos into an OS-independent web system with graphical user interfaces (GUIs) by integrating a web browser. The advantage of using a web browser is not only to give Eos a GUI but also to let Eos work in a distributed computational environment. Using Ajax (Asynchronous JavaScript and XML) technology, we implemented a more comfortable user interface in the browser. Eos has more than 400 image processing commands for electron microscopy, each with its own usage. Since the beginning of its development, Eos has managed its user interface through an interface definition file, the "OptionControlFile", written in CSV (Comma-Separated Value) format; each command has an OptionControlFile describing its interface and usage. Because this mechanism is mature and convenient, the developed GUI system, called "Zephyr" (Zone for Easy Processing of HYpermedia Resources), also reads the OptionControlFile and produces a web user interface automatically. The basic client-side actions have been implemented and support auto-generation of web forms with functions for execution, image preview, and file upload to a web server, so the system can execute Eos commands with the options specific to each command and carry out image analysis. Two problems remain: image file formats for visualization and a workspace for analysis. File format information is needed to check whether an input/output file is correct, and a common workspace is needed because the client is physically separated from the server. We solved the file format problem by extending the rules of the Eos OptionControlFile. To solve the workspace problem, we developed two types of system. The first uses only the local environment: the user runs a web server provided by Eos, accesses a web client through a browser, and manipulates local files with the GUI in the browser. The second employs PIONE (Process-rule for Input/Output Negotiation Environment), a platform we are developing that works in a heterogeneous distributed environment. Users can put resources such as microscope images and text files into the server-side environment supported by PIONE, and experts can write PIONE rule definitions that define an image processing workflow. PIONE runs each processing step on a suitable computer, following the defined rules. PIONE supports interactive manipulation, so a user can try a command with various settings; in this situation we contribute auto-generation of a GUI for a PIONE workflow. As an advanced function, we have developed a module that logs user actions. The logs record information such as the settings used in image processing and the sequence of commands, and using them effectively brings several advantages. For example, when an expert discovers some image processing know-how, other users can share logs containing that know-how, and by analyzing logs we may obtain recommended workflows for image analysis. We have also developed the system infrastructure to implement a social platform of image processing for electron microscopists. © The Author 2014. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
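
    The auto-generation idea can be illustrated with a toy sketch that reads a CSV interface definition and emits an HTML form for one command; the column layout (option, label, type, default) and the file name are invented for the example, since the real OptionControlFile format is defined by Eos itself.

      import csv
      from html import escape

      # Assumed columns: option, label, type, default (the real file's schema differs).
      def form_from_option_file(path, command):
          rows = csv.DictReader(open(path), fieldnames=["option", "label", "type", "default"])
          inputs = []
          for row in rows:
              kind = "file" if row["type"] == "file" else "text"
              inputs.append(f'<label>{escape(row["label"] or "")} '
                            f'<input name="{escape(row["option"] or "")}" type="{kind}" '
                            f'value="{escape(row["default"] or "")}"></label>')
          return (f'<form action="/run/{escape(command)}" method="post" '
                  f'enctype="multipart/form-data">\n  ' + "\n  ".join(inputs) +
                  '\n  <button type="submit">Execute</button>\n</form>')

      if __name__ == "__main__":
          # Hypothetical command and interface-definition file name.
          print(form_from_option_file("mrcImageInfo.ocf", "mrcImageInfo"))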

  12. 42 CFR 37.44 - Approval of radiographic facilities that use digital radiography systems.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... image acquisition, digitization, processing, compression, transmission, display, archiving, and... quality digital chest radiographs by submitting to NIOSH digital radiographic image files of a test object... digital radiographic image files from six or more sample chest radiographs that are of acceptable quality...

  13. 42 CFR 37.44 - Approval of radiographic facilities that use digital radiography systems.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... image acquisition, digitization, processing, compression, transmission, display, archiving, and... quality digital chest radiographs by submitting to NIOSH digital radiographic image files of a test object... digital radiographic image files from six or more sample chest radiographs that are of acceptable quality...

  14. A Scalable Cyberinfrastructure for Interactive Visualization of Terascale Microscopy Data

    PubMed Central

    Venkat, A.; Christensen, C.; Gyulassy, A.; Summa, B.; Federer, F.; Angelucci, A.; Pascucci, V.

    2017-01-01

    The goal of the recently emerged field of connectomics is to generate a wiring diagram of the brain at different scales. To identify brain circuitry, neuroscientists use specialized microscopes to perform multichannel imaging of labeled neurons at a very high resolution. CLARITY tissue clearing allows imaging labeled circuits through entire tissue blocks, without the need for tissue sectioning and section-to-section alignment. Imaging the large and complex non-human primate brain with sufficient resolution to identify and disambiguate between axons, in particular, produces massive data, creating great computational challenges to the study of neural circuits. Researchers require novel software capabilities for compiling, stitching, and visualizing large imagery. In this work, we detail the image acquisition process and a hierarchical streaming platform, ViSUS, that enables interactive visualization of these massive multi-volume datasets using a standard desktop computer. The ViSUS visualization framework has previously been shown to be suitable for 3D combustion simulation, climate simulation and visualization of large scale panoramic images. The platform is organized around a hierarchical cache oblivious data layout, called the IDX file format, which enables interactive visualization and exploration in ViSUS, scaling to the largest 3D images. In this paper we showcase the VISUS framework used in an interactive setting with the microscopy data. PMID:28638896

  15. A Scalable Cyberinfrastructure for Interactive Visualization of Terascale Microscopy Data.

    PubMed

    Venkat, A; Christensen, C; Gyulassy, A; Summa, B; Federer, F; Angelucci, A; Pascucci, V

    2016-08-01

    The goal of the recently emerged field of connectomics is to generate a wiring diagram of the brain at different scales. To identify brain circuitry, neuroscientists use specialized microscopes to perform multichannel imaging of labeled neurons at a very high resolution. CLARITY tissue clearing allows imaging labeled circuits through entire tissue blocks, without the need for tissue sectioning and section-to-section alignment. Imaging the large and complex non-human primate brain with sufficient resolution to identify and disambiguate between axons, in particular, produces massive data, creating great computational challenges to the study of neural circuits. Researchers require novel software capabilities for compiling, stitching, and visualizing large imagery. In this work, we detail the image acquisition process and a hierarchical streaming platform, ViSUS, that enables interactive visualization of these massive multi-volume datasets using a standard desktop computer. The ViSUS visualization framework has previously been shown to be suitable for 3D combustion simulation, climate simulation and visualization of large scale panoramic images. The platform is organized around a hierarchical cache oblivious data layout, called the IDX file format, which enables interactive visualization and exploration in ViSUS, scaling to the largest 3D images. In this paper we showcase the VISUS framework used in an interactive setting with the microscopy data.

  16. VizieR Online Data Catalog: 8 Fermi GRB afterglows follow-up (Singer+, 2015)

    NASA Astrophysics Data System (ADS)

    Singer, L. P.; Kasliwal, M. M.; Cenko, S. B.; Perley, D. A.; Anderson, G. E.; Anupama, G. C.; Arcavi, I.; Bhalerao, V.; Bue, B. D.; Cao, Y.; Connaughton, V.; Corsi, A.; Cucchiara, A.; Fender, R. P.; Fox, D. B.; Gehrels, N.; Goldstein, A.; Gorosabel, J.; Horesh, A.; Hurley, K.; Johansson, J.; Kann, D. A.; Kouveliotou, C.; Huang, K.; Kulkarni, S. R.; Masci, F.; Nugent, P.; Rau, A.; Rebbapragada, U. D.; Staley, T. D.; Svinkin, D.; Thone, C. C.; de Ugarte Postigo, A.; Urata, Y.; Weinstein, A.

    2015-10-01

    In this work, we present the GBM-iPTF (intermediate Palomar Transient Factory) afterglows from the first 13 months of this project. Follow-up observations include R-band photometry from the P48, multicolor photometry from the P60, spectroscopy (acquired with the P200, Keck, Gemini, APO, Magellan, Very Large Telescope (VLT), and GTC), and radio observations with the Very Large Array (VLA), the Combined Array for Research in Millimeter-wave Astronomy (CARMA), the Australia Telescope Compact Array (ATCA), and the Arcminute Microkelvin Imager (AMI). (3 data files).

  17. Peer-to-peer architecture for multi-departmental distributed PACS

    NASA Astrophysics Data System (ADS)

    Rosset, Antoine; Heuberger, Joris; Pysher, Lance; Ratib, Osman

    2006-03-01

We have elected to explore peer-to-peer technology as an alternative to centralized PACS architecture to meet the increasing requirement for wide access to images inside and outside a radiology department, the goal being to allow users across the enterprise to access any study at any time without the need for prefetching or routing of images from a central archive. Images can be accessed between different workstations and local storage nodes. We implemented "Bonjour", a remote file access technology developed by Apple that allows applications to share data and files remotely with optimized data access and transfer. Our open-source image display platform, OsiriX, was adapted to allow sharing of local DICOM images by making each workstation's local SQL database accessible from any other OsiriX workstation over the network. A server version of the OsiriX Core Data database also allows distributed archive servers to be accessed in the same way. The implemented infrastructure allows fast and efficient access to any image, anywhere, at any time, independently of the actual physical location of the data. It also benefits from the performance of distributed low-cost, high-capacity storage servers that can provide efficient caching of PACS data, which was found to be 10 to 20 times faster than accessing the same data from the central PACS archive. It is particularly suitable for large hospitals and academic environments where clinical conferences, interdisciplinary discussions, and successive sessions of image processing are often part of complex workflows for patient management and decision making.

  18. Mass-storage management for distributed image/video archives

    NASA Astrophysics Data System (ADS)

    Franchi, Santina; Guarda, Roberto; Prampolini, Franco

    1993-04-01

The realization of an image/video database requires a specific design for both the database structures and mass storage management. This issue was addressed in the project of the digital image/video database system designed at the IBM SEMEA Scientific & Technical Solution Center. Proper database structures have been defined to catalog image/video coding techniques with their related parameters, and the description of image/video contents. User workstations and servers are distributed along a local area network. Image/video files are not managed directly by the DBMS server. Because of their large size, they are stored outside the database on network devices. The database contains the pointers to the image/video files and the description of the storage devices. The system can use different kinds of storage media, organized in a hierarchical structure. Three levels of functions are available to manage the storage resources. The functions of the lower level provide media management: they allow cataloging devices and modifying device status and device network location. The medium level manages image/video files on a physical basis; it handles file migration between high-capacity media and low-access-time media. The functions of the upper level work on image/video files on a logical basis, as they archive, move, and copy image/video data selected by user-defined queries. These functions are used to support the implementation of a storage management strategy. The database information about the characteristics of both storage devices and coding techniques is used by the third-level functions to meet delivery/visualization requirements and to reduce archiving costs.

  19. Automatic Detection of Steganographic Content

    DTIC Science & Technology

    2005-06-30

Practically, it is mostly embedded into the media files, especially the image files. Consequently, a lot of the anti-steganography algorithms work with raw...1: not enough memory * -2: error running the removal algorithm EXPORT IMAGE *StegRemove(IMAGE *image, int *error); 2.8 Steganography Extraction API...researcher just invented a reliable algorithm that can detect the existence of a steganography if it is embedded anywhere in any uncompressed image. The

  20. Graphics-Printing Program For The HP Paintjet Printer

    NASA Technical Reports Server (NTRS)

    Atkins, Victor R.

    1993-01-01

    IMPRINT utility computer program developed to print graphics specified in raster files by use of Hewlett-Packard Paintjet(TM) color printer. Reads bit-mapped images from files on UNIX-based graphics workstation and prints out three different types of images: wire-frame images, solid-color images, and gray-scale images. Wire-frame images are in continuous tone or, in case of low resolution, in random gray scale. In case of color images, IMPRINT also prints by use of default palette of solid colors. Written in C language.

  1. 40 CFR 265.71 - Use of manifest system.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., § 265.71 was amended by revising paragraph (a)(2), and by adding paragraphs (f), (g), (h), (i), (j), and... operator, the owner or operator may transmit to the system operator an image file of Page 1 of the manifest, or both a data string file and the image file corresponding to Page 1 of the manifest. Any data or...

  2. Internet-based transfer of cardiac ultrasound images

    NASA Technical Reports Server (NTRS)

    Firstenberg, M. S.; Greenberg, N. L.; Garcia, M. J.; Morehead, A. J.; Cardon, L. A.; Klein, A. L.; Thomas, J. D.

    2000-01-01

    A drawback to large-scale multicentre studies is the time required for the centralized evaluation of diagnostic images. We evaluated the feasibility of digital transfer of echocardiographic images to a central laboratory for rapid and accurate interpretation. Ten patients undergoing trans-oesophageal echocardiographic scanning at three sites had representative single images and multiframe loops stored digitally. The images were analysed in the ordinary way. All images were then transferred via the Internet to a central laboratory and reanalysed by a different observer. The file sizes were 1.5-72 MByte and the transfer rates achieved were 0.6-4.8 Mbit/min. Quantitative measurements were similar between most on-site and central laboratory measurements (all P > 0.25), although measurements differed for left atrial width and pulmonary venous systolic velocities (both P < 0.05). Digital transfer of echocardiographic images and data to a central laboratory may be useful for multicentre trials.

  3. Efficient image data distribution and management with application to web caching architectures

    NASA Astrophysics Data System (ADS)

    Han, Keesook J.; Suter, Bruce W.

    2003-03-01

We present compact image data structures and associated packet delivery techniques for effective Web caching architectures. Presently, images on a web page are stored inefficiently, using a single image per file. Our approach is to use clustering to merge similar images into a single file in order to exploit the redundancy between images. Our studies indicate that a 30-50% image data size reduction can be achieved by eliminating the redundancies of color indexes. Attached to this file is new metadata that permits easy extraction of the individual images. This approach permits more efficient use of the cache, since a shorter list of cache references is required. Packet and transmission delays can be reduced by 50% by eliminating redundant TCP/IP headers and connection time. Thus, this innovative paradigm for the elimination of redundancy may provide valuable benefits for optimizing packet delivery in IP networks by reducing latency and minimizing bandwidth requirements.
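
    A simple sketch of the merging idea: concatenate several image files into one object and record per-image offsets in a small header so that any single image can be extracted without reading the rest. The container layout below is invented for illustration and is not the authors' format.

      import json
      from pathlib import Path

      def pack(image_paths, out_path):
          """Merge image files into one container with a JSON offset index up front."""
          index, offset, blobs = {}, 0, []
          for p in map(Path, image_paths):
              data = p.read_bytes()
              index[p.name] = {"offset": offset, "length": len(data)}
              blobs.append(data)
              offset += len(data)
          header = json.dumps(index).encode()
          with open(out_path, "wb") as out:
              out.write(len(header).to_bytes(8, "big"))   # fixed-size header-length field
              out.write(header)
              out.write(b"".join(blobs))

      def unpack_one(packed_path, name):
          """Extract a single image from the container without reading the others."""
          with open(packed_path, "rb") as fh:
              header_len = int.from_bytes(fh.read(8), "big")
              index = json.loads(fh.read(header_len))
              entry = index[name]
              fh.seek(8 + header_len + entry["offset"])
              return fh.read(entry["length"])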

  4. 5 CFR 1201.14 - Electronic filing procedures.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... (PDF), and image files (files created by scanning). A list of formats allowed can be found at e-Appeal..., or by uploading the supporting documents in the form of one or more PDF files in which each...

  5. Selective document image data compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1998-05-19

A method of storing information from filled-in form-documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. Then a second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel. 10 figs.
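
    A very rough sketch of the two-threshold step described above: binarize the scanned page and the filled-edge array with separate thresholds and combine them so that only foreground pixels survive. The threshold values, and the use of a per-pixel minimum as the combination rule, are assumptions for illustration; the patent's smoothing and Huffman coding steps are not reproduced.

      import numpy as np

      def extract_foreground(scan, filled_edges, t1=100, t2=128):
          """scan and filled_edges are 2-D uint8 arrays; returns a two-color (0/255) image."""
          first = np.where(scan < t1, 0, 255)            # dark pixels of the scan become black
          second = np.where(filled_edges < t2, 0, 255)   # dark pixels of the edge map become black
          combined = np.minimum(first, second)           # black wherever either binary image is black
          return combined.astype(np.uint8)

      if __name__ == "__main__":
          rng = np.random.default_rng(2)
          scan = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)    # stand-in scanned form
          edges = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in filled-edge array
          result = extract_foreground(scan, edges)
          print(result.shape, result.dtype, int((result == 0).sum()), "black pixels")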

  6. Selective document image data compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1998-01-01

A method of storing information from filled-in form-documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. Then a second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel.

  7. 42 CFR 37.44 - Approval of radiographic facilities that use digital radiography systems.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... effective management, safety, and proper performance of chest image acquisition, digitization, processing... digital chest radiographs by submitting to NIOSH digital radiographic image files of a test object (e.g... radiographic image files from six or more sample chest radiographs that are of acceptable quality to one or...

  8. In vitro radiographic determination of distances from working length files to root ends comparing Kodak RVG 6000, Schick CDR, and Kodak insight film.

    PubMed

    Radel, Robert T; Goodell, Gary G; McClanahan, Scott B; Cohen, Mark E

    2006-06-01

Previous studies suggest that digital and film-based radiography are similar for endodontic measurements. This study compared the accuracy and acceptability of measured distances from the tips of size #10 and #15 files to molar root apices in cadaver jaw sections, comparing the newly developed Kodak RVG 6000 and Schick CDR digital systems with digitized Kodak film. Standardized images were taken of files placed 0.5 to 1.5 mm short of true radiographic lengths. Images were imported into Adobe Photoshop 7.0, thereby blinding observers, who measured distances from files to root apices and assessed images for clarity (acceptability). Repeated measures ANOVA and Tukey-Kramer post hoc tests demonstrated that Kodak RVG 6000 images with enhanced contrast produced significantly less measurement error than unenhanced contrast Schick CDR images (p < 0.05) and significantly higher acceptability ratings than all other systems (all p < 0.002). Among these conditions, the newly developed Kodak RVG 6000 system provided the best overall images.

  9. Singapore National Medical Image Resource Centre (SN.MIRC): a world wide web resource for radiology education.

    PubMed

    Yang, Guo-Liang; Lim, C C Tchoyoson

    2006-08-01

    Radiology education is heavily dependent on visual images, and case-based teaching files comprising medical images can be an important tool for teaching diagnostic radiology. Currently, hardcopy film is being rapidly replaced by digital radiological images in teaching hospitals, and an electronic teaching file (ETF) library would be desirable. Furthermore, a repository of ETFs deployed on the World Wide Web has the potential for e-learning applications to benefit a larger community of learners. In this paper, we describe a Singapore National Medical Image Resource Centre (SN.MIRC) that can serve as a World Wide Web resource for teaching diagnostic radiology. On SN.MIRC, ETFs can be created using a variety of mechanisms including file upload and online form-filling, and users can search for cases using the Medical Image Resource Center (MIRC) query schema developed by the Radiological Society of North America (RSNA). The system can be improved with future enhancements, including multimedia interactive teaching files and distance learning for continuing professional development. However, significant challenges exist when exploring the potential of using the World Wide Web for radiology education.

  10. High-throughput imaging of heterogeneous cell organelles with an X-ray laser (CXIDB ID 25)

    DOE Data Explorer

Hantke, Max F.

    2014-11-17

    Preprocessed detector images that were used for the paper "High-throughput imaging of heterogeneous cell organelles with an X-ray laser". The CXI file contains the entire recorded data - including both hits and blanks. It also includes down-sampled images and LCLS machine parameters. Additionally, the Cheetah configuration file is attached that was used to create the pre-processed data.

  11. Incidence of Apical Crack Initiation during Canal Preparation using Hand Stainless Steel (K-File) and Hand NiTi (Protaper) Files.

    PubMed

    Soni, Dileep; Raisingani, Deepak; Mathur, Rachit; Madan, Nidha; Visnoi, Suchita

    2016-01-01

To evaluate the incidence of apical crack initiation during canal preparation with stainless steel K-files and hand ProTaper files (in vitro study). Sixty extracted mandibular premolar teeth were randomly selected and embedded in acrylic tubes filled with autopolymerizing resin. A baseline image of the apical surface of each specimen was recorded under a digital microscope (80×). The cervical and middle thirds of all samples were flared with #2 and #1 Gates-Glidden (GG) drills, and a second image was recorded. The teeth were randomly divided into four groups of 15 teeth each according to file type (hand K-file and hand ProTaper) and working length (WL) (instrumented at WL and 1 mm short of WL). A final image after dye penetration and a photomicrograph of the apical root surface were digitally recorded. The maximum number of cracks was observed with hand ProTaper files compared with hand K-files, both at WL and 1 mm short of WL. Chi-square testing revealed a highly significant effect of WL on crack formation at WL and 1 mm short of WL (p = 0.000). The minimum number of cracks at WL and 1 mm short of WL was observed with the hand K-file and the maximum with hand ProTaper files. Soni D, Raisingani D, Mathur R, Madan N, Visnoi S. Incidence of Apical Crack Initiation during Canal Preparation using Hand Stainless Steel (K-File) and Hand NiTi (Protaper) Files. Int J Clin Pediatr Dent 2016;9(4):303-307.

  12. The design of a fast Fourier filter for enhancing diagnostically relevant structures - endodontic files.

    PubMed

    Bruellmann, Dan; Sander, Steven; Schmidtmann, Irene

    2016-05-01

    The endodontic working length is commonly determined by electronic apex locators and intraoral periapical radiographs. No algorithms for the automatic detection of endodontic files in dental radiographs have been described in the recent literature. Teeth from the mandibles of pig cadavers were accessed, and digital radiographs of these specimens were obtained using an optical bench. The specimens were then recorded in identical positions and settings after the insertion of endodontic files of known sizes (ISO sizes 10-15). The frequency bands generated by the endodontic files were determined using fast Fourier transforms (FFTs) to convert the resulting images into frequency spectra. The detected frequencies were used to design a pre-segmentation filter, which was programmed using Delphi XE RAD Studio software (Embarcadero Technologies, San Francisco, USA) and tested on 20 radiographs. For performance evaluation purposes, the gauged lengths (measured with a caliper) of visible endodontic files were measured in the native and filtered images. The software was able to segment the endodontic files in both the samples and similar dental radiographs. We observed median length differences of 0.52 mm (SD: 2.76 mm) and 0.46 mm (SD: 2.33 mm) in the native and post-segmentation images, respectively. Pearson's correlation test revealed a significant correlation of 0.915 between the true length and the measured length in the native images; the corresponding correlation for the filtered images was 0.97 (p=0.0001). The algorithm can be used to automatically detect and measure the lengths of endodontic files in digital dental radiographs. Copyright © 2016 Elsevier Ltd. All rights reserved.
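
    A conceptual sketch of a frequency-domain pre-segmentation filter of this kind: keep only a radial band of spatial frequencies and transform back. The band limits below are arbitrary placeholders, not the frequency bands derived in the study, and the implementation is a generic NumPy illustration rather than the authors' Delphi software.

      import numpy as np

      def bandpass(image, low=0.05, high=0.25):
          """Keep spatial frequencies with normalized radius in [low, high)."""
          f = np.fft.fftshift(np.fft.fft2(image))
          h, w = image.shape
          yy, xx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(h)),
                               np.fft.fftshift(np.fft.fftfreq(w)), indexing="ij")
          radius = np.hypot(yy, xx)
          f[(radius < low) | (radius >= high)] = 0   # zero out everything outside the band
          return np.fft.ifft2(np.fft.ifftshift(f)).real

      if __name__ == "__main__":
          rng = np.random.default_rng(3)
          radiograph = rng.random((256, 256))        # stand-in for a digital radiograph
          filtered = bandpass(radiograph)
          print(filtered.shape, filtered.dtype)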

  13. Chemopreventive Agent Development | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"174","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Chemoprevenentive Agent Development Research Group Homepage Logo","field_file_image_title_text[und][0][value]":"Chemoprevenentive Agent Development Research Group Homepage

  14. Carnegie Mellon University bioimaging day 2014: Challenges and opportunities in digital pathology

    PubMed Central

    Rohde, Gustavo K.; Ozolek, John A.; Parwani, Anil V.; Pantanowitz, Liron

    2014-01-01

Recent advances in digital imaging are impacting the practice of pathology. One of the key enabling technologies leading the way towards this transformation is whole slide imaging (WSI), which allows glass slides to be converted into large image files that can be shared, stored, and analyzed rapidly. Many applications of this novel technology have evolved in the last decade, including education, research, and clinical applications. This publication highlights a collection of abstracts, each corresponding to a talk given at Carnegie Mellon University's (CMU) Bioimaging Day 2014, co-sponsored by the Biomedical Engineering and Lane Center for Computational Biology Departments at CMU. Topics related specifically to digital pathology are presented in this collection of abstracts. These include topics related to digital workflow implementation, imaging and artifacts, storage demands, and automated image analysis algorithms. PMID:25250190

  15. Carnegie Mellon University bioimaging day 2014: Challenges and opportunities in digital pathology.

    PubMed

    Rohde, Gustavo K; Ozolek, John A; Parwani, Anil V; Pantanowitz, Liron

    2014-01-01

Recent advances in digital imaging are impacting the practice of pathology. One of the key enabling technologies leading the way towards this transformation is whole slide imaging (WSI), which allows glass slides to be converted into large image files that can be shared, stored, and analyzed rapidly. Many applications of this novel technology have evolved in the last decade, including education, research, and clinical applications. This publication highlights a collection of abstracts, each corresponding to a talk given at Carnegie Mellon University's (CMU) Bioimaging Day 2014, co-sponsored by the Biomedical Engineering and Lane Center for Computational Biology Departments at CMU. Topics related specifically to digital pathology are presented in this collection of abstracts. These include topics related to digital workflow implementation, imaging and artifacts, storage demands, and automated image analysis algorithms.

  16. Cloud Engineering Principles and Technology Enablers for Medical Image Processing-as-a-Service.

    PubMed

    Bao, Shunxing; Plassard, Andrew J; Landman, Bennett A; Gokhale, Aniruddha

    2017-04-01

Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches since they can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. To that end, a cloud-based "medical image processing-as-a-service" offers promise in utilizing the ecosystem of Apache Hadoop, which is a flexible framework providing distributed, scalable, fault tolerant storage and parallel computational modules, and HBase, which is a NoSQL database built atop Hadoop's distributed file system. Despite this promise, HBase's load distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). This paper makes two contributions to address these concerns by describing key cloud engineering principles and technology enhancements we made to the Apache Hadoop ecosystem for medical imaging applications. First, we propose a row-key design for HBase, which is a necessary step that is driven by the hierarchical organization of imaging data. Second, we propose a novel data allocation policy within HBase to strongly enforce collocation of hierarchically related imaging data. The proposed enhancements accelerate data processing by minimizing network usage and localizing processing to machines where the data already exist. Moreover, our approach is amenable to the traditional scan-, subject-, and project-level analysis procedures, and is compatible with standard command line/scriptable image processing software. Experimental results for an illustrative sample of imaging data reveal that our new HBase policy results in a three-fold time improvement in conversion of classic DICOM to NIfTI file formats when compared with the default HBase region split policy, and nearly a six-fold improvement over a commonly available network file system (NFS) approach even for relatively small file sets. Moreover, file access latency is lower than network attached storage.
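
    A small sketch of a hierarchical row-key layout in the spirit of the first contribution: encode project/subject/session/scan/slice into one lexicographically sortable key so that hierarchically related images sort (and can be collocated) together. The field widths, separator, and example identifiers are illustrative assumptions, not the paper's exact scheme.

      def row_key(project, subject, session, scan, slice_index):
          # Fixed-width, zero-padded fields keep lexicographic order equal to numeric order.
          return "|".join([
              f"{project:>12}",
              f"{subject:>12}",
              f"{session:04d}",
              f"{scan:04d}",
              f"{slice_index:05d}",
          ])

      if __name__ == "__main__":
          # Hypothetical project and subject identifiers, for illustration only.
          keys = [row_key("ProjectA", "sub-0051", s, sc, sl)
                  for s in (1, 2) for sc in (1,) for sl in (1, 2, 3)]
          for k in sorted(keys):
              print(k)   # slices of a scan sort together, scans within a session, and so on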

  17. VO-Compatible Architecture for Managing and Processing Images of Moving Celestial Bodies : Application to the Gaia-GBOT Project

    NASA Astrophysics Data System (ADS)

    Barache, C.; Bouquillon, S.; Carlucci, T.; Taris, F.; Michel, L.; Altmann, M.

    2013-10-01

The Ground Based Optical Tracking (GBOT) group is a part of the Data Processing and Analysis Consortium, the large consortium of over 400 scientists from many European countries charged by ESA with the scientific conduct of the Gaia mission. The GBOT group is in charge of the optical part of the tracking of the Gaia satellite. This optical tracking is necessary to allow the Gaia mission to fully reach its goal in terms of astrometric precision. These observations will be done daily, during the 5 years of the mission, using optical CCD frames taken by a small network of 1-2 m class telescopes located all over the world. The required accuracy of the satellite position determination, with respect to the stars in the field of view, is 20 mas. These optical satellite positions will be sent weekly by GBOT to the SOC at ESAC and used with other kinds of observations (radio ranging and Doppler) by the MOC at ESOC to improve the Gaia ephemeris. For this purpose, we developed a set of accurate astrometric reduction programs specially adapted to tracking moving objects. The inputs of these programs for each tracked target are an ephemeris and a set of FITS images. The outputs for each image are: a file containing all information about the detected objects, a catalogue file used for calibration, a TIFF file providing a visual illustration of the reduction result, and an improved FITS image header. The final result is an overview file containing only the data related to the target, extracted from all the images. These programs are written in GNU Fortran 95 and provide results in VOTable format (supported by Virtual Observatory protocols). All these results are sent automatically to the GBOT database, which is built with the SAADA freeware. Users of this database can archive and query the data and also, thanks to the delegate option provided by SAADA, select a set of images and directly run the GBOT reduction programs through a dedicated web interface. For more information about SAADA (an Automatic System for Astronomy Data Archive, under GPL license and VO-compatible), see the related paper Michel et al. (2013).

  18. Functional evaluation of telemedicine with super high definition images and B-ISDN.

    PubMed

    Takeda, H; Matsumura, Y; Okada, T; Kuwata, S; Komori, M; Takahashi, T; Minatom, K; Hashimoto, T; Wada, M; Fujio, Y

    1998-01-01

In order to determine whether super high definition (SHD) images of 2048 x 2048 pixels at 60 frames/sec are suitable for telemedicine, we established a filing system for medical images and performed two experiments on the transmission of high-quality images. All images of various types produced from one case of ischemic heart disease were digitized and registered in the filing system. The images consisted of a plain chest x-ray, electrocardiogram, ultrasound cardiogram, cardiac scintigram, coronary angiogram, left ventriculogram, and so on; all were animated and totaled 243. We prepared a graphical user interface (GUI) for image retrieval based on medical events and modalities. Twenty-one cardiac specialists judged the quality of the SHD images to be somewhat poorer than the original pictures but sufficient for making diagnoses, and effective as a tool for teaching and case study purposes. The system's capability of simultaneously displaying several animated images was deemed especially effective for understanding the diagnosis. Efficient input methods and the capacity to file all produced images remain future issues. Using a B-ISDN network, the SHD files were prefetched to the servers at Kyoto University Hospital and the BBCC (Broadband ISDN Business chance & Culture Creation) laboratory as a telemedicine experiment. A simultaneous video conference system, control of image retrieval, and a pointing function made the teleconference successful in terms of high-quality medical images, quick response times, and interactive data exchange.

  19. Incorporating the APS Catalog of the POSS I and Image Archive in ADS

    NASA Technical Reports Server (NTRS)

    Humphreys, Roberta M.

    1998-01-01

    The primary purpose of this contract was to develop the software to both create and access an on-line database of images from digital scans of the Palomar Sky Survey. This required modifying our DBMS (called Star Base) to create an image database from the actual raw pixel data of the scans. The digitized images are processed into a set of coordinate-referenced index and pixel files that are stored as run-length files, thus achieving efficient lossless compression. For efficiency and ease of referencing, each digitized POSS I plate is then divided into 900 subplates. Our custom DBMS maps each query onto the corresponding POSS plate(s) and subplate(s). All images from the appropriate subplates are retrieved from disk using byte offsets taken from the index files. These are assembled on-the-fly into a GIF image file for browser display and a FITS format image file for retrieval. The FITS images have a pixel size of 0.33 arcseconds, and the FITS header contains astrometric and photometric information. This method keeps the disk requirements manageable while allowing for future improvements. When complete, the APS Image Database will contain over 130 Gb of data. A set of web query forms is available on-line, as well as an on-line tutorial and documentation. The database is served to the Internet by a high-speed SGI server and a high-bandwidth disk system; the URL is http://aps.umn.edu/IDB/. The image database software is written in Perl and C and has been compiled on SGI computers under IRIX 5.3. A copy of the written documentation is included and the software is on the accompanying exabyte tape.
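
    The record describes retrieving subplate pixels from disk via byte offsets held in index files and re-emitting them as FITS. A minimal sketch of that retrieval pattern is shown below; the file names, offset value, dimensions, and data type are assumptions for illustration, not the actual APS index format.

```python
import numpy as np
from astropy.io import fits

def read_subplate(pixel_path, offset, nrows, ncols, dtype=np.uint16):
    """Seek to a subplate's byte offset and return its pixels as a 2-D array."""
    itemsize = np.dtype(dtype).itemsize
    with open(pixel_path, "rb") as fh:
        fh.seek(offset)
        raw = fh.read(nrows * ncols * itemsize)
    return np.frombuffer(raw, dtype=dtype).reshape(nrows, ncols)

# Hypothetical plate file and index entry.
sub = read_subplate("plate_0042.pix", offset=1_048_576, nrows=512, ncols=512)
fits.PrimaryHDU(data=sub).writeto("cutout.fits", overwrite=True)
```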

  20. BioImg.org: A Catalog of Virtual Machine Images for the Life Sciences

    PubMed Central

    Dahlö, Martin; Haziza, Frédéric; Kallio, Aleksi; Korpelainen, Eija; Bongcam-Rudloff, Erik; Spjuth, Ola

    2015-01-01

    Virtualization is becoming increasingly important in bioscience, enabling assembly and provisioning of complete computer setups, including operating system, data, software, and services packaged as virtual machine images (VMIs). We present an open catalog of VMIs for the life sciences, where scientists can share information about images and optionally upload them to a server equipped with a large file system and fast Internet connection. Other scientists can then search for and download images that can be run on the local computer or in a cloud computing environment, providing easy access to bioinformatics environments. We also describe applications where VMIs aid life science research, including distributing tools and data, supporting reproducible analysis, and facilitating education. BioImg.org is freely available at: https://bioimg.org. PMID:26401099

  1. BioImg.org: A Catalog of Virtual Machine Images for the Life Sciences.

    PubMed

    Dahlö, Martin; Haziza, Frédéric; Kallio, Aleksi; Korpelainen, Eija; Bongcam-Rudloff, Erik; Spjuth, Ola

    2015-01-01

    Virtualization is becoming increasingly important in bioscience, enabling assembly and provisioning of complete computer setups, including operating system, data, software, and services packaged as virtual machine images (VMIs). We present an open catalog of VMIs for the life sciences, where scientists can share information about images and optionally upload them to a server equipped with a large file system and fast Internet connection. Other scientists can then search for and download images that can be run on the local computer or in a cloud computing environment, providing easy access to bioinformatics environments. We also describe applications where VMIs aid life science research, including distributing tools and data, supporting reproducible analysis, and facilitating education. BioImg.org is freely available at: https://bioimg.org.

  2. A practical guide to cardiovascular 3D printing in clinical practice: Overview and examples.

    PubMed

    Abudayyeh, Islam; Gordon, Brent; Ansari, Mohammad M; Jutzy, Kenneth; Stoletniy, Liset; Hilliard, Anthony

    2018-06-01

    The advent of more advanced 3D image processing and reconstruction, and of a variety of three-dimensional (3D) printing technologies using different materials, has made rapid and fairly affordable anatomically accurate models much more achievable. These models show great promise in facilitating procedural and surgical planning for complex congenital and structural heart disease. Refinements in 3D printing technology lend themselves to advanced applications in the fields of bio-printing, hemodynamic modeling, and implantable devices. Because this is a novel technology with large variability in software, processing tools and printing techniques, there is no standardized method by which a clinician can go from an imaging dataset to a complete model. Furthermore, the anatomy of interest and how the model will be used can determine the most appropriate technology. In this overview we discuss, from the standpoint of a clinical professional, the image acquisition, processing, and segmentation by which a printable file is created. We then review the various printing technologies and their advantages and disadvantages when printing the completed model file, and describe clinical scenarios where 3D printing can be utilized to address therapeutic challenges. © 2017, Wiley Periodicals, Inc.
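
    The overview describes going from a segmented imaging dataset to a printable model file. One common route (not necessarily the authors' workflow) is to run marching cubes over a binary segmentation mask and export the resulting surface as STL; a sketch using scikit-image and numpy-stl is given below, with the input mask and voxel spacing as placeholder values.

```python
import numpy as np
from skimage import measure
from stl import mesh  # numpy-stl package

# Placeholder binary segmentation (e.g., a blood-pool mask); in practice this
# would come from segmenting a CT or MR volume.
seg = np.zeros((64, 64, 64), dtype=np.uint8)
seg[16:48, 16:48, 16:48] = 1

# Extract an isosurface at the mask boundary; spacing is the voxel size in mm.
verts, faces, _, _ = measure.marching_cubes(seg, level=0.5, spacing=(0.5, 0.5, 0.5))

# Copy triangles into an STL mesh and save a printable file.
surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
for i, face in enumerate(faces):
    surface.vectors[i] = verts[face]
surface.save("model.stl")
```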

  3. TCIApathfinder: an R client for The Cancer Imaging Archive REST API.

    PubMed

    Russell, Pamela; Fountain, Kelly; Wolverton, Dulcy; Ghosh, Debashis

    2018-06-05

    The Cancer Imaging Archive (TCIA) hosts publicly available de-identified medical images of cancer from over 25 body sites and over 30,000 patients. Over 400 published studies have utilized freely available TCIA images. Images and metadata are available for download through a web interface or a REST API. Here we present TCIApathfinder, an R client for the TCIA REST API. TCIApathfinder wraps API access in user-friendly R functions that can be called interactively within an R session or easily incorporated into scripts. Functions are provided to explore the contents of the large database and to download image files. TCIApathfinder provides easy access to TCIA resources in the highly popular R programming environment. TCIApathfinder is freely available under the MIT license as a package on CRAN (https://cran.r-project.org/web/packages/TCIApathfinder/index.html) and at https://github.com/pamelarussell/TCIApathfinder. Copyright ©2018, American Association for Cancer Research.
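
    TCIApathfinder wraps the TCIA REST API in R, but the same API can be queried from any HTTP client. The sketch below uses Python's requests purely for illustration; the base URL and endpoint names are assumptions based on TCIA's publicly documented v4 API and should be checked against the current documentation before use.

```python
import requests

BASE = "https://services.cancerimagingarchive.net/services/v4/TCIA/query"  # assumed

# List collections, then the image series in the first one.
collections = requests.get(f"{BASE}/getCollectionValues",
                           params={"format": "json"}, timeout=30).json()
series = requests.get(f"{BASE}/getSeries",
                      params={"Collection": collections[0]["Collection"],
                              "format": "json"}, timeout=30).json()

# Download one series as a ZIP archive of DICOM files.
resp = requests.get(f"{BASE}/getImage",
                    params={"SeriesInstanceUID": series[0]["SeriesInstanceUID"]},
                    timeout=300)
with open("series.zip", "wb") as fh:
    fh.write(resp.content)
```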

  4. Prostate and Urologic Cancer | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"183","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Prostate and Urologic Cancer Research Group Homepage Logo","field_file_image_title_text[und][0][value]":"Prostate and Urologic Cancer Research Group Homepage

  5. Surmounting the Effects of Lossy Compression on Steganography

    DTIC Science & Technology

    1996-10-01

    ...and can be exploited to export sensitive information. Since images are frequently compressed for storage or transmission, effective steganography ... steganography is that which is stored with an accuracy far greater than necessary for the data's use and display. Image, Postscript, and audio files are ... information can be concealed in bitmapped image files with little or no visible degradation of the image [4]. This process, called steganography, is ...

  6. Secure Display of Space-Exploration Images

    NASA Technical Reports Server (NTRS)

    Cheng, Cecilia; Thornhill, Gillian; McAuley, Michael

    2006-01-01

    Java EDR Display Interface (JEDI) is software for either local display or secure Internet distribution, to authorized clients, of image data acquired from cameras aboard spacecraft engaged in exploration of remote planets. (EDR signifies experimental data record, which, in effect, signifies image data.) Processed at NASA's Multimission Image Processing Laboratory (MIPL), the data can be from either near-realtime processing streams or stored files. JEDI uses the Java Advanced Imaging application program interface, plus input/output packages that are parts of the Video Image Communication and Retrieval software of the MIPL, to display images. JEDI can be run as either a standalone application program or within a Web browser as a servlet with an applet front end. In either operating mode, JEDI communicates using the HTTP(s) protocol(s). In the Web-browser case, the user must provide a password to gain access. For each user and/or image data type, there is a configuration file, called a "personality file," containing parameters that control the layout of the displays and the information to be included in them. Once JEDI has accepted the user's password, it processes the requested EDR (provided that user is authorized to receive the specific EDR) to create a display according to the user's personality file.

  7. Asynchronous data acquisition and on-the-fly analysis of dose fractionated cryoEM images by UCSFImage

    PubMed Central

    Li, Xueming; Zheng, Shawn; Agard, David A.; Cheng, Yifan

    2015-01-01

    Newly developed direct electron detection cameras have a high image output frame rate that enables recording dose fractionated image stacks of frozen hydrated biological samples by electron cryomicroscopy (cryoEM). Such novel image acquisition schemes provide opportunities to analyze cryoEM data in ways that were previously impossible. The file size of a dose fractionated image stack is 20 ~ 60 times larger than that of a single image. Thus, efficient data acquisition and on-the-fly analysis of a large number of dose-fractionated image stacks become a serious challenge to any cryoEM data acquisition system. We have developed a computer-assisted system, named UCSFImage4, for semi-automated cryo-EM image acquisition that implements an asynchronous data acquisition scheme. This facilitates efficient acquisition, on-the-fly motion correction, and CTF analysis of dose fractionated image stacks with a total time of ~60 seconds/exposure. Here we report the technical details and configuration of this system. PMID:26370395
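
    The asynchronous scheme described here decouples exposure acquisition from on-the-fly processing. The toy sketch below illustrates the producer/consumer idea with a queue and stubbed processing; it is a conceptual illustration, not code from UCSFImage4, and the file names are invented.

```python
import queue
import threading
import time

exposures = queue.Queue()

def acquire(n_exposures):
    """Camera thread: hand each dose-fractionated stack to the queue as soon as
    it is written, without waiting for analysis of the previous one."""
    for i in range(n_exposures):
        time.sleep(0.1)                      # stand-in for the exposure itself
        exposures.put(f"stack_{i:04d}.mrc")  # hypothetical file name
    exposures.put(None)                      # sentinel: acquisition finished

def analyze():
    """Worker: on-the-fly motion correction and CTF estimation (stubbed)."""
    while (stack := exposures.get()) is not None:
        print("processing", stack)           # a real system would run correction + CTF here

threading.Thread(target=acquire, args=(5,), daemon=True).start()
analyze()
```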

  8. ARCUS Internet Media Archive (IMA): A Resource for Outreach and Education

    NASA Astrophysics Data System (ADS)

    Polly, Z.; Warnick, W. K.; Polly, J.

    2008-12-01

    The ARCUS Internet Media Archive (IMA) is a collection of photos, graphics, videos, and presentations about the Arctic that are shared through the Internet. It provides the arctic research community and the public at large with a centralized location where images and video pertaining to polar research can be browsed and retrieved for a variety of uses. The IMA currently contains almost 6,500 publicly accessible photos, including 4,000 photos from the National Science Foundation funded Teachers and Researchers Exploring and Collaborating (TREC, now PolarTREC) program, an educational research experience in which K-12 teachers participate in arctic research as a pathway to improving science education. The IMA also includes 450 video files, 270 audio files, nearly 100 graphics and logos, 28 presentations, and approximately 10,000 additional resources that are being prepared for public access. The contents of this archive are organized by file type, contributor's name, event, or by organization, with each photo or file accompanied by information on content, contributor source, and usage requirements. All the files are key-worded and all information, including file name and description, is completely searchable. ARCUS plans to continue to improve and expand the IMA with a particular focus on providing graphics depicting key arctic research results and findings as well as edited video archives of relevant scientific community meetings. To submit files or for more information and to view the ARCUS Internet Media Archive, please go to: http://media.arcus.org or email photo@arcus.org.

  9. NASA's Hyperwall Revealing the Big Picture

    NASA Technical Reports Server (NTRS)

    Sellers, Piers

    2011-01-01

    NASA's hyperwall is a sophisticated visualization tool used to display large datasets. The hyperwall, or video wall, is capable of displaying multiple high-definition data visualizations and/or images simultaneously across an arrangement of screens. Functioning as a key component of many NASA exhibits, the hyperwall is used to help explain phenomena, ideas, or examples of world change. The traveling version of the hyperwall is typically composed of nine 42-50 inch flat-screen monitors arranged in a 3x3 array. However, it is not limited in monitor size or number; screen sizes can be as large as 52 inches and the arrangement can include more than nine monitors. Generally, NASA satellite and model data are used to highlight particular themes in atmospheric, land, and ocean science. Many of the existing hyperwall stories reveal change across space and time, while others display large-scale still images accompanied by descriptive, story-telling captions. Hyperwall content on a variety of Earth Science topics already exists and is made available to the public at: eospso.gsfc.nasa.gov/hyperwall. Keynote and PowerPoint presentations as well as Summary of Story files are available for download for each existing topic. New hyperwall content and accompanying files will continue to be developed to promote scientific literacy across a diverse group of audience members. NASA invites the use of content accessible through this website but requests that the user acknowledge any and all data sources referenced in the content being used.

  10. Reconstruction of Human Monte Carlo Geometry from Segmented Images

    NASA Astrophysics Data System (ADS)

    Zhao, Kai; Cheng, Mengyun; Fan, Yanchang; Wang, Wen; Long, Pengcheng; Wu, Yican

    2014-06-01

    Human computational phantoms have been used extensively for scientific experimental analysis and simulation. This article presents a method for reconstructing human geometry from a series of segmented images of the Chinese Visible Human dataset. The phantom geometry can describe the detailed structure of an organ and can be converted into the input file of Monte Carlo codes for dose calculation. A whole-body computational phantom of a Chinese adult female, named Rad-HUMAN and containing about 28.8 billion voxels, has been established by the FDS Team. For convenient processing, different organs in the images were segmented with different RGB colors and the voxels were assigned positions within the dataset. For refinement, the positions were first sampled. Although the large numbers of voxels inside an organ are three-dimensionally adjacent, no thorough merging method existed to reduce the number of cells needed to describe the organ. In this study, the voxels on the organ surface were taken into account during merging, which produces fewer cells per organ, and an index-based sorting algorithm was introduced to speed up the merging. Finally, the Rad-HUMAN phantom, which includes a total of 46 organs and tissues, was described by cuboids in the Monte Carlo geometry for the simulation. The Monte Carlo geometry was constructed directly from the segmented images and the voxels were merged exhaustively. Each organ geometry model was constructed without ambiguity or self-crossing, and its geometry information represents the accurate appearance and precise interior structure of the organs. The constructed geometry, which largely retains the original shape of the organs, can easily be written into the input files of different Monte Carlo codes such as MCNP. Its general applicability and high performance were experimentally verified.
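
    The key step in this abstract is merging three-dimensionally adjacent voxels of one organ into a smaller set of cuboids for the Monte Carlo input. The sketch below shows only the simplest form of that idea, greedy run-length merging of same-label voxels along one axis; the published method is more elaborate (surface-aware merging plus index-based sorting).

```python
import numpy as np

def merge_runs_to_cuboids(labels, organ_id):
    """Merge consecutive voxels of one organ along the x axis into 1-voxel-thick
    cuboids, returning (z, y, x_start, x_end) tuples with inclusive x ranges."""
    cuboids = []
    nz, ny, nx = labels.shape
    for z in range(nz):
        for y in range(ny):
            x = 0
            while x < nx:
                if labels[z, y, x] == organ_id:
                    start = x
                    while x < nx and labels[z, y, x] == organ_id:
                        x += 1
                    cuboids.append((z, y, start, x - 1))
                else:
                    x += 1
    return cuboids

# Toy volume: organ id 1 occupies a small block of 12 voxels.
volume = np.zeros((4, 4, 4), dtype=int)
volume[:2, :2, :3] = 1
print(len(merge_runs_to_cuboids(volume, 1)), "cuboids")   # 4 runs instead of 12 voxels
```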

  11. VizieR Online Data Catalog: HD61005 SPHERE H and Ks images (Olofsson+, 2016)

    NASA Astrophysics Data System (ADS)

    Olofsson, J.; Samland, M.; Avenhaus, H.; Caceres, C.; Henning, T.; Moor, A.; Milli, J.; Canovas, H.; Quanz, S. P.; Schreiber, M. R.; Augereau, J.-C.; Bayo, A.; Bazzon, A.; Beuzit, J.-L.; Boccaletti, A.; Buenzli, E.; Casassus, S.; Chauvin, G.; Dominik, C.; Desidera, S.; Feldt, M.; Gratton, R.; Janson, M.; Lagrange, A.-M.; Langlois, M.; Lannier, J.; Maire, A.-L.; Mesa, D.; Pinte, C.; Rouan, D.; Salter, G.; Thalmann, C.; Vigan, A.

    2016-05-01

    The FITS file contains the reduced ADI and DPI SPHERE observations used to produce Fig. 1 of the paper. Besides the primary card, the file consists of 6 additional ImageHDUs. The first and second contain the SPHERE IRDIS ADI H band observations and the corresponding noise map. The third and fourth contain the SPHERE IRDIS ADI Ks band observations and the corresponding noise map. Finally, the fifth and sixth ImageHDUs contain the SPHERE IRDIS DPI H band data as well as the noise map. Each ADI image has 1024x1024 pixels, while the DPI images have 1800x1800 pixels. The header of the primary card contains the pixel sizes for each dataset and the wavelengths of the H and K band observations. (2 data files).
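
    Given the HDU layout described above, the file can be unpacked with astropy; the file name used below is a placeholder for the catalogued data file.

```python
from astropy.io import fits

with fits.open("hd61005_sphere.fits") as hdul:           # placeholder file name
    adi_h,  adi_h_noise  = hdul[1].data, hdul[2].data    # IRDIS ADI H band, 1024x1024
    adi_ks, adi_ks_noise = hdul[3].data, hdul[4].data    # IRDIS ADI Ks band, 1024x1024
    dpi_h,  dpi_h_noise  = hdul[5].data, hdul[6].data    # IRDIS DPI H band, 1800x1800
    primary_header = hdul[0].header   # pixel sizes and H/Ks wavelengths live here
```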

  12. Post-Hurricane Irene coastal oblique aerial photographs collected from Ocracoke Inlet, North Carolina, to Virginia Beach, Virginia, August 30-31, 2011

    USGS Publications Warehouse

    Morgan, Karen L. M.; Krohn, M. Dennis

    2016-02-17

    Table 1 provides detailed information about the GPS location, image name, date, and time for each of the 2,688 photographs that were taken, along with links to each photograph. In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on a marker and then clicking on either the thumbnail or the link above the thumbnail. The KML also shows the track of Hurricane Irene. The KML files were created using the photographic navigation files. These KML file(s) can be found in the kml folder.
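
    A KML file of this kind pairs each photograph's GPS position with a thumbnail and link. A minimal sketch of building such placemarks with the Python standard library is shown below; the photo records and URL are invented for illustration, not taken from the survey's navigation files.

```python
import xml.etree.ElementTree as ET

photos = [  # hypothetical records parsed from a navigation file
    {"name": "IMG_0001.jpg", "lat": 35.114, "lon": -75.987,
     "url": "https://example.gov/irene/IMG_0001.jpg"},
]

kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
doc = ET.SubElement(kml, "Document")
for p in photos:
    pm = ET.SubElement(doc, "Placemark")
    ET.SubElement(pm, "name").text = p["name"]
    ET.SubElement(pm, "description").text = f'<img src="{p["url"]}" width="400"/>'
    ET.SubElement(ET.SubElement(pm, "Point"), "coordinates").text = (
        f'{p["lon"]},{p["lat"]},0')   # KML coordinate order is lon,lat,alt

ET.ElementTree(kml).write("photos.kml", xml_declaration=True, encoding="UTF-8")
```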

  13. Incidence of Apical Crack Initiation during Canal Preparation using Hand Stainless Steel (K-File) and Hand NiTi (Protaper) Files

    PubMed Central

    Raisingani, Deepak; Mathur, Rachit; Madan, Nidha; Visnoi, Suchita

    2016-01-01

    Aim To evaluate the incidence of apical crack initiation during canal preparation with stainless steel K-files and hand ProTaper files (in vitro study). Materials and methods Sixty extracted mandibular premolar teeth were randomly selected and embedded in an acrylic tube filled with autopolymerizing resin. A baseline image of the apical surface of each specimen was recorded under a digital microscope (80×). The cervical and middle thirds of all samples were flared with #2 and #1 Gates-Glidden (GG) drills, and a second image was recorded. The teeth were randomly divided into four groups of 15 teeth each according to the file type (hand K-file and hand ProTaper) and working length (WL) (instrumented at WL and 1 mm less than WL). A final image after dye penetration and a photomicrograph of the apical root surface were digitally recorded. Results The maximum number of cracks was observed with hand ProTaper files compared with hand K-files at the WL and 1 mm short of WL. Chi-square testing revealed a highly significant effect of WL on crack formation at WL and 1 mm short of WL (p = 0.000). Conclusion The minimum number of cracks at WL and 1 mm short of WL was observed with hand K-files and the maximum with hand ProTaper files. How to cite this article Soni D, Raisingani D, Mathur R, Madan N, Visnoi S. Incidence of Apical Crack Initiation during Canal Preparation using Hand Stainless Steel (K-File) and Hand NiTi (Protaper) Files. Int J Clin Pediatr Dent 2016;9(4):303-307. PMID:28127160

  14. General consumer communication tools for improved image management and communication in medicine.

    PubMed

    Rosset, Chantal; Rosset, Antoine; Ratib, Osman

    2005-12-01

    We elected to explore new technologies emerging on the general consumer market that can improve and facilitate image and data communication in medical and clinical environments. These new technologies, developed for communication and storage of data, can improve user convenience and facilitate the communication and transport of images and related data beyond the usual limits and restrictions of a traditional picture archiving and communication system (PACS) network. We specifically tested and implemented three new technologies provided on Apple computer platforms. (1) We adopted the iPod, an MP3 portable player with hard disk storage, to easily and quickly move large numbers of DICOM images. (2) We adopted iChat, videoconference and instant-messaging software, to transmit DICOM images in real time to a distant computer for conferencing teleradiology. (3) Finally, we developed a direct secure interface to the iDisk service, a file-sharing service based on WebDAV technology, to send and share DICOM files between distant computers. These three technologies were integrated into a new open-source image navigation and display software package called OsiriX, allowing for manipulation and communication of multimodality and multidimensional DICOM image data sets. This software is freely available as an open-source project at http://homepage.mac.com/rossetantoine/OsiriX. Our experience showed that the implementation of these technologies allowed us to significantly enhance the existing PACS with valuable new features without any additional investment or the need for complex extensions of our infrastructure. The added features, such as teleradiology, secure and convenient image and data communication, and the use of external data storage services, open the gate to a much broader extension of our imaging infrastructure to the outside world.

  15. Rapid Generation of Large Dimension Photon Sieve Designs

    NASA Technical Reports Server (NTRS)

    Hariharan, Shravan; Fitzpatrick, Sean; Kim, Hyun Jung; Julian, Matthew; Sun, Wenbo; Tedjojuwono, Ken; MacDonnell, David

    2017-01-01

    A photon sieve is a revolutionary optical instrument that provides high resolution imaging at a fraction of the weight of typical telescopes (areal density of 0.3 kg/m2 compared to 25 kg/m2 for the James Webb Space Telescope). The photon sieve is a variation of a Fresnel zone plate consisting of many small holes spread out in a ring-like pattern, which focuses light of a specific wavelength by diffraction. The team at NASA Langley Research Center has produced a variety of small photon sieves for testing. However, it is necessary to increase both the scale and rate of production, as a single sieve previously took multiple weeks to design and fabricate. This report details the different methods used to produce photon sieve designs in two file formats: CIF and DXF. The differences between these methods and the two file formats were compared to determine the most efficient design process. Finally, a step-by-step sieve design and fabrication process is described. The design files can be generated in both formats using an editing tool such as Microsoft Excel; however, an approach using a MATLAB program reduced the computing time of the designs and increased the ability of the user to generate large photon sieve designs. Although the CIF generation process was deemed the most efficient, the design techniques for both file types have been proven to generate complete photon sieves that can be used for scientific applications.
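
    A photon sieve design ultimately reduces to computing hole positions from the Fresnel zone geometry. The sketch below uses the standard zone-plate radius formula r_n = sqrt(n*lambda*f + (n*lambda/2)^2) and writes hole coordinates to CSV; the wavelength, focal length, hole-diameter scaling, and packing factor are illustrative choices, not parameters from this report.

```python
import csv
import math

WAVELENGTH = 632.8e-9   # assumed design wavelength (HeNe laser), metres
FOCAL_LEN  = 1.0        # assumed focal length, metres
N_ZONES    = 200        # number of Fresnel zones to consider
HOLE_SCALE = 1.5        # hole diameter as a multiple of the local zone width (assumed)

with open("sieve_holes.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["x_m", "y_m", "diameter_m"])
    for n in range(1, N_ZONES, 2):                       # odd (transparent) zones only
        r = math.sqrt(n * WAVELENGTH * FOCAL_LEN + (n * WAVELENGTH / 2) ** 2)
        w = WAVELENGTH * FOCAL_LEN / (2 * r)             # local zone width
        d = HOLE_SCALE * w                               # hole diameter
        n_holes = max(1, int(2 * math.pi * r / (1.5 * d)))   # keep holes ~1.5 diameters apart
        for k in range(n_holes):
            theta = 2 * math.pi * k / n_holes
            writer.writerow([r * math.cos(theta), r * math.sin(theta), d])
```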

  16. Digitizing an Analog Radiography Teaching File Under Time Constraint: Trade-Offs in Efficiency and Image Quality.

    PubMed

    Loehfelm, Thomas W; Prater, Adam B; Debebe, Tequam; Sekhar, Aarti K

    2017-02-01

    We digitized the radiography teaching file at Black Lion Hospital (Addis Ababa, Ethiopia) during a recent trip, using a standard digital camera and a fluorescent light box. Our goal was to photograph every radiograph in the existing library while optimizing the final image size to the maximum resolution of a high quality tablet computer, preserving the contrast resolution of the radiographs, and minimizing total library file size. A secondary important goal was to minimize the cost and time required to take and process the images. Three workers were able to efficiently remove the radiographs from their storage folders, hang them on the light box, operate the camera, catalog the image, and repack the radiographs back to the storage folder. Zoom, focal length, and film speed were fixed, while aperture and shutter speed were manually adjusted for each image, allowing for efficiency and flexibility in image acquisition. Keeping zoom and focal length fixed, which kept the view box at the same relative position in all of the images acquired during a single photography session, allowed unused space to be batch-cropped, saving considerable time in post-processing, at the expense of final image resolution. We present an analysis of the trade-offs in workflow efficiency and final image quality, and demonstrate that a few people with minimal equipment can efficiently digitize a teaching file library.
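
    Because the light box stayed in the same relative position in every frame, the unused border can be cropped in a single batch pass and the result downsampled to the target tablet resolution. A sketch with Pillow is shown below; the crop box, target size, and folder names are placeholders, not values from the project.

```python
from pathlib import Path
from PIL import Image

CROP_BOX = (400, 200, 3600, 2600)   # assumed fixed light-box region (left, top, right, bottom)
TARGET_LONG_EDGE = 2048             # assumed tablet display resolution

out_dir = Path("digitized")
out_dir.mkdir(exist_ok=True)

for path in Path("raw_photos").glob("*.jpg"):
    img = Image.open(path).crop(CROP_BOX)
    scale = TARGET_LONG_EDGE / max(img.size)
    if scale < 1:                   # only downsample, never enlarge
        img = img.resize((round(img.width * scale), round(img.height * scale)),
                         Image.LANCZOS)
    img.save(out_dir / path.name, quality=92)
```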

  17. Mass storage technology in networks

    NASA Astrophysics Data System (ADS)

    Ishii, Katsunori; Takeda, Toru; Itao, Kiyoshi; Kaneko, Reizo

    1990-08-01

    Trends and features of mass storage subsystems in networks are surveyed and their key technologies spotlighted. Storage subsystems are becoming increasingly important in new network systems in which communications and data processing are systematically combined. These systems require a new class of high-performance mass-information storage in order to effectively utilize their processing power. The requirements of high transfer rates, high transaction rates and large storage capacities, coupled with high functionality, fault tolerance and flexibility in configuration, are major challenges for storage subsystems. Recent progress in optical disk technology has improved the performance of on-line external memories based on optical disk drives, which are competing with mid-range magnetic disks. Optical disks are more effective than magnetic disks for low-traffic, random-access files storing multimedia data that require large capacity, such as archival use and information distribution by ROM disks. Finally, image-coded document file servers for local area network use that employ 130 mm rewritable magneto-optical disk subsystems are demonstrated.

  18. Risk Factor Analysis in Low-Temperature Geothermal Play Fairway Analysis for the Appalachian Basin (GPFA-AB)

    DOE Data Explorer

    Teresa E. Jordan

    2015-09-30

    This submission contains information used to compute the risk factors for the GPFA-AB project (DE-EE0006726). The risk factors are natural reservoir quality, thermal resource quality, potential for induced seismicity, and utilization. The methods used to combine the risk factors included taking the product, sum, and minimum of the four risk factors. The files are divided into images, rasters, shapefiles, and supporting information. The image files show what the raster and shapefiles should look like. The raster files contain the input risk factors, calculation of the scaled risk factors, and calculation of the combined risk factors. The shapefiles include definition of the fairways, definition of the US Census Places, the center of the raster cells, and locations of industries. Supporting information contains details of the calculations or processing used in generating the files. An image of a raster has the same name as the raster, except with *.png as the file ending instead of *.tif. Images with “fairways” or “industries” added to the name are composed of a raster with the relevant shapefile added. The file About_GPFA-AB_Phase1RiskAnalysisTask5DataUpload.pdf contains information on the citation, special use considerations, authorship, etc. More details on each file are given in the spreadsheet “list_of_contents.csv” in the folder “SupportingInfo”. Code used to calculate values is available at https://github.com/calvinwhealton/geothermal_pfa under the folder “combining_metrics”.
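
    Once the four risk factors are scaled onto a common grid, the three combinations named above (product, sum, minimum) are simple element-wise reductions. A sketch with NumPy follows; random arrays stand in for the real rasters.

```python
import numpy as np

# Stand-ins for the four scaled risk-factor rasters on a common grid, values in [0, 1].
reservoir   = np.random.rand(100, 100)
thermal     = np.random.rand(100, 100)
seismicity  = np.random.rand(100, 100)
utilization = np.random.rand(100, 100)

stack = np.stack([reservoir, thermal, seismicity, utilization])

combined_product = stack.prod(axis=0)   # strongly penalizes any single poor factor
combined_sum     = stack.sum(axis=0)    # treats the factors as additive
combined_minimum = stack.min(axis=0)    # limited by the weakest factor
```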

  19. GPFA-AB_Phase1RiskAnalysisTask5DataUpload

    DOE Data Explorer

    Teresa E. Jordan

    2015-09-30

    This submission contains information used to compute the risk factors for the GPFA-AB project (DE-EE0006726). The risk factors are natural reservoir quality, thermal resource quality, potential for induced seismicity, and utilization. The methods used to combine the risk factors included taking the product, sum, and minimum of the four risk factors. The files are divided into images, rasters, shapefiles, and supporting information. The image files show what the raster and shapefiles should look like. The raster files contain the input risk factors, calculation of the scaled risk factors, and calculation of the combined risk factors. The shapefiles include definition of the fairways, definition of the US Census Places, the center of the raster cells, and locations of industries. Supporting information contains details of the calculations or processing used in generating the files. An image of a raster has the same name as the raster, except with *.png as the file ending instead of *.tif. Images with “fairways” or “industries” added to the name are composed of a raster with the relevant shapefile added. The file About_GPFA-AB_Phase1RiskAnalysisTask5DataUpload.pdf contains information on the citation, special use considerations, authorship, etc. More details on each file are given in the spreadsheet “list_of_contents.csv” in the folder “SupportingInfo”. Code used to calculate values is available at https://github.com/calvinwhealton/geothermal_pfa under the folder “combining_metrics”.

  20. Halftoning method for the generation of motion stimuli

    NASA Technical Reports Server (NTRS)

    Mulligan, Jeffrey B.; Stone, Leland S.

    1989-01-01

    This paper describes a novel computer-graphic technique for the generation of a broad class of motion stimuli for vision research, which uses color table animation in conjunction with a single base image. Using this technique, contrast and temporal frequency can be varied with a negligible amount of computation once a single base image is produced. Since only two bit planes are needed to display a single drifting grating, an eight-bit/pixel display can be used to generate four-component plaids, in which each component of the plaid has independently programmable contrast and temporal frequency. Because the contrast and temporal frequencies of the various components are mutually independent, a large number of two-dimensional stimulus motions can be produced from a single image file.
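
    Color-table animation of this kind stores only quantized spatial phase in the base image and pushes contrast and temporal frequency into a tiny per-frame palette. The sketch below shows one common way to do that with a four-entry palette (two bit planes per grating); it illustrates the general technique rather than the authors' exact scheme, and the spatial frequency is an assumed value.

```python
import numpy as np

H, W, LEVELS = 256, 256, 4            # two bit planes -> four palette entries
FX, FY = 4 / W, 0.0                   # assumed spatial frequency in cycles/pixel

# Base image computed once: each pixel holds its quantized spatial phase.
x, y = np.meshgrid(np.arange(W), np.arange(H))
base = np.floor(LEVELS * ((FX * x + FY * y) % 1.0)).astype(np.uint8)

def palette(contrast, temporal_freq, t, mean=0.5):
    """Luminance lookup table for the frame at time t; only these LEVELS values
    change per frame, so drifting the grating costs almost no computation."""
    k = np.arange(LEVELS)
    return mean * (1 + contrast * np.cos(2 * np.pi * (k / LEVELS - temporal_freq * t)))

frame = palette(contrast=0.5, temporal_freq=2.0, t=0.1)[base]   # one displayed frame
```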

  1. Nfsroot

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garlick, J.

    2007-08-23

    The nfsroot RPM is installed into a root image which is served to clients read-only. When the client boots, the union file system (aufs) is used to combine this read-only NFS root file system with a tmpfs file system, yielding a read-write file system where changes go to the tmpfs.

  2. 78 FR 77194 - In the Matter of the Enlightened Gourmet, Inc., Eternal Image, Inc., NMT Medical, Inc., and Wits...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-20

    ... SECURITIES AND EXCHANGE COMMISSION [File No. 500-1] In the Matter of the Enlightened Gourmet, Inc., Eternal Image, Inc., NMT Medical, Inc., and Wits Basin Precious Minerals, Inc.; Order of Suspension of... securities of Eternal Image, Inc. because it has not filed any periodic reports since the period ended...

  3. Transportable Maps Software. Volume I.

    DTIC Science & Technology

    1982-07-01

    ...being collected at the beginning or end of the routine. This allows the interaction to be followed sequentially through its steps by anyone reading the ... flow is either simple sequential, simple conditional (the equivalent of 'if-then-else'), simple iteration ('DO-loop'), or the non-linear recursion ... input raster images to be in the form of sequential binary files with a SEGMENTED record type. The advantage of this form is that large logical records ...

  4. Survey of Non-Rigid Registration Tools in Medicine.

    PubMed

    Keszei, András P; Berkels, Benjamin; Deserno, Thomas M

    2017-02-01

    We catalogue available software solutions for non-rigid image registration to support scientists in selecting suitable tools for specific medical registration purposes. Registration tools were identified using a non-systematic search in Pubmed, Web of Science, IEEE Xplore® Digital Library, and Google Scholar, and through references in identified sources (n = 22). Exclusions were due to unavailability or inappropriateness. The remaining (n = 18) tools were classified by (i) access and technology, (ii) interfaces and application, (iii) living community, (iv) supported file formats, and (v) types of registration methodologies, emphasizing the similarity measures implemented. Out of the 18 tools, (i) 12 are open source, 8 are released under a permissive free license, which imposes the least restrictions on the use and further development of the tool, and 8 provide graphical processing unit (GPU) support; (ii) 7 are built on software platforms and 5 were developed for brain image registration; (iii) 6 are under active development but only 3 have had their last update in 2015 or 2016; (iv) 16 support the Analyze format, while 7 file formats can be read with only one of the tools; and (v) 6 provide multiple registration methods and 6 provide landmark-based registration methods. Based on open source status, licensing, GPU support, active community, supported file formats, algorithms, and similarity measures, the tools Elastix and Plastimatch are chosen for the ITK platform and for use without platform requirements, respectively. Researchers in medical image analysis already have a large choice of registration tools freely available. However, the most recently published algorithms may not yet be included in the tools.

  5. Development and evaluation of oral reporting system for PACS.

    PubMed

    Umeda, T; Inamura, K; Inamoto, K; Ikezoe, J; Kozuka, T; Kawase, I; Fujii, Y; Karasawa, H

    1994-05-01

    Experimental workstations for oral reporting and synchronized image filing have been developed and evaluated by radiologists and referring physicians. The file medium is a 5.25-inch rewritable magneto-optical disk of 600-MB capacity whose file format is in accordance with the IS&C specification. The evaluation results show that this system is superior to other existing methods of the same kind, such as transcribing, dictating, handwriting, typewriting and key selection. The most significant advantage of the system is that images and their interpretation are never separated. The first practical application, to the teaching file and the teaching conference, is contemplated at Osaka University Hospital. This system is completely digital in terms of images, voice and demographic data, so that on-line transmission, off-line communication or filing to any database can easily be realized in a PACS environment. We are developing an integrated system with a speech recognizer connected to this digitized oral reporting system.

  6. Documentation for the machine-readable version of a deep objective-prism survey for large Magellanic cloud members

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1982-01-01

    This catalog contains 1273 proven or probable Large Magellanic Cloud (LMC) members, as found on deep objective-prism plates taken with the Curtis Schmidt telescope at Cerro Tololo Inter-American Observatory in Chile. The stars are generally brighter than about photographic magnitude 14. Approximate spectral types were determined by examination of the 580 Å/mm objective-prism spectra; approximate 1975 positions were obtained by measuring relative to the 1975 coordinate grids of the Uppsala-Mount Stromlo Atlas of the LMC (Gascoigne and Westerlund 1961), and approximate photographic magnitudes were determined by averaging image-density measures from the plates and image-diameter measures on the 'B' charts. The machine-readable version of the LMC survey catalog is described to enable users to read and process the tape file without problems or guesswork.

  7. Chapter 2: Tabular Data and Graphical Images in Support of the U.S. Geological Survey National Oil and Gas Assessment - The Wind River Basin Province

    USGS Publications Warehouse

    Klett, T.R.; Le, P.A.

    2007-01-01

    This chapter describes data used in support of the process being applied by the U.S. Geological Survey (USGS) National Oil and Gas Assessment (NOGA) project. Digital tabular data used in this report, and archival data that permit the user to perform further analyses, are available elsewhere on this CD-ROM. The data can be imported by computers and software without the reader transcribing them from the Portable Document Format (.pdf) files of the text. Graphical images are provided as .pdf files, and tabular data are provided in raw form as tab-delimited text files (.tab files) because of the number and variety of platforms and software available.

  8. Mapping DICOM to OpenDocument format

    NASA Astrophysics Data System (ADS)

    Yu, Cong; Yao, Zhihong

    2009-02-01

    In order to enhance the readability, extensibility and sharing of DICOM files, we have introduced XML into the DICOM file system (SPIE Volume 5748) [1] and the multilayer tree structure into DICOM (SPIE Volume 6145) [2]. In this paper, we propose mapping DICOM to ODF (OpenDocument Format), since ODF is also based on XML. As a result, the new format realizes the separation of content (including text and images) from display style. Meanwhile, since OpenDocument files take the form of a ZIP-compressed archive, the new kind of DICOM files can benefit from ZIP's lossless compression to reduce file size. Moreover, this open format can also guarantee long-term access to data without legal or technical barriers, making medical images accessible to various fields.
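
    As a rough illustration of the idea of re-expressing DICOM content in a ZIP-based, XML-described container (not the mapping actually defined in the paper), metadata from a DICOM file can be serialized to an XML part and packed together with the pixel data using pydicom and the standard library; the input and output file names are placeholders.

```python
import zipfile
from xml.sax.saxutils import escape

import pydicom

ds = pydicom.dcmread("input.dcm")   # placeholder input file

# Serialize non-binary data elements as simple XML items.
items = "".join(
    f'<item keyword="{escape(str(el.keyword))}">{escape(str(el.value))}</item>'
    for el in ds if el.VR not in ("OB", "OW")
)
content_xml = f"<?xml version='1.0' encoding='UTF-8'?><document>{items}</document>"

with zipfile.ZipFile("dicom_as_odf_like.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("content.xml", content_xml)              # content kept separate from display
    if "PixelData" in ds:
        zf.writestr("Pictures/pixeldata.bin", ds.PixelData)
```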

  9. Incidence of apical root cracks and apical dentinal detachments after canal preparation with hand and rotary files at different instrumentation lengths.

    PubMed

    Liu, Rui; Kaiwar, Anjali; Shemesh, Hagay; Wesselink, Paul R; Hou, Benxiang; Wu, Min-Kai

    2013-01-01

    The aim of this study was to compare the incidence of apical root cracks and dentinal detachments after canal preparation with hand and rotary files at different instrumentation lengths. Two hundred forty mandibular incisors were mounted in resin blocks with simulated periodontal ligaments, and the apex was exposed. The root canals were instrumented with rotary and hand files, namely K3, ProTaper, and nickel-titanium Flex K-files, to the major apical foramen (AF), short of the AF, or beyond the AF. Digital images of the apical surface of every tooth were taken during the apical enlargement at each file change. Development of dentinal defects was determined by comparing these images with the baseline image. A multinomial logistic regression test was performed to identify influencing factors. Apical cracks developed in 1 of 80 teeth (1.3%) with hand files and 31 of 160 teeth (19.4%) with rotary files. Apical dentinal detachment developed in 2 of 80 teeth (2.5%) with hand files and 35 of 160 teeth (21.9%) with rotary files. Instrumentation with rotary files terminated 2 mm short of the AF did not cause any cracks. Significantly fewer cracks and detachments occurred when instrumentation with rotary files was terminated short of the AF, as compared with instrumentation terminated at or beyond the AF (P < .05). The AF deviated from the anatomic apex in 128 of 240 teeth (53%). Significantly more apical dentinal detachments appeared in teeth with a deviated AF (P = .033). Rotary instruments caused more dentinal defects than hand instruments; instrumentation short of the AF reduced the risk of dentinal defects. Copyright © 2013 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  10. Reconstruction of organ dose for external radiotherapy patients in retrospective epidemiologic studies

    NASA Astrophysics Data System (ADS)

    Lee, Choonik; Jung, Jae Won; Pelletier, Christopher; Pyakuryal, Anil; Lamart, Stephanie; Kim, Jong Oh; Lee, Choonsik

    2015-03-01

    Organ dose estimation for retrospective epidemiological studies of late effects in radiotherapy patients involves two challenges: radiological images to represent patient anatomy are not usually available for patient cohorts who were treated years ago, and efficient dose reconstruction methods for large-scale patient cohorts are not well established. In the current study, we developed methods to reconstruct organ doses for radiotherapy patients by using a series of computational human phantoms coupled with a commercial treatment planning system (TPS) and a radiotherapy-dedicated Monte Carlo transport code, and performed illustrative dose calculations. First, we developed methods to convert the anatomy and organ contours of the pediatric and adult hybrid computational phantom series to Digital Imaging and Communications in Medicine (DICOM)-image and DICOM-structure files, respectively. The resulting DICOM files were imported to a commercial TPS for simulating radiotherapy and dose calculation for in-field organs. The conversion process was validated by comparing electron densities relative to water and organ volumes between the hybrid phantoms and the DICOM files imported in TPS, which showed agreements within 0.1 and 2%, respectively. Second, we developed a procedure to transfer DICOM-RT files generated from the TPS directly to a Monte Carlo transport code, x-ray Voxel Monte Carlo (XVMC) for more accurate dose calculations. Third, to illustrate the performance of the established methods, we simulated a whole brain treatment for the 10 year-old male phantom and a prostate treatment for the adult male phantom. Radiation doses to selected organs were calculated using the TPS and XVMC, and compared to each other. Organ average doses from the two methods matched within 7%, whereas maximum and minimum point doses differed up to 45%. The dosimetry methods and procedures established in this study will be useful for the reconstruction of organ dose to support retrospective epidemiological studies of late effects in radiotherapy patients.

  11. Semi-Automatic Extraction Algorithm for Images of the Ciliary Muscle

    PubMed Central

    Kao, Chiu-Yen; Richdale, Kathryn; Sinnott, Loraine T.; Ernst, Lauren E.; Bailey, Melissa D.

    2011-01-01

    Purpose To develop and evaluate a semi-automatic algorithm for segmentation and morphological assessment of the dimensions of the ciliary muscle in Visante™ Anterior Segment Optical Coherence Tomography images. Methods Geometric distortions in Visante images analyzed as binary files were assessed by imaging an optical flat and human donor tissue. The appropriate pixel/mm conversion factor to use for air (n = 1) was estimated by imaging calibration spheres. A semi-automatic algorithm was developed to extract the dimensions of the ciliary muscle from Visante images. Measurements were also made manually using Visante software calipers. Intraclass correlation coefficients (ICC) and Bland-Altman analyses were used to compare the methods. A multilevel model was fitted to estimate the variance of algorithm measurements that was due to differences within- and between-examiners in scleral spur selection versus biological variability. Results The optical flat and the human donor tissue were imaged and appeared without geometric distortions in binary file format. Bland-Altman analyses revealed that caliper measurements tended to underestimate ciliary muscle thickness at 3 mm posterior to the scleral spur in subjects with the thickest ciliary muscles (t = 3.6, p < 0.001). The percent variance due to within- or between-examiner differences in scleral spur selection was small (6%) compared to the variance due to biological differences across subjects (80%). Using the mean of measurements from three images achieved an estimated ICC of 0.85. Conclusions The semi-automatic algorithm successfully segmented the ciliary muscle for further measurement. Using the algorithm to follow the scleral curvature to locate more posterior measurements is critical to avoid underestimating thickness measurements. This semi-automatic algorithm will allow for repeatable, efficient, and masked ciliary muscle measurements in large datasets. PMID:21169877

  12. Cloud Engineering Principles and Technology Enablers for Medical Image Processing-as-a-Service

    PubMed Central

    Bao, Shunxing; Plassard, Andrew J.; Landman, Bennett A.; Gokhale, Aniruddha

    2017-01-01

    Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches since they can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. To that end, a cloud-based “medical image processing-as-a-service” offers promise in utilizing the ecosystem of Apache Hadoop, which is a flexible framework providing distributed, scalable, fault tolerant storage and parallel computational modules, and HBase, which is a NoSQL database built atop Hadoop’s distributed file system. Despite this promise, HBase’s load distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). This paper makes two contributions to address these concerns by describing key cloud engineering principles and technology enhancements we made to the Apache Hadoop ecosystem for medical imaging applications. First, we propose a row-key design for HBase, which is a necessary step that is driven by the hierarchical organization of imaging data. Second, we propose a novel data allocation policy within HBase to strongly enforce collocation of hierarchically related imaging data. The proposed enhancements accelerate data processing by minimizing network usage and localizing processing to machines where the data already exist. Moreover, our approach is amenable to the traditional scan, subject, and project-level analysis procedures, and is compatible with standard command line/scriptable image processing software. Experimental results for an illustrative sample of imaging data reveal that our new HBase policy results in a three-fold time improvement in conversion of classic DICOM to NIfTI file formats when compared with the default HBase region split policy, and nearly a six-fold improvement over a commonly available network file system (NFS) approach even for relatively small file sets. Moreover, file access latency is lower than network attached storage. PMID:28884169
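
    The core of the proposed design is a row key whose lexicographic order follows the imaging hierarchy, so hierarchically related slices sort and store together. A small sketch of such a key builder is shown below; the delimiter, padding widths, and table/column names are illustrative choices, not the exact design from the paper.

```python
def image_row_key(project, subject, session, scan, slice_index):
    """Build a hierarchical row key: rows for one project/subject/session/scan
    sort together, which keeps related slices collocated in HBase regions."""
    return "|".join([project, subject, session, scan, f"{slice_index:06d}"]).encode()

key = image_row_key("proj01", "subj0042", "sess01", "scan03", 17)
print(key)   # b'proj01|subj0042|sess01|scan03|000017'

# With a running HBase cluster and the happybase client, a slice could then be
# stored under that key, for example:
#   import happybase
#   table = happybase.Connection("hbase-master").table("images")
#   table.put(key, {b"raw:dicom": dicom_bytes})
```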

  13. Flying across Galaxy Clusters with Google Earth: additional imagery from SDSS co-added data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hao, Jiangang; Annis, James; /Fermilab

    2010-10-01

    Galaxy clusters are spectacular. We provide Google Earth-compatible imagery for the deep co-added images from the Sloan Digital Sky Survey and make it a tool for examining galaxy clusters. Google Earth (in sky mode) provides a highly interactive environment for visualizing the sky. By encoding the galaxy cluster information into a kml/kmz file, one can use Google Earth as a tool for examining galaxy clusters and fly across them freely. However, the resolution of the images provided by Google Earth is not very high. This is partially because the major imagery Google Earth uses is from the Sloan Digital Sky Survey (SDSS) (SDSS collaboration 2000) and the resolution has been reduced to speed up web transfer. To have higher resolution images, you need to add your own images in a way that Google Earth can understand. The SDSS co-added data are the co-addition of ~100 scans of images from SDSS stripe 82 (Annis et al. 2010). They provide the deepest images based on SDSS and reach as deep as about redshift 1.0. Based on the co-added images, we created color images as described by Lupton et al. (2004) and converted the color images to Google Earth-compatible images using wcs2kml (Brewer et al. 2007). The images are stored on a public server at Fermi National Accelerator Laboratory and can be accessed by the public. To view these images in Google Earth, you need to download a kmz file, which contains the links to the color images, and then open the kmz file with Google Earth. To meet different needs for resolution, we provide three kmz files corresponding to low, medium and high resolution images. We recommend the high resolution one as long as you have a broadband Internet connection, though you may choose to download any of them, depending on your own needs and Internet speed. After you open the downloaded kmz file with Google Earth (in sky mode), it takes about 5 minutes (depending on your Internet connection and the resolution of the images you want) for some initial images to load. Then, additional images corresponding to the region you are browsing will be loaded automatically. At that point you have access to all the co-added images, but you still do not have the galaxy cluster position information. In order to see the galaxy clusters, you need to download another kmz file that tells Google Earth where to find the galaxy clusters in the co-added data region. We provide a kmz file for a few galaxy clusters in the stripe 82 region which you can download and open with Google Earth. In the SDSS co-added region (stripe 82 region), the imagery from Google Earth itself is from the Digitized Sky Survey (2007), which is of very poor quality. Figure 1 and Figure 2 show screenshots of a cluster with and without the new co-added imagery in Google Earth; much more detail is revealed with the deep images.

  14. Astronomical Image Processing with Hadoop

    NASA Astrophysics Data System (ADS)

    Wiley, K.; Connolly, A.; Krughoff, S.; Gardner, J.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.

    2011-07-01

    In the coming decade astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. With a requirement that these images be analyzed in real time to identify moving sources such as potentially hazardous asteroids or transient objects such as supernovae, these data streams present many computational challenges. In the commercial world, new techniques that utilize cloud computing have been developed to handle massive data streams. In this paper we describe how cloud computing, and in particular the map-reduce paradigm, can be used in astronomical data processing. We will focus on our experience implementing a scalable image-processing pipeline for the SDSS database using Hadoop (http://hadoop.apache.org). This multi-terabyte imaging dataset approximates future surveys such as those which will be conducted with the LSST. Our pipeline performs image coaddition in which multiple partially overlapping images are registered, integrated and stitched into a single overarching image. We will first present our initial implementation, then describe several critical optimizations that have enabled us to achieve high performance, and finally describe how we are incorporating a large in-house existing image processing library into our Hadoop system. The optimizations involve prefiltering of the input to remove irrelevant images from consideration, grouping individual FITS files into larger, more efficient indexed files, and a hybrid system in which a relational database is used to determine the input images relevant to the task. The incorporation of an existing image processing library, written in C++, presented difficult challenges since Hadoop is programmed primarily in Java. We will describe how we achieved this integration and the sophisticated image processing routines that were made feasible as a result. We will end by briefly describing the longer term goals of our work, namely detection and classification of transient objects and automated object classification.
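
    The coaddition pipeline maps each registered input pixel onto a sky cell and reduces all contributions per cell into one output pixel. The toy, in-memory map/reduce below illustrates that data flow only; the real pipeline runs on Hadoop over FITS images, and the tiny arrays here are invented inputs.

```python
from collections import defaultdict

import numpy as np

def map_image(image, origin):
    """Map phase: emit (sky_cell, value) pairs for one registered image, where
    `origin` places the image's corner on the common sky grid."""
    for (row, col), value in np.ndenumerate(image):
        yield (origin[0] + row, origin[1] + col), value

def reduce_cell(values):
    """Reduce phase: coadd every contribution that landed in one sky cell."""
    return float(np.mean(values))

# Two 2x2 "images" offset by one column on the sky grid, so they partly overlap.
images = [(np.array([[1.0, 2.0], [3.0, 4.0]]), (0, 0)),
          (np.array([[3.0, 4.0], [5.0, 6.0]]), (0, 1))]

grouped = defaultdict(list)
for image, origin in images:
    for cell, value in map_image(image, origin):
        grouped[cell].append(value)

coadd = {cell: reduce_cell(vals) for cell, vals in grouped.items()}
print(coadd)
```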

  15. Gnuastro: GNU Astronomy Utilities

    NASA Astrophysics Data System (ADS)

    Akhlaghi, Mohammad

    2018-01-01

    Gnuastro (GNU Astronomy Utilities) manipulates and analyzes astronomical data. It is an official GNU package comprising a large collection of programs and C/C++ library functions. Command-line programs perform arithmetic operations on images, convert FITS images to common types like JPG or PDF, convolve an image with a given kernel or match kernels, perform cosmological calculations, crop parts of large images (possibly spread over multiple files), manipulate FITS extensions and keywords, and perform statistical operations. In addition, it contains programs to make catalogs from detection maps, add noise, make mock profiles with a variety of radial functions using Monte Carlo integration for their centers, match catalogs, and detect objects in an image, among many other operations. The command-line programs share the same basic command-line user interface for the comfort of both users and developers. Gnuastro is written to comply fully with the GNU coding standards and integrates well with all Unix-like operating systems. This enables astronomers to expect a fully familiar experience in the source code, building, installing and command-line user interaction that they have seen in all the other GNU software that they use. Gnuastro's extensive library is included for users who want to build their own unique programs.

  16. An application of digital network technology to medical image management.

    PubMed

    Chu, W K; Smith, C L; Wobig, R K; Hahn, F A

    1997-01-01

    With the advent of network technology, there is considerable interest within the medical community in managing the storage and distribution of medical images by digital means. Higher workflow efficiency leading to better patient care is one of the commonly cited outcomes [1,2]. However, due to the size of medical image files and the unique requirements in detail and resolution, medical image management poses special challenges. Storage requirements are usually large, and the implied expenses or investment costs put digital networking projects financially out of reach for many clinical institutions. New advances in network technology and telecommunication, in conjunction with the decreasing cost of computer devices, have made digital image management achievable. In our institution, we have recently completed a pilot project to distribute medical images both within the physical confines of the clinical enterprise and outside the medical center campus. The design concept and the configuration of a comprehensive digital image network are described in this report.

  17. 36 CFR 1237.28 - What special concerns apply to digital photographs?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... defects, evaluate the accuracy of finding aids, and verify file header information and file name integrity... sampling methods or more comprehensive verification systems (e.g., checksum programs), to evaluate image.... For permanent or unscheduled images descriptive elements must include: (1) An identification number...

  18. 36 CFR § 1237.28 - What special concerns apply to digital photographs?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... defects, evaluate the accuracy of finding aids, and verify file header information and file name integrity... sampling methods or more comprehensive verification systems (e.g., checksum programs), to evaluate image.... For permanent or unscheduled images descriptive elements must include: (1) An identification number...

  19. 36 CFR 1237.28 - What special concerns apply to digital photographs?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... defects, evaluate the accuracy of finding aids, and verify file header information and file name integrity... sampling methods or more comprehensive verification systems (e.g., checksum programs), to evaluate image.... For permanent or unscheduled images descriptive elements must include: (1) An identification number...

  20. 36 CFR 1237.28 - What special concerns apply to digital photographs?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... defects, evaluate the accuracy of finding aids, and verify file header information and file name integrity... sampling methods or more comprehensive verification systems (e.g., checksum programs), to evaluate image.... For permanent or unscheduled images descriptive elements must include: (1) An identification number...

  1. 36 CFR 1237.28 - What special concerns apply to digital photographs?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... defects, evaluate the accuracy of finding aids, and verify file header information and file name integrity... sampling methods or more comprehensive verification systems (e.g., checksum programs), to evaluate image.... For permanent or unscheduled images descriptive elements must include: (1) An identification number...

  2. Transcontinental communication and quantitative digital histopathology via the Internet; with special reference to prostate neoplasia

    PubMed Central

    Montironi, R; Thompson, D; Scarpelli, M; Bartels, H G; Hamilton, P W; Da Silva, V D; Sakr, W A; Weyn, B; Van Daele, A; Bartels, P H

    2002-01-01

    Objective: To describe practical experiences in the sharing of very large digital data bases of histopathological imagery via the Internet, by investigators working in Europe, North America, and South America. Materials: Experiences derived from medium power (sampling density 2.4 pixels/μm) and high power (6 pixels/μm) imagery of prostatic tissues, skin shave biopsies, breast lesions, endometrial sections, and colonic lesions. Most of the data included in this paper were from the prostate. In particular, 1168 histological images of normal prostate, high grade prostatic intraepithelial neoplasia (PIN), and prostate cancer (PCa) were recorded, archived in an image format developed at the Optical Sciences Center (OSC), University of Arizona, and transmitted to Ancona, Italy, as JPEG (joint photographic experts group) files. Images were downloaded for review using the Internet application FTP (file transfer protocol). The images were then sent from Ancona to other laboratories for additional histopathological review and quantitative analyses. They were viewed using Adobe Photoshop, Paint Shop Pro, and Imaging for Windows. Full resolution imagery was used for karyometric analysis, whereas histometric analyses were also carried out on JPEG imagery. Results: The three applications of the telecommunication system were remote histopathological assessment, remote data acquisition, and selection of material. Typical data volumes for each project ranged from 120 megabytes to one gigabyte, and transmission times were usually less than one hour. There were only negligible transmission errors and no problems in efficient communication, although real-time communication was the exception because of the time zone differences. As far as the remote histopathological assessment of the prostate was concerned, agreement between the pathologist's electronic diagnosis and the diagnostic label applied to the images by the recording scientist was present in 96.6% of instances. When these images were forwarded to two pathologists, the level of concordance with the reviewing pathologist who originally downloaded the files from Tucson was as high as 97.2% and 98.0%. Initial results of studies by researchers belonging to our group but located in other laboratories showed the feasibility of performing quantitative analyses on the same images. Conclusions: These experiences show that diagnostic teleconsultation and quantitative image analyses via the Internet are not only feasible but practical, and allow close collaboration between researchers widely separated by geographical distance and analytical resources. PMID:12037030

  3. Knowledge representation for fuzzy inference aided medical image interpretation.

    PubMed

    Gal, Norbert; Stoicu-Tivadar, Vasile

    2012-01-01

    Knowledge defines how an automated system transforms data into information. This paper suggests a representation method of medical imaging knowledge using fuzzy inference systems coded in XML files. The imaging knowledge incorporates features of the investigated objects in linguistic form and inference rules that can transform the linguistic data into information about a possible diagnosis. A fuzzy inference system is used to model the vagueness of the linguistic medical imaging terms. XML files are used to facilitate easy manipulation and deployment of the knowledge into the imaging software. Preliminary results are presented.
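
    To make the idea of XML-encoded fuzzy imaging knowledge concrete, here is a small sketch that parses a hypothetical rule file and evaluates a triangular membership function for one linguistic term; the element and attribute names are invented for the example and are not the authors' schema.

        # Sketch: read a (hypothetical) XML description of a fuzzy linguistic term
        # and evaluate its triangular membership function for a measured feature.
        import xml.etree.ElementTree as ET

        RULE_XML = """
        <knowledge>
          <term feature="lesion_diameter_mm" label="large" a="8" b="15" c="25"/>
        </knowledge>
        """

        def triangular(x, a, b, c):
            """Triangular membership function with support [a, c] and peak at b."""
            if x <= a or x >= c:
                return 0.0
            if x == b:
                return 1.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def membership(xml_text, feature, value):
            root = ET.fromstring(xml_text)
            for term in root.iter("term"):
                if term.get("feature") == feature:
                    a, b, c = (float(term.get(k)) for k in ("a", "b", "c"))
                    return term.get("label"), triangular(value, a, b, c)
            return None, 0.0

        label, degree = membership(RULE_XML, "lesion_diameter_mm", 12.0)
        print(f"degree of membership in '{label}': {degree:.2f}")   # about 0.57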

  4. Application of whole slide image markup and annotation for pathologist knowledge capture.

    PubMed

    Campbell, Walter S; Foster, Kirk W; Hinrichs, Steven H

    2013-01-01

    The ability to transfer image markup and annotation data from one scanned image of a slide to a newly acquired image of the same slide within a single vendor platform was investigated. The goal was to study the ability to use image markup and annotation data files as a mechanism to capture and retain pathologist knowledge without retaining the entire whole slide image (WSI) file. Accepted mathematical principles were investigated as a method to overcome variations in scans of the same glass slide and to accurately associate image markup and annotation data across different WSI of the same glass slide. Trilateration was used to link fixed points within the image and slide to the placement of markups and annotations of the image in a metadata file. Variation in markup and annotation placement between WSI of the same glass slide was reduced from over 80 μ to less than 4 μ in the x-axis and from 17 μ to 6 μ in the y-axis (P < 0.025). This methodology allows for the creation of a highly reproducible image library of histopathology images and interpretations for educational and research use.
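
    As an illustration of the trilateration step described above, the sketch below recovers a 2D annotation point in a new scan from its stored distances to three fixed landmarks; this is a generic textbook formulation of 2D trilateration, not the authors' implementation, and the coordinates are invented.

        # Sketch: place an annotation in a new whole slide image given its distances
        # to three fixed landmark points (textbook 2D trilateration).
        import math

        def trilaterate(p1, p2, p3, d1, d2, d3):
            """Return (x, y) whose distances to p1, p2, p3 are d1, d2, d3."""
            (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
            # Subtracting the circle equations pairwise gives a linear 2x2 system.
            a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
            a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
            b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
            b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
            det = a11 * a22 - a12 * a21
            return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

        # Landmarks located in the new scan (pixel coordinates) and the distances
        # stored with the annotation when the first scan was marked up.
        landmarks = [(100.0, 200.0), (5000.0, 250.0), (2500.0, 4800.0)]
        distances = [math.dist((1200.0, 900.0), lm) for lm in landmarks]  # simulated stored values
        print(trilaterate(*landmarks, *distances))  # approximately (1200.0, 900.0)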

  5. Application of whole slide image markup and annotation for pathologist knowledge capture

    PubMed Central

    Campbell, Walter S.; Foster, Kirk W.; Hinrichs, Steven H.

    2013-01-01

    Objective: The ability to transfer image markup and annotation data from one scanned image of a slide to a newly acquired image of the same slide within a single vendor platform was investigated. The goal was to study the ability to use image markup and annotation data files as a mechanism to capture and retain pathologist knowledge without retaining the entire whole slide image (WSI) file. Methods: Accepted mathematical principles were investigated as a method to overcome variations in scans of the same glass slide and to accurately associate image markup and annotation data across different WSI of the same glass slide. Trilateration was used to link fixed points within the image and slide to the placement of markups and annotations of the image in a metadata file. Results: Variation in markup and annotation placement between WSI of the same glass slide was reduced from over 80 μ to less than 4 μ in the x-axis and from 17 μ to 6 μ in the y-axis (P < 0.025). Conclusion: This methodology allows for the creation of a highly reproducible image library of histopathology images and interpretations for educational and research use. PMID:23599902

  6. Likelihood Ratio Test Polarimetric SAR Ship Detection Application

    DTIC Science & Technology

    2005-12-01

    menu. Under the Matlab menu, the user can export an area of an image to the Matlab MAT file format, as well as call RGB image and Pauli...must specify various parameters such as the area of the image to analyze. Export Image Area to Matlab (PolGASP & COASP) Generates a Matlab file...

  7. Cancer Biomarkers | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"175","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Cancer Biomarkers Research Group Homepage Logo","field_file_image_title_text[und][0][value]":"Cancer Biomarkers Research Group Homepage Logo","field_folder[und]":"15"},"type":"media","attributes":{"alt":"Cancer Biomarkers Research Group Homepage Logo","title":"Cancer

  8. Gastrointestinal and Other Cancers | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"181","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Gastrointestinal and Other Cancers Research Group Homepage Logo","field_file_image_title_text[und][0][value]":"Gastrointestinal and Other Cancers Research Group Homepage Logo","field_folder[und]":"15"},"type":"media","attributes":{"alt":"Gastrointestinal and Other

  9. Biometry | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"66","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Biometry Research Group Homepage Logo","field_file_image_title_text[und][0][value]":"Biometry Research Group Homepage Logo","field_folder[und]":"15"},"type":"media","attributes":{"alt":"Biometry Research Group Homepage Logo","title":"Biometry Research Group Homepage

  10. Slide Set: Reproducible image analysis and batch processing with ImageJ.

    PubMed

    Nanes, Benjamin A

    2015-11-01

    Most imaging studies in the biological sciences rely on analyses that are relatively simple. However, manual repetition of analysis tasks across multiple regions in many images can complicate even the simplest analysis, making record keeping difficult, increasing the potential for error, and limiting reproducibility. While fully automated solutions are necessary for very large data sets, they are sometimes impractical for the small- and medium-sized data sets common in biology. Here we present the Slide Set plugin for ImageJ, which provides a framework for reproducible image analysis and batch processing. Slide Set organizes data into tables, associating image files with regions of interest and other relevant information. Analysis commands are automatically repeated over each image in the data set, and multiple commands can be chained together for more complex analysis tasks. All analysis parameters are saved, ensuring transparency and reproducibility. Slide Set includes a variety of built-in analysis commands and can be easily extended to automate other ImageJ plugins, reducing the manual repetition of image analysis without the set-up effort or programming expertise required for a fully automated solution.
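
    The table-driven pattern described above can be mimicked outside ImageJ. The sketch below, a generic stand-in rather than the Slide Set plugin itself, walks a CSV table that pairs image files with rectangular regions of interest, applies one measurement to every row, and stores the parameters with the results so the run can be reproduced; it assumes the Pillow package and a hypothetical regions.csv layout.

        # Generic sketch of table-driven batch analysis (not the Slide Set plugin):
        # each CSV row pairs an image file with a rectangular ROI, the same
        # measurement is applied to every row, and the parameters used are saved
        # with the results for reproducibility.  Assumes the Pillow package.
        import csv, json
        from PIL import Image, ImageStat

        def mean_intensity(image_path, x, y, w, h):
            with Image.open(image_path) as im:
                region = im.convert("L").crop((x, y, x + w, y + h))
                return ImageStat.Stat(region).mean[0]

        def run_batch(table_csv, out_json):
            params = {"command": "mean_intensity", "convert": "8-bit grayscale"}
            results = []
            with open(table_csv, newline="") as fh:
                for row in csv.DictReader(fh):       # columns: image, x, y, w, h
                    x, y, w, h = (int(row[k]) for k in ("x", "y", "w", "h"))
                    results.append({"image": row["image"], "roi": [x, y, w, h],
                                    "mean_intensity": mean_intensity(row["image"], x, y, w, h)})
            with open(out_json, "w") as fh:
                json.dump({"parameters": params, "results": results}, fh, indent=2)

        # run_batch("regions.csv", "measurements.json")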

  11. 5 CFR 1201.14 - Electronic filing procedures.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...-Appeal Online, in which case service is governed by paragraph (j) of this section, or by non-electronic... (PDF), and image files (files created by scanning). A list of formats allowed can be found at e-Appeal... representatives of the appeals in which they were filed. (j) Service of electronic pleadings and MSPB documents...

  12. 5 CFR 1201.14 - Electronic filing procedures.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...-Appeal Online, in which case service is governed by paragraph (j) of this section, or by non-electronic... (PDF), and image files (files created by scanning). A list of formats allowed can be found at e-Appeal... representatives of the appeals in which they were filed. (j) Service of electronic pleadings and MSPB documents...

  13. 5 CFR 1201.14 - Electronic filing procedures.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...-Appeal Online, in which case service is governed by paragraph (j) of this section, or by non-electronic... (PDF), and image files (files created by scanning). A list of formats allowed can be found at e-Appeal... representatives of the appeals in which they were filed. (j) Service of electronic pleadings and MSPB documents...

  14. 5 CFR 1201.14 - Electronic filing procedures.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...-Appeal Online, in which case service is governed by paragraph (j) of this section, or by non-electronic... (PDF), and image files (files created by scanning). A list of formats allowed can be found at e-Appeal... representatives of the appeals in which they were filed. (j) Service of electronic pleadings and MSPB documents...

  15. Chapter 3: Tabular Data and Graphical Images in Support of the U.S. Geological Survey National Oil and Gas Assessment - Western Gulf Province, Smackover-Austin-Eagle Ford Composite Total Petroleum System (504702)

    USGS Publications Warehouse

    Klett, T.R.; Le, P.A.

    2006-01-01

    This chapter describes data used in support of the process being applied by the U.S. Geological Survey (USGS) National Oil and Gas Assessment (NOGA) project. Digital tabular data used in this report, and archival data that permit the user to perform further analyses, are available elsewhere on this CD-ROM. The data can be imported into computers and software without the reader having to transcribe them from the Portable Document Format (.pdf) files of the text. Because of the number and variety of platforms and software available, graphical images are provided as .pdf files and tabular data are provided in raw form as tab-delimited text files (.tab files).

  16. Java Library for Input and Output of Image Data and Metadata

    NASA Technical Reports Server (NTRS)

    Deen, Robert; Levoe, Steven

    2003-01-01

    A Java-language library supports input and output (I/O) of image data and metadata (label data) in the format of the Video Image Communication and Retrieval (VICAR) image-processing software and in several similar formats, including a subset of the Planetary Data System (PDS) image file format. The library does the following: It provides a low-level, direct-access layer, enabling an application subprogram to read and write specific image files, lines, or pixels, and to manipulate metadata directly. Two coding/decoding subprograms ("codecs" for short) based on the Java Advanced Imaging (JAI) software provide access to VICAR and PDS images in a file-format-independent manner. The VICAR and PDS codecs enable any program that conforms to the specification of the JAI codec to use VICAR or PDS images automatically, without specific knowledge of the VICAR or PDS format. The library also includes Image I/O plug-in subprograms for the VICAR and PDS formats. Application programs that conform to the Image I/O specification of Java version 1.4 can utilize any image format for which such a plug-in subprogram exists, without specific knowledge of the format itself. Like the aforementioned codecs, the VICAR and PDS Image I/O plug-in subprograms support reading and writing of metadata.

  17. Sally Ride EarthKAM - Automated Image Geo-Referencing Using Google Earth Web Plug-In

    NASA Technical Reports Server (NTRS)

    Andres, Paul M.; Lazar, Dennis K.; Thames, Robert Q.

    2013-01-01

    Sally Ride EarthKAM is an educational program funded by NASA that aims to provide the public the ability to picture Earth from the perspective of the International Space Station (ISS). A computer-controlled camera is mounted on the ISS in a nadir-pointing window; however, timing limitations in the system cause inaccurate positional metadata. Manually correcting images within an orbit allows the positional metadata to be improved using mathematical regressions. The manual correction process is time-consuming and thus, unfeasible for a large number of images. The standard Google Earth program allows for the importing of KML (keyhole markup language) files that previously were created. These KML file-based overlays could then be manually manipulated as image overlays, saved, and then uploaded to the project server where they are parsed and the metadata in the database is updated. The new interface eliminates the need to save, download, open, re-save, and upload the KML files. Everything is processed on the Web, and all manipulations go directly into the database. Administrators also have the control to discard any single correction that was made and validate a correction. This program streamlines a process that previously required several critical steps and was probably too complex for the average user to complete successfully. The new process is theoretically simple enough for members of the public to make use of and contribute to the success of the Sally Ride EarthKAM project. Using the Google Earth Web plug-in, EarthKAM images, and associated metadata, this software allows users to interactively manipulate an EarthKAM image overlay, and update and improve the associated metadata. The Web interface uses the Google Earth JavaScript API along with PHP-PostgreSQL to present the user the same interface capabilities without leaving the Web. The simpler graphical user interface will allow the public to participate directly and meaningfully with EarthKAM. The use of similar techniques is being investigated to place ground-based observations in a Google Mars environment, allowing the MSL (Mars Science Laboratory) Science Team a means to visualize the rover and its environment.
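
    For context on the overlays mentioned above, a KML GroundOverlay is the standard element for draping an image over the globe; the snippet below writes a minimal overlay file for one image. The coordinates and file names are illustrative assumptions, and the real EarthKAM system stores corrections in a database rather than in standalone KML files.

        # Sketch: build a minimal KML GroundOverlay for one EarthKAM-style image.
        # Values and file names are illustrative only.
        KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
        <kml xmlns="http://www.opengis.net/kml/2.2">
          <GroundOverlay>
            <name>{name}</name>
            <Icon><href>{image_href}</href></Icon>
            <LatLonBox>
              <north>{north}</north><south>{south}</south>
              <east>{east}</east><west>{west}</west>
              <rotation>{rotation}</rotation>
            </LatLonBox>
          </GroundOverlay>
        </kml>
        """

        def write_overlay(path, name, image_href, north, south, east, west, rotation=0.0):
            with open(path, "w", encoding="utf-8") as fh:
                fh.write(KML_TEMPLATE.format(name=name, image_href=image_href,
                                             north=north, south=south,
                                             east=east, west=west, rotation=rotation))

        write_overlay("overlay.kml", "EarthKAM frame 12345", "frame_12345.jpg",
                      north=36.2, south=34.9, east=-117.1, west=-118.6, rotation=8.5)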

  18. Attenuated total internal reflection infrared microspectroscopic imaging using a large-radius germanium internal reflection element and a linear array detector.

    PubMed

    Patterson, Brian M; Havrilla, George J

    2006-11-01

    The number of techniques and instruments available for Fourier transform infrared (FT-IR) microspectroscopic imaging has grown significantly over the past few years. Attenuated total internal reflectance (ATR) FT-IR microspectroscopy reduces sample preparation time and has simplified the analysis of many difficult samples. FT-IR imaging has become a powerful analytical tool using either a focal plane array or a linear array detector, especially when coupled with a chemometric analysis package. The field of view of the ATR-IR microspectroscopic imaging area can be greatly increased from 300 x 300 μm to 2500 x 2500 μm using a larger internal reflection element of 12.5 mm radius instead of the typical 1.5 mm radius. This gives an area increase of 70x before aberrant effects become too great. Parameters evaluated include the change in penetration depth as a function of beam displacement, measurements of the active area, magnification factor, and change in spatial resolution over the imaging area. Drawbacks such as large file size will also be discussed. This technique has been successfully applied to the FT-IR imaging of polydimethylsiloxane foam cross-sections, latent human fingerprints, and a model inorganic mixture, which demonstrates the usefulness of the method for pharmaceuticals.

  19. Possible costs associated with investigating and mitigating geologic hazards in rural areas of western San Mateo County, California with a section on using the USGS website to determine the cost of developing property for residences in rural parts of San Mateo County, California

    USGS Publications Warehouse

    Brabb, Earl E.; Roberts, Sebastian; Cotton, William R.; Kropp, Alan L.; Wright, Robert H.; Zinn, Erik N.; Digital database by Roberts, Sebastian; Mills, Suzanne K.; Barnes, Jason B.; Marsolek, Joanna E.

    2000-01-01

    This publication consists of a digital map database on a geohazards web site, http://kaibab.wr.usgs.gov/geohazweb/intro.htm, this text, and 43 digital map images available for downloading at this site. The report is stored as several digital files, in ARC export (uncompressed) format for the database, and Postscript and PDF formats for the map images. Several of the source data layers for the images have already been released in other publications by the USGS and are available for downloading on the Internet. These source layers are not included in this digital database, but rather a reference is given for the web site where the data can be found in digital format. The exported ARC coverages and grids lie in UTM zone 10 projection. The pamphlet, which only describes the content and character of the digital map database, is included as Postscript, PDF, and ASCII text files and is also available on paper as USGS Open-File Report 00-127. The full versatility of the spatial database is realized by importing the ARC export files into ARC/INFO or an equivalent GIS. Other GIS packages, including MapInfo and ARCVIEW, can also use the ARC export files. The Postscript map image can be used for viewing or plotting in computer systems with sufficient capacity, and the considerably smaller PDF image files can be viewed or plotted in full or in part from Adobe ACROBAT software running on Macintosh, PC, or UNIX platforms.

  20. VizieR Online Data Catalog: BzJK observations around radio galaxies (Galametz+, 2009)

    NASA Astrophysics Data System (ADS)

    Galametz, A.; De Breuck, C.; Vernet, J.; Stern, D.; Rettura, A.; Marmo, C.; Omont, A.; Allen, M.; Seymour, N.

    2010-02-01

    We imaged the two targets using the Bessel B-band filter of the Large Format Camera (LFC) on the Palomar 5m Hale Telescope. We imaged the radio galaxy fields using the z-band filter of Palomar/LFC. In February 2005, we observed 7C 1751+6809 for 60-min under photometric conditions. In August 2005, we observed 7C 1756+6520 for 135-min but in non-photometric conditions. The tables provide the B, z, J and Ks magnitudes and coordinates of the pBzK* galaxies (red passively evolving candidates selected by BzK=(z-K)-(B-z)<-0.2 and (z-K)>2.2) for both fields. The B and z bands were obtained using the Large Format Camera (LFC) on the Palomar 5m Hale Telescope, and the J and Ks bands using Wide-field Infrared Camera (WIRCAM) of the Canada-France-Hawaii Telescope (CFHT). (2 data files).
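
    The colour cut quoted above translates directly into code; a one-function sketch, with magnitudes assumed to be in the same system as the catalogue, is given below.

        # Passively evolving BzK candidate (pBzK*) selection, using the cuts quoted
        # in the abstract: BzK = (z - K) - (B - z) < -0.2 and (z - K) > 2.2.
        def is_pbzk(B, z, K):
            bzk = (z - K) - (B - z)
            return bzk < -0.2 and (z - K) > 2.2

        # Example: B = 26.5, z = 23.4, K = 20.9  ->  BzK = 2.5 - 3.1 = -0.6, z - K = 2.5
        print(is_pbzk(26.5, 23.4, 20.9))  # True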

  1. Using All-Sky Imaging to Improve Telescope Scheduling (Abstract)

    NASA Astrophysics Data System (ADS)

    Cole, G. M.

    2017-12-01

    (Abstract only) Automated scheduling makes it possible for a small telescope to observe a large number of targets in a single night. But when used in areas with less-than-perfect sky conditions, such automation can lead to large numbers of observations of clouds and haze. This paper describes the development of a "sky-aware" telescope automation system that integrates the data flow from an SBIG AllSky340c camera with an enhanced dispatch scheduler to make optimum use of the available observing conditions for two highly instrumented backyard telescopes. Using the minute-by-minute time-series image stream and a self-maintained reference database, the software maintains a file of sky brightness, transparency, stability, and forecasted visibility at several hundred grid positions. The scheduling software uses this information in real time to exclude targets obscured by clouds and to select the best observing task, taking into account the requirements and limits of each instrument.
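
    The dispatch step described above amounts to filtering and ranking targets against the latest per-grid-cell sky measurements. The sketch below shows that logic with invented data structures and thresholds; the actual system's file formats and scoring are not given in the abstract.

        # Sketch of a sky-aware dispatch step: drop targets whose sky grid cell is
        # currently too bright or too opaque, then pick the highest-priority survivor.
        # The grid, thresholds and target fields are invented for illustration.
        def pick_next_target(targets, sky_grid, max_brightness=19.0, min_transparency=0.6):
            def ok(t):
                cell = sky_grid[t["grid_cell"]]
                return (cell["transparency"] >= min_transparency and
                        cell["brightness"] <= max_brightness)
            candidates = [t for t in targets if ok(t)]
            return max(candidates, key=lambda t: t["priority"], default=None)

        sky_grid = {7: {"transparency": 0.85, "brightness": 18.2},
                    9: {"transparency": 0.30, "brightness": 17.0}}   # cell 9 is cloudy
        targets = [{"name": "V1432 Aql", "grid_cell": 9, "priority": 5},
                   {"name": "SS Cyg",    "grid_cell": 7, "priority": 3}]
        print(pick_next_target(targets, sky_grid))   # SS Cyg; the clouded-out target is skipped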

  2. Breast and Gynecologic Cancer | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"184","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Breast and Gynecologic Cancer Research Group Homepage Logo","field_file_image_title_text[und][0][value]":"Breast and Gynecologic Cancer Research Group Homepage Logo","field_folder[und]":"15"},"type":"media","attributes":{"alt":"Breast and Gynecologic Cancer Research

  3. Lung and Upper Aerodigestive Cancer | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"180","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Lung and Upper Aerodigestive Cancer Research Group Homepage Logo","field_file_image_title_text[und][0][value]":"Lung and Upper Aerodigestive Cancer Research Group Homepage Logo","field_folder[und]":"15"},"type":"media","attributes":{"alt":"Lung and Upper Aerodigestive

  4. Visualization and manipulating the image of a formal data structure (FDS)-based database

    NASA Astrophysics Data System (ADS)

    Verdiesen, Franc; de Hoop, Sylvia; Molenaar, Martien

    1994-08-01

    A vector map is a terrain representation with a vector-structured geometry. Molenaar formulated an object-oriented formal data structure (FDS) for 3D single-valued vector maps. This FDS is implemented in a database (Oracle). In this study we describe a methodology for visualizing an FDS-based database and manipulating the image. A data set retrieved by querying the database is converted into an import file for a drawing application. An objective of this study is to let an end-user alter and add terrain objects in the image. The drawing application creates an export file, which is compared with the import file. Differences between these files result in updates to the database, which involve consistency checks. In this study Autocad is used for visualizing and manipulating the image of the data set. A computer program has been written for the data exchange and conversion between Oracle and Autocad. The data structure of the FDS is compared to the data structure of Autocad, and the FDS data are converted into an equivalent Autocad structure.

  5. Image Viewer using Digital Imaging and Communications in Medicine (DICOM)

    NASA Astrophysics Data System (ADS)

    Baraskar, Trupti N.

    2010-11-01

    Digital Imaging and Communications in Medicine (DICOM) is a standard for handling, storing, printing, and transmitting information in medical imaging. The National Electrical Manufacturers Association holds the copyright to the standard, which was developed by the DICOM Standards Committee. Other image viewers cannot store the image details together with the patient's information, so the image may become separated from those details; the DICOM file format, by contrast, stores the patient's information along with the image details. The main objective is to develop a DICOM image viewer. The viewer opens .dcm (DICOM) image files and incorporates additional features such as zoom in, zoom out, black-and-white inversion, magnifier, blur, horizontal and vertical flipping, sharpening, contrast, brightness, and a .gif converter.
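
    A small sketch of reading a .dcm file so that the pixel data and the patient-level attributes stay together, as the abstract emphasises. It assumes the third-party pydicom and numpy packages, which are not mentioned in the paper, and a placeholder file name.

        # Sketch: open a DICOM file, show that patient attributes and pixel data live
        # in the same object, and apply a simple black/white inversion.
        import numpy as np
        import pydicom

        ds = pydicom.dcmread("study.dcm")                     # placeholder file name
        print(ds.get("PatientName"), ds.get("StudyDate"), ds.get("Modality"))

        pixels = ds.pixel_array.astype(np.float32)            # image data from the same file
        inverted = pixels.max() - pixels                      # simple B/W inversion
        print("original range:", pixels.min(), pixels.max(),
              "inverted range:", inverted.min(), inverted.max())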

  6. The PDS-based Data Processing, Archiving and Management Procedures in Chang'e Mission

    NASA Astrophysics Data System (ADS)

    Zhang, Z. B.; Li, C.; Zhang, H.; Zhang, P.; Chen, W.

    2017-12-01

    PDS is adopted as the standard format for scientific data and the foundation of all data-related procedures in the Chang'e mission. Unlike the geographically distributed Planetary Data System, all procedures of data processing, archiving, management and distribution are carried out in a centralized manner at the headquarters of the Ground Research and Application System of the Chang'e mission. The raw data acquired by the ground stations are transmitted to and processed by the data preprocessing subsystem (DPS) for the production of PDS-compliant Level 0-Level 2 data products using established algorithms, with each product file described by an attached label; all products with the same orbit number are then put together into a scheduled archiving task along with an XML archive list file recording the properties of all product files, such as file name and file size. After receiving the archive request from the DPS, the data management subsystem (DMS) is invoked to parse the XML list file, validate all the claimed files and their compliance with PDS using a prebuilt data dictionary, and then extract the metadata of each data product file from its PDS label and the fields of its normalized filename. Various requirements of data management, retrieval, distribution and application can be met through the flexible combination of the rich metadata enabled by the PDS. In the forthcoming CE-5 mission, the design of the data structures and procedures will be updated from PDS version 3, used in the previous CE-1, CE-2 and CE-3 missions, to the new version 4. The main changes will be: 1) a dedicated detached XML label will be used to describe the scientific data acquired by each of the 4 instruments carried, and the XML parsing framework used in archive list validation will be reused for the label after some necessary adjustments; 2) image data acquired by the panorama camera, landing camera and lunar mineralogical spectrometer will use an Array_2D_Image/Array_3D_Image object to store the image data and a Table_Character object to store the image frame header, while the tabulated data acquired by the lunar regolith penetrating radar will use a Table_Binary object to store measurements.
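
    The archive-list validation step described above (parse the XML list, then check that each claimed product file exists with its declared size) can be sketched as follows; the element and attribute names are hypothetical, since the actual Chang'e list schema is not given in the abstract.

        # Sketch of archive-list validation: parse a (hypothetical) XML list of
        # product files and verify each file exists with the declared size.
        import os
        import xml.etree.ElementTree as ET

        def validate_archive_list(list_path):
            problems = []
            root = ET.parse(list_path).getroot()
            for item in root.iter("product"):
                name, size = item.get("file_name"), int(item.get("file_size"))
                path = os.path.join(os.path.dirname(list_path), name)
                if not os.path.isfile(path):
                    problems.append(f"missing file: {name}")
                elif os.path.getsize(path) != size:
                    problems.append(f"size mismatch: {name}")
            return problems

        # for p in validate_archive_list("archive_task_0123.xml"):
        #     print(p)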

  7. Analysis towards VMEM File of a Suspended Virtual Machine

    NASA Astrophysics Data System (ADS)

    Song, Zheng; Jin, Bo; Sun, Yongqing

    With the popularity of virtual machines, forensic investigators are challenged with more complicated situations, among which discovering evidence in virtualized environments is of significant importance. This paper mainly analyzes the .vmem file of VMware Workstation, which stores all of a guest's pseudo-physical memory as an image. The internal structure of the .vmem file is studied and disclosed, and key information about the processes and threads of a suspended virtual machine is revealed. Further investigation into the Windows XP SP3 heap contents is conducted and a proof-of-concept tool is provided. Different methods to obtain forensic memory images are introduced, and their advantages and limits are analyzed. We conclude with an outlook.

  8. User's manual for SEDCALC, a computer program for computation of suspended-sediment discharge

    USGS Publications Warehouse

    Koltun, G.F.; Gray, John R.; McElhone, T.J.

    1994-01-01

    Sediment-Record Calculations (SEDCALC), a menu-driven set of interactive computer programs, was developed to facilitate computation of suspended-sediment records. The programs comprising SEDCALC were developed independently in several District offices of the U.S. Geological Survey (USGS) to minimize the intensive labor associated with various aspects of sediment-record computations. SEDCALC operates on suspended-sediment-concentration data stored in American Standard Code for Information Interchange (ASCII) files in a predefined card-image format. Program options within SEDCALC can be used to assist in creating and editing the card-image files, as well as to reformat card-image files to and from formats used by the USGS Water-Quality System. SEDCALC provides options for creating card-image files containing time series of equal-interval suspended-sediment concentrations from 1. digitized suspended-sediment-concentration traces, 2. linear interpolation between log-transformed instantaneous suspended-sediment-concentration data stored at unequal time intervals, and 3. nonlinear interpolation between log-transformed instantaneous suspended-sediment-concentration data stored at unequal time intervals. Suspended-sediment discharge can be computed from the streamflow and suspended-sediment-concentration data or by application of transport relations derived by regressing log-transformed instantaneous streamflows on log-transformed instantaneous suspended-sediment concentrations or discharges. The computed suspended-sediment discharge data are stored in card-image files that can be either directly imported to the USGS Automated Data Processing System or used to generate plots by means of other SEDCALC options.
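
    Option 2 above, linear interpolation between log-transformed instantaneous concentrations recorded at unequal time intervals, corresponds to the short routine below; the times and concentrations are invented example values and the routine illustrates the method rather than reproducing SEDCALC's code.

        # Sketch: produce an equal-interval suspended-sediment-concentration series
        # by interpolating linearly between log-transformed instantaneous samples
        # taken at unequal times.  Values are invented for illustration.
        import math
        from bisect import bisect_right

        def log_linear_series(sample_times, sample_conc, start, stop, step):
            """sample_times strictly increasing; returns concentrations at start, start+step, ..."""
            log_c = [math.log10(c) for c in sample_conc]
            out = []
            t = start
            while t <= stop:
                i = bisect_right(sample_times, t) - 1
                i = min(max(i, 0), len(sample_times) - 2)
                t0, t1 = sample_times[i], sample_times[i + 1]
                frac = (t - t0) / (t1 - t0)
                out.append(10 ** (log_c[i] + frac * (log_c[i + 1] - log_c[i])))
                t += step
            return out

        # Samples at 0.0, 2.5 and 6.0 hours; interpolate onto a 1-hour series.
        print(log_linear_series([0.0, 2.5, 6.0], [40.0, 320.0, 85.0], 0.0, 6.0, 1.0))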

  9. Trueness and precision of digital impressions obtained using an intraoral scanner with different head size in the partially edentulous mandible.

    PubMed

    Hayama, Hironari; Fueki, Kenji; Wadachi, Juro; Wakabayashi, Noriyuki

    2018-03-01

    It remains unclear whether digital impressions obtained using an intraoral scanner are sufficiently accurate for use in fabrication of removable partial dentures. We therefore compared the trueness and precision between conventional and digital impressions in the partially edentulous mandible. Mandibular Kennedy Class I and III models with soft silicone simulated-mucosa placed on the residual edentulous ridge were used. The reference models were converted to standard triangulated language (STL) file format using an extraoral scanner. Digital impressions were obtained using an intraoral scanner with a large or small scanning head, and converted to STL files. For conventional impressions, pressure impressions of the reference models were made and working casts fabricated using modified dental stone; these were converted to STL file format using an extraoral scanner. Conversion to STL file format was performed 5 times for each method. Trueness and precision were evaluated by deviation analysis using three-dimensional image processing software. Digital impressions had superior trueness (54-108μm), but inferior precision (100-121μm) compared to conventional impressions (trueness 122-157μm, precision 52-119μm). The larger intraoral scanning head showed better trueness and precision than the smaller head, and on average required fewer scanned images of digital impressions than the smaller head (p<0.05). On the color map, the deviation distribution tended to differ between the conventional and digital impressions. Digital impressions are partially comparable to conventional impressions in terms of accuracy; the use of a larger scanning head may improve the accuracy for removable partial denture fabrication. Copyright © 2018 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.

  10. Chapter 6. Tabular data and graphical images in support of the U.S. Geological Survey National Oil and Gas Assessment-East Texas basin and Louisiana-Mississippi salt basins provinces, Jurassic Smackover interior salt basins total petroleum system (504902), Travis Peak and Hosston formations.

    USGS Publications Warehouse


    2006-01-01

    This chapter describes data used in support of the process being applied by the U.S. Geological Survey (USGS) National Oil and Gas Assessment (NOGA) project. Digital tabular data used in this report, and archival data that permit the user to perform further analyses, are available elsewhere on the CD-ROM. The data can be imported into computers and software without the reader having to transcribe them from the Portable Document Format (.pdf) files of the text. Because of the number and variety of platforms and software available, graphical images are provided as .pdf files and tabular data are provided in raw form as tab-delimited text files (.tab files).

  11. Chapter 3. Tabular data and graphical images in support of the U.S. Geological Survey National Oil and Gas Assessment--East Texas basin and Louisiana-Mississippi salt basins provinces, Jurassic Smackover Interior salt basins total petroleum system (504902), Cotton Valley group.

    USGS Publications Warehouse

    Klett, T.R.; Le, P.A.

    2006-01-01

    This chapter describes data used in support of the process being applied by the U.S. Geological Survey (USGS) National Oil and Gas Assessment (NOGA) project. Digital tabular data used in this report, and archival data that permit the user to perform further analyses, are available elsewhere on the CD-ROM. The data can be imported into computers and software without the reader having to transcribe them from the Portable Document Format (.pdf) files of the text. Because of the number and variety of platforms and software available, graphical images are provided as .pdf files and tabular data are provided in raw form as tab-delimited text files (.tab files).

  12. MPST Software: MoonKommand

    NASA Technical Reports Server (NTRS)

    Kwok, John H.; Call, Jared A.; Khanampornpan, Teerapat

    2012-01-01

    This software automatically processes MoonKAM camera control files (CCFs) delivered by Sally Ride Science (SRS) into uplink products for the GRAIL-A and GRAIL-B spacecraft as part of an education and public outreach (EPO) extension to the GRAIL mission. Once the files are properly validated and deemed safe for execution onboard the spacecraft, MoonKommand generates the command products via the Automated Sequence Processor (ASP) and produces uplink (.scmf) files for radiation to the GRAIL-A and/or GRAIL-B spacecraft. Any errors detected along the way are reported back to SRS via email. With MoonKommand, SRS can control their EPO instrument as part of a fully automated process. Inputs are received from SRS as either image capture files (.ccficd) for new image requests, or downlink/delete files (.ccfdl) for requesting image downlink from the instrument and for on-board memory management. The MoonKommand outputs are command and file-load (.scmf) files that will be uplinked by the Deep Space Network (DSN). Without the MoonKommand software, uplink product generation for the MoonKAM instrument would be a manual process. The software is specific to the MoonKAM instrument on the GRAIL mission. At the time of this writing, the GRAIL mission was making final preparations to begin the science phase, which was scheduled to continue until June 2012.

  13. SU-E-J-150: Impact of Intrafractional Prostate Motion On the Accuracy and Efficiency of Prostate SBRT Delivery: A Retrospective Analysis of Prostate Tracking Log Files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiang, H; Hirsch, A; Willins, J

    2014-06-01

    Purpose: To measure intrafractional prostate motion by time-based stereotactic x-ray imaging and investigate the impact on the accuracy and efficiency of prostate SBRT delivery. Methods: Prostate tracking log files with 1,892 x-ray image registrations from 18 SBRT fractions for 6 patients were retrospectively analyzed. Patient setup and beam delivery sessions were reviewed to identify extended periods of large prostate motion that caused delays in setup or interruptions in beam delivery. The 6D prostate motions were compared to the clinically used PTV margin of 3–5 mm (3 mm posterior, 5 mm all other directions), a hypothetical PTV margin of 2–3 mm (2 mm posterior, 3 mm all other directions), and the rotation correction limits (roll ±2°, pitch ±5° and yaw ±3°) of CyberKnife to quantify beam delivery accuracy. Results: Significant incidents of treatment start delay and beam delivery interruption were observed, mostly related to large pitch rotations of ≥±5°. Optimal setup time of 5–15 minutes was recorded in 61% of the fractions, and optimal beam delivery time of 30–40 minutes in 67% of the fractions. At a default imaging interval of 15 seconds, the percentage of prostate motion beyond PTV margin of 3–5 mm varied among patients, with a mean at 12.8% (range 0.0%–31.1%); and the percentage beyond PTV margin of 2–3 mm was at a mean of 36.0% (range 3.3%–83.1%). These timely detected offsets were all corrected real-time by the robotic manipulator or by operator intervention at the time of treatment interruptions. Conclusion: The durations of patient setup and beam delivery were directly affected by the occurrence of large prostate motion. Frequent imaging of down to 15 second interval is necessary for certain patients. Techniques for reducing prostate motion, such as using endorectal balloon, can be considered to assure consistently higher accuracy and efficiency of prostate SBRT delivery.

  14. Utilization Analysis in Low-Temperature Geothermal Play Fairway Analysis for the Appalachian Basin (GPFA-AB)

    DOE Data Explorer

    Jordan, Teresa E.

    2015-09-30

    This submission of Utilization Analysis data to the Geothermal Data Repository (GDR) node of the National Geothermal Data System (NGDS) is in support of Phase 1 Low Temperature Geothermal Play Fairway Analysis for the Appalachian Basin (project DE-EE0006726). The submission includes data pertinent to the methods and results of an analysis of the Surface Levelized Cost of Heat (SLCOH) for US Census Bureau Places within the study area. This was calculated using a modification of a program called GEOPHIRES, available at http://koenraadbeckers.net/geophires/index.php. The MATLAB modules used in conjunction with GEOPHIRES, the MATLAB data input file, the GEOPHIRES output data file, and an explanation of the software components have been provided. Results of the SLCOH analysis appear on 4 .png image files as mapped risk of heat utilization. For each of the 4 image (.png) files, there is an accompanying georeferenced TIF (.tif) file by the same name. In addition to calculating SLCOH, this Task 4 also identified many sites that may be prospects for use of a geothermal district heating system, based on their size and industry, rather than on the SLCOH. An industry sorted listing of the sites (.xlsx) and a map of these sites plotted as a layer onto different iterations of maps combining the three geological risk factors (Thermal Quality, Natural Reservoir Quality, and Risk of Seismicity) has been provided. In addition to the 6 image (.png) files of the maps in this series, a shape (.shp) file and 7 associated files are included as well. Finally, supporting files (.pdf) describing the utilization analysis methodology and summarizing the anticipated permitting for a deep district heating system are supplied. UPDATE: Newer version of the Utilization Analysis has been added here: https://gdr.openei.org/submissions/878

  15. GPFA-AB_Phase1UtilizationTask4DataUpload

    DOE Data Explorer

    Teresa E. Jordan

    2015-09-30

    This submission of Utilization Analysis data to the Geothermal Data Repository (GDR) node of the National Geothermal Data System (NGDS) is in support of Phase 1 Low Temperature Geothermal Play Fairway Analysis for the Appalachian Basin (project DE-EE0006726). The submission includes data pertinent to the methods and results of an analysis of the Surface Levelized Cost of Heat (SLCOH) for US Census Bureau ‘Places’ within the study area. This was calculated using a modification of a program called GEOPHIRES, available at http://koenraadbeckers.net/geophires/index.php. The MATLAB modules used in conjunction with GEOPHIRES, the MATLAB data input file, the GEOPHIRES output data file, and an explanation of the software components have been provided. Results of the SLCOH analysis appear on 4 .png image files as mapped ‘risk’ of heat utilization. For each of the 4 image (.png) files, there is an accompanying georeferenced TIF (.tif) file by the same name. In addition to calculating SLCOH, this Task 4 also identified many sites that may be prospects for use of a geothermal district heating system, based on their size and industry, rather than on the SLCOH. An industry sorted listing of the sites (.xlsx) and a map of these sites plotted as a layer onto different iterations of maps combining the three geological risk factors (Thermal Quality, Natural Reservoir Quality, and Risk of Seismicity) has been provided. In addition to the 6 image (.png) files of the maps in this series, a shape (.shp) file and 7 associated files are included as well. Finally, supporting files (.pdf) describing the utilization analysis methodology and summarizing the anticipated permitting for a deep district heating system are supplied.

  16. Short-Term File Reference Patterns in a UNIX Environment,

    DTIC Science & Technology

    1986-03-01

    accounts mentioned above. This includes major administrative and status files (for example, /etc/passwd), system libraries, system include files and so on...34 files are those appearing in / and /etc. Examples are /vmunix (the bootable kernel image) and /etc/passwd (passwords and other information on accounts...as /etc/passwd). The small size of opened files (55% are under 1024 bytes, a common block transfer size, and 75% are under 4096 bytes) suggests that

  17. Do you also have problems with the file format syndrome?

    PubMed

    De Cuyper, B; Nyssen, E; Christophe, Y; Cornelis, J

    1991-11-01

    In a biomedical data processing environment, an essential requirement is the ability to integrate a large class of standard modules for the acquisition, processing and display of the (image) data. Our approach to the management and manipulation of the different data formats is based on the specification of a common standard for the representation of data formats, called 'data nature descriptions' to emphasise that this representation not only specifies the structure but also the contents of data objects (files). The idea behind this concept is to associate each hardware and software component that produces or uses medical data with a description of the data objects manipulated by that component. In our approach a special software module (a format convertor generator) takes care of the appropriate data format conversions, required when two or more components of the system exchange data.

  18. NDSI products system based on Hadoop platform

    NASA Astrophysics Data System (ADS)

    Zhou, Yan; Jiang, He; Yang, Xiaoxia; Geng, Erhui

    2015-12-01

    Snow is the solid state of water on Earth and plays an important role in human life. Satellite remote sensing is significant for snow extraction, with the advantages of being periodic, synoptic, comprehensive, objective, and timely. With the continuous development of remote sensing technology, remote sensing data are acquired from multiple platforms, multiple sensors, and multiple viewing perspectives, and the demand for compute-intensive processing of remote sensing data is increasing. However, current production systems for remote sensing products run in a serial mode; such systems are mostly intended for professional remote sensing researchers, and systems achieving automatic or semi-automatic production are relatively rare. Facing massive remote sensing data, traditional serial production systems, with their low efficiency, have difficulty meeting the requirement that massive data be processed in a timely and efficient way. In order to effectively improve the production efficiency of NDSI (Normalized Difference Snow Index) products and meet this demand, this paper builds an NDSI product production system based on the Hadoop platform; the system mainly includes a remote sensing image management module, an NDSI production module, and a system service module. The main research contents and results are as follows. (1) Remote sensing image management module: includes image import and image metadata management. Massive base IRS images and NDSI product images (the output of the system's production tasks) are imported into the HDFS file system; at the same time, the corresponding orbit row/column number, maximum/minimum longitude and latitude, product date, HDFS storage path, Hadoop task ID (for NDSI products), and other metadata are read, thumbnails are created, a unique ID is assigned to each record, and the records are imported into the base/product image metadata database. (2) NDSI production module: includes index calculation and production task submission and monitoring. HDF images related to a production task are read as byte streams and parsed into Product form using the Beam library; the MapReduce distributed framework is used to execute production tasks while monitoring task status; when a production task completes, the remote sensing image management module is called to store the NDSI products. (3) System service module: includes image search and NDSI product download. Given image metadata attributes described in JSON format, the module returns the sequence IDs of matching images in the HDFS file system; for a given MapReduce task ID, the NDSI products output by that task are packaged into a ZIP file and a download link is returned. (4) System evaluation: massive remote sensing data were downloaded and processed into NDSI products to test performance, and the results show that the system has high extensibility, strong fault tolerance, and fast production speed, and that the image processing results have high accuracy.
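
    For reference, the NDSI itself is not specified in the abstract; it is conventionally computed from the green and shortwave-infrared bands as NDSI = (Green - SWIR) / (Green + SWIR). A minimal per-pixel sketch, independent of the Hadoop pipeline and assuming the numpy package, is shown below.

        # Minimal NDSI computation on two co-registered band arrays.
        # Pixels with NDSI above a threshold (commonly around 0.4) are usually
        # flagged as snow.  Toy reflectance values are used for illustration.
        import numpy as np

        def ndsi(green, swir, eps=1e-6):
            green = green.astype(np.float32)
            swir = swir.astype(np.float32)
            return (green - swir) / (green + swir + eps)

        green = np.array([[0.62, 0.10], [0.55, 0.08]])
        swir  = np.array([[0.12, 0.09], [0.10, 0.07]])
        index = ndsi(green, swir)
        print(index)
        print(index > 0.4)        # crude snow mask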

  19. Reprocessing of multi-channel seismic-reflection data collected in the Beaufort Sea

    USGS Publications Warehouse

    Agena, W.F.; Lee, Myung W.; Hart, P.E.

    2000-01-01

    Contained on this set of two CD-ROMs are stacked and migrated multi-channel seismic-reflection data for 65 lines recorded in the Beaufort Sea by the United States Geological Survey in 1977. All data were reprocessed by the USGS using updated processing methods resulting in improved interpretability. Each of the two CD-ROMs contains the following files: 1) 65 files containing the digital seismic data in standard, SEG-Y format; 2) 1 file containing navigation data for the 65 lines in standard SEG-P1 format; 3) an ASCII text file with cross-reference information for relating the sequential trace numbers on each line to cdp numbers and shotpoint numbers; 4) 2 small scale graphic images (stacked and migrated) of a segment of line 722 in Adobe Acrobat (R) PDF format; 5) a graphic image of the location map, generated from the navigation file; 6) PlotSeis, an MS-DOS Application that allows PC users to interactively view the SEG-Y files; 7) a PlotSeis documentation file; and 8) an explanation of the processing used to create the final seismic sections (this document).

  20. Processing digital images and calculation of beam emittance (pepper-pot method for the Krion source)

    NASA Astrophysics Data System (ADS)

    Alexandrov, V. S.; Donets, E. E.; Nyukhalova, E. V.; Kaminsky, A. K.; Sedykh, S. N.; Tuzikov, A. V.; Philippov, A. V.

    2016-12-01

    Programs based on Wolfram Mathematica and Origin software for the pre-processing of photographs of beam images on the mask are described. Angles of rotation around the axis and in the vertical plane are taken into account in the generation of the file with image coordinates. Results of the emittance calculation in test mode by the Pep_emit program, written in Visual Basic and using the generated file, are presented.

  1. Effect of Instrumentation Length and Instrumentation Systems: Hand Versus Rotary Files on Apical Crack Formation – An In vitro Study

    PubMed Central

    Mahesh, MC; Bhandary, Shreetha

    2017-01-01

    Introduction Stresses generated during root canal instrumentation have been reported to cause apical cracks. The smaller, less pronounced defects like cracks can later propagate into vertical root fracture, when the tooth is subjected to repeated stresses from endodontic or restorative procedures. Aim This study evaluated occurrence of apical cracks with stainless steel hand files, rotary NiTi RaCe and K3 files at two different instrumentation lengths. Materials and Methods In the present in vitro study, 60 mandibular premolars were mounted in resin blocks with simulated periodontal ligament. Apical 3 mm of the root surfaces were exposed and stained using India ink. Preoperative images of root apices were obtained at 100x using stereomicroscope. The teeth were divided into six groups of 10 each. First two groups were instrumented with stainless steel files, next two groups with rotary NiTi RaCe files and the last two groups with rotary NiTi K3 files. The instrumentation was carried out till the apical foramen (Working Length-WL) and 1 mm short of the apical foramen (WL-1) with each file system. After root canal instrumentation, postoperative images of root apices were obtained. Preoperative and postoperative images were compared and the occurrence of cracks was recorded. Descriptive statistical analysis and Chi-square tests were used to analyze the results. Results Apical root cracks were seen in 30%, 35% and 20% of teeth instrumented with K-files, RaCe files and K3 files respectively. There was no statistical significance among three instrumentation systems in the formation of apical cracks (p=0.563). Apical cracks were seen in 40% and 20% of teeth instrumented with K-files; 60% and 10% of teeth with RaCe files and 40% and 0% of teeth with K3 files at WL and WL-1 respectively. For groups instrumented with hand files there was no statistical significance in number of cracks at WL and WL-1 (p=0.628). But for teeth instrumented with RaCe files and K3 files significantly more number of cracks were seen at WL than WL-1 (p=0.057 for RaCe files and p=0.087 for K3 files). Conclusion There was no statistical significance between stainless steel hand files and rotary files in terms of crack formation. Instrumentation length had a significant effect on the formation of cracks when rotary files were used. Using rotary instruments 1 mm short of apical foramen caused lesser crack formation. But, there was no statistically significant difference in number of cracks formed with hand files at two instrumentation levels. PMID:28274036

  2. Effect of Instrumentation Length and Instrumentation Systems: Hand Versus Rotary Files on Apical Crack Formation - An In vitro Study.

    PubMed

    Devale, Madhuri R; Mahesh, M C; Bhandary, Shreetha

    2017-01-01

    Stresses generated during root canal instrumentation have been reported to cause apical cracks. The smaller, less pronounced defects like cracks can later propagate into vertical root fracture, when the tooth is subjected to repeated stresses from endodontic or restorative procedures. This study evaluated occurrence of apical cracks with stainless steel hand files, rotary NiTi RaCe and K3 files at two different instrumentation lengths. In the present in vitro study, 60 mandibular premolars were mounted in resin blocks with simulated periodontal ligament. Apical 3 mm of the root surfaces were exposed and stained using India ink. Preoperative images of root apices were obtained at 100x using stereomicroscope. The teeth were divided into six groups of 10 each. First two groups were instrumented with stainless steel files, next two groups with rotary NiTi RaCe files and the last two groups with rotary NiTi K3 files. The instrumentation was carried out till the apical foramen (Working Length-WL) and 1 mm short of the apical foramen (WL-1) with each file system. After root canal instrumentation, postoperative images of root apices were obtained. Preoperative and postoperative images were compared and the occurrence of cracks was recorded. Descriptive statistical analysis and Chi-square tests were used to analyze the results. Apical root cracks were seen in 30%, 35% and 20% of teeth instrumented with K-files, RaCe files and K3 files respectively. There was no statistical significance among three instrumentation systems in the formation of apical cracks (p=0.563). Apical cracks were seen in 40% and 20% of teeth instrumented with K-files; 60% and 10% of teeth with RaCe files and 40% and 0% of teeth with K3 files at WL and WL-1 respectively. For groups instrumented with hand files there was no statistical significance in number of cracks at WL and WL-1 (p=0.628). But for teeth instrumented with RaCe files and K3 files significantly more number of cracks were seen at WL than WL-1 (p=0.057 for RaCe files and p=0.087 for K3 files). There was no statistical significance between stainless steel hand files and rotary files in terms of crack formation. Instrumentation length had a significant effect on the formation of cracks when rotary files were used. Using rotary instruments 1 mm short of apical foramen caused lesser crack formation. But, there was no statistically significant difference in number of cracks formed with hand files at two instrumentation levels.

  3. Radiology teaching file cases on the World Wide Web.

    PubMed

    Scalzetti, E M

    1997-08-01

    The presentation of a radiographic teaching file on the World Wide Web can be enhanced by attending to principles of web design. Chief among these are appropriate control of page layout, minimization of the time required to download a page from the remote server, and provision for navigation within and among the web pages that constitute the site. Page layout is easily accomplished by the use of tables; column widths can be fixed to maintain an acceptable line length for text. Downloading time is minimized by rigorous editing and by optimal compression of image files; beyond this, techniques like preloading of images and specification of image width and height are also helpful. Navigation controls should be clear, consistent, and readily available.

  4. Is the bang worth the buck? A RAID performance study

    NASA Technical Reports Server (NTRS)

    Hauser, Susan E.; Berman, Lewis E.; Thoma, George R.

    1996-01-01

    Expecting a high data delivery rate as well as data protection, the Lister Hill National Center for Biomedical Communications procured a RAID system to house image files for image delivery applications. A study was undertaken to determine the configuration of the RAID system that would provide for the fastest retrieval of image files. Average retrieval times with single and with concurrent users were measured for several stripe widths and several numbers of disks for RAID levels 0, 0+1 and 5. These are compared to each other and to average retrieval times for non-RAID configurations of the same hardware. Although the study is ongoing, a few conclusions have emerged regarding the tradeoffs among the different configurations with respect to file retrieval speed and cost.

  5. Distributed PACS using distributed file system with hierarchical meta data servers.

    PubMed

    Hiroyasu, Tomoyuki; Minamitani, Yoshiyuki; Miki, Mitsunori; Yokouchi, Hisatake; Yoshimi, Masato

    2012-01-01

    In this research, we propose a new distributed PACS (Picture Archiving and Communication System) that can integrate the several PACSs that exist in individual medical institutions. A conventional PACS stores DICOM files in a single database. In the proposed system, by contrast, each DICOM file is separated into metadata and image data, which are stored individually. Because the entire file does not have to be accessed for every operation, tasks such as finding files and changing titles can be performed at high speed. At the same time, because a distributed file system is used, access to the image files is also fast and fault tolerant. A further important point of the proposed system is the simplicity of integrating several PACSs: only the metadata servers need to be integrated to construct the combined system. The system also scales file access with the number and size of files. On the other hand, because the metadata server is centralised, it is the weak point of the system. To address this, hierarchical metadata servers are introduced; this not only increases fault tolerance but also improves the scalability of file access. To evaluate the proposed system, a prototype was implemented using Gfarm, and the file-search times of Gfarm and NFS were compared.
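
    The metadata/image split at the heart of this design can be illustrated with a short Python sketch (an illustration only, not the authors' implementation; it uses the pydicom library, and meta_store/image_store are hypothetical stand-ins for the metadata servers and the distributed file system):

        # Minimal sketch: split a DICOM file into lightweight, queryable metadata
        # and bulk pixel data stored separately. Assumes pydicom is installed and
        # that the file contains pixel data; the two stores are illustrative.
        import json
        import pydicom

        def split_dicom(path, meta_store, image_store):
            ds = pydicom.dcmread(path)
            meta = {
                "SOPInstanceUID": str(ds.SOPInstanceUID),
                "PatientID": str(ds.get("PatientID", "")),
                "StudyDate": str(ds.get("StudyDate", "")),
                "Modality": str(ds.get("Modality", "")),
            }
            # Bulk pixel data goes to the (distributed) image store, keyed by UID.
            image_store[meta["SOPInstanceUID"]] = ds.PixelData
            # Metadata-only operations (search, renaming) never touch the pixels.
            meta_store[meta["SOPInstanceUID"]] = json.dumps(meta)
            return meta["SOPInstanceUID"]

        # Usage with in-memory dictionaries standing in for the two storage tiers:
        # uid = split_dicom("example.dcm", meta_store={}, image_store={})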

  6. VizieR Online Data Catalog: C/2012 F6 (Lemmon) and C/2012 S1 (ISON) maps (Cordiner+, 2014)

    NASA Astrophysics Data System (ADS)

    Cordiner, M. A.; Remijan, A. J.; Boissier, J.; Milam, S. N.; Mumma, M. J.; Charnley, S. B.; Paganini, L.; Villanueva, G.; Bockelee-Morvan, D.; Kuan, Y.-J.; Chuang, Y.-L.; Lis, D. C.; Biver, N.; Crovisier, J.; Minniti, D.; Coulson, I. M.

    2017-04-01

    WCS-calibrated fits image files of the molecular flux maps shown in Figure 1 for HCN, HNC and H2CO observed in comets C/2012 F6 (Lemmon) and C/2012 S1 (ISON) using ALMA. The files are labeled with the corresponding comet and molecule names. The files are standard two-dimensional fits images, which can be opened in fits image viewers such as SAOimage DS9, CASA viewer, or Starlink Gaia. GIMP and Adobe Photoshop can also be used, provided the appropriate plugins are present. The images contain flux values (in units of Jansky km/s per beam), as a function of celestial coordinate in the J2000 equatorial system. Due to the cometary motions, the absolute coordinate systems are accurate only at the start of the observations (dates and times are given in Table 1). These images are the result of integrating the (3D) ALMA data cubes over the full widths of the observed spectral lines (equivalent to collapsing the data cubes along their respective spectral/velocity axes). The beam dimensions (BMAJ and BMIN), corresponding to the angular resolution of the images, are given in the image headers in units of degrees.

    object.dat:
        Code       Name     Elem [d]    q [AU]     e          i [deg]     H1 [mag]
        C/2012 F6  Lemmon   2456375.5   0.7312461  0.9985125  82.607966   7.96
        C/2012 S1  Ison     2456624.5   0.0124515  0.9998921  64.401571   6.11
    (2 data files).
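
    The cube-collapse step described above can be sketched in a few lines of Python with numpy and astropy (a minimal sketch; the file name and channel width are placeholders, not values from the catalog):

        # Minimal sketch: collapse a 3D spectral cube into a velocity-integrated
        # flux map (Jy km/s per beam) by summing along the spectral axis.
        # "cube.fits" and the channel width dv_kms are placeholder values.
        import numpy as np
        from astropy.io import fits

        with fits.open("cube.fits") as hdul:
            cube = hdul[0].data                    # axes ordered (channel, y, x)

        dv_kms = 1.0                               # assumed channel width in km/s
        flux_map = np.nansum(cube, axis=0) * dv_kms

        fits.writeto("flux_map.fits", flux_map, overwrite=True)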

  7. Verification of respiratory-gated radiotherapy with new real-time tumour-tracking radiotherapy system using cine EPID images and a log file

    NASA Astrophysics Data System (ADS)

    Shiinoki, Takehiro; Hanazawa, Hideki; Yuasa, Yuki; Fujimoto, Koya; Uehara, Takuya; Shibuya, Keiko

    2017-02-01

    A combined system comprising the TrueBeam linear accelerator and a new real-time tumour-tracking radiotherapy system, SyncTraX, was installed at our institution. The objectives of this study are to develop a method for the verification of respiratory-gated radiotherapy with SyncTraX using cine electronic portal image device (EPID) images and a log file and to verify this treatment in clinical cases. Respiratory-gated radiotherapy was performed using TrueBeam and the SyncTraX system. Cine EPID images and a log file were acquired for a phantom and three patients during the course of the treatment. Digitally reconstructed radiographs (DRRs) were created for each treatment beam using a planning CT set. The cine EPID images, log file, and DRRs were analysed using purpose-developed software. For the phantom case, the accuracy of the proposed method was evaluated to verify the respiratory-gated radiotherapy. For the clinical cases, the intra- and inter-fractional variations of the fiducial marker used as an internal surrogate were calculated to evaluate the gating accuracy and set-up uncertainty in the superior-inferior (SI), anterior-posterior (AP), and left-right (LR) directions. The proposed method achieved high accuracy for the phantom verification. For the clinical cases, the intra- and inter-fractional variations of the fiducial marker were ⩽3 mm and ±3 mm in the SI, AP, and LR directions. We proposed a method for the verification of respiratory-gated radiotherapy with SyncTraX using cine EPID images and a log file and showed that this treatment is performed with high accuracy in clinical cases. This work was partly presented at the 58th Annual Meeting of the American Association of Physicists in Medicine.

  8. Verification of respiratory-gated radiotherapy with new real-time tumour-tracking radiotherapy system using cine EPID images and a log file.

    PubMed

    Shiinoki, Takehiro; Hanazawa, Hideki; Yuasa, Yuki; Fujimoto, Koya; Uehara, Takuya; Shibuya, Keiko

    2017-02-21

    A combined system comprising the TrueBeam linear accelerator and a new real-time tumour-tracking radiotherapy system, SyncTraX, was installed at our institution. The objectives of this study are to develop a method for the verification of respiratory-gated radiotherapy with SyncTraX using cine electronic portal image device (EPID) images and a log file and to verify this treatment in clinical cases. Respiratory-gated radiotherapy was performed using TrueBeam and the SyncTraX system. Cine EPID images and a log file were acquired for a phantom and three patients during the course of the treatment. Digitally reconstructed radiographs (DRRs) were created for each treatment beam using a planning CT set. The cine EPID images, log file, and DRRs were analysed using purpose-developed software. For the phantom case, the accuracy of the proposed method was evaluated to verify the respiratory-gated radiotherapy. For the clinical cases, the intra- and inter-fractional variations of the fiducial marker used as an internal surrogate were calculated to evaluate the gating accuracy and set-up uncertainty in the superior-inferior (SI), anterior-posterior (AP), and left-right (LR) directions. The proposed method achieved high accuracy for the phantom verification. For the clinical cases, the intra- and inter-fractional variations of the fiducial marker were ⩽3 mm and ±3 mm in the SI, AP, and LR directions. We proposed a method for the verification of respiratory-gated radiotherapy with SyncTraX using cine EPID images and a log file and showed that this treatment is performed with high accuracy in clinical cases.

  9. SU-F-T-469: A Clinically Observed Discrepancy Between Image-Based and Log- Based MLC Position

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neal, B; Ahmed, M; Siebers, J

    2016-06-15

    Purpose: To present a clinical case which challenges the base assumption of log-file based QA, by showing that the actual position of a MLC leaf can suddenly deviate from its programmed and logged position by >1 mm as observed with real-time imaging. Methods: An EPID-based exit-fluence dosimetry system designed to prevent gross delivery errors was used in cine mode to capture portal images during treatment. Visual monitoring identified an anomalous MLC leaf pair gap not otherwise detected by the automatic position verification. The position of the erred leaf was measured on EPID images and log files were analyzed for the treatment in question, the prior day’s treatment, and for daily MLC test patterns acquired on those treatment days. Additional standard test patterns were used to quantify the leaf position. Results: Whereas the log file reported no difference between planned and recorded positions, image-based measurements showed the leaf to be 1.3±0.1 mm medial from the planned position. This offset was confirmed with the test pattern irradiations. Conclusion: It has been clinically observed that log-file derived leaf positions can differ from their actual positions by >1 mm, and therefore cannot be considered to be the actual leaf positions. This cautions the use of log-based methods for MLC or patient quality assurance without independent confirmation of log integrity. Frequent verification of MLC positions through independent means is a necessary precondition to trusting log file records. Intra-treatment EPID imaging provides a method to capture departures from MLC planned positions. Work was supported in part by Varian Medical Systems.

  10. ARCUS Internet Media Archive (IMA): A Window into the Arctic - An Online Resource For Education and Outreach

    NASA Astrophysics Data System (ADS)

    Buxbaum, T. M.; Warnick, W. K.; Polly, B.; Breen, K. J.

    2007-12-01

    The ARCUS Internet Media Archive (IMA) is a collection of photos, graphics, videos, and presentations about the Arctic and Antarctic that are shared through the Internet. It provides the polar research community and the public at large with a centralized location where images and video pertaining to polar research can be browsed and retrieved for a variety of uses. The IMA currently contains almost 6,500 publicly accessible photos, including 4,000 photos from the National Science Foundation (NSF) funded Teachers and Researchers Exploring and Collaborating (TREC) program, an educational research experience in which K-12 teachers participate in arctic research as a pathway to improving science education. The IMA is also the future home of all electronic media from the NSF funded PolarTREC program, a continuation of TREC that now takes place in both the Arctic and Antarctic. The IMA includes 450 video files, 270 audio files, nearly 100 graphics and logos, 28 presentations, and approximately 10,000 additional resources that are being prepared for public access. The contents of this archive are organized by file type, photographer's name, event, or by organization, with each photo or file accompanied by information on content, contributor source, and usage requirements. All the files are keyworded and all information, including file name and description, is completely searchable. ARCUS plans to continue to improve and expand the IMA with a particular focus on providing graphics depicting key arctic research results and findings as well as edited video archives of relevant scientific community meetings. To submit files or for more information and to view the ARCUS Internet Media Archive, please go to: http://media.arcus.org or email photo@arcus.org.

  11. Evaluation of an interactive case simulation system in dermatology and venereology for medical students

    PubMed Central

    Wahlgren, Carl-Fredrik; Edelbring, Samuel; Fors, Uno; Hindbeck, Hans; Ståhle, Mona

    2006-01-01

    Background Most of the many computer resources used in clinical teaching of dermatology and venereology for medical undergraduates are information-oriented and focus mostly on finding a "correct" multiple-choice alternative or free-text answer. We wanted to create an interactive computer program, which facilitates not only factual recall but also clinical reasoning. Methods Through continuous interaction with students, a new computerised interactive case simulation system, NUDOV, was developed. It is based on authentic cases and contains images of real patients, actors and healthcare providers. The student selects a patient and proposes questions for medical history, examines the skin, and suggests investigations, diagnosis, differential diagnoses and further management. Feedback is given by comparing the user's own suggestions with those of a specialist. In addition, a log file of the student's actions is recorded. The program includes a large number of images, video clips and Internet links. It was evaluated with a student questionnaire and by randomising medical students to conventional teaching (n = 85) or conventional teaching plus NUDOV (n = 31) and comparing the results of the two groups in a final written examination. Results The questionnaire showed that 90% of the NUDOV students stated that the program facilitated their learning to a large/very large extent, and 71% reported that extensive working with authentic computerised cases made it easier to understand and learn about diseases and their management. The layout, user-friendliness and feedback concept were judged as good/very good by 87%, 97%, and 100%, respectively. Log files revealed that the students, in general, worked with each case for 60–90 min. However, the intervention group did not score significantly better than the control group in the written examination. Conclusion We created a computerised case simulation program allowing students to manage patients in a non-linear format supporting the clinical reasoning process. The student gets feedback through comparison with a specialist, eliminating the need for external scoring or correction. The model also permits discussion of case processing, since all transactions are stored in a log file. The program was highly appreciated by the students, but did not significantly improve their performance in the written final examination. PMID:16907972

  12. Detection and segmentation of multiple touching product inspection items

    NASA Astrophysics Data System (ADS)

    Casasent, David P.; Talukder, Ashit; Cox, Westley; Chang, Hsuan-Ting; Weber, David

    1996-12-01

    X-ray images of pistachio nuts on conveyor trays for product inspection are considered. The first step in such a processor is to locate each individual item and place it in a separate file for input to a classifier that determines the quality of each nut. This paper considers new techniques to: detect each item (each nut can be in any orientation, so we employ new rotation-invariant filters to locate each item independent of its orientation); produce separate image files for each item (a new blob coloring algorithm provides this for isolated, non-touching, input items); segment touching or overlapping input items into separate image files (we use a morphological watershed transform to achieve this); and apply morphological processing to remove the shell and produce an image of only the nutmeat. Each of these operations and algorithms is detailed, and quantitative data for each are presented for the x-ray nut inspection problem noted. These techniques are of general use in many different product inspection problems in agriculture and other areas.
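
    The separation of touching items can be illustrated with a generic distance-transform-plus-watershed sketch in Python (an illustration of the general technique using scipy/scikit-image, not the authors' filters or code; the min_distance value is an assumption):

        # Minimal sketch: split touching objects in a binary mask with a watershed
        # seeded at the local maxima of the distance transform.
        import numpy as np
        from scipy import ndimage
        from skimage.feature import peak_local_max
        from skimage.segmentation import watershed

        def split_touching(binary):
            """binary: 2D boolean array, True where objects (e.g. nuts) are present."""
            distance = ndimage.distance_transform_edt(binary)
            # One marker per local maximum of the distance map (roughly one per object).
            peaks = peak_local_max(distance, min_distance=10, labels=binary.astype(int))
            markers = np.zeros(binary.shape, dtype=int)
            markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
            # Flood the inverted distance map from the markers; touching objects split.
            return watershed(-distance, markers, mask=binary)

        # Each labelled region could then be written to its own image file for classification.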

  13. Informatics in radiology (infoRAD): Vendor-neutral case input into a server-based digital teaching file system.

    PubMed

    Kamauu, Aaron W C; DuVall, Scott L; Robison, Reid J; Liimatta, Andrew P; Wiggins, Richard H; Avrin, David E

    2006-01-01

    Although digital teaching files are important to radiology education, there are no current satisfactory solutions for export of Digital Imaging and Communications in Medicine (DICOM) images from picture archiving and communication systems (PACS) in desktop publishing format. A vendor-neutral digital teaching file, the Radiology Interesting Case Server (RadICS), offers an efficient tool for harvesting interesting cases from PACS without requiring modifications of the PACS configurations. Radiologists push imaging studies from PACS to RadICS via the standard DICOM Send process, and the RadICS server automatically converts the DICOM images into the Joint Photographic Experts Group format, a common desktop publishing format. They can then select key images and create an interesting case series at the PACS workstation. RadICS was tested successfully against multiple unmodified commercial PACS. Using RadICS, radiologists are able to harvest and author interesting cases at the point of clinical interpretation with minimal disruption in clinical work flow. RSNA, 2006

  14. Nosql for Storage and Retrieval of Large LIDAR Data Collections

    NASA Astrophysics Data System (ADS)

    Boehm, J.; Liu, K.

    2015-08-01

    Developments in LiDAR technology over the past decades have made LiDAR a mature and widely accepted source of geospatial information, which in turn has led to an enormous growth in data volume. The central idea of a file-centric storage of LiDAR point clouds is the observation that large collections of LiDAR data are typically delivered as large collections of files, rather than as single files of terabyte size. This split of the dataset, commonly referred to as tiling, is usually done to accommodate a specific processing pipeline, so it makes sense to preserve it. A document-oriented NoSQL database can easily emulate this data partitioning by representing each tile (file) as a separate document. The document stores the metadata of the tile; the actual files are stored in a distributed file system emulated by the NoSQL database. We demonstrate the use of MongoDB, a highly scalable document-oriented NoSQL database, for storing large LiDAR files. Like any NoSQL database, MongoDB allows queries on the attributes of the document; notably, it also supports spatial queries, so we can query the bounding boxes of the LiDAR tiles spatially. Inserting and retrieving files on the cloud-based database is compared with the transfer speed of a native file system and of cloud storage.
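
    The tile-per-document idea can be sketched with a few lines of Python using pymongo and GridFS (a minimal sketch under assumed names; the database, collection, and coordinates are illustrative, and GridFS here plays the role of the emulated distributed file store):

        # Minimal sketch: one document per LiDAR tile, the raw file in GridFS,
        # and a spatial query on the tile footprints. Assumes a local mongod.
        import gridfs
        from pymongo import MongoClient, GEOSPHERE

        client = MongoClient("mongodb://localhost:27017")
        db = client["lidar_archive"]
        fs = gridfs.GridFS(db)
        db.tiles.create_index([("footprint", GEOSPHERE)])

        def insert_tile(path, footprint_ring):
            """footprint_ring: closed list of [lon, lat] pairs bounding the tile."""
            with open(path, "rb") as f:
                file_id = fs.put(f, filename=path)          # bulk data in GridFS
            db.tiles.insert_one({
                "filename": path,
                "gridfs_id": file_id,
                "footprint": {"type": "Polygon", "coordinates": [footprint_ring]},
            })

        # Retrieve all tiles whose footprint intersects an area of interest:
        aoi = {"type": "Polygon",
               "coordinates": [[[-0.20, 51.40], [-0.10, 51.40], [-0.10, 51.50],
                                [-0.20, 51.50], [-0.20, 51.40]]]}
        hits = db.tiles.find({"footprint": {"$geoIntersects": {"$geometry": aoi}}})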

  15. Archive Inventory Management System (AIMS) — A Fast, Metrics Gathering Framework for Validating and Gaining Insight from Large File-Based Data Archives

    NASA Astrophysics Data System (ADS)

    Verma, R. V.

    2018-04-01

    The Archive Inventory Management System (AIMS) is a software package for understanding the distribution, characteristics, integrity, and nuances of files and directories in large file-based data archives on a continuous basis.

  16. Legato: Personal Computer Software for Analyzing Pressure-Sensitive Paint Data

    NASA Technical Reports Server (NTRS)

    Schairer, Edward T.

    2001-01-01

    'Legato' is personal computer software for analyzing radiometric pressure-sensitive paint (PSP) data. The software is written in the C programming language and executes under Windows 95/98/NT operating systems. It includes all operations normally required to convert pressure-paint image intensities to normalized pressure distributions mapped to physical coordinates of the test article. The program can analyze data from both single- and bi-luminophore paints and provides for both in situ and a priori paint calibration. In addition, there are functions for determining paint calibration coefficients from calibration-chamber data. The software is designed as a self-contained, interactive research tool that requires as input only the bare minimum of information needed to accomplish each function, e.g., images, model geometry, and paint calibration coefficients (for a priori calibration) or pressure-tap data (for in situ calibration). The program includes functions that can be used to generate needed model geometry files for simple model geometries (e.g., airfoils, trapezoidal wings, rotor blades) based on the model planform and airfoil section. All data files except images are in ASCII format and thus are easily created, read, and edited. The program does not use database files. This simplifies setup but makes the program inappropriate for analyzing massive amounts of data from production wind tunnels. Program output consists of Cartesian plots, false-colored real and virtual images, pressure distributions mapped to the surface of the model, assorted ASCII data files, and a text file of tabulated results. Graphical output is displayed on the computer screen and can be saved as publication-quality (PostScript) files.

  17. A system for verifying models and classification maps by extraction of information from a variety of data sources

    NASA Technical Reports Server (NTRS)

    Norikane, L.; Freeman, A.; Way, J.; Okonek, S.; Casey, R.

    1992-01-01

    Recent updates to a geographical information system (GIS) called VICAR (Video Image Communication and Retrieval)/IBIS are described. The system is designed to handle data from many different formats (vector, raster, tabular) and many different sources (models, radar images, ground truth surveys, optical images). All the data are referenced to a single georeference plane, and average or typical values for parameters defined within a polygonal region are stored in a tabular file, called an info file. The info file format allows tracking of data in time, maintenance of links between component data sets and the georeference image, conversion of pixel values to 'actual' values (e.g., radar cross-section, luminance, temperature), graph plotting, data manipulation, generation of training vectors for classification algorithms, and comparison between actual measurements and model predictions (with ground truth data as input).

  18. VizieR Online Data Catalog: Jekyll & Hyde galaxies ALMA cube & spectrum (Schreiber+, 2018)

    NASA Astrophysics Data System (ADS)

    Schreiber, C.; Labbe, I.; Glazebrook, K.; Bekiaris, G.; Papovich, C.; Costa, T.; Elbaz, D.; Kacprzak, G. G.; Nanayakkara, T.; Oesch, P.; Pannella, M.; Spitler, L.; Straatman, C.; Tran, K.-V.; Wang, T.

    2017-11-01

    These files consist of the full ALMA data cube for the galaxies Jekyll and Hyde, together with the extracted continuum image and the spectrum of Hyde. The data cube was produced by CASA (v4.7.0), the continuum image was constructed as the weighted average in line-free channels, and the spectrum was extracted at the peak flux position of Hyde. The data cube and spectrum files contain two extensions, one for the flux, and another for the uncertainty. This uncertainty was determined from the RMS of the cube data between 2 and 8" away from the center. All fluxes are in units of Jansky, and the spectral axis is given in observed frequency (GHz). The images were not CLEANed, therefore the dirty beam (which is also provided here) is the correct point-spread function to use when analyzing these images. (2 data files).

  19. Tabular data and graphical images in support of the U.S. Geological Survey National Oil and Gas Assessment -- San Joaquin Basin (5010): Chapter 28 in Petroleum systems and geologic assessment of oil and gas in the San Joaquin Basin Province, California

    USGS Publications Warehouse

    Klett, T.R.; Le, P.A.

    2007-01-01

    This chapter describes data used in support of the assessment process. Digital tabular data used in this report, and archival data that permit the user to perform further analyses, are available elsewhere on this CD-ROM. The data can be imported by computers and software without the reader having to transcribe them from the portable document format (.pdf) files of the text. Because of the number and variety of platforms and software available, graphical images are provided as .pdf files and tabular data are provided in a raw form as tab-delimited text files (.tab files).
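
    Such tab-delimited .tab files can be loaded programmatically without any transcription; a minimal Python sketch (the file name and column handling are placeholders):

        # Minimal sketch: read a tab-delimited (.tab) table into a list of dicts.
        # "assessment_table.tab" is a placeholder file name.
        import csv

        with open("assessment_table.tab", newline="") as f:
            reader = csv.DictReader(f, delimiter="\t")
            rows = list(reader)
            columns = reader.fieldnames

        print(len(rows), "records with columns:", columns)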

  20. WE-G-213CD-03: A Dual Complementary Verification Method for Dynamic Tumor Tracking on Vero SBRT.

    PubMed

    Poels, K; Depuydt, T; Verellen, D; De Ridder, M

    2012-06-01

    To use complementary cine EPID and gimbals log file analysis for in-vivo tracking accuracy monitoring. A clinical prototype of dynamic tracking (DT) was installed on the Vero SBRT system. This prototype allowed tumor tracking by gimballed linac rotations using an internal-external correspondence model, and its software logged in detail all gimbals rotations applied during tracking. The integration of an EPID on the Vero system allowed the acquisition of cine EPID images during DT. We quantified the tracking error on cine EPID images (E-EPID) by subtracting the field centroid from the target center (fiducial marker detection). Dynamic gimbals log file information was combined with orthogonal x-ray verification images to calculate the in-vivo tracking error (E-kVLog). The correlation between E-kVLog and E-EPID was calculated to validate the gimbals log file. We further investigated the sensitivity of the log file tracking error by introducing predefined systematic tracking errors. As an application, we calculated the gimbals log file tracking error for dynamic hidden target tests to investigate gravity effects and the decoupling of gimbals rotation from gantry rotation. Finally, the clinical accuracy of dynamic tracking was evaluated by calculating complementary cine EPID and log file tracking errors. A strong correlation was found between the log file and cine EPID tracking error distributions during concurrent measurements (R=0.98). The gimbals log files were sensitive enough to detect an introduced systematic tracking error of 0.5 mm. Dynamic hidden target tests showed no gravity influence on tracking performance and a high degree of decoupling between gimbals and gantry rotation during dynamic arc dynamic tracking. Submillimetric agreement between the clinical complementary tracking error measurements was found. Redundancy of the internal gimbals log file, together with x-ray verification images and complementary independent cine EPID images, was implemented to monitor the accuracy of gimballed tumor tracking on Vero SBRT. Research was financially supported by the Flemish government (FWO), Hercules Foundation and BrainLAB AG. © 2012 American Association of Physicists in Medicine.

  1. TOPPE: A framework for rapid prototyping of MR pulse sequences.

    PubMed

    Nielsen, Jon-Fredrik; Noll, Douglas C

    2018-06-01

    To introduce a framework for rapid prototyping of MR pulse sequences. We propose a simple file format, called "TOPPE", for specifying all details of an MR imaging experiment, such as gradient and radiofrequency waveforms and the complete scan loop. In addition, we provide a TOPPE file "interpreter" for GE scanners, which is a binary executable that loads TOPPE files and executes the sequence on the scanner. We also provide MATLAB scripts for reading and writing TOPPE files and previewing the sequence prior to hardware execution. With this setup, the task of the pulse sequence programmer is reduced to creating TOPPE files, eliminating the need for hardware-specific programming. No sequence-specific compilation is necessary; the interpreter only needs to be compiled once (for every scanner software upgrade). We demonstrate TOPPE in three different applications: k-space mapping, non-Cartesian PRESTO whole-brain dynamic imaging, and myelin mapping in the brain using inhomogeneous magnetization transfer. We successfully implemented and executed the three example sequences. By simply changing the various TOPPE sequence files, a single binary executable (interpreter) was used to execute several different sequences. The TOPPE file format is a complete specification of an MR imaging experiment, based on arbitrary sequences of a (typically small) number of unique modules. Along with the GE interpreter, TOPPE comprises a modular and flexible platform for rapid prototyping of new pulse sequences. Magn Reson Med 79:3128-3134, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  2. Shaping ability of 4 different single-file systems in simulated S-shaped canals.

    PubMed

    Saleh, Abdulrahman Mohammed; Vakili Gilani, Pouyan; Tavanafar, Saeid; Schäfer, Edgar

    2015-04-01

    The aim of this study was to compare the shaping ability of 4 different single-file systems in simulated S-shaped canals. Sixty-four S-shaped canals in resin blocks were prepared to an apical size of 25 using Reciproc (VDW, Munich, Germany), WaveOne (Dentsply Maillefer, Ballaigues, Switzerland), OneShape (Micro Méga, Besançon, France), and F360 (Komet Brasseler, Lemgo, Germany) (n = 16 canals/group) systems. Composite images were made from the superimposition of pre- and postinstrumentation images. The amount of resin removed by each system was measured by using a digital template and image analysis software. Canal aberrations and the preparation time were also recorded. The data were statistically analyzed by using analysis of variance, Tukey, and chi-square tests. Canals prepared with the F360 and OneShape systems were better centered compared with the Reciproc and WaveOne systems. Reciproc and WaveOne files removed significantly greater amounts of resin from the inner side of both curvatures (P < .05). Instrumentation with OneShape and Reciproc files was significantly faster compared with WaveOne and F360 files (P < .05). No instrument fractured during canal preparation. Under the conditions of this study, all single-file instruments were safe to use and were able to prepare the canals efficiently. However, single-file systems that are less tapered seem to be more favorable when preparing S-shaped canals. Copyright © 2015 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  3. Real-time distortion correction of spiral and echo planar images using the gradient system impulse response function.

    PubMed

    Campbell-Washburn, Adrienne E; Xue, Hui; Lederman, Robert J; Faranesh, Anthony Z; Hansen, Michael S

    2016-06-01

    MRI-guided interventions demand high frame rate imaging, making fast imaging techniques such as spiral imaging and echo planar imaging (EPI) appealing. In this study, we implemented a real-time distortion correction framework to enable the use of these fast acquisitions for interventional MRI. Distortions caused by gradient waveform inaccuracies were corrected using the gradient impulse response function (GIRF), which was measured by standard equipment and saved as a calibration file on the host computer. This file was used at runtime to calculate the predicted k-space trajectories for image reconstruction. Additionally, the off-resonance reconstruction frequency was modified in real time to interactively deblur spiral images. Real-time distortion correction for arbitrary image orientations was achieved in phantoms and healthy human volunteers. The GIRF-predicted k-space trajectories matched measured k-space trajectories closely for spiral imaging. Spiral and EPI image distortion was visibly improved using the GIRF-predicted trajectories. The GIRF calibration file showed no systematic drift in 4 months and was demonstrated to correct distortions after 30 min of continuous scanning despite gradient heating. Interactive off-resonance reconstruction was used to sharpen anatomical boundaries during continuous imaging. This real-time distortion correction framework will enable the use of these high frame rate imaging methods for MRI-guided interventions. Magn Reson Med 75:2278-2285, 2016. © 2015 Wiley Periodicals, Inc.
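
    The trajectory-prediction step can be sketched as a discrete convolution of the programmed gradient waveform with the measured impulse response, followed by integration (a minimal Python/numpy sketch under assumed units; it is not the authors' reconstruction code):

        # Minimal sketch: predict the realised k-space trajectory from the programmed
        # gradient waveform and a measured gradient impulse response function (GIRF).
        # Assumed units: gradient in T/m, dt in seconds, GIRF sampled at the same dt.
        import numpy as np

        GAMMA_HZ_PER_T = 42.577478e6       # proton gyromagnetic ratio / (2*pi)

        def predict_k_trajectory(g_nominal, girf, dt):
            # The realised gradient is the nominal waveform filtered by the GIRF
            # (discrete convolution scaled by dt approximates the continuous one).
            g_actual = np.convolve(g_nominal, girf)[: len(g_nominal)] * dt
            # k(t) = gamma * integral of G(t') dt', approximated by a cumulative sum.
            return GAMMA_HZ_PER_T * np.cumsum(g_actual) * dt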

  4. Real-time distortion correction of spiral and echo planar images using the gradient system impulse response function

    PubMed Central

    Campbell-Washburn, Adrienne E; Xue, Hui; Lederman, Robert J; Faranesh, Anthony Z; Hansen, Michael S

    2015-01-01

    Purpose MRI-guided interventions demand high frame-rate imaging, making fast imaging techniques such as spiral imaging and echo planar imaging (EPI) appealing. In this study, we implemented a real-time distortion correction framework to enable the use of these fast acquisitions for interventional MRI. Methods Distortions caused by gradient waveform inaccuracies were corrected using the gradient impulse response function (GIRF), which was measured by standard equipment and saved as a calibration file on the host computer. This file was used at runtime to calculate the predicted k-space trajectories for image reconstruction. Additionally, the off-resonance reconstruction frequency was modified in real-time to interactively de-blur spiral images. Results Real-time distortion correction for arbitrary image orientations was achieved in phantoms and healthy human volunteers. The GIRF predicted k-space trajectories matched measured k-space trajectories closely for spiral imaging. Spiral and EPI image distortion was visibly improved using the GIRF predicted trajectories. The GIRF calibration file showed no systematic drift in 4 months and was demonstrated to correct distortions after 30 minutes of continuous scanning despite gradient heating. Interactive off-resonance reconstruction was used to sharpen anatomical boundaries during continuous imaging. Conclusions This real-time distortion correction framework will enable the use of these high frame-rate imaging methods for MRI-guided interventions. PMID:26114951

  5. Some utilities to help produce Rich Text Files from Stata.

    PubMed

    Gillman, Matthew S

    Producing RTF files from Stata can be difficult and somewhat cryptic. Utilities are introduced to simplify this process; one builds up a table row-by-row, another inserts a PNG image file into an RTF document, and the others start and finish the RTF document.

  6. Optical Coherence Tomography in the UK Biobank Study - Rapid Automated Analysis of Retinal Thickness for Large Population-Based Studies.

    PubMed

    Keane, Pearse A; Grossi, Carlota M; Foster, Paul J; Yang, Qi; Reisman, Charles A; Chan, Kinpui; Peto, Tunde; Thomas, Dhanes; Patel, Praveen J

    2016-01-01

    To describe an approach to the use of optical coherence tomography (OCT) imaging in large, population-based studies, including methods for OCT image acquisition, storage, and the remote, rapid, automated analysis of retinal thickness. In UK Biobank, OCT images were acquired between 2009 and 2010 using a commercially available "spectral domain" OCT device (3D OCT-1000, Topcon). Images were obtained using a raster scan protocol, 6 mm x 6 mm in area, and consisting of 128 B-scans. OCT image sets were stored on UK Biobank servers in a central repository, adjacent to high performance computers. Rapid, automated analysis of retinal thickness was performed using custom image segmentation software developed by the Topcon Advanced Biomedical Imaging Laboratory (TABIL). This software employs dual-scale gradient information to allow for automated segmentation of nine intraretinal boundaries in a rapid fashion. 67,321 participants (134,642 eyes) in UK Biobank underwent OCT imaging of both eyes as part of the ocular module. 134,611 images were successfully processed with 31 images failing segmentation analysis due to corrupted OCT files or withdrawal of subject consent for UKBB study participation. Average time taken to call up an image from the database and complete segmentation analysis was approximately 120 seconds per data set per login, and analysis of the entire dataset was completed in approximately 28 days. We report an approach to the rapid, automated measurement of retinal thickness from nearly 140,000 OCT image sets from the UK Biobank. In the near future, these measurements will be publicly available for utilization by researchers around the world, and thus for correlation with the wealth of other data collected in UK Biobank. The automated analysis approaches we describe may be of utility for future large population-based epidemiological studies, clinical trials, and screening programs that employ OCT imaging.

  7. Optical Coherence Tomography in the UK Biobank Study – Rapid Automated Analysis of Retinal Thickness for Large Population-Based Studies

    PubMed Central

    Grossi, Carlota M.; Foster, Paul J.; Yang, Qi; Reisman, Charles A.; Chan, Kinpui; Peto, Tunde; Thomas, Dhanes; Patel, Praveen J.

    2016-01-01

    Purpose To describe an approach to the use of optical coherence tomography (OCT) imaging in large, population-based studies, including methods for OCT image acquisition, storage, and the remote, rapid, automated analysis of retinal thickness. Methods In UK Biobank, OCT images were acquired between 2009 and 2010 using a commercially available “spectral domain” OCT device (3D OCT-1000, Topcon). Images were obtained using a raster scan protocol, 6 mm x 6 mm in area, and consisting of 128 B-scans. OCT image sets were stored on UK Biobank servers in a central repository, adjacent to high performance computers. Rapid, automated analysis of retinal thickness was performed using custom image segmentation software developed by the Topcon Advanced Biomedical Imaging Laboratory (TABIL). This software employs dual-scale gradient information to allow for automated segmentation of nine intraretinal boundaries in a rapid fashion. Results 67,321 participants (134,642 eyes) in UK Biobank underwent OCT imaging of both eyes as part of the ocular module. 134,611 images were successfully processed with 31 images failing segmentation analysis due to corrupted OCT files or withdrawal of subject consent for UKBB study participation. Average time taken to call up an image from the database and complete segmentation analysis was approximately 120 seconds per data set per login, and analysis of the entire dataset was completed in approximately 28 days. Conclusions We report an approach to the rapid, automated measurement of retinal thickness from nearly 140,000 OCT image sets from the UK Biobank. In the near future, these measurements will be publicly available for utilization by researchers around the world, and thus for correlation with the wealth of other data collected in UK Biobank. The automated analysis approaches we describe may be of utility for future large population-based epidemiological studies, clinical trials, and screening programs that employ OCT imaging. PMID:27716837

  8. Tabular data and graphical images in support of the U.S. Geological Survey National Oil and Gas Assessment--San Juan Basin Province (5022): Chapter 7 in Total petroleum systems and geologic assessment of undiscovered oil and gas resources in the San Juan Basin Province, exclusive of Paleozoic rocks, New Mexico and Colorado

    USGS Publications Warehouse

    Klett, T.R.; Le, P.A.

    2013-01-01

    This chapter describes data used in support of the process being applied by the U.S. Geological Survey (USGS) National Oil and Gas Assessment (NOGA) project. Digital tabular data used in this report, and archival data that permit the user to perform further analyses, are available elsewhere on this CD-ROM. The data can be imported by computers and software without the reader having to transcribe them from the Portable Document Format files (.pdf files) of the text. Because of the number and variety of platforms and software available, graphical images are provided as .pdf files and tabular data are provided in a raw form as tab-delimited text files (.tab files).

  9. Informatics in Radiology (infoRAD): personal computer security: part 2. Software Configuration and file protection.

    PubMed

    Caruso, Ronald D

    2004-01-01

    Proper configuration of software security settings and proper file management are necessary and important elements of safe computer use. Unfortunately, the configuration of software security options is often not user friendly. Safe file management requires the use of several utilities, most of which are already installed on the computer or available as freeware. Among these file operations are setting passwords, defragmentation, deletion, wiping, removal of personal information, and encryption. For example, Digital Imaging and Communications in Medicine medical images need to be anonymized, or "scrubbed," to remove patient identifying information in the header section prior to their use in a public educational or research environment. The choices made with respect to computer security may affect the convenience of the computing process. Ultimately, the degree of inconvenience accepted will depend on the sensitivity of the files and communications to be protected and the tolerance of the user. Copyright RSNA, 2004
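
    The header "scrubbing" step mentioned above can be illustrated with a short Python sketch using the pydicom library (an illustration only; the tag list is deliberately minimal, a real de-identification workflow should follow a vetted profile, and the file names are placeholders):

        # Minimal sketch: blank common patient-identifying header fields in a DICOM
        # file before using it in a public educational or research setting.
        import pydicom

        IDENTIFYING_TAGS = ["PatientName", "PatientID", "PatientBirthDate",
                            "PatientAddress", "ReferringPhysicianName"]

        ds = pydicom.dcmread("input.dcm")
        for tag in IDENTIFYING_TAGS:
            if tag in ds:
                ds.data_element(tag).value = ""
        ds.remove_private_tags()          # drop vendor-specific private elements
        ds.save_as("scrubbed.dcm")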

  10. Free software helps map and display data

    NASA Astrophysics Data System (ADS)

    Wessel, Paul; Smith, Walter H. F.

    When creating camera-ready figures, most scientists are familiar with the sequence of raw data → processing → final illustration and with the spending of large sums of money to finalize papers for submission to scientific journals, prepare proposals, and create overheads and slides for various presentations. This process can be tedious and is often done manually, since available commercial or in-house software usually can do only part of the job.To expedite this process, we introduce the Generic Mapping Tools (GMT), which is a free, public domain software package that can be used to manipulate columns of tabular data, time series, and gridded data sets and to display these data in a variety of forms ranging from simple x-y plots to maps and color, perspective, and shaded-relief illustrations. GMT uses the PostScript page description language, which can create arbitrarily complex images in gray tones or 24-bit true color by superimposing multiple plot files. Line drawings, bitmapped images, and text can be easily combined in one illustration. PostScript plot files are device-independent, meaning the same file can be printed at 300 dots per inch (dpi) on an ordinary laserwriter or at 2470 dpi on a phototypesetter when ultimate quality is needed. GMT software is written as a set of UNIX tools and is totally self contained and fully documented. The system is offered free of charge to federal agencies and nonprofit educational organizations worldwide and is distributed over the computer network Internet.

  11. VirGO: A Visual Browser for the ESO Science Archive Facility

    NASA Astrophysics Data System (ADS)

    Hatziminaoglou, Evanthia; Chéreau, Fabien

    2009-03-01

    VirGO is the next generation Visual Browser for the ESO Science Archive Facility (SAF) developed in the Virtual Observatory Project Office. VirGO enables astronomers to discover and select data easily from millions of observations in a visual and intuitive way. It allows real-time access and the graphical display of a large number of observations by showing instrumental footprints and image previews, as well as their selection and filtering for subsequent download from the ESO SAF web interface. It also permits the loading of external FITS files or VOTables, as well as the superposition of Digitized Sky Survey images to be used as background. All data interfaces are based on Virtual Observatory (VO) standards that allow access to images and spectra from external data centres, and interaction with the ESO SAF web interface or any other VO applications.

  12. VizieR Online Data Catalog: GLASS. VI. MCS J0416.1-2403 HFF imaging & spectra (Hoag+, 2016)

    NASA Astrophysics Data System (ADS)

    Hoag, A.; Huang, K.-H.; Treu, T.; Bradac, M.; Schmidt, K. B.; Wang, X.; Brammer, G. B.; Broussard, A.; Amorin, R.; Castellano, M.; Fontana, A.; Merlin, E.; Schrabback, T.; Trenti, M.; Vulcani, B.

    2017-02-01

    The Grism Lens-Amplified Survey from Space (GLASS; GO-13459, PI: Treu; Schmidt+ 2014ApJ...782L..36S; Treu+ 2015, J/ApJ/812/114) observed 10 massive galaxy clusters with the HST WFC3-IR G102 and G141 grism between 2013 December and 2015 January. Each of the clusters targeted by GLASS has deep multi-band HST imaging from the Hubble Frontier Fields (HFF) in 2014 September and/or from CLASH (ESO VIMOS large program CLASH-VLT; 186.A-0798; PI: P. Rosati; Balestra+, 2016, J/ApJS/224/33). We also use mid-IR imaging data acquired with the IRAC on board the Spitzer Space Telescope obtained by the DDT program #90258 (PI: Soifer; P. Capak+ 2016, in prep.) and #80168 (PI: Bouwens). (2 data files).

  13. Launch Control System Software Development System Automation Testing

    NASA Technical Reports Server (NTRS)

    Hwang, Andrew

    2017-01-01

    The Spaceport Command and Control System (SCCS) is the National Aeronautics and Space Administration's (NASA) launch control system for the Orion capsule and Space Launch System, the next generation manned rocket currently in development. This system requires high quality testing that will measure and test the capabilities of the system. For the past two years, the Exploration and Operations Division at Kennedy Space Center (KSC) has assigned a group including interns and full-time engineers to develop automated tests to save the project time and money. The team worked on automating the testing process for the SCCS GUI, which uses simulated data streamed from the testing servers to produce data, plots, statuses, etc. in the GUI. The software used to develop automated tests included an automated testing framework and an automation library. The automated testing framework has a tabular-style syntax, which means each line of code must have the appropriate number of tabs for the line to function as intended. The header section contains either paths to custom resources or the names of libraries being used. The automation library contains functionality to automate anything that appears on a desired screen with the use of image recognition software to detect and control GUI components. The data section contains any data values strictly created for the current testing file. The body section holds the tests that are being run. The function section can include any number of functions that may be used by the current testing file or any other file that resources it. The resources and body sections are required for all test files; the data and function sections can be left empty if the data values and functions being used are from a resourced library or another file. To help equip the automation team with better tools, the Project Lead of the Automated Testing Team, Jason Kapusta, assigned the task of installing and training an optical character recognition (OCR) tool to Brandon Echols, a fellow intern, and me. The purpose of the OCR tool is to analyze an image and find the coordinates of any group of text. Some issues that arose while installing the OCR tool included the absence of certain libraries needed to train the tool and an outdated software version. We eventually resolved the issues and successfully installed the OCR tool. Training the tool required many images and different fonts and sizes, but in the end the tool learned to accurately decipher the text in the images and their coordinates. The OCR tool produced a file that contained significant metadata for each section of text, but only the text and its coordinates were required for our purpose. The team wrote a script to parse the information we wanted from the OCR file into a different file that could be used by automation functions within the automated framework. Since a majority of development and testing for the automated test cases for the GUI in question has been done using live simulated data on the workstations at the Launch Control Center (LCC), a large amount of progress has been made. As of this writing, about 60% of all automated testing has been implemented. Additionally, the OCR tool will help make our automated tests more robust because its text recognition scales well to different text fonts and sizes. Soon we will have the whole test system automated, allowing more full-time engineers to work on development projects.

  14. SU-E-T-473: A Patient-Specific QC Paradigm Based On Trajectory Log Files and DICOM Plan Files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeMarco, J; McCloskey, S; Low, D

    Purpose: To evaluate a remote QC tool for monitoring treatment machine parameters and treatment workflow. Methods: The Varian TrueBeamTM linear accelerator is a digital machine that records machine axis parameters and MLC leaf positions as a function of delivered monitor unit or control point. This information is saved to a binary trajectory log file for every treatment or imaging field in the patient treatment session. A MATLAB analysis routine was developed to parse the trajectory log files for a given patient, compare the expected versus actual machine and MLC positions as well as perform a cross-comparison with the DICOM-RT plan file exported from the treatment planning system. The parsing routine sorts the trajectory log files based on the time and date stamp and generates a sequential report file listing treatment parameters and provides a match relative to the DICOM-RT plan file. Results: The trajectory log parsing-routine was compared against a standard record and verify listing for patients undergoing initial IMRT dosimetry verification and weekly and final chart QC. The complete treatment course was independently verified for 10 patients of varying treatment site and a total of 1267 treatment fields were evaluated including pre-treatment imaging fields where applicable. In the context of IMRT plan verification, eight prostate SBRT plans with 4-arcs per plan were evaluated based on expected versus actual machine axis parameters. The average value for the maximum RMS MLC error was 0.067±0.001mm and 0.066±0.002mm for leaf bank A and B respectively. Conclusion: A real-time QC analysis program was tested using trajectory log files and DICOM-RT plan files. The parsing routine is efficient and able to evaluate all relevant machine axis parameters during a patient treatment course including MLC leaf positions and table positions at time of image acquisition and during treatment.
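
    The expected-versus-actual comparison reported above reduces to simple array arithmetic once the log is parsed; a minimal Python/numpy sketch (the arrays are hypothetical stand-ins for values parsed from a trajectory log and a DICOM-RT plan):

        # Minimal sketch: RMS and maximum leaf-position error from parsed log data.
        # "expected" and "actual" stand in for arrays of shape (control_points, leaves), in mm.
        import numpy as np

        def mlc_error_stats(expected, actual):
            err = actual - expected
            rms_per_leaf = np.sqrt(np.mean(err ** 2, axis=0))   # RMS error of each leaf
            return {
                "max_rms_mm": float(rms_per_leaf.max()),
                "max_abs_mm": float(np.abs(err).max()),
            }

        # Example with synthetic data for two control points and three leaves:
        expected = np.array([[10.0, 12.0, 14.0], [11.0, 13.0, 15.0]])
        actual = expected + np.array([[0.05, -0.02, 0.00], [0.03, 0.01, -0.04]])
        print(mlc_error_stats(expected, actual))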

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillion, D.

    This code enables one to display, take line-outs on, and perform various transformations on an image created by an array of integer*2 data. Uncompressed eight-bit TIFF files created on either the Macintosh or the IBM PC may also be read in and converted to a 16 bit signed integer image. This code is designed to handle all the formats used for PDS (photo-densitometer) files at the Lawrence Livermore National Laboratory. These formats are all explained by the application code. The image may be zoomed infinitely and the gray scale mapping can be easily changed. Line-outs may be horizontal or vertical with arbitrary width, angled with arbitrary end points, or taken along any path. This code is usually used to examine spectrograph data. Spectral lines may be identified and a polynomial fit from position to wavelength may be found. The image array can be remapped so that the pixels all have the same change of lambda width. It is not necessary to do this, however. Lineouts may be printed, saved as Cricket tab-delimited files, or saved as PICT2 files. The plots may be linear, semilog, or logarithmic with nice values and proper scientific notation. Typically, spectral lines are curved.
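
    The position-to-wavelength fit described here is a standard polynomial dispersion calibration; a minimal Python/numpy sketch (the line positions, wavelengths, and polynomial order are placeholder values):

        # Minimal sketch: fit a polynomial mapping pixel position to wavelength from
        # identified spectral lines, then evaluate it for every pixel of a line-out.
        import numpy as np

        pixel_pos = np.array([132.4, 410.7, 689.9, 951.3])    # identified line centres
        wavelength = np.array([486.1, 546.1, 589.3, 656.3])   # known wavelengths (nm)

        coeffs = np.polyfit(pixel_pos, wavelength, deg=2)      # low-order dispersion fit
        dispersion = np.poly1d(coeffs)

        pixels = np.arange(1024)
        lambda_axis = dispersion(pixels)    # wavelength for each pixel of the line-out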

  16. Some utilities to help produce Rich Text Files from Stata

    PubMed Central

    Gillman, Matthew S.

    2018-01-01

    Producing RTF files from Stata can be difficult and somewhat cryptic. Utilities are introduced to simplify this process; one builds up a table row-by-row, another inserts a PNG image file into an RTF document, and the others start and finish the RTF document. PMID:29731697

  17. Dependency Tree Annotation Software

    DTIC Science & Technology

    2015-11-01

    formats, and it provides numerous options for customizing how dependency trees are displayed. Built entirely in Java, it can run on a wide range of...tree can be saved as an image, .mxe (a mxGraph editing file), a .conll file, and several other file formats. DTE uses the open source Java version

  18. Image compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.
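
    The decomposition described can be illustrated with a toy Python sketch (an illustration only: it uses a plain Jacobi relaxation in place of the patented multi-grid Laplace solver, takes a supplied edge mask in place of the edge detector, and omits the compression of the two parts):

        # Toy sketch: keep edge-pixel values, fill the remaining pixels by relaxing
        # Laplace's equation, and form a difference array to be compressed separately.
        import numpy as np

        def decompose(image, edge_mask, iterations=500):
            """image: 2D float array; edge_mask: boolean array, True at edge pixels."""
            filled = np.zeros_like(image)
            filled[edge_mask] = image[edge_mask]
            for _ in range(iterations):
                smooth = 0.25 * (np.roll(filled, 1, 0) + np.roll(filled, -1, 0) +
                                 np.roll(filled, 1, 1) + np.roll(filled, -1, 1))
                filled = np.where(edge_mask, image, smooth)   # edge values stay fixed
            difference = image - filled                       # compressed separately
            return filled, difference

        def reconstruct(filled, difference):
            return filled + difference                        # exact recovery of the image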

  19. Image compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-03-25

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace`s equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.

  20. Optimal Compression Methods for Floating-point Format Images

    NASA Technical Reports Server (NTRS)

    Pence, W. D.; White, R. L.; Seaman, R.

    2009-01-01

    We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2 - 1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers which are compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced, however all the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel-values can significantly improve the photometric and astrometric precision in the stellar images in the compressed file without adding additional noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.
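
    The quantization-with-dithering idea can be sketched in a few lines of Python/numpy (a simplified illustration, not the FITS tiled-image implementation; the scale factor and seeding scheme are assumptions):

        # Minimal sketch: scale float pixels to integers, adding a reproducible random
        # dither before rounding (subtractive dithering) so quantisation stays unbiased.
        import numpy as np

        def quantize(pixels, scale, seed=0):
            rng = np.random.default_rng(seed)   # seed kept so the dither is reproducible
            dither = rng.random(pixels.shape)
            return np.round(pixels / scale + dither - 0.5).astype(np.int32)

        def dequantize(q, scale, seed=0):
            rng = np.random.default_rng(seed)
            dither = rng.random(q.shape)
            return (q - dither + 0.5) * scale

        # The integer array would then be compressed with the Rice algorithm and stored
        # using the FITS tiled-image compression convention.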

  1. Computed Tomographic Evaluation of K3 Rotary and Stainless Steel K File Instrumentation in Primary Teeth

    PubMed Central

    Kavitha, Swaminathan; Thomas, Eapen; Anadhan, Vasanthakumari; Vijayakumar, Rajendran

    2016-01-01

    Introduction The intention of root canal preparation is to reduce infected content and create a root canal shape that allows a well condensed root filling. It is therefore not necessary to remove excessive dentine for successful root canal preparation, and care must be taken not to over-instrument, as perforations can occur in the thin dentinal walls of primary molars. Aim This study was done to evaluate the preparation time, the risk of lateral perforation, and the amount of dentine removed with stainless steel K file and K3 rotary instrumentation in primary teeth. Materials and Methods Seventy-five primary molars were selected and divided into three groups. Using spiral computed tomography, the teeth were scanned before instrumentation. In the manual technique group, teeth were prepared using stainless steel K files, with all canals prepared up to a size 35 file. With K3 rotary files (.02 taper), instrumentation was done up to a size 35 file. With K3 rotary files (.04 taper), instrumentation was done up to a size 25 file, and the instrumentation time was recorded simultaneously. The instrumented teeth were scanned once again and the images were compared with the images of the uninstrumented canals. Statistical Analysis Data were statistically analysed using Kruskal Wallis One-way ANOVA, Mann-Whitney U-Test and Pearson’s Chi-square Test. Results K3 rotary files (.02 taper) removed significantly less dentine and required less instrumentation time than the stainless steel K file. Conclusion K3 files (.02 taper) generated less dentine removal than the stainless steel K file and K3 files (.04 taper). K3 rotary files (.02 taper) were more effective for root canal instrumentation in primary teeth. PMID:26894166

  2. Evaluation of a new filing system's ability to maintain canal morphology.

    PubMed

    Thompson, Matthew; Sidow, Stephanie J; Lindsey, Kimberly; Chuang, Augustine; McPherson, James C

    2014-06-01

    The manufacturer of the Hyflex CM endodontic files claims the files remain centered within the canal, and if unwound during treatment, they will regain their original shape after sterilization. The purpose of this study was to evaluate and compare the canal centering ability of the Hyflex CM and the ProFile ISO filing systems after repeated uses in simulated canals, followed by autoclaving. Sixty acrylic blocks with a canal curvature of 45° were stained with methylene blue, photographed, and divided into 2 groups, H (Hyflex CM) and P (ProFile ISO). The groups were further subdivided into 3 subgroups: H1, H2, H3; P1, P2, P3 (n = 10). Groups H1 and P1 were instrumented to 40 (.04) with the respective file system. Used files were autoclaved for 26 minutes at 126°C. After sterilization, the files were used to instrument groups H2 and P2. The same sterilization and instrumentation procedure was repeated for groups H3 and P3. Post-instrumentation digital images were taken and superimposed over the pre-instrumentation images. Changes in the location of the center of the canal at predetermined reference points were recorded and compared within subgroups and between filing systems. Statistical differences in intergroup and intragroup transportation measures were analyzed by using the Kruskal-Wallis analysis of variance of ranks with the Bonferroni post hoc test. There was a difference between Hyflex CM and ProFile ISO groups, although it was not statistically significant. Intragroup differences for both Hyflex CM and ProFile ISO groups were not significant (P < .05). The Hyflex CM and ProFile ISO files equally maintained the original canal's morphology after 2 sterilization cycles. Published by Elsevier Inc.

  3. Theoretical and Empirical Comparison of Big Data Image Processing with Apache Hadoop and Sun Grid Engine.

    PubMed

    Bao, Shunxing; Weitendorf, Frederick D; Plassard, Andrew J; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A

    2017-02-11

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., "short" processing times and/or "large" datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply "large scale" processing transitions into "big data" and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and non-relevant for medical imaging.

  4. Theoretical and empirical comparison of big data image processing with Apache Hadoop and Sun Grid Engine

    NASA Astrophysics Data System (ADS)

    Bao, Shunxing; Weitendorf, Frederick D.; Plassard, Andrew J.; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A.

    2017-03-01

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., "short" processing times and/or "large" datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply "large scale" processing transitions into "big data" and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and nonrelevant for medical imaging.

  5. Iris Recognition: The Consequences of Image Compression

    NASA Astrophysics Data System (ADS)

    Ives, Robert W.; Bishop, Daniel A.; Du, Yingzi; Belcher, Craig

    2010-12-01

    Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.
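
    As a rough illustration of driving an iris image down to a fixed byte budget with JPEG 2000, the sketch below uses Pillow's OpenJPEG-backed encoder; the file path, the 4 kB limit applied here, and the ladder of compression ratios are assumptions, and the study itself used the ICE 2005 database with a commercial matcher rather than this code.

      from io import BytesIO
      from PIL import Image

      def compress_to_budget(path, max_bytes=4096):
          """Raise the JPEG 2000 compression ratio until the encoded iris
          image fits the (assumed) smart-card byte budget."""
          img = Image.open(path).convert('L')          # iris images are greyscale
          for ratio in (20, 40, 60, 80, 100, 150, 200):
              buf = BytesIO()
              img.save(buf, format='JPEG2000', irreversible=True,
                       quality_mode='rates', quality_layers=[ratio])
              if buf.tell() <= max_bytes:
                  return buf.getvalue(), ratio
          return buf.getvalue(), ratio                 # best effort if budget not met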

  6. Efficient Retrieval of Massive Ocean Remote Sensing Images via a Cloud-Based Mean-Shift Algorithm.

    PubMed

    Yang, Mengzhao; Song, Wei; Mei, Haibin

    2017-07-23

    The rapid development of remote sensing (RS) technology has resulted in the proliferation of high-resolution images. There are challenges involved not only in storing large volumes of RS images but also in rapidly retrieving the images for ocean disaster analysis such as for storm surges and typhoon warnings. In this paper, we present an efficient retrieval of massive ocean RS images via a Cloud-based mean-shift algorithm. A distributed construction method based on the pyramid model and the maximum hierarchical layer algorithm is proposed and used to realize an efficient storage structure for RS images on the Cloud platform. We achieve high-performance processing of massive RS images in the Hadoop system. Based on the pyramid Hadoop distributed file system (HDFS) storage method, an improved mean-shift algorithm for RS image retrieval is presented by fusion with the canopy algorithm via Hadoop MapReduce programming. The results show that the new method can achieve better performance for data storage than HDFS alone and WebGIS-based HDFS. Speedup and scaleup are very close to linear changes with an increase of RS images, which proves that image retrieval using our method is efficient.
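
    The paper's mean-shift stage runs at scale under MapReduce; as a small, single-machine stand-in, scikit-learn's MeanShift can cluster image descriptors so that retrieval reduces to returning images in the query's cluster. The random feature vectors below are placeholders for whatever descriptors the system actually extracts.

      import numpy as np
      from sklearn.cluster import MeanShift, estimate_bandwidth

      # Hypothetical colour/texture descriptors for 500 image tiles.
      features = np.random.rand(500, 8)

      bandwidth = estimate_bandwidth(features, quantile=0.2)
      labels = MeanShift(bandwidth=bandwidth).fit_predict(features)

      # Retrieval sketch: return the tiles whose descriptors share the
      # query tile's cluster label.
      query_label = labels[0]
      matches = np.where(labels == query_label)[0]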

  7. Efficient Retrieval of Massive Ocean Remote Sensing Images via a Cloud-Based Mean-Shift Algorithm

    PubMed Central

    Song, Wei; Mei, Haibin

    2017-01-01

    The rapid development of remote sensing (RS) technology has resulted in the proliferation of high-resolution images. There are challenges involved not only in storing large volumes of RS images but also in rapidly retrieving the images for ocean disaster analysis such as for storm surges and typhoon warnings. In this paper, we present an efficient retrieval of massive ocean RS images via a Cloud-based mean-shift algorithm. A distributed construction method based on the pyramid model and the maximum hierarchical layer algorithm is proposed and used to realize an efficient storage structure for RS images on the Cloud platform. We achieve high-performance processing of massive RS images in the Hadoop system. Based on the pyramid Hadoop distributed file system (HDFS) storage method, an improved mean-shift algorithm for RS image retrieval is presented by fusion with the canopy algorithm via Hadoop MapReduce programming. The results show that the new method can achieve better performance for data storage than HDFS alone and WebGIS-based HDFS. Speedup and scaleup are very close to linear changes with an increase of RS images, which proves that image retrieval using our method is efficient. PMID:28737699

  8. Interactive Radiology teaching file system: the development of a MIRC-compliant and user-centered e-learning resource.

    PubMed

    dos-Santos, M; Fujino, A

    2012-01-01

    Radiology teaching usually employs a systematic and comprehensive set of medical images and related information. Databases with representative radiological images and documents are highly desirable and widely used in Radiology teaching programs. Currently, computer-based teaching file systems are widely used in Medicine and Radiology teaching as an educational resource. This work addresses a user-centered radiology electronic teaching file system as an instance of a MIRC-compliant medical image database. As in a digital library, the clinical cases can be accessed using a web browser. The system has offered Radiology residents great opportunities to interact with experts. This has been done by applying user-centered techniques and creating usage-context-based tools in order to make the system interactive.

  9. Support the Design of Improved IUE NEWSIPS High Dispersion Extraction Algorithms: Improved IUE High Dispersion Extraction Algorithms

    NASA Technical Reports Server (NTRS)

    Lawton, Pat

    2004-01-01

    The objective of this work was to support the design of improved IUE NEWSIPS high dispersion extraction algorithms. The purpose of this work was to evaluate use of the Linearized Image (LIHI) file versus the Re-Sampled Image (SIHI) file, evaluate various extraction methods, and design algorithms for the evaluation of IUE High Dispersion spectra. It was concluded that the use of the Re-Sampled Image (SIHI) file was acceptable. Since the Gaussian profile worked well for the core and the Lorentzian profile worked well for the wings, the Voigt profile was chosen for use in the extraction algorithm. It was found that the gamma and sigma parameters varied significantly across the detector, so gamma and sigma masks for the SWP detector were developed. Extraction code was written.
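
    For reference, SciPy exposes the Voigt profile (a Gaussian core convolved with a Lorentzian) directly; the sketch below builds an illustrative cross-dispersion extraction profile. The sigma and gamma values are arbitrary examples, whereas the actual work drew them from per-detector masks.

      import numpy as np
      from scipy.special import voigt_profile

      # Cross-dispersion pixel offsets and illustrative profile parameters.
      x = np.linspace(-5.0, 5.0, 101)
      sigma, gamma = 1.2, 0.8

      profile = voigt_profile(x, sigma, gamma)   # Gaussian core, Lorentzian wings
      weights = profile / profile.sum()          # normalized extraction weights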

  10. VizieR Online Data Catalog: MIPS 24um nebulae (Gvaramadze+, 2010)

    NASA Astrophysics Data System (ADS)

    Gvaramadze, V. V.; Kniazev, A. Y.; Fabrika, S.

    2011-03-01

    Massive evolved stars lose a large fraction of their mass via copious stellar wind or instant outbursts. During certain evolutionary phases, they can be identified by the presence of their circumstellar nebulae. In this paper, we present the results of a search for compact nebulae (reminiscent of circumstellar nebulae around evolved massive stars) using archival 24um data obtained with the Multiband Imaging Photometer for Spitzer. We have discovered 115 nebulae, most of which bear a striking resemblance to the circumstellar nebulae associated with luminous blue variables (LBVs) and late WN-type (WNL) Wolf-Rayet (WR) stars in the Milky Way and the Large Magellanic Cloud (LMC). (1 data file).

  11. Proposed patient motion monitoring system using feature point tracking with a web camera.

    PubMed

    Miura, Hideharu; Ozawa, Shuichi; Matsuura, Takaaki; Yamada, Kiyoshi; Nagata, Yasushi

    2017-12-01

    Patient motion monitoring systems play an important role in providing accurate treatment dose delivery. We propose a system that utilizes a web camera (frame rate up to 30 fps, maximum resolution of 640 × 480 pixels) and in-house image processing software (developed using Microsoft Visual C++ and OpenCV). This system is simple to use and convenient to set up. The pyramidal Lucas-Kanade method was applied to calculate the motion of each feature point by analysing two consecutive frames. The image processing software employs a color scheme in which the defined feature points are blue under stable (no movement) conditions and turn red, along with a warning message and an audio signal (beeping alarm), for large patient movements. The initial position of each marker was used by the program to determine the marker positions in all the frames. The software generates a text file that contains the calculated motion for each frame and saves the acquired frames as a compressed audio video interleave (AVI) file. We proposed a patient motion monitoring system using a web camera, which is simple and convenient to set up, to increase the safety of treatment delivery.
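
    A hedged sketch of the same idea using OpenCV's pyramidal Lucas-Kanade tracker: points are tracked frame to frame and drawn red once their displacement from the initial positions exceeds a tolerance. The camera index, auto-detected feature points, and 5-pixel threshold are assumptions; the clinical system uses manually defined points and its own alarm logic.

      import cv2
      import numpy as np

      cap = cv2.VideoCapture(0)                          # web camera (index assumed)
      ok, prev = cap.read()
      prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

      # Auto-detected feature points stand in for manually defined markers.
      pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=20,
                                    qualityLevel=0.3, minDistance=10)
      init = pts.copy()
      THRESHOLD_PX = 5.0                                 # assumed motion tolerance

      while True:
          ok, frame = cap.read()
          if not ok:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
          motion = np.linalg.norm(new_pts - init, axis=2).max()
          color = (0, 0, 255) if motion > THRESHOLD_PX else (255, 0, 0)
          for p in new_pts.reshape(-1, 2):
              cv2.circle(frame, (int(p[0]), int(p[1])), 4, color, -1)
          cv2.imshow('monitor', frame)
          if cv2.waitKey(1) == 27:                       # Esc to quit
              break
          prev_gray, pts = gray, new_pts

      cap.release()
      cv2.destroyAllWindows()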

  12. Supporting geoscience with graphical-user-interface Internet tools for the Macintosh

    NASA Astrophysics Data System (ADS)

    Robin, Bernard

    1995-07-01

    This paper describes a suite of Macintosh graphical-user-interface (GUI) software programs that can be used in conjunction with the Internet to support geoscience education. These software programs allow science educators to access and retrieve a large body of resources from an increasing number of network sites, taking advantage of the intuitive, simple-to-use Macintosh operating system. With these tools, educators easily can locate, download, and exchange not only text files but also sound resources, video movie clips, and software application files from their desktop computers. Another major advantage of these software tools is that they are available at no cost and may be distributed freely. The following GUI software tools are described including examples of how they can be used in an educational setting: ∗ Eudora—an e-mail program ∗ NewsWatcher—a newsreader ∗ TurboGopher—a Gopher program ∗ Fetch—a software application for easy File Transfer Protocol (FTP) ∗ NCSA Mosaic—a worldwide hypertext browsing program. An explosive growth of online archives currently is underway as new electronic sites are being added continuously to the Internet. Many of these resources may be of interest to science educators who learn they can share not only ASCII text files, but also graphic image files, sound resources, QuickTime movie clips, and hypermedia projects with colleagues from locations around the world. These powerful, yet simple to learn GUI software tools are providing a revolution in how knowledge can be accessed, retrieved, and shared.

  13. TOASTing Your Images With Montage

    NASA Astrophysics Data System (ADS)

    Berriman, G. Bruce; Good, John

    2017-01-01

    The Montage image mosaic engine is a scalable toolkit for creating science-grade mosaics of FITS files, according to the user's specifications of coordinates, projection, sampling, and image rotation. It is written in ANSI-C and runs on all common *nix-based platforms. The code is freely available and is released with a BSD 3-clause license. Version 5 is a major upgrade to Montage, and provides support for creating images that can be consumed by the World Wide Telescope (WWT). Montage treats the TOAST sky tessellation scheme, used by the WWT, as a spherical projection like those in the WCStools library. Thus images in any projection can be converted to the TOAST projection by Montage's reprojection services. These reprojections can be performed at scale on high-performance platforms and on desktops. WWT consumes PNG or JPEG files, organized according to WWT's tiling and naming scheme. Montage therefore provides a set of dedicated modules to create the required files from FITS images that contain the TOAST projection. There are two other major features of Version 5. It supports processing of HEALPix files to any projection in the WCStools library. And it can be built as a library that can be called from other languages, primarily Python. Website: http://montage.ipac.caltech.edu. GitHub download page: https://github.com/Caltech-IPAC/Montage. ASCL record: ascl:1010.036. DOI: dx.doi.org/10.5281/zenodo.49418. Montage is funded by the National Science Foundation under Grant Number ACI-1440620.

  14. A clinically observed discrepancy between image-based and log-based MLC positions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neal, Brian, E-mail: bpn2p@virginia.edu; Ahmed, Mahmoud; Kathuria, Kunal

    2016-06-15

    Purpose: To present a clinical case in which real-time intratreatment imaging identified a multileaf collimator (MLC) leaf to be consistently deviating from its programmed and logged position by >1 mm. Methods: An EPID-based exit-fluence dosimetry system designed to prevent gross delivery errors was used to capture cine images during treatment. The author serendipitously visually identified a suspected MLC leaf displacement that was not otherwise detected. The leaf position as recorded on the EPID images was measured and log-files were analyzed for the treatment in question, the prior day’s treatment, and for daily MLC test patterns acquired on those treatment days. Additional standard test patterns were used to quantify the leaf position. Results: Whereas the log-file reported no difference between planned and recorded positions, image-based measurements showed the leaf to be 1.3 ± 0.1 mm medial from the planned position. This offset was confirmed with the test pattern irradiations. Conclusions: It has been clinically observed that log-file derived leaf positions can differ from their actual position by >1 mm, and therefore cannot be considered to be the actual leaf positions. This cautions the use of log-based methods for MLC or patient quality assurance without independent confirmation of log integrity. Frequent verification of MLC positions through independent means is a necessary precondition to trust log-file records. Intratreatment EPID imaging provides a method to capture departures from MLC planned positions.

  15. Baseline coastal oblique aerial photographs collected from Navarre Beach, Florida, to Breton Island, Louisiana, September 18–19, 2015

    USGS Publications Warehouse

    Morgan, Karen L. M.

    2016-08-01

    In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then the thumbnail or the link below the thumbnail. The KML file was created using the photographic navigation files. This KML file can be found in the kml folder.
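
    Since the record above describes viewing geotagged photographs through a KML file, the snippet below sketches how such a file can be generated from a navigation table; the coordinates, file names, and thumbnail markup are hypothetical and not taken from the USGS release.

      # Build a minimal KML file of photo placemarks by hand.
      photos = [
          {"name": "photo_0001", "lon": -86.86, "lat": 30.39, "href": "images/photo_0001.jpg"},
      ]

      placemarks = "\n".join(
          f"""  <Placemark>
          <name>{p['name']}</name>
          <description><![CDATA[<img src="{p['href']}" width="400"/>]]></description>
          <Point><coordinates>{p['lon']},{p['lat']},0</coordinates></Point>
        </Placemark>""" for p in photos)

      kml = f"""<?xml version="1.0" encoding="UTF-8"?>
      <kml xmlns="http://www.opengis.net/kml/2.2">
      <Document>
      {placemarks}
      </Document>
      </kml>"""

      with open("photos.kml", "w") as fh:
          fh.write(kml)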

  16. Portable document format file showing the surface models of cadaver whole body.

    PubMed

    Shin, Dong Sun; Chung, Min Suk; Park, Jin Seo; Park, Hyung Seon; Lee, Sangho; Moon, Young Lae; Jang, Hae Gwon

    2012-08-01

    In the Visible Korean project, 642 three-dimensional (3D) surface models have been built from the sectioned images of a male cadaver. It was recently found that the popular PDF format enables users to access the numerous surface models conveniently in Adobe Reader. The purpose of this study was to present a PDF file containing systematized surface models of the human body as beneficial content. To achieve this, suitable software packages were employed for each of the procedures. Two-dimensional (2D) surface models including the original sectioned images were embedded into the 3D surface models. The surface models were categorized into systems and then groups. The adjusted surface models were inserted into a PDF file, to which relevant multimedia data were added. The finalized PDF file containing comprehensive data of a whole body can be explored in various ways. The PDF file, downloadable freely from the homepage (http://anatomy.co.kr), is expected to be used as a satisfactory self-learning tool for anatomy. Raw data of the surface models can be extracted from the PDF file and employed for various simulations in clinical practice. The technique used to organize the surface models will be applied to the manufacture of other PDF files containing various multimedia contents.

  17. BOREAS RSS-20 POLDER Radiance Images From the NASA C-130

    NASA Technical Reports Server (NTRS)

    Leroy, M.; Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Smith, David E. (Technical Monitor)

    2000-01-01

    These Boreal Ecosystem-Atmosphere Study (BOREAS) Remote Sensing Science (RSS)-20 data are a subset of images collected by the Polarization and Directionality of Earth's Reflectance (POLDER) instrument over tower sites in the BOREAS study areas during the intensive field campaigns (IFCs) in 1994. The POLDER images presented here from the NASA ARC C-130 aircraft are made available for illustration purposes only. The data are stored in binary image-format files. The POLDER radiance images are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). The data files are available on a CD-ROM (see document number 20010000884).

  18. Vector assembly of colloids on monolayer substrates

    NASA Astrophysics Data System (ADS)

    Jiang, Lingxiang; Yang, Shenyu; Tsang, Boyce; Tu, Mei; Granick, Steve

    2017-06-01

    The key to spontaneous and directed assembly is to encode the desired assembly information to building blocks in a programmable and efficient way. In computer graphics, raster graphics encodes images on a single-pixel level, conferring fine details at the expense of large file sizes, whereas vector graphics encrypts shape information into vectors that allow small file sizes and operational transformations. Here, we adapt this raster/vector concept to a 2D colloidal system and realize `vector assembly' by manipulating particles on a colloidal monolayer substrate with optical tweezers. In contrast to raster assembly that assigns optical tweezers to each particle, vector assembly requires a minimal number of optical tweezers that allow operations like chain elongation and shortening. This vector approach enables simple uniform particles to form a vast collection of colloidal arenes and colloidenes, the spontaneous dissociation of which is achieved with precision and stage-by-stage complexity by simply removing the optical tweezers.

  19. The acquisition, storage, and dissemination of LANDSAT and other LACIE support data

    NASA Technical Reports Server (NTRS)

    Abbotts, L. F.; Nelson, R. M. (Principal Investigator)

    1979-01-01

    Activities performed at the LACIE physical data library are described. These include the researching, acquisition, indexing, maintenance, distribution, tracking, and control of LACIE operational data and documents. Much of the data available can be incorporated into an Earth resources data base. Elements of the data collection that can support future remote sensing programs include: (1) the LANDSAT full-frame image files; (2) the microfilm file of aerial and space photographic and multispectral maps and charts that encompasses a large portion of the Earth's surface; (3) the map/chart collection that includes various scale maps and charts for a good portion of the U.S. and the LACIE area in foreign countries; (4) computer-compatible tapes of good quality LANDSAT scenes; (5) basic remote sensing data, project data, reference material, and associated publications; (6) visual aids to support presentation on remote sensing projects; and (7) research acquisition and handling procedures for managing data.

  20. VizieR Online Data Catalog: Pinpointing the SMBH in NGC1052 (Baczko+, 2016)

    NASA Astrophysics Data System (ADS)

    Baczko, A.-K.; Schulz, R.; Kadler, M.; Ros, E.; Perucho, M.; Krichbaum, T. P.; Bock, M.; Bremer, M.; Grossberger, C.; Lindqvist, M.; Lobanov, A. P.; Mannheim, K.; Marti-Vidal, I.; Mueller, C.; Wilms, J.; Zensus, J. A.

    2016-06-01

    The source NGC1052 was observed with the GMVA at 86GHz in Oct. 2004. The data comprise one naturally weighted and one uniformly weighted CLEAN image as FITS files (Figs. 1 and 2), and one tapered map with more weight on short baselines as a FITS file (Fig. 3). (2 data files).

  1. Development of an indexed integrated neuroradiology reports for teaching file creation

    NASA Astrophysics Data System (ADS)

    Tameem, Hussain Z.; Morioka, Craig; Bennett, David; El-Saden, Suzie; Sinha, Usha; Taira, Ricky; Bui, Alex; Kangarloo, Hooshang

    2007-03-01

    The decrease in reimbursement rates for radiology procedures has placed even more pressure on radiology departments to increase their clinical productivity. Clinical faculty have less time for teaching residents, but with the advent and prevalence of an electronic environment that includes PACS, RIS, and HIS, there is an opportunity to create electronic teaching files for fellows, residents, and medical students. These teaching files are created by experienced clinicians, who select the most appropriate radiographic images and the clinical information relevant to each patient. Important cases are selected based on the difficulty of determining the diagnosis or the manifestation of rare diseases. This manual process of teaching file creation is time consuming and may not be practical under the pressure of increased demands on the radiologist. The goal of this research is to automate the process of teaching file creation by manually selecting key images and automatically extracting key sections from clinical reports and laboratory results. The text report is then processed for indexing against two standard nomenclatures, UMLS and RadLex. Interesting teaching files can then be queried based on specific anatomy and findings within the clinical reports.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Temple, Brian Allen; Armstrong, Jerawan Chudoung

    This document is a mid-year report on a deliverable for the PYTHON Radiography Analysis Tool (PyRAT) for project LANL12-RS-107J in FY15. The deliverable is deliverable number 2 in the work package and is titled “Add the ability to read in more types of image file formats in PyRAT”. At present, PyRAT can only read uncompressed TIFF files. It is planned to expand the file formats that PyRAT can read, making it easier to use in more situations. The file formats added include JPEG, PNG, and formatted ASCII files.
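
    A minimal sketch of such multi-format input, assuming Pillow for the raster formats and NumPy for whitespace-delimited ASCII arrays; the extension handling is illustrative and does not reflect PyRAT's actual reader.

      import numpy as np
      from PIL import Image

      def read_image(path):
          """Return an image as a NumPy array: TIFF/JPEG/PNG via Pillow,
          plus whitespace-delimited ASCII arrays via loadtxt."""
          if path.lower().endswith(('.txt', '.asc', '.dat')):
              return np.loadtxt(path)
          return np.asarray(Image.open(path))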

  3. New concepts for building vocabulary for cell image ontologies.

    PubMed

    Plant, Anne L; Elliott, John T; Bhat, Talapady N

    2011-12-21

    There are significant challenges associated with the building of ontologies for cell biology experiments including the large numbers of terms and their synonyms. These challenges make it difficult to simultaneously query data from multiple experiments or ontologies. If vocabulary terms were consistently used and reused across and within ontologies, queries would be possible through shared terms. One approach to achieving this is to strictly control the terms used in ontologies in the form of a pre-defined schema, but this approach limits the individual researcher's ability to create new terms when needed to describe new experiments. Here, we propose the use of a limited number of highly reusable common root terms, and rules for an experimentalist to locally expand terms by adding more specific terms under more general root terms to form specific new vocabulary hierarchies that can be used to build ontologies. We illustrate the application of the method to build vocabularies and a prototype database for cell images that uses a visual data-tree of terms to facilitate sophisticated queries based on experimental parameters. We demonstrate how the terminology might be extended by adding new vocabulary terms into the hierarchy of terms in an evolving process. In this approach, image data and metadata are handled separately, so we also describe a robust file-naming scheme to unambiguously identify image and other files associated with each metadata value. The prototype database http://sbd.nist.gov/ consists of more than 2000 images of cells and benchmark materials, and 163 metadata terms that describe experimental details, including many details about cell culture and handling. Image files of interest can be retrieved, and their data can be compared, by choosing one or more relevant metadata values as search terms. Metadata values for any dataset can be compared with corresponding values of another dataset through logical operations. Organizing metadata for cell imaging experiments under a framework of rules that include highly reused root terms will facilitate the addition of new terms into a vocabulary hierarchy and encourage the reuse of terms. These vocabulary hierarchies can be converted into XML schema or RDF graphs for displaying and querying, but this is not necessary for using them to annotate cell images. Vocabulary data trees from multiple experiments or laboratories can be aligned at the root terms to facilitate query development. This approach of developing vocabularies is compatible with the major advances in database technology and could be used for building the Semantic Web.

  4. New concepts for building vocabulary for cell image ontologies

    PubMed Central

    2011-01-01

    Background There are significant challenges associated with the building of ontologies for cell biology experiments including the large numbers of terms and their synonyms. These challenges make it difficult to simultaneously query data from multiple experiments or ontologies. If vocabulary terms were consistently used and reused across and within ontologies, queries would be possible through shared terms. One approach to achieving this is to strictly control the terms used in ontologies in the form of a pre-defined schema, but this approach limits the individual researcher's ability to create new terms when needed to describe new experiments. Results Here, we propose the use of a limited number of highly reusable common root terms, and rules for an experimentalist to locally expand terms by adding more specific terms under more general root terms to form specific new vocabulary hierarchies that can be used to build ontologies. We illustrate the application of the method to build vocabularies and a prototype database for cell images that uses a visual data-tree of terms to facilitate sophisticated queries based on experimental parameters. We demonstrate how the terminology might be extended by adding new vocabulary terms into the hierarchy of terms in an evolving process. In this approach, image data and metadata are handled separately, so we also describe a robust file-naming scheme to unambiguously identify image and other files associated with each metadata value. The prototype database http://sbd.nist.gov/ consists of more than 2000 images of cells and benchmark materials, and 163 metadata terms that describe experimental details, including many details about cell culture and handling. Image files of interest can be retrieved, and their data can be compared, by choosing one or more relevant metadata values as search terms. Metadata values for any dataset can be compared with corresponding values of another dataset through logical operations. Conclusions Organizing metadata for cell imaging experiments under a framework of rules that include highly reused root terms will facilitate the addition of new terms into a vocabulary hierarchy and encourage the reuse of terms. These vocabulary hierarchies can be converted into XML schema or RDF graphs for displaying and querying, but this is not necessary for using them to annotate cell images. Vocabulary data trees from multiple experiments or laboratories can be aligned at the root terms to facilitate query development. This approach of developing vocabularies is compatible with the major advances in database technology and could be used for building the Semantic Web. PMID:22188658

  5. Preservation of root canal anatomy using self-adjusting file instrumentation with glide path prepared by 20/0.02 hand files versus 20/0.04 rotary files

    PubMed Central

    Jain, Niharika; Pawar, Ajinkya M.; Ukey, Piyush D.; Jain, Prashant K.; Thakur, Bhagyashree; Gupta, Abhishek

    2017-01-01

    Objectives: To compare the relative axis modification and canal concentricity after glide path preparation with a 20/0.02 hand K-file (NITIFLEX®) and a 20/0.04 rotary file (HyFlex™ CM), with subsequent instrumentation with a 1.5 mm self-adjusting file (SAF). Materials and Methods: One hundred and twenty ISO 15, 0.02 taper, Endo Training Blocks (Dentsply Maillefer, Ballaigues, Switzerland) were acquired and randomly divided into the following two groups (n = 60): Group 1, establishing a glide path to a 20/0.02 hand K-file (NITIFLEX®) followed by instrumentation with the 1.5 mm SAF; and Group 2, establishing a glide path to a 20/0.04 rotary file (HyFlex™ CM) followed by instrumentation with the 1.5 mm SAF. Pre- and post-instrumentation digital images were processed with MATLAB R 2013 software to identify the central axis, and then superimposed using digital imaging software (Picasa 3.0 software, Google Inc., California, USA) taking five landmarks as reference points. Student's t-test for pairwise comparisons was applied with the level of significance set at 0.05. Results: Training blocks instrumented with the 20/0.04 rotary file and SAF were associated with less deviation of the canal axis (at all five marked points), representing better canal concentricity compared to those in which the glide path was established with 20/0.02 hand K-files followed by SAF instrumentation. Conclusion: Canal geometry is better maintained after SAF instrumentation when a prior glide path is established with a 20/0.04 rotary file. PMID:28855752

  6. LAS - LAND ANALYSIS SYSTEM, VERSION 5.0

    NASA Technical Reports Server (NTRS)

    Pease, P. B.

    1994-01-01

    The Land Analysis System (LAS) is an image analysis system designed to manipulate and analyze digital data in raster format and provide the user with a wide spectrum of functions and statistical tools for analysis. LAS offers these features under VMS with optional image display capabilities for IVAS and other display devices as well as the X-Windows environment. LAS provides a flexible framework for algorithm development as well as for the processing and analysis of image data. Users may choose between mouse-driven commands or the traditional command line input mode. LAS functions include supervised and unsupervised image classification, film product generation, geometric registration, image repair, radiometric correction and image statistical analysis. Data files accepted by LAS include formats such as Multi-Spectral Scanner (MSS), Thematic Mapper (TM) and Advanced Very High Resolution Radiometer (AVHRR). The enhanced geometric registration package now includes both image to image and map to map transformations. The over 200 LAS functions fall into image processing scenario categories which include: arithmetic and logical functions, data transformations, fourier transforms, geometric registration, hard copy output, image restoration, intensity transformation, multispectral and statistical analysis, file transfer, tape profiling and file management among others. Internal improvements to the LAS code have eliminated the VAX VMS dependencies and improved overall system performance. The maximum LAS image size has been increased to 20,000 lines by 20,000 samples with a maximum of 256 bands per image. The catalog management system used in earlier versions of LAS has been replaced by a more streamlined and maintenance-free method of file management. This system is not dependent on VAX/VMS and relies on file naming conventions alone to allow the use of identical LAS file names on different operating systems. While the LAS code has been improved, the original capabilities of the system have been preserved. These include maintaining associated image history, session logging, and batch, asynchronous and interactive mode of operation. The LAS application programs are integrated under version 4.1 of an interface called the Transportable Applications Executive (TAE). TAE 4.1 has four modes of user interaction: menu, direct command, tutor (or help), and dynamic tutor. In addition TAE 4.1 allows the operation of LAS functions using mouse-driven commands under the TAE-Facelift environment provided with TAE 4.1. These modes of operation allow users, from the beginner to the expert, to exercise specific application options. LAS is written in C-language and FORTRAN 77 for use with DEC VAX computers running VMS with approximately 16Mb of physical memory. This program runs under TAE 4.1. Since TAE 4.1 is not a current version of TAE, TAE 4.1 is included within the LAS distribution. Approximately 130,000 blocks (65Mb) of disk storage space are necessary to store the source code and files generated by the installation procedure for LAS and 44,000 blocks (22Mb) of disk storage space are necessary for TAE 4.1 installation. The only other dependencies for LAS are the subroutine libraries for the specific display device(s) that will be used with LAS/DMS (e.g. X-Windows and/or IVAS). The standard distribution medium for LAS is a set of two 9track 6250 BPI magnetic tapes in DEC VAX BACKUP format. It is also available on a set of two TK50 tape cartridges in DEC VAX BACKUP format. 
This program was developed in 1986 and last updated in 1992.

  7. Experimental Analysis of File Transfer Rates over Wide-Area Dedicated Connections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S.; Liu, Qiang; Sen, Satyabrata

    2016-12-01

    File transfers over dedicated connections, supported by large parallel file systems, have become increasingly important in high-performance computing and big data workflows. It remains a challenge to achieve peak rates for such transfers due to the complexities of file I/O, host, and network transport subsystems, and equally importantly, their interactions. We present extensive measurements of disk-to-disk file transfers using Lustre and XFS file systems mounted on multi-core servers over a suite of 10 Gbps emulated connections with 0-366 ms round trip times. Our results indicate that large buffer sizes and many parallel flows do not always guarantee high transfer rates. Furthermore, large variations in the measured rates necessitate repeated measurements to ensure confidence in inferences based on them. We propose a new method to efficiently identify the optimal joint file I/O and network transport parameters using a small number of measurements. We show that for XFS and Lustre with direct I/O, this method identifies configurations achieving 97% of the peak transfer rate while probing only 12% of the parameter space.

  8. Image tools for UNIX

    NASA Technical Reports Server (NTRS)

    Banks, David C.

    1994-01-01

    This talk features two simple and useful tools for digital image processing in the UNIX environment. They are xv and pbmplus. The xv image viewer which runs under the X window system reads images in a number of different file formats and writes them out in different formats. The view area supports a pop-up control panel. The 'algorithms' menu lets you blur an image. The xv control panel also activates the color editor which displays the image's color map (if one exists). The xv image viewer is available through the internet. The pbmplus package is a set of tools designed to perform image processing from within a UNIX shell. The acronym 'pbm' stands for portable bit map. Like xv, the pbm plus tool can convert images from and to many different file formats. The source code and manual pages for pbmplus are also available through the internet. This software is in the public domain.

  9. Web-based document and content management with off-the-shelf software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schuster, J

    1999-03-18

    This, then, is the current status of the project: Since we made the switch to Intradoc, we are now treating the project as a document and image management system. In reality, it could be considered a document and content management system since we can manage almost any file input to the system such as video or audio. At present, however, we are concentrating on images. As mentioned above, my CRADA funding was only targeted at including thumbnails of images in Intradoc. We still had to modify Intradoc so that it would compress images submitted to the system. All processing of files submitted to Intradoc is handled in what is called the Document Refinery. Even though MrSID created thumbnails in the process of compressing an image, work needed to be done to somehow build this capability into the Document Refinery. Therefore we made the decision to contract the Intradoc Engineering Team to perform this custom development work. To make Intradoc even more capable of handling images, we have also contracted for customization of the Document Refinery to accept Adobe PhotoShop and Illustrator files in their native formats.

  10. Baseline coastal oblique aerial photographs collected from Key Largo, Florida, to the Florida/Georgia border, September 5-6, 2014

    USGS Publications Warehouse

    Morgan, Karen L. M.

    2015-09-14

    In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML files were created using the photographic navigation files. These KML files can be found in the kml folder.

  11. VizieR Online Data Catalog: Deep Objective-Prism Survey for LMC Members (Sanduleak 1970)

    NASA Astrophysics Data System (ADS)

    Sanduleak, N.

    2008-02-01

    The catalog contains 1273 proven or probable Large Magellanic Cloud (LMC) members, identified on plates taken with the Curtis-Schmidt telescope at Cerro Tololo Inter-American Observatory in Chile. The stars are generally brighter than photographic magnitude 14 and are identified on charts published by Hodge and Wright (1967) and reproduced in the source publication (1970CoTol..89....1S). Approximate spectral types were determined by examination of the 580 Angstroms/mm (at H{gamma}) objective-prism spectra; approximate 1975 positions were obtained by measuring relative to the 1975 coordinate grids on the Uppsala-Mount Stromlo Atlas of the LMC (Gascoigne and Westerlund 1961), and approximate photographic magnitudes were determined by averaging image density measures from the plates and image-diameter measures on the "B" charts of Hodge and Wright (1967SAOP.4699....1H). The catalog includes an identification number (Sk), HD(E) number, Cape Photographic Durchmusterung number, right ascension and declination (equinox B1975), spectral type, photographic magnitude, and alternate identifications. The machine version, updated in September 1986, includes corrections supplied by the author in 1985; thus, it differs somewhat from the published version. Accurate positions, and cross-identifications with the modern surveys, were determined by Brian Skiff in 2008, and make up the "sk_pos.dat" file. This work is based on a file prepared through great effort by Mati Morel in 1999. Brian Skiff examined every object on DSS cut-outs to make sure the star chosen matched the Sanduleak charts. The Goddard SkyView utility was used, looking at an 0.07{deg} (4'x4') field from the DSS1 (short-V plate), DSS2 far-red, and 2MASS J-band images. These three have the shallowest effective exposure (these are bright stars!), and usually the best image quality to check for companions etc as well as star colors. Precise coordinates were then obtained via VizieR mainly from UCAC2, but occasionally elsewhere as indicated in the column 's' for each star. The list was also matched against Tycho-2 and the GSC, and the Massey et al. photometric survey from 2002ApJS..141...81M (Cat. II/236). The file "sk_pos.dat" includes also Sanduleak's original approximate spectral types, and approximative V magnitudes that Mati Morel adopted from the work of the Marseille group; some missing HD(E) and CPD names were also added. It should be noticed also that the file has 1275 entries, 3 stars being resolved into resolved pairs. The notes includes Sanduleak's original notes, as well as remarks added by Brian Skiff in the course of his verifications. (3 data files).

  12. Post-Hurricane Ivan coastal oblique aerial photographs collected from Crawfordville, Florida, to Petit Bois Island, Mississippi, September 17, 2004

    USGS Publications Warehouse

    Morgan, Karen L.M.; Krohn, M. Dennis; Peterson, Russell D.; Thompson, Philip R.; Subino, Janice A.

    2015-01-01

    Table 1 provides detailed information about the GPS location, image name, date, and time for each of the 3,381 photographs taken, along with links to each photograph. The photographs are organized into segments, also referred to as contact sheets, and represent approximately 5 minutes of flight time. In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided, which can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML files were created using the photographic navigation files.

  13. Large-scale Scanning Transmission Electron Microscopy (Nanotomy) of Healthy and Injured Zebrafish Brain.

    PubMed

    Kuipers, Jeroen; Kalicharan, Ruby D; Wolters, Anouk H G; van Ham, Tjakko J; Giepmans, Ben N G

    2016-05-25

    Large-scale 2D electron microscopy (EM), or nanotomy, is the tissue-wide application of nanoscale resolution electron microscopy. Others and we previously applied large scale EM to human skin, pancreatic islets, tissue culture and whole zebrafish larvae(1-7). Here we describe a universally applicable method for tissue-scale scanning EM for unbiased detection of sub-cellular and molecular features. Nanotomy was applied to investigate the healthy and a neurodegenerative zebrafish brain. Our method is based on standardized EM sample preparation protocols: Fixation with glutaraldehyde and osmium, followed by epoxy-resin embedding, ultrathin sectioning and mounting of ultrathin-sections on one-hole grids, followed by post staining with uranyl and lead. Large-scale 2D EM mosaic images are acquired using a scanning EM connected to an external large area scan generator using scanning transmission EM (STEM). Large scale EM images are typically ~ 5 - 50 G pixels in size, and best viewed using zoomable HTML files, which can be opened in any web browser, similar to online geographical HTML maps. This method can be applied to (human) tissue, cross sections of whole animals as well as tissue culture(1-5). Here, zebrafish brains were analyzed in a non-invasive neuronal ablation model. We visualize within a single dataset tissue, cellular and subcellular changes which can be quantified in various cell types including neurons and microglia, the brain's macrophages. In addition, nanotomy facilitates the correlation of EM with light microscopy (CLEM)(8) on the same tissue, as large surface areas previously imaged using fluorescent microscopy can subsequently be subjected to large area EM, resulting in the nano-anatomy (nanotomy) of tissues. In all, nanotomy allows unbiased detection of features at EM level in a tissue-wide quantifiable manner.
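
    Gigapixel mosaics like these are typically served as tiled image pyramids; as a rough illustration (not the authors' pipeline), libvips can cut a large stitched TIFF into a Deep Zoom pyramid that browser-based viewers can display. The file name, tile size, and JPEG quality below are assumptions.

      import pyvips

      # Hypothetical stitched EM mosaic; 'sequential' access streams the file
      # so the whole gigapixel image never has to fit in memory at once.
      mosaic = pyvips.Image.new_from_file('brain_mosaic.tif', access='sequential')

      # Write a Deep Zoom pyramid of JPEG tiles plus its descriptor file.
      mosaic.dzsave('brain_mosaic', tile_size=256, overlap=1, suffix='.jpg[Q=90]')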

  14. Large-scale Scanning Transmission Electron Microscopy (Nanotomy) of Healthy and Injured Zebrafish Brain

    PubMed Central

    Kuipers, Jeroen; Kalicharan, Ruby D.; Wolters, Anouk H. G.

    2016-01-01

    Large-scale 2D electron microscopy (EM), or nanotomy, is the tissue-wide application of nanoscale resolution electron microscopy. Others and we previously applied large scale EM to human skin, pancreatic islets, tissue culture and whole zebrafish larvae1-7. Here we describe a universally applicable method for tissue-scale scanning EM for unbiased detection of sub-cellular and molecular features. Nanotomy was applied to investigate the healthy and a neurodegenerative zebrafish brain. Our method is based on standardized EM sample preparation protocols: Fixation with glutaraldehyde and osmium, followed by epoxy-resin embedding, ultrathin sectioning and mounting of ultrathin-sections on one-hole grids, followed by post staining with uranyl and lead. Large-scale 2D EM mosaic images are acquired using a scanning EM connected to an external large area scan generator using scanning transmission EM (STEM). Large scale EM images are typically ~ 5 - 50 G pixels in size, and best viewed using zoomable HTML files, which can be opened in any web browser, similar to online geographical HTML maps. This method can be applied to (human) tissue, cross sections of whole animals as well as tissue culture1-5. Here, zebrafish brains were analyzed in a non-invasive neuronal ablation model. We visualize within a single dataset tissue, cellular and subcellular changes which can be quantified in various cell types including neurons and microglia, the brain's macrophages. In addition, nanotomy facilitates the correlation of EM with light microscopy (CLEM)8 on the same tissue, as large surface areas previously imaged using fluorescent microscopy can subsequently be subjected to large area EM, resulting in the nano-anatomy (nanotomy) of tissues. In all, nanotomy allows unbiased detection of features at EM level in a tissue-wide quantifiable manner. PMID:27285162

  15. Web-based Quality Control Tool used to validate CERES products on a cluster of Linux servers

    NASA Astrophysics Data System (ADS)

    Chu, C.; Sun-Mack, S.; Heckert, E.; Chen, Y.; Mlynczak, P.; Mitrescu, C.; Doelling, D.

    2014-12-01

    There have been a few popular desktop tools used in the Earth Science community to validate science data. Because of the limitations of desktop hardware, such as disk space and CPUs, those software packages are not able to display large amounts of data from files. This poster describes in-house developed web-based software built on a cluster of Linux servers, which allows users to take advantage of several Linux servers working in parallel to generate hundreds of images in a short period of time. The poster will demonstrate: (1) the hardware and software architecture used to provide high throughput of images; (2) the software structure that can incorporate new products and new requirements quickly; and (3) the user interface, showing how users can manipulate the data and control how the images are displayed.
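
    As a hedged, generic illustration of the fan-out idea (not the CERES team's actual code), the sketch below renders many quality-control images in parallel with Python's multiprocessing and a headless matplotlib backend; the product data, file names, and worker count are placeholders.

      import matplotlib
      matplotlib.use('Agg')                      # headless rendering on Linux servers
      import matplotlib.pyplot as plt
      import numpy as np
      from multiprocessing import Pool

      def render_granule(index):
          """Render one QC image; the random array stands in for real product data."""
          data = np.random.rand(100, 100)
          fig, ax = plt.subplots()
          ax.imshow(data, cmap='viridis')
          fig.savefig(f'qc_{index:04d}.png', dpi=100)
          plt.close(fig)
          return index

      if __name__ == '__main__':
          with Pool(processes=8) as pool:        # one worker per core/server slot
              pool.map(render_granule, range(200))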

  16. Collected Data of The Boreal Ecosystem and Atmosphere Study (BOREAS)

    NASA Technical Reports Server (NTRS)

    Newcomer, J. (Editor); Landis, D. (Editor); Conrad, S. (Editor); Curd, S. (Editor); Huemmrich, K. (Editor); Knapp, D. (Editor); Morrell, A. (Editor); Nickerson, J. (Editor); Papagno, A. (Editor); Rinker, D. (Editor)

    2000-01-01

    The Boreal Ecosystem-Atmosphere Study (BOREAS) was a large-scale international interdisciplinary climate-ecosystem interaction experiment in the northern boreal forests of Canada. Its goal was to improve our understanding of the boreal forests -- how they interact with the atmosphere, how much CO2 they can store, and how climate change will affect them. BOREAS wanted to learn to use satellite data to monitor the forests, and to improve computer simulation and weather models so scientists can anticipate the effects of global change. This BOREAS CD-ROM set is a set of 12 CD-ROMs containing the finalized point data sets and compressed image data from the BOREAS Project. All point data are stored in ASCII text files, and all image and GIS products are stored as binary images, compressed using GZip. Additional descriptions of the various data sets on this CD-ROM are available in other documents in the BOREAS series.

  17. Efficacy of ProTaper universal retreatment files in removing filling materials during root canal retreatment.

    PubMed

    Giuliani, Valentina; Cocchetti, Roberto; Pagavino, Gabriella

    2008-11-01

    The aim of this study was to evaluate the efficacy of the ProTaper Universal System rotary retreatment system, ProFile 0.06 rotary instruments, and hand instruments (K-file) in the removal of root filling materials. Forty-two extracted single-rooted anterior teeth were selected. The root canals were enlarged with nickel-titanium (NiTi) rotary files, filled with gutta-percha and sealer, and randomly divided into 3 experimental groups. The filling materials were removed with solvent in conjunction with one of the following devices and techniques: the ProTaper Universal System for retreatment, ProFile 0.06, and hand instruments (K-file). The roots were longitudinally sectioned, and the root surfaces were photographed. The images were captured in JPEG format; the areas of the remaining filling materials and the time required for removing the gutta-percha and sealer were analyzed using the nonparametric one-way Kruskal-Wallis test and the Tukey-Kramer test, respectively. The ProTaper Universal System for retreatment files showed the best results for removing filling materials, whereas the ProFile rotary instruments yielded better root canal cleanliness than the hand instruments, even though there was no statistically significant difference. The ProTaper Universal System for retreatment and ProFile rotary instruments worked significantly faster than the K-file. The ProTaper Universal System for retreatment files left cleaner root canal walls than the K-file hand instruments and the ProFile rotary instruments, although none of the devices used guaranteed complete removal of the filling materials. The rotary NiTi system proved to be faster than hand instruments in removing root filling materials.

  18. Synchronizing files or images among several computers or removable devices. A utility to avoid frequent back-ups.

    PubMed

    Leonardi, Rosalia; Maiorana, Francesco; Giordano, Daniela

    2008-06-01

    Many of us use and maintain files on more than 1 computer--a desktop part of the time, and a notebook, a palmtop, or removable devices at other times. It can be easy to forget which device contains the latest version of a particular file, and time-consuming searches often ensue. One way to solve this problem is to use software that synchronizes the files. This allows users to maintain updated versions of the same file in several locations.
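
    In the same spirit, a minimal one-way synchronization can be sketched in a few lines of Python; the hash-plus-timestamp rule below is an illustrative simplification, since real synchronization utilities such as the one described also handle deletions and two-way conflicts.

      import hashlib
      import shutil
      from pathlib import Path

      def sync_newer(src, dst):
          """Copy files from src to dst when they are missing there, or when
          their content differs and the src copy is more recently modified."""
          src, dst = Path(src), Path(dst)
          for f in src.rglob('*'):
              if not f.is_file():
                  continue
              target = dst / f.relative_to(src)
              if not target.exists():
                  copy = True
              else:
                  differs = (hashlib.md5(f.read_bytes()).digest()
                             != hashlib.md5(target.read_bytes()).digest())
                  copy = differs and f.stat().st_mtime > target.stat().st_mtime
              if copy:
                  target.parent.mkdir(parents=True, exist_ok=True)
                  shutil.copy2(f, target)       # preserves modification times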

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillion, D.

    This code enables one to display, take line-outs on, and perform various transformations on an image created by an array of integer*2 data. Uncompressed eight-bit TIFF files created on either the Macintosh or the IBM PC may also be read in and converted to a 16 bit signed integer image. This code is designed to handle all the formats used for PDS (photo-densitometer) files at the Lawrence Livermore National Laboratory. These formats are all explained by the application code. The image may be zoomed infinitely and the gray scale mapping can be easily changed. Line-outs may be horizontal or vertical with arbitrary width, angled with arbitrary end points, or taken along any path. This code is usually used to examine spectrograph data. Spectral lines may be identified and a polynomial fit from position to wavelength may be found. The image array can be remapped so that the pixels all have the same lambda increment; it is not necessary to do this, however. Lineouts may be printed, saved as Cricket tab-delimited files, or saved as PICT2 files. The plots may be linear, semilog, or logarithmic with nice values and proper scientific notation. Typically, spectral lines are curved; their shapes may be fitted by identifying points along them and fitting those points with polynomials.
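
    A small sketch of two of the operations the record describes, reading an 8-bit TIFF into a signed 16-bit array and taking an angled line-out of arbitrary width, using Pillow and SciPy; the file name, endpoints, and averaging scheme are illustrative rather than the LLNL code's actual behaviour.

      import numpy as np
      from PIL import Image
      from scipy.ndimage import map_coordinates

      # Read an 8-bit TIFF (hypothetical filename) and promote it to signed 16-bit.
      img = np.asarray(Image.open('scan.tif').convert('L')).astype(np.int16)

      def lineout(image, p0, p1, width=1, n=500):
          """Angled line-out between endpoints p0=(row, col) and p1,
          averaged over `width` parallel samples."""
          rows = np.linspace(p0[0], p1[0], n)
          cols = np.linspace(p0[1], p1[1], n)
          # Unit normal to the line, used to offset the parallel samples.
          dr, dc = p1[0] - p0[0], p1[1] - p0[1]
          norm = np.hypot(dr, dc)
          nr, nc = -dc / norm, dr / norm
          offsets = np.arange(width) - (width - 1) / 2
          samples = [map_coordinates(image.astype(float),
                                     [rows + nr * o, cols + nc * o], order=1)
                     for o in offsets]
          return np.mean(samples, axis=0)

      profile = lineout(img, (100, 50), (400, 600), width=3)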

  20. Collecting and Animating Online Satellite Images.

    ERIC Educational Resources Information Center

    Irons, Ralph

    1995-01-01

    Describes how to generate automated classroom resources from the Internet. Topics covered include viewing animated satellite weather images using file transfer protocol (FTP); sources of images on the Internet; shareware available for viewing images; software for automating image retrieval; procedures for animating satellite images; and storing…

  1. The Pediatric Imaging, Neurocognition, and Genetics (PING) Data Repository

    PubMed Central

    Jernigan, Terry L.; Brown, Timothy T.; Hagler, Donald J.; Akshoomoff, Natacha; Bartsch, Hauke; Newman, Erik; Thompson, Wesley K.; Bloss, Cinnamon S.; Murray, Sarah S.; Schork, Nicholas; Kennedy, David N.; Kuperman, Joshua M.; McCabe, Connor; Chung, Yoonho; Libiger, Ondrej; Maddox, Melanie; Casey, B. J.; Chang, Linda; Ernst, Thomas M.; Frazier, Jean A.; Gruen, Jeffrey R.; Sowell, Elizabeth R.; Kenet, Tal; Kaufmann, Walter E.; Mostofsky, Stewart; Amaral, David G.; Dale, Anders M.

    2015-01-01

    The main objective of the multi-site Pediatric Imaging, Neurocognition, and Genetics (PING) study was to create a large repository of standardized measurements of behavioral and imaging phenotypes accompanied by whole genome genotyping acquired from typically-developing children varying widely in age (3 to 20 years). This cross-sectional study produced sharable data from 1493 children, and these data have been described in several publications focusing on brain and cognitive development. Researchers may gain access to these data by applying for an account on the PING Portal and filing a Data Use Agreement. Here we describe the recruiting and screening of the children and give a brief overview of the assessments performed, the imaging methods applied, the genetic data produced, and the numbers of cases for whom different data types are available. We also cite sources of more detailed information about the methods and data. Finally we describe the procedures for accessing the data and for using the PING data exploration portal. PMID:25937488

  2. The Pediatric Imaging, Neurocognition, and Genetics (PING) Data Repository.

    PubMed

    Jernigan, Terry L; Brown, Timothy T; Hagler, Donald J; Akshoomoff, Natacha; Bartsch, Hauke; Newman, Erik; Thompson, Wesley K; Bloss, Cinnamon S; Murray, Sarah S; Schork, Nicholas; Kennedy, David N; Kuperman, Joshua M; McCabe, Connor; Chung, Yoonho; Libiger, Ondrej; Maddox, Melanie; Casey, B J; Chang, Linda; Ernst, Thomas M; Frazier, Jean A; Gruen, Jeffrey R; Sowell, Elizabeth R; Kenet, Tal; Kaufmann, Walter E; Mostofsky, Stewart; Amaral, David G; Dale, Anders M

    2016-01-01

    The main objective of the multi-site Pediatric Imaging, Neurocognition, and Genetics (PING) study was to create a large repository of standardized measurements of behavioral and imaging phenotypes accompanied by whole genome genotyping acquired from typically-developing children varying widely in age (3 to 20 years). This cross-sectional study produced sharable data from 1493 children, and these data have been described in several publications focusing on brain and cognitive development. Researchers may gain access to these data by applying for an account on the PING portal and filing a data use agreement. Here we describe the recruiting and screening of the children and give a brief overview of the assessments performed, the imaging methods applied, the genetic data produced, and the numbers of cases for whom different data types are available. We also cite sources of more detailed information about the methods and data. Finally we describe the procedures for accessing the data and for using the PING data exploration portal. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. 76 FR 10405 - Federal Copyright Protection of Sound Recordings Fixed Before February 15, 1972

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-24

    ... file in either the Adobe Portable Document File (PDF) format that contains searchable, accessible text (not an image); Microsoft Word; WordPerfect; Rich Text Format (RTF); or ASCII text file format (not a..., comments may be delivered in hard copy. If hand delivered by a private party, an original...

  4. Creating databases for biological information: an introduction.

    PubMed

    Stein, Lincoln

    2002-08-01

    The essence of bioinformatics is dealing with large quantities of information. Whether it be sequencing data, microarray data files, mass spectrometric data (e.g., fingerprints), the catalog of strains arising from an insertional mutagenesis project, or even large numbers of PDF files, there inevitably comes a time when the information can simply no longer be managed with files and directories. This is where databases come into play. This unit briefly reviews the characteristics of several database management systems, including flat file, indexed file, and relational databases, as well as ACeDB. It compares their strengths and weaknesses and offers some general guidelines for selecting an appropriate database management system.
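
    As a small illustration of the unit's point about moving from files and directories to a database, the sketch below uses SQLite (an assumption, not the unit's example system) to index strain records; the table layout is invented:

```python
# Index strain records in a small relational database so they can be queried
# instead of hunted down in directories. Table and column names are illustrative.
import sqlite3

conn = sqlite3.connect("strains.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS strains (
        strain_id      TEXT PRIMARY KEY,
        gene           TEXT,
        insertion_site INTEGER,
        data_file      TEXT          -- path to the raw data file on disk
    )
""")
conn.execute("CREATE INDEX IF NOT EXISTS idx_gene ON strains(gene)")
conn.execute(
    "INSERT OR REPLACE INTO strains VALUES (?, ?, ?, ?)",
    ("YB-101", "unc-22", 1834012, "raw/YB-101.fastq"),
)
conn.commit()

for row in conn.execute("SELECT strain_id, data_file FROM strains WHERE gene = ?", ("unc-22",)):
    print(row)
```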

  5. Astrometrica: Astrometric data reduction of CCD images

    NASA Astrophysics Data System (ADS)

    Raab, Herbert

    2012-03-01

    Astrometrica is an interactive software tool for scientific grade astrometric data reduction of CCD images. The current version of the software is for the Windows 32bit operating system family. Astrometrica reads FITS (8, 16 and 32 bit integer files) and SBIG image files. The size of the images is limited only by available memory. It also offers automatic image calibration (Dark Frame and Flat Field correction), automatic reference star identification, automatic moving object detection and identification, and access to new-generation star catalogs (PPMXL, UCAC 3 and CMC-14), in addition to online help and other features. Astrometrica is shareware, available for use for a limited period of time (100 days) for free; special arrangements can be made for educational projects.
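
    The dark-frame and flat-field correction that Astrometrica automates can be sketched in a few lines; the sketch below assumes Astropy for FITS access and placeholder file names, and is not Astrometrica's own implementation:

```python
# Basic CCD calibration: subtract the master dark, then divide by the
# mean-normalized master flat. File names are placeholders.
import numpy as np
from astropy.io import fits

raw  = fits.getdata("raw_frame.fits").astype(np.float64)
dark = fits.getdata("master_dark.fits").astype(np.float64)
flat = fits.getdata("master_flat.fits").astype(np.float64)

calibrated = (raw - dark) / (flat / np.mean(flat))   # flat normalized to unit mean
fits.writeto("calibrated_frame.fits", calibrated, overwrite=True)
```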

  6. On-demand server-side image processing for web-based DICOM image display

    NASA Astrophysics Data System (ADS)

    Sakusabe, Takaya; Kimura, Michio; Onogi, Yuzo

    2000-04-01

    Low-cost image delivery is needed in modern networked hospitals. If a hospital has hundreds of clients, the cost of client systems is a big problem. Naturally, a Web-based system is the most effective solution, but a Web browser alone cannot display medical images that require certain image processing, such as a lookup-table transformation. We developed a Web-based medical image display system using a Web browser and on-demand server-side image processing. All images displayed on a Web page are generated from DICOM files on a server and delivered on demand. User interaction on the Web page is handled by a client-side scripting technology such as JavaScript. This combination gives the look and feel of an imaging workstation, in terms of both functionality and speed. Real-time update of images while tracking mouse motion is achieved in the Web browser without any client-side image processing, which would otherwise require plug-in technology such as Java applets or ActiveX. We tested the performance of the system in three cases: a single client, a small number of clients on a fast network, and a large number of clients on a normal-speed network. The results show that the communication overhead is very slight and that the system scales well with the number of clients.
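
    A minimal sketch of the on-demand server-side approach, assuming Flask, pydicom, and Pillow (none of which are named in the paper) and a hypothetical file layout; the real system's lookup-table handling is more involved:

```python
# Serve a browser-viewable PNG generated on demand from a DICOM file, applying
# a simple window/level lookup-table transformation on the server side.
import io
import numpy as np
import pydicom
from PIL import Image
from flask import Flask, request, send_file

app = Flask(__name__)

@app.route("/image/<name>")
def windowed_image(name):
    # Hypothetical storage path for the DICOM originals.
    pixels = pydicom.dcmread(f"/data/dicom/{name}.dcm").pixel_array.astype(np.float64)
    center = float(request.args.get("wc", pixels.mean()))
    width = float(request.args.get("ww", pixels.max() - pixels.min() or 1))
    lo, hi = center - width / 2, center + width / 2
    out = np.clip((pixels - lo) / (hi - lo) * 255, 0, 255).astype(np.uint8)
    buf = io.BytesIO()
    Image.fromarray(out).save(buf, format="PNG")
    buf.seek(0)
    return send_file(buf, mimetype="image/png")
```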

  7. Cyanopolyyne Chemistry in TMC-1

    NASA Astrophysics Data System (ADS)

    Winstanley, N.; Nejad, L. A. M.

    1996-03-01

    Using pseudo-time-dependent models and three different reaction networks, a detailed study of the dominant reaction pathways for the formation of cyanopolyynes and their abundances in TMC-1 is presented. The analysis of the chemical reactions shows that there are two major chemical regimes for the formation of cyanopolyynes. First, at early times of less than ~10^4 yr, when ion-molecule reactions are dominant, the main chemical route for the formation of larger cyanopolyynes is $\mathrm{C_{n}H^{+}} \xrightarrow{\mathrm{N}} \mathrm{C_{n}N^{+}} \xrightarrow{\mathrm{H_2}} \mathrm{HC_{n}N^{+}} \xrightarrow{\mathrm{H_2}} \mathrm{H_2C_{n}N^{+}} \xrightarrow{e^{-}} \mathrm{HC_{n}N}$, where n = 5, 7, and 9. Second, at times greater than 10^4 yr, when neutral-neutral reactions become dominant, the two major reaction routes for the formation of cyanopolyynes are (a) $\mathrm{HCN} \xrightarrow{\mathrm{C_2H}} \mathrm{HC_3N} \xrightarrow{\mathrm{C_2H}} \mathrm{HC_5N} \xrightarrow{\mathrm{C_2H}} \mathrm{HC_7N} \xrightarrow{\mathrm{C_2H}} \mathrm{HC_9N}$ and (b) $\mathrm{C_{n}H_2} + \mathrm{CN} \rightarrow \mathrm{HC_{n+1}N} + \mathrm{H}$, with n = 4, 6, and 8, depending on the reaction network used. The results indicate that for route (a) large abundances of $\mathrm{C_2H}$ (fractional abundances of ~10^{-7}) and for route (b) large abundances of $\mathrm{C_2H_2}$ are required in order to reproduce the observed abundances of cyanopolyynes. The calculated abundances of cyanopolyynes show great sensitivity to the value of extinction, particularly at t ≳ 5×10^5 yr (i.e., the photochemical timescale). The effects of other physical parameters, such as the cosmic-ray ionization rate, on the abundances are also examined. In general, the model calculations show that the observed abundances of cyanopolyynes can be achieved by pseudo-time-dependent models at late times of several million years.

  8. Natural-color and color-infrared image mosaics of the Colorado River corridor in Arizona derived from the May 2009 airborne image collection

    USGS Publications Warehouse

    Davis, Philip A.

    2013-01-01

    The Grand Canyon Monitoring and Research Center (GCMRC) of the U.S. Geological Survey (USGS) periodically collects airborne image data for the Colorado River corridor within Arizona (fig. 1) to allow scientists to study the impacts of Glen Canyon Dam water release on the corridor’s natural and cultural resources. These data are collected from just above Glen Canyon Dam (in Lake Powell) down to the entrance of Lake Mead, for a total distance of 450 kilometers (km) and within a 500-meter (m) swath centered on the river’s mainstem and its seven main tributaries (fig. 1). The most recent airborne data collection in 2009 acquired image data in four wavelength bands (blue, green, red, and near infrared) at a spatial resolution of 20 centimeters (cm). The image collection used the latest model of the Leica ADS40 airborne digital sensor (the SH52), which uses a single optic for all four bands and collects and stores band radiance in 12-bits. Davis (2012) reported on the performance of the SH52 sensor and on the processing steps required to produce the nearly flawless four-band image mosaic (sectioned into map tiles) for the river corridor. The final image mosaic has a total of only 3 km of surface defects in addition to some areas of cloud shadow because of persistent inclement weather during data collection. The 2009 four-band image mosaic is perhaps the best image dataset that exists for the entire Arizona part of the Colorado River. Some analyses of these image mosaics do not require the full 12-bit dynamic range or all four bands of the calibrated image database, in which atmospheric scattering (or haze) had not been removed from the four bands. To provide scientists and the general public with image products that are more useful for visual interpretation, the 12-bit image data were converted to 8-bit natural-color and color-infrared images, which also removed atmospheric scattering within each wavelength-band image. The conversion required an evaluation of the histograms of each band’s digital-number population within each map tile throughout the corridor and the determination of the digital numbers corresponding to the lower and upper one percent of the picture-element population within each map tile. Visual examination of the image tiles that were given a 1-percent stretch (whereby the lower 1- percent 12-bit digital number is assigned an 8-bit value of zero and the upper 1-percent 12-bit digital number is assigned an 8-bit value of 255) indicated that this stretch sufficiently removed atmospheric scattering, which provided improved image clarity and true natural colors for all surface materials. The lower and upper 1-percent, 12-bit digital numbers for each wavelength-band image in the image tiles exhibit erratic variations along the river corridor; the variations exhibited similar trends in both the lower and upper 1-percent digital numbers for all four wavelength-band images (figs. 2–5). The erratic variations are attributed to (1) daily variations in atmospheric water-vapor content due to monsoonal storms, (2) variations in channel water color due to variable sediment input from tributaries, and (3) variations in the amount of topographic shadows within each image tile, in which reflectance is dominated by atmospheric scattering. 
To make the surface colors of the stretched, 8-bit images consistent among adjacent image tiles, it was necessary to average both the lower and upper 1-percent digital values for each wavelength-band image over 20 river miles to subdue the erratic variations. The average lower and upper 1-percent digital numbers for each image tile (figs. 2–5) were used to convert the 12-bit image values to 8-bit values, and the resulting 8-bit four-band images were stored as natural-color (red, green, and blue wavelength bands) and color-infrared (near-infrared, red, and green wavelength bands) images in embedded GeoTIFF format, which can be read and used by most geographic information system (GIS) and image-processing software. The TIFF world files (tfw) are provided, even though they are generally not needed for most software to read an embedded GeoTIFF image. All image data are projected in the State Plane (SP) map projection using the central Arizona zone (202) and the North American Datum of 1983 (NAD83). The map-tile scheme used to segment the corridor image mosaic followed the standard USGS quarter-quadrangle (QQ) map borders, but the high resolution (20 cm) of the images required further quarter segmentation (QQQ) of the standard QQ tiles where the image mosaic covered a large fraction of a QQ map tile (the segmentation is shown in fig. 6, where QQ_1 to QQ_4 denotes the numbering convention used to designate a quarter of a QQ tile). To minimize the size of each image tile, each image or map tile was subset to include only that part of the tile that had image data. In addition, some QQQ image tiles within a QQ tile were combined when adjacent QQQ map tiles were small. Thus, some image tiles consist of combinations of QQQ map tiles, some consist of an entire QQ map tile, and some consist of two adjoining QQ map tiles. The final image tiles number 143, which is a large number of files to list on the Internet for both the natural-color and color-infrared images. Thus, the image tiles were placed in seven file folders based on the one-half-degree geographic boundaries within the study area (fig. 7). The map tiles in each file folder were compressed to minimize folder size for more efficient downloading. The file folders are sequentially referred to as zone 1 through zone 7, proceeding down river (fig. 7). The QQ designations of the image tiles contained within each folder or zone are shown on the index map for each respective zone (figs. 8–14).
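
    The 1-percent stretch used to convert the 12-bit bands to 8 bits can be sketched as follows; the percentile-based implementation below is an illustration, not the USGS processing code:

```python
# Map the lower and upper 1-percent digital numbers of a 12-bit band to 0 and
# 255 in an 8-bit image. Band data and sizes are placeholders.
import numpy as np

def one_percent_stretch(band_12bit: np.ndarray) -> np.ndarray:
    """Convert a 12-bit band to 8 bits using a 1% / 99% percentile stretch."""
    lo, hi = np.percentile(band_12bit, (1.0, 99.0))
    scaled = (band_12bit.astype(np.float64) - lo) / (hi - lo) * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)

band = np.random.randint(0, 4096, size=(512, 512))  # stand-in for one 12-bit band
band_8bit = one_percent_stretch(band)
```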

  9. Challenges in sending large radiology images over military communications channels

    NASA Astrophysics Data System (ADS)

    Cleary, Kevin R.; Levine, Betty A.; Norton, Gary S.; Mundur, Padmavathi V.

    1997-05-01

    In cooperation with the US Army, Georgetown University Medical Center (GUMC) deployed a teleradiology network to sites in Bosnia-Herzegovina, Hungary, and Germany in early 1996. This deployment was part of Operation Primetime III, a military project to provide state-of-the-art medical care to the 20,000 US troops stationed in Bosnia-Herzegovina. In a three-month time frame from January to April 1996, the Imaging Sciences and Information Systems (ISIS) Center at GUMC worked with the Army to design, develop, and deploy a teleradiology network for the digital storage and transmission of radiology images. This paper discusses some of the problems associated with sending large files over communications networks with significant delays, such as those introduced by satellite transmissions. Radiology images of up to 10 megabytes are acquired, stored, and transmitted over the wide area network (WAN). The WAN included leased lines from Germany to Hungary and a satellite link from Germany to Bosnia-Herzegovina. The communications links provided at least T-1 bandwidth. The satellite link introduces a round-trip delay of approximately 500 milliseconds. This type of high-bandwidth, high-delay network is called a long fat network. The images are transferred across this network using the Transmission Control Protocol (TCP/IP). By modifying the TCP/IP software to increase the window size, the throughput of the satellite link can be greatly improved.
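
    The window-size issue can be made concrete with the bandwidth-delay product; the figures below use the T-1 rate and 500-ms round trip quoted in the abstract, while the 64 KB default window is an assumed typical value:

```python
# Throughput on a "long fat network" is capped at window / round-trip time
# unless the TCP window is at least the bandwidth-delay product.
t1_bandwidth_bps = 1.544e6          # T-1 line (from the abstract)
rtt_s = 0.5                         # satellite round-trip delay (from the abstract)

bdp_bytes = t1_bandwidth_bps * rtt_s / 8
print(f"bandwidth-delay product: {bdp_bytes / 1024:.1f} KB")          # ~94 KB

default_window = 64 * 1024          # typical window without window scaling (assumed)
capped_throughput_bps = default_window * 8 / rtt_s
print(f"throughput with 64 KB window: {capped_throughput_bps / 1e6:.2f} Mbit/s")  # ~1.05
```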

  10. Regional seismic lines reprocessed using post-stack processing techniques; National Petroleum Reserve, Alaska

    USGS Publications Warehouse

    Miller, John J.; Agena, W.F.; Lee, M.W.; Zihlman, F.N.; Grow, J.A.; Taylor, D.J.; Killgore, Michele; Oliver, H.L.

    2000-01-01

    This CD-ROM contains stacked, migrated, 2-dimensional seismic reflection data and associated support information for 22 regional seismic lines (3,470 line-miles) recorded in the National Petroleum Reserve-Alaska (NPRA) from 1974 through 1981. Together, these lines constitute about one-quarter of the seismic data collected as part of the Federal Government's program to evaluate the petroleum potential of the Reserve. The regional lines, which form a grid covering the entire NPRA, were created by combining various individual lines recorded in different years using different recording parameters. These data were reprocessed by the USGS using modern, post-stack processing techniques, to create a data set suitable for interpretation on interactive seismic interpretation computer workstations. Reprocessing was done in support of ongoing petroleum resource studies by the USGS Energy Program. The CD-ROM contains the following files: 1) 22 files containing the digital seismic data in standard SEG-Y format; 2) 1 file containing navigation data for the 22 lines in standard SEG-P1 format; 3) 22 small-scale graphic images of each seismic line in Adobe Acrobat PDF format; 4) a graphic image of the location map, generated from the navigation file, with hyperlinks to the graphic images of the seismic lines; 5) an ASCII text file with cross-reference information for relating the sequential trace numbers on each regional line to the line number and shotpoint number of the original component lines; and 6) an explanation of the processing used to create the final seismic sections (this document). The SEG-Y format seismic files and SEG-P1 format navigation file contain all the information necessary for loading the data onto a seismic interpretation workstation.

  11. Image-based optimization of coronal magnetic field models for improved space weather forecasting

    NASA Astrophysics Data System (ADS)

    Uritsky, V. M.; Davila, J. M.; Jones, S. I.; MacNeice, P. J.

    2017-12-01

    The existing space weather forecasting frameworks show a significant dependence on the accuracy of the photospheric magnetograms and the extrapolation models used to reconstruct the magnetic field in the solar corona. Minor uncertainties in the magnetic field magnitude and direction near the Sun, when propagated through the heliosphere, can lead to unacceptable prediction errors at 1 AU. We argue that ground-based and satellite coronagraph images can provide valid geometric constraints that could be used to improve coronal magnetic field extrapolation results, enabling more reliable forecasts of extreme space weather events such as major CMEs. In contrast to the previously developed loop segmentation codes designed for detecting compact closed-field structures above solar active regions, we focus on the large-scale geometry of the open-field coronal regions up to 1-2 solar radii above the photosphere. By applying the developed image processing techniques to high-resolution Mauna Loa Solar Observatory images, we perform an optimized 3D B-line tracing for a full Carrington rotation using the magnetic field extrapolation code developed by S. Jones et al. (ApJ, 2016, 2017). Our tracing results are shown to be in good qualitative agreement with the large-scale configuration of the optical corona, lead to a more consistent reconstruction of the large-scale coronal magnetic field geometry, and potentially yield more accurate global heliospheric simulation results. Several upcoming data products for the space weather forecasting community will also be discussed.

  12. Database Objects vs Files: Evaluation of alternative strategies for managing large remote sensing data

    NASA Astrophysics Data System (ADS)

    Baru, Chaitan; Nandigam, Viswanath; Krishnan, Sriram

    2010-05-01

    Increasingly, the geoscience user community expects modern IT capabilities to be available in service of their research and education activities, including the ability to easily access and process large remote sensing datasets via online portals such as GEON (www.geongrid.org) and OpenTopography (opentopography.org). However, serving such datasets via online data portals presents a number of challenges. In this talk, we will evaluate the pros and cons of alternative storage strategies for management and processing of such datasets using binary large object implementations (BLOBs) in database systems versus implementation in Hadoop files using the Hadoop Distributed File System (HDFS). The storage and I/O requirements for providing online access to large datasets dictate the need for declustering data across multiple disks, for capacity as well as bandwidth and response time performance. This requires partitioning larger files into a set of smaller files, and is accompanied by the concomitant requirement for managing large numbers of files. Storing these sub-files as blobs in a shared-nothing database implemented across a cluster provides the advantage that all the distributed storage management is done by the DBMS. Furthermore, subsetting and processing routines can be implemented as user-defined functions (UDFs) on these blobs and would run in parallel across the set of nodes in the cluster. On the other hand, there are both storage overheads and constraints, and software licensing dependencies created by such an implementation. Another approach is to store the files in an external filesystem with pointers to them from within database tables. The filesystem may be a regular UNIX filesystem, a parallel filesystem, or HDFS. In the HDFS case, HDFS would provide the file management capability, while the subsetting and processing routines would be implemented as Hadoop programs using the MapReduce model. Hadoop and its related software libraries are freely available. Another consideration is the strategy used for partitioning large data collections, and large datasets within collections, using round-robin vs hash partitioning vs range partitioning methods. Each has different characteristics in terms of spatial locality of data and resultant degree of declustering of the computations on the data. Furthermore, we have observed that, in practice, there can be large variations in the frequency of access to different parts of a large data collection and/or dataset, thereby creating "hotspots" in the data. We will evaluate the ability of the different approaches to deal effectively with such hotspots, as well as alternative strategies for mitigating them.
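
    The three partitioning strategies compared in the talk can be sketched as simple key-to-node assignment functions; the node count and tile keys below are invented:

```python
# Round-robin, hash, and range partitioning of dataset tiles across storage
# nodes. Range partitioning preserves spatial locality but can create hotspots.
import hashlib

NODES = 4

def round_robin_node(index: int) -> int:
    return index % NODES

def hash_node(key: str) -> int:
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % NODES

def range_node(y_coord: float, y_min: float = 0.0, y_max: float = 1000.0) -> int:
    band = (y_coord - y_min) / (y_max - y_min) * NODES
    return min(NODES - 1, max(0, int(band)))

for i, key, y in [(0, "tile_0_0", 12.5), (1, "tile_0_1", 512.0), (2, "tile_9_9", 987.0)]:
    print(key, round_robin_node(i), hash_node(key), range_node(y))
```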

  13. VizieR Online Data Catalog: Tully-Fisher relation for SDSS galaxies (Reyes+, 2011)

    NASA Astrophysics Data System (ADS)

    Reyes, R.; Mandelbaum, R.; Gunn, J. E.; Pizagno, J.; Lackner, C. N.

    2012-05-01

    In this paper, we derive scaling relations between photometric observable quantities and disc galaxy rotation velocity Vrot or Tully-Fisher relations (TFRs). Our methodology is dictated by our purpose of obtaining purely photometric, minimal-scatter estimators of Vrot applicable to large galaxy samples from imaging surveys. To achieve this goal, we have constructed a sample of 189 disc galaxies at redshifts z<0.1 with long-slit Hα spectroscopy from Pizagno et al. (2007, Cat. J/AJ/134/945) and new observations. By construction, this sample is a fair subsample of a large, well-defined parent disc sample of ~170000 galaxies selected from the Sloan Digital Sky Survey Data Release 7 (SDSS DR7). (4 data files).

  14. Workflow opportunities using JPEG 2000

    NASA Astrophysics Data System (ADS)

    Foshee, Scott

    2002-11-01

    JPEG 2000 is a new image compression standard from ISO/IEC JTC1 SC29 WG1, the Joint Photographic Experts Group (JPEG) committee. Better thought of as a sibling to JPEG than as a descendant, the JPEG 2000 standard offers wavelet-based compression as well as companion file formats and related standardized technology. This paper examines the JPEG 2000 standard for features in four specific areas (compression, file formats, client-server, and conformance/compliance) that enable image workflows.

  15. Montage Version 3.0

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph; Katz, Daniel; Prince, Thomas; Berriman, Graham; Good, John; Laity, Anastasia

    2006-01-01

    The final version (3.0) of the Montage software has been released. To recapitulate from previous NASA Tech Briefs articles about Montage: This software generates custom, science-grade mosaics of astronomical images on demand from input files that comply with the Flexible Image Transport System (FITS) standard and contain image data registered on projections that comply with the World Coordinate System (WCS) standards. This software can be executed on single-processor computers, multi-processor computers, and such networks of geographically dispersed computers as the National Science Foundation's TeraGrid or NASA's Information Power Grid. The primary advantage of running Montage in a grid environment is that computations can be done on a remote supercomputer for efficiency. Multiple computers at different sites can be used for different parts of a computation, a significant advantage in cases of computations for large mosaics that demand more processor time than is available at any one site. Version 3.0 incorporates several improvements over prior versions. The most significant improvement is that this version is accessible to scientists located anywhere, through operational Web services that provide access to data from several large astronomical surveys and construct mosaics on either local workstations or remote computational grids as needed.

  16. The evolution of the FIGARO data reduction system

    NASA Technical Reports Server (NTRS)

    Shortridge, K.

    1992-01-01

    The Figaro data reduction system originated at Caltech around 1983. It was based on concepts being developed in the U.K. by the Starlink organization, particularly the use of hierarchical self-defining data structures and the abstraction of most user-interaction into a set of 'parameter system' routines. Since 1984 it has continued to be developed at AAO, in collaboration with Starlink and Caltech. It was adopted as Starlink's main spectroscopic data reduction package, although it is by no means limited to spectra; it has operations for images and data cubes and even a few (very specialized) for four-dimensional data hypercubes. It continued to be used at Caltech and will be used at the Keck. It is also in use at a variety of other organizations around the world. Figaro was originally a system for VMS Vaxes. Recently it was ported (at Caltech) to run on Suns, and work is underway at the University of New South Wales on a DECstation version. It is hoped to coordinate all this work into a unified release, but coordination of the development of a system by organizations covering three continents poses a number of interesting administrative problems. The hierarchical data structures used by Figaro allow it to handle a variety of types of data, and to add new items to data structures. Error and data quality information was added to the basic file format used, error information being particularly useful for infrared data. Cooperating sets of programs can add specific sub-structures to data files to carry information that they understand (polarimetry data containing multiple data arrays, for example), without this affecting the way other programs handle the files. Complex instrument-specific ancillary information can be added to data files written at a telescope and can be used by programs that understand the instrumental details in order to produce properly calibrated data files. Once this preliminary data processing is done, the resulting files contain 'ordinary' spectra or images that can be processed by programs that are not instrument-specific. The structures holding the instrumental information can then be discarded from the files. Much effort has gone into trying to make it easy to write Figaro programs; data access subroutines are now available to handle access to all the conventional items found in Figaro files (main data arrays, error information, quality information, etc.), and programs that only need to access such items can be very simple indeed. A large number of Figaro users do indeed write their own Figaro applications using these routines. The fact that Figaro programs are written as callable subroutines getting information from the user through a small set of parameter routines means that they can be invoked in numerous ways; they are normally linked and run as individual programs (called by a small main routine that is generated automatically), but are also available linked to run under the ADAM data acquisition system, and there is an interface that lets them be called as part of a user-written Fortran program. The long-term future of Figaro probably depends to a large extent on how successfully it manages the transition from being a VMS-only system to being a multi-platform system.

  17. PySE: Python Source Extractor for radio astronomical images

    NASA Astrophysics Data System (ADS)

    Spreeuw, Hanno; Swinbank, John; Molenaar, Gijs; Staley, Tim; Rol, Evert; Sanders, John; Scheers, Bart; Kuiack, Mark

    2018-05-01

    PySE finds and measures sources in radio telescope images. It is run with several options, such as the detection threshold (a multiple of the local noise), grid size, and the forced clean beam fit, followed by a list of input image files in standard FITS or CASA format. From these, PySE provides a list of found sources; information such as the calculated background image, the source list in different formats (e.g. text, or region files importable in DS9), and other data may be saved. PySE can be integrated into a pipeline; it was originally written as part of the LOFAR Transient Detection Pipeline (TraP, ascl:1412.011).

  18. EROS main image file - A picture perfect database for Landsat imagery and aerial photography

    NASA Technical Reports Server (NTRS)

    Jack, R. F.

    1984-01-01

    The Earth Resources Observation System (EROS) Program was established by the U.S. Department of the Interior in 1966 under the administration of the Geological Survey. It is primarily concerned with the application of remote sensing techniques for the management of natural resources. The retrieval system employed to search the EROS database is called INORAC (Inquiry, Ordering, and Accounting). A description is given of the types of images identified in EROS, taking into account Landsat imagery, Skylab images, Gemini/Apollo photography, and NASA aerial photography. Attention is given to retrieval commands, geographic coordinate searching, refinement techniques, various online functions, and questions regarding the access to the EROS Main Image File.

  19. Automated sea floor extraction from underwater video

    NASA Astrophysics Data System (ADS)

    Kelly, Lauren; Rahmes, Mark; Stiver, James; McCluskey, Mike

    2016-05-01

    Ocean floor mapping using video is a method to simply and cost-effectively record large areas of the seafloor. Obtaining visual and elevation models has noteworthy applications in search and recovery missions. Hazards to navigation are abundant and pose a significant threat to the safety, effectiveness, and speed of naval operations and commercial vessels. This project's objective was to develop a workflow to automatically extract metadata from marine video and create image optical and elevation surface mosaics. Three developments made this possible. First, optical character recognition (OCR) by means of two-dimensional correlation, using a known character set, allowed for the capture of metadata from image files. Second, exploiting the image metadata (i.e., latitude, longitude, heading, camera angle, and depth readings) allowed for the determination of the location and orientation of the image frame in the mosaic. Image registration improved the accuracy of mosaicking. Finally, overlapping data allowed us to determine height information. A disparity map was created using the parallax from overlapping viewpoints of a given area, and the relative height data were used to create a three-dimensional, textured elevation map.
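
    The OCR-by-correlation step can be sketched as template matching with normalized two-dimensional correlation; the templates and glyph region below are toy arrays, not frames from the underwater video:

```python
# Recognize a character region by correlating it against a known template set
# and picking the best-scoring template.
import numpy as np

def normalized_correlation(region: np.ndarray, template: np.ndarray) -> float:
    r = (region - region.mean()) / (region.std() + 1e-9)
    t = (template - template.mean()) / (template.std() + 1e-9)
    return float((r * t).mean())

def recognize(region: np.ndarray, templates: dict[str, np.ndarray]) -> str:
    scores = {ch: normalized_correlation(region, tmpl) for ch, tmpl in templates.items()}
    return max(scores, key=scores.get)

templates = {"0": np.eye(8), "1": np.tri(8)}      # stand-ins for real glyph templates
print(recognize(np.eye(8) + 0.05 * np.random.rand(8, 8), templates))
```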

  20. Post-Nor'Ida coastal oblique aerial photographs collected from Ocean City, Maryland, to Hatteras, North Carolina, December 4, 2009

    USGS Publications Warehouse

    Morgan, Karen L. M.; Krohn, M. Dennis; Guy, Kristy K.

    2015-01-01

    In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML files were created using the photographic navigation files.

  1. Baseline Coastal Oblique Aerial Photographs Collected from Navarre Beach, Florida, to Breton Island, Louisiana, September 1, 2014

    USGS Publications Warehouse

    Morgan, Karen L. M.

    2015-08-31

    In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML files were created using the photographic navigation files.

  2. Indexing and filing of pathological illustrations.

    PubMed Central

    Brown, R A; Fawkes, R S; Beck, J S

    1975-01-01

    An inexpensive feature card retrieval system has been combined with the Systematised Nomenclature of Pathology (SNOP) to provide a simple but efficient means of indexing and filing 2 in. x 2 in. transparencies within a department of pathology. Using this system, 2400 transparencies and the associated index cards can be conveniently stored in one drawer of a standard filing cabinet. PMID:1123438

  3. To Image...or Not to Image?

    ERIC Educational Resources Information Center

    Bruley, Karina

    1996-01-01

    Provides a checklist of considerations for installing document image processing with an electronic document management system. Other topics include scanning; indexing; the image file life cycle; benefits of imaging; document-driven workflow; and planning for workplace changes like postsorting, creating a scanning room, redeveloping job tasks and…

  4. Digital pathology: A systematic evaluation of the patent landscape.

    PubMed

    Cucoranu, Ioan C; Parwani, Anil V; Vepa, Suryanarayana; Weinstein, Ronald S; Pantanowitz, Liron

    2014-01-01

    Digital pathology is a relatively new field. Inventors of technology in this field typically file for patents to protect their intellectual property. An understanding of the patent landscape is crucial for companies wishing to secure patent protection and market dominance for their products. To our knowledge, there has been no prior systematic review of patents related to digital pathology. Therefore, the aim of this study was to systematically identify and evaluate United States patents and patent applications related to digital pathology. Issued patents and patent applications related to digital pathology published in the United States Patent and Trademark Office (USPTO) database (www.uspto.gov) (through January 2014) were searched using the Google Patents search engine (Google Inc., Mountain View, California, USA). Keywords and phrases related to digital pathology, whole-slide imaging (WSI), image analysis, and telepathology were used to query the USPTO database. Data were downloaded and analyzed using the Papers application (Mekentosj BV, Aalsmeer, Netherlands). A total of 588 United States patents that pertain to digital pathology were identified. In addition, 228 patent applications were identified, including 155 that were pending, 65 abandoned, and eight rejected. Of the 588 patents granted, 348 (59.18%) were specific to pathology, while 240 (40.82%) included more general patents also usable outside of pathology. There were 70 (21.12%) patents specific to pathology and 57 (23.75%) more general patents that had expired. Over 120 unique entities (individual inventors, academic institutions, and private companies) applied for pathology specific patents. Patents dealt largely with telepathology and image analysis. WSI related patents addressed image acquisition (scanning and focus), quality (z-stacks), management (storage, retrieval, and transmission of WSI files), and viewing (graphical user interface (GUI), workflow, slide navigation and remote control). An increasing number of recent patents focused on computer-aided diagnosis (CAD) and digital consultation networks. In the last 2 decades, there have been an increasing number of patents granted and patent applications filed related to digital pathology. The number of these patents quadrupled during the last decade, and this trend is predicted to intensify based on the number of patent applications already published by the USPTO.

  5. Digital pathology: A systematic evaluation of the patent landscape

    PubMed Central

    Cucoranu, Ioan C.; Parwani, Anil V.; Vepa, Suryanarayana; Weinstein, Ronald S.; Pantanowitz, Liron

    2014-01-01

    Introduction: Digital pathology is a relatively new field. Inventors of technology in this field typically file for patents to protect their intellectual property. An understanding of the patent landscape is crucial for companies wishing to secure patent protection and market dominance for their products. To our knowledge, there has been no prior systematic review of patents related to digital pathology. Therefore, the aim of this study was to systematically identify and evaluate United States patents and patent applications related to digital pathology. Materials and Methods: Issued patents and patent applications related to digital pathology published in the United States Patent and Trademark Office (USPTO) database (www.uspto.gov) (through January 2014) were searched using the Google Patents search engine (Google Inc., Mountain View, California, USA). Keywords and phrases related to digital pathology, whole-slide imaging (WSI), image analysis, and telepathology were used to query the USPTO database. Data were downloaded and analyzed using the Papers application (Mekentosj BV, Aalsmeer, Netherlands). Results: A total of 588 United States patents that pertain to digital pathology were identified. In addition, 228 patent applications were identified, including 155 that were pending, 65 abandoned, and eight rejected. Of the 588 patents granted, 348 (59.18%) were specific to pathology, while 240 (40.82%) included more general patents also usable outside of pathology. There were 70 (21.12%) patents specific to pathology and 57 (23.75%) more general patents that had expired. Over 120 unique entities (individual inventors, academic institutions, and private companies) applied for pathology specific patents. Patents dealt largely with telepathology and image analysis. WSI related patents addressed image acquisition (scanning and focus), quality (z-stacks), management (storage, retrieval, and transmission of WSI files), and viewing (graphical user interface (GUI), workflow, slide navigation and remote control). An increasing number of recent patents focused on computer-aided diagnosis (CAD) and digital consultation networks. Conclusion: In the last 2 decades, there have been an increasing number of patents granted and patent applications filed related to digital pathology. The number of these patents quadrupled during the last decade, and this trend is predicted to intensify based on the number of patent applications already published by the USPTO. PMID:25057430

  6. Retrieving high-resolution images over the Internet from an anatomical image database

    NASA Astrophysics Data System (ADS)

    Strupp-Adams, Annette; Henderson, Earl

    1999-12-01

    The Visible Human Data set is an important contribution to the national collection of anatomical images. To enhance the availability of these images, the National Library of Medicine has supported the design and development of a prototype object-oriented image database which imports, stores, and distributes high-resolution anatomical images in both pixel and voxel formats. One of the key database modules is its client-server Internet interface. This Web interface provides a query engine with retrieval access to high-resolution anatomical images that range in size from 100 KB for browser-viewable rendered images to 1 GB for anatomical structures in voxel file formats. The Web query and retrieval client-server system is composed of applet GUIs, servlets, and RMI application modules which communicate with each other to allow users to query for specific anatomical structures and retrieve image data as well as associated anatomical images from the database. Selected images can be downloaded individually as single files via HTTP or downloaded in batch mode over the Internet to the user's machine through an applet that uses Netscape's Object Signing mechanism. The image database uses ObjectDesign's object-oriented DBMS, ObjectStore, which has a Java interface. The query and retrieval system has been tested with a Java-CDE window system, and on the x86 architecture using Windows NT 4.0. This paper describes the Java applet client search engine that queries the database; the Java client module that enables users to view anatomical images online; the Java application server interface to the database, which organizes data returned to the user; and its distribution engine, which allows users to download image files individually and/or in batch mode.

  7. VizieR Online Data Catalog: Double-peaked narrow lines in AGN. II. z<0.1 (Nevin+, 2016)

    NASA Astrophysics Data System (ADS)

    Nevin, R.; Comerford, J.; Muller-Sanchez, F.; Barrows, R.; Cooper, M.

    2017-02-01

    To determine the nature of 71 Type 2 AGNs with double-peaked [OIII] emission lines in SDSS that are at z<0.1 and further characterize their properties, we observe them using two complementary follow-up methods: optical long-slit spectroscopy and Jansky Very Large Array (VLA) radio observations. We use various spectrographs with similar pixel scales (Lick Kast Spectrograph, Palomar Double Spectrograph, MMT Blue Channel Spectrograph, APO Dual Imaging Spectrograph, and Keck DEep Imaging Multi-Object Spectrograph). We use a 1200 lines/mm grating for all spectrographs; see table 1. In future work, we will combine our long-slit observations with the VLA data for the full sample of 71 galaxies (O. Muller-Sanchez+ 2016, in preparation). (4 data files).

  8. 78 FR 17233 - Notice of Opportunity To File Amicus Briefs

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-20

    .... Any commonly-used word processing format or PDF format is acceptable; text formats are preferable to image formats. Briefs may also be filed with the Office of the Clerk of the Board, Merit Systems...

  9. Centralized Accounting and Electronic Filing Provides Efficient Receivables Collection.

    ERIC Educational Resources Information Center

    School Business Affairs, 1983

    1983-01-01

    An electronic filing system makes financial control manageable at Bowling Green State University, Ohio. The system enables quick access to computer-stored consolidated account data and microfilm images of charges, statements, and other billing documents. (MLF)

  10. Forward and store telemedicine using Motion Pictures Expert Group: a novel approach to pediatric tele-echocardiography.

    PubMed

    Woodson, Kristina E; Sable, Craig A; Cross, Russell R; Pearson, Gail D; Martin, Gerard R

    2004-11-01

    Live transmission of echocardiograms over integrated services digital network lines is accurate and has led to improvements in the delivery of pediatric cardiology care. Permanent archiving of the live studies has not previously been reported. Specific obstacles to permanent storage of telemedicine files have included the ability to produce accurate images without a significant increase in storage requirements. We evaluated the accuracy of Motion Pictures Expert Group (MPEG) digitization of incoming video streams and assessed the storage requirements of these files for infants in a real-time pediatric tele-echocardiography program. All major cardiac diagnoses were correctly diagnosed by review of MPEG images. MPEG file size ranged from 11.1 to 182 MB (56.5 +/- 29.9 MB). MPEG digitization during live neonatal telemedicine is accurate and provides an efficient method for storage. This modality has acceptable storage requirements; file sizes are comparable to other digital modalities.

  11. Creating databases for biological information: an introduction.

    PubMed

    Stein, Lincoln

    2013-06-01

    The essence of bioinformatics is dealing with large quantities of information. Whether it be sequencing data, microarray data files, mass spectrometric data (e.g., fingerprints), the catalog of strains arising from an insertional mutagenesis project, or even large numbers of PDF files, there inevitably comes a time when the information can simply no longer be managed with files and directories. This is where databases come into play. This unit briefly reviews the characteristics of several database management systems, including flat file, indexed file, relational databases, and NoSQL databases. It compares their strengths and weaknesses and offers some general guidelines for selecting an appropriate database management system. Copyright 2013 by John Wiley & Sons, Inc.

  12. Computer image analysis in obtaining characteristics of images: greenhouse tomatoes in the process of generating learning sets of artificial neural networks

    NASA Astrophysics Data System (ADS)

    Zaborowicz, M.; Przybył, J.; Koszela, K.; Boniecki, P.; Mueller, W.; Raba, B.; Lewicki, A.; Przybył, K.

    2014-04-01

    The aim of the project was to develop software that, on the basis of an image of a greenhouse tomato, allows for the extraction of its characteristics. Data gathered during the image analysis and processing were used to build learning sets for artificial neural networks. The program processes pictures in JPEG format, acquires statistical information from the picture, and exports it to an external file. The software is intended to batch-analyze the collected research material, with the obtained information saved as a CSV file. The program analyzes 33 independent parameters to describe the tested image. The application is dedicated to the processing and image analysis of greenhouse tomatoes. The program can also be used for the analysis of other fruits and vegetables of a spherical shape.
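
    A minimal sketch of the batch image-statistics-to-CSV workflow described here, assuming Pillow and NumPy and a placeholder folder; the real program computes 33 parameters rather than the few shown:

```python
# Load each JPEG, compute a few per-channel statistics, and append them to a
# CSV file. Folder name and chosen statistics are illustrative.
import csv
from pathlib import Path
import numpy as np
from PIL import Image

with open("tomato_features.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["file", "mean_R", "mean_G", "mean_B", "std_gray"])
    for path in sorted(Path("images").glob("*.jpg")):
        rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
        gray = rgb.mean(axis=2)
        writer.writerow([path.name, *np.round(rgb.mean(axis=(0, 1)), 2), round(gray.std(), 2)])
```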

  13. Open-loop measurement of data sampling point for SPM

    NASA Astrophysics Data System (ADS)

    Wang, Yueyu; Zhao, Xuezeng

    2006-03-01

    The SPM (scanning probe microscope) provides "three-dimensional images" with nanometer-level resolution, and some SPMs can be used as metrology tools. However, SPM images are commonly distorted by non-ideal properties of the SPM's piezoelectric scanner, which reduces metrological accuracy and data repeatability. To eliminate this limitation, an "open-loop sampling" method is presented. In this method, the positional values of sampling points in all three directions on the surface of the sample are measured by the position sensor and recorded in the SPM's image file, which is used to replace the image file from a conventional SPM. Because the positions in the X and Y directions are measured at the same time as the height information in the Z direction is sampled, the image distortion caused by scanner locating error can be reduced by a proper image-processing algorithm.

  14. A cloud-based multimodality case file for mobile devices.

    PubMed

    Balkman, Jason D; Loehfelm, Thomas W

    2014-01-01

    Recent improvements in Web and mobile technology, along with the widespread use of handheld devices in radiology education, provide unique opportunities for creating scalable, universally accessible, portable image-rich radiology case files. A cloud database and a Web-based application for radiologic images were developed to create a mobile case file with reasonable usability, download performance, and image quality for teaching purposes. A total of 75 radiology cases related to breast, thoracic, gastrointestinal, musculoskeletal, and neuroimaging subspecialties were included in the database. Breast imaging cases are the focus of this article, as they best demonstrate handheld display capabilities across a wide variety of modalities. This case subset also illustrates methods for adapting radiologic content to cloud platforms and mobile devices. Readers will gain practical knowledge about storage and retrieval of cloud-based imaging data, an awareness of techniques used to adapt scrollable and high-resolution imaging content for the Web, and an appreciation for optimizing images for handheld devices. The evaluation of this software demonstrates the feasibility of adapting images from most imaging modalities to mobile devices, even in cases of full-field digital mammograms, where high resolution is required to represent subtle pathologic features. The cloud platform allows cases to be added and modified in real time by using only a standard Web browser with no application-specific software. Challenges remain in developing efficient ways to generate, modify, and upload radiologic and supplementary teaching content to this cloud-based platform. Online supplemental material is available for this article. ©RSNA, 2014.

  15. Imaging Systems: What, When, How.

    ERIC Educational Resources Information Center

    Lunin, Lois F.; And Others

    1992-01-01

    The three articles in this special section on document image files discuss intelligent character recognition, including comparison with optical character recognition; selection of displays for document image processing, focusing on paperlike displays; and imaging hardware, software, and vendors, including guidelines for system selection. (MES)

  16. Design and Implementation of a Metadata-rich File System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ames, S; Gokhale, M B; Maltzahn, C

    2010-01-19

    Despite continual improvements in the performance and reliability of large scale file systems, the management of user-defined file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and semantic metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, user-defined attributes, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS incorporates Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.
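
    The graph data model described for QFS can be sketched with plain Python structures; the file names, attributes, and relationship labels below are invented, and this is not the Quasar query language:

```python
# Files and user-defined attributes as first-class objects, with labeled
# relationships between files that can be traversed by a simple query.
from collections import defaultdict

files = {
    "run_042.raw":  {"instrument": "beamline-7", "quality": "good"},
    "run_042.meta": {"format": "json"},
    "summary.pdf":  {"author": "analyst"},
}
# directed, labeled edges: (source file) --relation--> (target file)
relations = defaultdict(list)
relations["run_042.raw"].append(("described_by", "run_042.meta"))
relations["run_042.raw"].append(("summarized_in", "summary.pdf"))

def query(attribute: str, value: str, relation: str):
    """Find files reachable via `relation` from files whose attribute matches."""
    for name, attrs in files.items():
        if attrs.get(attribute) == value:
            for rel, target in relations[name]:
                if rel == relation:
                    yield name, target

print(list(query("instrument", "beamline-7", "described_by")))
```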

  17. Low Cost Desktop Image Analysis Workstation With Enhanced Interactive User Interface

    NASA Astrophysics Data System (ADS)

    Ratib, Osman M.; Huang, H. K.

    1989-05-01

    A multimodality picture archiving and communication system (PACS) is in routine clinical use in the UCLA Radiology Department. Several types of workstations are currently implemented for this PACS. Among them, the Apple Macintosh II personal computer was recently chosen to serve as a desktop workstation for display and analysis of radiological images. This personal computer was selected mainly because of its extremely friendly user interface, its popularity among the academic and medical community, and its low cost. In comparison to other microcomputer-based systems, the Macintosh II offers the following advantages: the extreme standardization of its user interface, file system and networking, and the availability of a very large variety of commercial software packages. In the current configuration the Macintosh II operates as a stand-alone workstation where images are imported from a centralized PACS server through an Ethernet network using a standard TCP-IP protocol, and stored locally on magnetic disk. The use of high-resolution screens (1024 x 768 pixels x 8 bits) offers sufficient performance for image display and analysis. We focused our project on the design and implementation of a variety of image analysis algorithms ranging from automated structure and edge detection to sophisticated dynamic analysis of sequential images. Specific analysis programs were developed for ultrasound images, digitized angiograms, MRI and CT tomographic images and scintigraphic images.

  18. A Review of Aeromagnetic Anomalies in the Sawatch Range, Central Colorado

    USGS Publications Warehouse

    Bankey, Viki

    2010-01-01

    This report contains digital data and image files of aeromagnetic anomalies in the Sawatch Range of central Colorado. The primary product is a data layer of polygons with linked data records that summarize previous interpretations of aeromagnetic anomalies in this region. None of these data files and images are new; rather, they are presented in updated formats that are intended to be used as input to geographic information systems, standard graphics software, or map-plotting packages.

  19. Clinical validation of different echocardiographic motion pictures expert group-4 algorythms and compression levels for telemedicine.

    PubMed

    Barbier, Paolo; Alimento, Marina; Berna, Giovanni; Cavoretto, Dario; Celeste, Fabrizio; Muratori, Manuela; Guazzi, Maurizio D

    2004-01-01

    Tele-echocardiography is not widely used because of lengthy transmission times when standard Motion Pictures Expert Group (MPEG)-2 lossy compression algorithms are used, unless expensive high-bandwidth lines are available. We sought to validate the newer MPEG-4 algorithms to allow further reduction in echocardiographic motion video file size. Four cardiologists expert in echocardiography blindly read 165 randomized uncompressed and compressed 2D and color Doppler normal and pathologic motion images. One Digital Video and 3 MPEG-4 compression algorithms were tested, the latter at 3 decreasing compression quality levels (100%, 65% and 40%). Mean diagnostic and image quality scores were computed for each file and compared across the 3 compression levels using uncompressed files as controls. File sizes decreased from a range of 12-83 MB uncompressed to 0.03-2.3 MB with MPEG-4. All algorithms showed mean scores that were not significantly different from the uncompressed source, except the MPEG-4 DivX algorithm at the highest selected compression (40%, p=.002). These data support the use of MPEG-4 compression to reduce echocardiographic motion image size for transmission purposes, allowing cost reduction through the use of low-bandwidth lines.

  20. CEDIMS: cloud ethical DICOM image Mojette storage

    NASA Astrophysics Data System (ADS)

    Guédon, Jeanpierre; Evenou, Pierre; Tervé, Pierre; David, Sylvain; Béranger, Jérome

    2012-02-01

    DICOM images of patients will necessarily be stored in clouds. However, ethical constraints must apply. In this paper, a method which provides the two following conditions is presented: 1) the medical information is not readable by the cloud owner, since it is distributed across several clouds; 2) the medical information can be retrieved from any sufficient subset of clouds. In order to obtain this result with real-time processing, the Mojette transform is used. This paper reviews the interesting features of the Mojette transform in terms of information theory. Since only portions of the original DICOM files are stored in each cloud, their contents are not reachable. For instance, we use 4 different public clouds to save 4 different projections of each file, with the additional condition that any 3 of the 4 projections are enough to reconstruct the original file. Thus, even if a cloud is unavailable when the user wants to load a DICOM file, the other 3 give enough information for real-time reconstruction. The paper presents an implementation on 3 actual clouds. For ethical reasons, we use a DICOM image spread over 3 public clouds to show the obtained confidentiality and possible real-time recovery.
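    The "any 3 of 4 projections" property described above is a form of erasure coding. The sketch below is not the Mojette transform used in the paper but a much simpler XOR-parity analogue, intended only to illustrate how a file can be split across four stores so that any three suffice for reconstruction; all names and chunk sizes are hypothetical.

```python
# Minimal illustration of "any 3 of 4 chunks suffice" redundancy using XOR parity.
# This is NOT the Mojette transform described in the paper, only a simpler
# erasure-coding analogue; all names here are hypothetical.

def split_with_parity(data: bytes):
    """Split data into 3 equal chunks plus one XOR parity chunk (4 total)."""
    pad = (-len(data)) % 3
    data += b"\x00" * pad          # pad so the three data chunks are equal length
    n = len(data) // 3
    chunks = [data[i * n:(i + 1) * n] for i in range(3)]
    parity = bytes(a ^ b ^ c for a, b, c in zip(*chunks))
    return chunks + [parity], pad

def recover(chunks, pad):
    """Rebuild the original bytes from any 3 of the 4 chunks (one may be None)."""
    missing = [i for i, c in enumerate(chunks) if c is None]
    if missing:
        i = missing[0]
        present = [c for c in chunks if c is not None]
        rebuilt = bytes(a ^ b ^ c for a, b, c in zip(*present))   # XOR restores the lost chunk
        chunks = chunks[:i] + [rebuilt] + chunks[i + 1:]
    data = b"".join(chunks[:3])
    return data[:len(data) - pad] if pad else data

if __name__ == "__main__":
    original = b"pixel data from a DICOM file"
    parts, pad = split_with_parity(original)
    parts[1] = None                 # simulate one unavailable cloud
    assert recover(parts, pad) == original
```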

  1. Quantitative histogram analysis of images

    NASA Astrophysics Data System (ADS)

    Holub, Oliver; Ferreira, Sérgio T.

    2006-11-01

    A routine for histogram analysis of images has been written in the object-oriented, graphical development environment LabVIEW. The program converts an RGB bitmap image into an intensity-linear greyscale image according to selectable conversion coefficients. This greyscale image is subsequently analysed by plots of the intensity histogram and probability distribution of brightness, and by calculation of various parameters, including average brightness, standard deviation, variance, minimal and maximal brightness, mode, skewness and kurtosis of the histogram and the median of the probability distribution. The program allows interactive selection of specific regions of interest (ROI) in the image and definition of lower and upper threshold levels (e.g., to permit the removal of a constant background signal). The results of the analysis of multiple images can be conveniently saved and exported for plotting in other programs, which allows fast analysis of relatively large sets of image data. The program file accompanies this manuscript together with a detailed description of two application examples: the analysis of fluorescence microscopy images, specifically of tau-immunofluorescence in primary cultures of rat cortical and hippocampal neurons, and the quantification of protein bands by Western blot. The possibilities and limitations of this kind of analysis are discussed.
    Program summary:
    Title of program: HAWGC
    Catalogue identifier: ADXG_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXG_v1_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computers: Mobile Intel Pentium III, AMD Duron
    Installations: No installation necessary; executable file together with necessary files for LabVIEW Run-time engine
    Operating systems or monitors under which the program has been tested: Windows ME/2000/XP
    Programming language used: LabVIEW 7.0
    Memory required to execute with typical data: ~16 MB for starting and ~160 MB used for loading of an image
    No. of bits in a word: 32
    No. of processors used: 1
    Has the code been vectorized or parallelized?: No
    No. of lines in distributed program, including test data, etc.: 138 946
    No. of bytes in distributed program, including test data, etc.: 15 166 675
    Distribution format: tar.gz
    Nature of physical problem: Quantification of image data (e.g., for discrimination of molecular species in gels or fluorescent molecular probes in cell cultures) requires proprietary or complex software packages, which might not include the relevant statistical parameters or make the analysis of multiple images a tedious procedure for the general user.
    Method of solution: Tool for conversion of RGB bitmap image into luminance-linear image and extraction of luminance histogram, probability distribution, and statistical parameters (average brightness, standard deviation, variance, minimal and maximal brightness, mode, skewness and kurtosis of histogram and median of probability distribution) with possible selection of region of interest (ROI) and lower and upper threshold levels.
    Restrictions on the complexity of the problem: Does not incorporate application-specific functions (e.g., morphometric analysis)
    Typical running time: Seconds (depending on image size and processor speed)
    Unusual features of the program: None
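    The statistics listed in the summary (mean, variance, mode, skewness, kurtosis and the median within optional thresholds) can be expressed compactly outside LabVIEW. The following NumPy sketch mirrors that computation under assumed greyscale conversion coefficients and threshold defaults; it is an illustration, not the distributed HAWGC code.

```python
# Hedged sketch of the kind of histogram statistics the HAWGC tool reports,
# re-expressed with NumPy; the coefficients and thresholds below are illustrative,
# not the program's actual defaults.
import numpy as np

def histogram_stats(rgb, coeffs=(0.299, 0.587, 0.114), lo=0, hi=255):
    """RGB (H, W, 3) uint8 array -> greyscale statistics within [lo, hi] thresholds."""
    grey = (rgb.astype(float) @ np.asarray(coeffs)).ravel()   # RGB -> luminance
    grey = grey[(grey >= lo) & (grey <= hi)]                  # apply threshold levels
    hist, _ = np.histogram(grey, bins=256, range=(0, 255))
    mean, std = grey.mean(), grey.std()
    centred = grey - mean
    return {
        "mean": mean,
        "std": std,
        "variance": grey.var(),
        "min": grey.min(),
        "max": grey.max(),
        "mode": int(hist.argmax()),                # most frequent intensity bin
        "skewness": (centred**3).mean() / std**3,
        "kurtosis": (centred**4).mean() / std**4,
        "median": float(np.median(grey)),
    }

rgb = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
print(histogram_stats(rgb))
```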

  2. Testing the Forensic Interestingness of Image Files Based on Size and Type

    DTIC Science & Technology

    2017-09-01

    There were still a lot of uninteresting files that were marked as interesting. Also, the results do not show a correlation between the...

  3. Baseline coastal oblique aerial photographs collected from the Virginia/North Carolina border to Montauk Point, New York, October 5-6, 2014

    USGS Publications Warehouse

    Morgan, Karen L. M.

    2015-10-02

    In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML files were created using the photographic navigation files.

  4. Electronic hand-drafting and picture management system.

    PubMed

    Yang, Tsung-Han; Ku, Cheng-Yuan; Yen, David C; Hsieh, Wen-Huai

    2012-08-01

    The Department of Health of the Executive Yuan in Taiwan (R.O.C.) is implementing a five-stage project entitled Electronic Medical Record (EMR), converting all health records from written to electronic form. Traditionally, physicians record patients' symptoms, related examinations, and suggested treatments on paper medical records. Currently, when implementing the EMR, all text files and image files in the Hospital Information System (HIS) and Picture Archiving and Communication Systems (PACS) are kept separate. The current medical system environment is unable to combine text files, hand-drafted files, and photographs in the same system, so it is difficult to support physicians with the recording of medical data. Furthermore, in surgical and other related departments, physicians need immediate access to medical records in order to understand the details of a patient's condition. In order to address these problems, the Department of Health has implemented an EMR project, with the primary goal of building an electronic hand-drafting and picture management system (HDP system) that can be used by medical personnel to record medical information in a convenient way. This system can simultaneously edit text files, hand-drafted files, and image files and then integrate these data into Portable Document Format (PDF) files. In addition, the output is designed to fit a variety of formats in order to meet various laws and regulations. By combining the HDP system with HIS and PACS, its applicability can be enhanced to fit various scenarios, assisting the medical industry in moving into the final phase of EMR.

  5. NOTE: MMCTP: a radiotherapy research environment for Monte Carlo and patient-specific treatment planning

    NASA Astrophysics Data System (ADS)

    Alexander, A.; DeBlois, F.; Stroian, G.; Al-Yahya, K.; Heath, E.; Seuntjens, J.

    2007-07-01

    Radiotherapy research lacks a flexible computational research environment for Monte Carlo (MC) and patient-specific treatment planning. The purpose of this study was to develop a flexible software package on low-cost hardware with the aim of integrating new patient-specific treatment planning with MC dose calculations suitable for large-scale prospective and retrospective treatment planning studies. We designed the software package 'McGill Monte Carlo treatment planning' (MMCTP) for the research development of MC and patient-specific treatment planning. The MMCTP design consists of a graphical user interface (GUI), which runs on a simple workstation connected through standard secure-shell protocol to a cluster for lengthy MC calculations. Treatment planning information (e.g., images, structures, beam geometry properties and dose distributions) is converted into a convenient MMCTP local file storage format designated the McGill RT format. MMCTP features include (a) DICOM_RT, RTOG and CADPlan CART format imports; (b) 2D and 3D visualization views for images, structure contours, and dose distributions; (c) contouring tools; (d) DVH analysis and dose matrix comparison tools; (e) external beam editing; (f) MC transport calculation from beam source to patient geometry for photon and electron beams. The MC input files, which are prepared from the beam geometry properties and patient information (e.g., images and structure contours), are uploaded and run on a cluster using shell commands controlled from the MMCTP GUI. The visualization, dose matrix operation and DVH tools offer extensive options for plan analysis and comparison between MC plans and plans imported from commercial treatment planning systems. The MMCTP GUI provides a flexible research platform for the development of patient-specific MC treatment planning for photon and electron external beam radiation therapy. The impact of this tool lies in the fact that it allows for systematic, platform-independent, large-scale MC treatment planning for different treatment sites. Patient recalculations were performed to validate the software and ensure proper functionality.

  6. The Use of an On-Board MV Imager for Plan Verification of Intensity Modulated Radiation Therapy and Volumetrically Modulated Arc Therapy

    NASA Astrophysics Data System (ADS)

    Walker, Justin A.

    The introduction of complex treatment modalities such as IMRT and VMAT has led to the development of many devices for plan verification. One such innovation in this field is the repurposing of the portal imager to be used not only for tumor localization but for recording dose distributions as well. Several advantages make portal imagers attractive options for this purpose. Very high spatial resolution allows for better verification of small-field plans than may be possible with commercially available devices. Because the portal imager is attached to the gantry, setup is simpler than with any other method available, requiring no additional accessories, and often can be accomplished from outside the treatment room. Dose images captured by the portal imager are in digital format and make permanent records that can be analyzed immediately. Portal imaging suffers from a few limitations, however, that must be overcome. Images captured contain dose information, and a calibration must be maintained for image-to-dose conversion. Dose images can only be taken perpendicular to the treatment beam, allowing only for planar dose comparison. Planar dose files are themselves difficult to obtain for VMAT treatments, and an in-house script had to be developed to create such a file before analysis could be performed. Using the methods described in this study, excellent agreement between the planar dose files generated and the dose images taken was found. The average agreement for the IMRT fields analyzed was greater than 97% for non-normalized images at 3 mm and 3%. Comparable agreement for VMAT plans was found as well, with the average agreement being greater than 98%.
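    As a rough illustration of the planar dose comparison described above, the sketch below computes only a global dose-difference pass rate at an assumed 3% criterion; the study's 3 mm / 3% figure refers to a full gamma analysis, which additionally searches a distance-to-agreement neighbourhood and is omitted here.

```python
# Hedged, simplified planar-dose agreement check: the fraction of pixels whose
# dose differs from the plan by less than 3% of the maximum planned dose.
# A full gamma analysis (3 mm / 3%) would also search nearby pixels for
# distance-to-agreement; that search is deliberately left out of this sketch.
import numpy as np

def dose_difference_pass_rate(planned, measured, percent=3.0):
    tol = (percent / 100.0) * planned.max()      # global dose-difference tolerance
    diff = np.abs(planned - measured)
    return 100.0 * np.mean(diff <= tol)

planned = np.random.rand(128, 128)                               # stand-in planar dose
measured = planned + np.random.normal(0, 0.005, planned.shape)   # stand-in portal dose
print(f"{dose_difference_pass_rate(planned, measured):.1f}% of pixels within 3%")
```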

  7. Home teleradiology system

    NASA Astrophysics Data System (ADS)

    Komo, Darmadi; Garra, Brian S.; Freedman, Matthew T.; Mun, Seong K.

    1997-05-01

    The Home Teleradiology Server system has been developed and installed at the Department of Radiology, Georgetown University Medical Center. The main purpose of the system is to provide a service for on-call physicians to view patients' medical images at home during off-hours. This service will reduce the overhead time required by on-call physicians to travel to the hospital, thereby increasing the efficiency of patient care and improving the total quality of the health care. Typically when a new case is conducted, the medical images generated from CT, US, and/or MRI modalities are transferred to a central server at the hospital via DICOM messages over an existing hospital network. The server has a DICOM network agent that listens to DICOM messages sent by CT, US, and MRI modalities and stores them into separate DICOM files for sending purposes. The server also has a general purpose, flexible scheduling software that can be configured to send image files to specific user(s) at certain times on any day(s) of the week. The server will then distribute the medical images to on-call physicians' homes via a high-speed modem. All file transmissions occur in the background without human interaction after the scheduling software is pre-configured accordingly. At the receiving end, the physicians' computers consist of high-end workstations that have high-speed modems to receive the medical images sent by the central server from the hospital, and DICOM compatible viewer software to view the transmitted medical images in DICOM format. A technician from the hospital will notify the physician(s) after all the image files have been completely sent. The physician(s) will then examine the medical images and decide if it is necessary to travel to the hospital for further examination of the patients. Overall, the Home Teleradiology system provides the on-call physicians with a cost-effective and convenient environment for viewing patients' medical images at home.

  8. An Efficient Format for Nearly Constant-Time Access to Arbitrary Time Intervals in Large Trace Files

    DOE PAGES

    Chan, Anthony; Gropp, William; Lusk, Ewing

    2008-01-01

    A powerful method to aid in understanding the performance of parallel applications uses log or trace files containing time-stamped events and states (pairs of events). These trace files can be very large, often hundreds or even thousands of megabytes. Because of the cost of accessing and displaying such files, other methods are often used that reduce the size of the tracefiles at the cost of sacrificing detail or other information. This paper describes a hierarchical trace file format that provides for display of an arbitrary time window in a time independent of the total size of the file and roughly proportional to the number of events within the time window. This format eliminates the need to sacrifice data to achieve a smaller trace file size (since storage is inexpensive, it is necessary only to make efficient use of bandwidth to that storage). The format can be used to organize a trace file or to create a separate file of annotations that may be used with conventional trace files. We present an analysis of the time to access all of the events relevant to an interval of time and we describe experiments demonstrating the performance of this file format.
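    The key idea, access time governed by the window rather than by the file, can be illustrated with a toy in-memory index. The sketch below is not the paper's hierarchical on-disk format; it only shows why a sorted or indexed organisation makes window extraction roughly proportional to the number of events returned.

```python
# Simplified illustration of time-window lookup over time-stamped events.
# The paper describes a hierarchical on-disk layout; this in-memory sketch only
# shows why an indexed organisation makes the cost of extracting a window
# roughly proportional to the events inside it, not to the whole trace.
import bisect

def events_in_window(timestamps, events, t0, t1):
    """Return events whose timestamp lies in [t0, t1]; timestamps must be sorted."""
    lo = bisect.bisect_left(timestamps, t0)    # O(log N) to locate the window start
    hi = bisect.bisect_right(timestamps, t1)   # O(log N) to locate the window end
    return events[lo:hi]                       # cost ~ number of events returned

ts = [0.1, 0.4, 0.9, 1.3, 2.7, 3.2]
ev = ["send", "recv", "barrier", "send", "recv", "wait"]
print(events_in_window(ts, ev, 0.5, 3.0))      # ['barrier', 'send', 'recv']
```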

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burnett, R.A.

    A major goal of the Analysis of Large Data Sets (ALDS) research project at Pacific Northwest Laboratory (PNL) is to provide efficient data organization, storage, and access capabilities for statistical applications involving large amounts of data. As part of the effort to achieve this goal, a self-describing binary (SDB) data file structure has been designed and implemented together with a set of basic data manipulation functions and supporting SDB data access routines. Logical and physical data descriptors are stored in SDB files preceding the data values. SDB files thus provide a common data representation for interfacing diverse software components. This paper describes the various types of data descriptors and data structures permitted by the file design. Data buffering, file segmentation, and a segment overflow handler are also discussed.
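    A minimal sketch of the self-describing idea, a descriptor stored ahead of the data values so that a reader needs no external schema, is shown below. The field layout and encoding are hypothetical and are not the actual SDB format from the report.

```python
# Minimal sketch of a self-describing binary file: a small header describing
# the payload precedes the data values, so a reader needs no external schema.
# The field layout here is hypothetical, not the actual SDB format.
import struct, json

def write_sdb(path, name, values):
    desc = json.dumps({"name": name, "dtype": "float64", "count": len(values)}).encode()
    with open(path, "wb") as f:
        f.write(struct.pack("<I", len(desc)))               # descriptor length
        f.write(desc)                                       # logical/physical descriptor
        f.write(struct.pack(f"<{len(values)}d", *values))   # data values follow

def read_sdb(path):
    with open(path, "rb") as f:
        (dlen,) = struct.unpack("<I", f.read(4))
        desc = json.loads(f.read(dlen))
        data = struct.unpack(f"<{desc['count']}d", f.read(8 * desc["count"]))
    return desc, list(data)

write_sdb("sample.sdb", "temperature", [1.5, 2.0, 3.25])
print(read_sdb("sample.sdb"))
```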

  10. TU-CD-304-11: Veritas 2.0: A Cloud-Based Tool to Facilitate Research and Innovation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, P; Patankar, A; Etmektzoglou, A

    Purpose: We introduce Veritas 2.0, a cloud-based, non-clinical research portal, to facilitate translation of radiotherapy research ideas to new delivery techniques. The ecosystem of research tools includes web apps for a research beam builder for TrueBeam Developer Mode, an image reader for compressed and uncompressed XIM files, and a trajectory log file based QA/beam delivery analyzer. Methods: The research beam builder can generate a TrueBeam-readable XML file either from scratch or from pre-existing DICOM-RT plans. The DICOM-RT plan is first converted to XML format, and then the researcher can interactively modify or add control points to it. The delivered beam can be verified by reading the generated images and analyzing trajectory log files. The image reader can read both uncompressed and HND-compressed XIM images. The trajectory log analyzer lets researchers plot expected vs. actual values and deviations among 30 mechanical axes. The analyzer gives an animated view of MLC patterns for the beam delivery. Veritas 2.0 is freely available, and its advantages over standalone software are i) no software installation or maintenance needed, ii) easy accessibility across all devices, iii) seamless upgrades, and iv) OS independence. Veritas is written using open-source tools like Twitter Bootstrap, jQuery, Flask, and Python-based modules. Results: In the first experiment, an anonymized 7-beam DICOM-RT IMRT plan was converted to an XML beam containing 1400 control points. kV and MV imaging points were inserted into this XML beam. In another experiment, a binary log file was analyzed to compare actual vs. expected values and deviations among axes. Conclusions: Veritas 2.0 is a public cloud-based web app that hosts a pool of research tools for facilitating research from conceptualization to verification. It is aimed at providing a platform for facilitating research and collaboration. I am a full-time employee at Varian Medical Systems, Palo Alto.

  11. VizieR Online Data Catalog: WSRT survey of Cygnus OB2 (Setia Gunawan+, 2003)

    NASA Astrophysics Data System (ADS)

    Setia Gunawan, D. Y. A.; de Bruyn, A. G.; van der Hucht, K. A.; Williams, P. M.

    2003-11-01

    The Cygnus region is too large to be imaged with a single pointing of the 25m dishes of the WSRT. The half-power beamwidths (HPBW) of the WSRT at 350 and 1400MHz are about 2.4{deg} and 0.6{deg}, respectively. Therefore, we used a mosaicking technique at both frequencies. The 350MHz observations were taken in 1994 as part of a larger survey of the Galactic plane in the Cygnus area (Vashist & de Bruyn, unpublished); only a small part of it is used in this study. (2 data files).

  12. Construction of image database for newspaper articles using CTS

    NASA Astrophysics Data System (ADS)

    Kamio, Tatsuo

    Nihon Keizai Shimbun, Inc. developed a system that automatically builds an image database of newspaper articles using CTS (Computer Typesetting System). Besides the article text and headlines entered in CTS, it reproduces the images of elements such as photographs and graphs for each article, in accordance with their position on the page. So to speak, the computer itself clips the articles out of the newspaper. The image database is accumulated on magnetic and optical files and is output to users' facsimile machines. With the diffusion of CTS, the number of newspaper companies beginning to build article databases is increasing rapidly; the system described here is the first attempt to build such a database automatically. This paper describes the CTS facilities that support this system and gives an outline of the system.

  13. Malpractice claims related to musculoskeletal imaging. Incidence and anatomical location of lesions.

    PubMed

    Fileni, Adriano; Fileni, Gaia; Mirk, Paoletta; Magnavita, Giulia; Nicoli, Marzia; Magnavita, Nicola

    2013-12-01

    Failure to detect lesions of the musculoskeletal system is a frequent cause of malpractice claims against radiologists. We examined all the malpractice claims related to alleged errors in musculoskeletal imaging filed against Italian radiologists over a period of 14 years (1993-2006). During the period considered, a total of 416 claims for alleged diagnostic errors relating to the musculoskeletal system were filed against radiologists; of these, 389 (93.5%) concerned failure to report fractures, and 15 (3.6%) failure to diagnose a tumour. Incorrect interpretation of bone pathology is among the most common causes of litigation against radiologists; alone, it accounts for 36.4% of all malpractice claims filed during the observation period. Awareness of this risk should encourage extreme caution and diligence.

  14. Implementation of a Landscape Lighting System to Display Images

    NASA Astrophysics Data System (ADS)

    Sun, Gi-Ju; Cho, Sung-Jae; Kim, Chang-Beom; Moon, Cheol-Hong

    The system implemented in this study consists of a PC, MASTER, SLAVEs and MODULEs. The PC sets the various landscape lighting displays, and the image files can be sent to the MASTER through a virtual serial port connected to the USB (Universal Serial Bus). The MASTER sends a sync signal to the SLAVE. The SLAVE uses the signal received from the MASTER and the landscape lighting display pattern. The video file is saved in the NAND Flash memory and the R, G, B signals are separated using the self-made display signal and sent to the MODULE so that it can display the image.

  15. Digital Aeromagnetic Data and Derivative Products from a Helicopter Survey over the Town of Taos and Surrounding Areas, Taos County, New Mexico

    USGS Publications Warehouse

    Bankey, Viki; Grauch, V.J.S.; ,

    2004-01-01

    This report contains digital data, image files, and text files describing data formats and survey procedures for aeromagnetic data collected during a helicopter geophysical survey in northern New Mexico during October 2003. The survey covers the Town of Taos, Taos Pueblo, and surrounding communities in Taos County. Several derivative products from these data are also presented, including reduced-to-pole, horizontal gradient magnitude, and downward continued grids and images.

  16. Digital aeromagnetic data and derivative products from a helicopter survey over the town of Blanca and surrounding areas, Alamosa and Costilla counties, Colorado

    USGS Publications Warehouse

    Bankey, Viki; Grauch, V.J.S.; ,

    2004-01-01

    This CD-ROM contains digital data, image files, and text files describing data formats and survey procedures for aeromagnetic data collected during a helicopter geophysical survey in southern Colorado during October 2003. The survey covers the town of Blanca and surrounding communities in Alamosa and Costilla Counties. Several derivative products from these data are also presented, including reduced-to-pole, horizontal gradient magnitude, and downward continued grids and images.

  17. Astrometry with A-Track Using Gaia DR1 Catalogue

    NASA Astrophysics Data System (ADS)

    Kılıç, Yücel; Erece, Orhan; Kaplan, Murat

    2018-04-01

    In this work, we built all-sky index files from the Gaia DR1 catalogue for the high-precision astrometric field solution and the precise WCS coordinates of the moving objects. For this, we used the build-astrometry-index program, which is part of the astrometry.net code suite. Additionally, we added astrometry.net's WCS solution tool to our previously developed software, A-Track, a fast and robust pipeline for detecting moving objects such as asteroids and comets in sequential FITS images. Moreover, an MPC module was added to A-Track. This module is linked to an asteroid database to name the found objects and prepare the MPC file to report the results. After these innovations, we tested a new version of the A-Track code on photometric data taken by the SI-1100 CCD with the 1-meter telescope at TÜBİTAK National Observatory, Antalya. The pipeline can be used to analyse large data archives or daily sequential data. The code is hosted on GitHub under the GNU GPL v3 license.

  18. 77 FR 11483 - Request for Extension of a Currently Approved Information Collection

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-27

    ... that is a scanned Adobe PDF file, it must be scanned as text and not as an image, thus allowing FCIC to... February 17, 2012. William J. Murphy, Manager, Federal Crop Insurance Corporation. [FR Doc. 2012-4465 Filed...

  19. Cycle time reduction by Html report in mask checking flow

    NASA Astrophysics Data System (ADS)

    Chen, Jian-Cheng; Lu, Min-Ying; Fang, Xiang; Shen, Ming-Feng; Ma, Shou-Yuan; Yang, Chuen-Huei; Tsai, Joe; Lee, Rachel; Deng, Erwin; Lin, Ling-Chieh; Liao, Hung-Yueh; Tsai, Jenny; Bowhill, Amanda; Vu, Hien; Russell, Gordon

    2017-07-01

    The Mask Data Correctness Check (MDCC) is a reticle-level, multi-layer DRC-like check evolved from mask rule check (MRC). The MDCC uses an extended job deck (EJB) to achieve mask composition and to perform a detailed check of the positioning and integrity of each component of the reticle. Different design patterns on the mask are mapped to different layers. Therefore, users are able to review the whole reticle and check the interactions between different designs before the final mask pattern file is available. However, many types of MDCC check results, such as errors from overlapping patterns, usually have very large and complex-shaped highlighted areas covering the boundary of the design. Users have to load the result OASIS file, overlay it on the original database that was assembled in the MDCC process in a layout viewer, and then search for the details of the check results. We introduce a quick result-reviewing method based on an HTML-format report generated by Calibre® RVE. In the report generation process, we analyze and extract the essential part of the result OASIS file into a result database (RDB) file using standard verification rule format (SVRF) commands. Calibre® RVE automatically loads the assembled reticle pattern and generates screenshots of these check results. All the processes are automatically triggered just after the MDCC process finishes. Users just have to open the HTML report to get the information they need: for example, the check summary, captured images of results, and their coordinates.

  20. MrEnt: an editor for publication-quality phylogenetic tree illustrations.

    PubMed

    Zuccon, Alessandro; Zuccon, Dario

    2014-09-01

    We developed MrEnt, a Windows-based, user-friendly software package that allows the production of complex, high-resolution, publication-quality phylogenetic trees in a few steps, directly from the analysis output. The program recognizes the standard Nexus tree format and the annotated tree files produced by BEAST and MrBayes. MrEnt combines in a single package a large suite of tree manipulation functions (e.g. handling of multiple trees, tree rotation, character mapping, node collapsing, compression of large clades, handling of time scale and error bars for chronograms) with drawing tools typical of standard graphic editors, including handling of graphic elements and images. The tree illustration can be printed or exported in several standard formats suitable for journal publication, PowerPoint presentation or Web publication. © 2014 John Wiley & Sons Ltd.

  1. D-GENIES: dot plot large genomes in an interactive, efficient and simple way.

    PubMed

    Cabanettes, Floréal; Klopp, Christophe

    2018-01-01

    Dot plots are widely used to quickly compare sequence sets. They provide a synthetic similarity overview, highlighting repetitions, breaks and inversions. Different tools have been developed to easily generate genomic alignment dot plots, but they are often limited in the input sequence size. D-GENIES is a standalone and web application performing large genome alignments using the minimap2 software package and generating interactive dot plots. It enables users to sort query sequences along the reference, zoom in the plot and download several image, alignment or sequence files. D-GENIES is an easy-to-install, open-source software package (GPL) developed in Python and JavaScript. The source code is available at https://github.com/genotoul-bioinfo/dgenies and it can be tested at http://dgenies.toulouse.inra.fr/.

  2. The AstroHDF Effort

    NASA Astrophysics Data System (ADS)

    Masters, J.; Alexov, A.; Folk, M.; Hanisch, R.; Heber, G.; Wise, M.

    2012-09-01

    Here we update the astronomy community on our effort to deal with the demands of ever-increasing astronomical data size and complexity, using the Hierarchical Data Format, version 5 (HDF5) format (Wise et al. 2011). NRAO, LOFAR and VAO have joined forces with The HDF Group to write an NSF grant, requesting funding to assist in the effort. This paper briefly summarizes our motivation for the proposed project, an outline of the project itself, and some of the material discussed at the ADASS Birds of a Feather (BoF) discussion. Topics of discussion included: community experiences with HDF5 and other file formats; toolsets which exist and/or can be adapted for HDF5; a call for development towards visualizing large (> 1 TB) image cubes; and, general lessons learned from working with large and complex data.
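    For readers unfamiliar with HDF5, the following h5py sketch shows the kind of chunked, attributed dataset that makes partial reads of a large image cube practical; the dataset name, chunk shape, and compression choice are illustrative assumptions, not recommendations from the project.

```python
# Hedged sketch of how a large image cube might be stored in HDF5 with h5py;
# the dataset name, chunk shape, and compression choice are illustrative only.
import h5py
import numpy as np

cube = np.random.rand(64, 512, 512).astype("float32")   # stand-in for an image cube

with h5py.File("cube.h5", "w") as f:
    dset = f.create_dataset("image_cube", data=cube,
                            chunks=(1, 256, 256),        # chunking enables partial reads
                            compression="gzip")
    dset.attrs["BUNIT"] = "Jy/beam"                      # metadata travels with the data

with h5py.File("cube.h5", "r") as f:
    plane = f["image_cube"][10]      # reads only the chunks covering plane 10
    print(plane.shape, f["image_cube"].attrs["BUNIT"])
```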

  3. Glide path preparation in S-shaped canals with rotary pathfinding nickel-titanium instruments.

    PubMed

    Ajuz, Natasha C C; Armada, Luciana; Gonçalves, Lucio S; Debelian, Gilberto; Siqueira, José F

    2013-04-01

    This study compared the incidence of deviation along S-shaped (double-curved) canals after glide path preparation with 2 nickel-titanium (NiTi) rotary pathfinding instruments and hand K-files. S-shaped canals from 60 training blocks were filled with ink, and preinstrumentation images were obtained by using a stereomicroscope. Glide path preparation was performed by an endodontist who used hand stainless steel K-files (up to size 20), rotary NiTi PathFile instruments (up to size 19), or rotary NiTi Scout RaCe instruments (up to size 20). Postinstrumentation images were taken by using exactly the same conditions as for the preinstrumentation images, and both pictures were superimposed. Differences along the S-shaped canal for the mesial and distal aspects were measured to evaluate the occurrence of deviation. Intragroup analysis showed that all instruments promoted some deviation in virtually all levels. Overall, regardless of the group, deviations were observed in the mesial wall at the canal terminus and at levels 4, 5, 6 and 7 mm and in the distal wall at levels 1, 2, and 3 mm. These levels corresponded to the inner walls of each curvature. Both rotary NiTi instruments performed significantly better than hand K-files at all levels (P < .05), except for PathFiles at the 0-mm level. ScoutRaCe instruments showed significantly better results than PathFiles at levels 0, 2, 3, 5, and 6 mm (P < .05). Findings suggest that rotary NiTi instruments are suitable for adequate glide path preparation because they promoted less deviation from the original canal anatomy when compared with hand-operated instruments. Of the 2 rotary pathfinding instruments, Scout RaCe showed an overall significantly better performance. Copyright © 2013 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  4. Characterizing parallel file-access patterns on a large-scale multiprocessor

    NASA Technical Reports Server (NTRS)

    Purakayastha, Apratim; Ellis, Carla Schlatter; Kotz, David; Nieuwejaar, Nils; Best, Michael

    1994-01-01

    Rapid increases in the computational speeds of multiprocessors have not been matched by corresponding performance enhancements in the I/O subsystem. To satisfy the large and growing I/O requirements of some parallel scientific applications, we need parallel file systems that can provide high-bandwidth and high-volume data transfer between the I/O subsystem and thousands of processors. Design of such high-performance parallel file systems depends on a thorough grasp of the expected workload. So far there have been no comprehensive usage studies of multiprocessor file systems. Our CHARISMA project intends to fill this void. The first results from our study involve an iPSC/860 at NASA Ames. This paper presents results from a different platform, the CM-5 at the National Center for Supercomputing Applications. The CHARISMA studies are unique because we collect information about every individual read and write request and about the entire mix of applications running on the machines. The results of our trace analysis lead to recommendations for parallel file system design. First, the file system should support efficient concurrent access to many files and I/O requests from many jobs under varying load conditions. Second, it must efficiently manage large files kept open for long periods. Third, it should expect to see small requests, predominantly sequential access patterns, application-wide synchronous access, no concurrent file-sharing between jobs, appreciable byte and block sharing between processes within jobs, and strong interprocess locality. Finally, the trace data suggest that node-level write caches and collective I/O request interfaces may be useful in certain environments.

  5. Data storage and retrieval system

    NASA Technical Reports Server (NTRS)

    Nakamoto, Glen

    1991-01-01

    The Data Storage and Retrieval System (DSRS) consists of off-the-shelf system components integrated as a file server supporting very large files. These files are on the order of one gigabyte of data per file, although smaller files on the order of one megabyte can be accommodated as well. For instance, one gigabyte of data occupies approximately six 9 track tape reels (recorded at 6250 bpi). Due to this large volume of media, it was desirable to shrink the size of the proposed media to a single portable cassette. In addition to large size, a key requirement was that the data needs to be transferred to a (VME based) workstation at very high data rates. One gigabyte (GB) of data needed to be transferred from an archiveable media on a file server to a workstation in less than 5 minutes. Equivalent size, on-line data needed to be transferred in less than 3 minutes. These requirements imply effective transfer rates on the order of four to eight megabytes per second (4-8 MB/s). The DSRS also needed to be able to send and receive data from a variety of other sources accessible from an Ethernet local area network.

  6. Data storage and retrieval system

    NASA Technical Reports Server (NTRS)

    Nakamoto, Glen

    1992-01-01

    The Data Storage and Retrieval System (DSRS) consists of off-the-shelf system components integrated as a file server supporting very large files. These files are on the order of one gigabyte of data per file, although smaller files on the order of one megabyte can be accommodated as well. For instance, one gigabyte of data occupies approximately six 9-track tape reels (recorded at 6250 bpi). Due to this large volume of media, it was desirable to 'shrink' the size of the proposed media to a single portable cassette. In addition to large size, a key requirement was that the data needs to be transferred to a (VME based) workstation at very high data rates. One gigabyte (GB) of data needed to be transferred from an archiveable media on a file server to a workstation in less than 5 minutes. Equivalent size, on-line data needed to be transferred in less than 3 minutes. These requirements imply effective transfer rates on the order of four to eight megabytes per second (4-8 MB/s). The DSRS also needed to be able to send and receive data from a variety of other sources accessible from an Ethernet local area network.

  7. MSL: Facilitating automatic and physical analysis of published scientific literature in PDF format.

    PubMed

    Ahmed, Zeeshan; Dandekar, Thomas

    2015-01-01

    Published scientific literature contains millions of figures, including information about the results obtained from different scientific experiments, e.g. PCR-ELISA data, microarray analysis, gel electrophoresis, mass spectrometry data, DNA/RNA sequencing, diagnostic imaging (CT/MRI and ultrasound scans), and medicinal imaging like electroencephalography (EEG), magnetoencephalography (MEG), electrocardiography (ECG), and positron-emission tomography (PET) images. The importance of biomedical figures has been widely recognized in the scientific and medical communities, as they play a vital role in providing major original data and experimental and computational results in concise form. One major challenge for implementing a system for scientific literature analysis is extracting and analyzing text and figures from published PDF files by physical and logical document analysis. Here we present a product line architecture based bioinformatics tool, 'Mining Scientific Literature (MSL)', which supports the extraction of text and images by interpreting all kinds of published PDF files using advanced data mining and image processing techniques. It provides modules for the marginalization of extracted text based on different coordinates and keywords, visualization of extracted figures and extraction of embedded text from all kinds of biological and biomedical figures using applied Optical Character Recognition (OCR). Moreover, for further analysis and usage, it generates the system's output in different formats including text, PDF, XML and image files. Hence, MSL is an easy to install and use analysis tool to interpret published scientific literature in PDF format.

  8. In vitro evaluation of efficacy of different rotary instrument systems for gutta percha removal during root canal retreatment

    PubMed Central

    Joseph, Mercy; Malhotra, Amit; Rao, Murali; Sharma, Abhimanyu; Talwar, Sangeeta

    2016-01-01

    Background Complete removal of old filling material during root canal retreatment is fundamental for predictable cleaning and shaping of canal anatomy. Most of the retreatment methods tested in earlier studies have shown an inability to achieve complete removal of root canal filling. Therefore the aim of this investigation was to assess the efficacy of three different rotary nickel titanium retreatment systems and Hedstrom files in removing filling material from root canals. Material and Methods Sixty extracted mandibular premolars were decoronated to leave a 15 mm root. Specimens were hand instrumented and obturated using gutta percha and AH Plus root canal sealer. After a storage period of two weeks, roots were retreated with three rotary retreatment instrument systems (ProTaper retreatment files, Mtwo retreatment files, NRT GPR) and Hedstrom files. Subsequently, samples were sectioned longitudinally and examined under a stereomicroscope. Digital images were recorded and evaluated using digital image analysing software. The retreatment time was recorded for each tooth using a stopwatch. The area of the canal and the residual filling material was recorded in mm2 and the percentage of remaining filling material on canal walls was calculated. Data were analysed using the ANOVA test. Results A significantly smaller amount of residual filling material was present in ProTaper- and Mtwo-instrumented teeth (p < 0.05) compared to the NRT GPR and Hedstrom files groups. ProTaper instruments also required less time for removal of filling material, followed by Mtwo instruments, NRT GPR files and Hedstrom files. Conclusions None of the instruments were able to remove the filling material completely from the root canal. The ProTaper Universal retreatment system and Mtwo retreatment files were more efficient and faster compared to NRT GPR files and Hedstrom files. Key words: Gutta-percha removal, nickel titanium, root canal retreatment, rotary instruments. PMID:27703601

  9. Project BALLOTS: Bibliographic Automation of Large Library Operations Using a Time-Sharing System. Progress Report (3/27/69 - 6/26/69).

    ERIC Educational Resources Information Center

    Veaner, Allen B.

    Project BALLOTS is a large-scale library automation development project of the Stanford University Libraries which has demonstrated the feasibility of conducting on-line interactive searches of complex bibliographic files, with a large number of users working simultaneously in the same or different files. This report documents the continuing…

  10. Distributed File System Utilities to Manage Large Datasets, Version 0.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-05-21

    FileUtils provides a suite of tools to manage large datasets typically created by large parallel MPI applications. They are written in C and use standard POSIX I/O calls. The current suite consists of tools to copy, compare, remove, and list. The tools provide dramatic speedup over existing Linux tools, which often run as a single process.

  11. Method for positron emission mammography image reconstruction

    DOEpatents

    Smith, Mark Frederick

    2004-10-12

    An image reconstruction method comprising accepting coincidence data from either a data file or in real time from a pair of detector heads, culling event data that is outside a desired energy range, optionally saving the desired data for each detector position or for each pair of detector pixels on the two detector heads, and then reconstructing the image either by backprojection image reconstruction or by iterative image reconstruction. In the backprojection image reconstruction mode, rays are traced between centers of lines of response (LOR's), counts are then either allocated by nearest pixel interpolation or allocated by an overlap method and then corrected for geometric effects and attenuation and the data file updated. If the iterative image reconstruction option is selected, one implementation is to compute a grid Siddon ray tracing, and to perform maximum likelihood expectation maximization (MLEM) computed by either: a) tracing parallel rays between subpixels on opposite detector heads; or b) tracing rays between randomized endpoint locations on opposite detector heads.
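    The MLEM option mentioned in the abstract uses the standard multiplicative update. The toy NumPy sketch below illustrates that update for a made-up system matrix; it is not the patent's detector geometry or ray-tracing implementation.

```python
# Hedged numerical sketch of the maximum-likelihood expectation-maximization
# (MLEM) update mentioned in the patent, for a toy system matrix A mapping
# image pixels to detector LORs; the geometry here is made up for illustration.
import numpy as np

def mlem(A, counts, n_iter=50):
    """A: (n_lors, n_pixels) system matrix; counts: measured LOR counts."""
    x = np.ones(A.shape[1])                  # start from a uniform image
    sens = A.sum(axis=0)                     # sensitivity image, A^T * 1
    for _ in range(n_iter):
        expected = A @ x                     # forward projection of current estimate
        ratio = counts / np.maximum(expected, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)   # multiplicative MLEM update
    return x

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
true_img = np.array([2.0, 1.0, 3.0])
print(mlem(A, A @ true_img))                 # converges toward true_img
```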

  12. The Native American Experience. American Historical Images on File.

    ERIC Educational Resources Information Center

    Wardwell, Lelia, Ed.

    This photo-documentation reference body presents more than 275 images chronicling the experiences of the American Indian from their prehistoric migrations to the present. The volume includes information and images illustrating the life ways of various tribes. The images are accompanied by historical information providing cultural context. The book…

  13. JuxtaView - A tool for interactive visualization of large imagery on scalable tiled displays

    USGS Publications Warehouse

    Krishnaprasad, N.K.; Vishwanath, V.; Venkataraman, S.; Rao, A.G.; Renambot, L.; Leigh, J.; Johnson, A.E.; Davis, B.

    2004-01-01

    JuxtaView is a cluster-based application for viewing ultra-high-resolution images on scalable tiled displays. We present, in JuxtaView, a new parallel computing and distributed memory approach for out-of-core montage visualization, using LambdaRAM, a software-based network-level cache system. The ultimate goal of JuxtaView is to enable a user to interactively roam through potentially terabytes of distributed, spatially referenced image data such as those from electron microscopes, satellites and aerial photographs. In working towards this goal, we describe our first prototype implemented over a local area network, where the image is distributed using LambdaRAM, on the memory of all nodes of a PC cluster driving a tiled display wall. Aggressive pre-fetching schemes employed by LambdaRAM help to reduce latency involved in remote memory access. We compare LambdaRAM with a more traditional memory-mapped file approach for out-of-core visualization. © 2004 IEEE.
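    The "memory-mapped file approach" that the paper compares against can be illustrated with NumPy's memmap, which pages in only the tiles a viewer touches; the file name and array shape below are made up, and this sketch is not LambdaRAM.

```python
# Hedged sketch of the memory-mapped-file alternative mentioned in the paper:
# NumPy's memmap lets a viewer read just the tile it needs from a huge image
# on disk without loading the whole montage; shapes and filenames are made up.
import numpy as np

H, W = 8192, 8192                                     # a ~64 MB 8-bit montage
img = np.memmap("montage.raw", dtype=np.uint8, mode="w+", shape=(H, W))
img[:1024, :1024] = 255                               # write one corner; flushed lazily
del img                                               # flush and close the writer

view = np.memmap("montage.raw", dtype=np.uint8, mode="r", shape=(H, W))
tile = view[512:1024, 512:1024]                       # only these pages are read in
print(tile.shape, int(tile.max()))
```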

  14. An Implementation Of Elias Delta Code And ElGamal Algorithm In Image Compression And Security

    NASA Astrophysics Data System (ADS)

    Rachmawati, Dian; Andri Budiman, Mohammad; Saffiera, Cut Amalia

    2018-01-01

    In data transmission, such as transferring an image, confidentiality, integrity, and efficiency of data storage are highly needed. To maintain the confidentiality and integrity of data, one of the techniques used is ElGamal. The strength of this algorithm lies in the difficulty of calculating discrete logs with a large prime modulus. ElGamal belongs to the class of asymmetric-key algorithms and results in an enlargement of the file size; therefore, data compression is required. The Elias Delta Code is one of the compression algorithms that use a delta code table. The image was first compressed using the Elias Delta Code algorithm, and then the result of the compression was encrypted using the ElGamal algorithm. The primality test was implemented using the Agrawal-Biswas algorithm. The results showed that the ElGamal method could maintain the confidentiality and integrity of data, with MSE and PSNR values of 0 and infinity, respectively. The Elias Delta Code method achieved a compression ratio and space saving with average values of 62.49% and 37.51%, respectively.
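    Elias delta coding itself is compact enough to sketch. The function below encodes a positive integer into its delta codeword (the ElGamal encryption stage is omitted); it is an illustration of the textbook code, not the authors' implementation.

```python
# Hedged sketch of Elias delta encoding for a positive integer, the entropy
# code used in the paper's compression stage (the ElGamal step is omitted).
def elias_delta(n: int) -> str:
    assert n >= 1
    nbits = n.bit_length()                       # L = number of bits in n
    lbits = nbits.bit_length()                   # number of bits in L
    return ("0" * (lbits - 1)                    # unary prefix giving the length of L
            + format(nbits, "b")                 # L in binary
            + format(n, "b")[1:])                # n without its leading 1-bit

for k in (1, 2, 10, 17):
    print(k, elias_delta(k))   # 1 -> '1', 2 -> '0100', 10 -> '00100010', 17 -> '001010001'
```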

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ushizima, Daniela; Perciano, Talita; Krishnan, Harinarayan

    Fibers provide exceptional strength-to-weight ratio capabilities when woven into ceramic composites, transforming them into materials with exceptional resistance to high temperature, and high strength combined with improved fracture toughness. Microcracks are inevitable when the material is under strain, which can be imaged using synchrotron X-ray computed micro-tomography (mu-CT) for assessment of material mechanical toughness variation. An important part of this analysis is to recognize fibrillar features. This paper presents algorithms for detecting and quantifying composite cracks and fiber breaks from high-resolution image stacks. First, we propose recognition algorithms to identify the different structures of the composite, including matrix cracks and fiber breaks. Second, we introduce our package F3D for fast filtering of large 3D imagery, implemented in OpenCL to take advantage of graphics cards. Results show that our algorithms automatically identify micro-damage and that the GPU-based implementation introduced here takes minutes, being 17x faster than similar tools on a typical image file.

  16. VizieR Online Data Catalog: Imaging observations of iPTF 13ajg (Vreeswijk+, 2014)

    NASA Astrophysics Data System (ADS)

    Vreeswijk, P. M.; Savaglio, S.; Gal-Yam, A.; De Cia, A.; Quimby, R. M.; Sullivan, M.; Cenko, S. B.; Perley, D. A.; Filippenko, A. V.; Clubb, K. I.; Taddia, F.; Sollerman, J.; Leloudas, G.; Arcavi, I.; Rubin, A.; Kasliwal, M. M.; Cao, Y.; Yaron, O.; Tal, D.; Ofek, E. O.; Capone, J.; Kutyrev, A. S.; Toy, V.; Nugent, P. E.; Laher, R.; Surace, J.; Kulkarni, S. R.

    2017-08-01

    iPTF 13ajg was imaged with the Palomar 48 inch (P48) Oschin iPTF survey telescope equipped with a 12kx8k CCD mosaic camera (Rahmer et al. 2008SPIE.7014E..4YR) in the Mould R filter, the Palomar 60 inch and CCD camera (Cenko et al. 2006PASP..118.1396C) in Johnson B and Sloan Digital Sky Survey (SDSS) gri, the 2.56 m Nordic Optical Telescope (on La Palma, Canary Islands) with the Andalucia Faint Object Spectrograph and Camera (ALFOSC) in SDSS ugriz, the 4.3 m Discovery Channel Telescope (at Lowell Observatory, Arizona) with the Large Monolithic Imager (LMI) in SDSS r, and with LRIS (Oke et al. 1995PASP..107..375O) and the Multi-Object Spectrometer for Infrared Exploration (MOSFIRE; McLean et al. 2012SPIE.8446E..0JM), both mounted on the 10 m Keck-I telescope (on Mauna Kea, Hawaii), in g and Rs with LRIS and J and Ks with MOSFIRE. (1 data file).

  17. Moths on the Flatbed Scanner: The Art of Joseph Scheer

    PubMed Central

    Buchmann, Stephen L.

    2011-01-01

    During the past decade a few artists and even fewer entomologists discovered flatbed scanning technology, using extreme-resolution graphic arts scanners for acquiring high-magnification digital images of plants, animals and inanimate objects. They are not just for trip receipts anymore. The special ability of certain scanners to image thick objects is discussed along with the technical features of the scanners, including magnification, color depth and shadow detail. The work of pioneering scanner artist Joseph Scheer from New York's Alfred University is highlighted. Representative flatbed-scanned images of moths are illustrated along with techniques to produce them. Collecting and preparing moths, and other objects, for scanning is described. Highlights of the Fulbright sabbatical year of Professor Scheer in Arizona and Sonora, Mexico are presented, along with comments on moths in science, folklore, art and pop culture. The use of flatbed scanners is offered as a relatively new method for visualizing small objects while acquiring large files for creating archival inkjet prints for display and sale. PMID:26467835

  18. File Transfers from Peregrine to the Mass Storage System - Gyrfalcon

    Science.gov Websites

    File transfers can be run from a login node or a data-transfer queue node, and the data-transfer queue can be accessed interactively. A large number of small files can first be combined into a single container file using the tar command (for example, $ cd /scratch//directory1 before creating the tar file). The rsync command is convenient for handling a large number of files.

  19. Challenges for data storage in medical imaging research.

    PubMed

    Langer, Steve G

    2011-04-01

    Researchers in medical imaging have multiple challenges for storing, indexing, maintaining viability, and sharing their data. Addressing all these concerns requires a constellation of tools, but not all of them need to be local to the site. In particular, the data storage challenges faced by researchers can begin to require professional information technology skills. With limited human resources and funds, the medical imaging researcher may be better served with an outsourcing strategy for some management aspects. This paper outlines an approach to manage the main objectives faced by medical imaging scientists whose work includes processing and data mining on non-standard file formats, and relating those files to their DICOM-standard descendants. The capacity of the approach scales as the researcher's need grows by leveraging the on-demand provisioning ability of cloud computing.

  20. Steganalysis feature improvement using expectation maximization

    NASA Astrophysics Data System (ADS)

    Rodriguez, Benjamin M.; Peterson, Gilbert L.; Agaian, Sos S.

    2007-04-01

    Images and data files provide an excellent opportunity for concealing illegal or clandestine material. Currently, there are over 250 different tools which embed data into an image without causing noticeable changes to the image. From a forensics perspective, when a system is confiscated or an image of a system is generated, the investigator needs a tool that can scan and accurately identify files suspected of containing malicious information. The identification process is termed the steganalysis problem, which focuses on both blind identification, in which only normal images are available for training, and multi-class identification, in which both the clean and stego images at several embedding rates are available for training. In this paper, a clustering and classification technique (Expectation Maximization with mixture models) is investigated to determine if a digital image contains hidden information. The steganalysis problem is addressed for both anomaly detection and multi-class detection. The various clusters represent clean images and stego images with between 1% and 10% embedding percentage. Based on the results, it is concluded that the EM classification technique is highly suitable for both blind detection and the multi-class problem.
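    The EM-with-mixture-models step can be sketched with scikit-learn's GaussianMixture, which is fitted by expectation maximization. The feature vectors below are random stand-ins rather than the paper's steganalysis features or embedding rates.

```python
# Hedged sketch of EM-based clustering of steganalysis feature vectors with a
# Gaussian mixture model (scikit-learn); the features and the shift between the
# two groups are random stand-ins, not the paper's actual feature set.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(200, 5))          # features from clean images
stego = rng.normal(0.8, 1.0, size=(200, 5))          # features shifted by embedding
X = np.vstack([clean, stego])

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
labels = gmm.fit_predict(X)                           # EM assigns each image a cluster
print("cluster sizes:", np.bincount(labels))
```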

  1. New directions in the CernVM file system

    NASA Astrophysics Data System (ADS)

    Blomer, Jakob; Buncic, Predrag; Ganis, Gerardo; Hardi, Nikola; Meusel, Rene; Popescu, Radu

    2017-10-01

    The CernVM File System today is commonly used to host and distribute application software stacks. In addition to this core task, recent developments expand the scope of the file system into two new areas. Firstly, CernVM-FS emerges as a good match for container engines to distribute the container image contents. Compared to native container image distribution (e.g. through the “Docker registry”), CernVM-FS massively reduces the network traffic for image distribution. This has been shown, for instance, by a prototype integration of CernVM-FS into Mesos developed by Mesosphere, Inc. We present a path for a smooth integration of CernVM-FS and Docker. Secondly, CernVM-FS recently raised new interest as an option for the distribution of experiment conditions data. Here, the focus is on improved versioning capabilities of CernVM-FS that allow the conditions data of a run period to be linked to the state of a CernVM-FS repository. Lastly, CernVM-FS has been extended to provide a name space for physics data for the LIGO and CMS collaborations. Searching through a data namespace is often done by a central, experiment-specific database service. A name space on CernVM-FS can particularly benefit from an existing, scalable infrastructure and from the POSIX file system interface.

  2. A new version of Visual tool for estimating the fractal dimension of images

    NASA Astrophysics Data System (ADS)

    Grossu, I. V.; Felea, D.; Besliu, C.; Jipa, Al.; Bordeianu, C. C.; Stan, E.; Esanu, T.

    2010-04-01

    This work presents a new version of a Visual Basic 6.0 application for estimating the fractal dimension of images (Grossu et al., 2009 [1]). The earlier version was limited to bi-dimensional sets of points stored in bitmap files. The application was extended to work also with comma separated values files and three-dimensional images.
    New version program summary:
    Program title: Fractal Analysis v02
    Catalogue identifier: AEEG_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 9999
    No. of bytes in distributed program, including test data, etc.: 4 366 783
    Distribution format: tar.gz
    Programming language: MS Visual Basic 6.0
    Computer: PC
    Operating system: MS Windows 98 or later
    RAM: 30 M
    Classification: 14
    Catalogue identifier of previous version: AEEG_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 1999
    Does the new version supersede the previous version?: Yes
    Nature of problem: Estimating the fractal dimension of 2D and 3D images.
    Solution method: Optimized implementation of the box-counting algorithm.
    Reasons for new version: The previous version was limited to bitmap image files. The new application was extended in order to work with objects stored in comma separated values (csv) files. The main advantages are: easier integration with other applications (csv is a widely used, simple text file format); fewer resources consumed and improved performance (only the information of interest, the "black points", is stored); higher resolution (the point coordinates are loaded into Visual Basic double variables [2]); and the possibility of storing three-dimensional objects (e.g. the 3D Sierpinski gasket). In this version the optimized box-counting algorithm [1] was extended to the three-dimensional case.
    Summary of revisions: The application interface was changed from SDI (single document interface) to MDI (multi-document interface). One form was added in order to provide a graphical user interface for the new functionalities (fractal analysis of 2D and 3D images stored in csv files).
    Additional comments: User-friendly graphical interface; easy deployment mechanism.
    Running time: In the first approximation, the algorithm is linear.
    References: [1] I.V. Grossu, C. Besliu, M.V. Rusu, Al. Jipa, C.C. Bordeianu, D. Felea, Comput. Phys. Comm. 180 (2009) 1999-2001. [2] F. Balena, Programming Microsoft Visual Basic 6.0, Microsoft Press, US, 1999.
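    The box-counting estimate that the program implements can be sketched in a few lines for a 2D binary image; this NumPy version is only an illustration, not the distributed Visual Basic code.

```python
# Hedged sketch of the box-counting estimate of fractal dimension for a 2D
# binary image: count occupied boxes at several box sizes and fit the slope
# of log(count) against log(1/size).
import numpy as np

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16, 32)):
    """img: 2D boolean array, True where the set's points are."""
    counts = []
    for s in sizes:
        h = (img.shape[0] // s) * s                  # crop to a multiple of s
        w = (img.shape[1] // s) * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum()) # boxes containing any point
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope                                     # slope estimates the dimension

img = np.zeros((64, 64), dtype=bool)
img[np.arange(64), np.arange(64)] = True             # a straight line, dimension ~1
print(round(box_counting_dimension(img), 2))
```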

  3. Image fusion in craniofacial virtual reality modeling based on CT and 3dMD photogrammetry.

    PubMed

    Xin, Pengfei; Yu, Hongbo; Cheng, Huanchong; Shen, Shunyao; Shen, Steve G F

    2013-09-01

    The aim of this study was to demonstrate the feasibility of building a craniofacial virtual reality model by image fusion of 3-dimensional (3D) CT models and a 3dMD stereophotogrammetric facial surface. A CT scan and stereophotography were performed. The 3D CT models were reconstructed with Materialise Mimics software, and the stereophotogrammetric facial surface was reconstructed with 3dMD patient software. All 3D CT models were exported in Stereo Lithography (STL) file format, and the 3dMD model was exported in Virtual Reality Modeling Language (VRML) file format. Image registration and fusion were performed in Mimics software. A genetic algorithm was used for precise image fusion alignment with minimum error. The 3D CT models and the 3dMD stereophotogrammetric facial surface were finally merged into a single file and displayed using Deep Exploration software. Errors between the CT soft tissue model and the 3dMD facial surface were also analyzed. The virtual model based on CT-3dMD image fusion clearly showed the photorealistic face and bone structures. Image registration errors in the virtual face are mainly located in the bilateral cheeks and eyeballs, where errors exceed 1.5 mm. However, the image fusion of the whole CT and 3dMD point cloud sets is acceptable, with a minimum error of less than 1 mm. The ease of use and high reliability of CT-3dMD image fusion allow the 3D virtual head to serve as an accurate, realistic, and widely applicable tool, of great benefit to virtual face modeling.

  4. [Design of visualized medical images network and web platform based on MeVisLab].

    PubMed

    Xiang, Jun; Ye, Qing; Yuan, Xun

    2017-04-01

    With the trend of "Internet +" development, further requirements for the mobility of medical images have arisen in the medical field. In view of this demand, this paper presents a web-based visual medical imaging platform. First, the feasibility and technical points of web-based medical imaging are analyzed. CT (Computed Tomography) or MRI (Magnetic Resonance Imaging) images are reconstructed three-dimensionally with MeVisLab and packaged as X3D (Extensible 3D Graphics) files, as shown in the present paper. Then, a B/S (Browser/Server) system specially designed for 3D images is built using HTML5 and a WebGL rendering engine library, and the X3D image files are parsed and rendered by the system. The results of this study showed that the platform is suitable for multiple operating systems, enabling cross-platform use and mobility of medical image data. The future development of the medical imaging platform is also discussed. Web application technology will not only promote the sharing of medical image data, but also facilitate image-based remote medical consultations and distance learning.

  5. The browse file of NASA/JPL quick-look radar images from the Loch Linnhe 1989 experiment

    NASA Technical Reports Server (NTRS)

    Brown, Walter E., Jr. (Editor)

    1989-01-01

    The Jet Propulsion Laboratory (JPL) Aircraft Synthetic Aperture Radar (AIRSAR) was deployed to Scotland to obtain radar imagery of ship wakes generated in Loch Linnhe. These observations were part of a joint US and UK experiment to study the internal waves generated by ships under partially controlled conditions. The AIRSAR was mounted on the NASA-Ames DC-8 aircraft. The data acquisition sequence consisted of 8 flights, each about 6 hours in duration, with 24 observations of the instrumented site made on each flight. This Browse File provides the experimenters with a reference of the real-time imagery (approximately 100 images) obtained on the 38-deg track. These radar images are copies of those obtained at the time of observation and show the general geometry of the ship wake features. To speed up processing during the flights, the images were all processed around zero Doppler, and thus azimuth ambiguities often occurred when the drift angle (yaw) exceeded a few degrees. However, even with these shortcomings, it is believed that the experimenter will find the Browse File useful in establishing a basis for further investigations.

  6. Improved image classification with neural networks by fusing multispectral signatures with topological data

    NASA Technical Reports Server (NTRS)

    Harston, Craig; Schumacher, Chris

    1992-01-01

    Automated schemes are needed to classify multispectral remotely sensed data. Human intelligence is often required to correctly interpret images from satellites and aircraft. Humans succeed because they use various types of cues about a scene to accurately define the contents of the image. Consequently, it follows that computer techniques that integrate and use different types of information should perform better than single-source approaches. This research illustrated that multispectral signatures and topographical information can be used in concert. Significantly, this dual-source tactic classified a remotely sensed image better than the multispectral classification alone. These classifications were accomplished by fusing spectral signatures with topographical information using neural network technology. A neural network was trained to classify Landsat multispectral signatures. A file of georeferenced ground-truth classifications was used as the training criterion. The network was trained to classify urban, agriculture, range, and forest with an accuracy of 65.7 percent. Another neural network was programmed and trained to fuse these multispectral signature results with a file of georeferenced altitude data. This topographic file contained 10 levels of elevation. When this nonspectral elevation information was fused with the spectral signatures, the classifications improved to 73.7 and 75.7 percent.
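
    The fusion strategy described above, concatenating a nonspectral elevation level with each spectral signature before classification, can be sketched as follows. The snippet uses synthetic data and scikit-learn's MLP as a stand-in for the original network software; the band count, elevation levels, class labels, and any resulting accuracy are illustrative assumptions only.

    ```python
    # Sketch of fusing multispectral signatures with elevation data for classification.
    # Synthetic data stands in for the Landsat signatures and the georeferenced
    # ground-truth/elevation files described above.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000
    spectral = rng.random((n, 7))                  # 7 synthetic Landsat band values per pixel
    elevation = rng.integers(0, 10, size=(n, 1))   # 10 discrete elevation levels
    labels = rng.integers(0, 4, size=n)            # 0=urban 1=agriculture 2=range 3=forest

    # Fusion: concatenate the nonspectral elevation level with the spectral signature.
    features = np.hstack([spectral, elevation / 9.0])

    X_train, X_test, y_train, y_test = train_test_split(features, labels, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))
    ```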

  7. BOREAS RSS-14 Level-3 Gridded Radiometer and Satellite Surface Radiation Images

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Hodges, Gary; Smith, Eric A.

    2000-01-01

    The BOREAS RSS-14 team collected and processed GOES-7 and -8 images of the BOREAS region as part of its effort to characterize the incoming, reflected, and emitted radiation at regional scales. This data set contains surface radiation parameters, such as net radiation and net solar radiation, that have been interpolated from GOES-7 images and AMS data onto the standard BOREAS mapping grid at a resolution of 5 km N-S and E-W. While some parameters are taken directly from the AMS data set, others have been corrected according to calibrations carried out during IFC-2 in 1994. The corrected values as well as the uncorrected values are included. For example, two values of net radiation are provided: an uncorrected value (Rn), and a value that has been corrected according to the calibrations (Rn-COR). The data are provided in binary image format data files. Some of the data files on the BOREAS CD-ROMs have been compressed using the Gzip program. See section 8.2 for details. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).

  8. WADeG Cell Phone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2009-09-01

    The on-phone software captures images from the CMOS camera periodically, stores the pictures, and periodically transmits those images over the cellular network to the server. The cell phone software consists of several modules: CamTest.cpp, CamStarter.cpp, StreamIOHandler.cpp, and covertSmartDevice.cpp. The camera application on the SmartPhone is CamStarter, which is "the" user interface for the camera system. The CamStarter user interface allows a user to start/stop the camera application and transfer files to the server. The CamStarter application interfaces to the CamTest application through registry settings. Both the CamStarter and CamTest applications must be separately deployed on the smartphone to run the camera system application. When a user selects the Start button in CamStarter, CamTest is created as a process. The smartphone begins taking small pictures (CAPTURE mode), analyzing those pictures for certain conditions, and saving those pictures on the smartphone. This process terminates when the user selects the Stop button. The CamTest code spins off an asynchronous thread, StreamIOHandler, to check for pictures taken by the camera. The received image is then tested by StreamIOHandler to see if it meets certain conditions. If those conditions are met, the CamTest program is notified through the setting of a registry key value and the image is saved in a designated directory in a custom BMP file which includes a header and the image data. When the user selects the Transfer button in the CamStarter user interface, the covertSmartDevice code is created as a process. CovertSmartDevice gets all of the files in a designated directory, opens a socket connection to the server, sends each file, and then terminates.
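
    The transfer step can be pictured with a short sketch. The actual covertSmartDevice module is C++ on the smartphone; the Python version below only illustrates the idea of walking a designated directory and sending each file over a TCP socket, with the server address, directory name, and length-prefixed framing all assumed rather than taken from the original software.

    ```python
    # Sketch of the "Transfer" step: send every file in a designated directory to a
    # server over a plain TCP socket. Host, port, directory, and framing are assumptions;
    # the covertSmartDevice wire protocol is not documented here.
    import os
    import socket
    import struct

    SERVER = ("192.0.2.10", 5000)        # placeholder server address
    PICTURE_DIR = "captured_images"      # placeholder directory of saved BMP files

    def send_all_files(server, directory):
        with socket.create_connection(server) as sock:
            for name in sorted(os.listdir(directory)):
                path = os.path.join(directory, name)
                if not os.path.isfile(path):
                    continue
                data = open(path, "rb").read()
                # Simple length-prefixed framing (assumed): name length, name, payload length, payload.
                header = struct.pack("!H", len(name)) + name.encode() + struct.pack("!I", len(data))
                sock.sendall(header + data)

    if __name__ == "__main__":
        send_all_files(SERVER, PICTURE_DIR)
    ```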

  9. Advantages to Geoscience and Disaster Response from QuakeSim Implementation of Interferometric Radar Maps in a GIS Database System

    NASA Astrophysics Data System (ADS)

    Parker, Jay; Donnellan, Andrea; Glasscoe, Margaret; Fox, Geoffrey; Wang, Jun; Pierce, Marlon; Ma, Yu

    2015-08-01

    High-resolution maps of Earth surface deformation are available in public archives for scientific interpretation, but are primarily available as bulky downloads on the internet. The NASA Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) archive of airborne radar interferograms delivers very high resolution images (approximately seven-meter pixels), making remote handling of the files all the more pressing. Data exploration, which requires selecting data sets and performing exploratory analysis, has been tedious. QuakeSim has implemented an archive of UAVSAR data in a web service and browser system based on GeoServer (http://geoserver.org). This supports a variety of services that supply consistent maps, raster image data, and geographic information systems (GIS) objects, including standard earthquake faults. Browsing the database is supported by initially displaying GIS-referenced thumbnail images of the radar displacement maps. Access is also provided to image metadata and links for full file downloads. One of the most widely used features is the QuakeSim line-of-sight profile tool, which calculates the radar-observed displacement (from an unwrapped interferogram product) along a line specified through a web browser. Displacement values along a profile are updated in a plot on the screen as the user interactively redefines the endpoints of the line and the sampling density. The profile, along with a plot of the ground height, is available as a CSV (text) file for further examination, without any need to download the full radar file. Additional tools allow the user to select a polygon overlapping the radar displacement image, specify a downsampling rate, and extract a modest-sized grid of observations for display or for inversion, for example with the QuakeSim simplex inversion tool, which estimates a consistent fault geometry and slip model.
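
    The line-of-sight profile calculation amounts to sampling a displacement raster along a user-defined line at a chosen density. A rough Python sketch of that idea, ignoring the UAVSAR file format and georeferencing (a plain NumPy array stands in for the unwrapped interferogram), might look like this:

    ```python
    # Sketch of a line-of-sight profile: sample a displacement raster along a line
    # between two endpoints at a user-chosen density.
    import numpy as np

    def los_profile(raster, start_rc, end_rc, n_samples=200):
        """Return (sample_index, values) along a line given in pixel coordinates."""
        r0, c0 = start_rc
        r1, c1 = end_rc
        rows = np.linspace(r0, r1, n_samples)
        cols = np.linspace(c0, c1, n_samples)
        # Nearest-neighbour sampling keeps the sketch simple; bilinear would be smoother.
        values = raster[np.round(rows).astype(int), np.round(cols).astype(int)]
        return np.arange(n_samples), values

    if __name__ == "__main__":
        demo = np.random.default_rng(1).random((500, 500))   # stand-in displacement map
        idx, profile = los_profile(demo, (50, 60), (400, 450))
        np.savetxt("profile.csv", np.column_stack([idx, profile]), delimiter=",")
    ```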

  10. BOREAS TE-18 Biomass Density Image of the SSA

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Knapp, David

    2000-01-01

    The BOREAS TE-18 team focused its efforts on using remotely sensed data to characterize the successional and disturbance dynamics of the boreal forest for use in carbon modeling. This biomass density image covers almost the entire BOREAS SSA. Biomass density is computed only for pixels in conifer land cover classes. The biomass density values represent the amount of overstory biomass (i.e., tree biomass only) per unit area. The image is derived from a Landsat-5 TM image collected on 02-Sep-1994. The technique that was used to create this image is very similar to the technique that was used to create the physical classification of the SSA. The data are provided in a binary image file format. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).

  11. BOREAS Level-2 MAS Surface Reflectance and Temperature Images in BSQ Format

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Newcomer, Jeffrey (Editor); Lobitz, Brad; Spanner, Michael; Strub, Richard

    2000-01-01

    The BOReal Ecosystem-Atmosphere Study (BOREAS) Staff Science Aircraft Data Acquisition Program focused on providing the research teams with the remotely sensed aircraft data products they needed to compare and spatially extend point results. The MODIS Airborne Simulator (MAS) images, along with other remotely sensed data, were collected to provide spatially extensive information over the primary study areas. This information includes biophysical parameter maps such as surface reflectance and temperature. Collection of the MAS images occurred over the study areas during the 1994 field campaigns. The level-2 MAS data cover the dates of 21-Jul-1994, 24-Jul-1994, 04-Aug-1994, and 08-Aug-1994. The data are not geographically/geometrically corrected; however, files of relative X and Y coordinates for each image pixel were derived by using the C130 navigation data in a MAS scan model. The data are provided in binary image format files.

  12. BOREAS RSS-8 Snow Maps Derived from Landsat TM Imagery

    NASA Technical Reports Server (NTRS)

    Hall, Dorothy; Chang, Alfred T. C.; Foster, James L.; Chien, Janeet Y. L.; Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Smith, David E. (Technical Monitor)

    2000-01-01

    The Boreal Ecosystem-Atmosphere Study (BOREAS) Remote Sensing Science (RSS)-8 team utilized Landsat Thematic Mapper (TM) images to perform mapping of snow extent over the Southern Study Area (SSA). This data set consists of two Landsat TM images that were used to determine the snow-covered pixels over the BOREAS SSA on 18 Jan 1993 and on 06 Feb 1994. The data are stored in binary image format files. The RSS-08 snow map data are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). The data files are available on a CD-ROM (see document number 20010000884).

  13. Pre-slip and Localized Strain Band - A Study Based on Large Sample Experiment and DIC

    NASA Astrophysics Data System (ADS)

    Ji, Y.; Zhuo, Y. Q.; Liu, L.; Ma, J.

    2017-12-01

    The meta-instability stage (MIS) is the stage that occurs between a fault reaching the peak differential stress and the onset of the final stress drop. It is the crucial stage during which a fault transitions from "stick" to "slip". Therefore, quantitatively analyzing the spatial and temporal characteristics of the deformation field of a fault during the MIS is of great significance both to fault mechanics and to earthquake prediction research. To this end, a series of stick-slip experiments was conducted using a biaxial servo-controlled pressure machine. Digital images of the sample surfaces were captured with a high-speed camera and processed using a digital image correlation (DIC) method. If images of a rock sample are acquired before and after deformation, DIC can be used to infer the displacement and strain fields. In our study, sample images were captured at 1000 frames per second with a resolution of 2048 by 2048 pixels. The displacement field, strain field, and fault displacement were calculated from the captured images. Our data show that (1) pre-sliding can be a three-stage process, consisting of a relatively long and slow first stage at a slip rate of 7.9 nm/s, a relatively short and fast second stage at a rate of 3 µm/s, and a last stage that lasts only 0.2 s but reaches a slip rate as high as 220 µm/s; and (2) localized strain bands were observed nearly perpendicular to the fault. A possible mechanism is that the pre-sliding is distributed heterogeneously along the fault: segments that slip adequately coexist with segments that slip less, and the latter constrain the deformation of adjacent subregions. The localized deformation bands tend to radiate from the points where sliding is discontinuous, and as the adequately sliding segments compete with the less sliding ones, the strain bands evolve accordingly.
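
    The core DIC step, estimating the shift of a small subwindow between a reference and a deformed image by cross-correlation, can be sketched in a few lines. This is not the software used in the experiment; the subwindow size, search range, and synthetic test shift below are assumptions, and the sketch recovers only integer-pixel displacements.

    ```python
    # Minimal digital image correlation (DIC) sketch: estimate the integer-pixel
    # displacement of one subwindow between a reference and a deformed image using
    # zero-mean cross-correlation.
    import numpy as np
    from scipy.signal import correlate2d

    def subwindow_displacement(ref, cur, top, left, size=32, search=8):
        """Displacement (dy, dx) of the subwindow ref[top:top+size, left:left+size]."""
        template = ref[top:top + size, left:left + size].astype(float)
        region = cur[top - search:top + size + search,
                     left - search:left + size + search].astype(float)
        corr = correlate2d(region - region.mean(), template - template.mean(), mode="valid")
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        return peak[0] - search, peak[1] - search   # shift relative to the original position

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        ref = rng.random((256, 256))
        cur = np.roll(ref, shift=(3, -2), axis=(0, 1))           # synthetic rigid shift
        print(subwindow_displacement(ref, cur, top=100, left=100))  # expect (3, -2)
    ```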

  14. A deep learning method for early screening of lung cancer

    NASA Astrophysics Data System (ADS)

    Zhang, Kunpeng; Jiang, Huiqin; Ma, Ling; Gao, Jianbo; Yang, Xiaopeng

    2018-04-01

    Lung cancer is the leading cause of cancer-related deaths among men. In this paper, we propose a pulmonary nodule detection method for early screening of lung cancer based on an improved AlexNet model. First, to maintain the same image quality as the existing B/S-architecture PACS system, we convert the original CT images into JPEG format by parsing the DICOM files. Second, in view of the large size and complex background of chest CT images, we design the convolutional neural network on the basis of the AlexNet model and a sparse convolution structure. Finally, we train our models with the DIGITS software provided by NVIDIA. The main contribution of this paper is to apply a convolutional neural network to the early screening of lung cancer and to improve screening accuracy by combining the AlexNet model with the sparse convolution structure. We conducted a series of experiments on chest CT images using the proposed method; the resulting sensitivity and specificity indicate that the method can effectively improve the accuracy of early screening of lung cancer and has clinical significance.
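
    The DICOM-to-JPEG conversion step can be sketched as follows, assuming an uncompressed CT slice, a lung-style window, and placeholder file names; this is not the authors' code, and the window centre/width values are illustrative assumptions.

    ```python
    # Sketch of the DICOM-to-JPEG conversion step: read a CT slice, apply a window,
    # scale to 8 bits, and save as JPEG.
    import numpy as np
    import pydicom
    from PIL import Image

    def dicom_to_jpeg(dcm_path, jpg_path, center=-600.0, width=1500.0):
        ds = pydicom.dcmread(dcm_path)
        hu = ds.pixel_array.astype(float)
        # Convert stored values to Hounsfield units if rescale tags are present.
        hu = hu * float(getattr(ds, "RescaleSlope", 1)) + float(getattr(ds, "RescaleIntercept", 0))
        lo, hi = center - width / 2.0, center + width / 2.0
        img8 = np.clip((hu - lo) / (hi - lo) * 255.0, 0, 255).astype(np.uint8)
        Image.fromarray(img8).save(jpg_path, quality=95)

    if __name__ == "__main__":
        dicom_to_jpeg("slice0001.dcm", "slice0001.jpg")   # hypothetical file names
    ```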

  15. Toward virtual anatomy: a stereoscopic 3-D interactive multimedia computer program for cranial osteology.

    PubMed

    Trelease, R B

    1996-01-01

    Advances in computer visualization and user interface technologies have enabled development of "virtual reality" programs that allow users to perceive and interact with objects in artificial three-dimensional environments. Such technologies were used to create an image database and program for studying the human skull, a specimen that has become increasingly expensive and scarce. Stereoscopic image pairs of a museum-quality skull were digitized from multiple views. For each view, the stereo pairs were interlaced into a single, field-sequential stereoscopic picture using an image processing program. The resulting interlaced image files are organized in an interactive multimedia program. At run time, gray-scale 3-D images are displayed on a large-screen computer monitor and observed through liquid-crystal shutter goggles. Users can then control the program and change views with a mouse, pointing and clicking on screen-level control words ("buttons"). For each view of the skull, an ID control button can be used to overlay pointers and captions for important structures. Pointing and clicking on "hidden buttons" overlying certain structures triggers digitized spoken-word descriptions or mini-lectures.
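
    Row interlacing is one common way to combine a left/right pair into a single stereoscopic picture; the exact format produced by the original image processing program is not specified here, so the sketch below is only illustrative and the file names are hypothetical.

    ```python
    # Sketch of row-interlacing a stereo pair into a single image: even scan lines
    # from the left-eye view, odd scan lines from the right-eye view.
    import numpy as np
    from PIL import Image

    def interlace(left_path, right_path, out_path):
        left = np.asarray(Image.open(left_path).convert("L"))
        right = np.asarray(Image.open(right_path).convert("L"))
        assert left.shape == right.shape, "stereo pair must have matching dimensions"
        out = left.copy()
        out[1::2, :] = right[1::2, :]       # replace odd rows with the right-eye view
        Image.fromarray(out).save(out_path)

    if __name__ == "__main__":
        interlace("skull_left.tif", "skull_right.tif", "skull_stereo.tif")  # hypothetical names
    ```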

  16. APT: Aperture Photometry Tool

    NASA Astrophysics Data System (ADS)

    Laher, Russ

    2012-08-01

    Aperture Photometry Tool (APT) is software for astronomers and students interested in manually exploring the photometric qualities of astronomical images. It has a graphical user interface (GUI) which allows the image data associated with aperture photometry calculations for point and extended sources to be visualized and, therefore, more effectively analyzed. Mouse-clicking on a source in the displayed image draws a circular or elliptical aperture and sky annulus around the source and computes the source intensity and its uncertainty, along with several commonly used measures of the local sky background and its variability. The results are displayed and can be optionally saved to an aperture-photometry-table file and plotted on graphs in various ways using functions available in the software. APT is geared toward processing sources in a small number of images and is not suitable for bulk processing a large number of images, unlike other aperture photometry packages (e.g., SExtractor). However, APT does have a convenient source-list tool that enables calculations for a large number of detections in a given image. The source-list tool can be run either in automatic mode to generate an aperture photometry table quickly or in manual mode to permit inspection and adjustment of the calculation for each individual detection. APT displays a variety of useful graphs, including an image histogram, aperture slices, a source scatter plot, a sky scatter plot, a sky histogram, a radial profile, a curve of growth, and aperture-photometry-table scatter plots and histograms. APT has functions for customizing calculations, including outlier rejection, pixel “picking” and “zapping,” and a selection of source and sky models. The radial-profile-interpolation source model, accessed via the radial-profile-plot panel, allows recovery of source intensity from pixels with missing data and can be especially beneficial in crowded fields.
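
    The basic calculation APT performs interactively, summing pixels inside a circular aperture and subtracting a sky level estimated from an annulus, can be sketched in a few lines of Python. This is not APT's own code; the aperture and annulus radii, sky estimator, and synthetic test image are assumptions.

    ```python
    # Minimal aperture photometry sketch: aperture sum minus median-sky background.
    import numpy as np

    def aperture_photometry(image, x0, y0, r_ap=5.0, r_in=8.0, r_out=12.0):
        yy, xx = np.indices(image.shape)
        r = np.hypot(xx - x0, yy - y0)
        aperture = r <= r_ap
        annulus = (r >= r_in) & (r <= r_out)
        sky_per_pixel = np.median(image[annulus])              # local sky estimate
        source = image[aperture].sum() - sky_per_pixel * aperture.sum()
        return source, sky_per_pixel

    if __name__ == "__main__":
        img = np.random.default_rng(3).normal(100.0, 5.0, (64, 64))   # synthetic sky
        img[30:34, 30:34] += 500.0                                    # synthetic source
        print(aperture_photometry(img, x0=31.5, y0=31.5))
    ```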

  17. Mars Global Digital Dune Database; MC-1

    USGS Publications Warehouse

    Hayward, R.K.; Fenton, L.K.; Tanaka, K.L.; Titus, T.N.; Colaprete, A.; Christensen, P.R.

    2010-01-01

    The Mars Global Digital Dune Database presents data and describes the methodology used in creating the global database of moderate- to large-size dune fields on Mars. The database is being released in a series of U.S. Geological Survey (USGS) Open-File Reports. The first release (Hayward and others, 2007) included dune fields from 65 degrees N to 65 degrees S (http://pubs.usgs.gov/of/2007/1158/). The current release encompasses ~ 845,000 km2 of mapped dune fields from 65 degrees N to 90 degrees N latitude. Dune fields between 65 degrees S and 90 degrees S will be released in a future USGS Open-File Report. Although we have attempted to include all dune fields, some have likely been excluded for two reasons: (1) incomplete THEMIS IR (daytime) coverage may have caused us to exclude some moderate- to large-size dune fields, or (2) the resolution of THEMIS IR coverage (100 m/pixel) certainly caused us to exclude smaller dune fields. The smallest dune fields in the database are ~ 1 km2 in area. While the moderate to large dune fields are likely to constitute the largest compilation of sediment on the planet, smaller stores of dune sediment are likely to be found elsewhere via higher-resolution data. Thus, it should be noted that our database excludes all small dune fields and some moderate to large dune fields as well. Therefore, the absence of mapped dune fields does not mean that such dune fields do not exist and is not intended to imply a lack of saltating sand in other areas. Where availability and quality of THEMIS visible (VIS), Mars Orbiter Camera narrow angle (MOC NA), or Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) images allowed, we classified dunes and included some dune slipface measurements, which were derived from gross dune morphology and represent the prevailing wind direction at the last time of significant dune modification. It was beyond the scope of this report to look at the detail needed to discern subtle dune modification. It was also beyond the scope of this report to measure all slipfaces. We attempted to include enough slipface measurements to represent the general circulation (as implied by gross dune morphology) and to give a sense of the complex nature of aeolian activity on Mars. The absence of slipface measurements in a given direction should not be taken as evidence that winds in that direction did not occur. When a dune field was located within a crater, the azimuth from crater centroid to dune field centroid was calculated, as another possible indicator of wind direction. Output from a general circulation model (GCM) is also included. In addition to polygons locating dune fields, the database includes the THEMIS visible (VIS) and Mars Orbiter Camera Narrow Angle (MOC NA) images that were used to build the database. The database is presented in a variety of formats. It is presented as an ArcReader project, which can be opened using the free ArcReader software. The latest version of ArcReader can be downloaded at http://www.esri.com/software/arcgis/arcreader/download.html. The database is also presented in an ArcMap project. The ArcMap project allows fuller use of the data, but requires ESRI ArcMap® software. A fuller description of the projects can be found in the NP_Dunes_ReadMe file (NP_Dunes_ReadMe folder) and the NP_Dunes_ReadMe_GIS file (NP_Documentation folder). For users who prefer to create their own projects, the data are available in ESRI shapefile and geodatabase formats, as well as the open Geography Markup Language (GML) format. A printable map of the dunes and craters in the database is available as a Portable Document Format (PDF) document. The map is also included as a JPEG file (NP_Documentation folder). Documentation files are available in PDF and ASCII (.txt) formats. Tables are available in both Excel and ASCII (.txt) formats.

  18. SoilJ - An ImageJ plugin for semi-automatized image-processing of 3-D X-ray images of soil columns

    NASA Astrophysics Data System (ADS)

    Koestel, John

    2016-04-01

    3-D X-ray imaging is a formidable tool for quantifying soil structural properties, which are known to be extremely diverse. This diversity necessitates the collection of large sample sizes for adequately representing the spatial variability of soil structure at a specific sampling site. One important bottleneck of using X-ray imaging is, however, the large amount of time required by a trained specialist to process the image data, which makes it difficult to process larger numbers of samples. The software SoilJ aims at removing this bottleneck by automating most of the image processing steps needed to analyze image data of cylindrical soil columns. SoilJ is a plugin of the free Java-based image-processing software ImageJ. The plugin is designed to automatically process all images located within a designated folder. In a first step, SoilJ recognizes the outlines of the soil column, after which the column is rotated to an upright position and placed in the center of the canvas. Excess canvas is removed from the images. Then, SoilJ samples the grey values of the column material as well as the surrounding air in the Z-direction. Assuming that the column material (mostly PVC or aluminium) exhibits a spatially constant density, these grey values serve as a proxy for the image illumination at a specific Z-coordinate. Together with the grey values of the air, they are used to correct image illumination fluctuations, which often occur along the axis of rotation during image acquisition. SoilJ also includes an algorithm for beam-hardening artefact removal and extended image segmentation options. Finally, SoilJ integrates the morphology analysis plugins of BoneJ (Doube et al., 2010, BoneJ: Free and extensible bone image analysis in ImageJ. Bone 47: 1076-1079) and provides an ASCII file summarizing these measures for each investigated soil column. In the future it is planned to integrate SoilJ into FIJI, the maintained and updated edition of ImageJ with selected plugins.
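
    The per-slice illumination correction can be illustrated with a short sketch: each slice is rescaled so that the mean grey value of the (assumed constant-density) column wall matches a common reference. SoilJ itself is a Java ImageJ plugin; the NumPy version below, with a hypothetical wall mask and synthetic volume, only shows the principle.

    ```python
    # Sketch of per-slice illumination correction along the Z-axis of a 3-D X-ray volume,
    # using the column wall as a constant-density reference.
    import numpy as np

    def correct_illumination(volume, wall_mask):
        """volume: (Z, Y, X) grey values; wall_mask: (Y, X) boolean mask of the column wall."""
        corrected = volume.astype(float).copy()
        wall_means = np.array([slice_[wall_mask].mean() for slice_ in corrected])
        reference = wall_means.mean()                      # common target grey value
        for z, m in enumerate(wall_means):
            corrected[z] *= reference / m                  # per-slice multiplicative correction
        return corrected

    if __name__ == "__main__":
        rng = np.random.default_rng(4)
        vol = rng.random((50, 128, 128)) * np.linspace(0.8, 1.2, 50)[:, None, None]
        mask = np.zeros((128, 128), bool)
        mask[:, :5] = True                                 # stand-in wall region
        print(correct_illumination(vol, mask).shape)
    ```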

  19. Using applet-servlet communication for optimizing window, level and crop for DICOM to JPEG conversion.

    PubMed

    Kamauu, Aaron W C; DuVall, Scott L; Wiggins, Richard H; Avrin, David E

    2008-09-01

    In the creation of interesting radiological cases in a digital teaching file, it is necessary to adjust the window and level settings of an image to effectively display the educational focus. The web-based applet described in this paper presents an effective solution for real-time window and level adjustments without leaving the picture archiving and communications system workstation. Optimized images are created, as user-defined parameters are passed between the applet and a servlet on the Health Insurance Portability and Accountability Act-compliant teaching file server.
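
    The server-side work of applying user-chosen window, level, and crop parameters before JPEG export can be sketched as below. The parameter names and plumbing are assumptions; the paper's Java applet/servlet implementation is not reproduced, and unlike the DICOM-reading sketch earlier, this one starts from raw pixel values already held on the server.

    ```python
    # Sketch of applying user-supplied window/level and crop parameters to raw pixel
    # data before JPEG export, the kind of request an applet could pass to a servlet.
    import numpy as np
    from PIL import Image

    def window_level_crop(pixels, window, level, box):
        """pixels: 2-D array; window/level in pixel units; box = (left, top, right, bottom)."""
        lo, hi = level - window / 2.0, level + window / 2.0
        img8 = np.clip((pixels.astype(float) - lo) / (hi - lo) * 255.0, 0, 255).astype(np.uint8)
        left, top, right, bottom = box
        return Image.fromarray(img8[top:bottom, left:right])

    if __name__ == "__main__":
        demo = np.random.default_rng(5).integers(0, 4096, (512, 512))   # synthetic 12-bit image
        window_level_crop(demo, window=400, level=1024, box=(100, 100, 400, 400)).save("teaching_case.jpg")
    ```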

  20. Astrophysical blast wave data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riley, Nathan; Geissel, Matthias; Lewis, Sean M

    2015-03-01

    The data described in this document consist of image files of shadowgraphs of astrophysically relevant laser-driven blast waves. Supporting files include Mathematica notebooks containing design calculations, tabulated experimental data and notes, and relevant publications from the open research literature. The data were obtained on the Z-Beamlet laser from July to September 2014. Selected images and calculations will be published as part of a PhD dissertation and in associated publications in the open research literature, with Sandia credited as appropriate. The authors are not aware of any restrictions that could affect the release of the data.
