Sample records for document image processing

  1. Adaptive Algorithms for Automated Processing of Document Images

    DTIC Science & Technology

    2011-01-01

    ABSTRACT: Title of dissertation: Adaptive Algorithms for Automated Processing of Document Images. Mudit Agrawal, Doctor of Philosophy, 2011. Dissertation submitted to the Faculty of the Graduate School of the University…

  2. Document Image Processing: Going beyond the Black-and-White Barrier. Progress, Issues and Options with Greyscale and Colour Image Processing.

    ERIC Educational Resources Information Center

    Hendley, Tom

    1995-01-01

    Discussion of digital document image processing focuses on issues and options associated with greyscale and color image processing. Topics include speed; size of original document; scanning resolution; markets for different categories of scanners, including photographic libraries, publishing, and office applications; hybrid systems; data…

  3. Content-based retrieval of historical Ottoman documents stored as textual images.

    PubMed

    Saykol, Ediz; Sinop, Ali Kemal; Güdükbay, Ugur; Ulusoy, Ozgür; Cetin, A Enis

    2004-03-01

    There is an accelerating demand to access the visual content of documents stored in historical and cultural archives. Availability of electronic imaging tools and effective image processing techniques makes it feasible to process the multimedia data in large databases. In this paper, a framework for content-based retrieval of historical documents in the Ottoman Empire archives is presented. The documents are stored as textual images, which are compressed by constructing a library of symbols occurring in a document, and the symbols in the original image are then replaced with pointers into the codebook to obtain a compressed representation of the image. The features in wavelet and spatial domain based on angular and distance span of shapes are used to extract the symbols. To perform content-based retrieval in historical archives, a query is specified as a rectangular region in an input image and the same symbol-extraction process is applied to the query region. The queries are processed on the codebook of documents and the query images are identified in the resulting documents using the pointers in textual images. The querying process does not require decompression of images. The new content-based retrieval framework is also applicable to many other document archives using different scripts.
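The codebook idea in this record can be illustrated with a toy sketch (not the authors' implementation; all names and patch shapes here are illustrative): symbols are stored once in a library, the page becomes a list of pointers into it, and a query is matched once against the codebook, so occurrences are found without decompressing the image.

```python
import numpy as np

def build_codebook(patches):
    """Build a library of unique symbol patches plus pointers into it.

    patches: list of equally-sized binary arrays (extracted symbols).
    The original sequence is recoverable as [codebook[i] for i in pointers].
    """
    codebook, pointers = [], []
    for p in patches:
        for i, c in enumerate(codebook):
            if np.array_equal(p, c):
                pointers.append(i)   # symbol already in the library
                break
        else:
            pointers.append(len(codebook))
            codebook.append(p)       # new symbol: extend the library
    return codebook, pointers

def query(codebook, pointers, symbol):
    """Find occurrences of a query symbol without decompressing:
    match once against the codebook, then scan the pointer list."""
    idx = next((i for i, c in enumerate(codebook)
                if np.array_equal(symbol, c)), None)
    return [] if idx is None else [j for j, p in enumerate(pointers) if p == idx]

a = np.array([[1, 0], [0, 1]])
b = np.array([[0, 1], [1, 0]])
cb, ptrs = build_codebook([a, b, a, a, b])
print(len(cb))              # 2 distinct symbols
print(query(cb, ptrs, a))   # occurrences of `a`: [0, 2, 3]
```

The point of the sketch is the last step: the query touches only the (small) codebook and the pointer list, never the reconstructed page image.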

  4. Script identification from images using cluster-based templates

    DOEpatents

    Hochberg, J.G.; Kelly, P.M.; Thomas, T.R.

    1998-12-01

    A computer-implemented method identifies a script used to create a document. A set of training documents for each script to be identified is scanned into the computer to store a series of exemplary images representing each script. Pixels forming the exemplary images are electronically processed to define a set of textual symbols corresponding to the exemplary images. Each textual symbol is assigned to a cluster of textual symbols that most closely represents the textual symbol. The cluster of textual symbols is processed to form a representative electronic template for each cluster. A document having a script to be identified is scanned into the computer to form one or more document images representing the script to be identified. Pixels forming the document images are electronically processed to define a set of document textual symbols corresponding to the document images. The set of document textual symbols is compared to the electronic templates to identify the script. 17 figs.
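A minimal sketch of the cluster-based template idea (hypothetical feature vectors and script names, not the patented implementation): symbols from training pages are clustered, the cluster means serve as that script's templates, and an unknown document is assigned the script whose templates its symbols match best.

```python
import numpy as np

def make_templates(symbol_features, n_clusters=2, iters=10, seed=0):
    """Toy k-means: cluster symbol feature vectors for one script and
    return the cluster means as that script's 'templates'."""
    rng = np.random.default_rng(seed)
    X = np.asarray(symbol_features, dtype=float)
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(iters):
        # assign each symbol to its nearest centre, then recompute centres
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return centers

def identify(templates_by_script, document_symbols):
    """Score each script by the summed distance of the document's symbols
    to their nearest template; return the best-matching script name."""
    X = np.asarray(document_symbols, dtype=float)
    def score(t):
        return ((X[:, None] - t) ** 2).sum(-1).min(axis=1).sum()
    return min(templates_by_script, key=lambda s: score(templates_by_script[s]))

# Hypothetical 2-D symbol features for two scripts
latin = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]]
cyrillic = [[0.5, 0.5], [0.4, 0.6], [0.6, 0.4], [0.5, 0.4]]
templates = {"latin": make_templates(latin), "cyrillic": make_templates(cyrillic)}
print(identify(templates, [[0.15, 0.85], [0.85, 0.15]]))  # latin
```

Real systems would of course extract richer texture or shape features per symbol; the clustering-then-template-matching structure is what the patent describes.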

  5. Script identification from images using cluster-based templates

    DOEpatents

    Hochberg, Judith G.; Kelly, Patrick M.; Thomas, Timothy R.

    1998-01-01

    A computer-implemented method identifies a script used to create a document. A set of training documents for each script to be identified is scanned into the computer to store a series of exemplary images representing each script. Pixels forming the exemplary images are electronically processed to define a set of textual symbols corresponding to the exemplary images. Each textual symbol is assigned to a cluster of textual symbols that most closely represents the textual symbol. The cluster of textual symbols is processed to form a representative electronic template for each cluster. A document having a script to be identified is scanned into the computer to form one or more document images representing the script to be identified. Pixels forming the document images are electronically processed to define a set of document textual symbols corresponding to the document images. The set of document textual symbols is compared to the electronic templates to identify the script.

  6. To Image...or Not to Image?

    ERIC Educational Resources Information Center

    Bruley, Karina

    1996-01-01

    Provides a checklist of considerations for installing document image processing with an electronic document management system. Other topics include scanning; indexing; the image file life cycle; benefits of imaging; document-driven workflow; and planning for workplace changes like postsorting, creating a scanning room, redeveloping job tasks and…

  7. Web-based document image processing

    NASA Astrophysics Data System (ADS)

    Walker, Frank L.; Thoma, George R.

    1999-12-01

    Increasing numbers of research libraries are turning to the Internet for electronic interlibrary loan and for document delivery to patrons. This has been made possible through the widespread adoption of software such as Ariel and DocView. Ariel, a product of the Research Libraries Group, converts paper-based documents to monochrome bitmapped images and delivers them over the Internet. The National Library of Medicine's DocView is primarily designed for library patrons. Although libraries and their patrons are beginning to reap the benefits of this new technology, barriers exist, e.g., differences in image file format, that lead to difficulties in the use of library document information. To research how to overcome such barriers, the Communications Engineering Branch of the Lister Hill National Center for Biomedical Communications, an R and D division of NLM, has developed a web site called the DocMorph Server. This is part of an ongoing intramural R and D program in document imaging that has spanned many aspects of electronic document conversion and preservation, Internet document transmission and document usage. The DocMorph Server web site is designed to fill two roles. First, in a role that will benefit both libraries and their patrons, it allows Internet users to upload scanned image files for conversion to alternative formats, thereby enabling wider delivery and easier usage of library document information. Second, the DocMorph Server provides the design team an active test bed for evaluating the effectiveness and utility of new document image processing algorithms and functions, so that they may be evaluated for possible inclusion in other image processing software products being developed at NLM or elsewhere. This paper describes the design of the prototype DocMorph Server and the image processing functions being implemented on it.

  8. Parallel processing considerations for image recognition tasks

    NASA Astrophysics Data System (ADS)

    Simske, Steven J.

    2011-01-01

    Many image recognition tasks are well-suited to parallel processing. The most obvious example is that many imaging tasks require the analysis of multiple images. From this standpoint, then, parallel processing need be no more complicated than assigning individual images to individual processors. However, there are three less trivial categories of parallel processing that will be considered in this paper: parallel processing (1) by task; (2) by image region; and (3) by meta-algorithm. Parallel processing by task allows the assignment of multiple workflows, as diverse as optical character recognition (OCR), document classification and barcode reading, to parallel pipelines. This can substantially decrease time to completion for the document tasks. For this approach, each parallel pipeline is generally performing a different task. Parallel processing by image region allows a larger imaging task to be sub-divided into a set of parallel pipelines, each performing the same task but on a different data set. This type of image analysis is readily addressed by a map-reduce approach. Examples include document skew detection and multiple face detection and tracking. Finally, parallel processing by meta-algorithm allows different algorithms to be deployed on the same image simultaneously. This approach may result in improved accuracy.
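The "parallel processing by image region" category can be sketched in a map-reduce style (the counting task and names are illustrative, not from the paper): each strip of the image runs the same task independently, and the partial results are reduced into one answer.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def count_dark(region, threshold=128):
    """The per-region task (the 'map' step): count dark pixels in one strip."""
    return int((region < threshold).sum())

def parallel_by_region(image, n_strips=4):
    """Split an image into horizontal strips, run the same task on each
    strip in parallel, then 'reduce' the partial results.
    (For genuinely CPU-bound work in Python, a process pool would
    replace the thread pool here.)"""
    strips = np.array_split(image, n_strips, axis=0)
    with ThreadPoolExecutor(max_workers=n_strips) as pool:
        partial = list(pool.map(count_dark, strips))
    return sum(partial)

img = np.arange(256, dtype=np.uint8).reshape(16, 16)
print(parallel_by_region(img))  # 128, same as the serial count
```

The decomposition works because the per-strip results combine with a simple associative reduction (a sum), which is exactly what makes region-parallel tasks like skew detection map-reduce friendly.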

  9. Digital document imaging systems: An overview and guide

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This is an aid to NASA managers in planning the selection of a Digital Document Imaging System (DDIS) as a possible solution for document information processing and storage. Intended to serve as a manager's guide, this document contains basic information on digital imaging systems, technology, equipment standards, issues of interoperability and interconnectivity, and issues related to selecting appropriate imaging equipment based upon well defined needs.

  10. Interactive degraded document enhancement and ground truth generation

    NASA Astrophysics Data System (ADS)

    Bal, G.; Agam, G.; Frieder, O.; Frieder, G.

    2008-01-01

    Degraded documents are frequently obtained in various situations. Examples of degraded document collections include historical document depositories, documents obtained in legal and security investigations, and legal and medical archives. Degraded document images are hard to read and hard to analyze using computerized techniques. There is hence a need for systems that are capable of enhancing such images. We describe a language-independent semi-automated system for enhancing degraded document images that is capable of exploiting inter- and intra-document coherence. The system is capable of processing document images with high levels of degradation and can be used for ground truthing of degraded document images. Ground truthing of degraded document images is extremely important in several respects: it enables quantitative performance measurement of enhancement systems and facilitates model estimation that can be used to improve performance. Performance evaluation is provided using the historical Frieder diaries collection.

  11. Comparison of approaches for mobile document image analysis using server supported smartphones

    NASA Astrophysics Data System (ADS)

    Ozarslan, Suleyman; Eren, P. Erhan

    2014-03-01

    With the recent advances in mobile technologies, new capabilities are emerging, such as mobile document image analysis. However, mobile phones are still less powerful than servers, and they have some resource limitations. One approach to overcome these limitations is performing resource-intensive processes of the application on remote servers. In mobile document image analysis, the most resource-consuming process is the Optical Character Recognition (OCR) process, which is used to extract text from images captured by mobile phones. In this study, our goal is to compare the in-phone and the remote server processing approaches for mobile document image analysis in order to explore their trade-offs. For the in-phone approach, all processes required for mobile document image analysis run on the mobile phone. On the other hand, in the remote server approach, the core OCR process runs on the remote server and the other processes run on the mobile phone. Results of the experiments show that the remote server approach is considerably faster than the in-phone approach in terms of OCR time, but adds extra delays such as network delay. Since compression and downscaling of images significantly reduce file sizes and extra delays, the remote server approach overall outperforms the in-phone approach in terms of the selected speed and correct recognition metrics, provided the gain in OCR time compensates for the extra delays. According to the results of the experiments, using the most preferable settings, the remote server approach performs better than the in-phone approach in terms of speed and acceptable correct recognition metrics.
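The trade-off this study measures can be expressed as a simple timing model (all numbers below are hypothetical, not the paper's measurements): the remote approach wins when the server's OCR speedup outweighs the added network transfer and latency.

```python
def total_time_in_phone(ocr_phone_s, pre_s=0.0):
    """In-phone approach: everything runs locally."""
    return pre_s + ocr_phone_s

def total_time_remote(ocr_server_s, upload_bytes, bandwidth_bps, latency_s,
                      pre_s=0.0):
    """Remote approach: local preprocessing + network delay + server OCR."""
    return pre_s + latency_s + upload_bytes * 8 / bandwidth_bps + ocr_server_s

# Hypothetical numbers: compression/downscaling shrinks the upload to 200 KB,
# so the OCR speedup (12 s -> 1.5 s) outweighs the extra network delay.
in_phone = total_time_in_phone(ocr_phone_s=12.0)
remote = total_time_remote(ocr_server_s=1.5, upload_bytes=200_000,
                           bandwidth_bps=2_000_000, latency_s=0.3)
print(remote < in_phone)  # True for these settings
```

On a slow link or with large uncompressed images, the transfer term dominates and the inequality flips, which is exactly the sensitivity the experiments explore.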

  12. Imaging Systems: What, When, How.

    ERIC Educational Resources Information Center

    Lunin, Lois F.; And Others

    1992-01-01

    The three articles in this special section on document image files discuss intelligent character recognition, including comparison with optical character recognition; selection of displays for document image processing, focusing on paperlike displays; and imaging hardware, software, and vendors, including guidelines for system selection. (MES)

  13. Requirements for a documentation of the image manipulation processes within PACS

    NASA Astrophysics Data System (ADS)

    Retter, Klaus; Rienhoff, Otto; Karsten, Ch.; Prince, Hazel E.

    1990-08-01

    This paper discusses to what extent manipulation functions which have been applied to images handled in PACS should be documented. After postulating an increasing amount of postprocessing features on PACS consoles, legal, educational and medical reasons for a documentation of image manipulation processes are presented. Besides legal necessities, aspects of storage capacity, response time, and potential uses determine the extent of this documentation. Is there a specific kind of manipulation function that has to be documented generally? Should the physician decide which parts of the various pathways he tries are recorded by the system? To distinguish, for example, between reversible and irreversible functions or between interactive and non-interactive functions is one step towards a solution. Another step is to establish definitions for terms like "raw" and "final" image. The paper systematizes these questions and offers strategic help. The answers will have an important impact on PACS design and functionality.

  14. Web-based document and content management with off-the-shelf software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schuster, J

    1999-03-18

    This, then, is the current status of the project: Since we made the switch to Intradoc, we are now treating the project as a document and image management system. In reality, it could be considered a document and content management system, since we can manage almost any file input to the system, such as video or audio. At present, however, we are concentrating on images. As mentioned above, my CRADA funding was only targeted at including thumbnails of images in Intradoc. We still had to modify Intradoc so that it would compress images submitted to the system. All processing of files submitted to Intradoc is handled in what is called the Document Refinery. Even though MrSID created thumbnails in the process of compressing an image, work needed to be done to somehow build this capability into the Document Refinery. Therefore we made the decision to contract the Intradoc Engineering Team to perform this custom development work. To make Intradoc even more capable of handling images, we have also contracted for customization of the Document Refinery to accept Adobe Photoshop and Illustrator files in their native formats.

  15. Document Examination: Applications of Image Processing Systems.

    PubMed

    Kopainsky, B

    1989-12-01

    Dealing with images is a familiar business for an expert in questioned documents: microscopic, photographic, infrared, and other optical techniques generate images containing the information he or she is looking for. A recent method for extracting most of this information is digital image processing, ranging from simple contrast and contour enhancement to the advanced restoration of blurred texts. When combined with a sophisticated physical imaging system, an image processing system has proven to be a powerful and fast tool for routine non-destructive scanning of suspect documents. This article reviews frequent applications, comprising techniques to increase legibility, two-dimensional spectroscopy (ink discrimination, alterations, erased entries, etc.), comparison techniques (stamps, typescript letters, photo substitution), and densitometry. Computerized comparison of handwriting is not included. Copyright © 1989 Central Police University.

  16. Path Searching Based Crease Detection for Large Scale Scanned Document Images

    NASA Astrophysics Data System (ADS)

    Zhang, Jifu; Li, Yi; Li, Shutao; Sun, Bin; Sun, Jun

    2017-12-01

    Since the large size documents are usually folded for preservation, creases will occur in the scanned images. In this paper, a crease detection method is proposed to locate the crease pixels for further processing. According to the imaging process of contactless scanners, the shading on both sides of the crease usually varies a lot. Based on this observation, a convex hull based algorithm is adopted to extract the shading information of the scanned image. Then, the possible crease path can be achieved by applying the vertical filter and morphological operations on the shading image. Finally, the accurate crease is detected via Dijkstra path searching. Experimental results on the dataset of real scanned newspapers demonstrate that the proposed method can obtain accurate locations of the creases in the large size document images.
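The final Dijkstra step of this method can be sketched on a toy shading map (the cost formulation and array sizes are illustrative, not the authors' code): the crease is recovered as the cheapest top-to-bottom path, where low cost marks likely crease pixels.

```python
import heapq
import numpy as np

def min_cost_vertical_path(cost):
    """Dijkstra search for the cheapest top-to-bottom path in a cost map.
    Allowed moves from a pixel: down, down-left, down-right.
    Returns (total_cost, column index for each row)."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = np.full((h, w), -1, dtype=int)
    pq = []
    for c in range(w):                      # any top-row pixel may start the path
        dist[0, c] = cost[0, c]
        heapq.heappush(pq, (dist[0, c], 0, c))
    while pq:
        d, r, c = heapq.heappop(pq)
        if d > dist[r, c] or r == h - 1:    # stale entry, or bottom row reached
            continue
        for dc in (-1, 0, 1):               # relax the three downward moves
            nc = c + dc
            if 0 <= nc < w:
                nd = d + cost[r + 1, nc]
                if nd < dist[r + 1, nc]:
                    dist[r + 1, nc] = nd
                    prev[r + 1, nc] = c
                    heapq.heappush(pq, (nd, r + 1, nc))
    end = int(np.argmin(dist[-1]))          # cheapest arrival on the bottom row
    path = [end]
    for r in range(h - 1, 0, -1):           # walk the parent links back up
        path.append(int(prev[r, path[-1]]))
    path.reverse()
    return float(dist[-1, end]), path

# Toy shading map: column 2 is darker (cheaper), so the crease is found there.
shade = np.full((4, 5), 9.0)
shade[:, 2] = 1.0
cost, path = min_cost_vertical_path(shade)
print(path)  # [2, 2, 2, 2]
```

In the paper's pipeline the cost map would come from the convex-hull shading estimate after filtering and morphology; here a hand-made map stands in for it.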

  17. Statistical Techniques for Efficient Indexing and Retrieval of Document Images

    ERIC Educational Resources Information Center

    Bhardwaj, Anurag

    2010-01-01

    We have developed statistical techniques to improve the performance of document image search systems where the intermediate step of OCR based transcription is not used. Previous research in this area has largely focused on challenges pertaining to generation of small lexicons for processing handwritten documents and enhancement of poor quality…

  18. IDAPS (Image Data Automated Processing System) System Description

    DTIC Science & Technology

    1988-06-24

    This document describes the physical configuration and components used in the image processing system referred to as IDAPS (Image Data Automated Processing System). This system was developed by the Environmental Research Institute of Michigan (ERIM) for Eglin Air Force Base. The system is designed…

  19. 49 CFR 1104.2 - Document specifications.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... to facilitate automated processing in document sheet feeders, original documents of more than one... textual submissions. Use of color in filings is limited to images such as graphs, maps and photographs. To facilitate automated processing of color pages, color pages may not be inserted among pages containing text...

  20. 49 CFR 1104.2 - Document specifications.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... to facilitate automated processing in document sheet feeders, original documents of more than one... textual submissions. Use of color in filings is limited to images such as graphs, maps and photographs. To facilitate automated processing of color pages, color pages may not be inserted among pages containing text...

  21. 49 CFR 1104.2 - Document specifications.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... to facilitate automated processing in document sheet feeders, original documents of more than one... textual submissions. Use of color in filings is limited to images such as graphs, maps and photographs. To facilitate automated processing of color pages, color pages may not be inserted among pages containing text...

  22. 49 CFR 1104.2 - Document specifications.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... to facilitate automated processing in document sheet feeders, original documents of more than one... textual submissions. Use of color in filings is limited to images such as graphs, maps and photographs. To facilitate automated processing of color pages, color pages may not be inserted among pages containing text...

  23. 49 CFR 1104.2 - Document specifications.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... to facilitate automated processing in document sheet feeders, original documents of more than one... textual submissions. Use of color in filings is limited to images such as graphs, maps and photographs. To facilitate automated processing of color pages, color pages may not be inserted among pages containing text...

  24. TU-B-19A-01: Image Registration II: TG132-Quality Assurance for Image Registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brock, K; Mutic, S

    2014-06-15

    AAPM Task Group 132 was charged with reviewing the current approaches and solutions for image registration in radiotherapy and with providing recommendations for quality assurance and quality control of these clinical processes. Because the results of image registration are always used as the input to another process for planning or delivery, it is important for the user to understand and document the uncertainty associated with the algorithm in general and with the result of a specific registration. The recommendations of this task group, which at the time of abstract submission were being reviewed by the AAPM, include the following components. The user should understand the basic image registration techniques and methods of visualizing image fusion. The disclosure of basic components of the image registration by commercial vendors is critical in this respect. The physicists should perform end-to-end tests of imaging, registration, and planning/treatment systems if image registration is performed on a stand-alone system. A comprehensive commissioning process should be performed and documented by the physicist prior to clinical use of the system. As documentation is important to the safe implementation of this process, a request and report system should be integrated into the clinical workflow. Finally, a patient-specific QA practice should be established for efficient evaluation of image registration results. The implementation of these recommendations will be described and illustrated during this educational session. Learning Objectives: Highlight the importance of understanding the image registration techniques used in the clinic. Describe the end-to-end tests needed for stand-alone registration systems. Illustrate a comprehensive commissioning program using both phantom data and clinical images. Describe a request and report system to ensure communication and documentation. Demonstrate a clinically efficient patient QA practice for evaluation of image registration results.

  25. One-click scanning of large-size documents using mobile phone camera

    NASA Astrophysics Data System (ADS)

    Liu, Sijiang; Jiang, Bo; Yang, Yuanjie

    2016-07-01

    Current mobile apps for document scanning do not provide convenient operations for tackling large-size documents. In this paper, we present a one-click scanning approach for large-size documents using a mobile phone camera. After capturing a continuous video of a document, our approach automatically extracts several key frames by optical flow analysis. Then, based on the key frames, a mobile GPU based image stitching method is adopted to generate a complete document image with high detail. There is no extra manual intervention in the process, and experimental results show that our app performs well, demonstrating convenience and practicability for daily use.

  26. Contrast in Terahertz Images of Archival Documents—Part II: Influence of Topographic Features

    NASA Astrophysics Data System (ADS)

    Bardon, Tiphaine; May, Robert K.; Taday, Philip F.; Strlič, Matija

    2017-04-01

    We investigate the potential of terahertz time-domain imaging in reflection mode to reveal archival information in documents in a non-invasive way. In particular, this study explores the parameters and signal processing tools that can be used to produce well-contrasted terahertz images of topographic features commonly found in archival documents, such as indentations left by a writing tool, as well as sieve lines. While the amplitude of the waveforms at a specific time delay can provide the most contrasted and legible images of topographic features on flat paper or parchment sheets, this parameter may not be suitable for documents that have a highly irregular surface, such as water- or fire-damaged documents. For analysis of such documents, cross-correlation of the time-domain signals can instead yield images with good contrast. Analysis of the frequency-domain representation of terahertz waveforms can also provide well-contrasted images of topographic features, with improved spatial resolution when utilising high-frequency content. Finally, we point out some of the limitations of these means of analysis for extracting information relating to topographic features of interest from documents.

  27. 12 CFR 225.28 - List of permissible nonbanking activities.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... financial nature and other business records and documents used in processing such media. 14 14 See also the... selling checks and related documents, including corporate image checks, cash tickets, voucher checks... checks. (14) Data processing. (i) Providing data processing, data storage and data transmission services...

  28. 12 CFR 225.28 - List of permissible nonbanking activities.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... financial nature and other business records and documents used in processing such media. 13 13 See also the... selling checks and related documents, including corporate image checks, cash tickets, voucher checks... checks. (14) Data processing. (i) Providing data processing, data storage and data transmission services...

  29. 12 CFR 225.28 - List of permissible nonbanking activities.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... financial nature and other business records and documents used in processing such media. 13 13 See also the... selling checks and related documents, including corporate image checks, cash tickets, voucher checks... checks. (14) Data processing. (i) Providing data processing, data storage and data transmission services...

  30. 12 CFR 225.28 - List of permissible nonbanking activities.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... financial nature and other business records and documents used in processing such media. 13 13 See also the... selling checks and related documents, including corporate image checks, cash tickets, voucher checks... checks. (14) Data processing. (i) Providing data processing, data storage and data transmission services...

  31. Faxed document image restoration method based on local pixel patterns

    NASA Astrophysics Data System (ADS)

    Akiyama, Teruo; Miyamoto, Nobuo; Oguro, Masami; Ogura, Kenji

    1998-04-01

    A method is proposed for restoring degraded faxed document images using the patterns of pixels that make up small areas of a document. The method effectively restores faxed images that contain halftone textures and/or high-density salt-and-pepper noise, both of which degrade OCR system performance. In the halftone image restoration process, white-centered 3 x 3 pixel areas in which black and white pixels alternate are first identified as halftone textures using the distribution of the pixel values, and then the white center pixels are inverted to black. To remove high-density salt-and-pepper noise, it is assumed that the degradation is caused by ill-balanced bias and inappropriate thresholding of the sensor output, which results in the addition of random noise. The restored image can be estimated using an approximation that applies the inverse operation of the assumed original process. To process degraded faxed images, the algorithms mentioned above are combined. An experiment was conducted using 24 especially poor quality examples selected from data sets that exemplify what practical fax-based OCR systems cannot handle. The maximum recovery rate in terms of mean square error was 98.8 percent.
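The halftone-restoration rule described above (invert the white center of a 3 x 3 neighbourhood where black and white alternate) can be sketched as follows; the exact checkerboard test is a guess at the paper's criterion, and the toy image is illustrative.

```python
import numpy as np

def restore_halftone(img):
    """Invert white pixels whose 3x3 neighbourhood alternates black/white
    in a checkerboard pattern (a halftone texture cell), turning the
    textured region toward solid black. 0 = black ink, 1 = white paper."""
    out = img.copy()
    h, w = img.shape
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            win = img[r - 1:r + 2, c - 1:c + 2]
            # white centre, black 4-neighbours, white corners: checkerboard cell
            if (win[1, 1] == 1
                    and win[0, 1] == 0 and win[2, 1] == 0
                    and win[1, 0] == 0 and win[1, 2] == 0
                    and win[0, 0] == 1 and win[0, 2] == 1
                    and win[2, 0] == 1 and win[2, 2] == 1):
                out[r, c] = 0
    return out

# 5x5 checkerboard: interior white centres qualify and are flipped to black.
cb = np.indices((5, 5)).sum(axis=0) % 2
restored = restore_halftone(cb)
print(int(cb[1, 2]), int(restored[1, 2]))  # 1 0 : white centre inverted
```

Note the test reads from `img` while writing to `out`, so one inversion cannot disturb the detection of its neighbours within the same pass.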

  32. A methodology for evaluation of an interactive multispectral image processing system

    NASA Technical Reports Server (NTRS)

    Kovalick, William M.; Newcomer, Jeffrey A.; Wharton, Stephen W.

    1987-01-01

    Because of the considerable cost of an interactive multispectral image processing system, an evaluation of a prospective system should be performed to ascertain if it will be acceptable to the anticipated users. Evaluation of a developmental system indicated that the important system elements include documentation, user friendliness, image processing capabilities, and system services. The criteria and evaluation procedures for these elements are described herein. The following factors contributed to the success of the evaluation of the developmental system: (1) careful review of documentation prior to program development, (2) construction and testing of macromodules representing typical processing scenarios, (3) availability of other image processing systems for referral and verification, and (4) use of testing personnel with an applications perspective and experience with other systems. This evaluation was done in addition to and independently of program testing by the software developers of the system.

  33. The Power of Imaging.

    ERIC Educational Resources Information Center

    Haapaniemi, Peter

    1990-01-01

    Describes imaging technology, which allows huge numbers of words and illustrations to be reduced to a tiny fraction of the space required by the originals, and discusses current applications. Highlights include an image processing system at the National Archives; use by banks for high-speed check processing; engineering document management systems (EDMS); folder…

  34. Automation of Cassini Support Imaging Uplink Command Development

    NASA Technical Reports Server (NTRS)

    Ly-Hollins, Lisa; Breneman, Herbert H.; Brooks, Robert

    2010-01-01

    "Support imaging" is imagery requested by other Cassini science teams to aid in the interpretation of their data. The generation of the spacecraft command sequences for these images is performed by the Cassini Instrument Operations Team. The process initially established for doing this was very labor-intensive, tedious and prone to human error. Team management recognized this process as one that could easily benefit from automation. Team members were tasked to document the existing manual process, develop a plan and strategy to automate the process, implement the plan and strategy, test and validate the new automated process, and deliver the new software tools and documentation to Flight Operations for use during the Cassini extended mission. In addition to the goals of higher efficiency and lower risk in the processing of support imaging requests, an effort was made to maximize adaptability of the process to accommodate uplink procedure changes and the potential addition of new capabilities outside the scope of the initial effort.

  35. Composition of a dewarped and enhanced document image from two view images.

    PubMed

    Koo, Hyung Il; Kim, Jinho; Cho, Nam Ik

    2009-07-01

    In this paper, we propose an algorithm to compose a geometrically dewarped and visually enhanced image from two document images taken by a digital camera at different angles. Unlike the conventional works that require special equipment, assumptions on the contents of books, or complicated image acquisition steps, we estimate the unfolded book or document surface from the corresponding points between two images. For this purpose, the surface and camera matrices are estimated using structure reconstruction, 3-D projection analysis, and random sample consensus-based curve fitting with the cylindrical surface model. Because we do not need any assumption on the contents of books, the proposed method can be applied not only to optical character recognition (OCR), but also to the high-quality digitization of pictures in documents. In addition to the dewarping for a structurally better image, image mosaicking is also performed to further improve the visual quality. By finding the better parts of the images (with less out-of-focus blur and/or without specular reflections) from either of the two views, we compose a better image by stitching and blending them. These processes are formulated as energy minimization problems that can be solved using a graph cut method. Experiments on many kinds of book and document images show that the proposed algorithm works robustly and yields visually pleasing results. Also, the OCR rate of the resulting image is comparable to that of document images from a flatbed scanner.

  16. scikit-image: image processing in Python.

    PubMed

    van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.
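    As a small illustration of the library in a document-image setting, the sketch below binarizes the `page` sample image bundled with scikit-image using Otsu thresholding (assumes scikit-image is installed; the choice of Otsu's method here is illustrative, not from this record):

```python
from skimage import data, filters

# Grayscale scan of a book page bundled with scikit-image.
page = data.page()

# Otsu's method picks a global threshold separating ink from paper.
thresh = filters.threshold_otsu(page)
binary = page > thresh  # True where brighter than the threshold (paper)
```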

  17. Digital-image processing and image analysis of glacier ice

    USGS Publications Warehouse

    Fitzpatrick, Joan J.

    2013-01-01

    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document constitute a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset for Adobe Photoshop CS5 Extended, but the analysis can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.

  18. 75 FR 32860 - Regulatory Guidance Concerning the Preparation of Drivers' Record of Duty Status To Document...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-10

    ... motor carrier of a scanned image of the original record; the driver would retain the original while the carrier maintains the scanned electronic image along with any supporting documents. ... plans to implement a new approach for receiving and processing RODS. Its drivers would complete their...

  19. Using color management in color document processing

    NASA Astrophysics Data System (ADS)

    Nehab, Smadar

    1995-04-01

    Color Management Systems have been used for several years in Desktop Publishing (DTP) environments. While this development has not yet matured, we are already experiencing the next generation of the color imaging revolution: Device Independent Color for the small office/home office (SOHO) environment. Though there are still open technical issues with device-independent color matching, they are not the focal point of this paper. This paper discusses two new and crucial aspects of using color management in color document processing: the management of color objects and their associated color rendering methods, and a proposal for a precedence order and handshaking protocol among the various software components involved in color document processing. As color peripherals become affordable to the SOHO market, color management also becomes a prerequisite for common document authoring applications such as word processors. The first color management solutions were oriented towards DTP environments, whose requirements were largely different. For example, DTP documents are image-centric, as opposed to SOHO documents, which are text- and chart-centric. To achieve optimal reproduction on low-cost SOHO peripherals, it is critical that different color rendering methods be used for the different document object types. The first challenge in using color management for color document processing is therefore the association of rendering methods with object types. As a result of an evolutionary process, color matching solutions are now available as application software, as driver-embedded software, and as operating system extensions. Consequently, document processing faces a second challenge: the correct selection of the color matching solution while avoiding duplicate color corrections.

  20. Fast words boundaries localization in text fields for low quality document images

    NASA Astrophysics Data System (ADS)

    Ilin, Dmitry; Novikov, Dmitriy; Polevoy, Dmitry; Nikolaev, Dmitry

    2018-04-01

    The paper examines the problem of precise localization of word boundaries in document text zones. Document processing on a mobile device consists of document localization, perspective correction, localization of individual fields, finding words in separate zones, segmentation, and recognition. When an image is captured with a mobile digital camera under uncontrolled conditions, digital noise, perspective distortions, or glares may occur. Further document processing is complicated by document specifics: layout elements, complex backgrounds, static text, document security elements, and a variety of text fonts. Moreover, the problem of word boundary localization has to be solved at runtime on a mobile CPU with limited computing capabilities. At the moment, there are several groups of methods optimized for different conditions. Methods for scanned printed text are quick but limited to images of high quality. Methods for text in the wild have an excessively high computational complexity and are thus hardly suitable for running on mobile devices as part of a mobile document recognition system. The method presented in this paper solves a more specialized problem than finding text in natural images. It uses local features, a sliding window, and a lightweight neural network in order to achieve an optimal speed-precision ratio. The algorithm takes 12 ms per field running on an ARM processor of a mobile device. The error rate for boundaries localization on a test sample of 8000 fields is 0.3
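    The paper's lightweight neural network is not reproduced here, but the basic word-localization task can be sketched with a much simpler stand-in: scanning the ink projection profile of a binarized text line for gaps wide enough to separate words. The synthetic line and the `min_gap` parameter below are illustrative assumptions:

```python
import numpy as np

def word_boundaries(binary_line, min_gap=3):
    """Find (start, end) column spans of words in a binarized text
    line (True = ink): columns without ink form gaps, and only gaps
    at least min_gap columns wide separate words."""
    ink = binary_line.sum(axis=0) > 0
    words, start = [], None
    for col, has_ink in enumerate(ink):
        if has_ink and start is None:
            start = col
        elif not has_ink and start is not None:
            gap_end = col
            while gap_end < len(ink) and not ink[gap_end]:
                gap_end += 1
            if gap_end - col >= min_gap or gap_end == len(ink):
                words.append((start, col))
                start = None
    if start is not None:
        words.append((start, len(ink)))
    return words

# Synthetic line: two words, the first containing a 1-column gap
# (narrower than min_gap, so it does not split the word).
line = np.zeros((10, 40), dtype=bool)
line[:, 2:11] = True
line[:, 6] = False
line[:, 16:26] = True
words = word_boundaries(line)
```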

  1. [Quality control of laser imagers].

    PubMed

    Winkelbauer, F; Ammann, M; Gerstner, N; Imhof, H

    1992-11-01

    Multiformat imagers based on laser systems are used for documentation in an increasing number of investigations. The specific problems of quality control are explained, and the consistency of film processing is investigated in imager systems of different configurations, with (Machine 1: 3M-Laser-Imager-Plus M952 with connected 3M film processor, 3M-Film IRB, X-Ray Chemical Mixer 3M-XPM, 3M developer and fixer) or without (Machine 2: 3M-Laser-Imager-Plus M952 with separate DuPont-Cronex film processor, Kodak IR film, Kodak automixer, Kodak developer and fixer) a connected film processing unit. In our checks, based on DIN 6868 and ONORM S 5240, we found that film processing in the equipment with the directly adapted film processing unit was consistent according to DIN and ONORM. The checks of film processing consistency demanded by DIN 6868 could therefore be performed at longer intervals for this equipment. Systems with conventional darkroom processing, by comparison, show clearly increased fluctuation, and hence the demanded daily control is essential to guarantee an appropriate reaction and constant documentation quality.

  2. Electronic Document Supply Systems.

    ERIC Educational Resources Information Center

    Cawkell, A. E.

    1991-01-01

    Describes electronic document delivery systems used by libraries and document image processing systems used for business purposes. Topics discussed include technical specifications; analogue read-only laser videodiscs; compact discs and CD-ROM; WORM; facsimile; ADONIS (Article Delivery over Network Information System); DOCDEL; and systems at the…

  3. Integrated system for automated financial document processing

    NASA Astrophysics Data System (ADS)

    Hassanein, Khaled S.; Wesolkowski, Slawo; Higgins, Ray; Crabtree, Ralph; Peng, Antai

    1997-02-01

    A system was developed that integrates intelligent document analysis with multiple character/numeral recognition engines in order to achieve high accuracy automated financial document processing. In this system, images are accepted in both their grayscale and binary formats. A document analysis module starts by extracting essential features from the document to help identify its type (e.g. personal check, business check, etc.). These features are also utilized to conduct a full analysis of the image to determine the location of interesting zones such as the courtesy amount and the legal amount. These fields are then made available to several recognition knowledge sources such as courtesy amount recognition engines and legal amount recognition engines through a blackboard architecture. This architecture allows all the available knowledge sources to contribute incrementally and opportunistically to the solution of the given recognition query. Performance results on a test set of machine printed business checks using the integrated system are also reported.
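    The blackboard architecture described above can be sketched minimally as follows. The engine names, confidence values, and the additive scoring rule are hypothetical illustrations, not the system's actual arbitration logic:

```python
from collections import defaultdict

class Blackboard:
    """Shared store to which independent recognition engines post
    candidate values with confidences; scores accumulate per value."""
    def __init__(self):
        self.hypotheses = defaultdict(float)

    def post(self, value, confidence):
        self.hypotheses[value] += confidence

    def best(self):
        return max(self.hypotheses.items(), key=lambda kv: kv[1])

# Hypothetical engines voting on a courtesy-amount field.
board = Blackboard()
board.post("125.00", 0.80)  # courtesy amount engine A
board.post("125.00", 0.65)  # courtesy amount engine B agrees
board.post("126.00", 0.70)  # legal amount engine dissents
value, score = board.best()
```

    Each knowledge source contributes incrementally; the value with the strongest accumulated support wins.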

  4. scikit-image: image processing in Python

    PubMed Central

    Schönberger, Johannes L.; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D.; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org. PMID:25024921

  5. Performance evaluation methodology for historical document image binarization.

    PubMed

    Ntirogiannis, Konstantinos; Gatos, Basilis; Pratikakis, Ioannis

    2013-02-01

    Document image binarization is of great importance in the document image analysis and recognition pipeline since it affects further stages of the recognition process. The evaluation of a binarization method aids in studying its algorithmic behavior, as well as verifying its effectiveness, by providing qualitative and quantitative indication of its performance. This paper addresses a pixel-based binarization evaluation methodology for historical handwritten/machine-printed document images. In the proposed evaluation scheme, the recall and precision evaluation measures are properly modified using a weighting scheme that diminishes any potential evaluation bias. Additional performance metrics of the proposed evaluation scheme consist of the percentage rates of broken and missed text, false alarms, background noise, character enlargement, and merging. Several experiments conducted in comparison with other pixel-based evaluation measures demonstrate the validity of the proposed evaluation scheme.
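    The base recall and precision measures that the proposed weighting scheme builds on can be sketched as follows (toy ground truth; the paper's pixel-weighting scheme is not reproduced here):

```python
import numpy as np

def binarization_scores(result, truth):
    """Unweighted pixel-based recall and precision of a binarization
    result against ground truth (True = foreground ink)."""
    tp = np.logical_and(result, truth).sum()
    return tp / truth.sum(), tp / result.sum()

# Toy ground truth: a 2x2 block of ink pixels.
truth = np.zeros((4, 4), dtype=bool)
truth[1:3, 1:3] = True
# Result misses one ink pixel and adds one false alarm.
result = truth.copy()
result[1, 1] = False
result[0, 0] = True
recall, precision = binarization_scores(result, truth)
```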

  6. New concept high-speed and high-resolution color scanner

    NASA Astrophysics Data System (ADS)

    Nakashima, Keisuke; Shinoda, Shin'ichi; Konishi, Yoshiharu; Sugiyama, Kenji; Hori, Tetsuya

    2003-05-01

    We have developed a new concept high-speed and high-resolution color scanner (Blinkscan) using digital camera technology. With our most advanced sub-pixel image processing technology, approximately 12 million pixel image data can be captured. High resolution imaging capability allows various uses such as OCR, color document read, and document camera. The scan time is only about 3 seconds for a letter size sheet. Blinkscan scans documents placed "face up" on its scan stage and without any special illumination lights. Using Blinkscan, a high-resolution color document can be easily inputted into a PC at high speed, a paperless system can be built easily. It is small, and since the occupancy area is also small, setting it on an individual desk is possible. Blinkscan offers the usability of a digital camera and accuracy of a flatbed scanner with high-speed processing. Now, about several hundred of Blinkscan are mainly shipping for the receptionist operation in a bank and a security. We will show the high-speed and high-resolution architecture of Blinkscan. Comparing operation-time with conventional image capture device, the advantage of Blinkscan will make clear. And image evaluation for variety of environment, such as geometric distortions or non-uniformity of brightness, will be made.

  7. Case retrieval in medical databases by fusing heterogeneous information.

    PubMed

    Quellec, Gwénolé; Lamard, Mathieu; Cazuguel, Guy; Roux, Christian; Cochener, Béatrice

    2011-01-01

    A novel content-based heterogeneous information retrieval framework, particularly well suited to browsing medical databases and supporting new-generation computer-aided diagnosis (CADx) systems, is presented in this paper. It was designed to retrieve possibly incomplete documents, consisting of several images and semantic information, from a database; more complex data types such as videos can also be included in the framework. The proposed retrieval method relies on image processing, in order to characterize each individual image in a document by its digital content, and on information fusion. Once the available images in a query document are characterized, a degree of match between the query document and each reference document stored in the database is defined for each attribute (an image feature or a metadata item). A Bayesian network is used to recover missing information if need be. Finally, two novel information fusion methods are proposed to combine these degrees of match, in order to rank the reference documents by decreasing relevance for the query. In the first method, the degrees of match are fused by the Bayesian network itself. In the second method, they are fused by the Dezert-Smarandache theory: the second approach lets us model our confidence in each source of information (i.e., each attribute) and take it into account in the fusion process for better retrieval performance. The proposed methods were applied to two heterogeneous medical databases, a diabetic retinopathy database and a mammography screening database, for computer-aided diagnosis. Precisions at five of 0.809 ± 0.158 and 0.821 ± 0.177, respectively, were obtained for these two databases, which is very promising.
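    The fusion step can be illustrated with a deliberately naive stand-in. The paper fuses degrees of match with a Bayesian network or Dezert-Smarandache theory; the weighted-mean rule, attribute values, and confidence weights below are purely hypothetical:

```python
import numpy as np

def fuse(degrees, weights):
    """Confidence-weighted mean of per-attribute degrees of match."""
    degrees = np.asarray(degrees, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return float((degrees * weights).sum() / weights.sum())

# Two image features (trusted) and one metadata attribute (less so).
weights = [1.0, 1.0, 0.5]
score_doc1 = fuse([0.9, 0.7, 0.4], weights)  # reference document 1
score_doc2 = fuse([0.5, 0.6, 0.9], weights)  # reference document 2
```

    Ranking the reference documents by these scores gives the retrieval order; down-weighting the metadata attribute models lower confidence in that source.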

  8. Commercial applications for optical data storage

    NASA Astrophysics Data System (ADS)

    Tas, Jeroen

    1991-03-01

    Optical data storage has spurred the market for document imaging systems. These systems are increasingly being used to electronically manage the processing, storage and retrieval of documents. Applications range from straightforward archives to sophisticated workflow management systems. The technology is developing rapidly and within a few years optical imaging facilities will be incorporated in most of the office information systems. This paper gives an overview of the status of the market, the applications and the trends of optical imaging systems.

  9. Novel computer-based endoscopic camera

    NASA Astrophysics Data System (ADS)

    Rabinovitz, R.; Hai, N.; Abraham, Martin D.; Adler, Doron; Nissani, M.; Fridental, Ron; Vitsnudel, Ilia

    1995-05-01

    We have introduced a computer-based endoscopic camera which includes (a) unique real-time digital image processing to optimize image visualization, by reducing overexposed glared areas, brightening dark areas, and accentuating sharpness and fine structures, and (b) patient data documentation and management. The image processing is based on i Sight's iSP1000TM digital video processor chip and the patented Adaptive SensitivityTM scheme for capturing and displaying images with a wide dynamic range of light, taking into account local neighborhood image conditions and global image statistics. It provides the medical user with the ability to view images under difficult lighting conditions, without losing details `in the dark' or in completely saturated areas. The patient data documentation and management allows storage of images (approximately 1 MB per image for a full 24-bit color image) to any storage device installed in the camera, or to an external host media via network. The patient data included with every image describes essential information on the patient and procedure. The operator can assign custom data descriptors, and can search for the stored image/data by typing any image descriptor. The camera optics has an extended zoom range of f = 20 - 45 mm, allowing control of the diameter of the field displayed on the monitor such that the complete field of view of the endoscope can be displayed over the whole area of the screen. All these features provide a versatile endoscopic camera with excellent image quality and documentation capabilities.

  10. 36 CFR § 1238.14 - What are the microfilming requirements for permanent and unscheduled records?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... processing procedures in ANSI/AIIM MS1 and ANSI/AIIM MS23 (both incorporated by reference, see § 1238.5). (d... reference, see § 1238.5). (2) Background density of images. Agencies must use the background ISO standard... densities for images of documents are as follows: Classification Description of document Background density...

  11. Degraded document image enhancement

    NASA Astrophysics Data System (ADS)

    Agam, G.; Bal, G.; Frieder, G.; Frieder, O.

    2007-01-01

    Poor quality documents are obtained in various situations such as historical document collections, legal archives, security investigations, and documents found in clandestine locations. Such documents are often scanned for automated analysis, further processing, and archiving. Due to the nature of such documents, degraded document images are often hard to read, have low contrast, and are corrupted by various artifacts. We describe a novel approach for the enhancement of such documents based on probabilistic models which increases the contrast, and thus, readability of such documents under various degradations. The enhancement produced by the proposed approach can be viewed under different viewing conditions if desired. The proposed approach was evaluated qualitatively and compared to standard enhancement techniques on a subset of historical documents obtained from the Yad Vashem Holocaust museum. In addition, quantitative performance was evaluated based on synthetically generated data corrupted under various degradation models. Preliminary results demonstrate the effectiveness of the proposed approach.
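    The probabilistic enhancement model of this record is not reproduced here, but the kind of readability gain it targets can be illustrated with a much simpler baseline, a percentile contrast stretch. The percentile parameters and the synthetic "faded" data are illustrative assumptions:

```python
import numpy as np

def stretch_contrast(img, lo_pct=2, hi_pct=98):
    """Map the [lo_pct, hi_pct] percentile intensity band of a
    grayscale image onto the full [0, 255] range, clipping the rest."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    out = (img.astype(float) - lo) / max(hi - lo, 1e-6)
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)

# Synthetic "faded" scan: intensities squeezed into a narrow band.
rng = np.random.default_rng(1)
faded = rng.integers(110, 150, size=(64, 64)).astype(np.uint8)
enhanced = stretch_contrast(faded)
```

    Using percentiles rather than the raw min/max makes the stretch robust to a few extreme pixels, which are common in degraded scans.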

  12. 5 CFR 850.301 - Electronic records; other acceptable records.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... SERVICE REGULATIONS (CONTINUED) ELECTRONIC RETIREMENT PROCESSING Records § 850.301 Electronic records; other acceptable records. (a) Acceptable electronic records for retirement and insurance processing by... (SF 2806 or SF 3100), or data or images obtained from such documents, including images stored in EDMS...

  13. Digital imaging technology assessment: Digital document storage project

    NASA Technical Reports Server (NTRS)

    1989-01-01

    An ongoing technical assessment and requirements definition project is examining the potential role of digital imaging technology at NASA's STI facility. The focus is on the basic components of imaging technology in today's marketplace as well as the components anticipated in the near future. Presented is a requirement specification for a prototype project, an initial examination of current image processing at the STI facility, and an initial summary of image processing projects at other sites. Operational imaging systems incorporate scanners, optical storage, high resolution monitors, processing nodes, magnetic storage, jukeboxes, specialized boards, optical character recognition gear, pixel addressable printers, communications, and complex software processes.

  14. DOCLIB: a software library for document processing

    NASA Astrophysics Data System (ADS)

    Jaeger, Stefan; Zhu, Guangyu; Doermann, David; Chen, Kevin; Sampat, Summit

    2006-01-01

    Most researchers would agree that research in the field of document processing can benefit tremendously from a common software library through which institutions are able to develop and share research-related software and applications across academic, business, and government domains. However, despite several attempts in the past, the research community still lacks a widely-accepted standard software library for document processing. This paper describes a new library called DOCLIB, which tries to overcome the drawbacks of earlier approaches. Many of DOCLIB's features are unique either in themselves or in their combination with others, e.g. the factory concept for support of different image types, the juxtaposition of image data and metadata, or the add-on mechanism. We cherish the hope that DOCLIB serves the needs of researchers better than previous approaches and will readily be accepted by a larger group of scientists.

  15. KAPPA -- Kernel Application Package

    NASA Astrophysics Data System (ADS)

    Currie, Malcolm J.; Berry, David. S.

    KAPPA is an applications package comprising about 180 general-purpose commands for image processing, data visualisation, and manipulation of the standard Starlink data format---the NDF. It is intended to work in conjunction with Starlink's various specialised packages. In addition to the NDF, KAPPA can also process data in other formats by using the `on-the-fly' conversion scheme. Many commands can process data arrays of arbitrary dimension, and others work on both spectra and images. KAPPA operates from both the UNIX C-shell and the ICL command language. This document describes how to use KAPPA and its features. There is some description of techniques too, including a section on writing scripts. This document includes several tutorials and is illustrated with numerous examples. The bulk of this document comprises detailed descriptions of each command as well as classified and alphabetical summaries.

  16. The paper crisis: from hospitals to medical practices.

    PubMed

    Park, Gregory; Neaveill, Rodney S

    2009-01-01

    Hospitals, not unlike physician practices, are faced with an increasing burden of managing piles of hard copy documents including insurance forms, requests for information, and advance directives. Healthcare organizations are moving to transform paper-based forms and documents into digitized files in order to save time and money and to have those documents available at a moment's notice. The cost of these document management/imaging systems can be easily justified with the significant savings of resources realized from the implementation of these systems. This article illustrates the enormity of the "paper problem" in healthcare and outlines just a few of the required processes that could be improved with the use of automated document management/imaging systems.

  17. Document Monitor

    NASA Technical Reports Server (NTRS)

    1988-01-01

    The Charters of Freedom Monitoring System will periodically assess the physical condition of the U.S. Constitution, Declaration of Independence, and Bill of Rights. Although protected in helium-filled glass cases, the documents are subject to damage from light, vibration, and humidity. The photometer is a CCD detector used as the electronic film for the system's scanning camera, which mechanically scans each document line by line and acquires a series of images, each representing a one-square-inch portion of the document. Perkin-Elmer Corporation's photometer is capable of detecting changes in contrast, shape, or other indicators of degradation with 5 to 10 times the sensitivity of the human eye. A Vicom image processing computer receives the data from the photometer, stores it, and manipulates it, allowing comparison of electronic images over time to detect changes.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendonsa, D; Nekoogar, F; Martz, H

    This document describes the functionality of every component in the DHS/IDD archival and storage hardware system shown in Fig. 1. It describes the step-by-step process by which image data is received at LLNL, processed, and made available to authorized personnel and collaborators. Throughout this document, references are made to one of two figures: Fig. 1, describing the elements of the architecture, and Fig. 2, describing the workflow and how the project utilizes the available hardware.

  19. The Precise and Efficient Identification of Medical Order Forms Using Shape Trees

    NASA Astrophysics Data System (ADS)

    Henker, Uwe; Petersohn, Uwe; Ultsch, Alfred

    A powerful and flexible technique to identify, classify and process documents using images from a scanning process is presented. The types of documents can be described to the system as a set of differentiating features in a case base using shape trees. The features are filtered and abstracted from an extremely reduced scanner image of the document. Classification rules are stored with the cases to enable precise recognition and the subsequent mark-reading and Optical Character Recognition (OCR) processes. The method is implemented in a system which processes the majority of requests for medical lab procedures in Germany. A large practical experiment with data from practitioners was performed. An average of 97% of the forms were correctly identified; none were identified incorrectly. This meets the quality requirements for most medical applications. The modular description of the recognition process allows for a flexible adaptation to future changes in the form and content of the documents' structures.

  20. Virtual environments from panoramic images

    NASA Astrophysics Data System (ADS)

    Chapman, David P.; Deacon, Andrew

    1998-12-01

    A number of recent projects have demonstrated the utility of Internet-enabled image databases for the documentation of complex, inaccessible and potentially hazardous environments typically encountered in the petrochemical and nuclear industries. Unfortunately, machine vision and image processing techniques have not, to date, enabled the automatic extraction of geometrical data from such images, and thus 3D CAD modeling remains an expensive and laborious manual activity. Recent developments in panoramic image capture and presentation offer an alternative intermediate deliverable which, in turn, offers some of the benefits of a 3D model at a fraction of the cost. Panoramic image display tools such as Apple's QuickTime VR (QTVR) and Live Spaces RealVR provide compelling and accessible digital representations of the real world and justifiably claim to 'put the reality in Virtual Reality.' This paper will demonstrate how such technologies can be customized, extended and linked to facility management systems delivered over a corporate intranet, enabling end users to become familiar with remote sites and extract simple dimensional data. In addition, strategies for the integration of such images with documents gathered from 2D or 3D CAD and Process and Instrumentation Diagrams (P&IDs) will be described, as will techniques for precise 'as-built' modeling using the calibrated images from which panoramas have been derived, and the use of textures from these images to increase the realism of rendered scenes. A number of case studies relating to both nuclear and process engineering will demonstrate the extent to which such solutions are scalable in order to deal with the very large volumes of image data required to fully document the large, complex facilities typical of these industry sectors.

  1. Underwater Photogrammetry and 3d Reconstruction of Marble Cargos Shipwreck

    NASA Astrophysics Data System (ADS)

    Balletti, C.; Beltrame, C.; Costa, E.; Guerra, F.; Vernier, P.

    2015-04-01

    Nowadays, archaeological and architectural surveys are based on the acquisition and processing of point clouds, allowing a high metric precision, an essential prerequisite for good documentation. Digital image processing and laser scanning have changed the archaeological survey campaign from a manual, direct survey to a digital one, and currently multi-image photogrammetry is a good solution for underwater archaeology. This technical documentation cannot stand alone: it has to be supported by a topographical survey to georeference all the finds in the same reference system. In recent years the Ca' Foscari and IUAV Universities of Venice have been conducting research on integrated survey techniques to support underwater metric documentation. The paper explains all the phases of the survey design, image acquisition, topographic measurement, and data processing for two Roman shipwrecks in southern Sicily. The cargos of the shipwrecks are composed of huge marble blocks, but the sites differ in their morphological characteristics, their depth, and the distribution of the blocks on the seabed. The photogrammetric and topographical surveys were organized in two distinct ways, especially for the second site, whose depth allowed experimentation with GPS RTK measurements on one shipwreck. Moreover, this kind of three-dimensional documentation is useful for education and dissemination, being easy for a wide public to understand.

  2. Textual blocks rectification method based on fast Hough transform analysis in identity documents recognition

    NASA Astrophysics Data System (ADS)

    Bezmaternykh, P. V.; Nikolaev, D. P.; Arlazarov, V. L.

    2018-04-01

    Textual blocks rectification or slant correction is an important stage of document image processing in OCR systems. This paper considers existing methods and introduces an approach for the construction of such algorithms based on Fast Hough Transform analysis. A quality measurement technique is proposed and obtained results are shown for both printed and handwritten textual blocks processing as a part of an industrial system of identity documents recognition on mobile devices.
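    A toy illustration of the angle-voting idea behind Hough-based slant estimation follows. This is a simplified stand-in, not the paper's Fast Hough Transform analysis; the synthetic point data and the one-degree angle grid are assumptions:

```python
import numpy as np

def dominant_angle(points, angles_deg=np.arange(-45, 46)):
    """Vote over candidate slant angles: project ink points onto the
    normal of each candidate direction and pick the angle for which
    the projections cluster most tightly (minimum spread)."""
    ys, xs = points[:, 0], points[:, 1]
    best_angle, best_spread = 0.0, np.inf
    for a in angles_deg:
        t = np.deg2rad(a)
        spread = (ys * np.cos(t) - xs * np.sin(t)).std()
        if spread < best_spread:
            best_angle, best_spread = a, spread
    return best_angle

# Synthetic slanted baseline: points along a line at ~10 degrees.
rng = np.random.default_rng(2)
xs = rng.uniform(0, 100, 200)
ys = np.tan(np.deg2rad(10.0)) * xs + rng.normal(0, 0.5, 200)
pts = np.stack([ys, xs], axis=1)
angle = dominant_angle(pts)
```

    Rotating the block by the negative of the recovered angle would rectify the slant.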

  3. Electrocortical consequences of image processing: The influence of working memory load and worry.

    PubMed

    White, Evan J; Grant, DeMond M

    2017-03-30

    Research suggests that worry precludes emotional processing and biases attentional processes. Although there is burgeoning evidence for the relationship between executive functioning and worry, more research in this area is needed. A recent theory suggests that one mechanism for the negative effects of worry on neural indicators of attention may be working memory load; however, few studies have examined this directly. The goal of the current study was to document the influence of both visual and verbal working memory load and worry on attention allocation during the processing of emotional images in a cued image paradigm. It was hypothesized that working memory load would decrease attention allocation during the processing of emotional images. This was tested among 38 participants using a modified S1-S2 paradigm. Results indicated that both the visual and verbal working memory tasks reduced attention allocation to the processing of images across stimulus types compared to the baseline task, although only for individuals low in worry. These data extend the literature by documenting decreased neural responding (i.e., reduced LPP amplitude) to imagery under both the visual and verbal working memory loads, particularly among individuals low in worry.

  4. SSME propellant path leak detection real-time

    NASA Technical Reports Server (NTRS)

    Crawford, R. A.; Smith, L. M.

    1994-01-01

    Included are four documents that outline the technical aspects of the research performed on NASA Grant NAG8-140: 'A System for Sequential Step Detection with Application to Video Image Processing'; 'Leak Detection from the SSME Using Sequential Image Processing'; 'Digital Image Processor Specifications for Real-Time SSME Leak Detection'; and 'A Color Change Detection System for Video Signals with Applications to Spectral Analysis of Rocket Engine Plumes'.

  5. Signature detection and matching for document image retrieval.

    PubMed

    Zhu, Guangyu; Zheng, Yefeng; Doermann, David; Jaeger, Stefan

    2009-11-01

    As one of the most pervasive methods of individual identification and document authentication, signatures present convincing evidence and provide an important form of indexing for effective document image processing and retrieval in a broad range of applications. However, detection and segmentation of free-form objects such as signatures from cluttered backgrounds is currently an open document analysis problem. In this paper, we focus on two fundamental problems in signature-based document image retrieval. First, we propose a novel multiscale approach to jointly detecting and segmenting signatures from document images. Rather than focusing on local features that typically have large variations, our approach captures the structural saliency using a signature production model and computes the dynamic curvature of 2D contour fragments over multiple scales. This detection framework is general and computationally tractable. Second, we treat the problem of signature retrieval in the unconstrained setting of translation, scale, and rotation invariant nonrigid shape matching. We propose two novel measures of shape dissimilarity based on anisotropic scaling and registration residual error and present a supervised learning framework for combining complementary shape information from different dissimilarity metrics using LDA. We quantitatively study state-of-the-art shape representations, shape matching algorithms, measures of dissimilarity, and the use of multiple instances as queries in document image retrieval. We further demonstrate our matching techniques in offline signature verification. Extensive experiments using large real-world collections of English and Arabic machine-printed and handwritten documents demonstrate the excellent performance of our approaches.

  6. Automated measurement of pressure injury through image processing.

    PubMed

    Li, Dan; Mathews, Carol

    2017-11-01

    To develop an image processing algorithm that automatically measures pressure injuries from electronic pressure injury images stored in nursing documentation. Photographing pressure injuries and storing the images in the electronic health record is standard practice in many hospitals. However, manual measurement of pressure injuries is time-consuming, challenging and subject to intra- and inter-reader variability, given the complexities of the pressure injury and the clinical environment. A cross-sectional algorithm development study. A set of 32 pressure injury images was obtained from a western Pennsylvania hospital. First, we transformed the images from the RGB (i.e. red, green and blue) colour space to the YCbCr colour space to eliminate interference from varying light conditions and skin colours. Second, a probability map, generated by a skin-colour Gaussian model, guided the pressure injury segmentation process using a Support Vector Machine classifier. Third, after segmentation, the reference ruler included in each of the images enabled perspective transformation and determination of pressure injury size. Finally, two nurses independently measured those 32 pressure injury images, and the intraclass correlation coefficient was calculated. An image processing algorithm was developed to automatically measure the size of pressure injuries. Both inter- and intra-rater analyses achieved a good level of reliability. Validation of the size measurement of the pressure injury (1) demonstrates that our image processing algorithm is a reliable approach to monitoring pressure injury progress through clinical pressure injury images and (2) offers new insight into pressure injury evaluation and documentation. Once our algorithm is further developed, clinicians can be provided with an objective, reliable and efficient computational tool for the segmentation and measurement of pressure injuries. With this, clinicians will be able to monitor the healing process of pressure injuries more effectively. © 2017 John Wiley & Sons Ltd.
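
    The first step, moving from RGB to YCbCr so that chrominance can drive the skin-colour model independently of lighting, is a standard colour-space conversion. The sketch below uses ITU-R BT.601 full-range constants as an assumption, since the abstract does not give the exact transform:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an 8-bit RGB image to YCbCr (ITU-R BT.601, full range).

    Separating luminance (Y) from chrominance (Cb, Cr) lets a
    skin-colour Gaussian model operate on Cb/Cr alone, reducing
    sensitivity to varying light conditions.
    """
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)
```

A neutral grey pixel maps to (Y, 128, 128), i.e. zero chrominance, which is why shadows on skin perturb mainly the Y channel.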

  7. Document authentication at molecular levels using desorption atmospheric pressure chemical ionization mass spectrometry imaging.

    PubMed

    Li, Ming; Jia, Bin; Ding, Liying; Hong, Feng; Ouyang, Yongzhong; Chen, Rui; Zhou, Shumin; Chen, Huanwen; Fang, Xiang

    2013-09-01

    Molecular images of documents were obtained by sequentially scanning the surface of the document using desorption atmospheric pressure chemical ionization mass spectrometry (DAPCI-MS), operated in either a gasless, solvent-free mode or a methanol vapor-assisted mode. The decay process of the ink used for handwriting was monitored by following the signal intensities recorded by DAPCI-MS. Handwriting samples made using four types of ink on four kinds of paper surface were tested. By studying the dynamic decay of the inks, DAPCI-MS imaging differentiated a 10-min-old sample from two 4-h-old samples. Non-destructive forensic analysis of forged signatures, either handwritten or computer-assisted, was achieved according to differences in the contour of the DAPCI images, attributed to the writing strength characteristic of each writer. Distinction of the order of writing/stamping on documents and detection of illegal printings were accomplished with a spatial resolution of about 140 µm. A program written in Matlab® was developed to facilitate the visualization of the similarity between signature images obtained by DAPCI-MS. The experimental results show that DAPCI-MS imaging provides rich information at the molecular level and thus can be used for reliable document analysis in forensic applications. © 2013 The Authors. Journal of Mass Spectrometry published by John Wiley & Sons, Ltd.

  8. Preliminary application of Structure from Motion and GIS to document decomposition and taphonomic processes.

    PubMed

    Carlton, Connor D; Mitchell, Samantha; Lewis, Patrick

    2018-01-01

    Over the past decade, Structure from Motion (SfM) has increasingly been used as a means of digital preservation and for documenting archaeological excavations, architecture, and cultural material. However, few studies have tapped the potential of using SfM to document and analyze taphonomic processes affecting burials for forensic science purposes. This project utilizes SfM models to elucidate specific post-depositional events that affected a series of three human cadavers deposited at the Southeast Texas Applied Forensic Science Facility (STAFS). The aim of this research was to test the ability of untrained researchers to employ spatial software and photogrammetry for data collection. Over a period of three months, a single-lens reflex (SLR) camera was used to capture a series of overlapping images at periodic stages in the decomposition process of each cadaver. These images were processed through photogrammetric software that creates a 3D model that can be measured, manipulated, and viewed. The project used photogrammetric and geospatial software to map changes in decomposition and movement of the bodies from their original deposition points. Project results indicate that SfM and GIS are useful tools for documenting decomposition and taphonomic processes, and that photogrammetry is an efficient, relatively simple, and affordable tool for the documentation of decomposition. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Clustering of Farsi sub-word images for whole-book recognition

    NASA Astrophysics Data System (ADS)

    Soheili, Mohammad Reza; Kabir, Ehsanollah; Stricker, Didier

    2015-01-01

    Redundancy of word and sub-word occurrences in large documents can be effectively utilized in an OCR system to improve recognition results. Most OCR systems employ language modeling techniques as a post-processing step; however, these techniques do not use the important pictorial information that exists in the text image. In the case of large-scale recognition of degraded documents, this information is even more valuable. In our previous work, we proposed a sub-word image clustering method for applications dealing with large printed documents. In our clustering method, the ideal case is when all equivalent sub-word images lie in one cluster. To overcome the issues of low print quality, the clustering method uses an image matching algorithm to measure the distance between two sub-word images. The measured distance, together with a set of simple shape features, was used to cluster all sub-word images. In this paper, we analyze the effects of adding more shape features on processing time, purity of clustering, and the final recognition rate. Previously published experiments have shown the efficiency of our method on a book. Here we present extended experimental results and evaluate our method on another book with a totally different typeface. We also show that the number of newly created clusters on a page can be used as a criterion for assessing the quality of print and evaluating preprocessing phases.
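
    The clustering step described in the abstract, grouping sub-word images whenever an image-matching distance is small, can be sketched as a greedy single-pass clustering. The distance function and threshold below are placeholders for the paper's image-matching algorithm and shape features:

```python
def greedy_cluster(items, dist, threshold):
    """Assign each item to the first cluster whose representative
    (its first member) lies within `threshold`; otherwise open a new
    cluster. The count of newly opened clusters per page can then
    serve as a print-quality cue, as the abstract suggests."""
    clusters = []
    for item in items:
        for cluster in clusters:
            if dist(item, cluster[0]) <= threshold:
                cluster.append(item)
                break
        else:
            clusters.append([item])
    return clusters
```

With sub-word images, `dist` would be the paper's image-matching distance; here any metric works, e.g. `greedy_cluster([1, 2, 10, 11], lambda a, b: abs(a - b), 3)`.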

  10. 76 FR 45794 - Notice of Public Information Collection(s) Being Reviewed by the Federal Communications...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-01

    ... must be submitted electronically in machine-readable format. PDF images created by scanning a paper document may not be submitted, except in cases in which a word- processing version of the document is not...

  11. Fast title extraction method for business documents

    NASA Astrophysics Data System (ADS)

    Katsuyama, Yutaka; Naoi, Satoshi

    1997-04-01

    Conventional electronic document filing systems are inconvenient because the user must specify the keywords in each document for later searches. To solve this problem, automatic keyword extraction methods using natural language processing and character recognition have been developed. However, these methods are slow, especially for Japanese documents. To develop a practical electronic document filing system, we focused on extracting keyword areas from a document by image processing. Our fast title extraction method can automatically extract titles as keywords from business documents. All character strings are evaluated for similarity by rating points associated with title similarity. We classified these points into four items: character size, position of character strings, relative position among character strings, and string attribution. Finally, the character string with the highest rating is selected as the title area. The character recognition process is carried out on the selected area only. It is fast because this process must recognize a small number of patterns in the restricted area, rather than throughout the entire document. The mean performance of this method is an accuracy of about 91 percent and a processing time of 1.8 s for an examination of 100 Japanese business documents.
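
    The rating scheme can be sketched as a weighted sum over the four feature classes the abstract names. The particular feature encodings and weights below are invented for illustration; the paper's actual point tables are not given in the abstract:

```python
def title_score(cand, weights=None):
    """Score a candidate string region; higher means more title-like.

    `cand` is a dict with features normalized to [0, 1]
    (hypothetical encodings of the abstract's four items):
      char_size    - character height relative to the page's largest
      top_position - closeness to the top of the page
      isolation    - spacing relative to neighbouring strings
      attribution  - string attribution cues (e.g. centred, underlined)
    """
    w = weights or {"char_size": 0.4, "top_position": 0.3,
                    "isolation": 0.2, "attribution": 0.1}
    return sum(w[k] * cand[k] for k in w)

def pick_title(candidates):
    """Select the highest-rated string region as the title area."""
    return max(candidates, key=title_score)
```

Only the winning region is then passed to character recognition, which is what makes the method fast.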

  12. Automated detection using natural language processing of radiologists' recommendations for additional imaging of incidental findings.

    PubMed

    Dutta, Sayon; Long, William J; Brown, David F M; Reisner, Andrew T

    2013-08-01

    As use of radiology studies increases, there is a concurrent increase in incidental findings (eg, lung nodules) for which the radiologist issues recommendations for additional imaging for follow-up. Busy emergency physicians may be challenged to carefully communicate recommendations for additional imaging not relevant to the patient's primary evaluation. The emergence of electronic health records and natural language processing algorithms may help address this quality gap. We seek to describe recommendations for additional imaging from our institution and develop and validate an automated natural language processing algorithm to reliably identify recommendations for additional imaging. We developed a natural language processing algorithm to detect recommendations for additional imaging, using 3 iterative cycles of training and validation. The third cycle used 3,235 radiology reports (1,600 for algorithm training and 1,635 for validation) of discharged emergency department (ED) patients from which we determined the incidence of discharge-relevant recommendations for additional imaging and the frequency of appropriate discharge documentation. The test characteristics of the 3 natural language processing algorithm iterations were compared, using blinded chart review as the criterion standard. Discharge-relevant recommendations for additional imaging were found in 4.5% (95% confidence interval [CI] 3.5% to 5.5%) of ED radiology reports, but 51% (95% CI 43% to 59%) of discharge instructions failed to note those findings. The final natural language processing algorithm had 89% (95% CI 82% to 94%) sensitivity and 98% (95% CI 97% to 98%) specificity for detecting recommendations for additional imaging. For discharge-relevant recommendations for additional imaging, sensitivity improved to 97% (95% CI 89% to 100%). 
Recommendations for additional imaging are common, and failure to document relevant recommendations for additional imaging in ED discharge instructions occurs frequently. The natural language processing algorithm's performance improved with each iteration and offers a promising error-prevention tool. Copyright © 2013 American College of Emergency Physicians. Published by Mosby, Inc. All rights reserved.

  13. Walker Ranch 3D seismic images

    DOE Data Explorer

    Robert J. Mellors

    2016-03-01

    Amplitude images (both vertical and depth slices) extracted from 3D seismic reflection survey over area of Walker Ranch area (adjacent to Raft River). Crossline spacing of 660 feet and inline of 165 feet using a Vibroseis source. Processing included depth migration. Micro-earthquake hypocenters on images. Stratigraphic information and nearby well tracks added to images. Images are embedded in a Microsoft Word document with additional information. Exact location and depth restricted for proprietary reasons. Data collection and processing funded by Agua Caliente. Original data remains property of Agua Caliente.

  14. A user's guide for the signal processing software for image and speech compression developed in the Communications and Signal Processing Laboratory (CSPL), version 1

    NASA Technical Reports Server (NTRS)

    Kumar, P.; Lin, F. Y.; Vaishampayan, V.; Farvardin, N.

    1986-01-01

    A complete documentation of the software developed in the Communications and Signal Processing Laboratory (CSPL) during the period July 1985 to March 1986 is provided. Utility programs and subroutines that were developed for a user-friendly image and speech processing environment are described. Additional programs for data compression of image and speech signals are included, as are programs for zero-memory and block transform quantization in the presence of channel noise. Finally, several routines for simulating the performance of image compression algorithms are included.

  15. 10 CFR 2.1011 - Management of electronic information.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... participants shall make textual (or, where non-text, image) versions of their documents available on a web... of the following acceptable formats: ASCII, native word processing (Word, WordPerfect), PDF Normal, or HTML. (iv) Image files must be formatted as TIFF CCITT G4 for bi-tonal images or PNG (Portable...

  16. 10 CFR 2.1011 - Management of electronic information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... participants shall make textual (or, where non-text, image) versions of their documents available on a web... of the following acceptable formats: ASCII, native word processing (Word, WordPerfect), PDF Normal, or HTML. (iv) Image files must be formatted as TIFF CCITT G4 for bi-tonal images or PNG (Portable...

  17. Contrast in Terahertz Images of Archival Documents—Part I: Influence of the Optical Parameters from the Ink and Support

    NASA Astrophysics Data System (ADS)

    Bardon, Tiphaine; May, Robert K.; Jackson, J. Bianca; Beentjes, Gabriëlle; de Bruin, Gerrit; Taday, Philip F.; Strlič, Matija

    2017-04-01

    This study aims to objectively inform curators when terahertz time-domain (TD) imaging set in reflection mode is likely to give well-contrasted images of inscriptions in a complex archival document and is a useful non-invasive alternative to current digitisation processes. To this end, the dispersive refractive indices and absorption coefficients of various archival materials are assessed and their influence on contrast in terahertz images of historical documents is explored. Sepia ink and inks produced with bistre or verdigris mixed with a solution of Arabic gum or rabbit skin glue are unlikely to lead to well-contrasted images. However, dispersions of bone black, ivory black, iron gall ink, malachite, lapis lazuli, minium and vermilion are likely to lead to well-contrasted images. Inscriptions written with lamp black, carbon black and graphite give the best imaging results. The characteristic spectral signatures of iron gall ink, minium and vermilion pellets between 5 and 100 cm⁻¹ relate to a ringing effect at late collection times in TD waveforms transmitted through these pellets. The same ringing effect can be probed in waveforms reflected from iron gall, minium and vermilion ink deposits at the surface of a document. Since TD waveforms collected for each scanning pixel can be Fourier-transformed into spectral information, terahertz TD imaging in reflection mode can serve as a hyperspectral imaging tool. However, chemical recognition and mapping of the ink is currently limited by the fact that the morphology of the document influences the terahertz spectral response more than the resonant behaviour of the ink does.

  18. Functional Magnetic Resonance Imaging of Story Listening in Adolescents and Young Adults with Down Syndrome: Evidence for Atypical Neurodevelopment

    ERIC Educational Resources Information Center

    Jacola, L. M.; Byars, A. W.; Hickey, F.; Vannest, J.; Holland, S. K.; Schapiro, M. B.

    2014-01-01

    Background: Previous studies have documented differences in neural activation during language processing in individuals with Down syndrome (DS) in comparison with typically developing individuals matched for chronological age. This study used functional magnetic resonance imaging (fMRI) to compare activation during language processing in young…

  19. Chain of evidence generation for contrast enhancement in digital image forensics

    NASA Astrophysics Data System (ADS)

    Battiato, Sebastiano; Messina, Giuseppe; Strano, Daniela

    2010-01-01

    The quality of images obtained by digital cameras has improved greatly since the early days of digital photography. Unfortunately, it is not unusual in image forensics to encounter wrongly exposed pictures. This is mainly due to obsolete techniques or old technologies, but also to backlight conditions. To bring out otherwise invisible details, a stretching of the image contrast is required. Forensic rules for producing evidence require complete documentation of the processing steps, enabling replication of the entire process. The automation of enhancement techniques is thus quite difficult and needs to be carefully documented. This work presents an automatic procedure for finding contrast enhancement settings, allowing both image correction and automatic script generation. The technique is based on a preprocessing step that extracts the features of the image and selects correction parameters. The parameters are then saved through JavaScript code that is used in the second step of the approach to correct the image. The generated script is Adobe Photoshop compliant (Photoshop being widely used in image forensics analysis), thus permitting replication of the enhancement steps. Experiments on a dataset of images are also reported, showing the effectiveness of the proposed methodology.
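
    The key property, that the correction is reduced to a small, replayable parameter set, can be illustrated with a percentile-based contrast stretch. This sketch is in Python rather than the paper's Photoshop-compliant JavaScript, and the percentile heuristic is an assumption, not the authors' parameter-selection method:

```python
import numpy as np

def stretch_params(gray, low_pct=1.0, high_pct=99.0):
    """Pick black/white points from intensity percentiles, so that a
    fixed, scriptable parameter pair fully documents the correction."""
    lo, hi = np.percentile(gray, [low_pct, high_pct])
    return float(lo), float(hi)

def apply_stretch(gray, lo, hi):
    """Linearly map [lo, hi] to [0, 255], clipping the tails.
    Replaying (lo, hi) reproduces the enhancement exactly, which is
    what forensic chain-of-evidence rules demand."""
    out = (gray.astype(np.float64) - lo) / max(hi - lo, 1e-9)
    return np.clip(out, 0.0, 1.0) * 255.0
```

Logging `(lo, hi)` into a generated script is then enough for any examiner to replicate the enhancement.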

  20. Controlled electrostatic methodology for imaging indentations in documents.

    PubMed

    Yaraskavitch, Luke; Graydon, Matthew; Tanaka, Tobin; Ng, Lay-Keow

    2008-05-20

    The electrostatic process for imaging indentations on documents using the ESDA device is investigated under controlled experimental settings. An in-house modified commercial xerographic developer housing is used to control the uniformity and volume of toner deposition, allowing for reproducible image development. Along with this novel development tool, an electrostatic voltmeter and fixed environmental conditions facilitate an optimization process. Sample documents are preconditioned in a humidity cabinet with microprocessor control, and the significant benefit of humidification above 70% RH on image quality is verified. Improving on the subjective methods of previous studies, image quality analysis is carried out in an objective and reproducible manner using the PIAS-II. For the seven commercial paper types tested, the optimum ESDA operating point is found to be at an electric potential near -400V at the Mylar surface; however, for most paper types, the optimum operating regime is found to be quite broad, spanning relatively small electric potentials between -200 and -550V. At -400V, the film right above an indented area generally carries a voltage which is 30-50V less negative than the non-indented background. In contrast with Seward's findings [G.H. Seward, Model for electrostatic imaging of forensic evidence via discharge through Mylar-paper path, J. Appl. Phys. 83 (3) (1998) 1450-1456; G.H. Seward, Practical implications of the charge transport model for electrostatic detection apparatus (ESDA), J. Forensic Sci. 44 (4) (1999) 832-836], a period of charge decay before image development is not required when operating in this optimal regime. A brief investigation of the role played by paper-to-paper friction during the indentation process is conducted using our optimized development method.

  1. Stroke-model-based character extraction from gray-level document images.

    PubMed

    Ye, X; Cheriet, M; Suen, C Y

    2001-01-01

    Global gray-level thresholding techniques such as Otsu's method, and local gray-level thresholding techniques such as edge-based segmentation or the adaptive thresholding method are powerful in extracting character objects from simple or slowly varying backgrounds. However, they are found to be insufficient when the backgrounds include sharply varying contours or fonts in different sizes. A stroke-model is proposed to depict the local features of character objects as double-edges in a predefined size. This model enables us to detect thin connected components selectively, while ignoring relatively large backgrounds that appear complex. Meanwhile, since the stroke width restriction is fully factored in, the proposed technique can be used to extract characters in predefined font sizes. To process large volumes of documents efficiently, a hybrid method is proposed for character extraction from various backgrounds. Using the measurement of class separability to differentiate images with simple backgrounds from those with complex backgrounds, the hybrid method can process documents with different backgrounds by applying the appropriate methods. Experiments on extracting handwriting from a check image, as well as machine-printed characters from scene images demonstrate the effectiveness of the proposed model.
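
    Otsu's method, named here as the global baseline the hybrid approach builds on, chooses the threshold that maximizes between-class variance of the gray-level histogram. A compact standard formulation (not tied to the paper's code):

```python
import numpy as np

def otsu_threshold(gray):
    """Global threshold maximizing between-class variance (Otsu).

    gray: uint8 image array. Returns the threshold t; pixels > t are
    one class, pixels <= t the other.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # class-0 probability
    mu = np.cumsum(p * np.arange(256))    # class-0 cumulative mean
    mu_t = mu[-1]                         # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)      # zero out empty classes
    return int(np.argmax(sigma_b))
```

As the abstract notes, such a global criterion works well for bimodal histograms but breaks down when the background varies sharply, which is what motivates the stroke model.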

  2. Processing Of Binary Images

    NASA Astrophysics Data System (ADS)

    Hou, H. S.

    1985-07-01

    An overview of the recent progress in the area of digital processing of binary images in the context of document processing is presented here. The topics covered include input scan, adaptive thresholding, halftoning, scaling and resolution conversion, data compression, character recognition, electronic mail, digital typography, and output scan. Emphasis has been placed on illustrating the basic principles rather than descriptions of a particular system. Recent technology advances and research in this field are also mentioned.

  3. Document analysis with neural net circuits

    NASA Technical Reports Server (NTRS)

    Graf, Hans Peter

    1994-01-01

    Document analysis is one of the main applications of machine vision today and offers great opportunities for neural net circuits. Despite more and more data processing with computers, the number of paper documents is still increasing rapidly. A fast translation of data from paper into electronic format is needed almost everywhere, and when done manually, this is a time consuming process. Markets range from small scanners for personal use to high-volume document analysis systems, such as address readers for the postal service or check processing systems for banks. A major concern with present systems is the accuracy of the automatic interpretation. Today's algorithms fail miserably when noise is present, when print quality is poor, or when the layout is complex. A common approach to circumvent these problems is to restrict the variations of the documents handled by a system. In our laboratory, we had the best luck with circuits implementing basic functions, such as convolutions, that can be used in many different algorithms. To illustrate the flexibility of this approach, three applications of the NET32K circuit are described in this short viewgraph presentation: locating address blocks, cleaning document images by removing noise, and locating areas of interest in personal checks to improve image compression. Several of the ideas realized in this circuit that were inspired by neural nets, such as analog computation with a low resolution, resulted in a chip that is well suited for real-world document analysis applications and that compares favorably with alternative, 'conventional' circuits.

  4. Correcting geometric and photometric distortion of document images on a smartphone

    NASA Astrophysics Data System (ADS)

    Simon, Christian; Williem; Park, In Kyu

    2015-01-01

    A set of document image processing algorithms for improving the optical character recognition (OCR) capability of smartphone applications is presented. The scope of the problem covers the geometric and photometric distortion correction of document images. The proposed framework was developed to satisfy industrial requirements. It is implemented on an off-the-shelf smartphone with limited resources in terms of speed and memory. Geometric distortions, i.e., skew and perspective distortion, are corrected by sending horizontal and vertical vanishing points toward infinity in a downsampled image. Photometric distortion includes image degradation from moiré pattern noise and specular highlights. Moiré pattern noise is removed using low-pass filters with different sizes independently applied to the background and text region. The contrast of the text in a specular highlighted area is enhanced by locally enlarging the intensity difference between the background and text while the noise is suppressed. Intensive experiments indicate that the proposed methods show a consistent and robust performance on a smartphone with a runtime of less than 1 s.
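
    Perspective correction of a document page ultimately amounts to estimating a 3×3 homography. The paper does this by sending vanishing points toward infinity; the sketch below instead shows the generic direct linear transform (DLT) from four point correspondences (e.g., detected page corners), as an illustration of the same geometric operation:

```python
import numpy as np

def homography(src, dst):
    """Direct Linear Transform: solve dst ~ H @ src from 4 point pairs.

    Each correspondence contributes two linear constraints on the nine
    entries of H; the null vector of the stacked system (via SVD) is H
    up to scale.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=np.float64))
    return vt[-1].reshape(3, 3)

def warp_point(H, x, y):
    """Apply H to (x, y) in homogeneous coordinates."""
    px, py, pw = H @ np.array([x, y, 1.0])
    return px / pw, py / pw
```

Warping every pixel of the image through `H` (in practice via an inverse mapping with interpolation) yields the rectified page.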

  5. Some utilities to help produce Rich Text Files from Stata.

    PubMed

    Gillman, Matthew S

    Producing RTF files from Stata can be difficult and somewhat cryptic. Utilities are introduced to simplify this process; one builds up a table row-by-row, another inserts a PNG image file into an RTF document, and the others start and finish the RTF document.

  6. Video Information Communication and Retrieval/Image Based Information System (VICAR/IBIS)

    NASA Technical Reports Server (NTRS)

    Wherry, D. B.

    1981-01-01

    The acquisition, operation, and planning stages of installing a VICAR/IBIS system are described. The system operates in an IBM mainframe environment, and provides image processing of raster data. System support problems with software and documentation are discussed.

  7. Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). A description of the sensor, ground data processing facility, laboratory calibration, and first results

    NASA Technical Reports Server (NTRS)

    Vane, Gregg (Editor)

    1987-01-01

    The papers in this document were presented at the Imaging Spectroscopy 2 Conference of the 31st International Symposium on Optical and Optoelectronic Applied Science and Engineering, in San Diego, California, on 20 and 21 August 1987. They describe the design and performance of the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor and its subsystems, the ground data processing facility, laboratory calibration, and first results.

  8. Document image binarization using "multi-scale" predefined filters

    NASA Astrophysics Data System (ADS)

    Saabni, Raid M.

    2018-04-01

    Reading text or searching for keywords within a historical document is a very challenging task. One of the first steps of the complete task is binarization, where we separate foreground, such as text, figures and drawings, from the background. The success of this important step largely determines the success or failure of the subsequent steps, so it is vital to the complete task of reading and analyzing the content of a document image. Generally, historical document images are of poor quality due to their storage conditions and degradation over time, which mostly cause varying contrast, stains, dirt and ink seeping through from the reverse side. In this paper, we use banks of anisotropic predefined filters at different scales and orientations to develop a binarization method for degraded documents and manuscripts. Exploiting the fact that handwritten strokes may follow different scales and orientations, we use predefined sets of filter banks with various scales, weights, and orientations to seek a compact set of filters and weights that generate different layers of foreground and background. The results of convolving these filters locally on the gray-level image are weighted and accumulated to enhance the original image. Based on the different layers, seeds of components in the gray-level image, and a learning process, we present an improved binarization algorithm to separate the background from layers of foreground. Layers of foreground caused by seeping ink, degradation or other factors are then separated from the real foreground in a second phase. Promising experimental results were obtained on the DIBCO2011, DIBCO2013 and H-DIBCO2016 data sets and on a collection of images taken from real historical documents.
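
    A bank of anisotropic filters over scales and orientations, of the kind the abstract describes, can be sketched with elongated Gaussian kernels. The kernel sizes, elongation ratio, and counts below are illustrative assumptions, not the paper's trained parameters:

```python
import numpy as np

def oriented_gaussian(size, sigma_u, sigma_v, theta):
    """Anisotropic Gaussian kernel elongated along direction theta,
    so its response is strongest for strokes at that orientation."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    u = x * np.cos(theta) + y * np.sin(theta)
    v = -x * np.sin(theta) + y * np.cos(theta)
    k = np.exp(-(u ** 2 / (2 * sigma_u ** 2) + v ** 2 / (2 * sigma_v ** 2)))
    return k / k.sum()  # normalize so the filter preserves mean intensity

def filter_bank(size=15, scales=(1.0, 2.0), elong=4.0, n_orient=8):
    """Kernels at every (scale, orientation) pair; elong sets the
    anisotropy ratio between the along-stroke and across-stroke sigmas."""
    return [oriented_gaussian(size, s * elong, s, np.pi * i / n_orient)
            for s in scales for i in range(n_orient)]
```

Convolving the gray-level image with each kernel and accumulating the weighted responses gives the enhanced image from which foreground layers are separated.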

  9. Multispectral image restoration of historical documents based on LAAMs and mathematical morphology

    NASA Astrophysics Data System (ADS)

    Lechuga-S., Edwin; Valdiviezo-N., Juan C.; Urcid, Gonzalo

    2014-09-01

    This research introduces an automatic technique designed for the digital restoration of damaged parts of historical documents. For this purpose an imaging spectrometer is used to acquire a set of images in the wavelength interval from 400 to 1000 nm. Assuming the presence of linearly mixed spectral pixels registered in the multispectral image, our technique uses two lattice autoassociative memories to extract the set of pure pigments composing a given document. Through a spectral unmixing analysis, our method produces fractional abundance maps indicating the distribution of each pigment in the scene. These maps are then used to locate cracks and holes in the document under study. The restoration process is performed by applying a region-filling algorithm, based on morphological dilation, followed by a color interpolation to restore the original appearance of the filled areas. This procedure has been successfully applied to the analysis and restoration of three multispectral data sets: two corresponding to artificially superimposed scripts and one acquired from a real Mexican pre-Hispanic codex, whose restoration results are presented.
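
    The linear mixing assumption behind the abundance maps can be illustrated in isolation. The sketch below assumes the pure-pigment spectra (endmembers) are already known, showing only the unmixing step via ordinary least squares; the paper's lattice autoassociative memories, which extract the endmembers, are not reproduced:

```python
import numpy as np

def unmix(pixel, endmembers):
    """Least-squares fractional abundances for a linearly mixed pixel.

    pixel:      spectrum of one pixel, shape (n_bands,)
    endmembers: one column per pure pigment spectrum, shape
                (n_bands, n_pigments), so pixel ≈ endmembers @ a
    """
    a, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    return a
```

Applying `unmix` to every pixel yields one abundance map per pigment, from which low-abundance regions (cracks, holes) can be flagged for region filling.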

  10. Creating Polyphony with Exploratory Web Documentation in Singapore

    ERIC Educational Resources Information Center

    Lim, Sirene; Hoo, Lum Chee

    2012-01-01

    We introduce and reflect on "Images of Teaching", an ongoing web documentation research project on preschool teaching in Singapore. This paper discusses the project's purpose, methodological process, and our learning points as researchers who aim to contribute towards inquiry-based professional learning. The website offers a window into…

  11. Some utilities to help produce Rich Text Files from Stata

    PubMed Central

    Gillman, Matthew S.

    2018-01-01

    Producing RTF files from Stata can be difficult and somewhat cryptic. Utilities are introduced to simplify this process; one builds up a table row-by-row, another inserts a PNG image file into an RTF document, and the others start and finish the RTF document. PMID:29731697

  12. Review of chart recognition in document images

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Lu, Xiaoqing; Qin, Yeyang; Tang, Zhi; Xu, Jianbo

    2013-01-01

    As an effective way of transmitting information, charts are widely used to represent scientific statistical data in books, research papers, newspapers, etc. Though textual information is still the major source of data, there has been an increasing trend of introducing graphs, pictures, and figures into the information pool. Text recognition in documents is accomplished using optical character recognition (OCR) software. Chart recognition, a necessary supplement to OCR for document images, remains an unsolved problem due to the great subjectiveness and variety of chart styles. This paper reviews the development of chart recognition techniques over the past decades and presents the focuses of current research. The whole process of chart recognition is presented systematically in three main parts: chart segmentation, chart classification, and chart interpretation. For each part, the latest research work is introduced. The paper concludes with a summary and promising future research directions.

  13. Tools for a Document Image Utility.

    ERIC Educational Resources Information Center

    Krishnamoorthy, M.; And Others

    1993-01-01

    Describes a project conducted at Rensselaer Polytechnic Institute (New York) that developed methods for automatically subdividing pages from technical journals into smaller semantic units for transmission, display, and further processing in an electronic environment. Topics discussed include optical scanning and image compression, digital image…

  14. Document image retrieval through word shape coding.

    PubMed

    Lu, Shijian; Li, Linlin; Tan, Chew Lim

    2008-11-01

    This paper presents a document retrieval technique that is capable of searching document images without OCR (optical character recognition). The proposed technique retrieves document images by a new word shape coding scheme, which captures the document content through annotating each word image by a word shape code. In particular, we annotate word images by using a set of topological shape features including character ascenders/descenders, character holes, and character water reservoirs. With the annotated word shape codes, document images can be retrieved by either query keywords or a query document image. Experimental results show that the proposed document image retrieval technique is fast, efficient, and tolerant to various types of document degradation.
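    A toy version of word shape coding: each character is mapped to a coarse topological class (ascender, descender, enclosed hole, plain), and the per-word concatenation of class symbols serves as the retrieval key. The class sets and code alphabet below are illustrative, and the water-reservoir features used in the paper are omitted.

    ```python
    # Coarse topological classes; purely illustrative groupings.
    ASCENDERS = set("bdfhklt")
    DESCENDERS = set("gjpqy")
    HOLES = set("abdegopq")  # letters with an enclosed loop

    def shape_code(ch):
        """One code symbol per character class."""
        if ch in ASCENDERS:
            return "A" if ch in HOLES else "a"
        if ch in DESCENDERS:
            return "D" if ch in HOLES else "d"
        return "o" if ch in HOLES else "x"

    def word_code(word):
        """Concatenate per-character codes into a word shape code."""
        return "".join(shape_code(c) for c in word.lower())

    def retrieve(query, documents):
        """Return documents whose coded text contains the coded query,
        so matching never requires recognising individual characters."""
        q = " ".join(word_code(w) for w in query.split())
        return [d for d in documents
                if q in " ".join(word_code(w) for w in d.split())]
    ```

    Because many words share a shape code, a real system would verify candidates with further features; the point is that the coarse code is cheap to compute and OCR-free.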

  15. 10 CFR 2.1013 - Use of the electronic docket during the proceeding.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... searchable full text, by header and image, as appropriate. (b) Absent good cause, all exhibits tendered... circumstances where submitters may need to use an image scanned before January 1, 2004, in a document created after January 1, 2004, or the scanning process for a large, one-page image may not successfully complete...

  16. 10 CFR 2.1013 - Use of the electronic docket during the proceeding.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... header and image, as appropriate. (b) Absent good cause, all exhibits tendered during the hearing must... may need to use an image scanned before January 1, 2004, in a document created after January 1, 2004, or the scanning process for a large, one-page image may not successfully complete at the 300 dpi...

  17. Use of Image Based Modelling for Documentation of Intricately Shaped Objects

    NASA Astrophysics Data System (ADS)

    Marčiš, M.; Barták, P.; Valaška, D.; Fraštia, M.; Trhan, O.

    2016-06-01

    In the documentation of cultural heritage, we can encounter three-dimensional shapes and structures that are complicated to measure. Examples of such objects are spiral staircases, timber roof trusses, historical furniture, or folk costume, where it is nearly impossible to use traditional surveying or terrestrial laser scanning effectively due to the shape of the object, its dimensions, and the crowded environment. Current methods of digital photogrammetry can be very helpful in such cases, with the emphasis on automated processing of the extensive image data. The created high-resolution 3D models and 2D orthophotos are very important for the documentation of architectural elements, and they can serve as an ideal base for vectorization and 2D drawing documentation. This contribution describes various uses of image-based modelling for specific interior spaces and specific objects. The advantages and disadvantages of the photogrammetric measurement of such objects in comparison to other surveying methods are reviewed.

  18. Standardization of left atrial, right ventricular, and right atrial deformation imaging using two-dimensional speckle tracking echocardiography: a consensus document of the EACVI/ASE/Industry Task Force to standardize deformation imaging.

    PubMed

    Badano, Luigi P; Kolias, Theodore J; Muraru, Denisa; Abraham, Theodore P; Aurigemma, Gerard; Edvardsen, Thor; D'Hooge, Jan; Donal, Erwan; Fraser, Alan G; Marwick, Thomas; Mertens, Luc; Popescu, Bogdan A; Sengupta, Partho P; Lancellotti, Patrizio; Thomas, James D; Voigt, Jens-Uwe

    2018-03-27

    The EACVI/ASE/Industry Task Force to standardize deformation imaging prepared this consensus document to standardize definitions and techniques for using two-dimensional (2D) speckle tracking echocardiography (STE) to assess left atrial, right ventricular, and right atrial myocardial deformation. This document is intended for both the technical engineering community and the clinical community at large to provide guidance on selecting the functional parameters to measure and how to measure them using 2D STE. This document aims to represent a significant step forward in the collaboration between the scientific societies and industry, since the technical specifications of the software packages designed to post-process echocardiographic datasets were agreed and shared before their actual development. Hopefully, this will lead to more clinically oriented software packages that are better tailored to clinical needs and will allow industry to save time and resources in development.

  19. Generating Accurate 3d Models of Architectural Heritage Structures Using Low-Cost Camera and Open Source Algorithms

    NASA Astrophysics Data System (ADS)

    Zacharek, M.; Delis, P.; Kedzierski, M.; Fryskowska, A.

    2017-05-01

    These studies were conducted using a non-metric digital camera and dense image matching algorithms as non-contact methods of creating monument documentation. To process the imagery, several open-source applications and algorithms for generating a dense point cloud from images were used: OSM Bundler, the VisualSFM software, and the web application ARC3D. Images obtained for each of the investigated objects were processed with these applications, and dense point clouds and textured 3D models were created. As a result of post-processing, the obtained models were filtered and scaled. The research showed that accurate 3D models of structures (with an accuracy of a few centimeters) can be obtained even with open-source software, but for the purpose of documentation and conservation of cultural and historical heritage, such accuracy can be insufficient.

  20. Image Display and Manipulation System (IDAMS) program documentation, Appendixes A-D. [including routines, convolution filtering, image expansion, and fast Fourier transformation

    NASA Technical Reports Server (NTRS)

    Cecil, R. W.; White, R. A.; Szczur, M. R.

    1972-01-01

    The IDAMS Processor is a package of task routines and support software that performs convolution filtering, image expansion, fast Fourier transformation, and other operations on a digital image tape. A unique task control card for that program, together with any necessary parameter cards, selects each processing technique to be applied to the input image. A variable number of tasks can be selected for execution by including the proper task and parameter cards in the input deck. An executive maintains control of the run; it initiates execution of each task in turn and handles any necessary error processing.

  1. Image/Time Series Mining Algorithms: Applications to Developmental Biology, Document Processing and Data Streams

    ERIC Educational Resources Information Center

    Tataw, Oben Moses

    2013-01-01

    Interdisciplinary research in computer science requires the development of computational techniques for practical application in different domains. This usually requires careful integration of different areas of technical expertise. This dissertation presents image and time series analysis algorithms, with practical interdisciplinary applications…

  2. NASA sea ice and snow validation plan for the Defense Meteorological Satellite Program special sensor microwave/imager

    NASA Technical Reports Server (NTRS)

    Cavalieri, Donald J. (Editor); Swift, Calvin T. (Editor)

    1987-01-01

    This document addresses the task of developing and executing a plan for validating the algorithm used for initial processing of sea ice data from the Special Sensor Microwave/Imager (SSMI). The document outlines a plan for monitoring the performance of the SSMI, for validating the derived sea ice parameters, and for providing quality data products before distribution to the research community. Because of recent advances in the application of passive microwave remote sensing to snow cover on land, the validation of snow algorithms is also addressed.

  3. Extraction of latent images from printed media

    NASA Astrophysics Data System (ADS)

    Sergeyev, Vladislav; Fedoseev, Victor

    2015-12-01

    In this paper we propose an automatic technology for the extraction of latent images from printed media such as documents, banknotes, financial securities, etc. This technology includes image processing by an adaptively constructed Gabor filter bank for obtaining feature images, as well as subsequent stages of feature selection, grouping, and multicomponent segmentation. The main advantage of the proposed technique is its versatility: it can extract latent images produced by different texture variations. Experimental results comparing the performance of the method with another known system for latent image extraction are given.
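    The building block of such a pipeline is the Gabor filter: an oriented sinusoid under a Gaussian envelope, whose response discriminates between texture variations. A minimal sketch of constructing one real-valued kernel (the adaptive bank construction, feature selection, and segmentation stages of the paper are not reproduced; all parameters are illustrative):

    ```python
    import numpy as np

    def gabor_kernel(size, sigma, theta, wavelength):
        """Real-valued Gabor kernel: cosine carrier of the given wavelength,
        oriented at theta (radians), under an isotropic Gaussian envelope."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        u = x * np.cos(theta) + y * np.sin(theta)   # carrier direction
        envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        carrier = np.cos(2 * np.pi * u / wavelength)
        k = envelope * carrier
        return k - k.mean()  # zero mean: flat regions give zero response
    ```

    Convolving the printed area with kernels at several orientations and wavelengths yields the feature images: a latent texture tuned to one kernel responds strongly, while the surrounding background does not.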

  4. Using 3D range cameras for crime scene documentation and legal medicine

    NASA Astrophysics Data System (ADS)

    Cavagnini, Gianluca; Sansoni, Giovanna; Trebeschi, Marco

    2009-01-01

    Crime scene documentation and legal medicine analysis are part of a very complex process aimed at identifying the offender, starting from the collection of the evidence at the scene. This part of the investigation is very critical, since the crime scene is extremely volatile: once it is removed, it cannot be precisely recreated. For this reason, the documentation process should be as complete as possible, with minimum invasiveness. The use of optical 3D imaging sensors has been considered as a possible aid in the documentation step, since (i) the measurement is contactless and (ii) the process required to edit and model the 3D data is quite similar to the reverse engineering procedures originally developed for the manufacturing field. In this paper we show the most important results obtained in the experimentation.

  5. Warped document image correction method based on heterogeneous registration strategies

    NASA Astrophysics Data System (ADS)

    Tong, Lijing; Zhan, Guoliang; Peng, Quanyao; Li, Yang; Li, Yifan

    2013-03-01

    With the popularity of digital cameras and the application requirements of digitalized document images, using digital cameras to digitalize documents has become an irresistible trend. However, warping of the document surface seriously degrades the quality of Optical Character Recognition (OCR) systems. To improve the visual quality and the OCR rate of warped document images, this paper proposes a correction method based on heterogeneous registration strategies. The method mosaics two warped images of the same document taken from different viewpoints. First, two feature points are selected from one image. Then the two feature points are registered in the other image based on heterogeneous registration strategies. Finally, the two images are mosaicked, and the best mosaicked image is selected according to OCR results. For the best mosaicked image, the distortions are mostly removed and the OCR results are improved markedly. Experimental results show that the proposed method resolves the issue of warped document image correction more effectively.
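    Two registered feature points are exactly enough to determine a similarity transform (rotation, uniform scale, translation) between the two views, which is one simple way to align the images before mosaicking. A minimal closed-form sketch using complex arithmetic; the paper's heterogeneous registration strategies themselves are not reproduced, and the point tuples are hypothetical:

    ```python
    def similarity_from_two_points(p1, p2, q1, q2):
        """Closed-form similarity transform q = a*p + b from two point
        correspondences (p1 -> q1, p2 -> q2), treating 2-D points as
        complex numbers: a encodes rotation+scale, b the translation."""
        P1, P2 = complex(*p1), complex(*p2)
        Q1, Q2 = complex(*q1), complex(*q2)
        a = (Q2 - Q1) / (P2 - P1)
        b = Q1 - a * P1

        def warp(p):
            q = a * complex(*p) + b
            return (q.real, q.imag)

        return warp
    ```

    A warped page is not globally similar between views, which is why the method evaluates candidate mosaics by OCR rather than trusting a single global transform.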

  6. Determination of representative elementary areas for soil redoximorphic features by digital image processing

    USDA-ARS?s Scientific Manuscript database

    Photography has been a welcome tool in documenting and conveying qualitative soil information. When coupled with image analysis software, the usefulness of digital cameras can be increased to advance the field of micropedology. The determination of a Representative Elementary Area (REA) still rema...

  7. The Role of Grammatical Class on Word Recognition

    ERIC Educational Resources Information Center

    Vigliocco, Gabriella; Vinson, David P.; Arciuli, Joanne; Barber, Horacio

    2008-01-01

    The double dissociation between noun and verb processing, well documented in the neuropsychological literature, has not been supported in imaging studies. Recent imaging studies, in fact, suggest that once confounding with semantics is eliminated, grammatical class effects only emerge as a consequence of building frames. Here we assess this…

  8. CellAnimation: an open source MATLAB framework for microscopy assays.

    PubMed

    Georgescu, Walter; Wikswo, John P; Quaranta, Vito

    2012-01-01

    Advances in microscopy technology have led to the creation of high-throughput microscopes that are capable of generating several hundred gigabytes of images in a few days. Analyzing such a wealth of data manually is nearly impossible and requires an automated approach. There are at present a number of open-source and commercial software packages that allow the user to apply algorithms of different degrees of sophistication to the images and extract desired metrics. However, the types of metrics that can be extracted are severely limited by the specific image processing algorithms that the application implements, and by the expertise of the user. In most commercial software, code unavailability prevents implementation by the end user of newly developed algorithms better suited for a particular type of imaging assay. While it is possible to implement new algorithms in open-source software, rewiring an image processing application requires a high degree of expertise. To obviate these limitations, we have developed an open-source high-throughput application that allows implementation of different biological assays such as cell tracking or ancestry recording, through the use of small, relatively simple image processing modules connected into sophisticated imaging pipelines. By connecting modules, non-expert users can apply the particular combination of well-established and novel algorithms developed by us and others that are best suited for each individual assay type. In addition, our data exploration and visualization modules make it easy to discover or select specific cell phenotypes from a heterogeneous population. CellAnimation is distributed under the Creative Commons Attribution-NonCommercial 3.0 Unported license (http://creativecommons.org/licenses/by-nc/3.0/). CellAnimation source code and documentation may be downloaded from www.vanderbilt.edu/viibre/software/documents/CellAnimation.zip. Sample data are available at www.vanderbilt.edu/viibre/software/documents/movies.zip. Contact: walter.georgescu@vanderbilt.edu. Supplementary data are available at Bioinformatics online.

  9. Ensemble LUT classification for degraded document enhancement

    NASA Astrophysics Data System (ADS)

    Obafemi-Ajayi, Tayo; Agam, Gady; Frieder, Ophir

    2008-01-01

    The fast evolution of scanning and computing technologies has led to the creation of large collections of scanned paper documents. Examples of such collections include historical collections, legal depositories, medical archives, and business archives. Moreover, in many situations, such as legal litigation and security investigations, scanned collections are being used to facilitate systematic exploration of the data. It is almost always the case that scanned documents suffer from some form of degradation. Large degradations make documents hard to read and substantially deteriorate the performance of automated document processing systems. Enhancement of degraded document images is normally performed assuming global degradation models. When the degradation is large, global degradation models do not perform well. In contrast, we propose to estimate local degradation models and use them in enhancing degraded document images. Using a semi-automated enhancement system, we have labeled a subset of the Frieder diaries collection. This labeled subset was then used to train an ensemble classifier. The component classifiers are based on lookup tables (LUTs) in conjunction with an approximated nearest neighbor algorithm. The resulting algorithm is highly efficient. Experimental evaluation results are provided using the Frieder diaries collection.
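    The look-up-table idea can be sketched at the pixel level: encode each pixel's 3x3 binary neighbourhood as a 9-bit index and train a 512-entry table by majority vote against clean labels. This is a single LUT on raw binary context, assembled here as an illustration; the paper's ensemble and its approximated nearest neighbour component are not reproduced.

    ```python
    import numpy as np

    def neighborhood_codes(img):
        """Encode each pixel's 3x3 binary neighbourhood as a 9-bit index."""
        h, w = img.shape
        padded = np.pad(img, 1, mode='edge').astype(np.uint16)
        code = np.zeros((h, w), np.uint16)
        bit = 0
        for dy in range(3):
            for dx in range(3):
                code |= padded[dy:dy + h, dx:dx + w] << bit
                bit += 1
        return code

    def train_lut(noisy, clean):
        """Majority-vote LUT: for each 9-bit context observed in the noisy
        image, store the most frequent corresponding clean label."""
        codes = neighborhood_codes(noisy)
        votes = np.zeros((512, 2), np.int64)
        np.add.at(votes, (codes.ravel(), clean.ravel().astype(int)), 1)
        return votes.argmax(1).astype(np.uint8)

    def apply_lut(noisy, lut):
        """Enhance by replacing each pixel with its table entry."""
        return lut[neighborhood_codes(noisy)]
    ```

    Training separate tables on local regions, rather than one global table, is the spirit of the local degradation models the abstract argues for.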

  10. 39 CFR 3050.24 - Documentation supporting estimates of costs avoided by worksharing and other mail characteristics...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Computer Reader finalization costs, cost per image, and Remote Bar Code Sorter leakage; (8) Percentage of... processing units costs for Carrier Route, High Density, and Saturation mail; (j) Mail processing unit costs...

  11. 39 CFR 3050.24 - Documentation supporting estimates of costs avoided by worksharing and other mail characteristics...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Computer Reader finalization costs, cost per image, and Remote Bar Code Sorter leakage; (8) Percentage of... processing units costs for Carrier Route, High Density, and Saturation mail; (j) Mail processing unit costs...

  12. 39 CFR 3050.24 - Documentation supporting estimates of costs avoided by worksharing and other mail characteristics...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Computer Reader finalization costs, cost per image, and Remote Bar Code Sorter leakage; (8) Percentage of... processing units costs for Carrier Route, High Density, and Saturation mail; (j) Mail processing unit costs...

  13. 39 CFR 3050.24 - Documentation supporting estimates of costs avoided by worksharing and other mail characteristics...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Computer Reader finalization costs, cost per image, and Remote Bar Code Sorter leakage; (8) Percentage of... processing units costs for Carrier Route, High Density, and Saturation mail; (j) Mail processing unit costs...

  14. 39 CFR 3050.24 - Documentation supporting estimates of costs avoided by worksharing and other mail characteristics...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Computer Reader finalization costs, cost per image, and Remote Bar Code Sorter leakage; (8) Percentage of... processing units costs for Carrier Route, High Density, and Saturation mail; (j) Mail processing unit costs...

  15. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    PubMed

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-03-25

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  16. Minimal Camera Networks for 3D Image Based Modeling of Cultural Heritage Objects

    PubMed Central

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-01-01

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue “Lamassu”. Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883–859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm. PMID:24670718

  17. Radar data processing and analysis

    NASA Technical Reports Server (NTRS)

    Ausherman, D.; Larson, R.; Liskow, C.

    1976-01-01

    Digitized four-channel radar images corresponding to particular areas from the Phoenix and Huntington test sites were generated in conjunction with prior experiments performed to collect X- and L-band synthetic aperture radar imagery of these two areas. The methods for generating this imagery are documented. A secondary objective was the investigation of digital processing techniques for extraction of information from the multiband radar image data. Following the digitization, the remaining resources permitted a preliminary machine analysis to be performed on portions of the radar image data. The results, although necessarily limited, are reported.

  18. Graph-based layout analysis for PDF documents

    NASA Astrophysics Data System (ADS)

    Xu, Canhui; Tang, Zhi; Tao, Xin; Li, Yun; Shi, Cao

    2013-03-01

    To increase the flexibility and enrich the reading experience of e-books on small portable screens, a graph-based method is proposed to perform layout analysis on Portable Document Format (PDF) documents. Digitally born documents have inherent advantages, such as representing text and fractional images in explicit form, which can be straightforwardly exploited. To integrate traditional image-based document analysis with the inherent metadata provided by a PDF parser, the page primitives, including text, image, and path elements, are processed to produce text and non-text layers for separate analysis. The graph-based method is developed at the superpixel representation level, and page text elements corresponding to vertices are used to construct an undirected graph. The Euclidean distance between adjacent vertices is applied in a top-down manner to cut the graph tree formed by Kruskal's algorithm. Edge orientation is then used in a bottom-up manner to extract text lines from each subtree. Non-textual objects, on the other hand, are segmented by connected component analysis. For each segmented text and non-text composite, a 13-dimensional feature vector is extracted for labelling purposes. Experimental results on selected pages from PDF books are presented.
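    The top-down cut can be sketched as: build a minimum spanning tree over the text-element centroids with Kruskal's algorithm, then drop edges longer than a distance threshold so the surviving components approximate layout blocks. A minimal sketch on 2-D points; the bottom-up orientation pass and the threshold value are not taken from the paper.

    ```python
    import itertools
    import math

    def kruskal_mst(points):
        """Kruskal's algorithm on the complete graph of 2-D points,
        with union-find and path halving."""
        parent = list(range(len(points)))

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        edges = sorted(
            (math.dist(points[i], points[j]), i, j)
            for i, j in itertools.combinations(range(len(points)), 2)
        )
        mst = []
        for w, i, j in edges:
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj
                mst.append((w, i, j))
        return mst

    def cut_tree(points, max_edge):
        """Drop MST edges longer than max_edge; the remaining connected
        components approximate layout blocks."""
        parent = list(range(len(points)))

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        for w, i, j in kruskal_mst(points):
            if w <= max_edge:
                parent[find(i)] = find(j)
        labels = [find(i) for i in range(len(points))]
        remap = {}  # renumber components 0..k-1 in order of appearance
        return [remap.setdefault(l, len(remap)) for l in labels]
    ```

    In practice the threshold would be derived from the page's glyph spacing statistics rather than fixed.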

  19. The Multimission Image Processing Laboratory's virtual frame buffer interface

    NASA Technical Reports Server (NTRS)

    Wolfe, T.

    1984-01-01

    Large image processing systems use multiple frame buffers with differing architectures and vendor-supplied interfaces. This variety of architectures and interfaces creates software development, maintenance, and portability problems for application programs. Several machine-independent graphics standards such as ANSI Core and GKS are available, but none of them is adequate for image processing. Therefore, the Multimission Image Processing Laboratory project has implemented a programmer-level virtual frame buffer interface. This interface makes all frame buffers appear as a generic frame buffer with a specified set of characteristics. This document defines the virtual frame buffer interface and provides information such as FORTRAN subroutine definitions, frame buffer characteristics, sample programs, etc. It is intended to be used by application programmers and system programmers who are adding new frame buffers to a system.

  20. Electronic document management systems: an overview.

    PubMed

    Kohn, Deborah

    2002-08-01

    For over a decade, most health care information technology (IT) professionals erroneously learned that document imaging, which is one of the many component technologies of an electronic document management system (EDMS), is the only technology of an EDMS. In addition, many health care IT professionals erroneously believed that EDMSs have either a limited role or no place in IT environments. As a result, most health care IT professionals do not understand documents and unstructured data and their value as structured data partners in most aspects of transaction and information processing systems.

  1. A super resolution framework for low resolution document image OCR

    NASA Astrophysics Data System (ADS)

    Ma, Di; Agam, Gady

    2013-01-01

    Optical character recognition is widely used to convert document images into digital media. Existing OCR algorithms and tools produce good results from high-resolution, good-quality document images. In this paper, we propose a machine-learning-based super-resolution framework for low-resolution document image OCR. Two main techniques are used in our approach: a document page segmentation algorithm and a modified K-means clustering algorithm. By exploiting coherence in the document, we reconstruct from a low-resolution document image a better-resolution image and improve OCR results. Experimental results show substantial gains on low-resolution documents such as those captured from video.
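    The coherence argument can be sketched with plain K-means over flattened glyph patches: repeated characters in a document fall into the same cluster, and replacing each patch by its cluster centroid averages away independent noise before upscaling. The deterministic strided initialisation and all parameters are illustrative; the paper's modified K-means and segmentation stage are not reproduced.

    ```python
    import numpy as np

    def kmeans(patches, k, iters=20):
        """Plain Lloyd-style K-means on flattened patches, with a simple
        deterministic init (every len//k-th patch as a seed centre)."""
        centers = patches[::max(1, len(patches) // k)][:k].astype(float)
        labels = np.zeros(len(patches), int)
        for _ in range(iters):
            # squared distance of every patch to every centre
            d = ((patches[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            labels = d.argmin(1)
            for c in range(k):
                if (labels == c).any():
                    centers[c] = patches[labels == c].mean(0)
        return labels, centers

    def denoise_by_clustering(patches, k):
        """Replace each patch by its cluster centroid: repeated glyphs
        reinforce each other, suppressing independent noise."""
        labels, centers = kmeans(patches, k)
        return centers[labels]
    ```

    The centroid patches, being averages of many observations of the same glyph, give the super-resolution stage a cleaner signal than any single low-resolution patch.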

  2. Informatics in radiology: A prototype Web-based reporting system for onsite-offsite clinician communication.

    PubMed

    Arnold, Corey W; Bui, Alex A T; Morioka, Craig; El-Saden, Suzie; Kangarloo, Hooshang

    2007-01-01

    The communication of imaging findings to a referring physician is an important role of the radiologist. However, communication between onsite and offsite physicians is a time-consuming process that can obstruct work flow and frequently involves no exchange of visual information, which is especially problematic given the importance of radiologic images for diagnosis and treatment. A prototype World Wide Web-based image documentation and reporting system was developed to support a "communication loop" based on the concept of a classic "wet-read" system. The proposed system represents an attempt to address many of the problems seen in current communication work flows by implementing a well-documented and easily accessible communication loop that is adaptable to different types of imaging study evaluation. Images are displayed in the native Digital Imaging and Communications in Medicine (DICOM) format with a Java applet, which allows accurate presentation along with the use of various image manipulation tools. The Web-based infrastructure consists of a server that stores imaging studies and reports, with Web browsers that download and install the necessary client software on demand. Application logic consists of a set of PHP (hypertext preprocessor) modules that are accessible with an application programming interface. The system may be adapted to any clinician-specialist communication loop and, because it integrates radiologic standards with Web-based technologies, can more effectively communicate and document imaging data. RSNA, 2007

  3. Adaptive optics imaging of geographic atrophy.

    PubMed

    Gocho, Kiyoko; Sarda, Valérie; Falah, Sabrina; Sahel, José-Alain; Sennlaub, Florian; Benchaboune, Mustapha; Ullern, Martine; Paques, Michel

    2013-05-01

    To report the findings of en face adaptive optics (AO) near-infrared (NIR) reflectance fundus flood imaging in eyes with geographic atrophy (GA). An observational clinical study of AO NIR fundus imaging was performed in 12 eyes of nine patients with GA, and in seven controls, using a flood illumination camera operating at 840 nm, in addition to routine clinical examination. To document short-term and midterm changes, AO imaging sessions were repeated in four patients (mean interval between sessions 21 days; median follow-up 6 months). Compared with scanning laser ophthalmoscope imaging, AO NIR imaging improved the resolution of the changes affecting the RPE. Multiple hyporeflective clumps were seen within and around GA areas. Time-lapse imaging revealed micrometric-scale details of the emergence and progression of areas of atrophy as well as the complex kinetics of some hyporeflective clumps. Such dynamic changes were observed within as well as outside atrophic areas. In eyes affected by GA, AO NIR imaging allows high-resolution documentation of the extent of RPE damage. It also revealed that a complex, dynamic process of redistribution of hyporeflective clumps throughout the posterior pole precedes and accompanies the emergence and progression of atrophy; these clumps are therefore probably also a biomarker of RPE damage. AO NIR imaging may thus be of interest for detecting the earliest stages, documenting the retinal pathology, and monitoring the progression of GA. (ClinicalTrials.gov number, NCT01546181.)

  4. Documentation of procedures for textural/spatial pattern recognition techniques

    NASA Technical Reports Server (NTRS)

    Haralick, R. M.; Bryant, W. F.

    1976-01-01

    A C-130 aircraft was flown over the Sam Houston National Forest on March 21, 1973 at 10,000 feet altitude to collect multispectral scanner (MSS) data. Existing textural and spatial automatic processing techniques were used to classify the MSS imagery into specified timber categories. Several classification experiments were performed on these data using features selected from the spectral bands and a textural transform band. The results indicate that (1) spatial post-processing of a classified image can cut the classification error to one-half or one-third of its initial value; (2) spatial post-processing of an image classified using combined spectral and textural features produces less error than post-processing of an image classified using only spectral features; and (3) classification without spatial post-processing using the combined spectral-textural features tends to produce about the same error rate as classification without spatial post-processing using only spectral features.
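
    The spatial post-processing described above is, in essence, a local relabelling of the classified image. A minimal sketch (not the report's actual algorithm) is a majority filter that reassigns each pixel to the most frequent class in its neighbourhood; the toy classification map below is invented:

```python
import numpy as np

def majority_filter(labels: np.ndarray, size: int = 3) -> np.ndarray:
    """Relabel each pixel to the most common class in its size x size window."""
    pad = size // 2
    padded = np.pad(labels, pad, mode="edge")
    out = np.empty_like(labels)
    h, w = labels.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + size, j:j + size]
            vals, counts = np.unique(window, return_counts=True)
            out[i, j] = vals[np.argmax(counts)]
    return out

# A classified map with one isolated misclassified pixel ("salt" error).
cls = np.zeros((5, 5), dtype=int)
cls[2, 2] = 1          # lone error inside a homogeneous region
smoothed = majority_filter(cls)
print(smoothed[2, 2])  # → 0: the isolated error is removed
```

Isolated misclassifications vanish because a single deviant label can never win the vote inside a homogeneous region, which is one plausible mechanism behind the error reduction reported above.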

  5. Architectures for single-chip image computing

    NASA Astrophysics Data System (ADS)

    Gove, Robert J.

    1992-04-01

    This paper will focus on the architectures of VLSI programmable processing components for image computing applications. TI, the maker of industry-leading RISC, DSP, and graphics components, has developed an architecture for a new generation of image processors capable of implementing a plurality of image, graphics, video, and audio computing functions. We will show that the use of a single-chip heterogeneous MIMD parallel architecture best suits this class of processors--those which will dominate the desktop multimedia, document imaging, computer graphics, and visualization systems of this decade.

  6. Dynamic "inline" images: context-sensitive retrieval and integration of images into Web documents.

    PubMed

    Kahn, Charles E

    2008-09-01

    Integrating relevant images into web-based information resources adds value for research and education. This work sought to evaluate the feasibility of using "Web 2.0" technologies to dynamically retrieve and integrate pertinent images into a radiology web site. An online radiology reference of 1,178 textual web documents was selected as the set of target documents. The ARRS GoldMiner image search engine, which incorporated 176,386 images from 228 peer-reviewed journals, retrieved images on demand and integrated them into the documents. At least one image was retrieved in real time for display as an "inline" image gallery for 87% of the web documents. Each thumbnail image was linked to the full-size image at its original web site. Review of 20 randomly selected Collaborative Hypertext of Radiology documents found that 69 of 72 displayed images (96%) were relevant to the target document. Users could click the "More" link to search the image collection more comprehensively and, from there, link to the full text of the article. A gallery of relevant radiology images can be inserted easily into web pages on any web server. Indexing by concepts and keywords allows context-aware image retrieval, and searching by document title and subject metadata yields excellent results. These techniques allow web developers to easily incorporate a context-sensitive image gallery into their documents.
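
    Context-aware retrieval of this kind rests on a keyword index over the image collection. A toy sketch with an inverted index, using invented image identifiers and keywords (not the actual GoldMiner index or ranking):

```python
from collections import defaultdict

# Toy image index: id -> keywords. In GoldMiner these would come from
# figure captions and controlled vocabulary; the entries here are invented.
images = {
    "img1": {"pneumothorax", "chest", "radiograph"},
    "img2": {"chest", "ct", "nodule"},
    "img3": {"knee", "mri"},
}

# Inverted index: keyword -> set of image ids containing it.
index = defaultdict(set)
for img_id, keywords in images.items():
    for kw in keywords:
        index[kw].add(img_id)

def retrieve(query_terms):
    """Rank images by how many query terms they match (ties by id)."""
    scores = defaultdict(int)
    for term in query_terms:
        for img_id in index.get(term, set()):
            scores[img_id] += 1
    return sorted(scores, key=lambda i: (-scores[i], i))

print(retrieve({"chest", "nodule"}))  # → ['img2', 'img1']
```

The query terms would be extracted from the target document's title and subject metadata, as the abstract describes, and the ranked thumbnails rendered as the inline gallery.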

  7. Range and Panoramic Image Fusion Into a Textured Range Image for Culture Heritage Documentation

    NASA Astrophysics Data System (ADS)

    Bila, Z.; Reznicek, J.; Pavelka, K.

    2013-07-01

    This paper deals with the fusion of range and panoramic images, where the range image is acquired by a 3D laser scanner and the panoramic image is acquired with a digital still camera mounted on a panoramic head and tripod. The fused dataset, called a "textured range image", provides conservators and historians with more reliable information about the investigated object than either dataset used separately. A simple example of fusing a range and a panoramic image, both obtained in St. Francis Xavier Church in the town of Opařany, is given here. We first describe the process of data acquisition, then the processing of both datasets into a format suitable for fusion, and finally the fusion itself. The fusion process can be divided into two main parts: transformation and remapping. In the transformation part, the two images are related by matching similar features detected in both images with a suitable detector, yielding a transformation matrix that maps the range image onto the panoramic image. The range data are then remapped from the range image space into the panoramic image space and stored as an additional "range" channel. The image fusion process is validated by comparing similar features extracted from both datasets.
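
    The transformation step can be illustrated with a least-squares fit that maps matched feature points from range-image coordinates to panoramic-image coordinates. The point pairs below are invented, and an affine model is assumed for simplicity; the paper's actual transformation model may differ:

```python
import numpy as np

# Matched feature points (hypothetical): the same features detected in
# the range image and in the panoramic image.
range_pts = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], dtype=float)
pano_pts = range_pts * 2.0 + np.array([5.0, 3.0])  # ground-truth mapping

# Least-squares affine fit:  pano ≈ [x, y, 1] @ A
X = np.hstack([range_pts, np.ones((4, 1))])
A, *_ = np.linalg.lstsq(X, pano_pts, rcond=None)

def to_pano(p):
    """Map a range-image pixel into panoramic-image coordinates."""
    return np.array([p[0], p[1], 1.0]) @ A

print(np.round(to_pano([10, 10])))  # → [25. 23.]
```

With the transformation in hand, the remapping part walks the range image, maps each pixel through `to_pano`, and writes its range value into the extra "range" channel of the panorama.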

  8. A Complete OCR System for Tamil Magazine Documents

    NASA Astrophysics Data System (ADS)

    Kokku, Aparna; Chakravarthy, Srinivasa

    We present a complete optical character recognition (OCR) system for Tamil magazines/documents. All the standard elements of the OCR process, such as de-skewing, preprocessing, segmentation, character recognition, and reconstruction, are implemented. Experience with OCR problems teaches that for most subtasks of OCR there is no single technique that gives perfect results for every type of document image. We exploit the ability of neural networks to learn from experience in solving the problems of segmentation and character recognition. Text segmentation of Tamil newsprint poses a new challenge owing to its italic-like font type; problems that arise in recognizing touching and closely spaced characters are discussed. Character recognition efficiency varied from 94 to 97% for this type of font. The grouping of blocks into logical units and the determination of reading order within each logical unit allowed us to reconstruct the document image automatically in an editable format.
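
    One standard segmentation step mentioned above, text-line extraction, is commonly done with horizontal projection profiles. A toy sketch on an invented binary page (this is the generic technique, not the paper's neural-network segmenter):

```python
import numpy as np

# Toy binary page image, 1 = ink. Two "text lines" are pasted in.
page = np.zeros((9, 12), dtype=int)
page[1:3, 2:10] = 1    # first text line
page[5:8, 1:11] = 1    # second text line

profile = page.sum(axis=1)   # ink count per row
in_line = profile > 0        # rows that contain any ink

# A line starts where ink begins after a gap, and ends before the next gap.
starts = [i for i in range(len(in_line))
          if in_line[i] and (i == 0 or not in_line[i - 1])]
ends = [i for i in range(len(in_line))
        if in_line[i] and (i == len(in_line) - 1 or not in_line[i + 1])]
print(list(zip(starts, ends)))  # → [(1, 2), (5, 7)]
```

Projection profiles assume roughly horizontal, well-separated lines, which is exactly why touching characters and italic-like fonts, as the abstract notes, push real systems toward learned segmentation.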

  9. 3D Documentation and BIM Modeling of Cultural Heritage Structures Using UAVs: The Case of the Foinikaria Church

    NASA Astrophysics Data System (ADS)

    Themistocleous, K.; Agapiou, A.; Hadjimitsis, D.

    2016-10-01

    The documentation of architectural cultural heritage sites has traditionally been expensive and labor-intensive. New innovative technologies, such as Unmanned Aerial Vehicles (UAVs), provide an affordable, reliable and straightforward method of capturing cultural heritage sites, thereby providing a more efficient and sustainable approach to the documentation of cultural heritage structures. In this study, hundreds of images of the Panagia Chryseleousa church in Foinikaria, Cyprus were taken using a UAV with an attached high resolution camera. The images were processed to generate an accurate digital 3D model using Structure from Motion (SfM) techniques. A Building Information Model (BIM) was then used to generate drawings of the church. The methodology described in the paper provides an accurate, simple and cost-effective method of documenting cultural heritage sites and generating digital 3D models using novel techniques and innovative methods.

  10. 36 CFR 1238.14 - What are the microfilming requirements for permanent and unscheduled records?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... accordance with ISO 18901 (incorporated by reference, see § 1238.5) and use the processing procedures in ANSI... § 1238.5). (2) Background density of images. Agencies must use the background ISO standard visual diffuse... transmission density. (i) Recommended visual diffuse transmission background densities for images of documents...

  11. 36 CFR 1238.14 - What are the microfilming requirements for permanent and unscheduled records?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... accordance with ISO 18901 (incorporated by reference, see § 1238.5) and use the processing procedures in ANSI... § 1238.5). (2) Background density of images. Agencies must use the background ISO standard visual diffuse... transmission density. (i) Recommended visual diffuse transmission background densities for images of documents...

  12. 36 CFR 1238.14 - What are the microfilming requirements for permanent and unscheduled records?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... accordance with ISO 18901 (incorporated by reference, see § 1238.5) and use the processing procedures in ANSI... § 1238.5). (2) Background density of images. Agencies must use the background ISO standard visual diffuse... transmission density. (i) Recommended visual diffuse transmission background densities for images of documents...

  13. UAV Photogrammetric Workflows: A Best Practice Guideline

    NASA Astrophysics Data System (ADS)

    Federman, A.; Santana Quintero, M.; Kretz, S.; Gregg, J.; Lengies, M.; Ouimet, C.; Laliberte, J.

    2017-08-01

    The increasing commercialization of unmanned aerial vehicles (UAVs) has opened the possibility of performing low-cost aerial image acquisition for the documentation of cultural heritage sites through UAV photogrammetry. The flying of UAVs in Canada is regulated by Transport Canada and requires a Special Flight Operations Certificate (SFOC). Various image acquisition techniques are explored in this review, as well as the software used to register the data. A general workflow procedure has been formulated based on the literature reviewed. A case study of using UAV photogrammetry at the Prince of Wales Fort is discussed, specifically in relation to data acquisition and processing. Some gaps in the literature reviewed highlight the need for streamlining the SFOC application process and for incorporating UAVs into cultural heritage documentation courses.

  14. Possibilities of Processing Archival Photogrammetric Images Captured by Rollei 6006 Metric Camera Using Current Method

    NASA Astrophysics Data System (ADS)

    Dlesk, A.; Raeva, P.; Vach, K.

    2018-05-01

    Processing analog photogrammetric negatives using current methods brings new challenges and possibilities, for example the creation of a 3D model from archival images, which enables comparison of the historical and current states of cultural heritage objects. The main purpose of this paper is to present the possibilities of processing archival analog images captured by the photogrammetric camera Rollei 6006 Metric. In 1994, the Czech company EuroGV s.r.o. carried out photogrammetric measurements of the Great America, a former limestone quarry located in the Central Bohemian Region of the Czech Republic. All the negatives of the photogrammetric images, complete documentation, coordinates of geodetically measured ground control points, calibration reports and exterior orientations of the images calculated in the Combined Adjustment Program are preserved and were available for the current processing. The negatives were scanned and processed using the structure from motion (SfM) method. The result of the research is a statement of the accuracy that can be expected when processing Rollei metric images, originally obtained for terrestrial intersection photogrammetry, with the proposed methodology.

  15. Optical/digital identification/verification system based on digital watermarking technology

    NASA Astrophysics Data System (ADS)

    Herrigel, Alexander; Voloshynovskiy, Sviatoslav V.; Hrytskiv, Zenon D.

    2000-06-01

    This paper presents a new approach for the secure integrity verification of driver licenses, passports and other analogue identification documents. The system embeds (detects) the reference number of the identification document with DCT watermark technology in (from) the photo of the identification document holder. During verification the reference number is extracted and compared with the reference number printed in the identification document. The approach combines optical and digital image processing techniques. The detection system must be able to scan an analogue driver license or passport, convert the image of this document into a digital representation and then apply the watermark verification algorithm to check the payload of the embedded watermark. If the payload of the watermark is identical to the printed visual reference number of the issuer, verification is successful and the passport or driver license has not been modified. This approach constitutes a new class of application for watermark technology, which was originally targeted at the copyright protection of digital multimedia data. The presented approach substantially increases the security of the analogue identification documents used in many European countries.
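
    The embed/extract cycle can be sketched with quantization-based embedding in mid-frequency DCT coefficients of the photo. The coefficient slots, quantization step and bit pattern below are illustrative assumptions, not the system's actual parameters:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.cos(np.pi * (2 * m + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C

N, Q = 8, 16.0                        # block size and quantization step (assumed)
C = dct_matrix(N)
slots = [(2, 3), (3, 2), (1, 4), (4, 1)]  # mid-frequency coefficients (assumed)

def embed(block, bits):
    """Force the parity of selected quantized DCT coefficients to the bits."""
    coef = C @ block @ C.T
    for (u, v), b in zip(slots, bits):
        q = np.round(coef[u, v] / Q)
        if int(q) % 2 != b:
            q += 1                    # flip parity to encode the bit
        coef[u, v] = q * Q
    return C.T @ coef @ C             # inverse DCT back to the image domain

def extract(block):
    """Read the bits back from the coefficient parities."""
    coef = C @ block @ C.T
    return [int(np.round(coef[u, v] / Q)) % 2 for u, v in slots]

rng = np.random.default_rng(0)
photo = rng.uniform(0, 255, (N, N))   # stand-in for one block of the owner photo
bits = [1, 0, 1, 1]                   # e.g. part of the reference number
marked = embed(photo, bits)
print(extract(marked))                # → [1, 0, 1, 1]
```

In the real system the payload (the full reference number) would be spread over many blocks and the extraction compared against the printed number, as the abstract describes.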

  16. Unsupervised Word Spotting in Historical Handwritten Document Images using Document-oriented Local Features.

    PubMed

    Zagoris, Konstantinos; Pratikakis, Ioannis; Gatos, Basilis

    2017-05-03

    Word spotting strategies employed in historical handwritten documents face many challenges due to variation in writing style and intense degradation. In this paper, a new method that permits effective word spotting in handwritten documents is presented. It relies upon document-oriented local features that take into account information around representative keypoints, as well as a matching process that incorporates spatial context in a local proximity search, without using any training data. Experimental results on four historical handwritten datasets for two different scenarios (segmentation-based and segmentation-free) using standard evaluation measures show the improved performance achieved by the proposed methodology.
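
    The combination of descriptor matching with a spatial-consistency check can be sketched as follows; the keypoints and descriptors are invented, and this is a crude stand-in for the paper's local proximity search, not its actual features:

```python
import numpy as np

# Hypothetical keypoints: (position, descriptor) pairs for a query word
# and a document region.
query = [((0, 0), np.array([1.0, 0.0])),
         ((5, 0), np.array([0.0, 1.0]))]
doc = [((40, 10), np.array([0.9, 0.1])),    # matches query keypoint 0
       ((45, 10), np.array([0.1, 0.9])),    # matches query keypoint 1
       ((80, 50), np.array([0.5, 0.5]))]    # unrelated symbol far away

def spot(query, doc, tol=2.0):
    """Match descriptors, then keep matches whose relative positions agree."""
    matches = []
    for qp, qd in query:
        dists = [np.linalg.norm(qd - dd) for _, dd in doc]
        best = int(np.argmin(dists))
        matches.append((qp, doc[best][0]))
    # Spatial consistency: every match should imply the same displacement
    # as the first one (within a tolerance).
    (qx0, qy0), (dx0, dy0) = matches[0]
    consistent = [m for m in matches
                  if abs((m[1][0] - m[0][0]) - (dx0 - qx0)) <= tol
                  and abs((m[1][1] - m[0][1]) - (dy0 - qy0)) <= tol]
    return len(consistent)

print(spot(query, doc))  # → 2: both keypoints agree on the same shift
```

The count of spatially consistent matches can then serve as the word-spotting score for that document region, with no training data involved.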

  17. Advanced Imaging Optics Utilizing Wavefront Coding.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scrymgeour, David; Boye, Robert; Adelsberger, Kathleen

    2015-06-01

    Image processing offers a potential to simplify an optical system by shifting some of the imaging burden from lenses to the more cost effective electronics. Wavefront coding using a cubic phase plate combined with image processing can extend the system's depth of focus, reducing many of the focus-related aberrations as well as material related chromatic aberrations. However, the optimal design process and physical limitations of wavefront coding systems with respect to first-order optical parameters and noise are not well documented. We examined image quality of simulated and experimental wavefront coded images before and after reconstruction in the presence of noise. Challenges in the implementation of cubic phase in an optical system are discussed. In particular, we found that limitations must be placed on system noise, aperture, field of view and bandwidth to develop a robust wavefront coded system.

  18. Manual on characteristics of Landsat computer-compatible tapes produced by the EROS Data Center digital image processing system

    USGS Publications Warehouse

    Holkenbrink, Patrick F.

    1978-01-01

    Landsat data are received by National Aeronautics and Space Administration (NASA) tracking stations and converted into digital form on high-density tapes (HDTs) by the Image Processing Facility (IPF) at the Goddard Space Flight Center (GSFC), Greenbelt, Maryland. The HDTs are shipped to the EROS Data Center (EDC) where they are converted into customer products by the EROS Data Center digital image processing system (EDIPS). This document describes in detail one of these products: the computer-compatible tape (CCT) produced from Landsat-1, -2, and -3 multispectral scanner (MSS) data and Landsat-3 only return-beam vidicon (RBV) data. Landsat-1 and -2 RBV data will not be processed by IPF/EDIPS to CCT format.

  19. 3D Imaging for Museum Artefacts: a Portable Test Object for Heritage and Museum Documentation of Small Objects

    NASA Astrophysics Data System (ADS)

    Hess, M.; Robson, S.

    2012-07-01

    3D colour image data generated for the recording of small museum objects and archaeological finds are highly variable in quality and fitness for purpose. Whilst current technology is capable of extremely high quality outputs, there are currently no common standards or applicable guidelines in either the museum or engineering domain suited to scientific evaluation, understanding and tendering for 3D colour digital data. This paper firstly explains the rationale for, and requirements of, 3D digital documentation in museums. Secondly it describes the design process, development and use of a new portable test object suited to sensor evaluation and the provision of user acceptance metrics. The test object is specifically designed for museums and heritage institutions and includes known surface and geometric properties which support quantitative and comparative imaging on different systems. The development of a supporting protocol will allow object reference data to be included in the data processing workflow with specific reference to conservation and curation.

  20. Digital processing of radiographic images

    NASA Technical Reports Server (NTRS)

    Bond, A. D.; Ramapriyan, H. K.

    1973-01-01

    Techniques and software documentation are presented for the digital enhancement of radiographs. Both image handling and image processing operations are considered. The image handling operations dealt with are: (1) conversion of data format from packed to unpacked and vice versa; (2) automatic extraction of image data arrays; (3) transposition and 90 deg rotations of large data arrays; (4) translation of data arrays for registration; and (5) reduction of the dimensions of data arrays by integral factors. Both the frequency and the spatial domain approaches are presented for the design and implementation of the image processing operations. It is shown that spatial domain recursive implementation of filters is much faster than nonrecursive implementations using fast Fourier transforms (FFTs) for the cases of interest in this work. The recursive implementation of a class of matched filters for enhancing image signal-to-noise ratio is described. Test patterns are used to illustrate the filtering operations. The application of the techniques to radiographic images of metallic structures is demonstrated through several examples.
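
    The speed advantage of a recursive implementation comes from its cost being independent of the effective kernel width: each output sample reuses the previous output. A one-dimensional first-order example (a generic IIR smoother, not the matched filters used in the report):

```python
import numpy as np

def recursive_smooth_1d(x, alpha=0.5):
    """First-order recursive (IIR) low-pass filter.

    Costs O(n) regardless of the effective kernel width, unlike
    nonrecursive convolution (direct or FFT-based)."""
    y = np.empty_like(x, dtype=float)
    acc = x[0]
    for i, v in enumerate(x):
        acc = alpha * v + (1 - alpha) * acc   # exponential moving average
        y[i] = acc
    return y

signal = np.array([0.0, 0.0, 10.0, 0.0, 0.0])  # impulse-like noise spike
out = recursive_smooth_1d(signal)
print(out.tolist())  # → [0.0, 0.0, 5.0, 2.5, 1.25]
```

The impulse response decays geometrically, so a single pass realizes what a nonrecursive filter would need a long kernel (or an FFT) to approximate; separable row/column passes extend the idea to 2-D images.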

  1. 77 FR 72788 - Copyright Office Fees

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-06

    ... Adobe Portable Document File (PDF) format that contains searchable, accessible text (not an image... processing such statements and associated royalty payments was funded solely by the royalty fees collected... Title 17 that permits the Office to apportion up to 50 percent of the cost of processing the SOAs and...

  2. Automated search and retrieval of information from imaged documents using optical correlation techniques

    NASA Astrophysics Data System (ADS)

    Stalcup, Bruce W.; Dennis, Phillip W.; Dydyk, Robert B.

    1999-10-01

    Litton PRC and Litton Data Systems Division are developing a system, the Imaged Document Optical Correlation and Conversion System (IDOCCS), to provide a total solution to the problem of managing and retrieving textual and graphic information from imaged document archives. At the heart of IDOCCS, optical correlation technology provides the search and retrieval of information from imaged documents. IDOCCS can be used to rapidly search for key words or phrases within the imaged document archives. In addition, IDOCCS can automatically compare an input document with the archived database to determine if it is a duplicate, thereby reducing the overall resources required to maintain and access the document database. Embedded graphics on imaged pages can also be exploited; e.g., imaged documents containing an agency's seal or logo can be singled out. In this paper, we present a description of IDOCCS as well as preliminary performance results and theoretical projections.
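
    The correlation search at the heart of IDOCCS can be illustrated in one dimension: slide a keyword "signature" over the page signal and take the correlation peak (IDOCCS performs the two-dimensional analogue optically; the signals below are invented):

```python
import numpy as np

# A keyword "signature" with a sharp autocorrelation, and a page signal
# containing one occurrence of it at position 40 (both invented).
keyword = np.array([1.0, -1.0, 1.0, 1.0, -1.0])
page = np.zeros(100)
page[40:45] = keyword

# Sliding correlation: the peak marks where the keyword occurs.
corr = np.correlate(page, keyword, mode="valid")
print(int(np.argmax(corr)))  # → 40
```

In the optical correlator the same inner product is computed for every offset in parallel by the optics, which is what makes searching an entire imaged page for a key word or phrase so fast.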

  3. Document image database indexing with pictorial dictionary

    NASA Astrophysics Data System (ADS)

    Akbari, Mohammad; Azimi, Reza

    2010-02-01

    In this paper we introduce a new approach for information retrieval from a Persian document image database without using Optical Character Recognition (OCR). First, an attribute called the subword upper contour label is defined; then a pictorial dictionary is constructed for the subwords based on this attribute. With this approach we address two issues in document image retrieval: keyword spotting and retrieval according to document similarities. The proposed methods have been evaluated on a Persian document image database. The results have proved the ability of this approach in document image information retrieval.
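
    An upper-contour attribute of this kind can be approximated by recording, per column, the quantized row of the topmost ink pixel. The glyph and the quantization into three levels below are invented for illustration and are not the paper's exact definition:

```python
import numpy as np

def upper_contour_label(glyph: np.ndarray, levels: int = 3) -> str:
    """Quantize the topmost-ink row of each column into a short label
    (a toy version of a 'subword upper contour' attribute)."""
    h, _ = glyph.shape
    label = []
    for col in glyph.T:
        rows = np.flatnonzero(col)
        if rows.size == 0:
            label.append("-")                    # empty column
        else:
            label.append(str(rows[0] * levels // h))
    return "".join(label)

# 4x5 binary subword image (1 = ink), invented for illustration.
glyph = np.array([
    [0, 0, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 1],
], dtype=int)
print(upper_contour_label(glyph))  # → "100-2"
```

Labels like this can key a pictorial dictionary: subwords with the same label string are grouped, and a query region is matched by computing its label and looking it up, with no OCR involved.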

  4. "I Will Write to You with My Eyes": Reflective Text and Image Journals in the Undergraduate Classroom

    ERIC Educational Resources Information Center

    Hyland-Russell, Tara

    2014-01-01

    This article reports on a case study into students' perspectives on the use of "cahiers", reflective text and image journals. Narrative interviews and document analysis reveal that "cahiers" can be used effectively to engage students in course content and learning processes. Recent work in transformative learning…

  5. Medical devices; radiology devices; reclassification of full-field digital mammography system. Final rule.

    PubMed

    2010-11-05

    The Food and Drug Administration (FDA) is announcing the reclassification of the full-field digital mammography (FFDM) system from class III (premarket approval) to class II (special controls). The device type is intended to produce planar digital x-ray images of the entire breast; this generic type of device may include digital mammography acquisition software, full-field digital image receptor, acquisition workstation, automatic exposure control, image processing and reconstruction programs, patient and equipment supports, component parts, and accessories. The special control that will apply to the device is the guidance document entitled "Class II Special Controls Guidance Document: Full-Field Digital Mammography System." FDA is reclassifying the device into class II (special controls) because general controls along with special controls will provide a reasonable assurance of safety and effectiveness of the device. Elsewhere in this issue of the Federal Register, FDA is announcing the availability of the guidance document that will serve as the special control for this device.

  6. [Dry view laser imager--a new economical photothermal imaging method].

    PubMed

    Weberling, R

    1996-11-01

    The production of hard copies is currently achieved by means of laser imagers and wet film processing in systems attached either directly in or to the laser imager or in a darkroom. Variations in image quality resulting from wet film development that is not always optimal are frequent. A newly developed thermographic film developer for laser films, which works without liquid or powdered chemicals, is on the other hand environmentally preferable and reduces operating costs. The completely dry developing process provides permanent image documentation meeting the quality and safety requirements of RöV and BAK. One of the currently available systems of this type, the DryView Laser Imager, is inexpensive and easy to install. The selective connection principle of the DryView Laser Imager can be expanded as required and accepts digital and/or analog interfaces with all imaging systems (CT, MR, DR, US, NM) from the various manufacturers.

  7. [Evaluating the maturity of IT-supported clinical imaging and diagnosis using the Digital Imaging Adoption Model: Are your clinical imaging processes ready for the digital era?]

    PubMed

    Studzinski, J

    2017-06-01

    The Digital Imaging Adoption Model (DIAM) has been jointly developed by HIMSS Analytics and the European Society of Radiology (ESR). It helps evaluate the maturity of IT-supported processes in medical imaging, particularly in radiology. This eight-stage maturity model drives your organisational, strategic and tactical alignment towards imaging-IT planning. The key audience for the model comprises hospitals with imaging centers, as well as external imaging centers that collaborate with hospitals. The assessment focuses on different dimensions relevant to digital imaging, such as software infrastructure and usage, workflow security, clinical documentation and decision support, data exchange and analytical capabilities. With its standardised approach, it enables regional, national and international benchmarking. All DIAM participants receive a structured report that can be used as a basis for presenting, e.g. budget planning and investment decisions at management level.

  8. Computer imaging and workflow systems in the business office.

    PubMed

    Adams, W T; Veale, F H; Helmick, P M

    1999-05-01

    Computer imaging and workflow technology automates many business processes that currently are performed using paper processes. Documents are scanned into the imaging system and placed in electronic patient account folders. Authorized users throughout the organization, including preadmission, verification, admission, billing, cash posting, customer service, and financial counseling staff, have online access to the information they need when they need it. Such streamlining of business functions can increase collections and customer satisfaction while reducing labor, supply, and storage costs. Because the costs of a comprehensive computer imaging and workflow system can be considerable, healthcare organizations should consider implementing parts of such systems that can be cost-justified or include implementation as part of a larger strategic technology initiative.

  9. 75 FR 67700 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-03

    ...) images/ templates for identification, and relevant documentation concerning individual's acceptance... Entrance Processing Command, FOIA/PA Officer (J-1/MHR-MS-SS), 2834 Green Bay Road, North Chicago, IL 60064... inquiries to the Commander, U.S. Military Entrance Processing Command, FOIA/PA Officer (J-1/MHR-MS-SS), 2834...

  10. Acousto-Optic Processing of 2-D Signals Using Temporal and Spatial Integration.

    DTIC Science & Technology

    1983-05-31

    Document includes data on: Architectures; Coherence Properties of Pulsed Laser Diodes; Acousto-optic device data; Dynamic Range Issues; Image correlation; Synthetic aperture radar; 2-D Fourier transform; and Moments.

  11. Anima: Modular Workflow System for Comprehensive Image Data Analysis

    PubMed Central

    Rantanen, Ville; Valori, Miko; Hautaniemi, Sampsa

    2014-01-01

    Modern microscopes produce vast amounts of image data, and computational methods are needed to analyze and interpret these data. Furthermore, a single image analysis project may require tens or hundreds of analysis steps, starting from data import and pre-processing to segmentation and statistical analysis, and ending with visualization and reporting. To manage such large-scale image data analysis projects, we present here a modular workflow system called Anima. Anima is designed for comprehensive and efficient image data analysis development, and it contains several features that are crucial in high-throughput image data analysis: programming language independence, batch processing, easily customized data processing, interoperability with other software via application programming interfaces, and advanced multivariate statistical analysis. The utility of Anima is shown with two case studies focusing on testing different algorithms developed in different imaging platforms and on automated prediction of alive/dead C. elegans worms by integrating several analysis environments. Anima is fully open source and available with documentation at www.anduril.org/anima. PMID:25126541
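
    The modular-workflow idea (independent steps chained by a runner that records provenance) can be sketched as follows; the step names are invented and this is not Anima's actual API:

```python
# A minimal modular workflow: each step is a callable, and the runner
# chains them while logging which modules ran, in order (provenance).
def normalize(img):
    """Rescale intensities to [0, 1]."""
    lo, hi = min(img), max(img)
    return [(v - lo) / (hi - lo) for v in img]

def threshold(img, t=0.5):
    """Binarize at a fixed threshold."""
    return [1 if v >= t else 0 for v in img]

def run_workflow(data, steps):
    log = []
    for name, fn in steps:
        data = fn(data)
        log.append(name)
    return data, log

result, log = run_workflow([2, 4, 8, 6], [("normalize", normalize),
                                          ("threshold", threshold)])
print(result, log)  # → [0, 0, 1, 1] ['normalize', 'threshold']
```

Because steps only exchange data through the runner, each one could just as well shell out to a tool in another language, which is the language-independence and batch-processing property the abstract highlights.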

  12. Imaged Document Optical Correlation and Conversion System (IDOCCS)

    NASA Astrophysics Data System (ADS)

    Stalcup, Bruce W.; Dennis, Phillip W.; Dydyk, Robert B.

    1999-03-01

    Today, the paper document is fast becoming a thing of the past. With the rapid development of fast, inexpensive computing and storage devices, many government and private organizations are archiving their documents in electronic form (e.g., personnel records, medical records, patents, etc.). In addition, many organizations are converting their paper archives to electronic images, which are stored in a computer database. Because of this, there is a need to efficiently organize this data into comprehensive and accessible information resources. The Imaged Document Optical Correlation and Conversion System (IDOCCS) provides a total solution to the problem of managing and retrieving textual and graphic information from imaged document archives. At the heart of IDOCCS, optical correlation technology provides the search and retrieval capability of document images. The IDOCCS can be used to rapidly search for key words or phrases within the imaged document archives and can even determine the types of languages contained within a document. In addition, IDOCCS can automatically compare an input document with the archived database to determine if it is a duplicate, thereby reducing the overall resources required to maintain and access the document database. Embedded graphics on imaged pages can also be exploited, e.g., imaged documents containing an agency's seal or logo, or documents with a particular individual's signature block, can be singled out. With this dual capability, IDOCCS outperforms systems that rely on optical character recognition as a basis for indexing and storing only the textual content of documents for later retrieval.

  13. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  14. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  15. 32 CFR 813.5 - Shipping or transmitting visual information documentation images.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... documentation images. 813.5 Section 813.5 National Defense Department of Defense (Continued) DEPARTMENT OF THE... visual information documentation images. (a) COMCAM images. Send COMCAM images to the DoD Joint Combat... the approval procedures that on-scene and theater commanders set. (b) Other non-COMCAM images. After...

  16. 32 CFR 813.5 - Shipping or transmitting visual information documentation images.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... documentation images. 813.5 Section 813.5 National Defense Department of Defense (Continued) DEPARTMENT OF THE... visual information documentation images. (a) COMCAM images. Send COMCAM images to the DoD Joint Combat... the approval procedures that on-scene and theater commanders set. (b) Other non-COMCAM images. After...

  17. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  18. 32 CFR 813.5 - Shipping or transmitting visual information documentation images.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... documentation images. 813.5 Section 813.5 National Defense Department of Defense (Continued) DEPARTMENT OF THE... visual information documentation images. (a) COMCAM images. Send COMCAM images to the DoD Joint Combat... the approval procedures that on-scene and theater commanders set. (b) Other non-COMCAM images. After...

  19. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  20. 32 CFR 813.5 - Shipping or transmitting visual information documentation images.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... documentation images. 813.5 Section 813.5 National Defense Department of Defense (Continued) DEPARTMENT OF THE... visual information documentation images. (a) COMCAM images. Send COMCAM images to the DoD Joint Combat... the approval procedures that on-scene and theater commanders set. (b) Other non-COMCAM images. After...

  1. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  2. 32 CFR 813.5 - Shipping or transmitting visual information documentation images.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... documentation images. 813.5 Section 813.5 National Defense Department of Defense (Continued) DEPARTMENT OF THE... visual information documentation images. (a) COMCAM images. Send COMCAM images to the DoD Joint Combat... the approval procedures that on-scene and theater commanders set. (b) Other non-COMCAM images. After...

  3. XDS-I Gateway Development for HIE Connectivity with Legacy PACS at Gil Hospital.

    PubMed

    Simalango, Mikael Fernandus; Kim, Youngchul; Seo, Young Tae; Choi, Young Hwan; Cho, Yong Kyun

    2013-12-01

    The ability to support healthcare document sharing is imperative in a health information exchange (HIE). Sharing imaging documents or images, however, can be challenging, especially when they are stored in a picture archiving and communication system (PACS) archive that does not support document sharing via standard HIE protocols. This research proposes a standard-compliant imaging gateway that enables connectivity between a legacy PACS and the entire HIE. Investigation of the PACS solutions used at Gil Hospital was conducted. An imaging gateway application was then developed using a Java technology stack. Imaging document sharing capability enabled by the gateway was tested by integrating it into Gil Hospital's order communication system and its HIE infrastructure. The gateway can acquire radiology images from a PACS storage system, provide and register the images to Gil Hospital's HIE for document sharing purposes, and make the images retrievable by a cross-enterprise document sharing document viewer. Development of an imaging gateway that mediates communication between a PACS and an HIE can be considered a viable option when the PACS does not support the standard protocol for cross-enterprise document sharing for imaging. Furthermore, the availability of common HIE standards expedites the development and integration of the imaging gateway with an HIE.

  4. XDS-I Gateway Development for HIE Connectivity with Legacy PACS at Gil Hospital

    PubMed Central

    Simalango, Mikael Fernandus; Kim, Youngchul; Seo, Young Tae; Cho, Yong Kyun

    2013-01-01

    Objectives The ability to support healthcare document sharing is imperative in a health information exchange (HIE). Sharing imaging documents or images, however, can be challenging, especially when they are stored in a picture archiving and communication system (PACS) archive that does not support document sharing via standard HIE protocols. This research proposes a standard-compliant imaging gateway that enables connectivity between a legacy PACS and the entire HIE. Methods Investigation of the PACS solutions used at Gil Hospital was conducted. An imaging gateway application was then developed using a Java technology stack. Imaging document sharing capability enabled by the gateway was tested by integrating it into Gil Hospital's order communication system and its HIE infrastructure. Results The gateway can acquire radiology images from a PACS storage system, provide and register the images to Gil Hospital's HIE for document sharing purposes, and make the images retrievable by a cross-enterprise document sharing document viewer. Conclusions Development of an imaging gateway that mediates communication between a PACS and an HIE can be considered a viable option when the PACS does not support the standard protocol for cross-enterprise document sharing for imaging. Furthermore, the availability of common HIE standards expedites the development and integration of the imaging gateway with an HIE. PMID:24523994

  5. Data path design and image quality aspects of the next generation multifunctional printer

    NASA Astrophysics Data System (ADS)

    Brassé, M. H. H.; de Smet, S. P. R. C.

    2008-01-01

    Multifunctional devices (MFDs) are increasingly used as a document hub. The MFD is used as a copier, scanner, and printer, and it facilitates digital document distribution and sharing. This imposes new requirements on the design of the data path and its image processing. Various design aspects need to be taken into account, including system performance, features, image quality, and cost price. A good balance is required in order to develop a competitive MFD. A modular data path architecture is presented that supports all the envisaged use cases. Besides copying, colour scanning is becoming an important use case of a modern MFD. The copy-path use case is described, and it is shown how colour scanning can also be supported with a minimal adaptation to the architecture. The key idea is to convert the scanner data to an opponent colour space representation at the beginning of the image processing pipeline. The sub-sampling of chromatic information allows for the saving of scarce hardware resources without significant perceptual loss of quality. In particular, we have shown that functional FPGA modules from the copy application can also be used for the scan-to-file application. This makes the presented approach very cost-effective while complying with market-conform image quality standards.
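
    The chroma-subsampling idea behind this data path can be sketched as follows. The abstract does not give the exact opponent-colour transform, so the coefficients in `to_opponent` below are illustrative assumptions, not the paper's values:

    ```python
    # Hypothetical opponent-colour transform: one achromatic channel kept at
    # full resolution, two chromatic channels that can be subsampled.
    def to_opponent(r, g, b):
        lum = (r + g + b) / 3.0      # achromatic (luminance) channel
        rg = r - g                   # red-green opponent channel
        yb = (r + g) / 2.0 - b       # yellow-blue opponent channel
        return lum, rg, yb

    def subsample_chroma(channel, factor=2):
        """Keep every `factor`-th sample of a chromatic channel (2-D list),
        halving resolution in both directions for factor=2."""
        return [row[::factor] for row in channel[::factor]]
    ```

    Dropping three quarters of the samples in each chromatic channel (for factor 2) is what saves the scarce hardware resources the abstract mentions, with little perceptual loss because the eye is less sensitive to chromatic detail.
    
    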

  6. Adaptive removal of background and white space from document images using seam categorization

    NASA Astrophysics Data System (ADS)

    Fillion, Claude; Fan, Zhigang; Monga, Vishal

    2011-03-01

    Document images are obtained regularly by rasterization of document content and as scans of printed documents. Resizing via background and white space removal is often desired for better consumption of these images, whether on displays or in print. While white space and background are easy to identify in images, existing methods such as naïve removal and content aware resizing (seam carving) each have limitations that can lead to undesirable artifacts, such as uneven spacing between lines of text or poor arrangement of content. An adaptive method based on image content is hence needed. In this paper we propose an adaptive method to intelligently remove white space and background content from document images. Document images are different from pictorial images in structure. They typically contain objects (text letters, pictures and graphics) separated by uniform background, which include both white paper space and other uniform color background. Pixels in uniform background regions are excellent candidates for deletion if resizing is required, as they introduce less change in document content and style, compared with deletion of object pixels. We propose a background deletion method that exploits both local and global context. The method aims to retain the document structural information and image quality.
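
    A minimal version of the background-deletion idea is to drop rows whose pixels are (nearly) uniform, since such rows carry no object content. This toy `remove_uniform_rows` is an assumption for illustration; the paper's actual method additionally uses local and global context and seam categorization to avoid uneven spacing:

    ```python
    def remove_uniform_rows(img, tol=5):
        """Delete rows of a 2-D greyscale image (list of lists) whose pixel
        values vary by less than `tol`, i.e. uniform background or white space."""
        return [row for row in img if max(row) - min(row) >= tol]
    ```

    Naive removal like this is exactly what can produce the artifacts the paper warns about (e.g. collapsing all inter-line spacing), which motivates the adaptive, context-aware approach.
    
    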

  7. Imaged document information location and extraction using an optical correlator

    NASA Astrophysics Data System (ADS)

    Stalcup, Bruce W.; Dennis, Phillip W.; Dydyk, Robert B.

    1999-12-01

    Today, the paper document is fast becoming a thing of the past. With the rapid development of fast, inexpensive computing and storage devices, many government and private organizations are archiving their documents in electronic form (e.g., personnel records, medical records, patents, etc.). Many of these organizations are converting their paper archives to electronic images, which are then stored in a computer database. Because of this, there is a need to efficiently organize this data into comprehensive and accessible information resources and provide for rapid access to the information contained within these imaged documents. To meet this need, Litton PRC and Litton Data Systems Division are developing a system, the Imaged Document Optical Correlation and Conversion System (IDOCCS), to provide a total solution to the problem of managing and retrieving textual and graphic information from imaged document archives. At the heart of IDOCCS, optical correlation technology provides a means for the search and retrieval of information from imaged documents. IDOCCS can be used to rapidly search for key words or phrases within the imaged document archives and has the potential to determine the types of languages contained within a document. In addition, IDOCCS can automatically compare an input document with the archived database to determine if it is a duplicate, thereby reducing the overall resources required to maintain and access the document database. Embedded graphics on imaged pages can also be exploited, e.g., imaged documents containing an agency's seal or logo can be singled out. In this paper, we present a description of IDOCCS as well as preliminary performance results and theoretical projections.
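
    Optical hardware aside, the matching operation underlying correlation-based keyword or logo spotting is cross-correlation of a template against the page image. The sketch below is a generic normalized cross-correlation search, not IDOCCS itself; the function names `match_score` and `best_match` are illustrative:

    ```python
    def match_score(img, tmpl, y, x):
        """Normalized cross-correlation of `tmpl` against `img` at offset (y, x)."""
        vals_i, vals_t = [], []
        for dy in range(len(tmpl)):
            for dx in range(len(tmpl[0])):
                vals_i.append(img[y + dy][x + dx])
                vals_t.append(tmpl[dy][dx])
        mi = sum(vals_i) / len(vals_i)
        mt = sum(vals_t) / len(vals_t)
        num = sum((a - mi) * (b - mt) for a, b in zip(vals_i, vals_t))
        den = (sum((a - mi) ** 2 for a in vals_i)
               * sum((b - mt) ** 2 for b in vals_t)) ** 0.5
        return num / den if den else 0.0

    def best_match(img, tmpl):
        """Exhaustively slide the template over the image; return the offset
        with the highest correlation score (an optical correlator does this
        search in parallel at the speed of light)."""
        h, w = len(tmpl), len(tmpl[0])
        scores = {}
        for y in range(len(img) - h + 1):
            for x in range(len(img[0]) - w + 1):
                scores[(y, x)] = match_score(img, tmpl, y, x)
        return max(scores, key=scores.get)
    ```

    The exhaustive digital search is O(image area x template area), which is precisely the cost an optical correlator avoids.
    
    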

  8. JIP: Java image processing on the Internet

    NASA Astrophysics Data System (ADS)

    Wang, Dongyan; Lin, Bo; Zhang, Jun

    1998-12-01

    In this paper, we present JIP - Java Image Processing on the Internet, a new Internet-based application for remote education and software presentation. JIP offers an integrated learning environment on the Internet where remote users not only can share static HTML documents and lecture notes, but also can run and reuse dynamic distributed software components, without having the source code or any extra work of software compilation, installation and configuration. By implementing a platform-independent distributed computational model, local computational resources are consumed instead of the resources on a central server. As an extended Java applet, JIP allows users to select local image files on their computers or specify any image on the Internet using a URL as input. Multimedia lectures such as streaming video/audio and digital images are integrated into JIP and intelligently associated with specific image processing functions. Watching demonstrations and practicing the functions with user-selected input data dramatically encourages learning interest, while promoting the understanding of image processing theory. The JIP framework can be easily applied to other subjects in education or software presentation, such as digital signal processing, business, mathematics, physics, or other areas such as employee training and charged software consumption.

  9. Level 2 Ancillary Products and Datasets Algorithm Theoretical Basis

    NASA Technical Reports Server (NTRS)

    Diner, D.; Abdou, W.; Gordon, H.; Kahn, R.; Knyazikhin, Y.; Martonchik, J.; McDonald, D.; McMuldroch, S.; Myneni, R.; West, R.

    1999-01-01

    This Algorithm Theoretical Basis (ATB) document describes the algorithms used to generate the parameters of certain ancillary products and datasets used during Level 2 processing of Multi-angle Imaging SpectroRadiometer (MISR) data.

  10. Lessons from New Zealand's introduction of pictorial health warnings on tobacco packaging.

    PubMed

    Hoek, Janet; Wilson, Nick; Allen, Matthew; Edwards, Richard; Thomson, George; Li, Judy

    2010-11-01

    While international evidence suggests that featuring pictorial health warnings on tobacco packaging is an effective tobacco control intervention, the process used to introduce these new warnings has not been well documented. We examined relevant documents and interviewed officials responsible for this process in New Zealand. We found that, despite tobacco companies' opposition to pictorial health warnings and the resource constraints facing health authorities, the implementation process was generally robust and successful. Potential lessons for other countries planning to introduce or refresh existing pictorial health warnings include: (i) strengthening the link between image research and policy; (ii) requiring frequent image development and refreshment; (iii) using larger pictures (e.g. 80% of the front of the packet); (iv) developing themes that recognize concerns held by different smoker sub-groups; and (v) running integrated mass media campaigns when the warnings are introduced. All countries could also support moves by the World Health Organization Framework Convention on Tobacco Control's Secretariat to develop an international bank of copyright-free warnings.

  11. Targeting youth and concerned smokers: evidence from Canadian tobacco industry documents

    PubMed Central

    Pollay, R.

    2000-01-01

    OBJECTIVE—To provide an understanding of the targeting strategies of cigarette marketing, and the functions and importance of the advertising images chosen.
METHODS—Analysis of historical corporate documents produced by affiliates of British American Tobacco (BAT) and RJ Reynolds (RJR) in Canadian litigation challenging tobacco advertising regulation, the Tobacco Products Control Act (1987): Imperial Tobacco Limitée & RJR-Macdonald Inc c. Le Procureur Général du Canada.
RESULTS—Careful and extensive research has been employed in all stages of the process of conceiving, developing, refining, and deploying cigarette advertising. Two segments commanding much management attention are "starters" and "concerned smokers". To recruit starters, brand images communicate independence, freedom and (sometimes) peer acceptance. These advertising images portray smokers as attractive and autonomous, accepted and admired, athletic and at home in nature. For "lighter" brands reassuring health-concerned smokers, lest they quit, advertisements provide imagery conveying a sense of well-being, harmony with nature, and a consumer's self-image as intelligent.
CONCLUSIONS—The industry's steadfast assertion that its advertising influences only brand loyalty and switching, in both its intent and effect, is directly contradicted by its internal documents and proven false. So too is the justification of cigarette advertising as a medium creating better informed consumers, since visual imagery, not information, is the means of advertising influence.


Keywords: advertising; brand imagery; market research; youth targeting; "concerned" smokers; corporate documents PMID:10841849

  12. Binary Color Vision for Industrial Automation.

    DTIC Science & Technology

    1983-02-28

    Rosenfeld, A. and Kak, A.: Digital Picture Processing. Academic Press, New York, 1976. (17) Connah, D. M. and Fishbourne, C. A.: "The... An image is defined by a function of 2-D position, say I(m,n), defined at chosen grid points of the image. For an achromatic grey-scale image, the function...

  13. Extraction and labeling high-resolution images from PDF documents

    NASA Astrophysics Data System (ADS)

    Chachra, Suchet K.; Xue, Zhiyun; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.

    2013-12-01

    Accuracy of content-based image retrieval is affected by image resolution among other factors. Higher resolution images enable extraction of image features that more accurately represent the image content. In order to improve the relevance of search results for our biomedical image search engine, Open-I, we have developed techniques to extract and label high-resolution versions of figures from biomedical articles supplied in the PDF format. Open-I uses the open-access subset of biomedical articles from the PubMed Central repository hosted by the National Library of Medicine. Articles are available in XML and in publisher supplied PDF formats. As these PDF documents contain little or no meta-data to identify the embedded images, the task includes labeling images according to their figure number in the article after they have been successfully extracted. For this purpose we use the labeled small size images provided with the XML web version of the article. This paper describes the image extraction process and two alternative approaches to perform image labeling that measure the similarity between two images based upon the image intensity projection on the coordinate axes and similarity based upon the normalized cross-correlation between the intensities of two images. Using image identification based on image intensity projection, we were able to achieve a precision of 92.84% and a recall of 82.18% in labeling of the extracted images.
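
    The first similarity measure described, comparison of image intensity projections on the coordinate axes, can be sketched as below. Cosine similarity is used here as an assumed comparison function between projections; the abstract does not specify the exact measure:

    ```python
    def projections(img):
        """Project a 2-D greyscale image (list of lists) onto the y and x axes
        by summing intensities along each row and each column."""
        rows = [sum(r) for r in img]
        cols = [sum(c) for c in zip(*img)]
        return rows, cols

    def cosine(a, b):
        """Cosine similarity between two equal-length profiles."""
        num = sum(x * y for x, y in zip(a, b))
        den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
        return num / den if den else 0.0

    def projection_similarity(img1, img2):
        """Average similarity of the row and column intensity projections."""
        r1, c1 = projections(img1)
        r2, c2 = projections(img2)
        return 0.5 * (cosine(r1, r2) + cosine(c1, c2))
    ```

    Matching 1-D projections is far cheaper than full 2-D normalized cross-correlation (the paper's second approach), which is why projection matching is attractive when many extracted figures must be compared against many web-version thumbnails.
    
    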

  14. An Introduction to Document Imaging in the Financial Aid Office.

    ERIC Educational Resources Information Center

    Levy, Douglas A.

    2001-01-01

    First describes the components of a document imaging system in general and then addresses this technology specifically in relation to financial aid document management: its uses and benefits, considerations in choosing a document imaging system, and additional sources for information. (EV)

  15. Automatic extraction of numeric strings in unconstrained handwritten document images

    NASA Astrophysics Data System (ADS)

    Haji, M. Mehdi; Bui, Tien D.; Suen, Ching Y.

    2011-01-01

    Numeric strings such as identification numbers carry vital pieces of information in documents. In this paper, we present a novel algorithm for automatic extraction of numeric strings in unconstrained handwritten document images. The algorithm has two main phases: pruning and verification. In the pruning phase, the algorithm first performs a new segment-merge procedure on each text line, and then using a new regularity measure, it prunes all sequences of characters that are unlikely to be numeric strings. The segment-merge procedure is composed of two modules: a new explicit character segmentation algorithm which is based on analysis of skeletal graphs and a merging algorithm which is based on graph partitioning. All the candidate sequences that pass the pruning phase are sent to a recognition-based verification phase for the final decision. The recognition is based on a coarse-to-fine approach using probabilistic RBF networks. We developed our algorithm for the processing of real-world documents where letters and digits may be connected or broken in a document. The effectiveness of the proposed approach is shown by extensive experiments done on a real-world database of 607 documents which contains handwritten, machine-printed and mixed documents with different types of layouts and levels of noise.

  16. 10 CFR 9.35 - Duplication fees.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...) copying of ADAMS documents to paper (applies to images, OCR TIFF, and PDF text) is $0.30 per page. (B) EFT... is $0.30 per page. (vi) Priority rates (rush processing) are as follows: (A) The priority rate...

  17. 10 CFR 9.35 - Duplication fees.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) copying of ADAMS documents to paper (applies to images, OCR TIFF, and PDF text) is $0.30 per page. (B) EFT... is $0.30 per page. (vi) Priority rates (rush processing) are as follows: (A) The priority rate...

  18. 10 CFR 9.35 - Duplication fees.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) copying of ADAMS documents to paper (applies to images, OCR TIFF, and PDF text) is $0.30 per page. (B) EFT... is $0.30 per page. (vi) Priority rates (rush processing) are as follows: (A) The priority rate...

  19. 76 FR 27048 - Information Collection Being Reviewed by the Federal Communications Commission

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-10

    ... Commission; (8) Ex parte notices must be submitted electronically in machine-readable format. PDF images created by scanning a paper document may not be submitted, except in cases in which a word-processing...

  20. AVIRIS Reflectance Retrievals: UCSB Users Manual. Appendix 1

    NASA Technical Reports Server (NTRS)

    Roberts, Dar A.; Prentiss, Dylan

    2001-01-01

    The following write-up is designed to help students and researchers take Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) radiance data and retrieve surface reflectance. In the event that the software is not available, but a user has access to a reflectance product, this document is designed to provide a better understanding of how AVIRIS reflectance was retrieved. This guide assumes that the reader has both a basic understanding of the UNIX computing environment, and that of spectroscopy. Knowledge of the Interactive Data Language (IDL) and the Environment for Visualizing Images (ENVI) is helpful. This is a working document, and many of the fine details described in the following pages have been previously undocumented. After having read this document the reader should be able to process AVIRIS to reflectance, provided access to all of the code is possible. The AVIRIS radiance data itself is pre-processed at the Jet Propulsion Laboratory (JPL) in Pasadena, California. The first section of this paper describes how to read data from tape and byte-swap the data. Section 2 describes the procedure in preparing support files before running the 'h2o' suite of programs. Section 3 describes the four programs used in the process, h2olut9.f, h2ospl9.f, vlsfit9.f and rfl9.f.

  1. On application of image analysis and natural language processing for music search

    NASA Astrophysics Data System (ADS)

    Gwardys, Grzegorz

    2013-10-01

    In this paper, I investigate the problem of finding the most similar music tracks using techniques popular in Natural Language Processing, such as TF-IDF and LDA. I defined a document as a music track. Each music track is transformed into a spectrogram; thanks to that, I can use well-known techniques to extract words from images. I used the SURF operator to detect characteristic points and a novel approach for their description. Standard k-means was used for clustering. Clustering here is identical to dictionary building, so afterwards I can transform spectrograms into text documents and perform TF-IDF and LDA. Finally, I can make a query in the obtained vector space. The research was done on 16 music tracks for training and 336 for testing, split into four categories: Hiphop, Jazz, Metal and Pop. Although the technique used is completely unsupervised, the results are satisfactory and encourage further research.
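
    Once each spectrogram has been turned into a bag of "visual words", the weighting step is standard TF-IDF. The sketch below is the textbook formulation, not the paper's exact implementation:

    ```python
    import math
    from collections import Counter

    def tfidf(docs):
        """docs: list of token lists (one list of visual words per track).
        Returns, per document, a dict mapping word -> tf-idf weight."""
        n = len(docs)
        df = Counter()                      # document frequency of each word
        for d in docs:
            df.update(set(d))
        weighted = []
        for d in docs:
            tf = Counter(d)
            weighted.append({w: (tf[w] / len(d)) * math.log(n / df[w])
                             for w in tf})
        return weighted
    ```

    A word occurring in every track gets idf = log(n/n) = 0 and so contributes nothing to the query-time vector-space comparison, which is exactly the point: ubiquitous spectrogram patterns are uninformative for retrieval.
    
    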

  2. Evidence and diagnostic reporting in the IHE context.

    PubMed

    Loef, Cor; Truyen, Roel

    2005-05-01

    Capturing clinical observations and findings during the diagnostic imaging process is increasingly becoming a critical step in diagnostic reporting. Standards developers, notably HL7 and DICOM, are making significant progress toward standards that enable exchanging clinical observations and findings among the various information systems of the healthcare enterprise. DICOM, like the HL7 Clinical Document Architecture (CDA), uses templates and constrained, coded vocabulary (SNOMED, LOINC, etc.). Such a representation facilitates automated software recognition of findings and observations, intrapatient comparison, correlation to norms, and outcomes research. The scope of DICOM Structured Reporting (SR) includes many findings that products routinely create in digital form (measurements, computed estimates, etc.). In the Integrating the Healthcare Enterprise (IHE) framework, two Integration Profiles are defined for clinical data capture and diagnostic reporting: Evidence Document, and Simple Image and Numeric Report. This report describes these two DICOM SR-based integration profiles in the diagnostic reporting process.

  3. Robust binarization of degraded document images using heuristics

    NASA Astrophysics Data System (ADS)

    Parker, Jon; Frieder, Ophir; Frieder, Gideon

    2013-12-01

    Historically significant documents are often discovered with defects that make them difficult to read and analyze. This fact is particularly troublesome if the defects prevent software from performing an automated analysis. Image enhancement methods are used to remove or minimize document defects, improve software performance, and generally make images more legible. We describe an automated, image enhancement method that is input page independent and requires no training data. The approach applies to color or greyscale images with hand written script, typewritten text, images, and mixtures thereof. We evaluated the image enhancement method against the test images provided by the 2011 Document Image Binarization Contest (DIBCO). Our method outperforms all 2011 DIBCO entrants in terms of average F1 measure - doing so with a significantly lower variance than top contest entrants. The capability of the proposed method is also illustrated using select images from a collection of historic documents stored at Yad Vashem Holocaust Memorial in Israel.
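
    The abstract does not disclose the heuristics themselves, so as a baseline illustration of document image binarization (the task the DIBCO evaluation measures with F1), here is Otsu's classical global threshold in pure Python; it is a reference point, not the paper's method:

    ```python
    def otsu_threshold(pixels):
        """Return the greyscale threshold (0-255) that maximizes between-class
        variance over a flat list of pixel values (Otsu's method)."""
        hist = [0] * 256
        for p in pixels:
            hist[p] += 1
        total = len(pixels)
        sum_all = sum(i * hist[i] for i in range(256))
        sum_b = 0           # running intensity sum of the background class
        w_b = 0             # running pixel count of the background class
        best_t, best_var = 0, -1.0
        for t in range(256):
            w_b += hist[t]
            if w_b == 0:
                continue
            w_f = total - w_b
            if w_f == 0:
                break
            sum_b += t * hist[t]
            m_b = sum_b / w_b
            m_f = (sum_all - sum_b) / w_f
            var = w_b * w_f * (m_b - m_f) ** 2   # between-class variance
            if var > best_var:
                best_var, best_t = var, t
        return best_t
    ```

    Global thresholds like Otsu's are exactly what degrade on stained or unevenly lit historical pages, which motivates adaptive, heuristic methods such as the one evaluated here.
    
    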

  4. Document cards: a top trumps visualization for documents.

    PubMed

    Strobelt, Hendrik; Oelke, Daniela; Rohrdantz, Christian; Stoffel, Andreas; Keim, Daniel A; Deussen, Oliver

    2009-01-01

    Finding suitable, less space consuming views for a document's main content is crucial to provide convenient access to large document collections on display devices of different size. We present a novel compact visualization which represents the document's key semantics as a mixture of images and important key terms, similar to cards in a top trumps game. The key terms are extracted using an advanced text mining approach based on a fully automatic document structure extraction. The images and their captions are extracted using a graphical heuristic and the captions are used for a semi-semantic image weighting. Furthermore, we use the image color histogram for classification and show at least one representative from each non-empty image class. The approach is demonstrated for the IEEE InfoVis publications of a complete year. The method can easily be applied to other publication collections and sets of documents which contain images.

  5. Artificial neural networks for document analysis and recognition.

    PubMed

    Marinai, Simone; Gori, Marco; Soda, Giovanni; Society, Computer

    2005-01-01

    Artificial neural networks have been extensively applied to document analysis and recognition. Most efforts have been devoted to the recognition of isolated handwritten and printed characters, with widely recognized successful results. However, many other document processing tasks, like preprocessing, layout analysis, character segmentation, word recognition, and signature verification, have been effectively addressed with very promising results. This paper surveys the most significant problems in the area of offline document image processing where connectionist-based approaches have been applied. Similarities and differences between approaches belonging to different categories are discussed. Particular emphasis is given to the crucial role of prior knowledge in the conception of both appropriate architectures and learning algorithms. Finally, the paper provides a critical analysis of the reviewed approaches and depicts the most promising research guidelines in the field. In particular, a second generation of connectionist-based models is foreseen, based on appropriate graphical representations of the learning environment.

  6. Algorithms and programming tools for image processing on the MPP, part 2

    NASA Technical Reports Server (NTRS)

    Reeves, Anthony P.

    1986-01-01

    A number of algorithms were developed for image warping and pyramid image filtering. Techniques were investigated for the parallel processing of a large number of independent irregular shaped regions on the MPP. In addition some utilities for dealing with very long vectors and for sorting were developed. Documentation pages for the algorithms which are available for distribution are given. The performance of the MPP for a number of basic data manipulations was determined. From these results it is possible to predict the efficiency of the MPP for a number of algorithms and applications. The Parallel Pascal development system, which is a portable programming environment for the MPP, was improved and better documentation including a tutorial was written. This environment allows programs for the MPP to be developed on any conventional computer system; it consists of a set of system programs and a library of general purpose Parallel Pascal functions. The algorithms were tested on the MPP and a presentation on the development system was made to the MPP users group. The UNIX version of the Parallel Pascal System was distributed to a number of new sites.

  7. Detecting stripe artifacts in ultrasound images.

    PubMed

    Maciak, Adam; Kier, Christian; Seidel, Günter; Meyer-Wiethe, Karsten; Hofmann, Ulrich G

    2009-10-01

    Brain perfusion diseases such as acute ischemic stroke are detectable through computed tomography (CT)-/magnetic resonance imaging (MRI)-based methods. An alternative approach makes use of ultrasound imaging. In this low-cost bedside method, noise and artifacts degrade the imaging process. Especially stripe artifacts show a similar signal behavior compared to acute stroke or brain perfusion diseases. This document describes how stripe artifacts can be detected and eliminated in ultrasound images obtained through harmonic imaging (HI). On the basis of this new method, both proper identification of areas with critically reduced brain tissue perfusion and classification between brain perfusion defects and ultrasound stripe artifacts are made possible.
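
    The abstract does not detail the detection algorithm, but one simple way to flag stripe artifacts, which appear as columns whose intensity statistics deviate from the rest of the image, is a column-wise outlier test. The `stripe_columns` sketch below is an illustrative assumption, not the HI method described in the paper:

    ```python
    def stripe_columns(img, k=2.0):
        """Flag columns of a 2-D greyscale image whose mean intensity deviates
        by more than k standard deviations from the mean over all columns."""
        col_means = [sum(c) / len(c) for c in zip(*img)]
        mu = sum(col_means) / len(col_means)
        sd = (sum((m - mu) ** 2 for m in col_means) / len(col_means)) ** 0.5
        return [i for i, m in enumerate(col_means)
                if sd and abs(m - mu) > k * sd]
    ```

    Flagged columns can then be excluded (or interpolated over) before perfusion analysis, so that stripe artifacts are not mistaken for regions of reduced perfusion.
    
    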

  8. Portulano frente a Landsat: Dos sistemas de georreferenciacion para el Estrecho de Gibraltar

    NASA Astrophysics Data System (ADS)

    Barranco Molina, Carlos

    The purpose of this research is to study and compare two different images of the same area from geographical, topographical and historical perspectives. The main differences between them are the temporal localization and the techniques and tools required in the process of producing both images. One of them is a 17th-century map and the other is a contemporary satellite image. Any cartographical document would be expected to be precise, exact and cutting-edge thanks to the application of new technologies to cartography, geodesy and computing. The document can therefore be considered insuperable and definitive. In this context, this research is a starting point for my future doctoral dissertation, which will deal with the importance of the portulano maps in the Scientific Revolution of the 15th and 16th centuries.

  9. ThunderSTORM: a comprehensive ImageJ plug-in for PALM and STORM data analysis and super-resolution imaging

    PubMed Central

    Ovesný, Martin; Křížek, Pavel; Borkovec, Josef; Švindrych, Zdeněk; Hagen, Guy M.

    2014-01-01

    Summary: ThunderSTORM is an open-source, interactive and modular plug-in for ImageJ designed for automated processing, analysis and visualization of data acquired by single-molecule localization microscopy methods such as photo-activated localization microscopy and stochastic optical reconstruction microscopy. ThunderSTORM offers an extensive collection of processing and post-processing methods so that users can easily adapt the process of analysis to their data. ThunderSTORM also offers a set of tools for creation of simulated data and quantitative performance evaluation of localization algorithms using Monte Carlo simulations. Availability and implementation: ThunderSTORM and the online documentation are both freely accessible at https://code.google.com/p/thunder-storm/. Contact: guy.hagen@lf1.cuni.cz Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24771516

  10. Reducing uncertainty in wind turbine blade health inspection with image processing techniques

    NASA Astrophysics Data System (ADS)

    Zhang, Huiyi

    Structural health inspection has been widely applied in the operation of wind farms to find early cracks in wind turbine blades (WTBs). Increased numbers of turbines and expanded rotor diameters are driving up the workloads and safety risks for site employees. Therefore, it is important to automate the inspection process as well as minimize the uncertainties involved in routine blade health inspection. In addition, crack documentation and trending are vital to assess rotor blade and turbine reliability over the 20-year design life span. A new crack recognition and classification algorithm is described that can support automated structural health inspection of the surface of large composite WTBs. The first part of the study investigated the feasibility of digital image processing in WTB health inspection and defined the capability of numerically detecting cracks as small as hairline thickness. The second part of the study identified and analyzed the uncertainty of the digital image processing method. A self-learning algorithm was proposed to recognize and classify cracks without comparing a blade image to a library of crack images. The last part of the research quantified the uncertainty in the field conditions and the image processing methods.
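    The dissertation's self-learning classifier is not reproduced here; as a much simpler sketch of the first step, numerically detecting hairline cracks, a Sobel gradient magnitude highlights thin dark lines against a bright blade surface (the function and thresholds are illustrative, not the author's method):

```python
import numpy as np

def crack_response(gray):
    """Sobel gradient magnitude: thin dark cracks on a bright surface
    show up as high-gradient ridges in the response map."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = gray.shape
    pad = np.pad(gray.astype(float), 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):                 # correlate with both kernels
        for j in range(3):
            patch = pad[i:i + h, j:j + w]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)           # gradient magnitude
```

    Thresholding this response gives crack candidates; separating true cracks from dirt, shadows and blade-edge gradients is what the classification stage is for.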

  11. Thoracic Imaging Features of Legionnaire's Disease.

    PubMed

    Mittal, Sameer; Singh, Ayushi P; Gold, Menachem; Leung, Ann N; Haramati, Linda B; Katz, Douglas S

    2017-03-01

    Imaging examinations are often performed in patients with Legionnaires' disease. The literature to date has documented that the imaging findings in this disorder are relatively nonspecific, and it is therefore difficult to prospectively differentiate legionella pneumonia from other forms of pneumonia, and from other noninfectious thoracic processes. Through a review of clinical cases and the literature, our objective is for the reader to gain a better understanding of the spectrum of radiographic manifestations of Legionnaires' disease.

  12. Document Image Parsing and Understanding using Neuromorphic Architecture

    DTIC Science & Technology

    2015-03-01

    processing speed at different layers. In the pattern matching layer, the computing power of multicore processors is explored... cortex where the complex data is reduced to abstract representations. The abstract representation is compared to stored patterns in massively parallel

  13. A chaotic cryptosystem for images based on Henon and Arnold cat map.

    PubMed

    Soleymani, Ali; Nordin, Md Jan; Sundararajan, Elankovan

    2014-01-01

    The rapid evolution of imaging and communication technologies has transformed images into a widespread data type. Different types of data, such as personal medical information, official correspondence, or governmental and military documents, are saved and transmitted in the form of images over public networks. Hence, a fast and secure cryptosystem is needed for high-resolution images. In this paper, a novel encryption scheme is presented for securing images based on Arnold cat and Henon chaotic maps. The scheme uses Arnold cat map for bit- and pixel-level permutations on plain and secret images, while Henon map creates secret images and specific parameters for the permutations. Both the encryption and decryption processes are explained, formulated, and graphically presented. The results of security analysis of five different images demonstrate the strength of the proposed cryptosystem against statistical, brute force and differential attacks. The evaluated running time for both encryption and decryption processes guarantee that the cryptosystem can work effectively in real-time applications.
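    The two maps named in the abstract are easy to state concretely. Below is a minimal, illustrative sketch (not the paper's full scheme, which also performs bit-level permutation and derives its own key material): the Arnold cat map as a pixel permutation on a square image, and the Henon map as a chaotic sequence generator, using textbook parameter values:

```python
import numpy as np

def henon_sequence(n, x0=0.1, y0=0.3, a=1.4, b=0.3):
    """Henon map x' = 1 - a*x^2 + y, y' = b*x; chaotic for the classic
    parameters used here (the paper derives its own keys from it)."""
    xs = np.empty(n)
    x, y = x0, y0
    for i in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        xs[i] = x
    return xs

def arnold_cat(img, iterations=1):
    """Arnold cat map on a square N x N image: output pixel (i, j) is
    read from ((i + j) mod N, (i + 2j) mod N). The underlying matrix
    [[1, 1], [1, 2]] has determinant 1, so the map is an invertible
    permutation and decryption applies the inverse the same number of
    times."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "cat map needs a square image"
    ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img
    for _ in range(iterations):
        out = out[(ii + jj) % n, (ii + 2 * jj) % n]
    return out
```

    Because the cat map only permutes pixels, a histogram of the scrambled image equals that of the plain image; this is why such schemes combine permutation with a diffusion stage.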

  14. A Chaotic Cryptosystem for Images Based on Henon and Arnold Cat Map

    PubMed Central

    Sundararajan, Elankovan

    2014-01-01

    The rapid evolution of imaging and communication technologies has transformed images into a widespread data type. Different types of data, such as personal medical information, official correspondence, or governmental and military documents, are saved and transmitted in the form of images over public networks. Hence, a fast and secure cryptosystem is needed for high-resolution images. In this paper, a novel encryption scheme is presented for securing images based on Arnold cat and Henon chaotic maps. The scheme uses Arnold cat map for bit- and pixel-level permutations on plain and secret images, while Henon map creates secret images and specific parameters for the permutations. Both the encryption and decryption processes are explained, formulated, and graphically presented. The results of security analysis of five different images demonstrate the strength of the proposed cryptosystem against statistical, brute force and differential attacks. The evaluated running time for both encryption and decryption processes guarantee that the cryptosystem can work effectively in real-time applications. PMID:25258724

  15. Conceptual design of a monitoring system for the Charters of Freedom

    NASA Technical Reports Server (NTRS)

    Cutts, J. A.

    1984-01-01

    A conceptual design of a monitoring system for the Charters of Freedom was developed for the National Archives and Records Service. The monitoring system would be installed at the National Archives and used to document the condition of the Charters as part of a regular inspection program. The results of an experimental measurements program that led to the definition of analysis system requirements are presented, a conceptual design of the monitoring system is described, and alternative approaches to implementing this design are discussed. The monitoring system is required to optically detect and measure deterioration in documents that are permanently encapsulated in glass cases. An electronic imaging system with the capability for precise photometric measurements of the contrast of the script on the documents can perform this task. Two general types of imaging systems are considered (line and area array), and their suitability for performing the required measurements is compared. A digital processing capability for analyzing the electronic imaging data is also required, and several optional levels of complexity for this digital analysis system are evaluated.

  16. Building Structured Personal Health Records from Photographs of Printed Medical Records.

    PubMed

    Li, Xiang; Hu, Gang; Teng, Xiaofei; Xie, Guotong

    2015-01-01

    Personal health records (PHRs) provide patient-centric healthcare by making health records accessible to patients. In China, it is very difficult for individuals to access electronic health records. Instead, individuals can easily obtain the printed copies of their own medical records, such as prescriptions and lab test reports, from hospitals. In this paper, we propose a practical approach to extract structured data from printed medical records photographed by mobile phones. An optical character recognition (OCR) pipeline is performed to recognize text in a document photo, which addresses the problems of low image quality and content complexity by image pre-processing and multiple OCR engine synthesis. A series of annotation algorithms that support flexible layouts are then used to identify the document type, entities of interest, and entity correlations, from which a structured PHR document is built. The proposed approach was applied to real world medical records to demonstrate the effectiveness and applicability.
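    The abstract does not spell out the image pre-processing chain; one standard ingredient of such OCR pipelines, shown here as an assumption rather than the authors' documented choice, is Otsu binarization of the photographed page:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the gray level that maximizes the
    between-class variance of the resulting foreground/background
    split -- a common binarization step before OCR."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(256))      # class-0 cumulative mean
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b)))
```

    Pixels below the returned threshold become text, pixels above become background; uneven phone-camera lighting is the usual reason real pipelines switch to locally adaptive variants.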

  17. Building Structured Personal Health Records from Photographs of Printed Medical Records

    PubMed Central

    Li, Xiang; Hu, Gang; Teng, Xiaofei; Xie, Guotong

    2015-01-01

    Personal health records (PHRs) provide patient-centric healthcare by making health records accessible to patients. In China, it is very difficult for individuals to access electronic health records. Instead, individuals can easily obtain the printed copies of their own medical records, such as prescriptions and lab test reports, from hospitals. In this paper, we propose a practical approach to extract structured data from printed medical records photographed by mobile phones. An optical character recognition (OCR) pipeline is performed to recognize text in a document photo, which addresses the problems of low image quality and content complexity by image pre-processing and multiple OCR engine synthesis. A series of annotation algorithms that support flexible layouts are then used to identify the document type, entities of interest, and entity correlations, from which a structured PHR document is built. The proposed approach was applied to real world medical records to demonstrate the effectiveness and applicability. PMID:26958219

  18. Multispectral Photogrammetric Data Acquisition and Processing Forwall Paintings Studies

    NASA Astrophysics Data System (ADS)

    Pamart, A.; Guillon, O.; Faraci, S.; Gattet, E.; Genevois, M.; Vallet, J. M.; De Luca, L.

    2017-02-01

    In the field of wall paintings studies, different imaging techniques are commonly used for documentation and for decision making in terms of conservation and restoration. There are nowadays some challenging issues in merging scientific imaging techniques in a multimodal context (i.e. multi-sensor, multi-dimensional, multi-spectral and multi-temporal approaches). For decades these cultural heritage objects have been widely documented with Technical Photography (TP), which gives precious information for understanding or retrieving the painting layouts and history. More recently there is an increasing demand for digital photogrammetry in order to provide, as one possible output, an orthophotomosaic, which allows metrical quantification of conservators'/restorers' observations and action planning. This paper presents some ongoing experimentations of the LabCom MAP-CICRP relying on the assumption that these techniques can be merged through a common pipeline to share their respective benefits and create a more complete documentation.

  19. Data Provenance in Photogrammetry Through Documentation Protocols

    NASA Astrophysics Data System (ADS)

    Carboni, N.; Bruseker, G.; Guillem, A.; Bellido Castañeda, D.; Coughenour, C.; Domajnko, M.; de Kramer, M.; Ramos Calles, M. M.; Stathopoulou, E. K.; Suma, R.

    2016-06-01

    Documenting the relevant aspects in digitisation processes such as photogrammetry in order to provide a robust provenance for their products continues to present a challenge. The creation of a product that can be re-used scientifically requires a framework for consistent, standardised documentation of the entire digitisation pipeline. This article provides an analysis of the problems inherent to such goals and presents a series of protocols to document the various steps of a photogrammetric workflow. We propose this pipeline, with descriptors to track all phases of digital product creation in order to assure data provenance and enable the validation of the operations from an analytic and production perspective. The approach aims to support adopters of the workflow to define procedures with a long term perspective. The conceptual schema we present is founded on an analysis of information and actor exchanges in the digitisation process. The metadata were defined through the synthesis of previous proposals in this area and were tested on a case study. We performed the digitisation of a set of cultural heritage artefacts from an Iron Age burial in Ilmendorf, Germany. The objects were captured and processed using different techniques, including a comparison of different imaging tools and algorithms. This augmented the complexity of the process allowing us to test the flexibility of the schema for documenting complex scenarios. Although we have only presented a photogrammetry digitisation scenario, we claim that our schema is easily applicable to a multitude of 3D documentation processes.

  20. Algorithms and programming tools for image processing on the MPP

    NASA Technical Reports Server (NTRS)

    Reeves, A. P.

    1985-01-01

    Topics addressed include: data mapping and rotational algorithms for the Massively Parallel Processor (MPP); Parallel Pascal language; documentation for the Parallel Pascal Development system; and a description of the Parallel Pascal language used on the MPP.

  1. Accurate documentation in cultural heritage by merging TLS and high-resolution photogrammetric data

    NASA Astrophysics Data System (ADS)

    Grussenmeyer, Pierre; Alby, Emmanuel; Assali, Pierre; Poitevin, Valentin; Hullo, Jean-François; Smigiel, Eddie

    2011-07-01

    Several recording techniques are used together in Cultural Heritage Documentation projects. The main purpose of the documentation and conservation works is usually to generate geometric and photorealistic 3D models for both accurate reconstruction and visualization purposes. The recording approach discussed in this paper is based on the combination of photogrammetric dense matching and Terrestrial Laser Scanning (TLS) techniques. Both techniques have pros and cons, and criteria such as geometry, texture, accuracy, resolution, and recording and processing time are often compared. TLS techniques (time-of-flight or phase-shift systems) are often used for the recording of large and complex objects or sites. Point cloud generation from images by dense stereo or multi-image matching can be used as an alternative or complementary method to TLS. Compared to TLS, the photogrammetric solution is a low-cost one, as the acquisition system is limited to a digital camera and a few accessories only. Indeed, the stereo matching process offers a cheap, flexible and accurate solution to obtain 3D point clouds and textured models. The calibration of the camera allows the processing of distortion-free images, accurate orientation of the images, and matching at the subpixel level. The main advantage of this photogrammetric methodology is to obtain at the same time a point cloud (whose resolution depends on the size of the pixel on the object), and therefore an accurate meshed object with its texture. After the matching and processing steps, the resulting data can be used in much the same way as a TLS point cloud, but with much better raster information for textures. The paper addresses the automation of the recording and processing steps, the assessment of the results, and the deliverables (e.g. PDF-3D files). Visualization aspects of the final 3D models are presented.
Two case studies with merged photogrammetric and TLS data are finally presented: the Gallo-Roman theatre of Mandeure (France), and the Medieval Fortress of Châtel-sur-Moselle (France), where a network of underground galleries and vaults has been recorded.

  2. ImgLib2--generic image processing in Java.

    PubMed

    Pietzsch, Tobias; Preibisch, Stephan; Tomancák, Pavel; Saalfeld, Stephan

    2012-11-15

    ImgLib2 is an open-source Java library for n-dimensional data representation and manipulation with a focus on image processing. It aims at minimizing code duplication by cleanly separating pixel-algebra, data access and data representation in memory. Algorithms can be implemented for classes of pixel types and generic access patterns, by which they become independent of the specific dimensionality, pixel type and data representation. ImgLib2 illustrates that an elegant high-level programming interface can be achieved without sacrificing performance. It provides efficient implementations of common data types, storage layouts and algorithms. It is the data model underlying ImageJ2, the KNIME Image Processing toolbox and an increasing number of Fiji plug-ins. ImgLib2 is licensed under BSD. Documentation and source code are available at http://imglib2.net and in a public repository at https://github.com/imagej/imglib. Supplementary data are available at Bioinformatics online. Contact: saalfeld@mpi-cbg.de

  3. A neural network ActiveX based integrated image processing environment.

    PubMed

    Ciuca, I; Jitaru, E; Alaicescu, M; Moisil, I

    2000-01-01

    The paper outlines an integrated image processing environment that uses neural network ActiveX technology for object recognition and classification. The image processing environment, which is Windows based, encapsulates a Multiple-Document Interface (MDI) and is menu driven. Object (shape) parameter extraction is focused on features that are invariant under translation, rotation and scale transformations. The neural network models that can be incorporated as ActiveX components into the environment allow both clustering and classification of objects from the analysed image. Mapping neural networks perform an input sensitivity analysis on the extracted feature measurements and thus facilitate the removal of irrelevant features and improvements in the degree of generalisation. The program has been used to evaluate the dimensions of the hydrocephalus in a study calculating the Evans index and the angle of the frontal horns of the ventricular system modifications.

  4. MToS: A Tree of Shapes for Multivariate Images.

    PubMed

    Carlinet, Edwin; Géraud, Thierry

    2015-12-01

    The topographic map of a gray-level image, also called tree of shapes, provides a high-level hierarchical representation of the image contents. This representation, invariant to contrast changes and to contrast inversion, has been proved very useful to achieve many image processing and pattern recognition tasks. Its definition relies on the total ordering of pixel values, so this representation does not exist for color images, or more generally, multivariate images. Common workarounds, such as marginal processing, or imposing a total order on data, are not satisfactory and yield many problems. This paper presents a method to build a tree-based representation of multivariate images, which features marginally the same properties of the gray-level tree of shapes. Briefly put, we do not impose an arbitrary ordering on values, but we only rely on the inclusion relationship between shapes in the image definition domain. The interest of having a contrast invariant and self-dual representation of multivariate image is illustrated through several applications (filtering, segmentation, and object recognition) on different types of data: color natural images, document images, satellite hyperspectral imaging, multimodal medical imaging, and videos.

  5. Geometric rectification of camera-captured document images.

    PubMed

    Liang, Jian; DeMenthon, Daniel; Doermann, David

    2008-04-01

    Compared to typical scanners, handheld cameras offer convenient, flexible, portable, and non-contact image capture, which enables many new applications and breathes new life into existing ones. However, camera-captured documents may suffer from distortions caused by non-planar document shape and perspective projection, which lead to failure of current OCR technologies. We present a geometric rectification framework for restoring the frontal-flat view of a document from a single camera-captured image. Our approach estimates 3D document shape from texture flow information obtained directly from the image without requiring additional 3D/metric data or prior camera calibration. Our framework provides a unified solution for both planar and curved documents and can be applied in many, especially mobile, camera-based document analysis applications. Experiments show that our method produces results that are significantly more OCR compatible than the original images.
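    For the planar special case, the perspective distortion the paper mentions reduces to a homography, which four point correspondences determine via the standard Direct Linear Transform. This generic sketch is not the paper's texture-flow method (which also handles curved pages and needs no known correspondences):

```python
import numpy as np

def homography_from_points(src, dst):
    """DLT: solve dst ~ H @ src for the 3x3 homography H from four
    (or more) point correspondences, via the SVD null vector."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    H = vt[-1].reshape(3, 3)      # right singular vector of least
    return H / H[2, 2]            # singular value, normalized

def apply_h(H, pt):
    """Apply a homography to a 2-D point (homogeneous divide)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

    Warping every pixel of the photo through H (in practice, the inverse map with bilinear sampling) yields the frontal-flat view that OCR expects.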

  6. Interface control document between the NASA Goddard Space Flight Center (GSFC) and Department of Interior EROS Data Center (EDC) for LANDSAT-D. Partially processed multispectral scanner High Density Tape (HDT-AM)

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The format of the HDT-AM product which contains partially processed LANDSAT D and D Prime multispectral scanner image data is defined. Recorded-data formats, tape format, and major frame types are described.

  7. Analysis Of The IJCNN 2011 UTL Challenge

    DTIC Science & Technology

    2012-01-13

    large datasets from various application domains: handwriting recognition, image recognition, video processing, text processing, and ecology. The goal... validation and final evaluation sets consist of 4096 examples each. [Flattened table fragment: Dataset / Domain / Features / Sparsity / Devel. / Transf.; e.g. AVICENNA, Handwriting, 120, 0%, 150205, ...] documents [3]. Transfer learning methods could accelerate the application of handwriting recognizers to historical manuscripts by reducing the need for

  8. The Heinz Electronic Library Interactive Online System (HELIOS): Building a Digital Archive Using Imaging, OCR, and Natural Language Processing Technologies.

    ERIC Educational Resources Information Center

    Galloway, Edward A.; Michalek, Gabrielle V.

    1995-01-01

    Discusses the conversion project of the congressional papers of Senator John Heinz into digital format and the provision of electronic access to these papers by Carnegie Mellon University. Topics include collection background, project team structure, document processing, scanning, use of optical character recognition software, verification…

  9. A GPU accelerated PDF transparency engine

    NASA Astrophysics Data System (ADS)

    Recker, John; Lin, I.-Jong; Tastl, Ingeborg

    2011-01-01

    As commercial printing presses become faster, cheaper and more efficient, so too must the Raster Image Processors (RIPs) that prepare data for them to print. Digital press RIPs, however, have been challenged to meet, on the one hand, the ever increasing print performance of the latest digital presses and, on the other, to process increasingly complex documents with transparent layers and embedded ICC profiles. This paper explores the challenges encountered when implementing a GPU accelerated driver for the open source Ghostscript Adobe PostScript and PDF language interpreter, targeted at accelerating PDF transparency for high speed commercial presses. It further describes our solution, including an image memory manager for tiling input and output images and documents, a PDF compatible multiple image layer blending engine, and a GPU accelerated ICC v4 compatible color transformation engine. The result, we believe, is the foundation for a scalable, efficient, distributed RIP system that can meet current and future RIP requirements for a wide range of commercial digital presses.

  10. Advanced microlens and color filter process technology for the high-efficiency CMOS and CCD image sensors

    NASA Astrophysics Data System (ADS)

    Fan, Yang-Tung; Peng, Chiou-Shian; Chu, Cheng-Yu

    2000-12-01

    New markets are emerging for digital electronic image devices, especially in visual communications, PC cameras, mobile/cell phones, security systems, toys, vehicle imaging systems and computer peripherals for document capture. One-chip image systems, in which the image sensor has a fully digital interface, can bring image capture devices into our daily lives. Adding a color filter to such an image sensor, in a pattern of pixel mosaics or wide stripes, makes the image more realistic and colorful. A color filter passes only light whose wavelength and transmittance match the filter itself and blocks the rest. The color filter process coats and patterns green, red and blue (or cyan, magenta and yellow) mosaic resists onto the matching pixels of the image sensing array. From the signal captured at each pixel, the image of the scene can be reconstructed. The wide use of digital cameras and multimedia applications today makes color filter technology increasingly important, and although the process is challenging, it is well worth developing, offering shorter cycle times, excellent color quality, and high, stable yield. The key issues of the advanced color process that have to be solved and implemented are planarization and micro-lens technology; further key points of color filter process technology are also described in this paper.

  11. The sequence measurement system of the IR camera

    NASA Astrophysics Data System (ADS)

    Geng, Ai-hui; Han, Hong-xia; Zhang, Hai-bo

    2011-08-01

    Currently, IR cameras are broadly used in the optic-electronic tracking, optic-electronic measuring, fire control and optic-electronic countermeasure fields, but the output timing of most IR cameras applied in projects is complex, and the timing documents supplied by the manufacturer are not detailed. Since continuous image transmission and image processing systems need the detailed timing of the IR cameras, a sequence measurement system for the IR camera is designed and a detailed sequence measurement method for the applied IR camera is carried out. FPGA programming combined with SignalTap online observation is applied in the sequence measurement system, the precise timing of the IR camera's output signal has been obtained, and the detailed timing document has been supplied to the continuous image transmission system, image processing system, etc. The sequence measurement system includes a CameraLink input interface, an LVDS input interface, an FPGA, and a CameraLink output interface, of which the FPGA is the key component. Both CameraLink-style and LVDS-style video signals can be accepted by the system, and because image processing and image memory cards usually use CameraLink as their input interface, the output of the sequence measurement system has been designed as a CameraLink interface as well. The system thus performs the IR camera's sequence measurement and at the same time interface conversion for some cameras. Inside the FPGA, the sequence measurement program, the pixel clock modification, the SignalTap file configuration and the SignalTap online observation are integrated to realize precise measurement of the IR camera.
The sequence measurement program, written in Verilog and combined with the SignalTap online observation tool, counts the number of lines in one frame and the number of pixels in one line, and computes the line offset and row offset of the image. Aimed at the complex timing of the IR camera's output signal, the sequence measurement system accurately measures the timing of the camera applied in the project, supplies the detailed timing document to downstream systems such as the image processing and image transmission systems, and gives the concrete parameters of fval, lval, pixclk, line offset and row offset. Experiments show that the sequence measurement system obtains precise measurement results and works stably, laying a foundation for the downstream systems.

  12. EMAN2: an extensible image processing suite for electron microscopy.

    PubMed

    Tang, Guang; Peng, Liwei; Baldwin, Philip R; Mann, Deepinder S; Jiang, Wen; Rees, Ian; Ludtke, Steven J

    2007-01-01

    EMAN is a scientific image processing package with a particular focus on single particle reconstruction from transmission electron microscopy (TEM) images. It was first released in 1999, and new versions have been released typically 2-3 times each year since that time. EMAN2 has been under development for the last two years, with a completely refactored image processing library, and a wide range of features to make it much more flexible and extensible than EMAN1. The user-level programs are better documented, more straightforward to use, and written in the Python scripting language, so advanced users can modify the programs' behavior without any recompilation. A completely rewritten 3D transformation class simplifies translation between Euler angle standards and symmetry conventions. The core C++ library has over 500 functions for image processing and associated tasks, and it is modular with introspection capabilities, so programmers can add new algorithms with minimal effort and programs can incorporate new capabilities automatically. Finally, a flexible new parallelism system has been designed to address the shortcomings in the rigid system in EMAN1.

  13. Digital analysis of wind tunnel imagery to measure fluid thickness

    NASA Technical Reports Server (NTRS)

    Easton, Roger L., Jr.; Enge, James

    1992-01-01

    Documented here are the procedure and results obtained from the application of digital image processing techniques to the problem of measuring the thickness of a deicing fluid on a model airfoil during simulated takeoffs. The fluid contained a fluorescent dye and the images were recorded under flash illumination on photographic film. The films were digitized and analyzed on a personal computer to obtain maps of the fluid thickness.
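    The report's calibration data are not reproduced in this record; as a sketch of the underlying idea only (the calibration points below are invented), a monotone calibration curve measured on fluid samples of known depth converts fluorescence intensity to thickness:

```python
import numpy as np

def thickness_map(intensity, cal_intensity, cal_thickness):
    """Map fluorescence intensity to fluid thickness by piecewise
    linear interpolation through a monotone calibration curve.
    Works for scalars or whole image arrays."""
    flat = np.interp(np.ravel(intensity), cal_intensity, cal_thickness)
    return flat.reshape(np.shape(intensity))
```

    Applying this to every pixel of a digitized frame gives the thickness map over the airfoil; the accuracy is bounded by how well the dye's intensity-depth response is characterized.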

  14. Processed Thematic Mapper Satellite Imagery for Selected Areas within the U.S.-Mexico Borderlands

    USGS Publications Warehouse

    Dohrenwend, John C.; Gray, Floyd; Miller, Robert J.

    2000-01-01

    The study is summarized in the Adobe Acrobat Portable Document Format (PDF) file OF00-309.PDF. This publication also contains satellite full-scene images of selected areas along the U.S.-Mexico border. These images are presented as high-resolution JPEG images (IMAGES). The folder LOCATIONS contains TIFF images showing the exact positions of easily identified reference locations for each of the Landsat TM scenes located at least partly within the U.S. A reference location table (BDRLOCS.DOC in MS Word format) lists the latitude and longitude of each reference location with a nominal precision of 0.001 minute of arc.

  15. Medical image informatics infrastructure design and applications.

    PubMed

    Huang, H K; Wong, S T; Pietka, E

    1997-01-01

    Picture archiving and communication systems (PACS) is a system integration of multimodality images and health information systems designed for improving the operation of a radiology department. As it evolves, PACS becomes a hospital image document management system with a voluminous image and related data file repository. A medical image informatics infrastructure can be designed to take advantage of existing data, providing PACS with add-on value for health care service, research, and education. A medical image informatics infrastructure (MIII) consists of the following components: medical images and associated data (including PACS database), image processing, data/knowledge base management, visualization, graphic user interface, communication networking, and application oriented software. This paper describes these components and their logical connection, and illustrates some applications based on the concept of the MIII.

  16. Targeting youth and concerned smokers: evidence from Canadian tobacco industry documents.

    PubMed

    Pollay, R W

    2000-06-01

To provide an understanding of the targeting strategies of cigarette marketing, and of the functions and importance of the advertising images chosen. Analysis of historical corporate documents produced by affiliates of British American Tobacco (BAT) and RJ Reynolds (RJR) in Canadian litigation challenging tobacco advertising regulation, the Tobacco Products Control Act (1987): Imperial Tobacco Limitee & RJR-Macdonald Inc c. Le Procurer General du Canada. Careful and extensive research has been employed in all stages of the process of conceiving, developing, refining, and deploying cigarette advertising. Two segments commanding much management attention are "starters" and "concerned smokers". To recruit starters, brand images communicate independence, freedom, and (sometimes) peer acceptance. These advertising images portray smokers as attractive and autonomous, accepted and admired, athletic and at home in nature. For "lighter" brands reassuring health-concerned smokers, lest they quit, advertisements provide imagery conveying a sense of well-being, harmony with nature, and a consumer self-image as intelligent. The industry's steadfast assertions that its advertising influences only brand loyalty and switching, in both intent and effect, are directly contradicted by its internal documents and proven false. So too is the justification of cigarette advertising as a medium creating better informed consumers, since visual imagery, not information, is the means of advertising influence.

  17. Uses of software in digital image analysis: a forensic report

    NASA Astrophysics Data System (ADS)

    Sharma, Mukesh; Jha, Shailendra

    2010-02-01

Forensic image analysis requires expertise to interpret the content of an image, or the image itself, in legal matters. Major sub-disciplines of forensic image analysis with law enforcement applications include photogrammetry, photographic comparison, content analysis, and image authentication. Its applications in forensic science range from documenting crime scenes to enhancing faint or indistinct patterns such as partial fingerprints. The process of forensic image analysis can involve several different tasks, regardless of the type of image analysis performed. Through this paper the authors explain these tasks, which fall into three categories: image compression, image enhancement and restoration, and measurement extraction. The tasks are illustrated with examples such as signature comparison, counterfeit currency comparison, and footwear sole impressions, using the software Canvas and Corel Draw.

  18. Trainable multiscript orientation detection

    NASA Astrophysics Data System (ADS)

    Van Beusekom, Joost; Rangoni, Yves; Breuel, Thomas M.

    2010-01-01

Detecting the correct orientation of document images is an important step in large-scale digitization processes, as most subsequent document analysis and optical character recognition methods assume an upright document page. Many methods have been proposed to solve the problem, most of which are based on computing the ascender-to-descender ratio. Unfortunately, this cannot be used for scripts that have neither ascenders nor descenders. We therefore present a trainable method that uses character similarity to determine the correct orientation. A connected-component-based distance measure is computed to compare the characters of the document image to characters whose orientation is known; the orientation for which the distance is lowest is taken as the correct one. Training is easily achieved by replacing the reference characters with characters of the script to be analyzed. Evaluation of the proposed approach showed accuracies above 99% for Latin and Japanese script on the public UW-III and UW-II datasets. An accuracy of 98.9% was obtained for Fraktur on a non-public dataset. Comparison of the proposed method to two methods using ascender/descender-ratio-based orientation detection shows a significant improvement.
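The character-similarity idea can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: connected components stand in as tiny binary arrays, the distance is a plain pixel-mismatch count after zero-padding, and only the four axis-aligned rotations are scored.

```python
import numpy as np

def component_distance(comp, references):
    """Pixel-mismatch distance from one connected component (binary array)
    to its nearest reference glyph, after zero-padding to a common shape."""
    best = float("inf")
    for ref in references:
        h = max(comp.shape[0], ref.shape[0])
        w = max(comp.shape[1], ref.shape[1])
        a = np.zeros((h, w), dtype=int); a[:comp.shape[0], :comp.shape[1]] = comp
        b = np.zeros((h, w), dtype=int); b[:ref.shape[0], :ref.shape[1]] = ref
        best = min(best, int(np.sum(a != b)))
    return best

def detect_orientation(components, references):
    """Rotate the page's components by each candidate angle (degrees,
    counter-clockwise); the angle whose rotated components best match the
    upright reference glyphs is the correction that uprights the page."""
    def total_distance(angle):
        return sum(component_distance(np.rot90(c, k=angle // 90), references)
                   for c in components)
    return min((0, 90, 180, 270), key=total_distance)
```

In a real system the components would come from connected-component analysis of the binarized page, and retraining for another script would amount to swapping the reference set, as the abstract describes.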

  19. SOI-CMOS Process for Monolithic, Radiation-Tolerant, Science-Grade Imagers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, George; Lee, Adam

In Phase I, Voxtel worked with Jazz and Sandia to document and simulate the processes necessary to implement a DH-BSI SOI CMOS imaging process. The development is based upon mature SOI CMOS processes at both fabs, with the addition of only a few custom processing steps for integration and electrical interconnection of the fully-depleted photodetectors. In Phase I, Voxtel also characterized the Sandia process, including the CMOS7 design rules, and developed the outline of a process option that includes a “BOX etch”, which will permit a “detector in handle” SOI CMOS process to be developed. The process flows were developed in cooperation with both Jazz and Sandia process engineers, along with detailed TCAD modeling and testing of the photodiode array architectures. In addition, Voxtel tested the radiation performance of Jazz’s CA18HJ process, using standard and circular-enclosed transistors.

  20. Electrophoresis gel image processing and analysis using the KODAK 1D software.

    PubMed

    Pizzonia, J

    2001-06-01

    The present article reports on the performance of the KODAK 1D Image Analysis Software for the acquisition of information from electrophoresis experiments and highlights the utility of several mathematical functions for subsequent image processing, analysis, and presentation. Digital images of Coomassie-stained polyacrylamide protein gels containing molecular weight standards and ethidium bromide stained agarose gels containing DNA mass standards are acquired using the KODAK Electrophoresis Documentation and Analysis System 290 (EDAS 290). The KODAK 1D software is used to optimize lane and band identification using features such as isomolecular weight lines. Mathematical functions for mass standard representation are presented, and two methods for estimation of unknown band mass are compared. Given the progressive transition of electrophoresis data acquisition and daily reporting in peer-reviewed journals to digital formats ranging from 8-bit systems such as EDAS 290 to more expensive 16-bit systems, the utility of algorithms such as Gaussian modeling, which can correct geometric aberrations such as clipping due to signal saturation common at lower bit depth levels, is discussed. Finally, image-processing tools that can facilitate image preparation for presentation are demonstrated.
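The Gaussian-modeling correction mentioned above can be illustrated with a small sketch (not the KODAK 1D algorithm itself): when a band's intensity profile is clipped by saturation, the underlying Gaussian can still be recovered by fitting a parabola to the logarithm of the unsaturated samples, which yields the amplitude, center, and width in closed form (Caruana's method).

```python
import math

def fit_gaussian(xs, ys, saturation):
    """Recover A, mu, sigma of y = A*exp(-(x-mu)^2 / (2*sigma^2)) from a
    clipped profile: fit a parabola to log(y) using only unsaturated samples,
    then read the Gaussian parameters off the parabola coefficients."""
    pts = [(x, math.log(y)) for x, y in zip(xs, ys) if 0 < y < saturation]
    # Normal equations for least-squares a + b*x + c*x^2 ~ log(y).
    sx = [sum(x ** k for x, _ in pts) for k in range(5)]
    sy = [sum(l * x ** k for x, l in pts) for k in range(3)]
    M = [[sx[0], sx[1], sx[2], sy[0]],
         [sx[1], sx[2], sx[3], sy[1]],
         [sx[2], sx[3], sx[4], sy[2]]]
    for i in range(3):                       # Gauss-Jordan on the 3x4 system
        M[i] = [v / M[i][i] for v in M[i]]
        for j in range(3):
            if j != i:
                M[j] = [vj - M[j][i] * vi for vj, vi in zip(M[j], M[i])]
    a, b, c = M[0][3], M[1][3], M[2][3]      # c < 0 for a peak-shaped fit
    sigma = math.sqrt(-1.0 / (2.0 * c))
    mu = -b / (2.0 * c)
    A = math.exp(a - b * b / (4.0 * c))
    return A, mu, sigma
```

Because the parabola is fit only to samples below the saturation level, the recovered amplitude can exceed the detector's clipping threshold, which is exactly the correction described for low-bit-depth systems.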

  1. Image-Based Collection and Measurements for Construction Pay Items

    DOT National Transportation Integrated Search

    2017-07-01

    Prior to each payment to contractors and suppliers, measurements are made to document the actual amount of pay items placed at the site. This manual process has substantial risk for personnel, and could be made more efficient and less prone to human ...

  2. Survey of Knowledge Representation and Reasoning Systems

    DTIC Science & Technology

    2009-07-01

processing large volumes of unstructured information such as natural language documents, email, audio, images and video [Ferrucci et al. 2006]. Using this...information we hope to obtain improved estimation and prediction, data-mining, social network analysis, and semantic search and visualisation. Knowledge

  3. From Metric Image Archives to Point Cloud Reconstruction: Case Study of the Great Mosque of Aleppo in Syria

    NASA Astrophysics Data System (ADS)

    Grussenmeyer, P.; Khalil, O. Al

    2017-08-01

The paper presents photogrammetric archives from Aleppo (Syria), collected between 1999 and 2002 by the Committee for the maintenance and restoration of the Great Mosque in partnership with the Engineering Unit of the University of Aleppo. During that period, terrestrial photogrammetric data and geodetic surveys of the Great Omayyad mosque were recorded for documentation purposes and geotechnical studies. During the recent war in Syria, the Mosque has unfortunately been seriously damaged and its minaret has been completely destroyed. The paper presents a summary of the documentation available from the past projects, as well as solutions for 3D reconstruction based on the processing of the photogrammetric archives with the latest 3D image-based techniques.

  4. Symmetric nonnegative matrix factorization: algorithms and applications to probabilistic clustering.

    PubMed

    He, Zhaoshui; Xie, Shengli; Zdunek, Rafal; Zhou, Guoxu; Cichocki, Andrzej

    2011-12-01

    Nonnegative matrix factorization (NMF) is an unsupervised learning method useful in various applications including image processing and semantic analysis of documents. This paper focuses on symmetric NMF (SNMF), which is a special case of NMF decomposition. Three parallel multiplicative update algorithms using level 3 basic linear algebra subprograms directly are developed for this problem. First, by minimizing the Euclidean distance, a multiplicative update algorithm is proposed, and its convergence under mild conditions is proved. Based on it, we further propose another two fast parallel methods: α-SNMF and β -SNMF algorithms. All of them are easy to implement. These algorithms are applied to probabilistic clustering. We demonstrate their effectiveness for facial image clustering, document categorization, and pattern clustering in gene expression.
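A minimal sketch of SNMF with a damped multiplicative update is shown below; it illustrates the general scheme of such algorithms, but the exact α-SNMF and β-SNMF damping schedules of the paper may differ.

```python
import numpy as np

def snmf(A, r, iters=500, eps=1e-9, beta=0.5):
    """Symmetric NMF: approximate a nonnegative symmetric A as H @ H.T
    with H >= 0, via a damped multiplicative update.  The multiplicative
    form preserves nonnegativity; the damping factor beta stabilizes the
    update for the symmetric case."""
    rng = np.random.default_rng(0)
    H = rng.random((A.shape[0], r))
    for _ in range(iters):
        num = A @ H                    # numerator of the update ratio
        den = H @ (H.T @ H) + eps      # denominator, kept strictly positive
        H *= (1.0 - beta) + beta * num / den
    return H
```

For probabilistic clustering as in the abstract, A would be a pairwise similarity matrix and each row of the (normalized) factor H would give a document's or image's soft cluster memberships.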

  5. Influence of Burke and Lessing on the Semiotic Theory of Document Design: Ideologies and Good Visual Images of Documents.

    ERIC Educational Resources Information Center

    Ding, Daniel D.

    2000-01-01

    Presents historical roots of page design principles, arguing that current theories and practices of document design have their roots in gender-related theories of images. Claims visual design should be evaluated regarding the rhetorical situation in which the design is used. Focuses on visual images of documents in professional communication,…

  6. IDIMS/GEOPAK: Users manual for a geophysical data display and analysis system

    NASA Technical Reports Server (NTRS)

    Libert, J. M.

    1982-01-01

The application of an existing image analysis system to the display and analysis of geophysical data is described, and the potential for expanding the capabilities of such a system toward more advanced computer analytic and modeling functions is investigated. The major features of IDIMS (Interactive Display and Image Manipulation System) and its applicability to image-type analysis of geophysical data are described. The development of a basic geophysical data processing system, permitting the image representation, coloring, interdisplay, and comparison of geophysical data sets using existing IDIMS functions and providing for the production of hard copies of processed images, is described. An instruction manual and documentation for the GEOPAK subsystem were produced. A training course for personnel in the use of IDIMS/GEOPAK was conducted. The effectiveness of the current IDIMS/GEOPAK system for geophysical data analysis was evaluated.

  7. Goal-oriented rectification of camera-based document images.

    PubMed

    Stamatopoulos, Nikolaos; Gatos, Basilis; Pratikakis, Ioannis; Perantonis, Stavros J

    2011-04-01

Document digitization with either flatbed scanners or camera-based systems results in document images which often suffer from warping and perspective distortions that deteriorate the performance of current OCR approaches. In this paper, we present a goal-oriented rectification methodology to compensate for undesirable document image distortions, aiming to improve the OCR result. Our approach relies upon a coarse-to-fine strategy. First, a coarse rectification is accomplished with the aid of a computationally low-cost transformation which addresses the projection of a curved surface to a 2-D rectangular area. The projection of the curved surface on the plane is guided only by the appearance of the textual content in the document image, using a transformation that does not depend on specific model primitives or camera setup parameters. Second, pose normalization is applied at the word level, aiming to restore all the local distortions of the document image. Experimental results on various document images with a variety of distortions demonstrate the robustness and effectiveness of the proposed rectification methodology, using a consistent evaluation methodology that takes into account OCR accuracy together with a newly introduced measure obtained through a semi-automatic procedure.

  8. [Postmortem imaging studies with data processing and 3D reconstruction: a new path of development of classic forensic medicine?].

    PubMed

    Woźniak, Krzysztof; Moskała, Artur; Urbanik, Andrzej; Kopacz, Paweł; Kłys, Małgorzata

    2009-01-01

    The techniques employed in "classic" forensic autopsy have been virtually unchanged for many years. One of the fundamental purposes of forensic documentation is to register as objectively as possible the changes found by forensic pathologists. The authors present the review of techniques of postmortem imaging studies, which aim not only at increased objectivity of observations, but also at extending the scope of the registered data. The paper is illustrated by images originating from research carried out by the authors.

  9. Raster Metafile and Raster Metafile Translator

    NASA Technical Reports Server (NTRS)

    Taylor, Nancy L.; Everton, Eric L.; Randall, Donald P.; Gates, Raymond L.; Skeens, Kristi M.

    1989-01-01

    The intent is to present an effort undertaken at NASA Langley Research Center to design a generic raster image format and to develop tools for processing images prepared in this format. Both the Raster Metafile (RM) format and the Raster Metafile Translator (RMT) are addressed. This document is intended to serve a varied audience including: users wishing to display and manipulate raster image data, programmers responsible for either interfacing the RM format with other raster formats or for developing new RMT device drivers, and programmers charged with installing the software on a host platform.

  10. Radionuclide imaging in myocardial sarcoidosis. Demonstration of myocardial uptake of /sup 99m/Tc pyrophosphate and gallium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forman, M.B.; Sandler, M.P.; Sacks, G.A.

    1983-03-01

    A patient had severe congestive cardiomyopathy secondary to myocardial sarcoidosis. The clinical diagnosis was confirmed by radionuclide ventriculography, /sup 201/Tl, /sup 67/Ga, and /sup 99m/Tc pyrophosphate (TcPYP) scintigraphy. Myocardial TcPYP uptake has not been reported previously in sarcoidosis. In this patient, TcPYP was as useful as gallium scanning and thallium imaging in documenting the myocardial process.

  11. Army Depot Maintenance: More Effective Use of Organic and Contractor Resources

    DTIC Science & Technology

    1990-06-04

[OCR residue from a table of Army depot abbreviations and locations, including Red River (TX), Lexington-Bluegrass (KY), and Pueblo (CO)] ...until it is received by the contractor. That problem could be eliminated by redesigning the process. Specifically, the FTA document that notifies the...should revise CCSS to allow the FTM document to generate an image to LCA, the FTA to create a due-in to the contractor, and CCSS to determine the depot

  12. Low-complexity camera digital signal imaging for video document projection system

    NASA Astrophysics Data System (ADS)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.

  13. Administrative IT

    ERIC Educational Resources Information Center

    Grayson, Katherine, Ed.

    2006-01-01

    When it comes to Administrative IT solutions and processes, best practices range across the spectrum. Enterprise resource planning (ERP), student information systems (SIS), and tech support are prominent and continuing areas of focus. But widespread change can also be accomplished via the implementation of campuswide document imaging and sharing,…

  14. NASA IMAGESEER: NASA IMAGEs for Science, Education, Experimentation and Research

    NASA Technical Reports Server (NTRS)

    Le Moigne, Jacqueline; Grubb, Thomas G.; Milner, Barbara C.

    2012-01-01

A number of web-accessible databases, including medical, military or other image data, offer universities and other users the ability to teach or research new Image Processing techniques on relevant and well-documented data. However, NASA images have traditionally been difficult for researchers to find, are often only available in hard-to-use formats, and do not always provide sufficient context and background for a non-NASA scientist user to understand their content. The new IMAGESEER (IMAGEs for Science, Education, Experimentation and Research) database seeks to address these issues. Through a graphically-rich web site for browsing and downloading all of the selected datasets, benchmarks, and tutorials, IMAGESEER provides a widely accessible database of NASA-centric, easy to read, image data for teaching or validating new Image Processing algorithms. As such, IMAGESEER fosters collaboration between NASA and research organizations while simultaneously encouraging development of new and enhanced Image Processing algorithms. The first prototype includes a representative sampling of NASA multispectral and hyperspectral images from several Earth Science instruments, along with a few small tutorials. Image processing techniques are currently represented with cloud detection, image registration, and map cover/classification. For each technique, corresponding data are selected from four different geographic regions, i.e., mountain, urban, coastal water, and agricultural areas. Satellite images have been collected from several instruments - Landsat-5 and -7 Thematic Mappers, Earth Observing-1 (EO-1) Advanced Land Imager (ALI) and Hyperion, and the Moderate Resolution Imaging Spectroradiometer (MODIS). After geo-registration, these images are available in simple common formats such as GeoTIFF and raw formats, along with associated benchmark data.

  15. Evaluation of image quality of digital photo documentation of female genital injuries following sexual assault.

    PubMed

    Ernst, E J; Speck, Patricia M; Fitzpatrick, Joyce J

    2011-12-01

With the patient's consent, physical injuries sustained in a sexual assault are evaluated and treated by the sexual assault nurse examiner (SANE) and documented on preprinted traumagrams and with photographs. Digital imaging is now available to the SANE for documentation of sexual assault injuries, but studies of the image quality of forensic digital imaging of female genital injuries after sexual assault were not found in the literature. The Photo Documentation Image Quality Scoring System (PDIQSS) was developed to rate the image quality of digital photo documentation of female genital injuries after sexual assault. Three expert observers performed evaluations on 30 separate images at two points in time. An image quality score, the sum of eight integral technical and anatomical attributes on the PDIQSS, was obtained for each image. Individual image quality ratings for each attribute were also determined. The results demonstrated a high level of image quality and agreement when measured in all dimensions. For the SANE in clinical practice, the results of this study indicate that a high degree of agreement exists between expert observers when using the PDIQSS to rate image quality of individual digital photographs of female genital injuries after sexual assault. © 2011 International Association of Forensic Nurses.

  16. Duplicate document detection in DocBrowse

    NASA Astrophysics Data System (ADS)

    Chalana, Vikram; Bruce, Andrew G.; Nguyen, Thien

    1998-04-01

Duplicate documents are frequently found in large databases of digital documents, such as those found in digital libraries or in the government declassification effort. Efficient duplicate document detection is important not only to allow querying for similar documents, but also to filter out redundant information in large document databases. We have designed three different algorithms to identify duplicate documents. The first algorithm is based on features extracted from the textual content of a document, the second algorithm is based on wavelet features extracted from the document image itself, and the third algorithm is a combination of the first two. These algorithms are integrated within the DocBrowse system for information retrieval from document images, which is currently under development at MathSoft. DocBrowse supports duplicate document detection by allowing (1) automatic filtering to hide duplicate documents, and (2) ad hoc querying for similar or duplicate documents. We have tested the duplicate document detection algorithms on 171 documents and found that the text-based method has an average 11-point precision of 97.7 percent while the image-based method has an average 11-point precision of 98.9 percent. In general, however, the text-based method performs better when the document contains enough high-quality machine-printed text, while the image-based method performs better when the document contains little or no high-quality machine-readable text.
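As a rough illustration of the text-based branch (the actual DocBrowse features are not specified in this abstract), near-duplicate candidates can be found by comparing character n-gram shingle sets of the OCR'd text with Jaccard similarity:

```python
def shingles(text, n=5):
    """Character n-gram shingle set of a document's (OCR'd) text,
    after normalizing case and whitespace."""
    t = " ".join(text.lower().split())
    return {t[i:i + n] for i in range(max(len(t) - n + 1, 1))}

def jaccard(a, b):
    """Set overlap in [0, 1]; 1.0 means identical shingle sets."""
    return len(a & b) / len(a | b) if a | b else 1.0

def find_duplicates(docs, threshold=0.8, n=5):
    """Return index pairs of documents whose shingle similarity
    meets or exceeds the threshold."""
    sets = [shingles(d, n) for d in docs]
    return [(i, j)
            for i in range(len(docs)) for j in range(i + 1, len(docs))
            if jaccard(sets[i], sets[j]) >= threshold]
```

The threshold trades precision against recall, mirroring the abstract's distinction between filtering exact duplicates and ad hoc querying for merely similar documents.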

  17. Geologic Measurements using Rover Images: Lessons from Pathfinder with Application to Mars 2001

    NASA Technical Reports Server (NTRS)

    Bridges, N. T.; Haldemann, A. F. C.; Herkenhoff, K. E.

    1999-01-01

    The Pathfinder Sojourner rover successfully acquired images that provided important and exciting information on the geology of Mars. This included the documentation of rock textures, barchan dunes, soil crusts, wind tails, and ventifacts. It is expected that the Marie Curie rover cameras will also successfully return important information on landing site geology. Critical to a proper analysis of these images will be a rigorous determination of rover location and orientation. Here, the methods that were used to compute rover position for Sojourner image analysis are reviewed. Based on this experience, specific recommendations are made that should improve this process on the '01 mission.

  18. Image analysis in modern ophthalmology: from acquisition to computer assisted diagnosis and telemedicine

    NASA Astrophysics Data System (ADS)

    Marrugo, Andrés G.; Millán, María S.; Cristóbal, Gabriel; Gabarda, Salvador; Sorel, Michal; Sroubek, Filip

    2012-06-01

    Medical digital imaging has become a key element of modern health care procedures. It provides visual documentation and a permanent record for the patients, and most important the ability to extract information about many diseases. Modern ophthalmology thrives and develops on the advances in digital imaging and computing power. In this work we present an overview of recent image processing techniques proposed by the authors in the area of digital eye fundus photography. Our applications range from retinal image quality assessment to image restoration via blind deconvolution and visualization of structural changes in time between patient visits. All proposed within a framework for improving and assisting the medical practice and the forthcoming scenario of the information chain in telemedicine.

  19. Close Range Uav Accurate Recording and Modeling of St-Pierre Neo-Romanesque Church in Strasbourg (france)

    NASA Astrophysics Data System (ADS)

    Murtiyoso, A.; Grussenmeyer, P.; Freville, T.

    2017-02-01

Close-range photogrammetry is an image-based technique which has often been used for the 3D documentation of heritage objects. Recently, advances in the field of image processing and UAVs (Unmanned Aerial Vehicles) have resulted in a renewed interest in this technique. However, commercially ready-to-use UAVs are often equipped with smaller sensors in order to minimize payload, and the quality of the documentation is still an issue. In this research, two commercial UAVs (the Sensefly Albris and the DJI Phantom 3 Professional) were set up to record the 19th century St-Pierre-le-Jeune church in Strasbourg, France. Several software solutions (commercial and open source) were used to compare both UAVs' images in terms of calibration, accuracy of external orientation, and dense matching. Results show some instability with regard to the calibration of the Phantom 3, while the Albris had issues regarding its aerotriangulation results. Despite these shortcomings, both UAVs succeeded in producing dense point clouds of up to a few centimeters in accuracy, which is largely sufficient for the purposes of a city 3D GIS (Geographical Information System). The acquisition of close-range images using UAVs also provides greater LoD flexibility in processing. These advantages over other methods such as TLS (Terrestrial Laser Scanning) or terrestrial close-range photogrammetry can be exploited so that these techniques complement each other.

  20. Enhanced optical security by using information carrier digital screening

    NASA Astrophysics Data System (ADS)

    Koltai, Ferenc; Adam, Bence

    2004-06-01

Jura has developed different security features based on Information Carrier Digital Screening. The essence of such features is that a non-visible secondary image is encoded in a visible primary image; the encoded image becomes visible only through a decoding device. One such development, JURA's Invisible Personal Information (IPI), is widely used in high-security documents, where personal data of the document holder are encoded in the screen of the document holder's photograph and can be decoded with an optical decoding device. In order to make document verification fully automated, enhance security, and eliminate human factors, a digital version of IPI, the D-IPI, was developed. A special 2D-barcode structure was designed which contains a sufficient quantity of encoded digital information and can be embedded into the photo. The other part of Digital-IPI is the reading software, which is able to retrieve the encoded information with high reliability. The reading software, developed around a specific 2D structure, provides the possibility of forensic analysis. Such analysis will discover all kinds of manipulation -- globally, if the photograph was simply replaced, and selectively, if only part of the photograph was manipulated. Digital IPI is a good example of how the benefits of digital technology can be exploited by optical security and how technology for optical security can be converted into digital technology. The D-IPI process is compatible with all current personalization printers and materials (polycarbonate, PVC, security papers, Teslin foils, etc.) and can provide any document with enhanced security and tamper-resistance.

  1. Radar image processing module development program, phase 3

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The feasibility of using charge coupled devices in an IPM for processing synthetic aperture radar signals onboard the NASA Convair 990 (CV990) aircraft was demonstrated. Radar data onboard the aircraft was recorded and processed using a CCD sampler and digital tape recorder. A description of equipment and testing was provided. The derivation of the digital presum filter was documented. Photographs of the sampler/tape recorder, real time display and circuit boards in the IPM were also included.

  2. Modern Processing Capabilities of Analog Data from Documentation of the Great Omayyad Mosque in Aleppo, Syria, Damaged in Civil War

    NASA Astrophysics Data System (ADS)

    Pavelka, K.; Šedina, J.; Raeva, P.; Hůlková, M.

    2017-08-01

In 1999, a big project for the documentation of the Great Omayyad mosque in Aleppo, Syria, was conducted under UNESCO. At the end of the last century, analogue cameras such as the UMK Zeiss and RolleiMetric systems were still being used. Digital cameras and automatic digital data processing were just starting to be on the rise, and laser scanning was not yet relevant. In this situation, photogrammetric measurement used stereo technology for complicated situations and objects, and single-image technology for creating photoplans. Hundreds of photogrammetric images were taken. Since data processing was carried out on digital stereo plotters or workstations, all analogue photos had to be converted to digital form using a photogrammetric scanner. The outputs were adequate for the end of the last century. Nowadays, 19 years later, the photogrammetric materials still exist, but the technology and processing are completely different; our original measurement is historical and by now quite obsolete. We therefore decided to explore the possibilities of reprocessing the historical materials. Why? In the last few years there has been civil war in Syria, and the above-mentioned monument has been severely damaged. The existing historical materials therefore provide a unique opportunity for a possible future reconstruction. This paper covers the completion of the existing materials, their evaluation, and the possibilities of new processing with today's technologies.

  3. Advanced Q-switched DPSS lasers for ID-card marking

    NASA Astrophysics Data System (ADS)

    Hertwig, Michael; Paster, Martin; Terbrueggen, Ralf

    2008-02-01

Increased homeland security concerns across the world have generated a strong demand for forgery-proof ID documents. Manufacturers currently employ a variety of high-technology techniques to produce documents that are difficult to copy. However, production costs and lead times are still a concern when considering any possible manufacturing technology. Laser marking has already emerged as an important tool in the manufacturer's arsenal, and is currently being utilized to produce a variety of documents, such as plastic ID cards, drivers' licenses, health insurance cards and passports. The marks utilized can range from simple barcodes and text to high-resolution, true grayscale images. The technical challenges posed by these marking tasks include delivering adequate mark legibility, minimizing substrate burning or charring, accurately reproducing grayscale data, and supporting the required process throughput. This article covers the advantages and basic requirements of laser marking of cards and reviews how laser output parameters affect marking quality, speed, and overall process economics.

  4. Complementary concept for an image archive and communication system in a cardiological department based on CD-medical, an online archive, and networking facilities

    NASA Astrophysics Data System (ADS)

    Oswald, Helmut; Mueller-Jones, Kay; Builtjes, Jan; Fleck, Eckart

    1998-07-01

The developments in information technologies -- computer hardware, networking, and storage media -- have led to expectations that these advances make it possible to replace 35 mm film completely by digital techniques in the catheter laboratory. Besides its role as an archival medium, cine film is used as the major image review and exchange medium in cardiology. None of today's technologies can completely fulfill the requirements for replacing cine film. One of the major drawbacks of cine film is single access in time and location. For the four catheter laboratories in our institution we have designed a complementary concept combining the CD-R, also called CD-medical, as a single-patient storage and exchange medium, with a digital archive for on-line access and image review of selected frames or short sequences on adequate medical workstations. The image data from various modalities, as well as all digital documents relating to a patient, are part of an electronic patient record. The access, processing, and display of documents are supported by an integrated medical application.

  5. Classification of document page images based on visual similarity of layout structures

    NASA Astrophysics Data System (ADS)

    Shin, Christian K.; Doermann, David S.

    1999-12-01

    Searching for documents by their type or genre is a natural way to enhance the effectiveness of document retrieval. The layout of a document contains a significant amount of information that can be used to classify a document's type in the absence of domain specific models. A document type or genre can be defined by the user based primarily on layout structure. Our classification approach is based on 'visual similarity' of the layout structure, building a supervised classifier given examples of the class. We use image features, such as the percentages of text and non-text (graphics, image, table, and ruling) content regions, column structures, variations in the point size of fonts, the density of content area, and various statistics on features of connected components, which can be derived from class samples without class knowledge. In order to obtain class labels for training samples, we conducted a user relevance test in which subjects ranked UW-I document images with respect to the 12 representative images. We implemented our classification scheme using OC1, a decision tree classifier, and report our findings.
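The feature set described (area percentages of text and non-text regions, content density) can be sketched in a few lines. This is not the authors' implementation; it assumes a page already segmented into a 2D label map and simply tallies area fractions, the kind of class-independent features a decision tree classifier such as OC1 would consume:

```python
def layout_features(region_map):
    """region_map: 2D list whose cells are 'text', 'graphics', 'table',
    or None (background).  Returns the fraction of page area covered by
    each region type plus the overall content density."""
    total = sum(len(row) for row in region_map)
    counts = {}
    for row in region_map:
        for label in row:
            counts[label] = counts.get(label, 0) + 1
    content = total - counts.get(None, 0)  # everything that is not background
    feats = {lab: counts.get(lab, 0) / total
             for lab in ('text', 'graphics', 'table')}
    feats['density'] = content / total
    return feats
```

A classifier trained on such vectors never needs to read the text, which is exactly what makes layout-based genre classification workable without domain models.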

  6. Processing the Viking lander camera data

    NASA Technical Reports Server (NTRS)

    Levinthal, E. C.; Tucker, R.; Green, W.; Jones, K. L.

    1977-01-01

    Over 1000 camera events were returned from the two Viking landers during the Primary Mission. A system was devised for processing camera data as they were received, in real time, from the Deep Space Network. This system provided a flexible choice of parameters for three computer-enhanced versions of the data for display or hard-copy generation. Software systems allowed all but 0.3% of the imagery scan lines received on earth to be placed correctly in the camera data record. A second-order processing system was developed which allowed extensive interactive image processing including computer-assisted photogrammetry, a variety of geometric and photometric transformations, mosaicking, and color balancing using six different filtered images of a common scene. These results have been completely cataloged and documented to produce an Experiment Data Record.
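The "computer-enhanced versions" mentioned would have included basic photometric transformations. As a hypothetical illustration (not the actual Viking pipeline), a linear contrast stretch maps the darkest and brightest observed values onto the full display range:

```python
def contrast_stretch(img, lo=0, hi=255):
    """Linearly rescale pixel values so the darkest pixel maps to `lo`
    and the brightest to `hi` -- a basic display-enhancement transform."""
    flat = [v for row in img for v in row]
    vmin, vmax = min(flat), max(flat)
    if vmax == vmin:
        # constant image: nothing to stretch
        return [[lo for _ in row] for row in img]
    scale = (hi - lo) / (vmax - vmin)
    return [[round(lo + (v - vmin) * scale) for v in row] for row in img]
```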

  7. Age-related morphological changes of the dermal matrix in human skin documented in vivo by multiphoton microscopy

    NASA Astrophysics Data System (ADS)

    Wang, Hequn; Shyr, Thomas; Fevola, Michael J.; Cula, Gabriela Oana; Stamatas, Georgios N.

    2018-03-01

    Two-photon fluorescence (TPF) and second harmonic generation (SHG) microscopy provide direct visualization of the skin dermal fibers in vivo. A typical method for analyzing TPF/SHG images involves averaging the image intensity and therefore disregarding the spatial distribution information. The goal of this study is to develop an algorithm to document age-related effects of the dermal matrix. TPF and SHG images were acquired from the upper inner arm, volar forearm, and cheek of female volunteers of two age groups: 20 to 30 and 60 to 80 years of age. The acquired images were analyzed for parameters relating to collagen and elastin fiber features, such as orientation and density. Both collagen and elastin fibers showed higher anisotropy in fiber orientation for the older group. The greatest difference in elastin fiber anisotropy between the two groups was found for the upper inner arm site. Elastin fiber density increased with age, whereas collagen fiber density decreased with age. The proposed analysis considers the spatial information inherent to the TPF and SHG images and provides additional insights into how the dermal fiber structure is affected by the aging process.
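The abstract does not detail how fiber-orientation anisotropy was computed. A standard measure for axial data (orientations defined modulo 180°) is the resultant length of the angle-doubled orientation vectors; the sketch below makes that assumption and is not taken from the paper:

```python
import math

def orientation_anisotropy(angles_deg):
    """Anisotropy of axial orientation data (angles in 0-180 deg):
    double the angles, average the unit vectors, take the resultant
    length.  0 = isotropic (uniformly spread), 1 = perfectly aligned."""
    n = len(angles_deg)
    c = sum(math.cos(math.radians(2 * a)) for a in angles_deg) / n
    s = sum(math.sin(math.radians(2 * a)) for a in angles_deg) / n
    return math.hypot(c, s)
```

Applied to fiber orientations extracted from TPF/SHG images, a higher value for the older group would correspond to the reported increase in fiber-orientation anisotropy with age.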

  8. Intranet-based quality improvement documentation at the Veterans Affairs Maryland Health Care System.

    PubMed

    Borkowski, A; Lee, D H; Sydnor, D L; Johnson, R J; Rabinovitch, A; Moore, G W

    2001-01-01

    The Pathology and Laboratory Medicine Service of the Veterans Affairs Maryland Health Care System is inspected biannually by the College of American Pathologists (CAP). As of the year 2000, all documentation in the Anatomic Pathology Section is available to all staff through the VA Intranet. Signed, supporting paper documents are on file in the office of the department chair. For the year 2000 CAP inspection, inspectors conducted their document review by use of these Web-based documents, in which each CAP question had a hyperlink to the corresponding section of the procedure manual. Thus inspectors were able to locate the documents relevant to each question quickly and efficiently. The procedure manuals consist of 87 procedures for surgical pathology, 52 procedures for cytopathology, and 25 procedures for autopsy pathology. Each CAP question requiring documentation had from one to three hyperlinks to the corresponding section of the procedure manual. Intranet documentation allows for easier sharing among decentralized institutions and for centralized updates of the laboratory documentation. These documents can be upgraded to allow for multimedia presentations, including text search for key words, hyperlinks to other documents, and images, audio, and video. Use of Web-based documents can improve the efficiency of the inspection process.

  9. Recording Approach of Heritage Sites Based on Merging Point Clouds from High Resolution Photogrammetry and Terrestrial Laser Scanning

    NASA Astrophysics Data System (ADS)

    Grussenmeyer, P.; Alby, E.; Landes, T.; Koehl, M.; Guillemin, S.; Hullo, J. F.; Assali, P.; Smigiel, E.

    2012-07-01

    Different approaches and tools are required in Cultural Heritage Documentation to deal with the complexity of monuments and sites. The documentation process has strongly changed in the last few years, always driven by technology. Accurate documentation is closely tied to advances in technology (imaging sensors, high speed scanning, automation in recording and processing data) for the purposes of conservation works, management, appraisal, assessment of the structural condition, archiving, publication and research (Patias et al., 2008). We want to focus in this paper on the recording aspects of cultural heritage documentation, especially the generation of geometric and photorealistic 3D models for accurate reconstruction and visualization purposes. The selected approaches are based on the combination of photogrammetric dense matching and Terrestrial Laser Scanning (TLS) techniques. Both techniques have pros and cons, and recent advances have changed the recording approach. The choice of the best workflow relies on the site configuration, the performances of the sensors, and criteria such as geometry, accuracy, resolution, georeferencing, texture, and of course processing time. TLS techniques (time of flight or phase shift systems) are widely used for recording large and complex objects and sites. Point cloud generation from images by dense stereo or multi-view matching can be used as an alternative or as a complementary method to TLS. Compared to TLS, the photogrammetric solution is a low cost one, as the acquisition system is limited to a high-performance digital camera and a few accessories only. Indeed, the stereo or multi-view matching process offers a cheap, flexible and accurate solution to get 3D point clouds. Moreover, the captured images might also be used for model texturing. Several software packages are available, whether web-based, open source or commercial.
The main advantage of this photogrammetric or computer vision based technology is to get at the same time a point cloud (the resolution depends on the size of the pixel on the object), and therefore an accurate meshed object with its texture. After the matching and processing steps, we can use the resulting data in much the same way as a TLS point cloud, but with radiometric information for textures in addition. The discussion in this paper reviews recording and important processing steps such as geo-referencing and data merging, the essential assessment of the results, and examples of deliverables from projects of the Photogrammetry and Geomatics Group (INSA Strasbourg, France).

  10. Goal-oriented evaluation of binarization algorithms for historical document images

    NASA Astrophysics Data System (ADS)

    Obafemi-Ajayi, Tayo; Agam, Gady

    2013-01-01

    Binarization is of significant importance in document analysis systems. It is an essential first step, prior to further stages such as Optical Character Recognition (OCR), document segmentation, or enhancement of readability of the document after some restoration stages. Hence, proper evaluation of binarization methods to verify their effectiveness is of great value to the document analysis community. In this work, we perform a detailed goal-oriented evaluation of image quality assessment of the 18 binarization methods that participated in the DIBCO 2011 competition using the 16 historical document test images used in the contest. We are interested in the image quality assessment of the outputs generated by the different binarization algorithms as well as the OCR performance, where possible. We compare our evaluation of the algorithms based on human perception of quality to the DIBCO evaluation metrics. The results obtained provide an insight into the effectiveness of these methods with respect to human perception of image quality as well as OCR performance.
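None of the 18 DIBCO entrants is spelled out in the abstract; for readers unfamiliar with the task, the classic global baseline against which such methods are usually measured is Otsu's threshold, which picks the gray level maximizing between-class variance. A minimal sketch (not any particular DIBCO method):

```python
def otsu_threshold(gray):
    """Global Otsu threshold on a grayscale image (lists of ints 0-255):
    choose t maximizing the between-class variance of the split."""
    hist = [0] * 256
    for row in gray:
        for v in row:
            hist[v] += 1
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w_b, sum_b = 0, -1.0, 0, 0.0
    for t in range(256):
        w_b += hist[t]                 # background pixel count
        if w_b == 0:
            continue
        w_f = total - w_b              # foreground pixel count
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b              # background mean
        m_f = (sum_all - sum_b) / w_f  # foreground mean
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def binarize(gray):
    t = otsu_threshold(gray)
    return [[255 if v > t else 0 for v in row] for row in gray]
```

Historical documents defeat such global thresholds precisely because of stains, bleed-through, and uneven illumination, which is why goal-oriented evaluation of the more adaptive methods matters.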

  11. ISLE (Image and Signal Processing LISP Environment) reference manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sherwood, R.J.; Searfus, R.M.

    1990-01-01

    ISLE is a rapid prototyping system for performing image and signal processing. It is designed to meet the needs of a person doing development of image and signal processing algorithms in a research environment. The image and signal processing modules in ISLE form a very capable package in themselves. They also provide a rich environment for quickly and easily integrating user-written software modules into the package. ISLE is well suited to applications in which there is a need to develop a processing algorithm in an interactive manner. It is straightforward to develop an algorithm, load it into ISLE, apply the algorithm to an image or signal, display the results, then modify the algorithm and repeat the develop-load-apply-display cycle. ISLE consists of a collection of image and signal processing modules integrated into a cohesive package through a standard command interpreter. The ISLE developers elected to concentrate their effort on developing image and signal processing software rather than developing a command interpreter. A COMMON LISP interpreter was selected for the command interpreter because it already has the features desired in a command interpreter, it supports dynamic loading of modules for customization purposes, it supports run-time parameter and argument type checking, it is very well documented, and it is a commercially supported product. This manual is intended to be a reference manual for the ISLE functions. The functions are grouped into a number of categories and briefly discussed in the Function Summary chapter. The full descriptions of the functions and all their arguments are given in the Function Descriptions chapter. 6 refs.

  12. Mexican American Identity - A Multi-Cultural Legacy.

    ERIC Educational Resources Information Center

    Stoddard, Ellwyn R.

    Investigating the background of Mexican American identity, the document determined that this identity is a dynamic image emerging from a continuous process of human development in which the genetic and cultural variations from European and indigenous peoples are combined within a complex historical situation. The combination includes: (1) the…

  13. 76 FR 45300 - Notice of Issuance of Materials License SUA-1597 and Record of Decision for Uranerz Energy...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-28

    ... considered but eliminated from detailed analysis include conventional uranium mining and milling, conventional mining and heap leach processing, alternative site location, alternate lixiviants, and alternate...'s Agencywide Document Access and Management System (ADAMS), which provides text and image files of...

  14. Optical Disc Technology for Information Management.

    ERIC Educational Resources Information Center

    Brumm, Eugenia K.

    1991-01-01

    This summary of the literature on document image processing from 1988-90 focuses on WORM (write once read many) technology and on rewritable (i.e., erasable) optical discs, and excludes CD-ROM. Highlights include vendors and products, standards, comparisons of storage media, software, legal issues, records management, indexing, and computer…

  15. Dataflow Integration and Simulation Techniques for DSP System Design Tools

    DTIC Science & Technology

    2007-01-01

    Lebak, M. Richards, and D. Campbell, “VSIPL: An object-based open standard API for vector, signal, and image processing,” in Proceedings of the...Inc., document Version 0.98a. [56] P. Marwedel and G. Goossens, Eds., Code Generation for Embedded Processors. Kluwer Academic Publishers, 1995. [57

  16. Scientific Visualization: A Synthesis of Historical Data.

    ERIC Educational Resources Information Center

    Polland, Mark

    Visualization is the process by which one is able to create and sustain mental images for observation, analysis, and experimentation. This study consists of a compilation of evidence from historical examples that were collected in order to document the importance and the uses of visualization within the realm of scientific investigation.…

  17. IHE cross-enterprise document sharing for imaging: design challenges

    NASA Astrophysics Data System (ADS)

    Noumeir, Rita

    2006-03-01

    Integrating the Healthcare Enterprise (IHE) has recently published a new integration profile for sharing documents between multiple enterprises. The Cross-Enterprise Document Sharing Integration Profile (XDS) lays the basic framework for deploying regional and national Electronic Health Records (EHR). This profile proposes an architecture based on a central Registry that holds metadata describing published Documents residing in one or multiple Document Repositories. As medical images constitute important information in the patient health record, it is logical to extend the XDS Integration Profile to include images. However, including images in the EHR presents many challenges. The complete image set is very large; it is useful for radiologists and other specialists such as surgeons and orthopedists. The imaging report, on the other hand, is widely needed and its broad accessibility is vital for achieving optimal patient care. Moreover, a subset of relevant images may also be of wide interest along with the report. Therefore, IHE recently published a new integration profile for sharing images and imaging reports between multiple enterprises. This new profile, the Cross-Enterprise Document Sharing for Imaging (XDS-I) profile, is based on the XDS architecture. The XDS-I integration solution that is published as part of the IHE Technical Framework is the result of an extensive investigation of several design solutions. This paper presents and discusses the design challenges and the rationales behind the design decisions of the IHE XDS-I Integration Profile, for a better understanding and appreciation of the final published solution.
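The registry/repository split described above, a central Registry of metadata holding pointers into one or more Document Repositories, can be caricatured in a few lines. Class and field names here are illustrative only, not the actual XDS metadata model:

```python
class Repository:
    """Stores the documents themselves, keyed by a document id."""
    def __init__(self, repo_id):
        self.repo_id = repo_id
        self._docs = {}

    def store(self, doc_id, content):
        self._docs[doc_id] = content
        return (self.repo_id, doc_id)   # pointer handed to the registry

    def retrieve(self, doc_id):
        return self._docs[doc_id]


class Registry:
    """Holds only metadata plus a pointer (repository id, document id);
    consumers query metadata here, then fetch from the right repository."""
    def __init__(self):
        self._entries = []

    def register(self, metadata, pointer):
        self._entries.append((metadata, pointer))

    def query(self, **criteria):
        return [(m, p) for m, p in self._entries
                if all(m.get(k) == v for k, v in criteria.items())]
```

The design tension the paper discusses, registering the widely needed report and a small relevant-image subset while leaving the bulky full image set in the imaging department, maps onto which documents get registered versus merely stored.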

  18. Rectification of curved document images based on single view three-dimensional reconstruction.

    PubMed

    Kang, Lai; Wei, Yingmei; Jiang, Jie; Bai, Liang; Lao, Songyang

    2016-10-01

    Since distortions in camera-captured document images significantly affect the accuracy of optical character recognition (OCR), distortion removal plays a critical role in document digitization systems that use a camera for image capture. This paper proposes a novel framework that performs three-dimensional (3D) reconstruction and rectification of camera-captured document images. While most existing methods rely on additional calibrated hardware or multiple images to recover the 3D shape of a document page, or make a simple but not always valid assumption about the corresponding 3D shape, our framework is more flexible and practical since it only requires a single input image and is able to handle a general locally smooth document surface. The main contributions of this paper include a new iterative refinement scheme for baseline fitting from connected components of text lines, an efficient discrete vertical text direction estimation algorithm based on convex hull projection profile analysis, and a 2D distortion grid construction method based on text direction function estimation using 3D regularization. In order to examine the performance of our proposed method, both qualitative and quantitative evaluation and comparison with several recent methods are conducted in our experiments. The experimental results demonstrate that the proposed method outperforms relevant approaches for camera-captured document image rectification, in terms of improvements in both visual distortion removal and OCR accuracy.
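The iterative baseline-refinement idea, fit a baseline through text-component centroids, discard outliers, refit, can be sketched for the straight-line case. The paper's method fits curved baselines; this simplification (with an arbitrary tolerance) only illustrates the iterate-and-reject loop:

```python
def fit_baseline(points):
    """Least-squares line fit y = a*x + b through component centroids."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    denom = n * sxx - sx * sx
    a = (n * sxy - sx * sy) / denom
    b = (sy - a * sx) / n
    return a, b

def refine_baseline(points, tol=2.0, iters=5):
    """Iteratively drop centroids far from the fit and refit --
    a simplified version of the iterative refinement idea."""
    pts = list(points)
    for _ in range(iters):
        a, b = fit_baseline(pts)
        kept = [(x, y) for x, y in pts if abs(y - (a * x + b)) <= tol]
        if len(kept) == len(pts) or len(kept) < 2:
            break
        pts = kept
    return fit_baseline(pts)
```

Outliers here stand in for mis-segmented components (ascenders, descenders, noise blobs) whose centroids would otherwise drag the baseline off the true text line.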

  19. Positron Emission Tomography (PET)

    DOE R&D Accomplishments Database

    Welch, M. J.

    1990-01-01

    Positron emission tomography (PET) assesses biochemical processes in the living subject, producing images of function rather than form. Using PET, physicians are able to obtain not the anatomical information provided by other medical imaging techniques, but pictures of physiological activity. In metaphoric terms, traditional imaging methods supply a map of the body's roadways, its anatomy; PET shows the traffic along those paths, its biochemistry. This document discusses the principles of PET, the radiopharmaceuticals in PET, PET research, clinical applications of PET, the cost of PET, training of individuals for PET, the role of the United States Department of Energy in PET, and the future of PET.

  20. Bistatic image processing for a 32 x 19 inch model aircraft using scattered fields obtained in the OSU-ESL compact range

    NASA Technical Reports Server (NTRS)

    Lee, T-H.; Burnside, W. D.

    1992-01-01

    Inverse Synthetic Aperture Radar (ISAR) images for a 32 in long and 19 in wide model aircraft are documented. Both backscattered and bistatic scattered fields of this model aircraft were measured in the OSU-ESL compact range to obtain these images. The scattered fields of the target were measured for frequencies from 2 to 18 GHz with a 10 MHz increment and for full 360 deg azimuth rotation angles with a 0.2 deg step. For the bistatic scattering measurement, the compact range was used as the transmitting antenna, while a broadband AEL double-ridge horn was used as the receiving antenna. Bistatic angles of 90 deg and 135 deg were measured. Due to the size of the chamber and target, the receiving antenna was in the near field of the target; nevertheless, the image processing algorithm was valid for this case.
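Stepped-frequency measurements like those described (2 to 18 GHz in 10 MHz increments) are the standard input to ISAR processing, where a down-range profile is obtained by an inverse DFT across frequency; with N frequency samples spaced Δf apart, the range-bin spacing is c/(2·N·Δf). A toy sketch of just that step, not the documented near-field algorithm:

```python
import cmath

def range_profile(samples):
    """Inverse DFT of stepped-frequency complex field samples; the
    magnitude peaks at the bin matching the scatterer's down-range
    position (modulo the unambiguous range window)."""
    n = len(samples)
    return [abs(sum(s * cmath.exp(2j * cmath.pi * k * m / n)
                    for k, s in enumerate(samples)) / n)
            for m in range(n)]
```

A point scatterer whose two-way phase advances by 2π·k·m0/n across the frequency steps produces a profile with a single peak at bin m0, which is the basis of the simulated check below.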

  1. MREG V1.1 : a multi-scale image registration algorithm for SAR applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eichel, Paul H.

    2013-08-01

    MREG V1.1 is the sixth generation SAR image registration algorithm developed by the Signal Processing & Technology Department for Synthetic Aperture Radar applications. Like its predecessor algorithm REGI, it employs a powerful iterative multi-scale paradigm to achieve the competing goals of sub-pixel registration accuracy and the ability to handle large initial offsets. Since it is not model based, it allows for high fidelity tracking of spatially varying terrain-induced misregistration. Since it does not rely on image domain phase, it is equally adept at coherent and noncoherent image registration. This document provides a brief history of the registration processors developed by Dept. 5962 leading up to MREG V1.1, a full description of the signal processing steps involved in the algorithm, and a user's manual with application specific recommendations for CCD, TwoColor MultiView, and SAR stereoscopy.
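The iterative multi-scale paradigm can be illustrated in one dimension: search widely for the offset on heavily decimated signals, then refine the estimate over a narrow window at each finer scale. This sketch handles only integer shifts of 1-D signals, nothing like the sub-pixel 2-D processor the document describes:

```python
def best_shift(a, b, candidates):
    """Integer shift s maximizing the correlation sum(a[i] * b[i+s])."""
    def score(s):
        return sum(a[i] * b[i + s] for i in range(len(a))
                   if 0 <= i + s < len(b))
    return max(candidates, key=score)

def multiscale_shift(a, b, levels=3, coarse_range=4):
    """Coarse-to-fine offset estimation: a wide search on decimated
    copies, then a narrow +/-2 refinement at each finer resolution."""
    est = None
    for lvl in range(levels - 1, -1, -1):
        step = 2 ** lvl
        ca, cb = a[::step], b[::step]
        if est is None:
            cands = range(-coarse_range, coarse_range + 1)   # coarsest: wide search
        else:
            cands = range(est * 2 - 2, est * 2 + 3)          # refine doubled estimate
        est = best_shift(ca, cb, cands)
    return est
```

The payoff is the same as in the full algorithm: the coarse level absorbs a large initial offset cheaply, while each finer level only ever searches a handful of candidates.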

  2. Analyses of requirements for computer control and data processing experiment subsystems: Image data processing system (IDAPS) software description (7094 version), volume 2

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A description of each of the software modules of the Image Data Processing System (IDAPS) is presented. The changes in the software modules are the result of additions to the application software of the system and an upgrade of the IBM 7094 Mod(1) computer to a 1301 disk storage configuration. Necessary information about IDAPS software is supplied to the computer programmer who desires to make changes in the software system or to use portions of the software outside of the IDAPS system. Each software module is documented with: module name, purpose, usage, common block(s) description, method (algorithm of subroutine), flow diagram (if needed), subroutines called, and storage requirements.

  3. Quality assurance and quality control in mammography: a review of available guidance worldwide.

    PubMed

    Reis, Cláudia; Pascoal, Ana; Sakellaris, Taxiarchis; Koutalonis, Manthos

    2013-10-01

    This article reviews available guidance for quality assurance (QA) in mammography and discusses its contribution to harmonising practices worldwide. A literature search was performed on different sources to identify guidance documents for QA in mammography available worldwide from international bodies, healthcare providers, and professional/scientific associations. The guidance documents identified were reviewed, and a selection was compared for type of guidance (clinical/technical), technology and proposed QA methodologies, focusing on dose and image quality (IQ) performance assessment. Fourteen protocols (targeted at conventional and digital mammography) were reviewed. All included recommendations for testing acquisition, processing and display systems associated with mammographic equipment. All guidance reviewed highlighted the importance of dose assessment and testing the Automatic Exposure Control (AEC) system. Recommended tests for assessment of IQ showed variations in the proposed methodologies. Recommended testing focused on assessment of low-contrast detection, spatial resolution and noise. QC of image display is recommended following the American Association of Physicists in Medicine guidelines. The existing QA guidance for mammography is derived from key documents (American College of Radiology and European Union guidelines) and proposes similar tests despite variations in detail and methodologies. Studies reporting QA data should provide detail on experimental technique to allow robust data comparison. Countries aiming to implement a mammography QA program may select/prioritise the tests depending on available technology and resources. • An effective QA program should be practical to implement in a clinical setting. • QA should address the various stages of the imaging chain: acquisition, processing and display. • AEC system QC testing is simple to implement and provides information on equipment performance.

  4. Millimeter-wave Imaging Radiometer (MIR) data processing and development of water vapor retrieval algorithms

    NASA Technical Reports Server (NTRS)

    Chang, L. Aron

    1995-01-01

    This document describes the progress of the Millimeter-wave Imaging Radiometer (MIR) data processing task and the development of water vapor retrieval algorithms during the second six-month performance period. Aircraft MIR data from two 1995 field experiments were collected and processed with revised data processing software. Two revised versions of the water vapor retrieval algorithm were developed: one for executing the retrieval on a supercomputer platform, and one using pressure as the vertical coordinate. Two implementations were developed for incorporating products from other sensors into the water vapor retrieval system: one using the Special Sensor Microwave Imager (SSM/I), the other the High-resolution Interferometer Sounder (HIS). Water vapor retrievals were performed for both airborne MIR data and spaceborne SSM/T-2 data from the TOGA/COARE, CAMEX-1, and CAMEX-2 field experiments. The climatology of water vapor during TOGA/COARE was examined by SSM/T-2 soundings and conventional rawinsondes.

  5. A graphic user interface for efficient 3D photo-reconstruction based on free software

    NASA Astrophysics Data System (ADS)

    Castillo, Carlos; James, Michael; Gómez, Jose A.

    2015-04-01

    Recently, different studies have stressed the applicability of 3D photo-reconstruction based on Structure from Motion algorithms in a wide range of geoscience applications. For the purpose of image photo-reconstruction, a number of commercial and freely available software packages have been developed (e.g. Agisoft Photoscan, VisualSFM). The workflow involves typically different stages such as image matching, sparse and dense photo-reconstruction, point cloud filtering and georeferencing. For approaches using open and free software, each of these stages usually require different applications. In this communication, we present an easy-to-use graphic user interface (GUI) developed in Matlab® code as a tool for efficient 3D photo-reconstruction making use of powerful existing software: VisualSFM (Wu, 2015) for photo-reconstruction and CloudCompare (Girardeau-Montaut, 2015) for point cloud processing. The GUI performs as a manager of configurations and algorithms, taking advantage of the command line modes of existing software, which allows an intuitive and automated processing workflow for the geoscience user. The GUI includes several additional features: a) a routine for significantly reducing the duration of the image matching operation, normally the most time consuming stage; b) graphical outputs for understanding the overall performance of the algorithm (e.g. camera connectivity, point cloud density); c) a number of useful options typically performed before and after the photo-reconstruction stage (e.g. removal of blurry images, image renaming, vegetation filtering); d) a manager of batch processing for the automated reconstruction of different image datasets. In this study we explore the advantages of this new tool by testing its performance using imagery collected in several soil erosion applications. References Girardeau-Montaut, D. 2015. CloudCompare documentation accessed at http://cloudcompare.org/ Wu, C. 2015. 
VisualSFM documentation accessed at http://ccwu.me/vsfm/doc.html#.
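Among the GUI's extra features is "removal of blurry images". The abstract does not say how this is done; a common heuristic (assumed here, not taken from the paper) is to score each image by the variance of its discrete Laplacian and discard low scorers before matching:

```python
def laplacian_variance(img):
    """Variance of the 4-neighbor discrete Laplacian over interior
    pixels; low values indicate few sharp edges, i.e. a blurry image."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            vals.append(lap)
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)
```

Dropping such images before photo-reconstruction is cheap insurance: blurry frames contribute few reliable feature matches yet still cost time in the matching stage.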

  6. Einstein's creative thinking and the general theory of relativity: a documented report.

    PubMed

    Rothenberg, A

    1979-01-01

    A document written by Albert Einstein has recently come to light in which the eminent scientist described the actual sequence of his thoughts leading to the development of the general theory of relativity. The key creative thought was an instance of a type of creative cognition the author has previously designated "Janusian thinking." Janusian thinking consists of actively conceiving two or more opposite or antithetical concepts, ideas, or images simultaneously. This form of high-level secondary process cognition has been found to operate widely in art, science, and other fields.

  7. Contrast image formation based on thermodynamic approach and surface laser oxidation process for optoelectronic read-out system

    NASA Astrophysics Data System (ADS)

    Scherbak, Aleksandr; Yulmetova, Olga

    2018-05-01

    A pulsed fiber laser with a wavelength of 1.06 μm was used to treat titanium nitride film deposited on beryllium substrates in air, with intensities below the ablation threshold, to induce oxide formation. Laser oxidation results were predicted by the chemical thermodynamic method and confirmed by experimental techniques (X-ray diffraction). The developed technology of contrast image formation is intended for use in an optoelectronic read-out system.

  8. Improving Image Drizzling in the HST Archive: Advanced Camera for Surveys

    NASA Astrophysics Data System (ADS)

    Hoffmann, Samantha L.; Avila, Roberto J.

    2017-06-01

    The Mikulski Archive for Space Telescopes (MAST) pipeline performs geometric distortion corrections, associated image combinations, and cosmic ray rejections with AstroDrizzle on Hubble Space Telescope (HST) data. The MDRIZTAB reference table contains a list of relevant parameters that controls this program. This document details our photometric analysis of Advanced Camera for Surveys Wide Field Channel (ACS/WFC) data processed by AstroDrizzle. Based on this analysis, we update the MDRIZTAB table to improve the quality of the drizzled products delivered by MAST.

  9. Human Terrain: A Tactical Issue or a Strategic C4I Problem?

    DTIC Science & Technology

    2008-05-20

    C4I" 20-21 May 2008, George Mason University, Fairfax, Virginia Campus. The original document contains color images. ...can modify and control it – Terrain is many things: topography; geology and soil type; its natural coverage (forests); its roadways, rail lines... image of "gun toting" field researchers who are doing military stuff and not research and in the process are poisoning the environment for real field

  10. Knowledge and Valorization of Historical Sites Through 3d Documentation and Modeling

    NASA Astrophysics Data System (ADS)

    Farella, E.; Menna, F.; Nocerino, E.; Morabito, D.; Remondino, F.; Campi, M.

    2016-06-01

    The paper presents the first results of an interdisciplinary project related to the 3D documentation, dissemination, valorization and digital access of archeological sites. Besides the mere 3D documentation aim, the project has two goals: (i) to easily explore and share via web references and results of the interdisciplinary work, including the interpretative process and the final reconstruction of the remains; (ii) to promote and valorize archaeological areas using reality-based 3D data and Virtual Reality devices. This method has been verified on the ruins of the archeological site of Pausilypon, a maritime villa of the Roman period (Naples, Italy). Using Unity3D, the virtual tour of the heritage site was integrated and enriched with the surveyed 3D data, text documents, CAAD reconstruction hypotheses, drawings, photos, etc. In this way, starting from the actual appearance of the ruins (panoramic images), passing through the 3D digital surveying models and several other historical information sources, the user is able to access virtual contents and reconstructed scenarios, all in a single virtual, interactive and immersive environment. These contents and scenarios allow users to derive documentation and geometrical information, understand the site, perform analyses, see interpretative processes, communicate historical information and valorize the heritage location.

  11. Digitization of medical documents: an X-Windows application for fast scanning.

    PubMed

    Muñoz, A; Salvador, C H; Gonzalez, M A; Dueñas, A

    1992-01-01

    This paper deals with the digitization, using a commercial scanner, of medical documents as still images for introduction into a computer-based Information System. Document management involves storage, editing and transmission. This task has usually been approached from the perspective of the difficulties posed by radiologic images because of their indisputable qualitative and quantitative significance. However, healthcare activities require the management of many other types of documents and involve the requirements of numerous users. One key to document management will be the availability of a digitizer that can deal with the greatest possible number of different types of documents. This paper describes the relevant aspects of documents and the technical specifications that digitizers must fulfill. The concept of document type is introduced as the ideal set of digitizing parameters for a given document. The use of document type parameters can drastically reduce the time the user spends in scanning sessions. Presentation is made of an application based on Unix, X-Windows and OSF/Motif, with a GPIB interface, implemented around the document type concept. Finally, the results of the evaluation of the application are presented, focusing on the user interface, as well as on the viewing of color images in an X-Windows environment and the use of lossy algorithms in the compression of medical images.

  12. Medication order communication using fax and document-imaging technologies.

    PubMed

    Simonian, Armen I

    2008-03-15

    The implementation of fax and document-imaging technology to electronically communicate medication orders from nursing stations to the pharmacy is described. The evaluation of a commercially available pharmacy order imaging system to improve order communication and to make document retrieval more efficient led to the selection and customization of a system already licensed and used in seven affiliated hospitals. The system consisted of existing fax machines and document-imaging software that would capture images of written orders and send them from nursing stations to a central database server. Pharmacists would then retrieve the images and enter the orders in an electronic medical record system. The pharmacy representatives from all seven hospitals agreed on the configuration and functionality of the custom application. A 30-day trial of the order imaging system was successfully conducted at one of the larger institutions. The new system was then implemented at the remaining six hospitals over a period of 60 days. The transition from a paper-order system to electronic communication via a standardized pharmacy document management application tailored to the specific needs of this health system was accomplished. A health system with seven affiliated hospitals successfully implemented electronic communication and the management of inpatient paper-chart orders by using faxes and document-imaging technology. This standardized application eliminated the problems associated with the hand delivery of paper orders, the use of the pneumatic tube system, and the printing of traditional faxes.

  13. The use of fingerprints available on the web in false identity documents: Analysis from a forensic intelligence perspective.

    PubMed

    Girelli, Carlos Magno Alves

    2016-05-01

    Fingerprints present in false identity documents have been found on the web. In some cases, laterally reversed (mirrored) images of the same fingerprint were observed in different documents. In the present work, 100 fingerprint images downloaded from the web, as well as their reversals obtained by image editing, were compared among themselves and against the database of the Brazilian Federal Police AFIS, in order to better understand trends in this kind of forgery in Brazil. Several image editing effects were observed in the analyzed fingerprints: addition of artifacts (such as watermarks), rotation, stylization, lateral reversal and tonal reversal. The detection of lateral reversals is discussed in this article, along with a suggestion for reducing errors due to missed HIT decisions between reversed fingerprints. The present work aims to highlight the importance of fingerprint analysis when performing document examination, especially when only copies of documents are available, something very common in Brazil. Besides the intrinsic features of the fingermarks considered in the three levels of detail of the ACE-V methodology, some visual features of the fingerprint images can help identify the sources of forgeries and the modus operandi, such as limits and image contours, gaps in the friction ridges caused by excess or lack of inking, and the presence of watermarks and artifacts arising from the background. Based on the agreement of such features in fingerprints present in different identity documents, and on the time and location where the documents were seized, it is possible to highlight potential links between apparently unconnected crimes. Fingerprints therefore have the potential to reduce linkage blindness, and the present work suggests analyzing fingerprints when profiling false identity documents, as well as including fingerprint features in the profile of the documents. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
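    Lateral reversal itself is a simple horizontal mirroring of the image. A minimal sketch of generating a mirrored probe and crudely checking whether one image is a mirrored copy of another (illustrative only; operational AFIS matching works on minutiae, not raw pixel agreement):

```python
import numpy as np

def lateral_reversal(img: np.ndarray) -> np.ndarray:
    """Mirror a fingerprint image left-to-right (lateral reversal)."""
    return img[:, ::-1]

def is_reversed_copy(a: np.ndarray, b: np.ndarray) -> bool:
    """Crude check: does b agree with a better after lateral reversal
    than it does directly? (Pixel agreement, for illustration only.)"""
    direct = np.mean(a == b)
    mirrored = np.mean(lateral_reversal(a) == b)
    return mirrored > direct
```

A mirrored probe like this could be searched alongside the original to avoid missed HIT decisions caused by reversed prints.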

  14. Free and open source software for the manipulation of digital images.

    PubMed

    Solomon, Robert W

    2009-06-01

    Free and open source software is a type of software that is nearly as powerful as commercial software but is freely downloadable. This software can do almost everything that the expensive programs can. GIMP (the GNU Image Manipulation Program) is a free program comparable to Photoshop, and versions are available for Windows, Macintosh, and Linux platforms. This article briefly describes how GIMP can be installed and used to manipulate radiology images. It is no longer necessary to budget large amounts of money for high-quality software to achieve the goals of image processing and document creation, because free and open source software is available for the user to download at will.

  15. Built Heritage Documentation and Management: AN Integrated Conservation Approach in Bagan

    NASA Astrophysics Data System (ADS)

    Mezzino, D.; Chan, L.; Santana Quintero, M.; Esponda, M.; Lee, S.; Min, A.; Pwint, M.

    2017-08-01

    Good practices in heritage conservation are based on accurate information about the conditions, materials, and transformations of built heritage sites. Documentation of heritage sites and its analysis are therefore essential parts of their conservation. In addition, recent catastrophic events in different geographical areas have severely affected cultural heritage places; such areas include, but are not limited to, Southern Europe, South East Asia, and Central America. Within this framework, appropriate acquisition of information can effectively provide tools for decision making and management. Heritage documentation is growing in innovation, providing dynamic opportunities for responding effectively to the alarming rate of destruction caused by natural events, conflicts, and negligence. In line with these considerations, a multidisciplinary team - including students and faculty members from Carleton University and Yangon Technological University, as well as staff from the Department of Archaeology, National Museum and Library (DoA) and professionals from the CyArk foundation - developed a coordinated strategy to document four temples at the site of Bagan (Myanmar). On-field work included capacity-building activities to train local emerging professionals in the heritage field (graduate and undergraduate students from the Yangon Technological University) and to increase the technical knowledge of the local DoA staff in digital documentation. Because of the short duration of the fieldwork and the need to record several monuments, a variety of documentation techniques, both image-based and non-image-based, were used. Afterwards, the information acquired during the fieldwork was processed to develop a solid base for the conservation and monitoring of the four documented temples. The relevance of developing this kind of documentation in Bagan lies in the vulnerability of the site, which is often affected by seismic events and flooding as well as a lack of maintenance. Bagan provided an excellent case study to test the effectiveness of the proposed approach, to prevent and manage the damage caused by catastrophic events, and to support retrofitting actions. In order to test the flexibility of the adopted methodology and workflow, temples with different features - in terms of architectural design, shape, and geometry - were selected. The goals of these documentation activities range from testing digital documentation workflows for the metric and visual recording of the site (reviewing the strengths and limitations of particular recording techniques) to the definition of effective condition assessment strategies.

  16. Intraoperative navigation-guided resection of anomalous transverse processes in patients with Bertolotti's syndrome

    PubMed Central

    Babu, Harish; Lagman, Carlito; Kim, Terrence T.; Grode, Marshall; Johnson, J. Patrick; Drazin, Doniel

    2017-01-01

    Background: Bertolotti's syndrome is characterized by enlargement of the transverse process at the most caudal lumbar vertebra, with a pseudoarticulation between the transverse process and the sacral ala. Here, we describe the use of intraoperative three-dimensional image-guided navigation in the resection of anomalous transverse processes in two patients with Bertolotti's syndrome. Case Descriptions: Two patients diagnosed with Bertolotti's syndrome who had undergone the above-mentioned procedure were identified. The patients were 17 and 38 years old and presented with severe, chronic low back pain that was resistant to conservative treatment. Imaging revealed lumbosacral transitional vertebrae at the level of L5-S1, consistent with Bertolotti's syndrome. Injections at the pseudoarticulations resulted in only temporary symptomatic relief. The patients therefore underwent O-arm neuronavigational resection of the bony defects. Both patients experienced immediate pain resolution (documented in the postoperative notes) and remained asymptomatic 1 year later. Conclusion: Intraoperative three-dimensional imaging and navigation guidance facilitated the resection of anomalous transverse processes in two patients with Bertolotti's syndrome. Excellent outcomes were achieved in both patients. PMID:29026672

  17. Intraoperative navigation-guided resection of anomalous transverse processes in patients with Bertolotti's syndrome.

    PubMed

    Babu, Harish; Lagman, Carlito; Kim, Terrence T; Grode, Marshall; Johnson, J Patrick; Drazin, Doniel

    2017-01-01

    Bertolotti's syndrome is characterized by enlargement of the transverse process at the most caudal lumbar vertebra, with a pseudoarticulation between the transverse process and the sacral ala. Here, we describe the use of intraoperative three-dimensional image-guided navigation in the resection of anomalous transverse processes in two patients with Bertolotti's syndrome. Two patients diagnosed with Bertolotti's syndrome who had undergone the above-mentioned procedure were identified. The patients were 17 and 38 years old and presented with severe, chronic low back pain that was resistant to conservative treatment. Imaging revealed lumbosacral transitional vertebrae at the level of L5-S1, consistent with Bertolotti's syndrome. Injections at the pseudoarticulations resulted in only temporary symptomatic relief. The patients therefore underwent O-arm neuronavigational resection of the bony defects. Both patients experienced immediate pain resolution (documented in the postoperative notes) and remained asymptomatic 1 year later. Intraoperative three-dimensional imaging and navigation guidance facilitated the resection of anomalous transverse processes in two patients with Bertolotti's syndrome. Excellent outcomes were achieved in both patients.

  18. "Hungry Eyes": Visual Processing of Food Images in Adults with Prader-Willi Syndrome

    ERIC Educational Resources Information Center

    Key, A. P. F.; Dykens, E. M.

    2008-01-01

    Background: Prader-Willi syndrome (PWS) is a genetic disorder associated with intellectual disabilities, compulsivity, hyperphagia and increased risks of life-threatening obesity. Food preferences in people with PWS are well documented, but research has yet to focus on other properties of food in PWS, including composition and suitability for…

  19. The Design of an Intelligent Decision Support Tool for Submarine Commanders

    DTIC Science & Technology

    2009-06-01

    for public release, distribution unlimited 13. SUPPLEMENTARY NOTES The original document contains color images . 14. ABSTRACT 15. SUBJECT TERMS 16...with research supporting the advancement of military technology. Thank you again for your support throughout this process . To Dave Silvia and Carl...26 2.1.3 Voyage Management System

  20. 76 FR 53500 - Notice of the Nuclear Regulatory Commission Issuance of Materials License SUA-1598 and Record of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-26

    ... (ADAMS), which provides text and image files of the NRC's public documents in the NRC Library at http... considered, but eliminated from detailed analysis, include conventional uranium mining and milling, conventional mining and heap leach processing, alternate lixiviants, and alternative wastewater disposal...

  1. Program Proposal for Improving the Quality of Educational Experiences.

    ERIC Educational Resources Information Center

    Davis, Robert

    This document considers the image of schools as "a world apart" and the subsequent question, "What if we teach these young people the wrong thing?" The author discusses many of the questions and problems that exist in this separate world of schools: problems of administration, the innovative process itself, the open education…

  2. Reprocessing of multi-channel seismic-reflection data collected in the Beaufort Sea

    USGS Publications Warehouse

    Agena, W.F.; Lee, Myung W.; Hart, P.E.

    2000-01-01

    Contained on this set of two CD-ROMs are stacked and migrated multi-channel seismic-reflection data for 65 lines recorded in the Beaufort Sea by the United States Geological Survey in 1977. All data were reprocessed by the USGS using updated processing methods resulting in improved interpretability. Each of the two CD-ROMs contains the following files: 1) 65 files containing the digital seismic data in standard, SEG-Y format; 2) 1 file containing navigation data for the 65 lines in standard SEG-P1 format; 3) an ASCII text file with cross-reference information for relating the sequential trace numbers on each line to cdp numbers and shotpoint numbers; 4) 2 small scale graphic images (stacked and migrated) of a segment of line 722 in Adobe Acrobat (R) PDF format; 5) a graphic image of the location map, generated from the navigation file; 6) PlotSeis, an MS-DOS Application that allows PC users to interactively view the SEG-Y files; 7) a PlotSeis documentation file; and 8) an explanation of the processing used to create the final seismic sections (this document).
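    For readers handling SEG-Y files such as those above: classic SEG-Y (rev 0) places a 3200-byte textual header before a 400-byte binary header whose fields are big-endian integers, so basic metadata can be read with the standard library alone. A sketch, assuming the rev 0 layout:

```python
import struct

def read_segy_binary_header(path):
    """Read two basic fields from a SEG-Y rev 0 binary file header.

    Layout: 3200-byte textual (EBCDIC) header, then a 400-byte binary
    header. Sample interval (microseconds) sits at file bytes 3217-3218
    and samples per trace at bytes 3221-3222, both big-endian int16.
    """
    with open(path, "rb") as f:
        f.seek(3200 + 16)
        sample_interval_us = struct.unpack(">h", f.read(2))[0]
        f.seek(3200 + 20)
        samples_per_trace = struct.unpack(">h", f.read(2))[0]
    return sample_interval_us, samples_per_trace
```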

  3. Towards an easier creation of three-dimensional data for embedding into scholarly 3D PDF (Portable Document Format) files

    PubMed Central

    2015-01-01

    The Portable Document Format (PDF) allows for embedding three-dimensional (3D) models and is therefore particularly suitable for communicating such data, especially in scholarly articles. The generation of the necessary model data, however, is still challenging, especially for inexperienced users. This prevents an unrestrained proliferation of 3D PDF usage in scholarly communication. This article introduces a new solution for the creation of three types of 3D geometry (point clouds, polylines, and triangle meshes) that is based on MeVisLab, a framework for biomedical image processing. This solution enables even novice users to generate the model data files without requiring programming skills and without the need for intensive training, by simply using it as a conversion tool. Advanced users can benefit from the full capability of MeVisLab to generate and export the model data as part of an overall processing chain. Although MeVisLab is primarily designed for handling biomedical image data, the new module is not restricted to this domain. It can be used for all scientific disciplines. PMID:25780759

  4. Towards an easier creation of three-dimensional data for embedding into scholarly 3D PDF (Portable Document Format) files.

    PubMed

    Newe, Axel

    2015-01-01

    The Portable Document Format (PDF) allows for embedding three-dimensional (3D) models and is therefore particularly suitable for communicating such data, especially in scholarly articles. The generation of the necessary model data, however, is still challenging, especially for inexperienced users. This prevents an unrestrained proliferation of 3D PDF usage in scholarly communication. This article introduces a new solution for the creation of three types of 3D geometry (point clouds, polylines, and triangle meshes) that is based on MeVisLab, a framework for biomedical image processing. This solution enables even novice users to generate the model data files without requiring programming skills and without the need for intensive training, by simply using it as a conversion tool. Advanced users can benefit from the full capability of MeVisLab to generate and export the model data as part of an overall processing chain. Although MeVisLab is primarily designed for handling biomedical image data, the new module is not restricted to this domain. It can be used for all scientific disciplines.

  5. Large-scale image region documentation for fully automated image biomarker algorithm development and evaluation.

    PubMed

    Reeves, Anthony P; Xie, Yiting; Liu, Shuang

    2017-04-01

    With the advent of fully automated image analysis and modern machine learning methods, there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. This paper presents a method and implementation for facilitating such datasets that addresses the critical issue of size scaling for algorithm validation and evaluation; current evaluation methods that are usually used in academic studies do not scale to large datasets. This method includes protocols for the documentation of many regions in very large image datasets; the documentation may be incrementally updated by new image data and by improved algorithm outcomes. This method has been used for 5 years in the context of chest health biomarkers from low-dose chest CT images that are now being used with increasing frequency in lung cancer screening practice. The lung scans are segmented into over 100 different anatomical regions, and the method has been applied to a dataset of over 20,000 chest CT images. Using this framework, the computer algorithms have been developed to achieve over 90% acceptable image segmentation on the complete dataset.

  6. ACC/AATS/AHA/ASE/ASNC/HRS/SCAI/SCCT/SCMR/STS 2017 Appropriate Use Criteria for Multimodality Imaging in Valvular Heart Disease: A Report of the American College of Cardiology Appropriate Use Criteria Task Force, American Association for Thoracic Surgery, American Heart Association, American Society of Echocardiography, American Society of Nuclear Cardiology, Heart Rhythm Society, Society for Cardiovascular Angiography and Interventions, Society of Cardiovascular Computed Tomography, Society for Cardiovascular Magnetic Resonance, and Society of Thoracic Surgeons.

    PubMed

    Doherty, John U; Kort, Smadar; Mehran, Roxana; Schoenhagen, Paul; Soman, Prem; Dehmer, Greg J; Doherty, John U; Schoenhagen, Paul; Amin, Zahid; Bashore, Thomas M; Boyle, Andrew; Calnon, Dennis A; Carabello, Blase; Cerqueira, Manuel D; Conte, John; Desai, Milind; Edmundowicz, Daniel; Ferrari, Victor A; Ghoshhajra, Brian; Mehrotra, Praveen; Nazarian, Saman; Reece, T Brett; Tamarappoo, Balaji; Tzou, Wendy S; Wong, John B; Doherty, John U; Dehmer, Gregory J; Bailey, Steven R; Bhave, Nicole M; Brown, Alan S; Daugherty, Stacie L; Dean, Larry S; Desai, Milind Y; Duvernoy, Claire S; Gillam, Linda D; Hendel, Robert C; Kramer, Christopher M; Lindsay, Bruce D; Manning, Warren J; Mehrotra, Praveen; Patel, Manesh R; Sachdeva, Ritu; Wann, L Samuel; Winchester, David E; Wolk, Michael J; Allen, Joseph M

    2018-04-01

    This document is 1 of 2 companion appropriate use criteria (AUC) documents developed by the American College of Cardiology, American Association for Thoracic Surgery, American Heart Association, American Society of Echocardiography, American Society of Nuclear Cardiology, Heart Rhythm Society, Society for Cardiovascular Angiography and Interventions, Society of Cardiovascular Computed Tomography, Society for Cardiovascular Magnetic Resonance, and Society of Thoracic Surgeons. This document addresses the evaluation and use of multimodality imaging in the diagnosis and management of valvular heart disease, whereas the second, companion document addresses this topic with regard to structural heart disease. Although there is clinical overlap, the documents addressing valvular and structural heart disease are published separately, albeit with a common structure. The goal of the companion AUC documents is to provide a comprehensive resource for multimodality imaging in the context of valvular and structural heart disease, encompassing multiple imaging modalities. Using standardized methodology, the clinical scenarios (indications) were developed by a diverse writing group to represent patient presentations encountered in everyday practice and included common applications and anticipated uses. Where appropriate, the scenarios were developed on the basis of the most current American College of Cardiology/American Heart Association guidelines. A separate, independent rating panel scored the 92 clinical scenarios in this document on a scale of 1 to 9. Scores of 7 to 9 indicate that a modality is considered appropriate for the clinical scenario presented. Midrange scores of 4 to 6 indicate that a modality may be appropriate for the clinical scenario, and scores of 1 to 3 indicate that a modality is considered rarely appropriate for the clinical scenario. 
The primary objective of the AUC is to provide a framework for the assessment of these scenarios by practices that will improve and standardize physician decision making. AUC publications reflect an ongoing effort by the American College of Cardiology to critically and systematically create, review, and categorize clinical situations where diagnostic tests and procedures are utilized by physicians caring for patients with cardiovascular diseases. The process is based on the current understanding of the technical capabilities of the imaging modalities examined. Copyright © 2017. Published by Elsevier Inc.
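    The 1-to-9 scoring rubric described above maps mechanically onto the three appropriateness categories; a one-function sketch:

```python
def auc_category(score: int) -> str:
    """Map a 1-9 AUC rating-panel score to its appropriateness category,
    per the rubric: 7-9 appropriate, 4-6 may be appropriate, 1-3 rarely
    appropriate."""
    if not 1 <= score <= 9:
        raise ValueError("AUC scores range from 1 to 9")
    if score >= 7:
        return "appropriate"
    if score >= 4:
        return "may be appropriate"
    return "rarely appropriate"
```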

  7. ACC/AATS/AHA/ASE/ASNC/HRS/SCAI/SCCT/SCMR/STS 2017 Appropriate Use Criteria for Multimodality Imaging in Valvular Heart Disease : A Report of the American College of Cardiology Appropriate Use Criteria Task Force, American Association for Thoracic Surgery, American Heart Association, American Society of Echocardiography, American Society of Nuclear Cardiology, Heart Rhythm Society, Society for Cardiovascular Angiography and Interventions, Society of Cardiovascular Computed Tomography, Society for Cardiovascular Magnetic Resonance, and Society of Thoracic Surgeons.

    PubMed

    Doherty, John U; Kort, Smadar; Mehran, Roxana; Schoenhagen, Paul; Soman, Prem

    2017-12-01

    This document is 1 of 2 companion appropriate use criteria (AUC) documents developed by the American College of Cardiology, American Association for Thoracic Surgery, American Heart Association, American Society of Echocardiography, American Society of Nuclear Cardiology, Heart Rhythm Society, Society for Cardiovascular Angiography and Interventions, Society of Cardiovascular Computed Tomography, Society for Cardiovascular Magnetic Resonance, and Society of Thoracic Surgeons. This document addresses the evaluation and use of multimodality imaging in the diagnosis and management of valvular heart disease, whereas the second, companion document addresses this topic with regard to structural heart disease. Although there is clinical overlap, the documents addressing valvular and structural heart disease are published separately, albeit with a common structure. The goal of the companion AUC documents is to provide a comprehensive resource for multimodality imaging in the context of valvular and structural heart disease, encompassing multiple imaging modalities. Using standardized methodology, the clinical scenarios (indications) were developed by a diverse writing group to represent patient presentations encountered in everyday practice and included common applications and anticipated uses. Where appropriate, the scenarios were developed on the basis of the most current American College of Cardiology/American Heart Association guidelines. A separate, independent rating panel scored the 92 clinical scenarios in this document on a scale of 1 to 9. Scores of 7 to 9 indicate that a modality is considered appropriate for the clinical scenario presented. Midrange scores of 4 to 6 indicate that a modality may be appropriate for the clinical scenario, and scores of 1 to 3 indicate that a modality is considered rarely appropriate for the clinical scenario. The primary objective of the AUC is to provide a framework for the assessment of these scenarios by practices that will improve and standardize physician decision making. AUC publications reflect an ongoing effort by the American College of Cardiology to critically and systematically create, review, and categorize clinical situations where diagnostic tests and procedures are utilized by physicians caring for patients with cardiovascular diseases. The process is based on the current understanding of the technical capabilities of the imaging modalities examined.

  8. Ns-scaled time-gated fluorescence lifetime imaging for forensic document examination

    NASA Astrophysics Data System (ADS)

    Zhong, Xin; Wang, Xinwei; Zhou, Yan

    2018-01-01

    A method of ns-scaled time-gated fluorescence lifetime imaging (TFLI) is proposed to distinguish different fluorescent substances in forensic document examination. Compared with a Video Spectral Comparator (VSC), which can examine fluorescence intensity images only, TFLI can detect questioned-document features such as falsification or alteration. The TFLI system enhances weak signals by accumulation. Two fluorescence intensity images separated by a delay time tg are acquired by an ICCD and fitted into a fluorescence lifetime image. The lifetimes of fluorescent substances are represented by different colors, which makes it easy to detect the fluorescent substances and the sequence of handwritings. This demonstrates that TFLI is a powerful tool for forensic document examination. Further advantages of the TFLI system are its ns-scale precision and its powerful capture capability.
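    The two-gate scheme described above corresponds to the standard rapid lifetime determination: assuming a mono-exponential decay I(t) = I0·exp(-t/tau), two gated intensities separated by a delay tg give tau = tg / ln(I1/I2). A minimal per-pixel sketch (illustrative, not the authors' code):

```python
import numpy as np

def lifetime_from_two_gates(I1, I2, delta_t):
    """Per-pixel rapid lifetime determination from two time-gated images.

    Assumes a mono-exponential decay I(t) = I0 * exp(-t / tau), so two
    gate images separated by delta_t give tau = delta_t / ln(I1 / I2).
    """
    I1 = np.asarray(I1, dtype=float)
    I2 = np.asarray(I2, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        tau = delta_t / np.log(I1 / I2)
    return tau
```

The resulting per-pixel lifetimes can then be mapped to colors to render the lifetime image the abstract describes.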

  9. Scalable ranked retrieval using document images

    NASA Astrophysics Data System (ADS)

    Jain, Rajiv; Oard, Douglas W.; Doermann, David

    2013-12-01

    Despite the explosion of text on the Internet, hard copy documents that have been scanned as images still play a significant role for some tasks. The best method for performing ranked retrieval on a large corpus of document images, however, remains an open research question. The most common approach has been to perform text retrieval using terms generated by optical character recognition. This paper, by contrast, examines whether a scalable segmentation-free image retrieval algorithm, which matches sub-images containing text or graphical objects, can provide additional benefit in satisfying a user's information needs on a large, real-world dataset. Results on 7 million scanned pages from the CDIP v1.0 test collection show that content-based image retrieval finds a substantial number of documents that text retrieval misses, and that, when used as a basis for relevance feedback, it can yield improvements in retrieval effectiveness.
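    The text-retrieval baseline mentioned above — ranked retrieval over OCR output — can be sketched with a simple TF-IDF scorer. This is illustrative only, not the evaluation code used on the CDIP collection:

```python
import math
from collections import Counter

def tfidf_rank(query, docs):
    """Rank documents for a query with a simple TF-IDF dot-product score.

    `docs` is a list of token lists (e.g. OCR output per page); returns
    document indices sorted by descending score.
    """
    n = len(docs)
    df = Counter()
    for d in docs:
        df.update(set(d))          # document frequency per term

    def idf(t):
        return math.log((n + 1) / (df[t] + 1)) + 1.0  # smoothed IDF

    scores = []
    for d in docs:
        tf = Counter(d)
        scores.append(sum(tf[t] * idf(t) for t in query))
    return sorted(range(n), key=lambda i: -scores[i])
```

OCR errors degrade exactly this kind of term matching, which is why the paper's sub-image matching can recover documents that text retrieval misses.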

  10. Global and Local Features Based Classification for Bleed-Through Removal

    NASA Astrophysics Data System (ADS)

    Hu, Xiangyu; Lin, Hui; Li, Shutao; Sun, Bin

    2016-12-01

    The text on one side of a historical document often seeps through and appears on the other side, so bleed-through is a common problem in historical document images. It makes the documents hard to read and the text difficult to recognize. To improve image quality and readability, the bleed-through has to be removed. This paper proposes a bleed-through removal method based on the extraction of global and local features. A Gaussian mixture model is used to obtain global features of the images, while local features are extracted from a patch around each pixel. An extreme learning machine classifier is then used to separate the scanned images into foreground text and bleed-through components. Experimental results on real document image datasets show that the proposed method outperforms state-of-the-art bleed-through removal methods and preserves the text strokes well.
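    The classification stage can be illustrated with a minimal extreme learning machine: a fixed random hidden layer followed by a least-squares readout. This is a generic sketch on made-up features, not the paper's pipeline; function names and data here are hypothetical:

```python
import numpy as np

def elm_train(X, y, hidden=64, seed=0):
    """Train a minimal extreme learning machine: random hidden weights
    are fixed, and only the linear readout is fit by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], hidden))
    b = rng.standard_normal(hidden)
    H = np.tanh(X @ W + b)                      # random feature map
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, model):
    """Score samples; sign(score) gives the class (+1 text, -1 bleed-through)."""
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta
```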

  11. Mobile and embedded fast high resolution image stitching for long length rectangular monochromatic objects with periodic structure

    NASA Astrophysics Data System (ADS)

    Limonova, Elena; Tropin, Daniil; Savelyev, Boris; Mamay, Igor; Nikolaev, Dmitry

    2018-04-01

    In this paper we describe a stitching protocol that makes it possible to obtain high-resolution images of long monochromatic objects with periodic structure. The protocol can be used for long documents or for human-made objects in satellite images of uninhabited regions such as the Arctic. The length of such objects can be considerable, while modern camera sensors have limited resolution and cannot provide a good enough image of the whole object for further processing, e.g. use in an OCR system. The idea of the proposed method is to acquire a video stream containing the full object in high resolution and to stitch the frames together. We expect the scanned object to have straight boundaries and periodic structure, which allows us to introduce regularization into the stitching problem and to adapt the algorithm to the limited computational power of mobile and embedded CPUs. With the help of the detected boundaries and structure we estimate the homography between frames and use this information to reduce the complexity of stitching. We demonstrate our algorithm on a mobile device and show an image processing speed of 2 fps on a Samsung Exynos 5422 processor.
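    Frame-to-frame homography estimation of the kind mentioned above is commonly done with the direct linear transform (DLT) from point correspondences. A plain NumPy sketch, without the paper's boundary and structure regularization:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst points via the DLT.

    src, dst: (N, 2) arrays with N >= 4 correspondences. Each pair
    contributes two linear constraints on the 9 entries of H; the
    solution is the null vector of the stacked system, found by SVD.
    Returns H normalized so that H[2, 2] == 1.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

Knowing the object's straight boundaries and period, as the paper does, constrains H and lets a weaker processor get away with far fewer correspondences.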

  12. New public dataset for spotting patterns in medieval document images

    NASA Astrophysics Data System (ADS)

    En, Sovann; Nicolas, Stéphane; Petitjean, Caroline; Jurie, Frédéric; Heutte, Laurent

    2017-01-01

    With advances in technology, a large part of our cultural heritage is becoming digitally available. In particular, in the field of historical document image analysis, there is a growing need for indexing and data mining tools that allow us to spot and retrieve the occurrences of an object of interest, called a pattern, in a large database of document images. Patterns may present some variability in terms of color, shape, or context, making pattern spotting a challenging task. Pattern spotting is a relatively new field of research, still hampered by the lack of available annotated resources. We present a new publicly available dataset, named DocExplore, dedicated to spotting patterns in historical document images. The dataset contains 1500 images and 1464 queries, and allows the evaluation of two tasks: image retrieval and pattern localization. A standardized benchmark protocol along with ad hoc metrics is provided for a fair comparison of submitted approaches. We also provide first results obtained with our baseline system on this new dataset; they show that there is room for improvement and should encourage researchers in the document image analysis community to design new systems and submit improved results.

  13. The image-guided surgery toolkit IGSTK: an open source C++ software toolkit.

    PubMed

    Enquobahrie, Andinet; Cheng, Patrick; Gary, Kevin; Ibanez, Luis; Gobbi, David; Lindseth, Frank; Yaniv, Ziv; Aylward, Stephen; Jomier, Julien; Cleary, Kevin

    2007-11-01

    This paper presents an overview of the image-guided surgery toolkit (IGSTK). IGSTK is an open source C++ software library that provides the basic components needed to develop image-guided surgery applications, and it is intended for fast prototyping and development of such applications. The toolkit was developed through a collaboration between academic and industry partners. Because IGSTK was designed for safety-critical applications, the development team adopted lightweight software processes that emphasize safety and robustness while, at the same time, supporting geographically separated developers. A software process philosophically similar to agile methods was adopted, emphasizing iterative, incremental, and test-driven development principles. The guiding principle in the architecture design of IGSTK is patient safety. The IGSTK team implemented a component-based architecture and used state machine design methodologies to improve the reliability and safety of the components. Every IGSTK component has a well-defined set of features that are governed by state machines. The state machine ensures that the component is always in a valid state and that all state transitions are valid and meaningful. Realizing that the continued success and viability of an open source toolkit depends on a strong user community, the IGSTK team is following several key strategies to build an active user community: maintaining users' and developers' mailing lists, providing documentation (an application programming interface reference document and a book), presenting demonstration applications, and delivering tutorial sessions at relevant scientific conferences.
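    The state-machine governance described above can be sketched in a few lines: every request is either a valid transition or is rejected with the state unchanged, so the component can never enter an undefined state. This is an illustrative toy with hypothetical states and events (IGSTK itself is C++):

```python
class StateMachineComponent:
    """Toy state-machine-governed component in the spirit of IGSTK."""

    # (current state, event) -> next state; anything else is invalid.
    TRANSITIONS = {
        ("idle", "initialize"): "ready",
        ("ready", "start_tracking"): "tracking",
        ("tracking", "stop_tracking"): "ready",
    }

    def __init__(self):
        self.state = "idle"

    def request(self, event: str) -> bool:
        """Apply an event if it is a valid transition; otherwise reject it,
        leaving the state unchanged."""
        nxt = self.TRANSITIONS.get((self.state, event))
        if nxt is None:
            return False
        self.state = nxt
        return True
```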

  14. The SIR-B science plan

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The Shuttle Imaging Radar-B (SIR-B) will be the third in a series of spaceborne SAR experiments conducted by NASA which began with the 1978 launch of SEASAT and continued with the 1981 launch of SIR-A. Like SEASAT and SIR-A, SIR-B will operate at L-band and will be horizontally polarized. However, SIR-B will allow digitally processed imagery to be acquired at selectable incidence angles between 15 and 60 deg, thereby permitting, for the first time, parametric studies of the effect of illumination geometry on SAR image information extraction. This document presents a science plan for SIR-B and serves as a reference for the types of geoscientific, sensor, and processing experiments which are possible.

  15. Linear feature extraction from radar imagery: SBIR (Small Business Innovative Research), phase 2, option 2

    NASA Astrophysics Data System (ADS)

    Milgram, David L.; Kahn, Philip; Conner, Gary D.; Lawton, Daryl T.

    1988-12-01

    The goal of this effort is to develop and demonstrate prototype processing capabilities for a knowledge-based system to automatically extract and analyze features from Synthetic Aperture Radar (SAR) imagery. This effort constitutes Phase 2 funding through the Defense Small Business Innovative Research (SBIR) Program. Previous work examined the feasibility of and technology issues involved in the development of an automated linear feature extraction system. This final report documents this examination and the technologies involved in automating this image understanding task. In particular, it reports on a major software delivery containing an image processing algorithmic base, a perceptual structures manipulation package, a preliminary hypothesis management framework and an enhanced user interface.

  16. Automated management for pavement inspection system (AMPIS)

    NASA Astrophysics Data System (ADS)

    Chung, Hung Chi; Girardello, Roberto; Soeller, Tony; Shinozuka, Masanobu

    2003-08-01

    An automated in-situ road surface distress surveying and management system, AMPIS, has been developed on the basis of video images within the framework of GIS software. Video image processing techniques are introduced to acquire, process and analyze the road surface images obtained from a moving vehicle. The ArcGIS platform is used to integrate the routines of image processing and spatial analysis in handling full-scale metropolitan highway surface distress detection and data fusion/management. This makes it possible to present user-friendly interfaces in GIS and to provide efficient visualizations of surveyed results, not only for transportation engineers to manage road surveying documentation, data acquisition, analysis and management, but also for financial officials to plan maintenance and repair programs and further evaluate the socio-economic impacts of highway degradation and deterioration. A review performed in this study of the fundamental principles of Pavement Management Systems (PMS) and their implementation indicates that the proposed approach of using the GIS concept and its tools for PMS applications will reshape PMS into a new information technology-based system providing convenient and efficient pavement inspection and management.

  17. GIS-based automated management of highway surface crack inspection system

    NASA Astrophysics Data System (ADS)

    Chung, Hung-Chi; Shinozuka, Masanobu; Soeller, Tony; Girardello, Roberto

    2004-07-01

    An automated in-situ road surface distress surveying and management system, AMPIS, has been developed on the basis of video images within the framework of GIS software. Video image processing techniques are introduced to acquire, process and analyze the road surface images obtained from a moving vehicle. The ArcGIS platform is used to integrate the routines of image processing and spatial analysis in handling full-scale metropolitan highway surface distress detection and data fusion/management. This makes it possible to present user-friendly interfaces in GIS and to provide efficient visualizations of surveyed results, not only for transportation engineers to manage road surveying documentation, data acquisition, analysis and management, but also for financial officials to plan maintenance and repair programs and further evaluate the socio-economic impacts of highway degradation and deterioration. A review performed in this study of the fundamental principles of Pavement Management Systems (PMS) and their implementation indicates that the proposed approach of using the GIS concept and its tools for PMS applications will reshape PMS into a new information technology-based system that can provide convenient and efficient pavement inspection and management.

  18. FBI Fingerprint Image Capture System High-Speed-Front-End throughput modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rathke, P.M.

    1993-09-01

    The Federal Bureau of Investigation (FBI) has undertaken a major modernization effort called the Integrated Automated Fingerprint Identification System (IAFIS). This system will provide centralized identification services using automated fingerprint, subject descriptor, mugshot, and document processing. A high-speed Fingerprint Image Capture System (FICS) is under development as part of the IAFIS program. The FICS will capture digital and microfilm images of FBI fingerprint cards for input into a central database. One FICS design supports two front-end scanning subsystems, known as the High-Speed-Front-End (HSFE) and Low-Speed-Front-End, to supply image data to a common data processing subsystem. The production rate of the HSFE is critical to meeting the FBI's fingerprint card processing schedule. A model of the HSFE has been developed to help identify the issues driving the production rate, assist in the development of component specifications, and guide the evolution of an operations plan. A description of the model development is given, the assumptions are presented, and some HSFE throughput analysis is performed.

  19. Old document image segmentation using the autocorrelation function and multiresolution analysis

    NASA Astrophysics Data System (ADS)

    Mehri, Maroua; Gomez-Krämer, Petra; Héroux, Pierre; Mullot, Rémy

    2013-01-01

    Recent progress in the digitization of heterogeneous collections of ancient documents has rekindled new challenges in information retrieval in digital libraries and document layout analysis. Therefore, in order to control the quality of historical document image digitization and to meet the need for a characterization of their content using intermediate-level metadata (between image and document structure), we propose a fast automatic layout segmentation of old document images based on five descriptors. Those descriptors, based on the autocorrelation function, are obtained by multiresolution analysis and used afterwards in a specific clustering method. The method proposed in this article has the advantage that it is performed without any hypothesis on the document structure, either about the document model (physical structure) or the typographical parameters (logical structure). It is also parameter-free, since it automatically adapts to the image content. In this paper, we first detail our proposal to characterize the content of old documents by extracting autocorrelation features in the different areas of a page and at several resolutions. Then, we show that it is possible to automatically find the homogeneous regions defined by similar indices of autocorrelation, without knowledge of the number of clusters, using adapted hierarchical ascendant classification and consensus clustering approaches. To assess our method, we apply our algorithm to 316 old document images, which encompass six centuries (1200-1900) of French history, in order to demonstrate the performance of our proposal in terms of segmentation and characterization of heterogeneous corpus content. Moreover, we define a new evaluation metric, the homogeneity measure, which aims at evaluating the segmentation and characterization accuracy of our methodology. We obtain a mean homogeneity accuracy of 85%. These results help to represent a document by a hierarchy of layout structure and content, and to define one or more signatures for each page, on the basis of a hierarchical representation of homogeneous blocks and their topology.
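
    The core ingredient of those descriptors, the autocorrelation of image rows within a block, can be sketched in plain Python. This is a simplified illustration under stated assumptions: the paper's five multiresolution descriptors are not reproduced here, and `block_descriptor` is a hypothetical helper that merely averages row-wise autocorrelations.

```python
def autocorrelation(signal):
    """Normalized autocorrelation of a 1-D signal (one row of an
    image block), a basic texture cue for layout segmentation."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [x - mean for x in signal]
    var = sum(c * c for c in centered)
    if var == 0:
        # Flat signal: perfectly correlated with itself at lag 0 only.
        return [1.0] + [0.0] * (n - 1)
    acf = []
    for lag in range(n):
        s = sum(centered[i] * centered[i + lag] for i in range(n - lag))
        acf.append(s / var)
    return acf

def block_descriptor(block):
    """Average the row-wise autocorrelations over a block (a toy
    stand-in for the multiresolution features that are clustered)."""
    rows = [autocorrelation(r) for r in block]
    n = len(rows[0])
    return [sum(r[i] for r in rows) / len(rows) for i in range(n)]
```

    On a periodic row such as alternating text strokes, the autocorrelation is high at even lags and negative at odd lags, which is exactly the kind of signature that separates text zones from graphics or background.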

  20. Micromirror array nanostructures for anticounterfeiting applications

    NASA Astrophysics Data System (ADS)

    Lee, Robert A.

    2004-06-01

    The optical characteristics of pixellated passive micromirror arrays are derived and applied in the context of their use as reflective optically variable device (OVD) nanostructures for the protection of documents from counterfeiting. The traditional design variables of foil-based diffractive OVDs are shown to map to a corresponding set of design parameters for reflective optical micromirror array (OMMA) devices. The greatly increased depth characteristics of micromirror array OVDs provide an opportunity for directly printing the OVD microstructure onto the security document in-line with the normal printing process. The micromirror array OVD architecture therefore eliminates the need for hot stamping foil as the carrier of the OVD information, thereby reducing costs. The origination of micromirror array devices via a palette-based data format and a combination of electron beam lithography and photolithography techniques is discussed via an artwork example and experimental tests. Finally, the application of the technology to the design of a generic class of devices is described; these devices have the interesting property of allowing both application- and customer-specific OVD image encoding and data encoding at the end-user stage of production. Because of the end-user nature of the image and data encoding process, these devices are particularly well suited to ID document applications, and for this reason we refer to this new OVD concept as biometric OVD technology.

  1. Word spotting for handwritten documents using Chamfer Distance and Dynamic Time Warping

    NASA Astrophysics Data System (ADS)

    Saabni, Raid M.; El-Sana, Jihad A.

    2011-01-01

    A large number of handwritten historical documents are held in libraries around the world. The desire to access, search, and explore these documents paves the way for a new age of knowledge sharing and promotes collaboration and understanding between human societies. Currently, the indexes for these documents are generated manually, which is very tedious and time-consuming. Results produced by state-of-the-art techniques for converting complete images of handwritten documents into textual representations are not yet sufficient. Therefore, word-spotting methods have been developed to archive and index images of handwritten documents in order to enable efficient searching within documents. In this paper, we present a new matching algorithm to be used in word-spotting tasks for historical Arabic documents. We present a novel algorithm based on the Chamfer Distance to compute the similarity between shapes of word-parts. Matching results are used to cluster images of Arabic word-parts into different classes using the Nearest Neighbor rule. To compute the distance between two word-part images, the algorithm subdivides each image into equal-sized slices (windows). A modified version of the Chamfer Distance, incorporating geometric gradient features and distance transform data, is used as a similarity distance between the different slices. Finally, the Dynamic Time Warping (DTW) algorithm is used to measure the distance between two images of word-parts. Using DTW enables our system to cluster similar word-parts even though they are transformed non-linearly due to the nature of handwriting. We tested our implementation of the presented methods on various documents in different writing styles, taken from the Juma'a Al Majid Center in Dubai, and obtained encouraging results.
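
    The slice-by-slice DTW matching described above can be sketched in plain Python. This is a minimal illustration, not the authors' implementation: the local cost is a caller-supplied function standing in for the modified Chamfer Distance, and the scalar "slice features" in the usage example are toy assumptions.

```python
def dtw_distance(seq_a, seq_b, dist):
    """Dynamic Time Warping distance between two sequences of
    per-slice features, allowing non-linear alignment of slices."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    # dp[i][j] = cost of the best alignment of seq_a[:i] with seq_b[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(seq_a[i - 1], seq_b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # stretch seq_b
                                  dp[i][j - 1],      # stretch seq_a
                                  dp[i - 1][j - 1])  # align the two slices
    return dp[n][m]

# Toy usage: scalar "slice features" with absolute difference as local cost.
a = [1.0, 2.0, 3.0, 3.0]
b = [1.0, 2.0, 2.0, 3.0]
d = dtw_distance(a, b, lambda x, y: abs(x - y))  # -> 0.0
```

    Because DTW tolerates repeated or stretched slices, the two toy sequences above align with zero cost even though their shapes differ locally, which mirrors how non-linearly deformed word-parts are still clustered together.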

  2. Images of Kilauea East Rift Zone eruption, 1983-1993

    USGS Publications Warehouse

    Takahashi, Taeko Jane; Abston, C.C.; Heliker, C.C.

    1995-01-01

    This CD-ROM disc contains 475 scanned photographs from the U.S. Geological Survey Hawaii Observatory Library. The collection represents a comprehensive range of the best photographic images of volcanic phenomena for Kilauea's East Rift eruption, which continues as of September 1995. Captions of the images present information on location, geologic feature or process, and date. Short documentations of work by the USGS Hawaiian Volcano Observatory in geology, seismology, ground deformation, geophysics, and geochemistry are also included, along with selected references. The CD-ROM was produced in accordance with the ISO 9660 standard; however, it is intended for use only on DOS-based computer systems.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Welch, M.J.

    Positron emission tomography (PET) assesses biochemical processes in the living subject, producing images of function rather than form. Using PET, physicians are able to obtain not the anatomical information provided by other medical imaging techniques, but pictures of physiological activity. In metaphoric terms, traditional imaging methods supply a map of the body's roadways, its anatomy; PET shows the traffic along those paths, its biochemistry. This document discusses the principles of PET, the radiopharmaceuticals in PET, PET research, clinical applications of PET, the cost of PET, training of individuals for PET, the role of the United States Department of Energy in PET, and the future of PET. 22 figs.

  4. Deep Learning for Low-textured Image Matching

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.; Fedorenko, V. V.; Fomin, N. A.

    2018-05-01

    Low-textured objects pose challenges for automatic 3D model reconstruction. Such objects are common in archeological applications of photogrammetry. Most common feature point descriptors fail to match local patches in featureless regions of an object. Hence, automatic documentation of the archeological process using Structure from Motion (SfM) methods is challenging. Nevertheless, such documentation is possible with the aid of a human operator. Deep learning-based descriptors have recently outperformed most common feature point descriptors. This paper is focused on the development of a new Wide Image Zone Adaptive Robust feature Descriptor (WIZARD) based on deep learning. We use a convolutional auto-encoder to compress discriminative features of a local patch into a descriptor code. We build a codebook to perform point matching on multiple images. The matching is performed using nearest neighbor search and a modified voting algorithm. We present a new "Multi-view Amphora" (Amphora) dataset for the evaluation of point matching algorithms. The dataset includes images of an Ancient Greek vase found on the Taman Peninsula in Southern Russia. The dataset provides color images, a ground truth 3D model, and a ground truth optical flow. We evaluated the WIZARD descriptor on the "Amphora" dataset to show that it outperforms the SIFT and SURF descriptors on complex patch pairs.

  5. The SWAP EUV Imaging Telescope Part I: Instrument Overview and Pre-Flight Testing

    NASA Astrophysics Data System (ADS)

    Seaton, D. B.; Berghmans, D.; Nicula, B.; Halain, J.-P.; De Groof, A.; Thibert, T.; Bloomfield, D. S.; Raftery, C. L.; Gallagher, P. T.; Auchère, F.; Defise, J.-M.; D'Huys, E.; Lecat, J.-H.; Mazy, E.; Rochus, P.; Rossi, L.; Schühle, U.; Slemzin, V.; Yalim, M. S.; Zender, J.

    2013-08-01

    The Sun Watcher with Active Pixels and Image Processing (SWAP) is an EUV solar telescope onboard ESA's Project for Onboard Autonomy 2 (PROBA2) mission launched on 2 November 2009. SWAP has a spectral bandpass centered on 17.4 nm and provides images of the low solar corona over a 54×54 arcmin field-of-view with 3.2 arcsec pixels and an imaging cadence of about two minutes. SWAP is designed to monitor all space-weather-relevant events and features in the low solar corona. Given the limited resources of the PROBA2 microsatellite, the SWAP telescope is designed with various innovative technologies, including an off-axis optical design and a CMOS-APS detector. This article provides reference documentation for users of the SWAP image data.

  6. Detecting fluorescence hot-spots using mosaic maps generated from multimodal endoscope imaging

    NASA Astrophysics Data System (ADS)

    Yang, Chenying; Soper, Timothy D.; Seibel, Eric J.

    2013-03-01

    Fluorescence-labeled biomarkers can be detected during endoscopy to guide early cancer biopsies, such as high-grade dysplasia in Barrett's Esophagus. To enhance intraoperative visualization of the fluorescence hot-spots, a mosaicking technique was developed to create full anatomical maps of the lower esophagus and associated fluorescent hot-spots. The resultant mosaic map contains overlaid reflectance and fluorescence images. It can be used to assist biopsy and document findings. The mosaicking algorithm uses reflectance images to calculate image registration between successive frames, and applies this registration to simultaneously acquired fluorescence images. During this mosaicking process, the fluorescence signal is enhanced through multi-frame averaging. Preliminary results showed that the technique promises to enhance the detectability of the hot-spots due to the enhanced fluorescence signal.

  7. Ensemble methods with simple features for document zone classification

    NASA Astrophysics Data System (ADS)

    Obafemi-Ajayi, Tayo; Agam, Gady; Xie, Bingqing

    2012-01-01

    Document layout analysis is of fundamental importance for document image understanding and information retrieval. It requires the identification of blocks extracted from a document image via feature extraction and block classification. In this paper, we focus on the classification of the extracted blocks into five classes: text (machine printed), handwriting, graphics, images, and noise. We propose a new set of features for efficient classification of these blocks. We present a comparative evaluation of three ensemble-based classification algorithms (boosting, bagging, and combined model trees) in addition to other known learning algorithms. Experimental results are demonstrated for a set of 36503 zones extracted from 416 document images which were randomly selected from the tobacco legacy document collection. The results obtained verify the robustness and effectiveness of the proposed set of features in comparison to the commonly used Ocropus recognition features. When used in conjunction with the Ocropus feature set, we further improve the performance of the block classification system to obtain a classification accuracy of 99.21%.
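
    The aggregation step shared by bagging-style ensembles, hard majority voting over per-classifier zone labels, can be sketched in a few lines. This is a minimal illustration only; the base-classifier outputs below are hypothetical, not results from the paper.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier zone labels by hard majority vote.
    predictions: list of label lists, one per base classifier."""
    combined = []
    for labels in zip(*predictions):
        # The most common label among the base classifiers wins;
        # Counter.most_common breaks ties by first occurrence.
        combined.append(Counter(labels).most_common(1)[0][0])
    return combined

# Three hypothetical base classifiers labelling four zones:
clf_outputs = [
    ["text", "image", "noise", "text"],
    ["text", "graphics", "noise", "handwriting"],
    ["handwriting", "image", "text", "text"],
]
# majority_vote(clf_outputs) -> ["text", "image", "noise", "text"]
```

    Boosting differs in that the votes are weighted by each base classifier's training performance, but the combination idea is the same.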

  8. Plant Phenotyping using Probabilistic Topic Models: Uncovering the Hyperspectral Language of Plants

    PubMed Central

    Wahabzada, Mirwaes; Mahlein, Anne-Katrin; Bauckhage, Christian; Steiner, Ulrike; Oerke, Erich-Christian; Kersting, Kristian

    2016-01-01

    Modern phenotyping and plant disease detection methods, based on optical sensors and information technology, provide promising approaches to plant research and precision farming. In particular, hyperspectral imaging has been found to reveal physiological and structural characteristics in plants and to allow for tracking physiological dynamics due to environmental effects. In this work, we present an approach to plant phenotyping that integrates non-invasive sensors, computer vision, as well as data mining techniques and allows for monitoring how plants respond to stress. To uncover latent hyperspectral characteristics of diseased plants reliably and in an easy-to-understand way, we “wordify” the hyperspectral images, i.e., we turn the images into a corpus of text documents. Then, we apply probabilistic topic models, a well-established natural language processing technique that identifies content and topics of documents. Based on recent regularized topic models, we demonstrate that one can automatically track the development of three foliar diseases of barley. We also present a visualization of the topics that provides plant scientists with an intuitive tool for hyperspectral imaging. In short, our analysis and visualization of characteristic topics found during symptom development and disease progress reveal the hyperspectral language of plant diseases. PMID:26957018
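
    The "wordification" step can be illustrated with a minimal sketch: quantize each pixel's spectrum to its nearest prototype in a spectral vocabulary and count occurrences, yielding the bag-of-words document a topic model consumes. The two-band spectra and two-word vocabulary below are toy assumptions, not the paper's learned vocabulary.

```python
def nearest_word(spectrum, vocabulary):
    """Index of the prototype spectrum closest (squared Euclidean
    distance) to this pixel's spectrum."""
    best, best_d = 0, float("inf")
    for k, proto in enumerate(vocabulary):
        d = sum((s - p) ** 2 for s, p in zip(spectrum, proto))
        if d < best_d:
            best, best_d = k, d
    return best

def wordify(image_pixels, vocabulary):
    """Turn a hyperspectral image (a list of per-pixel spectra) into
    a bag-of-words count vector over the spectral vocabulary."""
    counts = [0] * len(vocabulary)
    for spectrum in image_pixels:
        counts[nearest_word(spectrum, vocabulary)] += 1
    return counts

# Toy vocabulary of two prototype spectra (two spectral bands each):
vocab = [[0.0, 0.0], [1.0, 1.0]]
doc = wordify([[0.1, 0.0], [0.9, 1.0], [1.0, 1.0]], vocab)  # -> [1, 2]
```

    Each image becomes one such count vector, and a corpus of them is what a topic model factors into disease-specific "topics".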

  9. Image Analysis for Facility Siting: a Comparison of Low- and High-altitude Image Interpretability for Land Use/land Cover Mapping

    NASA Technical Reports Server (NTRS)

    Borella, H. M.; Estes, J. E.; Ezra, C. E.; Scepan, J.; Tinney, L. R.

    1982-01-01

    For two test sites in Pennsylvania, the interpretability of commercially acquired low-altitude and existing high-altitude aerial photography is documented in terms of time, costs, and accuracy for Anderson Level II land use/land cover mapping. Information extracted from the imagery is to be used in the evaluation process for siting energy facilities. Land use/land cover maps were drawn at 1:24,000 scale using commercially flown color infrared photography obtained from the United States Geological Survey's EROS Data Center. Detailed accuracy assessment of the maps generated by manual image analysis was accomplished employing a stratified unaligned sampling design ensuring adequate class representation. Both 'area-weighted' and 'by-class' accuracies were documented and field-verified. A discrepancy map was also drawn to illustrate differences in classifications between the two map scales. Results show that the 1:24,000 scale map set was more accurate (99% vs. 94% area-weighted) than the 1:62,500 scale set, especially when sampled by class (96% vs. 66%). The 1:24,000 scale maps were also more time-consuming and costly to produce, due mainly to higher image acquisition costs.

  10. Process thresholds: Report of Working Group Number 3

    NASA Technical Reports Server (NTRS)

    Williams, R. S., Jr.

    1985-01-01

    The Process Thresholds Working Group concerned itself with whether a geomorphic process to be monitored on satellite imagery must be global, regional, or local in its effect on the landscape. It was pointed out that major changes in the types and magnitudes of processes operating in an area are needed to be detectable on a global scale. It was concluded from a review of geomorphic studies which used satellite images that they do record change in the landscape over time (on a time-lapse basis) as a result of one or more processes. In fact, this may be one of the most important attributes of space imagery, in that one can document landform changes in the form of a permanent historical record. The group also discussed the important subject of the acquisition of basic data sets by different satellite imaging systems. Geomorphologists already have available one near-global basic data set resulting from the early LANDSAT program, especially images acquired by LANDSATs 1 and 2. Such historic basic data sets can serve as a benchmark for comparison with landscape changes that take place in the future. They can also serve as a benchmark for comparison with landscape changes that have occurred in the past (as recorded by images, photography, and maps).

  11. Cultural Heritage: An example of graphical documentation with automated photogrammetric systems

    NASA Astrophysics Data System (ADS)

    Giuliano, M. G.

    2014-06-01

    In the field of Cultural Heritage, the use of automated photogrammetric systems based on Structure from Motion (SfM) techniques is widespread, in particular for the study and documentation of ancient ruins. This work was carried out during the PhD cycle that produced the "Carta Archeologica del territorio intorno al monte Massico". The study presents the archaeological documentation of the mausoleum "Torre del Ballerino", located in the south-west area of Falciano del Massico, along the Via Appia. The graphic documentation was achieved using a photogrammetric system (Image-Based Modeling) and a classical survey with a Nikon Nivo C total station. Data acquisition was carried out with a Canon EOS 5D Mark II digital camera and a Canon EF 17-40 mm f/4L USM lens @ 20 mm, with images captured in RAW and corrected in Adobe Lightroom. During data processing, camera calibration and orientation were carried out with the software Agisoft PhotoScan, and the final result is a scaled 3D model of the monument, imported into MeshLab for viewing from different angles. Three orthophotos in JPG format were extracted from the model and then imported into AutoCAD to obtain façade surveys.

  12. Use of spectral imaging for documentation of skin parameters in face lift procedure

    NASA Astrophysics Data System (ADS)

    Ruvolo, Eduardo C., Jr.; Bargo, Paulo R.; Dietz, Tim; Scamuffa, Robin; Shoemaker, Kurt; DiBernardo, Barry; Kollias, Nikiforos

    2010-02-01

    In rhytidectomy, postoperative edema (swelling) and ecchymosis (bruising) can influence the cosmetic result. Evaluation of edema has typically been performed by visual inspection by a trained physician using a four-level or, more commonly, a two-level grading (1). Few instruments exist that are capable of quantitatively assessing edema and ecchymosis in skin. Here we demonstrate that edema and ecchymosis can be objectively quantified in vivo by a multispectral clinical imaging system (MSCIS). After a feasibility study of induced stasis in the forearms of volunteers and a benchtop study of an edema model, five subjects undergoing rhytidectomy were recruited for a clinical study, and multispectral images were taken at approximately days 0, 1, 3, 6, 8, 10, 15, 22 and 29 (according to the day of their visit). Apparent concentrations of oxy-hemoglobin, deoxy-hemoglobin (ecchymosis), melanin, scattering and water (edema) were calculated for each pixel of a spectral image stack. Bilirubin was extracted from the blue channel of cross-polarized images. These chromophore maps are two-dimensional quantitative representations of the involved skin areas that demonstrate characteristics of the patient's recovery process after the procedure. We conclude that multispectral imaging can be a valuable noninvasive tool in the study of edema and ecchymosis and can be used to document these chromophores in vivo and determine the efficacy of treatments in a clinical setting.

  13. Using component technologies for web based wavelet enhanced mammographic image visualization.

    PubMed

    Sakellaropoulos, P; Costaridou, L; Panayiotakis, G

    2000-01-01

    The poor contrast detectability of mammography can be dealt with by domain-specific software visualization tools. Remote desktop client access and time performance limitations of a previously reported visualization tool are addressed, aiming at more efficient visualization of mammographic image resources existing on web or PACS image servers. This effort is also motivated by the fact that, at present, web browsers do not support domain-specific medical image visualization. To deal with desktop client access, the tool was redesigned by exploring component technologies, enabling the integration of stand-alone, domain-specific mammographic image functionality in a web browsing environment (web adaptation). The integration method is based on ActiveX Document Server technology. ActiveX Document is a part of Object Linking and Embedding (OLE) extensible systems object technology, offering new services in existing applications. The standard DICOM 3.0 part 10 compatible image-format specification Papyrus 3.0 is supported, in addition to standard digitization formats such as TIFF. The visualization functionality of the tool has been enhanced by including a fast wavelet transform implementation, which allows for real-time wavelet-based contrast enhancement and denoising operations. Initial use of the tool with mammograms of various breast structures demonstrated its potential in improving the visualization of diagnostic mammographic features. Web adaptation and real-time wavelet processing enhance the potential of the previously reported tool in remote diagnosis and education in mammography.

  14. A low-cost approach for the documentation and monitoring of an archaeological excavation site

    NASA Astrophysics Data System (ADS)

    Hoffmeister, Dirk; Orrin, Joel; Richter, Jürgen

    2016-04-01

    The documentation of archaeological excavations, and in particular constant monitoring, is often time-consuming and dependent on human capabilities. Thus, remote sensing methods, which allow an objective reproduction of the current state of an excavation and provide additional information, are of interest. Therefore, a low-cost approach was tested on an open-air excavation site for two days in September 2015. The Magdalenian excavation site of Bad Kösen-Lengefeld, Germany, is an important site in a system of about 100 sites in the area of the small rivers Saale and Unstrut. The whole site and the surrounding area (200 by 200 m) was first observed by a GoPro Hero 3+ mounted on a DJI Phantom 2 UAV. Ground control points were set up in a regular grid covering the whole area. The achieved accuracy is 20 mm with a ground resolution of 45 mm. As a test, the GoPro Hero 3+ camera was additionally mounted on a small, extendable pole. With this second low-cost, easy-to-apply monitoring approach, pictures were taken automatically every second in a stop-and-go mode. In order to capture the excavation pit (7 by 4 m), two different angles were used for holding the pole, focused on the middle and on the border of the pit. This procedure was repeated on the following day in order to document the excavation process. For the registration of the images, the already existing, surveyed excavation nails were used, which are equally distributed over the whole site in a 1 m grid. Thus, a highly accurate registration of the images was possible (<10 mm). In order to verify the accuracy of the derived data, the whole site was also observed with a Faro Focus 3D LS 120 laser scanner. The measurements of this device were registered via spherical targets, which were measured in the same reference system. The accuracy of the registration and the ground resolution of the image-based approach were about 4 mm on both days. From these two measurements, the progress of the excavation was easily derived by computing the differences between the point clouds. The mean difference of about 5 mm between the laser scanner measurements and the corresponding image observations confirms the overall accuracy. The results show that the study site can be documented and monitored quickly and easily at high resolution with low-cost systems. The approach uses the surveying information of already existing measurements, as tachymetric measurements are conducted on nearly all excavation sites. Overall, the presented approach worked successfully. The high-resolution dataset makes it easy to document the ongoing excavation. A daily observation would lead to a complete documentation in 3D.

  15. Image Segmentation of Historical Handwriting from Palm Leaf Manuscripts

    NASA Astrophysics Data System (ADS)

    Surinta, Olarik; Chamchong, Rapeeporn

    Palm leaf manuscripts were one of the earliest forms of written media and were used in Southeast Asia to store early written knowledge about subjects such as medicine, Buddhist doctrine and astrology. Historical handwritten palm leaf manuscripts are therefore important sources for the study of historical documents. This paper presents an image segmentation method for historical handwriting from palm leaf manuscripts. The process is composed of three steps: 1) background elimination, separating text from background with Otsu's algorithm; 2) line segmentation; and 3) character segmentation using the image histogram. The end result is a set of character images. The results from this research may be applied to optical character recognition (OCR) in the future.
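
    Otsu's method, used in step 1, picks the gray-level threshold that maximizes the between-class variance of the histogram. A minimal pure-Python sketch follows (8-bit gray levels are assumed; this is an illustration, not the authors' code):

```python
def otsu_threshold(pixels, levels=256):
    """Return the gray level that best separates ink from background
    by maximizing between-class variance of the histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))

    best_t, best_var = 0, -1.0
    w_bg = 0      # background (dark-class) pixel count so far
    sum_bg = 0.0  # background intensity mass so far
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / w_fg
        between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t
```

    Pixels at or below the returned threshold are treated as text strokes and the rest as leaf background; the subsequent line and character segmentation then operates on row and column histograms of the binarized image.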

  16. Page segmentation using script identification vectors: A first look

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hochberg, J.; Cannon, M.; Kelly, P.

    1997-07-01

    Document images in which different scripts, such as Chinese and Roman, appear on a single page pose a problem for optical character recognition (OCR) systems. This paper explores the use of script identification vectors in the analysis of multilingual document images. A script identification vector is calculated for each connected component in a document. The vector expresses the closest distance between the component and templates developed for each of thirteen scripts, including Arabic, Chinese, Cyrillic, and Roman. The authors calculate the first three principal components within the resulting thirteen-dimensional space for each image. By mapping these components to red, green, and blue, they can visualize the information contained in the script identification vectors. The visualization of several multilingual images suggests that the script identification vectors can be used to segment images into script-specific regions as large as several paragraphs or as small as a few characters. The visualized vectors also reveal distinctions within scripts, such as font in Roman documents, and kanji vs. kana in Japanese. Results are best for documents containing highly dissimilar scripts such as Roman and Japanese. Documents containing similar scripts, such as Roman and Cyrillic, will require further investigation.
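
    The PCA-to-RGB visualization step can be sketched generically: project each component's 13-dimensional script-distance vector onto the first three principal components and rescale to [0, 1] for color. The vectors below are synthetic stand-ins, not the paper's templates or data:

```python
import numpy as np

def vectors_to_rgb(vectors):
    """Project 13-dimensional script-ID vectors onto their first three
    principal components and rescale each component to [0, 1] for RGB."""
    centered = vectors - vectors.mean(axis=0)
    # Principal axes are the right singular vectors of the centred data.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    pcs = centered @ vt[:3].T                             # (n, 3) PC scores
    lo, hi = pcs.min(axis=0), pcs.max(axis=0)
    return (pcs - lo) / np.where(hi > lo, hi - lo, 1.0)   # min-max to [0, 1]

# Hypothetical distances of 6 connected components to 13 script templates:
# 3 "Roman-like" components (close to the first 6 templates) and
# 3 "kanji-like" components (close to the last 7 templates).
rng = np.random.default_rng(0)
roman = rng.normal(loc=[1] * 6 + [5] * 7, scale=0.1, size=(3, 13))
kanji = rng.normal(loc=[5] * 6 + [1] * 7, scale=0.1, size=(3, 13))
rgb = vectors_to_rgb(np.vstack([roman, kanji]))
print(rgb.shape)  # (6, 3) -- one colour per connected component
```

    Coloring each connected component of the page image by its RGB triple then makes script-specific regions visually apparent.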

  17. Restoring warped document images through 3D shape modeling.

    PubMed

    Tan, Chew Lim; Zhang, Li; Zhang, Zheng; Xia, Tao

    2006-02-01

    Scanning a document page from a thick bound volume often results in two kinds of distortions in the scanned image, i.e., shade along the "spine" of the book and warping in the shade area. In this paper, we propose an efficient restoration method based on the discovery of the 3D shape of a book surface from the shading information in a scanned document image. From a technical point of view, this shape from shading (SFS) problem in real-world environments is characterized by 1) a proximal and moving light source, 2) Lambertian reflection, 3) nonuniform albedo distribution, and 4) document skew. Taking all these factors into account, we first build practical models (consisting of a 3D geometric model and a 3D optical model) for the practical scanning conditions to reconstruct the 3D shape of the book surface. We next restore the scanned document image using this shape based on deshading and dewarping models. Finally, we evaluate the restoration results by comparing our estimated surface shape with the real shape as well as the OCR performance on original and restored document images. The results show that the geometric and photometric distortions are mostly removed and the OCR results are improved markedly.

  18. Simplified generation of biomedical 3D surface model data for embedding into 3D portable document format (PDF) files for publication and education.

    PubMed

    Newe, Axel; Ganslandt, Thomas

    2013-01-01

    The usefulness of the 3D Portable Document Format (PDF) for clinical, educational, and research purposes has recently been shown. However, the lack of a simple tool for converting biomedical data into model data in the necessary Universal 3D (U3D) file format is a drawback for the broad acceptance of this new technology. A new module for the image processing and rapid prototyping framework MeVisLab not only provides a platform-independent way to create surface meshes from biomedical/DICOM and other data and to export them to U3D; it also lets the user add metadata to these meshes to predefine colors and names that can be processed by PDF authoring software while generating 3D PDF files. Furthermore, the source code of the module is available and well documented, so that it can easily be modified for one's own purposes.

  19. Hands-Free Image Capture, Data Tagging and Transfer Using Google Glass: A Pilot Study for Improved Wound Care Management

    PubMed Central

    Aldaz, Gabriel; Shluzas, Lauren Aquino; Pickham, David; Eris, Ozgur; Sadler, Joel; Joshi, Shantanu; Leifer, Larry

    2015-01-01

    Chronic wounds, including pressure ulcers, compromise the health of 6.5 million Americans and pose an annual estimated burden of $25 billion to the U.S. health care system. When treating chronic wounds, clinicians must use meticulous documentation to determine wound severity and to monitor healing progress over time. Yet, current wound documentation practices using digital photography are often cumbersome and labor intensive. The process of transferring photos into Electronic Medical Records (EMRs) requires many steps and can take several days. Newer smartphone and tablet-based solutions, such as Epic Haiku, have reduced EMR upload time. However, issues still exist involving patient positioning, image-capture technique, and patient identification. In this paper, we present the development and assessment of the SnapCap System for chronic wound photography. Through leveraging the sensor capabilities of Google Glass, SnapCap enables hands-free digital image capture, and the tagging and transfer of images to a patient’s EMR. In a pilot study with wound care nurses at Stanford Hospital (n=16), we (i) examined feature preferences for hands-free digital image capture and documentation, and (ii) compared SnapCap to the state of the art in digital wound care photography, the Epic Haiku application. We used the Wilcoxon Signed-ranks test to evaluate differences in mean ranks between preference options. Preferred hands-free navigation features include barcode scanning for patient identification, Z(15) = -3.873, p < 0.001, r = 0.71, and double-blinking to take photographs, Z(13) = -3.606, p < 0.001, r = 0.71. In the comparison between SnapCap and Epic Haiku, the SnapCap System was preferred for sterile image-capture technique, Z(16) = -3.873, p < 0.001, r = 0.68. Responses were divided with respect to image quality and overall ease of use. 
The study’s results have contributed to the future implementation of new features aimed at enhancing mobile hands-free digital photography for chronic wound care. PMID:25902061

  20. Large-scale image region documentation for fully automated image biomarker algorithm development and evaluation

    PubMed Central

    Reeves, Anthony P.; Xie, Yiting; Liu, Shuang

    2017-01-01

    Abstract. With the advent of fully automated image analysis and modern machine learning methods, there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. This paper presents a method and implementation for facilitating such datasets that addresses the critical issue of size scaling for algorithm validation and evaluation; current evaluation methods that are usually used in academic studies do not scale to large datasets. This method includes protocols for the documentation of many regions in very large image datasets; the documentation may be incrementally updated by new image data and by improved algorithm outcomes. This method has been used for 5 years in the context of chest health biomarkers from low-dose chest CT images that are now being used with increasing frequency in lung cancer screening practice. The lung scans are segmented into over 100 different anatomical regions, and the method has been applied to a dataset of over 20,000 chest CT images. Using this framework, the computer algorithms have been developed to achieve over 90% acceptable image segmentation on the complete dataset. PMID:28612037

  1. From Ephemeral to Legitimate: An Inquiry into Television's Material Traces in Archival Spaces, 1950s-1970s

    ERIC Educational Resources Information Center

    Bratslavsky, Lauren Michelle

    2013-01-01

    The dissertation offers a historical inquiry about how television's material traces entered archival spaces. Material traces refer to both the moving image products and the assortment of documentation about the processes of television as industrial and creative endeavors. By identifying the development of television-specific archives and…

  2. Robust Adaptive Thresholder For Document Scanning Applications

    NASA Astrophysics Data System (ADS)

    Hsing, To R.

    1982-12-01

    In document scanning applications, thresholding is used to obtain binary data from a scanner. However, due to (1) the wide range of different colored backgrounds, (2) density variations of printed text, and (3) shading effects caused by the optical system, adaptive thresholding is highly desirable for enhancing the useful information. This paper describes a new robust adaptive thresholder for obtaining valid binary images. It is basically a memory-type algorithm which dynamically updates the black and white reference levels to optimize a local adaptive threshold function. High image quality was obtained with this algorithm on different types of simulated test patterns. The software algorithm is described, and experimental results are presented to illustrate the procedure. Results also show that the techniques described here can be used for real-time signal processing in a variety of applications.
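
    A memory-type thresholder of the kind described can be sketched as follows: running black and white reference levels are updated exponentially as pixels are classified, and each pixel is compared against their midpoint. The update rule and constants here are illustrative assumptions, not the paper's actual algorithm:

```python
def adaptive_threshold(scanline, alpha=0.9, init_white=255.0, init_black=0.0):
    """Memory-type adaptive thresholder (sketch): classify each pixel against
    the midpoint of running black/white reference levels, then fold the pixel
    into whichever reference it matched (exponential update, memory alpha)."""
    white, black = init_white, init_black
    bits = []
    for pixel in scanline:
        threshold = (white + black) / 2.0
        if pixel >= threshold:                  # background (white)
            bits.append(1)
            white = alpha * white + (1 - alpha) * pixel
        else:                                   # text (black)
            bits.append(0)
            black = alpha * black + (1 - alpha) * pixel
    return bits

# Text (~60) on a background shading from 250 down to 160 across the scanline:
# a single fixed threshold near 127 would still work here, but the references
# track the drifting background so the margin stays centred as shading worsens.
line = [250, 245, 60, 230, 220, 65, 200, 190, 62, 170, 160]
print(adaptive_threshold(line))  # [1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
```

    The text pixels (0 bits) are picked out correctly even as the background level falls, because the white reference decays toward the observed background.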

  3. Real-time text extraction based on the page layout analysis system

    NASA Astrophysics Data System (ADS)

    Soua, M.; Benchekroun, A.; Kachouri, R.; Akil, M.

    2017-05-01

    Several approaches have been proposed to extract text from scanned documents. However, text extraction in heterogeneous documents remains a real challenge: the text varies in size, style, and orientation, and document region backgrounds can be complex. Recently, we proposed the improved hybrid binarization based on K-means method (I-HBK) to extract text suitably from heterogeneous documents. In this method, the Page Layout Analysis (PLA) stage of the Tesseract OCR engine is used to identify text and image regions, and our hybrid binarization is then applied separately to each kind of region. On one side, gamma correction is applied before processing image regions; on the other side, binarization is performed directly on text regions. A foreground and background color study is then performed to correct inverted region colors. Finally, characters are located in the binarized regions using the PLA algorithm. In this work, we extend the integration of the PLA algorithm within the I-HBK method. In addition, to speed up the text/image separation step, we employ an efficient GPU acceleration. Through the performed experiments, we demonstrate the high F-measure accuracy of the PLA algorithm, reaching 95% on the LRDE dataset. We also compare the sequential and parallel PLA versions; the parallel implementation on a GPU GTX 660 achieves a speedup of 3.7x over the CPU version.
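
    The gamma-correction pre-step applied to image regions can be sketched generically. The exponent below is an assumed illustrative value; the paper's actual parameters are not reproduced here:

```python
import numpy as np

def gamma_correct(img, gamma=0.5):
    """Power-law (gamma) correction on an 8-bit image: normalise to [0, 1],
    raise to the gamma exponent, and rescale. gamma < 1 brightens midtones,
    which can help separate faint text from a dark region background."""
    norm = img.astype(float) / 255.0
    return (norm ** gamma * 255.0).astype(np.uint8)

img = np.array([[0, 64, 128, 255]], dtype=np.uint8)
print(gamma_correct(img).tolist())  # [[0, 127, 180, 255]]
```

    Endpoints 0 and 255 are fixed; only midtones move, so the subsequent binarization sees a better-separated histogram.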

  4. BOREAS TE-18, 60-m, Radiometrically Rectified Landsat TM Imagery

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Knapp, David

    2000-01-01

    The BOREAS TE-18 team used a radiometric rectification process to produce standardized DN values for a series of Landsat TM images of the BOREAS SSA and NSA in order to compare images that were collected under different atmospheric conditions. The images for each study area were referenced to an image that had very clear atmospheric qualities. The reference image for the SSA was collected on 02-Sep-1994, while the reference image for the NSA was collected on 21-Jun-1995. The 23 rectified images cover the period of 07-Jul-1985 to 18-Sep-1994 in the SSA and 22-Jun-1984 to 09-Jun-1994 in the NSA. Each of the reference scenes had coincident atmospheric optical thickness measurements made by RSS-11. The radiometric rectification process is described in more detail by Hall et al. (1991). The original Landsat TM data were received from CCRS for use in the BOREAS project. Due to the nature of the radiometric rectification process and copyright issues, the full-resolution (30-m) images may not be publicly distributed. However, this spatially degraded 60-m resolution version of the images may be openly distributed and is available on the BOREAS CD-ROM series. After the radiometric rectification processing, the original data were degraded to a 60-m pixel size from the original 30-m pixel size by averaging the data over a 2- by 2-pixel window. The data are stored in binary image-format files. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Activity Archive Center (DAAC).

  5. Applications of artificial intelligence to space station: General purpose intelligent sensor interface

    NASA Technical Reports Server (NTRS)

    Mckee, James W.

    1988-01-01

    This final report describes the accomplishments of the General Purpose Intelligent Sensor Interface task of the Applications of Artificial Intelligence to Space Station grant for the period from October 1, 1987 through September 30, 1988. Portions of the First Biannual Report not revised will not be included but only referenced. The goal is to develop an intelligent sensor system that will simplify the design and development of expert systems using sensors of the physical phenomena as a source of data. This research will concentrate on the integration of image processing sensors and voice processing sensors with a computer designed for expert system development. The result of this research will be the design and documentation of a system in which the user will not need to be an expert in such areas as image processing algorithms, local area networks, image processor hardware selection or interfacing, television camera selection, voice recognition hardware selection, or analog signal processing. The user will be able to access data from video or voice sensors through standard LISP statements without any need to know about the sensor hardware or software.

  6. Forensic hash for multimedia information

    NASA Astrophysics Data System (ADS)

    Lu, Wenjun; Varna, Avinash L.; Wu, Min

    2010-01-01

    Digital multimedia such as images and videos are prevalent on today's internet and cause significant social impact, which can be evidenced by the proliferation of social networking sites with user generated contents. Due to the ease of generating and modifying images and videos, it is critical to establish trustworthiness for online multimedia information. In this paper, we propose novel approaches to perform multimedia forensics using compact side information to reconstruct the processing history of a document. We refer to this as FASHION, standing for Forensic hASH for informatION assurance. Based on the Radon transform and scale space theory, the proposed forensic hash is compact and can effectively estimate the parameters of geometric transforms and detect local tampering that an image may have undergone. Forensic hash is designed to answer a broader range of questions regarding the processing history of multimedia data than the simple binary decision from traditional robust image hashing, and also offers more efficient and accurate forensic analysis than multimedia forensic techniques that do not use any side information.
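
    As a much-simplified illustration of a projection-based compact hash (the paper's FASHION construction uses the Radon transform and scale space; the toy version below uses only axis-aligned projection profiles, so this is not the actual scheme), one can see how a compact side-information signature still reveals a geometric transform such as a 90° rotation:

```python
import numpy as np

def projection_hash(img, bins=8):
    """Toy 'forensic hash': coarse row/column projection profiles,
    normalised so the signature is compact and insensitive to
    global brightness scaling."""
    rows = img.sum(axis=1).astype(float)
    cols = img.sum(axis=0).astype(float)
    def coarse(p):
        p = p.reshape(bins, -1).mean(axis=1)  # pool into `bins` buckets
        return p / p.sum()
    return np.r_[coarse(rows), coarse(cols)]  # 2*bins floats per image

rng = np.random.default_rng(1)
original = rng.integers(0, 255, size=(64, 64))
rotated = np.rot90(original)  # counter-clockwise 90-degree rotation

h0, h1 = projection_hash(original), projection_hash(rotated)
# A 90-degree rotation turns the rotated image's row profile into the
# reversed column profile of the original, which the hash exposes:
print(bool(np.allclose(h1[:8], h0[8:][::-1])))  # True
```

    Matching the two hashes under candidate transforms (identity, rotations, flips) lets a verifier estimate which geometric operation the document image has undergone, using only the small hash rather than the full image.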

  7. DEVA: An extensible ontology-based annotation model for visual document collections

    NASA Astrophysics Data System (ADS)

    Jelmini, Carlo; Marchand-Maillet, Stephane

    2003-01-01

    The description of visual documents is a fundamental aspect of any efficient information management system, but the process of manually annotating large collections of documents is tedious and far from being perfect. The need for a generic and extensible annotation model therefore arises. In this paper, we present DEVA, an open, generic and expressive multimedia annotation framework. DEVA is an extension of the Dublin Core specification. The model can represent the semantic content of any visual document. It is described in the ontology language DAML+OIL and can easily be extended with external specialized ontologies, adapting the vocabulary to the given application domain. In parallel, we present the Magritte annotation tool, an early prototype that validates the DEVA features. Magritte allows the user to manually annotate image collections. It is designed with a modular and extensible architecture, which enables the user to dynamically adapt the user interface to specialized ontologies merged into DEVA.

  8. Text block identification in restoration process of Javanese script damage

    NASA Astrophysics Data System (ADS)

    Himamunanto, A. R.; Setyowati, E.

    2018-05-01

    Generally, a sheet of a document contains two kinds of information objects, namely text and images. The text block area in a sheet of a manuscript is the vital object, because the restoration process is applied only to this object; text block (text area) identification is therefore an important preliminary step. This paper describes the steps leading to the restoration of Javanese script damage. The process stages are: pre-processing, text block identification, segmentation, damage identification, and restoration. Test results based on the input manuscript “Hamong Tani” show that the system works with a success rate of 82.07%.

  9. Method and apparatus for imaging and documenting fingerprints

    DOEpatents

    Fernandez, Salvador M.

    2002-01-01

    The invention relates to a method and apparatus for imaging and documenting fingerprints. A fluorescent dye brought in intimate proximity with the lipid residues of a latent fingerprint is caused to fluoresce on exposure to light energy. The resulting fluorescing image may be recorded photographically.

  10. Wide-field time-resolved luminescence imaging and spectroscopy to decipher obliterated documents in forensic science

    NASA Astrophysics Data System (ADS)

    Suzuki, Mototsugu; Akiba, Norimitsu; Kurosawa, Kenji; Kuroki, Kenro; Akao, Yoshinori; Higashikawa, Yoshiyasu

    2016-01-01

    We applied a wide-field time-resolved luminescence (TRL) method with a pulsed laser and a gated intensified charge coupled device (ICCD) for deciphering obliterated documents for use in forensic science. The TRL method can nondestructively measure the dynamics of luminescence, including fluorescence and phosphorescence lifetimes, which prove to be useful parameters for image detection. First, we measured the TRL spectra of four brands of black porous-tip pen inks on paper to estimate their luminescence lifetimes. Next, we acquired the TRL images of 12 obliterated documents at various delay times and gate times of the ICCD. The obliterated contents were revealed in the TRL images because of the difference in the luminescence lifetimes of the inks. This method requires no pretreatment, is nondestructive, and has the advantage of wide-field imaging, which makes it easy to control the gate timing. This demonstration proves that TRL imaging and spectroscopy are powerful tools for forensic document examination.

  11. Low-cost conversion of the Polaroid MD-4 land camera to a digital gel documentation system.

    PubMed

    Porch, Timothy G; Erpelding, John E

    2006-04-30

    A simple, inexpensive design is presented for the rapid conversion of the popular MD-4 Polaroid land camera to a high quality digital gel documentation system. Images of ethidium bromide stained DNA gels captured using the digital system were compared to images captured on Polaroid instant film. Resolution and sensitivity were enhanced using the digital system. In addition to the low cost and superior image quality of the digital system, there is also the added convenience of real-time image viewing through the swivel LCD of the digital camera, wide flexibility of gel sizes, accurate automatic focusing, variable image resolution, and consistent ease of use and quality. Images can be directly imported to a computer by using the USB port on the digital camera, further enhancing the potential of the digital system for documentation, analysis, and archiving. The system is appropriate for use as a start-up gel documentation system and for routine gel analysis.

  12. Creating & using specimen images for collection documentation, research, teaching and outreach

    NASA Astrophysics Data System (ADS)

    Demouthe, J. F.

    2012-12-01

    In this age of digital media, there are many opportunities for use of good images of specimens. On-line resources such as institutional web sites and global sites such as PaleoNet and the Paleobiology Database provide venues for collection information and images. Pictures can also be made available to the general public through popular media sites such as Flickr and Facebook, where they can be retrieved and used by teachers, students, and the general public. The number of requests for specimen loans can be drastically reduced by offering the scientific community access to data and specimen images using the internet. This is an important consideration in these days of limited support budgets, since it reduces the amount of staff time necessary for giving researchers and educators access to collections. It also saves wear and tear on the specimens themselves. Many institutions now limit or refuse to send specimens out of their own countries because of the risks involved in going through security and customs. The internet can bridge political boundaries, allowing everyone equal access to collections. In order to develop photographic documentation of a collection, thoughtful preparation will make the process easier and more efficient. Acquire the necessary equipment, establish standards for images, and develop a simple workflow design. Manage images in the camera, and produce the best possible results, rather than relying on time-consuming editing after the fact. It is extremely important that the images of each specimen be of the highest quality and resolution. Poor quality, low resolution photos are not good for anything, and will often have to be retaken when another need arises. Repeating the photography process involves more handling of specimens and more staff time. Once good photos exist, smaller versions can be created for use on the web. The originals can be archived and used for publication and other purposes.

  13. Training-based descreening.

    PubMed

    Siddiqui, Hasib; Bouman, Charles A

    2007-03-01

    Conventional halftoning methods employed in electrophotographic printers tend to produce Moiré artifacts when used for printing images scanned from printed material, such as books and magazines. We present a novel approach for descreening color scanned documents aimed at providing an efficient solution to the Moiré problem in practical imaging devices, including copiers and multifunction printers. The algorithm works by combining two nonlinear image-processing techniques, resolution synthesis-based denoising (RSD), and modified smallest univalue segment assimilating nucleus (SUSAN) filtering. The RSD predictor is based on a stochastic image model whose parameters are optimized beforehand in a separate training procedure. Using the optimized parameters, RSD classifies the local window around the current pixel in the scanned image and applies filters optimized for the selected classes. The output of the RSD predictor is treated as a first-order estimate to the descreened image. The modified SUSAN filter uses the output of RSD for performing an edge-preserving smoothing on the raw scanned data and produces the final output of the descreening algorithm. Our method does not require any knowledge of the screening method, such as the screen frequency or dither matrix coefficients, that produced the printed original. The proposed scheme not only suppresses the Moiré artifacts, but, in addition, can be trained with intrinsic sharpening for deblurring scanned documents. Finally, once optimized for a periodic clustered-dot halftoning method, the same algorithm can be used to inverse halftone scanned images containing stochastic error diffusion halftone noise.
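
    The second stage, edge-preserving smoothing in the spirit of the SUSAN filter, can be sketched generically: each output pixel is a weighted mean of its neighbours, with weights falling off as intensity difference grows, so averaging does not cross strong edges. This is a plain SUSAN-style smoother for illustration, not the paper's modified filter or its RSD predictor:

```python
import numpy as np

def susan_smooth(img, radius=1, t=20.0):
    """SUSAN-style edge-preserving smoothing: weight each neighbour by
    exp(-(dI/t)^2), so pixels across a strong edge contribute ~0 and the
    edge stays sharp while flat-region noise is averaged away."""
    img = img.astype(float)
    h, w = img.shape
    out = img.copy()                      # borders left unfiltered for brevity
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            patch = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
            weights = np.exp(-((patch - img[y, x]) / t) ** 2)
            out[y, x] = (weights * patch).sum() / weights.sum()
    return out

# A noisy step edge: smoothing flattens the noise but keeps the edge sharp.
img = np.r_[np.full((4, 8), 50.0), np.full((4, 8), 200.0)]
img += np.random.default_rng(2).normal(0, 3, img.shape)
smoothed = susan_smooth(img)
# The ~150-level step between rows 3 and 4 survives the filtering:
print(round(float(smoothed[4, 4] - smoothed[3, 4])))
```

    In the descreening pipeline, the intensity-similarity test is what lets the filter suppress residual halftone noise without blurring the content edges identified by the RSD stage.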

  14. BOREAS TE-18, 30-m, Radiometrically Rectified Landsat TM Imagery

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Knapp, David

    2000-01-01

    The BOREAS TE-18 team used a radiometric rectification process to produce standardized DN values for a series of Landsat TM images of the BOREAS SSA and NSA in order to compare images that were collected under different atmospheric conditions. The images for each study area were referenced to an image that had very clear atmospheric qualities. The reference image for the SSA was collected on 02-Sep-1994, while the reference image for the NSA was collected on 21-Jun-1995. The 23 rectified images cover the period of 07-Jul-1985 to 18-Sep-1994 in the SSA and from 22-Jun-1984 to 09-Jun-1994 in the NSA. Each of the reference scenes had coincident atmospheric optical thickness measurements made by RSS-11. The radiometric rectification process is described in more detail by Hall et al. (1991). The original Landsat TM data were received from CCRS for use in the BOREAS project. The data are stored in binary image-format files. Due to the nature of the radiometric rectification process and copyright issues, these full-resolution images may not be publicly distributed. However, a spatially degraded 60-m resolution version of the images is available on the BOREAS CD-ROM series. See Sections 15 and 16 for information about how to possibly acquire the full resolution data. Information about the full-resolution images is provided in an inventory listing on the CD-ROMs. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Activity Archive Center (DAAC).

  15. Main image file tape description

    USGS Publications Warehouse

    Warriner, Howard W.

    1980-01-01

    This Main Image File Tape document defines the data content and file structure of the Main Image File Tape (MIFT) produced by the EROS Data Center (EDC). This document also defines an INQUIRY tape, which is just a subset of the MIFT. The format of the INQUIRY tape is identical to the MIFT except for two records; therefore, with the exception of these two records (described elsewhere in this document), every remark made about the MIFT is true for the INQUIRY tape.

  16. Testing photogrammetry-based techniques for three-dimensional surface documentation in forensic pathology.

    PubMed

    Urbanová, Petra; Hejna, Petr; Jurda, Mikoláš

    2015-05-01

    Three-dimensional surface technologies, particularly close-range photogrammetry and optical surface scanning, have recently advanced into affordable, flexible, and accurate techniques. Forensic postmortem investigation as performed on a daily basis, however, has not yet fully benefited from their potential. In the present paper, we tested two approaches to 3D external body documentation: digital camera-based photogrammetry combined with the commercial Agisoft PhotoScan(®) software, and the stereophotogrammetry-based Vectra H1(®), a portable handheld surface scanner. Three human subjects were selected for the study: a living person, a 25-year-old female, and two forensic cases admitted for postmortem examination at the Department of Forensic Medicine, Hradec Králové, Czech Republic (both 63-year-old males), one dead of traumatic, self-inflicted injuries (suicide by hanging), the other diagnosed with heart failure. All three cases were photographed in a 360° manner with a Nikon 7000 digital camera and simultaneously documented with the handheld scanner. In addition to recording the pre-autopsy phase of the forensic cases, both techniques were employed at various stages of the autopsy. The sets of collected digital images (approximately 100 per case) were further processed to generate point clouds and 3D meshes. The final 3D models (a pair per individual) were assessed for numbers of points and polygons, inspected visually, and compared quantitatively using an ICP alignment algorithm and a point cloud comparison technique based on closest point-to-point distances. Both techniques proved to be easy to handle and equally laborious. While collecting the images at autopsy took around 20 min, the post-processing was much more time-demanding and required up to 10 h of computation time. Moreover, for full-body scanning, the post-processing of the handheld scanner required rather time-consuming manual image alignment.
In all instances the applied approaches produced high-resolution, photorealistic, real-sized or easy-to-calibrate 3D surface models. Both methods equally failed when the scanned body surface was covered with body hair or reflective moist areas. Still, it can be concluded that single-camera close-range photogrammetry and optical surface scanning with the Vectra H1 scanner represent relatively low-cost solutions shown to be beneficial for postmortem body documentation in forensic pathology.

  17. Building a print on demand web service

    NASA Astrophysics Data System (ADS)

    Reddy, Prakash; Rozario, Benedict; Dudekula, Shariff; V, Anil Dev

    2011-03-01

    There is considerable effort underway to digitize all books that have ever been printed, and a need for a service that can take raw book scans and convert them into Print on Demand (POD) books. Such a service augments the digitization effort and enables broader access for a wider audience. To make this service practical we identified three key challenges: (a) producing high-quality images by eliminating artifacts that exist due to the age of the document or that are introduced during the scanning process; (b) developing an efficient automated system to process book scans with minimum human intervention; and (c) building an ecosystem that allows the target audience to discover these books.

  18. Linear feature extraction from radar imagery: SBIR (Small Business Innovative Research) phase 2, option 1

    NASA Astrophysics Data System (ADS)

    Conner, Gary D.; Milgram, David L.; Lawton, Daryl T.; McConnell, Christopher C.

    1988-04-01

    The goal of this effort is to develop and demonstrate prototype processing capabilities for a knowledge-based system to automatically extract and analyze linear features from synthetic aperture radar (SAR) imagery. This effort constitutes Phase 2 funding through the Defense Small Business Innovative Research (SBIR) Program. Previous work examined the feasibility of the technology issues involved in the development of an automated linear feature extraction system. This Option 1 Final Report documents this examination and the technologies involved in automating this image understanding task. In particular, it reports on a major software delivery containing an image processing algorithmic base, a perceptual structures manipulation package, a preliminary hypothesis management framework and an enhanced user interface.

  19. Onboard shuttle on-line software requirements system: Prototype

    NASA Technical Reports Server (NTRS)

    Kolkhorst, Barbara; Ogletree, Barry

    1989-01-01

    The prototype discussed here was developed as proof of a concept for a system which could support high volumes of requirements documents with integrated text and graphics; the solution proposed here could be extended to other projects whose goal is to place paper documents in an electronic system for viewing and printing purposes. The technical problems (such as conversion of documentation between word processors, management of a variety of graphics file formats, and difficulties involved in scanning integrated text and graphics) would be very similar for other systems of this type. Indeed, technological advances in areas such as scanning hardware and software and display terminals insure that some of the problems encountered here will be solved in the near-term (less than five years). Examples of these solvable problems include automated input of integrated text and graphics, errors in the recognition process, and the loss of image information which results from the digitization process. The solution developed for the Online Software Requirements System is modular and allows hardware and software components to be upgraded or replaced as industry solutions mature. The extensive commercial software content allows the NASA customer to apply resources to solving the problem and maintaining documents.

  20. Restoring 2D content from distorted documents.

    PubMed

    Brown, Michael S; Sun, Mingxuan; Yang, Ruigang; Yun, Lin; Seales, W Brent

    2007-11-01

    This paper presents a framework to restore the 2D content printed on documents in the presence of geometric distortion and non-uniform illumination. Compared with text-based document imaging approaches that correct distortion to a level necessary to obtain sufficiently readable text or to facilitate optical character recognition (OCR), our work targets nontextual documents where the original printed content is desired. To achieve this goal, our framework acquires a 3D scan of the document's surface together with a high-resolution image. Conformal mapping is used to rectify geometric distortion by mapping the 3D surface back to a plane while minimizing angular distortion. This conformal "deskewing" assumes no parametric model of the document's surface and is suitable for arbitrary distortions. Illumination correction is performed by using the 3D shape to distinguish content gradient edges from illumination gradient edges in the high-resolution image. Integration is performed using only the content edges to obtain a reflectance image with significantly fewer illumination artifacts. This approach makes no assumptions about light sources and their positions. The results from the geometric and photometric correction are combined to produce the final output.

  1. Applied high-speed imaging for the icing research program at NASA Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Slater, Howard; Owens, Jay; Shin, Jaiwon

    1992-01-01

    The Icing Research Tunnel at NASA Lewis Research Center provides scientists a scaled, controlled environment to simulate natural icing events. The closed-loop, low speed, refrigerated wind tunnel offers the experimental capability to test for icing certification requirements, analytical model validation and calibration techniques, cloud physics instrumentation refinement, advanced ice protection systems, and rotorcraft icing methodology development. The test procedures for these objectives all require a high degree of visual documentation, both in real-time data acquisition and post-test image processing. Information is provided to scientific, technical, and industrial imaging specialists, as well as to research personnel, about the high-speed and conventional imaging systems used in the recent ice protection technology program. Various imaging examples for some of the tests are presented. Additional imaging examples are available from the NASA Lewis Research Center's Photographic and Printing Branch.

  2. Applied high-speed imaging for the icing research program at NASA Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Slater, Howard; Owens, Jay; Shin, Jaiwon

    1991-01-01

    The Icing Research Tunnel at NASA Lewis Research Center provides scientists a scaled, controlled environment to simulate natural icing events. The closed-loop, low speed, refrigerated wind tunnel offers the experimental capability to test for icing certification requirements, analytical model validation and calibration techniques, cloud physics instrumentation refinement, advanced ice protection systems, and rotorcraft icing methodology development. The test procedures for these objectives all require a high degree of visual documentation, both in real-time data acquisition and post-test image processing. Information is provided to scientific, technical, and industrial imaging specialists, as well as to research personnel, about the high-speed and conventional imaging systems used in the recent ice protection technology program. Various imaging examples for some of the tests are presented. Additional imaging examples are available from the NASA Lewis Research Center's Photographic and Printing Branch.

  3. Dealing with extreme data diversity: extraction and fusion from the growing types of document formats

    NASA Astrophysics Data System (ADS)

    David, Peter; Hansen, Nichole; Nolan, James J.; Alcocer, Pedro

    2015-05-01

    The growth in text data available online is accompanied by a growth in the diversity of available documents. Corpora with extreme heterogeneity in terms of file formats, document organization, page layout, text style, and content are common. The absence of meaningful metadata describing the structure of online and open-source data leads to text extraction results that contain no information about document structure and are cluttered with page headers and footers, web navigation controls, advertisements, and other items that are typically considered noise. We describe an approach to document structure and metadata recovery that uses visual analysis of documents to infer the communicative intent of the author. Our algorithm identifies the components of documents, such as titles, headings, and body content, based on their appearance. Because it operates on an image of a document, our technique can be applied to any type of document, including scanned images. Our approach to document structure recovery considers a finer-grained set of component types than prior approaches. In this initial work, we show that a machine learning approach to document structure recovery, using a feature set based on the geometry and appearance of images of documents, achieves a 60% greater F1-score than a baseline random classifier.

  4. Text extraction method for historical Tibetan document images based on block projections

    NASA Astrophysics Data System (ADS)

    Duan, Li-juan; Zhang, Xi-qun; Ma, Long-long; Wu, Jian

    2017-11-01

    Text extraction is an important initial step in digitizing historical documents. In this paper, we present a text extraction method for historical Tibetan document images based on block projections. The task of text extraction is treated as a text area detection and location problem. The images are divided equally into blocks, and the blocks are filtered using information about the categories of connected components and the corner point density. By analyzing the projections of the filtered blocks, the approximate text areas can be located and the text regions extracted. Experiments on a dataset of historical Tibetan documents demonstrate the effectiveness of the proposed method.
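
    The block-projection idea can be illustrated with a minimal pure-Python sketch. This is illustrative only, not the authors' implementation: blocks are filtered here by a simple ink-density criterion in place of the paper's connected-component and corner-density features, and the function name and thresholds are assumptions.

```python
def text_areas_by_projection(binary_img, block_h=64, min_density=0.02):
    """Locate approximate text areas via block projections.

    binary_img: 2-D list of rows, 1 = ink pixel, 0 = background.
    The page is split into horizontal blocks, near-empty blocks are
    filtered out, and surviving adjacent blocks are merged into
    candidate text areas, returned as (y_start, y_end) spans.
    """
    h, w = len(binary_img), len(binary_img[0])
    spans = []
    for y0 in range(0, h, block_h):
        block = binary_img[y0:y0 + block_h]
        ink = sum(sum(row) for row in block)       # projection of ink pixels
        if ink / (len(block) * w) >= min_density:  # filter near-empty blocks
            spans.append([y0, min(y0 + block_h, h)])
    merged = []                                    # merge adjacent blocks
    for y0, y1 in spans:
        if merged and merged[-1][1] == y0:
            merged[-1][1] = y1
        else:
            merged.append([y0, y1])
    return [tuple(s) for s in merged]
```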

  5. Rapidly Progressive Maxillary Atelectasis.

    PubMed

    Elkhatib, Ahmad; McMullen, Kyle; Hachem, Ralph Abi; Carrau, Ricardo L; Mastros, Nicholas

    2017-07-01

    Report of a patient with rapidly progressive maxillary atelectasis documented by sequential imaging. A 51-year-old man presented with left periorbital and retro-orbital pain associated with left nasal obstruction. An initial computed tomographic (CT) scan of the paranasal sinuses failed to reveal any significant abnormality. A subsequent CT scan, indicated for recurrence of symptoms 11 months later, showed significant maxillary atelectasis. An uncinectomy, maxillary antrostomy, and anterior ethmoidectomy resulted in complete resolution of the symptoms. Chronic maxillary atelectasis is most commonly a consequence of chronic rhinosinusitis. All previous reports have indicated a chronic process but lacked documentation of the course of the disease. This report documents a case of rapidly progressive chronic maxillary atelectasis with CT scans that demonstrate changes in the maxillary sinus (from normal to atelectatic) within 11 months.

  6. From Panoramic Photos to a Low-Cost Photogrammetric Workflow for Cultural Heritage 3d Documentation

    NASA Astrophysics Data System (ADS)

    D'Annibale, E.; Tassetti, A. N.; Malinverni, E. S.

    2013-07-01

    The research aims to optimize a workflow for architecture documentation: starting from panoramic photos, it tackles available instruments and technologies to propose an integrated, quick and low-cost solution for Virtual Architecture. The broader research background shows how to use spherical panoramic images for the architectural metric survey. The input data (oriented panoramic photos), the level of reliability and Image-based Modeling methods constitute an integrated and flexible 3D reconstruction approach: from the professional survey of cultural heritage to its communication in a virtual museum. The proposed work results from the integration and implementation of different techniques (Multi-Image Spherical Photogrammetry, Structure from Motion, Image-based Modeling) with the aim of achieving high metric accuracy and photorealistic performance. Different documentation options are possible within the proposed workflow: from the virtual navigation of spherical panoramas to complex solutions of simulation and virtual reconstruction. VR tools allow the integration of different technologies and the development of new solutions for virtual navigation. Image-based Modeling techniques allow 3D model reconstruction with photorealistic, high-resolution texture. The high resolution of the panoramic photos and the algorithms for panorama orientation and photogrammetric restitution ensure high accuracy and high-resolution texture. Automated techniques and their subsequent integration are the subject of this research. Suitably processed and integrated, the data provide different levels of analysis and virtual reconstruction, joining photogrammetric accuracy to the photorealistic performance of the shaped surfaces. Lastly, a new solution for virtual navigation is tested: within a single environment, it offers the chance to interact with the high-resolution oriented spherical panoramas and the reconstructed 3D model at once.

  7. Ballistic Signature Identification System Study

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The first phase of a research project directed toward development of a high speed automatic process to be used to match gun barrel signatures imparted to fired bullets was documented. An optical projection technique has been devised to produce and photograph a planar image of the entire signature, and the phototransparency produced is subjected to analysis using digital Fourier transform techniques. The success of this approach appears to be limited primarily by the accuracy of the photographic step since no significant processing limitations have been encountered.

  8. Incorporating the APS Catalog of the POSS I and Image Archive in ADS

    NASA Technical Reports Server (NTRS)

    Humphreys, Roberta M.

    1998-01-01

    The primary purpose of this contract was to develop the software to both create and access an on-line database of images from digital scans of the Palomar Sky Survey. This required modifying our DBMS (called Star Base) to create an image database from the actual raw pixel data of the scans. The digitized images are processed into a set of coordinate-reference index and pixel files that are stored as run-length files, thus achieving efficient lossless compression. For efficiency and ease of referencing, each digitized POSS I plate is then divided into 900 subplates. Our custom DBMS maps each query onto the corresponding POSS plate(s) and subplate(s). All images from the appropriate subplates are retrieved from disk with byte offsets taken from the index files. These are assembled on-the-fly into a GIF image file for browser display and a FITS format image file for retrieval. The FITS images have a pixel size of 0.33 arcseconds, and the FITS header contains astrometric and photometric information. This method keeps the disk requirements manageable while allowing for future improvements. When complete, the APS Image Database will contain over 130 GB of data. A set of web query forms is available on-line, as well as an on-line tutorial and documentation. The database is served to the Internet by a high-speed SGI server and a high-bandwidth disk system. The URL is http://aps.umn.edu/IDB/. The image database software is written in Perl and C and has been compiled on SGI computers running IRIX 5.3. A copy of the written documentation is included, and the software is on the accompanying exabyte tape.
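
    The run-length storage scheme can be sketched as follows. This is a generic lossless run-length codec, assumed for illustration rather than taken from the APS software; document scans compress well this way because long runs of identical background pixels collapse into single (value, count) pairs.

```python
def rle_encode(row):
    """Run-length encode one row of pixel values into (value, count) pairs."""
    runs = []
    for v in row:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1        # extend the current run
        else:
            runs.append([v, 1])     # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Expand (value, count) pairs back into the original pixel row."""
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out
```

Because decoding is an exact inverse of encoding, the compression is lossless, which is essential when the pixel data itself is the archival record.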

  9. Medical history of the representation of rosacea in the 19th century.

    PubMed

    Cribier, Bernard

    2013-12-01

    Throughout the 1800s, clinical illustrations helped to formalize what was then the recently developed field of dermatology. Knowledge of skin diseases was given new dimension as artists and clinicians alike strove to accurately document the physical characteristics of numerous dermatoses. Introduction of novel processes and refined techniques advanced the clinical use of disease images. The increasingly superior quality of these images aided in the early distinction between rosacea and acne. This article highlights these illustrative contributions in dermatology, and includes key images that serve as a road map to early clinical understanding of skin diseases. Copyright © 2013 American Academy of Dermatology, Inc. Published by Mosby, Inc. All rights reserved.

  10. Paperless protocoling of CT and MRI requests at an outpatient imaging center.

    PubMed

    Bassignani, Matthew J; Dierolf, David A; Roberts, David L; Lee, Steven

    2010-04-01

    We created our imaging center (IC) to move outpatient imaging from our busy inpatient imaging suite to an off-site location that is more inviting to ambulatory patients. Nevertheless, patients scanned at our IC still represent the depth and breadth of illness complexity seen in our tertiary care population. Thus, we protocol exams on an individualized basis to ensure that the referring clinician's question is fully answered by the exam performed. Previously, paper-based protocoling was a laborious process for all involved: the IC business office would fax requests to various reading rooms for protocoling by the subspecialist radiologists, who are 3 miles away at the main hospital. Once protocoled, reading room coordinators would fax the protocoled request back to the IC technical area in preparation for the next day's scheduled exams. At any breakdown in this process (e.g., lost paperwork), patient exams were delayed and clinicians and patients became upset. To improve this process, we developed a paper-free workflow in which protocoling is accomplished by scanning exam requests into our PACS. Using the common worklist functionality found in most PACS, we created "protocoling worklists" that contain these scanned documents. Radiologists protocol these studies in the PACS worklist (with the added benefit of having all imaging and report data available), and subsequently the technologists can see and act on the protocols they find in PACS. This process has significantly decreased interruptions in our busy reading rooms and decreased rework by IC staff.

  11. Unveiling molecular events in the brain by noninvasive imaging.

    PubMed

    Klohs, Jan; Rudin, Markus

    2011-10-01

    Neuroimaging allows researchers and clinicians to noninvasively assess the structure and function of the brain. With advances in imaging modalities such as magnetic resonance, nuclear, and optical imaging; the design of target-specific probes; and/or the introduction of reporter gene assays, these technologies are now capable of visualizing cellular and molecular processes in vivo. Undoubtedly, the system biological character of molecular neuroimaging, which allows for the study of molecular events in the intact organism, will enhance our understanding of the physiology and pathophysiology of the brain and improve our ability to diagnose and treat diseases more specifically. Technical/scientific challenges to be faced are the development of highly sensitive imaging modalities, the design of specific imaging probe molecules capable of penetrating the CNS and reporting on endogenous cellular and molecular processes, and the development of tools for extracting quantitative, biologically relevant information from imaging data. Today, molecular neuroimaging is still an experimental approach with limited clinical impact; this is expected to change within the next decade. This article provides an overview of molecular neuroimaging approaches with a focus on rodent studies documenting the exploratory state of the field. Concepts are illustrated by discussing applications related to the pathophysiology of Alzheimer's disease.

  12. The element of naturalness when evaluating image quality of digital photo documentation after sexual assault.

    PubMed

    Ernst, E J; Speck, P M; Fitzpatrick, J J

    2012-01-01

    Digital photography is a valuable adjunct for documenting physical injuries after sexual assault. In order for a digital photograph to have high image quality, it must exhibit a high level of naturalness. Digital photo documentation has varying degrees of naturalness; for a photograph to be natural, specific technical elements must be satisfied for the viewer. No tool was available to rate the naturalness of digital photo documentation of female genital injuries after sexual assault. The Photo Documentation Image Quality Scoring System (PDIQSS) tool was developed to rate the technical elements of naturalness. Using this tool, experts evaluated randomly selected digital photographs of female genital injuries captured following sexual assault. The naturalness of the digital photo documentation of female genital injuries following sexual assault was demonstrated when measured in all dimensions.

  13. Browsing Through Closed Books: Evaluation of Preprocessing Methods for Page Extraction of a 3-D CT Book Volume

    NASA Astrophysics Data System (ADS)

    Stromer, D.; Christlein, V.; Schön, T.; Holub, W.; Maier, A.

    2017-09-01

    It is often the case that a document can no longer be opened, page-turned or touched due to damage caused by aging processes, moisture or fire. To counter this, special imaging systems can be used. Our earlier work revealed that a common 3-D X-ray micro-CT scanner is well suited for imaging and reconstructing historical documents written with iron gall ink - an ink containing metallic particles. We acquired a volume of a self-made book with a single 3-D scan, without opening or page-turning. However, when investigating the reconstructed volume, we faced the problem of properly extracting single pages from the volume automatically, in acceptable time and without losing information from the writings. Within this work, we evaluate different pre-processing methods with respect to computation time and accuracy, both of which are decisive for a proper extraction of book pages from the reconstructed X-ray volume and the subsequent ink identification. The methods were tested on an extreme case with low resolution, noisy input data and wavy pages. Finally, we present results of the page extraction after applying the evaluated methods.

  14. Development and Evaluation of a Diagnostic Documentation Support System using Knowledge Processing

    NASA Astrophysics Data System (ADS)

    Makino, Kyoko; Hayakawa, Rumi; Terai, Koichi; Fukatsu, Hiroshi

    In this paper, we introduce a system that supports the creation of diagnostic reports. Diagnostic reports are documents in which radiologists describe the presence or absence of abnormalities in inspection images, such as CT and MRI, and summarize a patient's state and disease. Our system points out insufficiencies in reports created by younger doctors, using knowledge processing based on a medical knowledge dictionary. These indications cover not only clerical errors: the system also analyzes the purpose of the inspection and determines whether a comparison with a former inspection is required, or whether the description is incomplete. We verified our system using actual data of 2,233 report pairs, each pair comprising a report written by a younger doctor and the check result of that report by an experienced doctor. The verification showed that the string-analysis rules for detecting clerical errors and sentence wordiness obtained a recall of over 90% and a precision of over 75%. Moreover, the rules based on the medical knowledge dictionary for detecting a missing required comparison with a former inspection, and shortages in the description relative to the inspection purpose, obtained a recall of over 70%. From these results, we confirmed that our system contributes to the quality improvement of diagnostic reports. We expect that our system can comprehensively support diagnostic documentation by cooperating with the interface that refers to inspection images or past reports.

  15. Detection of figure and caption pairs based on disorder measurements

    NASA Astrophysics Data System (ADS)

    Faure, Claudie; Vincent, Nicole

    2010-01-01

    Figures inserted in documents mediate a kind of information for which the visual modality is more appropriate than text. A complete understanding of a figure often requires reading its caption or establishing a relationship with the main text through a numbered figure identifier replicated in both the caption and the main text. A figure and its caption are closely related; together they constitute a single multimodal component (FC-pair) that Document Image Analysis cannot extract with text and graphics segmentation alone. We propose a method that goes further than graphics and text segmentation in order to extract FC-pairs without performing a full labelling of the page components. Horizontal and vertical text lines are detected in the pages, and the graphics are associated with selected text lines to initiate the FC-pair detector. Spatial and visual disorder measures are introduced to define a layout model in terms of properties, which makes it possible to cope with most of the numerous spatial arrangements of graphics and text lines. The FC-pair detector performs operations to eliminate the layout disorder and assigns a quality value to each FC-pair. The processed documents were collected in medic@, the digital historical collection of the BIUM (Bibliothèque InterUniversitaire Médicale). A first set of 98 pages constitutes the design set; 298 further pages were collected to evaluate the system. The reported performances are the result of the full process, from the binarisation of the digital images to the detection of FC-pairs.

  16. Fundamental performance differences of CMOS and CCD imagers: part V

    NASA Astrophysics Data System (ADS)

    Janesick, James R.; Elliott, Tom; Andrews, James; Tower, John; Pinter, Jeff

    2013-02-01

    Previous papers delivered over the last decade have documented developmental progress made on large pixel scientific CMOS imagers that match or surpass CCD performance. New data and discussions presented in this paper include: 1) a new buried channel CCD fabricated on a CMOS process line, 2) new data products generated by high performance custom scientific CMOS 4T/5T/6T PPD pixel imagers, 3) ultimate CTE and speed limits for large pixel CMOS imagers, 4) fabrication and test results of a flight 4k x 4k CMOS imager for NRL's SoloHi Solar Orbiter Mission, 5) a progress report on an ultra-large stitched Mk x Nk CMOS imager, 6) data generated by on-chip sub-electron CDS signal chain circuitry used in our imagers, 7) CMOS and CMOS-CCD proton and electron radiation damage data for dose levels up to 10 Mrad, 8) discussions and data for a new class of PMOS pixel CMOS imagers and 9) future CMOS development work planned.

  17. Toward image phylogeny forests: automatically recovering semantically similar image relationships.

    PubMed

    Dias, Zanoni; Goldenstein, Siome; Rocha, Anderson

    2013-09-10

    In the past few years, several near-duplicate detection methods have appeared in the literature to identify the cohabiting versions of a given document online. Following this trend, there have been initial attempts to go beyond the detection task and look into the structure of evolution within a set of related images over time. In this paper, we aim to automatically identify the structure of relationships underlying the images, correctly reconstruct their past history and ancestry information, and group them into distinct trees of processing history. We introduce a new algorithm that automatically handles sets comprising different groups of related images and outputs the phylogeny trees (also known as a forest) associated with them. Image phylogeny algorithms have many applications, such as finding the first image within a set posted online (useful for tracking copyright infringement perpetrators), hinting at creators of child pornography content, and narrowing down a list of suspects in online harassment cases involving photographs. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
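
    A single phylogeny tree of this kind can be sketched as an oriented spanning-tree construction over a pairwise dissimilarity matrix. The following is a simplified illustration, not the authors' exact algorithm: directed edges (parent to child) are considered in increasing dissimilarity order, and an edge is kept if the child image has no parent yet and adding it creates no cycle.

```python
def phylogeny_tree(dissim):
    """Build one phylogeny tree from an asymmetric dissimilarity matrix.

    dissim[i][j] = cost of explaining image j as a descendant of image i.
    Returns parent[], where parent[j] is the inferred parent of image j
    and the root keeps parent None.
    """
    n = len(dissim)
    edges = sorted((dissim[i][j], i, j)
                   for i in range(n) for j in range(n) if i != j)
    parent = [None] * n

    def creates_cycle(p, c):
        node = p                    # walk up from the would-be parent
        while node is not None:
            if node == c:
                return True
            node = parent[node]
        return False

    kept = 0
    for cost, i, j in edges:        # cheapest directed edges first
        if parent[j] is None and not creates_cycle(i, j):
            parent[j] = i
            kept += 1
            if kept == n - 1:       # a tree over n nodes has n-1 edges
                break
    return parent
```

Running the same procedure per connected group of related images, rather than once over the whole set, would yield a forest.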

  18. An application of viola jones method for face recognition for absence process efficiency

    NASA Astrophysics Data System (ADS)

    Rizki Damanik, Rudolfo; Sitanggang, Delima; Pasaribu, Hendra; Siagian, Hendrik; Gulo, Frisman

    2018-04-01

    An attendance record is a document that a company uses to record each employee's arrival time. The most common problems with fingerprint attendance machines are slow sensor identification and failure to recognize a finger: employees arrive late to work because of difficulties with the fingerprint system, needing about 3-5 minutes to register attendance when a finger is wet or not in good condition. To overcome this problem, this research uses facial recognition for the attendance process, with the Viola-Jones method for face detection. In the processing phase, the RGB face image is converted into a histogram-equalized face image for the subsequent recognition stage. The result of this research was that attendance registration could be completed in less than 1 second, at a maximum slope of ±70° and a distance of 20-200 cm. After implementing facial recognition, the attendance process is more efficient, taking less than 1 minute.
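
    The preprocessing step described above, histogram equalization of the face image before recognition, can be sketched in plain Python; the function name and the list-of-rows image representation are illustrative.

```python
def equalize_histogram(gray, levels=256):
    """Histogram-equalize a grayscale image given as a list of rows.

    The cumulative distribution of gray levels is remapped so that
    intensities spread evenly over the full range - the standard
    equalization used to normalize contrast before face recognition.
    """
    flat = [p for row in gray for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, total = [], 0              # cumulative distribution function
    for c in hist:
        total += c
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    # map each level through the normalized CDF (guard a flat image)
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1))
           if n > cdf_min else 0 for c in cdf]
    return [[lut[p] for p in row] for row in gray]
```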

  19. Document image cleanup and binarization

    NASA Astrophysics Data System (ADS)

    Wu, Victor; Manmatha, Raghaven

    1998-04-01

    Image binarization is a difficult task for documents with text over textured or shaded backgrounds, poor contrast, and/or considerable noise. Current optical character recognition (OCR) and document analysis technologies do not handle such documents well. We have developed a simple yet effective algorithm for document image clean-up and binarization. The algorithm consists of two basic steps. In the first step, the input image is smoothed using a low-pass filter. The smoothing operation enhances the text relative to any background texture, because background texture normally has higher frequency content than text; it also removes speckle noise. In the second step, the intensity histogram of the smoothed image is computed and a threshold is automatically selected as follows. For black text, the first peak of the histogram corresponds to text, and thresholding the image at the value of the valley between the first and second peaks of the histogram binarizes the image well. In order to reliably identify the valley, the histogram is smoothed by a low-pass filter before the threshold is computed. The algorithm has been applied to some 50 images from a wide variety of sources: digitized video frames, photos, newspapers, advertisements in magazines or sales flyers, personal checks, etc. These images contain 21,820 characters and 4,406 words; 91 percent of the characters and 86 percent of the words were successfully cleaned up and binarized. A commercial OCR was applied to the binarized text when it consisted of OCR-recognizable fonts; the recognition rate was 84 percent for the characters and 77 percent for the words.
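
    The two-step method lends itself to a compact sketch (a simplified pure-Python illustration, not the authors' code): a 3x3 box filter stands in for the low-pass smoothing, and the threshold is picked at the valley between the first two peaks of the intensity histogram.

```python
def box_smooth(img):
    """Step 1: 3x3 box (low-pass) filter with edge clamping."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(vals) // 9
    return out

def valley_threshold(hist):
    """Step 2: threshold at the valley between the first two histogram peaks."""
    peaks = [i for i in range(1, len(hist) - 1)
             if hist[i] > hist[i - 1] and hist[i] >= hist[i + 1]]
    if len(peaks) < 2:
        return len(hist) // 2           # fallback: midpoint threshold
    lo, hi = peaks[0], peaks[1]
    return min(range(lo, hi + 1), key=hist.__getitem__)

def binarize(img, levels=256):
    """Smooth, histogram, then threshold dark text to 0 and background to 255."""
    smooth = box_smooth(img)
    hist = [0] * levels
    for row in smooth:
        for p in row:
            hist[p] += 1
    t = valley_threshold(hist)
    return [[0 if p <= t else 255 for p in row] for row in smooth]
```

In practice the histogram itself would also be low-pass filtered before peak-picking, as the abstract notes, to make the valley detection reliable.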

  20. [The procedure for documentation of digital images in forensic medical histology].

    PubMed

    Putintsev, V A; Bogomolov, D V; Fedulova, M V; Gribunov, Iu P; Kul'bitskiĭ, B N

    2012-01-01

    This paper is devoted to the novel computer technologies employed in the study of histological preparations. These technologies make it possible to visualize digital images, structure the data obtained, and store the results in computer memory. The authors emphasize the necessity of properly documenting digital images obtained during forensic-histological studies and propose a procedure for preparing the electronic documents in conformity with the relevant technical and legal requirements. It is concluded that the use of digital images as a new study object obviates the drawbacks inherent in work with traditional preparations and allows a transition from descriptive microscopy to quantitative analysis.

  1. TU-AB-201-02: An Automated Treatment Plan Quality Assurance Program for Tandem and Ovoid High Dose-Rate Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, J; Shi, F; Hrycushko, B

    2015-06-15

    Purpose: For tandem and ovoid (T&O) HDR brachytherapy in our clinic, the planning physicist is required to manually capture ∼10 images during planning, perform a secondary dose calculation and generate a report, combine them into a single PDF document, and upload it to a record-and-verify system to prove to an independent plan checker that the case was planned correctly. Not only does this slow down the already time-consuming clinical workflow, the PDF document also limits the number of parameters that can be checked. To solve these problems, we have developed a web-based automatic quality assurance (QA) program. Methods: We set up a QA server accessible through a web interface. A T&O plan and CT images are exported as DICOM-RT files and uploaded to the server. The software checks 13 geometric features, e.g. whether the dwell positions are reasonable, and 10 dosimetric features, e.g. secondary dose calculations via the TG-43 formalism and D2cc to critical structures. A PDF report is automatically generated with errors and potential issues highlighted. It also contains images showing important geometric and dosimetric aspects to prove the plan was created following standard guidelines. Results: The program has been clinically implemented in our clinic. In each of the 58 T&O plans we tested, a 14-page QA report was automatically generated. It took ∼45 sec to export the plan and CT images and ∼30 sec to perform the QA tests and generate the report. In contrast, our manual QA document preparation took on average ∼7 minutes under optimal conditions and up to 20 minutes when mistakes were made during the document assembly. Conclusion: We have tested the efficiency and effectiveness of an automated process for treatment plan QA of HDR T&O cases. This software was shown to improve the workflow compared to our conventional manual approach.

  2. Image-Based Reconstruction and Analysis of Dynamic Scenes in a Landslide Simulation Facility

    NASA Astrophysics Data System (ADS)

    Scaioni, M.; Crippa, J.; Longoni, L.; Papini, M.; Zanzi, L.

    2017-12-01

    The application of image processing and photogrammetric techniques to the dynamic reconstruction of landslide simulations in a scaled-down facility is described. The simulations are also used here for active learning: they help students understand how physical processes happen and which kinds of observations may be obtained from a sensor network. In particular, the use of digital images to obtain multi-temporal information is presented. On one side, using a multi-view sensor setup based on four synchronized GoPro 4 Black® cameras, a 4D (3D spatial position plus time) reconstruction of the dynamic scene is obtained through the composition of several 3D models derived from dense image matching. The final textured 4D model allows one to revisit a completed experiment at any time in a dynamic and interactive mode. On the other side, a digital image correlation (DIC) technique has been used to track surface point displacements in the image sequence from the camera facing the simulation facility. While the 4D model provides a qualitative description and documentation of the running experiment, the DIC analysis outputs quantitative information, such as local point displacements and velocities, to be related to physical processes and to other observations. All the hardware and software adopted for the photogrammetric reconstruction is based on low-cost and open-source solutions.

  3. The NASA Subsonic Jet Particle Image Velocimetry (PIV) Dataset

    NASA Technical Reports Server (NTRS)

    Bridges, James; Wernet, Mark P.

    2011-01-01

    Many tasks in fluids engineering require prediction of the turbulence of jet flows. The present report documents the single-point statistics of velocity, mean and variance, of cold and hot jet flows. The jet velocities ranged from 0.5 to 1.4 times the ambient speed of sound, and temperatures ranged from unheated to a static temperature ratio of 2.7. Further, the report assesses the accuracy of the data, e.g., establishes uncertainties for the data. This paper covers the following five tasks: (1) Document the acquisition and processing procedures used to create the particle image velocimetry (PIV) datasets. (2) Compare the PIV data with hotwire and laser Doppler velocimetry (LDV) data published in the open literature. (3) Compare different datasets acquired at the same flow conditions in multiple tests to establish uncertainties. (4) Create a consensus dataset for a range of hot jet flows, including uncertainty bands. (5) Analyze this consensus dataset for self-consistency and compare jet characteristics to those of the open literature. The final objective was fulfilled by using the potential core length and the spread rate of the half-velocity radius to collapse the mean and turbulent velocity fields over the first 20 jet diameters.

  4. Electronic Imaging in Admissions, Records & Financial Aid Offices.

    ERIC Educational Resources Information Center

    Perkins, Helen L.

    Over the years, efforts have been made to work more efficiently with the ever-increasing number of records and paper documents that cross workers' desks. Filing records on optical disk through electronic imaging is an alternative that many feel is the answer to successful document management. The pioneering efforts in electronic imaging in…

  5. Volumetric CT in lung cancer: an example for the qualification of imaging as a biomarker.

    PubMed

    Buckler, Andrew J; Mozley, P David; Schwartz, Lawrence; Petrick, Nicholas; McNitt-Gray, Michael; Fenimore, Charles; O'Donnell, Kevin; Hayes, Wendy; Kim, Hyun J; Clarke, Laurence; Sullivan, Daniel

    2010-01-01

    New ways to understand biology, as well as increasing interest in personalized treatments, require new capabilities for the assessment of therapy response. The lack of consensus methods and of the qualification evidence needed for large-scale multicenter trials, and in turn of the standardization that allows them, is widely acknowledged to be the limiting factor in the deployment of qualified imaging biomarkers. The Quantitative Imaging Biomarker Alliance is organized to establish a methodology whereby multiple stakeholders collaborate. It has charged the Volumetric Computed Tomography (CT) Technical Subcommittee with investigating the technical feasibility and clinical value of quantifying changes over time in either volume or other parameters as biomarkers. The group selected solid tumors of the chest in subjects with lung cancer as its first case in point. Success is defined as improvements in CT-based outcome measures rigorous enough to allow individual patients in clinical settings to switch treatments sooner if they are no longer responding to their current regimens, and to reduce the costs of evaluating investigational new drugs to treat lung cancer. The team has completed a systems engineering analysis, begun a roadmap of experimental groundwork, documented profile claims and protocols, and documented a process for imaging biomarker qualification as a general paradigm for qualifying other imaging biomarkers. This report presents a procedural template for the qualification of quantitative imaging biomarkers. This mechanism is cost-effective for stakeholders while simultaneously advancing the public health by promoting the use of measures that prove effective.

  6. Integration of Point Clouds and Images Acquired from a Low-Cost NIR Camera Sensor for Cultural Heritage Purposes

    NASA Astrophysics Data System (ADS)

    Kedzierski, M.; Walczykowski, P.; Wojtkowska, M.; Fryskowska, A.

    2017-08-01

    Terrestrial laser scanning is currently one of the most common techniques for modelling and documenting structures of cultural heritage. However, geometric information on its own, without the addition of imagery data, is insufficient for formulating a precise statement about the status of the studied structure, for feature extraction, or for indicating the sites to be restored. Therefore, the Authors propose the integration of spatial data from terrestrial laser scanning with imaging data from low-cost cameras. The use of images from low-cost cameras makes it possible to limit the costs of such a study and thus to increase the frequency of photographing and monitoring of the given structure. As a result, the analysed cultural heritage structures can be monitored more closely and in more detail, meaning that the technical documentation concerning the structure is also more precise. To supplement the laser scanning information, the Authors propose using images taken both in the near-infrared range and in the visible range. This choice is motivated by the fact that not all important features of historical structures are visible in RGB, but they can be identified in NIR imagery, which, when additionally merged with a three-dimensional point cloud, gives full spatial information about the cultural heritage structure in question. The Authors propose an algorithm that automates the process of integrating NIR images with a point cloud using parameters calculated during the transformation of RGB images.
A number of conditions affecting the accuracy of the texturing were studied, in particular the impact of the geometry of the distribution of adjustment points and their number on the accuracy of the integration process, the correlation between the intensity value and the error at specific points using images in different ranges of the electromagnetic spectrum, and the selection of the optimal method of transforming the acquired imagery. As a result of the research, an innovative solution was achieved, giving high-accuracy results and taking into account a number of factors important in the creation of documentation of historical structures. In addition, thanks to the designed algorithm, the final result can be obtained in a very short time and at a high level of automation relative to similar types of studies, meaning that it would be possible to obtain a significant data set for further analyses and more detailed monitoring of the state of the historical structures.
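The geometric core of such an integration, projecting each laser-scanned 3-D point through a calibrated camera model into the NIR frame to pick up its intensity, can be sketched with a simple pinhole model. The intrinsics and pose below are illustrative stand-ins, not the transformation parameters the Authors derive from the RGB imagery:

```python
import numpy as np

def colour_points(points, image, K, R, t):
    """Assign each 3-D point the intensity of the pixel it projects to.

    points: (N, 3) world coordinates; image: 2-D intensity array (e.g. a
    NIR band); K: 3x3 camera intrinsics; R, t: world-to-camera rotation
    and translation. Points outside the frame or behind the camera get NaN.
    """
    cam = (R @ points.T).T + t                    # world -> camera frame
    uvw = (K @ cam.T).T                           # -> homogeneous pixel coords
    uv = uvw[:, :2] / uvw[:, 2:3]                 # perspective divide
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    ok = (uvw[:, 2] > 0) & (u >= 0) & (v >= 0) \
         & (u < image.shape[1]) & (v < image.shape[0])
    cols = np.full(len(points), np.nan)
    cols[ok] = image[v[ok], u[ok]]
    return cols

# Illustrative intrinsics/pose and a stand-in NIR frame
K = np.array([[100.0, 0.0, 50.0], [0.0, 100.0, 50.0], [0.0, 0.0, 1.0]])
nir = np.arange(10000.0).reshape(100, 100)
pts = np.array([[0.0, 0.0, 1.0]])                 # a point on the optical axis
print(colour_points(pts, nir, K, np.eye(3), np.zeros(3)))  # intensity at pixel (50, 50)
```

Texturing a full cloud is then a matter of running every scanned point through the camera model recovered from the adjustment points.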

  7. Shuttle Case Study Collection Website Development

    NASA Technical Reports Server (NTRS)

    Ransom, Khadijah S.; Johnson, Grace K.

    2012-01-01

    As a continuation of work from summer 2012, the Shuttle Case Study Collection has been developed using lessons learned documented by NASA engineers, analysts, and contractors. Decades of information related to processing and launching the Space Shuttle are gathered into a single database to provide educators with an alternative means to teach real-world engineering processes. The goal is to provide additional engineering materials that enhance critical thinking, decision-making, and problem-solving skills. During this second phase of the project, the Shuttle Case Study Collection website was developed. Extensive HTML coding was required to link downloadable documents, videos, and images, as was training in NASA's Content Management System (CMS) for website design. As the final stage of the collection's development, the website is designed to allow for distribution of information to the public as well as for online case study report submissions from other educators.

  8. Cigarette package design: opportunities for disease prevention.

    PubMed

    DiFranza, J R; Clark, D M; Pollay, R W

    2002-06-15

    To learn how cigarette packages are designed and to determine to what extent cigarette packages are designed to target children. A computer search was made of all Internet websites that post tobacco industry documents using the search terms: packaging, package design, package study, box design, logo, trademark and design study. All documents were retrieved electronically and analyzed by the first author for recurrent themes. Cigarette manufacturers devote a great deal of attention and expense to package design because it is central to their efforts to create brand images. Colors, graphic elements, proportioning, texture, materials and typography are tested and used in various combinations to create the desired product and user images. Designs help to create the perceived product attributes and project a personality image of the user with the intent of fulfilling the psychological needs of the targeted type of smoker. The communication of these images and attributes is conducted through conscious and subliminal processes. Extensive testing is conducted using a variety of qualitative and quantitative research techniques. The promotion of tobacco products through appealing imagery cannot be stopped without regulating the package design. The same marketing research techniques used by the tobacco companies can be used to design generic packaging and more effective warning labels targeted at specific consumers.

  9. Cigarette package design: opportunities for disease prevention

    PubMed Central

    DiFranza, JR; Clark, DM; Pollay, RW

    2003-01-01

    Objective To learn how cigarette packages are designed and to determine to what extent cigarette packages are designed to target children. Methods A computer search was made of all Internet websites that post tobacco industry documents using the search terms: packaging, package design, package study, box design, logo, trademark and design study. All documents were retrieved electronically and analyzed by the first author for recurrent themes. Data Synthesis Cigarette manufacturers devote a great deal of attention and expense to package design because it is central to their efforts to create brand images. Colors, graphic elements, proportioning, texture, materials and typography are tested and used in various combinations to create the desired product and user images. Designs help to create the perceived product attributes and project a personality image of the user with the intent of fulfilling the psychological needs of the targeted type of smoker. The communication of these images and attributes is conducted through conscious and subliminal processes. Extensive testing is conducted using a variety of qualitative and quantitative research techniques. Conclusion The promotion of tobacco products through appealing imagery cannot be stopped without regulating the package design. The same marketing research techniques used by the tobacco companies can be used to design generic packaging and more effective warning labels targeted at specific consumers. PMID:19570250

  11. Architectural Heritage Documentation by Using Low Cost Uav with Fisheye Lens: Otag-I Humayun in Istanbul as a Case Study

    NASA Astrophysics Data System (ADS)

    Yastikli, N.; Özerdem, Ö. Z.

    2017-11-01

    The digital documentation of architectural heritage is important for monitoring, preserving, and managing structures, as well as for 3D BIM modelling and time-space virtual reality (VR) applications. Unmanned aerial vehicles (UAVs) have been widely used in these applications thanks to rapid developments in technology that enable images with resolutions at the millimetre level. Moreover, it has become possible to produce highly accurate 3D point clouds with structure from motion (SfM) and multi-view stereo (MVS), and to obtain a surface reconstruction of a realistic 3D architectural heritage model by using high-overlap images and 3D modeling software such as ContextCapture, Pix4Dmapper, or PhotoScan. In this study, the digital documentation of Otag-i Humayun (the Ottoman Empire Sultan's summer palace), located in Davutpaşa, Istanbul/Turkey, using a low-cost UAV is presented. Data collection was carried out with a low-cost 3DR Solo UAV carrying a GoPro Hero 4 camera with a fisheye lens. Data processing was accomplished using the commercial Pix4D software. Dense point clouds, a true orthophoto, and a 3D solid model of Otag-i Humayun were produced, and a quality check of the produced point clouds was performed. The results obtained for Otag-i Humayun in Istanbul prove that a low-cost UAV with a fisheye lens can be successfully used for architectural heritage documentation.

  12. Forest Resource Information System. Phase 3: System transfer report

    NASA Technical Reports Server (NTRS)

    Mroczynski, R. P. (Principal Investigator)

    1981-01-01

    The transfer of the Forest Resource Information System (FRIS) from the Laboratory for Applications of Remote Sensing to the St. Regis Paper Company is described. Modifications required for the transfer of the LARSYS image processing software are discussed. The reformatting, geometric correction, image registration, and documentation performed for the preprocessing transfer are described. Data turnaround was improved, and geometrically corrected, ground-registered CCT LANDSAT 3 data were provided to the user. The technology transfer activities are summarized. An application test performed to assess a Florida land acquisition is described, and a benefit/cost analysis of FRIS is presented.

  13. Unberthed Dragon CRS-2 grappled by SSRMS

    NASA Image and Video Library

    2013-03-26

    ISS035-E-008904 (26 March 2013) --- This image is one of a series of still photos documenting the process of releasing the SpaceX Dragon CRS-2 spacecraft from the International Space Station on March 26. The spacecraft, filled with experiments and old supplies, can be seen in the grasp of the Space Station Remote Manipulator System's robot arm, or Canadarm2, after it was unberthed from the orbital outpost. Forming the backdrop for this image is western Namibia. The Dragon was scheduled to make a landing in the Pacific Ocean, off the coast of California, later in the day.

  14. Magnetosphere imager science definition team interim report

    NASA Technical Reports Server (NTRS)

    Armstrong, T. P.; Johnson, C. L.

    1995-01-01

    For three decades, magnetospheric field and plasma measurements have been made by diverse instruments flown on spacecraft in many different orbits, widely separated in space and time, and under various solar and magnetospheric conditions. Scientists have used this information to piece together an intricate, yet incomplete, view of the magnetosphere. A simultaneous global view, using various light wavelengths and energetic neutral atoms, could reveal exciting new data and help explain complex magnetospheric processes, thus providing a clear picture of this region of space. This report documents the scientific rationale for such a magnetospheric imaging mission and provides a mission concept for its implementation.

  15. Selected time-lapse movies of the east rift zone eruption of Kīlauea Volcano, 2004–2008

    USGS Publications Warehouse

    Orr, Tim R.

    2011-01-01

    Since 2004, the U.S. Geological Survey's Hawaiian Volcano Observatory has used mass-market digital time-lapse cameras and network-enabled Webcams for visual monitoring and research. The 26 time-lapse movies in this report were selected from the vast collection of images acquired by these camera systems during 2004–2008. Chosen for their content and broad aesthetic appeal, these image sequences document a variety of flow-field and vent processes from Kīlauea's east rift zone eruption, which began in 1983 and is still (as of 2011) ongoing.

  16. Wavelet domain textual coding of Ottoman script images

    NASA Astrophysics Data System (ADS)

    Gerek, Oemer N.; Cetin, Enis A.; Tewfik, Ahmed H.

    1996-02-01

    Image coding using the wavelet transform, the DCT, and similar transform techniques is well established. On the other hand, these coding methods neither take into account the special characteristics of the images in a database nor are they suitable for fast database search. In this paper, the digital archiving of Ottoman printings is considered. Ottoman documents are printed in Arabic letters. Witten et al. describe a scheme based on finding the characters in binary document images and encoding the positions of the repeated characters. This method efficiently compresses document images and is suitable for database research, but it cannot be applied to Ottoman or Arabic documents, as the concept of a character is different in Ottoman and Arabic: typically, one has to deal with compound structures consisting of a group of letters. Therefore, the matching criterion is defined over those compound structures. Furthermore, the text images are gray-tone or color images for Ottoman scripts, for reasons described in the paper. In our method the compound-structure matching is carried out in the wavelet domain, which reduces the search space and increases the compression ratio. In addition to the wavelet transformation, which corresponds to a linear subband decomposition, we also used a nonlinear subband decomposition. The filters in the nonlinear subband decomposition have the property of preserving edges in the low-resolution subband image.
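    The codebook idea underlying such textual image compression can be sketched in miniature. This spatial-domain version follows the spirit of the Witten et al. scheme (the paper itself performs the matching in the wavelet domain); the bitmaps and the pixel-difference threshold are illustrative assumptions:

```python
import numpy as np

def build_codebook(symbols, max_diff=0):
    """Replace repeated symbol bitmaps with pointers into a codebook.

    symbols: list of (row, col, bitmap) for each extracted compound
    structure. Two bitmaps match if they have equal shape and differ in at
    most max_diff pixels. Returns (codebook, pointers), where pointers is
    a list of (row, col, codebook_index).
    """
    codebook, pointers = [], []
    for row, col, bm in symbols:
        idx = None
        for i, ref in enumerate(codebook):
            if ref.shape == bm.shape and np.count_nonzero(ref != bm) <= max_diff:
                idx = i                    # reuse an existing library entry
                break
        if idx is None:
            codebook.append(bm)            # first occurrence: store the bitmap
            idx = len(codebook) - 1
        pointers.append((row, col, idx))
    return codebook, pointers

# Three occurrences of two distinct glyph bitmaps
a = np.array([[1, 0], [1, 1]])
b = np.array([[0, 1], [1, 0]])
book, ptrs = build_codebook([(0, 0, a), (0, 5, b), (3, 0, a.copy())])
print(len(book))                 # → 2
print([i for _, _, i in ptrs])   # → [0, 1, 0]
```

    Compression comes from storing each distinct symbol once and each repeat as a small (position, index) pointer; the paper's contribution is to perform the matching on wavelet coefficients instead of raw pixels.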

  17. Author name recognition in degraded journal images

    NASA Astrophysics Data System (ADS)

    de Bodard de la Jacopière, Aliette; Likforman-Sulem, Laurence

    2006-01-01

    A method for extracting names from degraded documents is presented in this article. The documents targeted are images of photocopied scientific journals from various scientific domains. Because of the degradation, OCR accuracy is poor, and fragments of other articles appear at the sides of the image. The proposed approach relies on the combination of a low-level textual analysis and an image-based analysis. The textual analysis extracts robust typographic features, while the image analysis selects image regions of interest through anchor components. We report results on the University of Washington benchmark database.

  18. SEOM-SERAM-SEMNIM guidelines on the use of functional and molecular imaging techniques in advanced non-small-cell lung cancer.

    PubMed

    Fernández Pérez, G; Sánchez Escribano, R; García Vicente, A M; Luna Alcalá, A; Ceballos Viro, J; Delgado Bolton, R C; Vilanova Busquets, J C; Sánchez Rovira, P; Fierro Alanis, M P; García Figueiras, R; Alés Martínez, J E

    2018-05-25

    Imaging in oncology is an essential tool for patient management, but its potential is profoundly underutilized. Each of the techniques used in the diagnostic process also conveys functional information that can be relevant in treatment decision making. New imaging algorithms and techniques enhance our knowledge about the phenotype of the tumor and its potential response to different therapies. Functional imaging can be defined as imaging that provides information beyond purely morphological data; it includes all the techniques that make it possible to measure specific physiological functions of the tumor, whereas molecular imaging includes techniques that allow us to measure metabolic changes. The functional and molecular techniques included in this document are based on multi-detector computed tomography (CT), 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET), magnetic resonance imaging (MRI), and hybrid equipment integrating PET with CT (PET/CT) or MRI (PET-MRI). Lung cancer is one of the most frequent and deadly tumors, although survival is increasing thanks to advances in diagnostic methods and new treatments. This increased survival poses challenges in terms of proper follow-up and definitions of response and progression, as exemplified by immune-therapy-related pseudoprogression. In this consensus document, the use of functional and molecular imaging techniques is addressed to exploit their current potential and explore future applications in the diagnosis, evaluation of response, and detection of recurrence of advanced NSCLC. Copyright © 2018 SERAM. Published by Elsevier España, S.L.U. All rights reserved.

  19. RESEARCH ON ROBUST METHODS FOR EXTRACTING AND RECOGNIZING PHOTOGRAPHY MANAGEMENT ITEMS FROM VARIOUS IMAGE DATA OF CONSTRUCTION

    NASA Astrophysics Data System (ADS)

    Kitagawa, Etsuji; Tanaka, Shigenori; Abiko, Satoshi; Wakabayashi, Katsuma; Jiang, Wenyuan

    Recently, electronic delivery of various documents has been implemented by the Ministry of Land, Infrastructure, Transport and Tourism in the construction field. Among these documents is construction photography image data, which must be delivered with information on photography management items such as the construction name or the type of works. However, transcribing the contents of these items from the characters printed and handwritten on the blackboards in the image data is costly. In this research, we develop a system that can handle the contents of these items by extracting them from construction photography image data taken in various scenes: the system preprocesses the image, recognizes characters with OCR, and corrects errors with natural language processing. We confirm the effectiveness of the system by experimenting on each function and on the entire system.
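    The abstract does not specify the error-correction stage in detail; a common approach, assumed in this sketch, is to snap each OCR token to the nearest entry of a lexicon of expected management-item terms by Levenshtein edit distance. The lexicon below is a made-up example:

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming (one row at a time)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution
        prev = cur
    return prev[-1]

def correct(word, lexicon, max_dist=2):
    """Snap an OCR token to the closest lexicon entry, if close enough."""
    best = min(lexicon, key=lambda w: edit_distance(word, w))
    return best if edit_distance(word, best) <= max_dist else word

# Illustrative lexicon of expected management-item terms
lexicon = ["construction", "photography", "blackboard"]
print(correct("constructlon", lexicon))  # → construction
```

    Tokens too far from every lexicon entry are left unchanged, so genuinely novel strings (e.g. a new construction name) survive the correction pass.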

  20. Role of Sonographic Imaging in Occupational Therapy Practice

    PubMed Central

    2015-01-01

    Occupational therapy practice is grounded in the delivery of occupation-centered, patient-driven treatments that engage clients in the process of doing to improve health. As emerging technologies, such as medical imaging, find their way into rehabilitation practice, it is imperative that occupational therapy practitioners assess whether and how these tools can be incorporated into treatment regimens that are dually responsive to the medical model of health care and to the profession’s foundation in occupation. Most medical imaging modalities have a discrete place in occupation-based intervention as outcome measures or for patient education; however, sonographic imaging has the potential to blend multiple occupational therapy practice forms to document treatment outcomes, inform clinical reasoning, and facilitate improved functional performance when used as an accessory tool in direct intervention. Use of medical imaging is discussed as it relates to occupational foundations and the professional role within the context of providing efficient, effective patient-centered rehabilitative care. PMID:25871607

  1. Identification needs in developing, documenting, and indexing WSDOT photographs : research report, February 2010.

    DOT National Transportation Integrated Search

    2010-02-01

    Over time, the Department of Transportation has accumulated image collections, which document important aspects of the transportation infrastructure in the Pacific Northwest, project status, and construction details. These images range from paper ...

  2. Structured Forms Reference Set of Binary Images (SFRS)

    National Institute of Standards and Technology Data Gateway

    NIST Structured Forms Reference Set of Binary Images (SFRS) (Web, free access)   The NIST Structured Forms Database (Special Database 2) consists of 5,590 pages of binary, black-and-white images of synthesized documents. The documents in this database are 12 different tax forms from the IRS 1040 Package X for the year 1988.

  3. Detection of text strings from mixed text/graphics images

    NASA Astrophysics Data System (ADS)

    Tsai, Chien-Hua; Papachristou, Christos A.

    2000-12-01

    A robust system for separating text strings from mixed text/graphics images is presented. Based on a union-find (region growing) strategy, the algorithm is able to separate text from graphics and adapts to changes in document type, language category (e.g., English, Chinese, and Japanese), text font style and size, and text string orientation within digital images. In addition, it tolerates the document skew that commonly occurs, without requiring skew correction prior to discrimination, whereas previously proposed methods such as projection profiles or run-length coding are not always suitable under these conditions. The method has been tested with a variety of printed documents of different origins using one common set of parameters, and experimental results on the performance and computational efficiency of the algorithm are demonstrated on several test images from the evaluation.
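    The union-find (region growing) strategy at the core of such a method can be sketched as connected-component labeling of a binary page image. This minimal version uses 4-connectivity and only illustrates the data structure, not the authors' full text/graphics classification rules:

```python
def find(parent, x):
    """Path-compressing find for the union-find structure."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def connected_components(img):
    """Label 4-connected foreground components of a binary image.

    img: list of lists of 0/1. Returns a dict mapping each component's
    root pixel to the list of pixels in that component.
    """
    parent = {}
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            if not v:
                continue
            parent[(y, x)] = (y, x)
            for ny, nx in ((y - 1, x), (y, x - 1)):  # already-visited neighbours
                if (ny, nx) in parent:
                    ra, rb = find(parent, (y, x)), find(parent, (ny, nx))
                    if ra != rb:
                        parent[ra] = rb              # union the two regions
    comps = {}
    for p in parent:
        comps.setdefault(find(parent, p), []).append(p)
    return comps

page = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 1]]
print(len(connected_components(page)))  # → 2
```

    A text/graphics classifier would then inspect each component's size, aspect ratio, and alignment with neighbouring components to decide whether it belongs to a text string.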

  4. Monitoring of two rapidly changing glacier tongues in the Swiss Alps by new drone data and historical documents

    NASA Astrophysics Data System (ADS)

    Nussbaumer, Samuel U.; Jörg, Philip C.; Gärtner-Roer, Isabelle; Rastner, Philipp; Ruff, Alexander; Steiner, Daniel; Vieli, Andreas; Zumbühl, Heinz J.

    2015-04-01

    Glaciers are considered among the most sensitive indicators of climate change. One of the most visually compelling examples of recent climate change is the retreat of glaciers in mountain regions, and knowledge about the past evolution of glacier fluctuations has proven to be crucial for studying past decadal- to century-scale climate variability. In this presentation, we evaluate the potential of a light fixed-wing UAV (unmanned aerial vehicle; drone), designed for surveying and remote sensing purposes, for monitoring glacier changes. We focus on the frontal zones of two well-known glaciers in the Swiss Alps: Unterer Grindelwaldgletscher in the Bernese Oberland, and Findelengletscher near Zermatt, Valais. We used a professional mapping drone (eBee by senseFly) to cover both frontal areas of the glaciers in the summer/autumn of 2014. We used a Canon IXUS 125HS RGB camera on board the drone to collect overlapping nadir images for both study sites. For Unterer Grindelwaldgletscher (Findelengletscher), 187 (421) images were taken for a surveyed area of 3.2 km2 (2.9 km2), resulting in digital surface models and orthophotos with a very high spatial resolution of 0.16 m (0.11 m). The high number of images collected per area resulted in accurate elevation models and no detectable systematic horizontal shifts. Analysis of these images reveals in great detail the typical processes and features known for down-wasting and rapidly disintegrating Alpine glacier tongues: formation of (pro-)glacial lakes, dead ice, thermokarst phenomena, collapse of lateral moraines, and a complex interplay between many of those processes. Typically, glacio-fluvial, gravitational, and periglacial processes occur in close vicinity and on different temporal scales (continuous, sporadic). We compare both glacier landscapes and address the important processes identified as responsible for the glacier change at both sites. 
Finally, to set the observed geomorphological processes and the rapid glacier change into a long-term context, we compare the recent findings with available observation data (in situ measurements) and historical documents of high quality, such as the original plane-table sheets (prepared for the Swiss Dufour map) surveyed by W. Jacky for the area of Unterer Grindelwaldgletscher in 1860/61, and by A. Bétemps for Findelengletscher in 1859. To complement our findings, we show pictorial documents, such as early photographs captured in the mid-19th century that are part of a newly discovered collection for Unterer Grindelwaldgletscher, which depict both glaciers' splendor during the last Little Ice Age advance. Vertical ice loss since the Little Ice Age amounts to about 350 m for the tongue of Unterer Grindelwaldgletscher, and 150 m for Findelengletscher.

  5. Effects of a proposed quality improvement process on the proportion of reported ultrasound findings unsupported by stored images.

    PubMed

    Schenone, Mauro; Ziebarth, Sarah; Duncan, Jose; Stokes, Lea; Hernandez, Angela

    2018-02-05

    To investigate the proportion of documented ultrasound findings that were unsupported by stored ultrasound images in the obstetric ultrasound unit, before and after the implementation of a quality improvement process consisting of a checklist and feedback. A quality improvement process was created involving the use of a checklist and feedback from physician to sonographer. The feedback was based on the findings of the physician's review of the report and images using a checklist. To assess the impact of this process, two groups were compared. Group 1 consisted of 58 ultrasound reports created prior to initiation of the process. Group 2 included 65 ultrasound reports created after process implementation. Each chart was reviewed by a physician and a sonographer. Findings considered unsupported by stored images by both reviewers were used for analysis, and the proportion of unsupported findings was compared between the two groups. Results are expressed as mean ± standard error, and a p value < .05 was used to determine statistical significance. Univariate analysis of baseline characteristics and potential confounders showed no statistically significant difference between the groups. The mean proportion of unsupported findings in Group 1 was 5.1 ± 0.87, with Group 2 having a significantly lower proportion (2.6 ± 0.62; p = .018). The results suggest a significant decrease in the proportion of unsupported findings in ultrasound reports after implementation of the quality improvement process. Thus, we present a simple yet effective quality improvement process to reduce unsupported ultrasound findings.

  6. NASA Photo One

    NASA Technical Reports Server (NTRS)

    Ross, James C.

    2013-01-01

    This is a photographic record of NASA Dryden flight research aircraft spanning nearly 25 years. The author has served as a Dryden photographer and now serves as its chief photographer and airborne photographer. The results are extraordinary images of in-flight aircraft never seen elsewhere, as well as pictures of aircraft from unusual angles on the ground. The collection is the result of the agency-required documentation process for its assets.

  7. Evidence for polymetamorphic garnet growth in the Çine (southern Menderes) Massif, Western Turkey

    NASA Astrophysics Data System (ADS)

    Baker, C. B.; Catlos, E. J.; Sorensen, S. S.; Çemen, I.; Hancer, M.

    2008-07-01

    Garnet-based thermobarometry is often used to develop models for the evolution of the Menderes Massif, a key Aegean metamorphic core complex. Here we present X-ray element maps and high-contrast backscattered electron (BSE) and cathodoluminescence (CL) images from a garnet-bearing rock from the Çine (southern Menderes) Massif. The images document a polymetamorphic history, as plagioclase and garnet grains show distinct cores and rims. The sample contains matrix monazite in reaction with allanite. The garnet in the sample is likely not in equilibrium with its matrix minerals, as evidenced by BSE images that document compositional variability in both core and rim zoning and tracks of bright streaks extending from rim to core. We propose that some garnet now present in the Menderes Massif formed due to collision during the Cambro-Ordovician and may have recrystallized during subsequent collisional and extensional events. These processes led to non-equilibrium compositions and can result in spurious pressure-temperature (P-T) calculations. To establish the reliability of P-T estimates from Çine Massif rocks for input into tectonic models for the region, more than one sample from single outcrops should be analyzed. Rocks within the Çine Massif have been suggested to display inverted metamorphism, an increase in T towards structurally higher levels. Based on the garnet documented here, we propose that the inverted metamorphism may be an apparent P-T effect rather than a real phenomenon.

  8. On-line transmission electron microscopic image analysis of chromatin texture for differentiation of thyroid gland tumors.

    PubMed

    Kriete, A; Schäffer, R; Harms, H; Aus, H M

    1987-06-01

    Nuclei of cells from the thyroid gland were analyzed in a transmission electron microscope by direct TV scanning and on-line image processing. The method uses the advantages of a visual-perception model to detect structures in noisy and low-contrast images. The features analyzed include area, a form factor and texture parameters from the second-derivative stage. Three tumor-free thyroid tissues, three follicular adenomas, three follicular carcinomas and three papillary carcinomas were studied. The computer-aided cytophotometric method showed that the most significant differences lay in the statistics of the chromatin texture features of homogeneity and regularity. These findings document the possibility of automated differentiation of tumors at the ultrastructural level.

  9. The Use of Video-Tacheometric Technology for Documenting and Analysing Geometric Features of Objects

    NASA Astrophysics Data System (ADS)

    Woźniak, Marek; Świerczyńska, Ewa; Jastrzębski, Sławomir

    2015-12-01

    This paper analyzes selected aspects of the use of video-tacheometric technology for inventorying and documenting the geometric features of objects. Data was collected with the video-tacheometer Topcon Image Station IS-3 and the professional camera Canon EOS 5D Mark II. During the field work and data processing, the following experiments were performed: multiple determination of the camera interior orientation parameters and distortion parameters of five lenses with different focal lengths, and reflectorless measurement of profiles for the elevation and inventory of the decorative surface wall of the Warsaw Ballet School building. During the research, the process of acquiring and integrating video-tacheometric data was analysed, as well as the process of combining the "point cloud" acquired by the video-tacheometer during scanning with independent photographs taken by a digital camera. On the basis of the tests performed, the utility of video-tacheometric technology in geodetic surveys of the geometric features of buildings has been established.

  10. Unified modeling language and design of a case-based retrieval system in medical imaging.

    PubMed Central

    LeBozec, C.; Jaulent, M. C.; Zapletal, E.; Degoulet, P.

    1998-01-01

    One goal of artificial intelligence research into case-based reasoning (CBR) systems is to develop approaches for designing useful and practical interactive case-based environments. Explaining each step of the design of the case base and of the retrieval process is critical for the application of case-based systems to the real world. We describe herein our approach to the design of IDEM (Images and Diagnosis from Examples in Medicine), a medical image case-based retrieval system for pathologists. Our approach is based on the expressiveness of an object-oriented modeling language standard: the Unified Modeling Language (UML). We created a set of diagrams in UML notation illustrating the steps of the CBR methodology we used. The key aspects of this approach were selecting the relevant objects of the system according to user requirements and enabling visualization of cases and of the components of the case-retrieval process. Further evaluation of the expressiveness of the design document is required, but UML seems to be a promising formalism for improving communication between developers and users. PMID:9929346

  11. A Scalable Distributed Approach to Mobile Robot Vision

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin; Browning, Robert L.; Gribble, William S.

    1997-01-01

    This paper documents our progress during the first year of work on our original proposal entitled 'A Scalable Distributed Approach to Mobile Robot Vision'. We are pursuing a strategy for real-time visual identification and tracking of complex objects which does not rely on specialized image-processing hardware. In this system perceptual schemas represent objects as a graph of primitive features. Distributed software agents identify and track these features, using variable-geometry image subwindows of limited size. Active control of imaging parameters and selective processing makes simultaneous real-time tracking of many primitive features tractable. Perceptual schemas operate independently from the tracking of primitive features, so that real-time tracking of a set of image features is not hurt by latency in recognition of the object that those features make up. The architecture allows semantically significant features to be tracked with limited expenditure of computational resources, and allows the visual computation to be distributed across a network of processors. Early experiments are described which demonstrate the usefulness of this formulation, followed by a brief overview of our more recent progress (after the first year).

  12. What's in "Your" File Cabinet? Leveraging Technology for Document Imaging and Storage

    ERIC Educational Resources Information Center

    Flaherty, William

    2011-01-01

    Spotsylvania County Public Schools (SCPS) in Virginia uses a document-imaging solution that leverages the features of a multifunction printer (MFP). An MFP is a printer, scanner, fax machine, and copier all rolled into one. It can scan a document and email it all in one easy step. Software is available that allows the MFP to scan bubble sheets and…

  13. Review of free software tools for image analysis of fluorescence cell micrographs.

    PubMed

    Wiesmann, V; Franz, D; Held, C; Münzenmayer, C; Palmisano, R; Wittenberg, T

    2015-01-01

    An increasing number of free software tools have been made available for the evaluation of fluorescence cell micrographs. The main users are biologists and related life scientists with little or no knowledge of image processing. In this review, we give an overview of available tools and guidance on which tools users should choose to segment fluorescence micrographs. We selected 15 free tools and divided them into stand-alone tools, Matlab-based tools, ImageJ-based tools, free demo versions of commercial tools, and data-sharing tools. The review consists of two parts. First, we developed a criteria catalogue and rated the tools regarding structural requirements, functionality (flexibility, segmentation and image processing filters) and usability (documentation, data management, usability and visualization). Second, we performed an image-processing case study with four representative fluorescence micrograph segmentation tasks involving figure-ground and cell separation. The tools display a wide range of functionality and usability. In the case study, we were able to perform figure-ground separation in all micrographs using mainly thresholding. Cell separation was not possible with most of the tools, because cell separation methods are provided only by a subset of the tools and are difficult to parametrize and use. Most important is that the usability matches the functionality of a tool. To be usable, specialized tools with less functionality need to fulfill fewer usability criteria, whereas multipurpose tools need a well-structured menu and an intuitive graphical user interface.

  14. Dilated contour extraction and component labeling algorithm for object vector representation

    NASA Astrophysics Data System (ADS)

    Skourikhine, Alexei N.

    2005-08-01

    Object boundary extraction from binary images is important for many applications, e.g., image vectorization, automatic interpretation of images containing segmentation results, printed and handwritten documents and drawings, maps, and AutoCAD drawings. Efficient and reliable contour extraction is also important for pattern recognition due to its impact on shape-based object characterization and recognition. The presented contour tracing and component labeling algorithm produces dilated (sub-pixel) contours associated with corresponding regions. The algorithm has the following features: (1) it always produces non-intersecting, non-degenerate contours, including the case of one-pixel wide objects; (2) it associates the outer and inner (i.e., around hole) contours with the corresponding regions during the process of contour tracing in a single pass over the image; (3) it maintains desired connectivity of object regions as specified by 8-neighbor or 4-neighbor connectivity of adjacent pixels; (4) it avoids degenerate regions in both background and foreground; (5) it allows an easy augmentation that will provide information about the containment relations among regions; (6) it has a time complexity that is dominantly linear in the number of contour points. This early component labeling (contour-region association) enables subsequent efficient object-based processing of the image information.
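
The dilated-contour algorithm itself is specific to the paper, but the component-labeling half of the task can be illustrated with a generic flood-fill labeler. This is a minimal sketch (`label_components` is a hypothetical name, and it is a queue-based baseline, not the authors' single-pass contour-tracing method) showing the 8-neighbor vs. 4-neighbor connectivity choice mentioned in the abstract:

```python
from collections import deque

def label_components(img, connectivity=8):
    """Label foreground (1) pixels of a binary image by flood fill."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    if connectivity == 8:
        nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                (0, 1), (1, -1), (1, 0), (1, 1)]
    else:
        nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] and not labels[y][x]:
                next_label += 1          # start a new component
                labels[y][x] = next_label
                q = deque([(y, x)])
                while q:
                    cy, cx = q.popleft()
                    for dy, dx in nbrs:
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = next_label
                            q.append((ny, nx))
    return labels, next_label
```

Two diagonally touching pixels form one component under 8-connectivity but two under 4-connectivity, which is exactly the distinction the algorithm's feature (3) must preserve.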

  15. Binary partition tree analysis based on region evolution and its application to tree simplification.

    PubMed

    Lu, Huihai; Woods, John C; Ghanbari, Mohammed

    2007-04-01

    Pyramid image representations via tree structures are recognized methods for region-based image analysis. Binary partition trees document the merging process, with small details found at the bottom levels and larger ones close to the root. Hindsight of the merging process is stored within the tree structure and provides the change history of an image property from each leaf to the root node. In this work, the change histories are modelled by evolvement functions and their second-order statistics are analyzed using a knee function. Knee values show the reluctance of each merge. We have systematically formulated these findings to provide a novel framework for binary partition tree analysis, in which tree simplification is demonstrated. Based on an evolvement function, for each upward path in a tree, the tree node associated with the first reluctant merge is considered a pruning candidate. The result is a simplified version providing a reduced solution space and still complying with the definition of a binary tree. The experiments show that image details are preserved whilst the number of nodes is dramatically reduced. The method also yields an image-filtering tool that preserves object boundaries and has applications in segmentation.
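
The core idea of a binary partition tree, recording the history of a greedy region-merging process, can be sketched on 1-D "region" values. This is an illustrative toy (`build_bpt` and the mean-difference merge cost are our own simplifications, not the paper's evolvement functions or knee analysis), but the recorded cost history shows the kind of sharp jump ("reluctant merge") that the paper's pruning criterion looks for:

```python
import itertools

def build_bpt(values):
    """Greedy binary-partition-tree build over 1-D region values.

    Each leaf is a single value; at every step the two regions with the
    closest means are merged and the merge cost is recorded, mimicking
    how a BPT stores the history of a region-merging segmentation.
    """
    regions = [{'mean': v, 'size': 1, 'children': None} for v in values]
    history = []
    while len(regions) > 1:
        # pick the pair of regions whose means are closest
        i, j = min(itertools.combinations(range(len(regions)), 2),
                   key=lambda p: abs(regions[p[0]]['mean'] - regions[p[1]]['mean']))
        a, b = regions[i], regions[j]
        history.append(abs(a['mean'] - b['mean']))
        merged = {
            'mean': (a['mean'] * a['size'] + b['mean'] * b['size']) / (a['size'] + b['size']),
            'size': a['size'] + b['size'],
            'children': (a, b),
        }
        regions = [r for k, r in enumerate(regions) if k not in (i, j)] + [merged]
    return regions[0], history

root, history = build_bpt([1.0, 2.0, 10.0, 11.0])
print(history)  # [1.0, 1.0, 9.0]: the final merge is the "reluctant" one
```

A pruning rule in the spirit of the paper would cut the tree just below the first merge whose cost jumps well above the preceding ones.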

  16. BOREAS RSS-14 Level 1a GOES-7 Visible, IR, and Water Vapor Images

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Newcomer, Jeffrey A.; Faysash, David; Cooper, Harry J.; Smith, Eric A.

    2000-01-01

    The BOREAS RSS-14 team collected and processed GOES-7 and -8 images of the BOREAS region as part of its effort to characterize the incoming, reflected, and emitted radiation at regional scales. The level-1a BOREAS GOES-7 image data were collected by RSS-14 personnel at FSU and processed to level-1a products by BORIS personnel. The data cover the period of 01-Jan-1994 through 08-Jul-1995 with partial to complete coverage on the majority of the days. The data include three bands with eight-bit pixel values. No major problems with the data have been identified. Due to the large size of the images, the level-1a GOES-7 data are not contained on the BOREAS CD-ROM set. An inventory listing file is supplied on the CD-ROM to inform users of what data were collected. The level-1a GOES-7 image data are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). See sections 15 and 16 for more information. The data files are available on a CD-ROM (see document number 20010000884).

  17. Developing a Low-Cost System for 3d Data Acquisition

    NASA Astrophysics Data System (ADS)

    Kossieris, S.; Kourounioti, O.; Agrafiotis, P.; Georgopoulos, A.

    2017-11-01

    In this paper, a low-cost system is described that aims to facilitate fast and reliable 3D documentation by acquiring the necessary data in outdoor environments for the 3D documentation of façades, especially in the case of very narrow streets. In particular, it provides a viable solution for buildings up to 8-10 m high and streets as narrow as 2 m or even less. In such cases it is practically impossible, or highly time-consuming, to acquire images in the conventional way; that practice would lead to a huge number of images and long processing times. The developed system was tested in the narrow streets of a medieval village on the Greek island of Chios. There, in order to bypass the problem of short taking distances, high-definition action cameras were used together with a 360° camera; such cameras are usually fitted with very wide-angle lenses, capable of acquiring high-definition images, rather cheap and, most importantly, extremely light. Results suggest that the system can perform fast 3D data acquisition adequate for deliverables of high quality.

  18. Fast frequency domain method to detect skew in a document image

    NASA Astrophysics Data System (ADS)

    Mehta, Sunita; Walia, Ekta; Dutta, Maitreyee

    2015-12-01

    In this paper, a new fast frequency-domain method based on the Discrete Wavelet Transform and the Fast Fourier Transform is presented for determining the skew angle of a document image. First, the image size is reduced using the two-dimensional Discrete Wavelet Transform, and then the skew angle is computed using the Fast Fourier Transform. The skew angle error is almost negligible. The proposed method was evaluated on a large number of documents with skew between -90° and +90°, and the results are compared with the Moments with Discrete Wavelet Transform method and other commonly used existing methods. The method works more efficiently than the existing methods. It also works with typed and picture documents having different fonts and resolutions, overcoming the drawback of the recently proposed Moments with Discrete Wavelet Transform method, which does not work with picture documents.
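
The two-stage idea (wavelet size reduction followed by a spectral peak search) can be sketched as follows. This is illustrative, not the authors' implementation: `haar_reduce` stands in for the DWT approximation subband via 2x2 block averaging, and the peak-to-angle mapping assumes text lines that are roughly horizontal before skewing:

```python
import numpy as np

def haar_reduce(img, levels=1):
    """2x2 block averaging: a stand-in for the DWT approximation subband."""
    for _ in range(levels):
        h, w = img.shape
        img = img[:h - h % 2, :w - w % 2]
        img = (img[0::2, 0::2] + img[0::2, 1::2] +
               img[1::2, 0::2] + img[1::2, 1::2]) / 4.0
    return img

def estimate_skew(img, levels=1):
    """Estimate skew (degrees) from the dominant 2-D FFT peak of a reduced image."""
    small = haar_reduce(img.astype(float), levels)
    spec = np.abs(np.fft.fftshift(np.fft.fft2(small - small.mean())))
    cy, cx = spec.shape[0] // 2, spec.shape[1] // 2
    spec[cy, cx] = 0.0  # suppress any residual DC component
    py, px = np.unravel_index(np.argmax(spec), spec.shape)
    # horizontal text lines put spectral energy on the vertical frequency axis,
    # so the peak angle minus 90 degrees (folded into [-90, 90]) is the skew
    angle = np.degrees(np.arctan2(py - cy, px - cx))
    return (angle % 180.0) - 90.0
```

For an unskewed image of horizontal stripes the dominant peak sits on the vertical frequency axis and the estimate is near zero; a skewed page rotates the peak by the same angle.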

  19. Image based performance analysis of thermal imagers

    NASA Astrophysics Data System (ADS)

    Wegner, D.; Repasi, E.

    2016-05-01

    Due to advances in technology, modern thermal imagers resemble sophisticated image-processing systems in functionality. Advanced signal and image processing tools enclosed in the camera body extend the basic image-capturing capability of thermal cameras, in order to enhance the display presentation of the captured scene or of specific scene details. Usually the implemented methods are proprietary company expertise, distributed without extensive documentation. This makes the comparison of thermal imagers, especially from different companies, a difficult task (or at least a very time-consuming and expensive one, e.g. requiring a field trial and/or an observer trial). For example, a thermal camera equipped with turbulence mitigation capability represents such a closed system. The Fraunhofer IOSB has started to build up a system for testing thermal imagers by image-based methods in the lab environment. This will extend our capability of measuring the classical IR-system parameters (e.g. MTF, MTDP, etc.) in the lab. The system is set up around the IR-scene projector, which is necessary for the thermal display (projection) of an image sequence for the IR camera under test. The same set of thermal test sequences can be presented to every unit under test; for turbulence mitigation tests, this could be the same turbulence sequence. During system tests, gradual variation of input parameters (e.g. thermal contrast) can be applied. First ideas on test-scene selection, and on how to assemble an imaging suite (a set of image sequences) for the analysis of imaging thermal systems containing such black boxes in the image-forming path, are discussed.

  20. VerifEYE: a real-time meat inspection system for the beef processing industry

    NASA Astrophysics Data System (ADS)

    Kocak, Donna M.; Caimi, Frank M.; Flick, Rick L.; Elharti, Abdelmoula

    2003-02-01

    Described is a real-time meat inspection system developed for the beef processing industry by eMerge Interactive. Designed to detect and localize trace amounts of contamination on cattle carcasses in the packing process, the system affords the beef industry an accurate, high-speed, passive optical method of inspection. Using a method patented by the United States Department of Agriculture and Iowa State University, the system takes advantage of fluorescing chlorophyll found in the animal's diet, and therefore in the digestive tract, to allow detection and imaging of contaminated areas that may harbor potentially dangerous microbial pathogens. Featuring real-time image processing and documentation of performance, the system can be easily integrated into a processing facility's Hazard Analysis and Critical Control Point quality assurance program. This paper describes the VerifEYE carcass inspection and removal verification system. Results indicating the feasibility of the method, as well as field data collected using a prototype system during four university trials conducted in 2001, are presented. Two successful demonstrations using the prototype system were held at a major U.S. meat processing facility in early 2002.

  1. Archive of Boomer seismic reflection data: collected during USGS Cruise 96CCT01, nearshore south central South Carolina coast, June 26 - July 1, 1996

    USGS Publications Warehouse

    Calderon, Karynna; Dadisman, Shawn V.; Kindinger, Jack G.; Flocks, James G.; Wiese, Dana S.

    2003-01-01

    This archive consists of marine seismic reflection profile data collected in four survey areas from southeast of Charleston Harbor to the mouth of the North Edisto River of South Carolina. These data were acquired June 26 - July 1, 1996, aboard the R/V G.K. Gilbert. Included here are data in a variety of formats including binary, American Standard Code for Information Interchange (ASCII), Hyper Text Markup Language (HTML), Portable Document Format (PDF), Rich Text Format (RTF), Graphics Interchange Format (GIF) and Joint Photographic Experts Group (JPEG) images, and shapefiles. Binary data are in Society of Exploration Geophysicists (SEG) SEG-Y format and may be downloaded for further processing or display. Reference maps and GIF images of the profiles may be viewed with a web browser. The Geographic Information Systems (GIS) map documents provided were created with Environmental Systems Research Institute (ESRI) GIS software ArcView 3.2 and 8.1.

  2. CAED Document Repository

    EPA Pesticide Factsheets

    The Compliance Assurance and Enforcement Division Document Repository (CAEDDOCRESP) provides internal and external access to Inspection Records, Enforcement Actions, and National Environmental Protection Act (NEPA) documents for all CAED staff. The repository will also include supporting documents, images, etc.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myronakis, M; Cai, W; Dhou, S

    Purpose: To design a comprehensive open-source, publicly available, graphical user interface (GUI) to facilitate the configuration, generation, processing and use of the 4D Extended Cardiac-Torso (XCAT) phantom. Methods: The XCAT phantom includes over 9000 anatomical objects as well as respiratory, cardiac and tumor motion. It is widely used for research studies in medical imaging and radiotherapy. The phantom generation process involves the configuration of a text script to parameterize the geometry, motion, and composition of the whole body and objects within it, and to generate simulated PET or CT images. To avoid the need for manual editing or script writing, our MATLAB-based GUI uses slider controls, drop-down lists, buttons and graphical text input to parameterize and process the phantom. Results: Our GUI can be used to: a) generate parameter files; b) generate the voxelized phantom; c) combine the phantom with a lesion; d) display the phantom; e) produce average and maximum intensity images from the phantom output files; f) incorporate irregular patient breathing patterns; and g) generate DICOM files containing phantom images. The GUI provides local help information using tool-tip strings on the currently selected phantom, minimizing the need for external documentation. The DICOM generation feature is intended to simplify the process of importing the phantom images into radiotherapy treatment planning systems or other clinical software. Conclusion: The GUI simplifies and automates the use of the XCAT phantom for imaging-based research projects in medical imaging or radiotherapy. This has the potential to accelerate research conducted with the XCAT phantom and to ease the learning curve for new users. This tool does not include the XCAT phantom software itself. We would like to acknowledge funding from MRA, Varian Medical Systems Inc.

  4. Documentation and virtual reconstruction of historical objects in Peru damaged by an earthquake and climatic events

    NASA Astrophysics Data System (ADS)

    Hanzalová, K.; Pavelka, K.

    2013-07-01

    This paper deals with the possibilities of creating a 3-D model and a visualization technique for the presentation of historical buildings and sites in Peru. The Nasca/CTU project documents historical objects using several techniques. This paper describes the documentation and visualization of two historical churches (the San Jose and San Xavier Churches) and the pre-Hispanic archaeological site La Ciudad Perdida de Huayuri (an abandoned town near Huayuri) in the Nasca region using photogrammetry and remote sensing. Both churches were damaged by an earthquake, and different processes were used to document them. PhotoModeler software was used for the photogrammetric processing of the acquired images, but the subsequent modelling of the two churches differed: Google SketchUp software was used for the San Jose Church, while the 3-D model of the San Xavier Church was created in MicroStation software. For the modelling of the abandoned town near Huayuri, which was destroyed by a climatic event (El Niño), terrestrial photogrammetry, satellite data and GNSS measurements were applied. The general output of the project is a thematic map of this archaeological site; the C14 method was used for dating.

  5. CONRAD—A software framework for cone-beam imaging in radiology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maier, Andreas; Choi, Jang-Hwan; Riess, Christian

    2013-11-15

    Purpose: In the community of x-ray imaging, there is a multitude of tools and applications that are used in scientific practice. Many of these tools are proprietary and can only be used within a certain lab. Often the same algorithm is implemented multiple times by different groups in order to enable comparison. In an effort to tackle this problem, the authors created CONRAD, a software framework that provides many of the tools that are required to simulate basic processes in x-ray imaging and perform image reconstruction with consideration of nonlinear physical effects. Methods: CONRAD is a Java-based state-of-the-art software platform with extensive documentation. It is based on platform-independent technologies. Special libraries offer access to hardware acceleration such as OpenCL. There is an easy-to-use interface for parallel processing. The software package includes different simulation tools that are able to generate up to 4D projection and volume data and respective vector motion fields. Well-known reconstruction algorithms such as FBP, DBP, and ART are included. All algorithms in the package are referenced to a scientific source. Results: A total of 13 different phantoms and 30 processing steps have already been integrated into the platform at the time of writing. The platform comprises 74,000 nonblank lines of code, of which 19% are used for documentation. The software package is available for download at http://conrad.stanford.edu. To demonstrate the use of the package, the authors reconstructed images from two different scanners, a table-top system and a clinical C-arm system. Runtimes were evaluated using the RabbitCT platform and demonstrate state-of-the-art performance, with 2.5 s for the 256 problem size and 12.4 s for the 512 problem size. Conclusions: As a common software framework, CONRAD enables the medical physics community to share algorithms and develop new ideas. In particular, this offers new opportunities for scientific collaboration and quantitative performance comparison between the methods of different groups.

  6. End-User Imaging DISKussions.

    ERIC Educational Resources Information Center

    McConnell, Pamela Jean

    1993-01-01

    This third in a series of articles on EDIS (Electronic Document Imaging System) technology focuses on organizational issues. Highlights include computer platforms; management information systems; computer-based skills of staff; new technology and change; time factors; financial considerations; document conversion costs; the benefits of EDIS…

  7. Handwritten text line segmentation by spectral clustering

    NASA Astrophysics Data System (ADS)

    Han, Xuecheng; Yao, Hui; Zhong, Guoqiang

    2017-02-01

    Since handwritten text lines are generally skewed and not clearly separated, text line segmentation of handwritten document images remains a challenging problem. In this paper, we propose a novel text line segmentation algorithm based on spectral clustering. Given a handwritten document image, we first convert it to a binary image and then compute the adjacency matrix of the pixel points. We apply spectral clustering to this similarity matrix and use the orthogonal k-means clustering algorithm to group the text lines. Experiments on a Chinese handwritten document database (HIT-MW) demonstrate the effectiveness of the proposed method.
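
The pipeline (similarity graph, spectral embedding, k-means grouping) can be sketched on 2-D point data standing in for foreground pixels. The function name and the plain 2-means step are our own simplifications of the paper's orthogonal k-means; the Gaussian similarity and unnormalized Laplacian are one standard spectral-clustering variant, not necessarily the authors' exact choice:

```python
import numpy as np

def spectral_bipartition(points, sigma=2.0):
    """Split 2-D points into two clusters via unnormalized spectral clustering."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian similarity graph
    L = np.diag(W.sum(1)) - W              # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(L)            # eigenvalues in ascending order
    X = vecs[:, :2]                        # embed points in 2 smallest eigenvectors
    # deterministic 2-means: seed with the first row and the row farthest from it
    c = np.stack([X[0], X[np.argmax(((X - X[0]) ** 2).sum(1))]])
    for _ in range(20):
        lab = np.argmin(((X[:, None] - c[None]) ** 2).sum(-1), axis=1)
        for k in range(2):
            if (lab == k).any():
                c[k] = X[lab == k].mean(0)
    return lab
```

Applied to pixels from two well-separated horizontal "text lines", the embedding collapses each line onto one point, so the k-means step recovers the lines exactly.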

  8. We get the algorithms of our ground truths: Designing referential databases in digital image processing

    PubMed Central

    Jaton, Florian

    2017-01-01

    This article documents the practical efforts of a group of scientists designing an image-processing algorithm for saliency detection. By following the actors of this computer science project, the article shows that the problems often considered to be the starting points of computational models are in fact provisional results of time-consuming, collective and highly material processes that engage habits, desires, skills and values. In the project being studied, problematization processes lead to the constitution of referential databases called ‘ground truths’ that enable both the effective shaping of algorithms and the evaluation of their performances. Working as important common touchstones for research communities in image processing, the ground truths are inherited from prior problematization processes and may be imparted to subsequent ones. The ethnographic results of this study suggest two complementary analytical perspectives on algorithms: (1) an ‘axiomatic’ perspective that understands algorithms as sets of instructions designed to solve given problems computationally in the best possible way, and (2) a ‘problem-oriented’ perspective that understands algorithms as sets of instructions designed to computationally retrieve outputs designed and designated during specific problematization processes. If the axiomatic perspective on algorithms puts the emphasis on the numerical transformations of inputs into outputs, the problem-oriented perspective puts the emphasis on the definition of both inputs and outputs. PMID:28950802

  9. Document image archive transfer from DOS to UNIX

    NASA Technical Reports Server (NTRS)

    Hauser, Susan E.; Gill, Michael J.; Thoma, George R.

    1994-01-01

    An R&D division of the National Library of Medicine has developed a prototype system for automated document image delivery as an adjunct to the labor-intensive manual interlibrary loan service of the library. The document image archive is implemented by a PC controlled bank of optical disk drives which use 12 inch WORM platters containing bitmapped images of over 200,000 pages of medical journals. Following three years of routine operation which resulted in serving patrons with articles both by mail and fax, an effort is underway to relocate the storage environment from the DOS-based system to a UNIX-based jukebox whose magneto-optical erasable 5 1/4 inch platters hold the images. This paper describes the deficiencies of the current storage system, the design issues of modifying several modules in the system, the alternatives proposed and the tradeoffs involved.

  10. Nonlinear filtering for character recognition in low quality document images

    NASA Astrophysics Data System (ADS)

    Diaz-Escobar, Julia; Kober, Vitaly

    2014-09-01

    Optical character recognition in scanned printed documents is a well-studied task, where capture conditions such as sheet position, illumination, contrast and resolution are controlled. Nowadays it is often more practical to use mobile devices than a scanner for document capture. As a consequence, the quality of document images is often poor owing to the presence of geometric distortions, nonhomogeneous illumination, low resolution, etc. In this work we propose to use multiple adaptive nonlinear composite filters for the detection and classification of characters. Computer simulation results obtained with the proposed system are presented and discussed.

  11. MULTI-CORE AND OPTICAL PROCESSOR RELATED APPLICATIONS RESEARCH AT OAK RIDGE NATIONAL LABORATORY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barhen, Jacob; Kerekes, Ryan A; ST Charles, Jesse Lee

    2008-01-01

    High-speed parallelization of common tasks holds great promise as a low-risk approach to achieving the significant increases in signal processing and computational performance required for next-generation innovations in reconfigurable radio systems. Researchers at the Oak Ridge National Laboratory have been working on exploiting the parallelization offered by this emerging technology and applying it to a variety of problems. This paper will highlight recent experience with several parallel processors applied to signal processing tasks that are directly relevant to the signal processing required for SDR/CR waveforms. The first is the EnLight Optical Core Processor applied to matched filter (MF) correlation processing via fast Fourier transform (FFT) of broadband Doppler-sensitive waveforms (DSW) using active sonar arrays for target tracking. The second is the IBM CELL Broadband Engine applied to a 2-D discrete Fourier transform (DFT) kernel for image processing and frequency-domain processing. The third is the NVIDIA graphics processor applied to document feature clustering. EnLight Optical Core Processor. Optical processing is inherently capable of high parallelism that can be translated to very high performance, low power dissipation computing. The EnLight 256 is a small form factor signal processing chip (5x5 cm2) with a digital optical core that is being developed by an Israeli startup company. As part of its evaluation of foreign technology, ORNL's Center for Engineering Science Advanced Research (CESAR) had access to precursor EnLight 64 Alpha hardware for a preliminary assessment of capabilities in terms of large Fourier transforms for matched filter banks and applications related to Doppler-sensitive waveforms. This processor is optimized for array operations, which it performs in fixed-point arithmetic at the rate of 16 TeraOPS at 8-bit precision. This is approximately 1000 times faster than the fastest DSP available today.
The optical core performs the matrix-vector multiplications, where the nominal matrix size is 256×256. The system clock is 125 MHz. At each clock cycle, 128K multiply-and-add operations are carried out, which at this clock rate yields a peak performance of 16 TeraOPS. IBM Cell Broadband Engine. The Cell processor is the product of five years of sustained, intensive R&D collaboration (involving over $400M investment) between IBM, Sony, and Toshiba. Its architecture comprises one multithreaded 64-bit PowerPC processor element (PPE) with VMX capabilities and two levels of globally coherent cache, and 8 synergistic processor elements (SPEs). Each SPE consists of a processor (SPU) designed for streaming workloads, local memory, and a globally coherent direct memory access (DMA) engine. Computations are performed in 128-bit wide single instruction multiple data streams (SIMD). An integrated high-bandwidth element interconnect bus (EIB) connects the nine processors and their ports to external memory and to system I/O. The Applied Software Engineering Research (ASER) Group at ORNL is applying the Cell to a variety of text and image analysis applications. Research on Cell-equipped PlayStation 3 (PS3) consoles has led to the development of a correlation-based image recognition engine that enables a single PS3 to process images at more than 10X the speed of state-of-the-art single-core processors. NVIDIA Graphics Processing Units. The ASER group is also employing the latest NVIDIA graphical processing units (GPUs) to accelerate clustering of thousands of text documents using recently developed clustering algorithms such as document flocking and affinity propagation.
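
    As a concrete illustration of the matched-filter correlation via FFT mentioned above, the NumPy sketch below embeds a toy chirp template in noise and recovers its position; the waveform and sizes are illustrative, not the study's sonar data.

```python
import numpy as np

def matched_filter_fft(received, template):
    """Cross-correlate a received signal with a known template via FFT.

    Multiplying the received spectrum by the conjugate template spectrum
    and inverse-transforming is equivalent to time-domain matched
    filtering, but costs O(n log n) instead of O(n^2).
    """
    n = len(received) + len(template) - 1   # zero-pad to avoid circular wrap-around
    R = np.fft.fft(received, n)
    T = np.fft.fft(template, n)
    return np.fft.ifft(R * np.conj(T)).real

# Toy example: a chirp buried in noise at sample 300.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 256)
template = np.sin(2 * np.pi * (5 + 20 * t) * t)  # simple linear chirp
signal = rng.normal(0.0, 0.5, 1024)
signal[300:300 + len(template)] += template
correlation = matched_filter_fft(signal, template)
peak_lag = int(np.argmax(correlation))           # expected near sample 300
```

    The correlation peak marks where the template best aligns with the received signal, which is exactly the quantity a matched filter bank evaluates for each candidate waveform.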

  12. Standard Health Level Seven for Odontological Digital Imaging

    PubMed Central

    Abril-Gonzalez, Mauricio; Portilla, Fernando A.

    2017-01-01

    Abstract Background: A guide for the implementation of dental digital imaging reports was developed and validated through the International Standard of Health Informatics–Health Level Seven (HL7), achieving interoperability with an electronic system that keeps dental records. Introduction: Digital imaging benefits patients, who can view previous close-ups of dental examinations; providers, because of greater efficiency in managing information; and insurers, because of improved accessibility, patient monitoring, and more efficient cost management. Finally, imaging is beneficial for the dentist, who can be more agile in the diagnosis and treatment of patients using this tool. Materials and Methods: The guide was developed under the parameters of an HL7 standard. It was necessary to create a group of dentists and three experts in information and communication technologies from different institutions. Discussion: Diagnostic images scanned with conventional radiology or from a radiovisiograph can be converted to Digital Imaging and Communications in Medicine (DICOM) format, while also retaining patient information. The guide shows how the information of the health record of the patient and the information of the dental image can be standardized in a Clinical Dental Record document using an international informatics standard, the HL7 V3 CDA document (dental document Level 2). Since it is a standardized informatics document, it can be sent, stored, or displayed using different devices (personal computers or mobile devices), independent of the platform used. Conclusions: Interoperability using dental images and dental record systems reduces adverse events, increases security for the patient, and makes more efficient use of resources. This article makes a contribution to the field of telemedicine in dental informatics. In addition, the results could be a reference for projects of electronic medical records when dental documents are part of them. PMID:27248059

  13. Standard Health Level Seven for Odontological Digital Imaging.

    PubMed

    Abril-Gonzalez, Mauricio; Portilla, Fernando A; Jaramillo-Mejia, Marta C

    2017-01-01

    A guide for the implementation of dental digital imaging reports was developed and validated through the International Standard of Health Informatics-Health Level Seven (HL7), achieving interoperability with an electronic system that keeps dental records. Digital imaging benefits patients, who can view previous close-ups of dental examinations; providers, because of greater efficiency in managing information; and insurers, because of improved accessibility, patient monitoring, and more efficient cost management. Finally, imaging is beneficial for the dentist, who can be more agile in the diagnosis and treatment of patients using this tool. The guide was developed under the parameters of an HL7 standard. It was necessary to create a group of dentists and three experts in information and communication technologies from different institutions. Diagnostic images scanned with conventional radiology or from a radiovisiograph can be converted to Digital Imaging and Communications in Medicine (DICOM) format, while also retaining patient information. The guide shows how the information of the health record of the patient and the information of the dental image can be standardized in a Clinical Dental Record document using an international informatics standard, the HL7 V3 CDA document (dental document Level 2). Since it is a standardized informatics document, it can be sent, stored, or displayed using different devices (personal computers or mobile devices), independent of the platform used. Interoperability using dental images and dental record systems reduces adverse events, increases security for the patient, and makes more efficient use of resources. This article makes a contribution to the field of telemedicine in dental informatics. In addition, the results could be a reference for projects of electronic medical records when dental documents are part of them.

  14. Digital authentication with copy-detection patterns

    NASA Astrophysics Data System (ADS)

    Picard, Justin

    2004-06-01

    Technologies for making high-quality copies of documents are becoming more available, cheaper, and more efficient. As a result, the counterfeiting business engenders huge losses, ranging from 5% to 8% of worldwide sales of brand products, and endangers the reputation and value of the brands themselves. Moreover, the growth of the Internet drives the business of counterfeited documents (fake IDs, university diplomas, checks, and so on), which can be bought easily and anonymously from hundreds of companies on the Web. The incredible progress of digital imaging equipment has put in question the very possibility of verifying the authenticity of documents: how can we discern genuine documents from seemingly "perfect" copies? This paper proposes a solution based on creating digital images with specific properties, called copy-detection patterns (CDPs), that are printed on arbitrary documents, packages, etc. CDPs make optimal use of an "information loss principle": every time an image is printed or scanned, some information is lost about the original digital image. That principle applies even to the highest quality scanning, digital imaging, printing, or photocopying equipment today, and will likely remain true tomorrow. By measuring the amount of information contained in a scanned CDP, the CDP detector can make a decision on the authenticity of the document.
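
    The information loss principle lends itself to a simple numerical sketch. The model below is a deliberately crude stand-in for real print-scan physics (a fixed blur kernel plus Gaussian noise): a counterfeit goes through the cycle twice, so its scan correlates less with the original digital pattern than a genuine printout does.

```python
import numpy as np

rng = np.random.default_rng(42)

def print_scan(pattern, noise=0.4):
    """Crude model of one print-and-scan cycle: blur plus sensor noise."""
    kernel = np.array([0.25, 0.5, 0.25])
    return np.convolve(pattern, kernel, mode="same") + rng.normal(0.0, noise, pattern.shape)

def similarity(a, b):
    """Normalized correlation, used here as the 'amount of information' score."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

cdp = rng.choice([-1.0, 1.0], size=4096)      # high-entropy digital pattern
genuine = print_scan(cdp)                     # original document: one cycle
counterfeit = print_scan(print_scan(cdp))     # copy: two cycles, more loss

s_genuine = similarity(cdp, genuine)
s_copy = similarity(cdp, counterfeit)
# A detector would compare these scores against a calibrated threshold.
```

    Each additional print-scan pass lowers the score, which is the asymmetry the detector exploits.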

  15. Spotting words in handwritten Arabic documents

    NASA Astrophysics Data System (ADS)

    Srihari, Sargur; Srinivasan, Harish; Babu, Pavithra; Bhole, Chetan

    2006-01-01

    The design and performance of a system for spotting handwritten Arabic words in scanned document images is presented. The three main components of the system are a word segmenter, a shape-based matcher for words, and a search interface. The user types a query in English within a search window; the system finds the equivalent Arabic word, e.g., by dictionary look-up, and locates word images in an indexed (segmented) set of documents. A two-step approach is employed in performing the search: (1) prototype selection: the query is used to obtain a set of handwritten samples of that word from a known set of writers (these are the prototypes), and (2) word matching: the prototypes are used to spot each occurrence of those words in the indexed document database. A ranking is performed on the entire set of test word images, where the ranking criterion is a similarity score between each prototype word and the candidate words based on global word shape features. A database of 20,000 word images contained in 100 scanned handwritten Arabic documents written by 10 different writers was used to study retrieval performance. Using five writers to provide prototypes and the other five for testing, with manually segmented documents, 55% precision is obtained at 50% recall. Performance increases as more writers are used for training.
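
    The word-matching step, ranking candidate word images by a similarity score against the prototypes, can be sketched as follows; the three-component feature vectors stand in for the paper's global word shape features and are purely illustrative.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_candidates(prototypes, candidates):
    """Rank candidate word images by their best similarity to any prototype.

    Returns (score, candidate_index) pairs, best match first.
    """
    scores = [(max(cosine(c, p) for p in prototypes), i)
              for i, c in enumerate(candidates)]
    return sorted(scores, reverse=True)

# Two prototype vectors for the query word, three candidate word images:
protos = [[1.0, 0.2, 0.5], [0.9, 0.25, 0.55]]
cands = [[0.1, 0.9, 0.0],     # a different word
         [0.95, 0.22, 0.52],  # likely occurrence of the query word
         [0.5, 0.5, 0.5]]
ranking = rank_candidates(protos, cands)
```

    Precision/recall trade-offs then follow from where one cuts this ranked list.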

  16. Span graphics display utilities handbook, first edition

    NASA Technical Reports Server (NTRS)

    Gallagher, D. L.; Green, J. L.; Newman, R.

    1985-01-01

    The Space Physics Analysis Network (SPAN) is a computer network connecting scientific institutions throughout the United States. This network provides an avenue for timely, correlative research between investigators, in a multidisciplinary approach to space physics studies. An objective in the development of SPAN is to make available direct and simplified procedures that scientists can use, without specialized training, to exchange information over the network. Information exchanges include raw and processed data, analysis programs, correspondence, documents, and graphic images. This handbook details procedures that can be used to exchange graphic images over SPAN. The intent is to periodically update this handbook to reflect the constantly changing facilities available on SPAN. The utilities described within reflect an earnest attempt to provide useful descriptions of working utilities that can be used to transfer graphic images across the network. Whether graphic images are representative of satellite observations or theoretical modeling, and whether graphic images are of device-dependent or device-independent type, the SPAN graphics display utilities handbook will be the user's guide to graphic image exchange.

  17. A simple method for imaging axonal transport in aging neurons using the adult Drosophila wing.

    PubMed

    Vagnoni, Alessio; Bullock, Simon L

    2016-09-01

    There is growing interest in the link between axonal cargo transport and age-associated neuronal dysfunction. The study of axonal transport in neurons of adult animals requires intravital or ex vivo imaging approaches, which are laborious and expensive in vertebrate models. We describe simple, noninvasive procedures for imaging cargo motility within axons using sensory neurons of the translucent Drosophila wing. A key aspect is a method for mounting the intact fly that allows detailed imaging of transport in wing neurons. Coupled with existing genetic tools in Drosophila, this is a tractable system for studying axonal transport over the life span of an animal and thus for characterization of the relationship between cargo dynamics, neuronal aging and disease. Preparation of a sample for imaging takes ∼5 min, with transport typically filmed for 2-3 min per wing. We also document procedures for the quantification of transport parameters from the acquired images and describe how the protocol can be adapted to study other cell biological processes in aging neurons.

  18. Change detection on UGV patrols with respect to a reference tour using VIS imagery

    NASA Astrophysics Data System (ADS)

    Müller, Thomas

    2015-05-01

    Autonomous driving robots (UGVs, Unmanned Ground Vehicles) equipped with visual-optical (VIS) cameras offer a high potential to automatically detect suspicious occurrences and dangerous or threatening situations on patrol. In order to explore this potential, the scene of interest is first recorded on a reference tour representing the 'everything okay' situation. On further patrols, changes are detected with respect to the reference in a two-step processing scheme. In the first step, an image retrieval is done to find the reference images that are closest to the current camera image on patrol. This is done efficiently, based on precalculated image-to-image registrations of the reference, by optimizing image overlap in a local reference search (after a global search when needed). In the second step, a robust spatio-temporal change detection is performed that widely compensates for 3-D parallax caused by variations of the camera position. Various results document the performance of the presented approach.

  19. Assessment of landscape change associated with tropical cyclone phenomena in Baja California Sur, Mexico, using satellite remote sensing

    NASA Astrophysics Data System (ADS)

    Martinez-Gutierrez, Genaro

    Baja California Sur (Mexico), as well as mainland Mexico, is affected by tropical cyclone storms, which originate in the eastern north Pacific. Historical records show that Baja has been damaged by intense summer storms. An arid to semiarid climate characterizes the study area, where precipitation mainly occurs during the summer and winter seasons. Natural and anthropogenic changes have impacted the landscape of southern Baja. The present research documents the effects of tropical storms over the southern region of Baja California for a period of approximately twenty-six years. The goal of the research is to demonstrate how remote sensing can be used to detect the important effects of tropical storms, including: (a) evaluation of change detection algorithms, and (b) delineation of changes to the landscape, including coastal modification, fluvial erosion and deposition, vegetation change, and river avulsion. Digital image processing methods with temporal Landsat satellite remotely sensed data from the North America Landscape Characterization archive (NALC), Thematic Mapper (TM), and Enhanced Thematic Mapper (ETM) images were used to document the landscape change. Two image processing methods were tested: image differencing (ID) and principal component analysis (PCA). Landscape changes identified with the NALC archive and TM images showed that the major changes included a rapid change of land use in the towns of San Jose del Cabo and Cabo San Lucas between 1973 and 1986. The features detected using the algorithms included flood deposits within the channels of active streams, erosion banks, and new channels caused by channel avulsion. Despite the 19-year period covered by the NALC data and the approximately 10-year intervals between acquisition dates, changed features could still be identified in the images. The TM images showed that flooding from Hurricane Isis (1998) produced new large deposits within the stream channels.
This research has shown that remote sensing based change detection can delineate the effects of flooding on the landscape at scales down to the nominal resolution of the sensor. These findings indicate that many other applications for change detection are both viable and important, including disaster response, flood hazard planning, geomorphic studies, and water supply management in deserts.
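
    The image differencing (ID) method named above can be sketched in a few lines: subtract the co-registered images and flag pixels whose difference deviates strongly from the mean difference. The threshold rule and synthetic scene below are a generic illustration; the study's thresholds would be scene-specific.

```python
import numpy as np

def change_mask(img_t1, img_t2, k=2.0):
    """Image differencing (ID) change detection.

    Flags pixels whose difference deviates from the mean difference by
    more than k standard deviations of the difference image.
    """
    diff = img_t2.astype(float) - img_t1.astype(float)
    return np.abs(diff - diff.mean()) > k * diff.std()

# Synthetic before/after scene with one bright "flood deposit".
rng = np.random.default_rng(1)
before = rng.normal(100.0, 5.0, (64, 64))
after = before + rng.normal(0.0, 5.0, (64, 64))   # sensor noise only
after[20:30, 20:30] += 80.0                       # the changed patch
mask = change_mask(before, after)
```

    PCA-based detection differs only in first projecting the multi-date stack onto principal components before thresholding.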

  20. Representation-based user interfaces for the audiovisual library of the year 2000

    NASA Astrophysics Data System (ADS)

    Aigrain, Philippe; Joly, Philippe; Lepain, Philippe; Longueville, Veronique

    1995-03-01

    The audiovisual library of the future will be based on computerized access to digitized documents. In this communication, we address the user interface issues which will arise from this new situation. One cannot simply transfer a user interface designed for the piece-by-piece production of some audiovisual presentation and make it a tool for accessing full-length movies in an electronic library. One cannot take a digital sound editing tool and propose it as a means to listen to a musical recording. In our opinion, when computers are used as mediators to existing content, document representation-based user interfaces are needed. With such user interfaces, a structured visual representation of the document contents is presented to the user, who can then manipulate it to control perception and analysis of these contents. In order to build such manipulable visual representations of audiovisual documents, one needs to automatically extract structural information from the document contents. In this communication, we describe possible visual interfaces for various temporal media, and we propose methods for the economically feasible large-scale processing of documents. The work presented is sponsored by the Bibliotheque Nationale de France: it is part of the program aiming at developing, for image and sound documents, an experimental counterpart to the digitized text reading workstation of this library.

  1. The Road to Paperless

    ERIC Educational Resources Information Center

    Villano, Matt

    2006-01-01

    More and more colleges and universities today have discovered electronic record-keeping and record-sharing, made possible by document imaging technology. Across the country, schools such as Monmouth University (New Jersey), Washington State University, the University of Idaho, and Towson University (Maryland) are embracing document imaging. Yet…

  2. Image segmentation evaluation for very-large datasets

    NASA Astrophysics Data System (ADS)

    Reeves, Anthony P.; Liu, Shuang; Xie, Yiting

    2016-03-01

    With the advent of modern machine learning methods and fully automated image analysis there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. Current approaches of visual inspection and manual markings do not scale well to big data. We present a new approach that depends on fully automated algorithm outcomes for segmentation documentation, requires no manual marking, and provides quantitative evaluation for computer algorithms. The documentation of new image segmentations and new algorithm outcomes is achieved by visual inspection. The burden of visual inspection on large datasets is minimized by (a) customized visualizations for rapid review and (b) reducing the number of cases to be reviewed through analysis of quantitative segmentation evaluation. This method has been applied to a dataset of 7,440 whole-lung CT images for 6 different segmentation algorithms designed to fully automatically facilitate the measurement of a number of very important quantitative image biomarkers. The results indicate that we could achieve 93% to 99% successful segmentation for these algorithms on this relatively large image database. The presented evaluation method may be scaled to much larger image databases.

  3. Cloud-Based NoSQL Open Database of Pulmonary Nodules for Computer-Aided Lung Cancer Diagnosis and Reproducible Research.

    PubMed

    Ferreira Junior, José Raniery; Oliveira, Marcelo Costa; de Azevedo-Marques, Paulo Mazzoncini

    2016-12-01

    Lung cancer is the leading cause of cancer-related deaths in the world, and its main manifestation is pulmonary nodules. Detection and classification of pulmonary nodules are challenging tasks that must be done by qualified specialists, but image interpretation errors make those tasks difficult. In order to aid radiologists in those hard tasks, it is important to integrate computer-based tools with the lesion detection, pathology diagnosis, and image interpretation processes. However, computer-aided diagnosis research faces the problem of not having enough shared medical reference data for the development, testing, and evaluation of computational methods for diagnosis. In order to minimize this problem, this paper presents a public nonrelational document-oriented cloud-based database of pulmonary nodules characterized by 3D texture attributes, identified by experienced radiologists and classified in nine different subjective characteristics by the same specialists. Our goal with the development of this database is to improve computer-aided lung cancer diagnosis and pulmonary nodule detection and classification research through the deployment of this database in a cloud Database as a Service framework. Pulmonary nodule data was provided by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), image descriptors were acquired by volumetric texture analysis, and the database schema was developed using a document-oriented Not only Structured Query Language (NoSQL) approach. The proposed database now holds 379 exams, 838 nodules, and 8,237 images, of which 4,029 are CT scans and 4,208 are manually segmented nodules, and it is allocated in a MongoDB instance on a cloud infrastructure.
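
    In a document-oriented store such as MongoDB, each nodule with its ratings, descriptors, and image references can live in one self-describing document, retrieved without joins. The structure below is purely illustrative; the field names are hypothetical and not the published schema.

```python
# Hypothetical nodule document for a document-oriented (NoSQL) database.
# All field names here are illustrative assumptions, not the actual schema.
nodule_doc = {
    "exam_id": "LIDC-0001",
    "nodule_id": 1,
    "subjective_characteristics": {   # nine ratings in the real database
        "malignancy": 3,
        "spiculation": 2,
        "texture": 5,
    },
    "texture_attributes_3d": [0.41, 1.87, 0.09],  # 3D texture descriptors
    "images": [
        {"kind": "ct_scan", "uri": "ct/0001_042.dcm"},
        {"kind": "segmented_nodule", "uri": "seg/0001_042.png"},
    ],
}
# With pymongo, such a document would be stored directly, e.g.:
#   collection.insert_one(nodule_doc)
```

    Because the schema is carried by each document, new descriptors can be added to later exams without migrating earlier ones.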

  4. Structured Forms Reference Set of Binary Images II (SFRS2)

    National Institute of Standards and Technology Data Gateway

    NIST Structured Forms Reference Set of Binary Images II (SFRS2) (Web, free access)   The second NIST database of structured forms (Special Database 6) consists of 5,595 pages of binary, black-and-white images of synthesized documents containing hand-print. The documents in this database are 12 different tax forms from the IRS 1040 Package X for the year 1988.

  5. Outpatients flow management and ophthalmic electronic medical records system in university hospital using Yahgee Document View.

    PubMed

    Matsuo, Toshihiko; Gochi, Akira; Hirakawa, Tsuyoshi; Ito, Tadashi; Kohno, Yoshihisa

    2010-10-01

    General electronic medical records systems remain insufficient for ophthalmology outpatient clinics from the viewpoint of dealing with many ophthalmic examinations and images in a large number of patients. Filing systems for documents and images by Yahgee Document View (Yahgee, Inc.) were introduced on the platform of a general electronic medical records system (Fujitsu, Inc.). An outpatient flow management system and an electronic medical records system for ophthalmology were constructed. All images from ophthalmic appliances were transported to Yahgee Image by the MaxFile gateway system (P4 Medic, Inc.). The flow of outpatients going through examinations such as visual acuity testing was monitored by the list "Ophthalmology Outpatients List" by Yahgee Workflow in addition to the list "Patients Reception List" by Fujitsu. Patients' identification numbers were scanned with bar code readers attached to ophthalmic appliances. Dual monitors were placed in doctors' rooms to show Fujitsu Medical Records on the left-hand monitor and ophthalmic charts of Yahgee Document on the right-hand monitor. The data of manually input visual acuity and automatically exported autorefractometry and non-contact tonometry on a new template, MaxFile ED, were again automatically transported to designated boxes on ophthalmic charts of Yahgee Document. Images such as fundus photographs, fluorescein angiograms, and optical coherence tomographic and ultrasound scans were viewed by Yahgee Image and copy-and-pasted to assigned boxes on the ophthalmic charts. Orders such as appointments, drug prescriptions, fee and diagnosis input, central laboratory tests, and surgical theater and ward room reservations were placed through functions of the Fujitsu electronic medical records system.
The combination of the Fujitsu electronic medical records and Yahgee Document View systems enabled the University Hospital to examine the same number of outpatients as prior to the implementation of the computerized filing system.

  6. Efficient Use of Video for 3d Modelling of Cultural Heritage Objects

    NASA Astrophysics Data System (ADS)

    Alsadik, B.; Gerke, M.; Vosselman, G.

    2015-03-01

    Currently, there is rapid development in the techniques of automated image based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image matching methods, and in camera technology. One possibility is to use video imaging to create 3D reality based models of cultural heritage architectures and monuments. In practice, video imaging is much easier to apply than still image shooting in IBM techniques, because the latter needs thorough planning and proficiency. However, one is faced with mainly three problems when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects. These problems are: the low resolution of video images, the need to process a large number of short baseline video images, and blur effects due to camera shake on a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images is convenient to decrease the processing time and to create a reliable textured 3D model compared with models produced by still imaging. Two experiments, modelling a building and a monument, are tested using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to find the final predicted accuracy and the model level of detail. Depending on the object complexity and video imaging resolution, the tests show an achievable average accuracy between 1 and 5 cm when using video imaging, which is suitable for visualization, virtual museums and low detailed documentation.
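
    One common way to automate the blur screening described above is to score each frame with the variance of its Laplacian and keep only acceptably sharp frames while subsampling the sequence. This is a generic focus measure, sketched here as an assumption, not necessarily the paper's exact frame-selection criterion.

```python
import numpy as np

def sharpness(frame):
    """Variance of a discrete Laplacian: a standard focus/blur measure."""
    lap = (-4.0 * frame[1:-1, 1:-1]
           + frame[:-2, 1:-1] + frame[2:, 1:-1]
           + frame[1:-1, :-2] + frame[1:-1, 2:])
    return float(lap.var())

def select_frames(frames, keep_every=5, min_sharpness=1.0):
    """Subsample a video, keeping indices of acceptably sharp frames."""
    return [i for i in range(0, len(frames), keep_every)
            if sharpness(frames[i]) >= min_sharpness]

# Sanity check: smoothing a frame must lower its sharpness score.
rng = np.random.default_rng(3)
sharp_frame = rng.normal(0.0, 1.0, (32, 32))
blurred_frame = (sharp_frame
                 + np.roll(sharp_frame, 1, axis=0)
                 + np.roll(sharp_frame, 1, axis=1)) / 3.0
```

    The `keep_every` stride controls baseline length between retained frames, trading coverage against redundancy for the SFM step.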

  7. [Wish and reality in installation of a clinic-wide system for image and documentation access].

    PubMed

    Rau, W S; Schwabe, C

    1999-04-01

    This report describes the problems that can occur in the representation of the radiological workplace in a digital environment. On one hand, the radiologist can sometimes access good equipment in "stand-alone" surroundings (CT, laser printer, workstations, ...); on the other hand, the existing insufficient communication between different components is only rarely capable of supporting the radiological workflow. This unsatisfactory framework handicaps the required clinic-wide distribution of radiological information. From the beginning we defined user groups requiring different radiological data, each closely associated with specific hardware and software: the radiological workstation in the department for reporting and image processing; the demonstration workstation in wards/outpatient departments for clinicians involved in treatment; and standard PCs with access to the digital medical document for clinicians involved in treatment. At all workstations the medical as well as the legal unity of digital radiological images and the corresponding report is ensured. Only the first two user groups have unrestricted access to the RIS database and to the PACS archive. We have decided that the RIS should be the master of the RIS/PACS system. For an effective master/slave relationship between the RIS and the PACS archive and PACS workstations, we suggest marking images and/or series of images. The third user group depends on the information exported by the radiologist from the PACS. After the report is written and signed by the radiologist, the digital report is transferred from the RIS to the HIS, and the report is automatically attached to the images. Authorized personnel at the wards and outpatient departments are able to read the combination of validated report and exported radiological images as part of the digital medical record with an intranet browser on standard PCs.

  8. Large-Scale Document Automation: The Systems Integration Issue.

    ERIC Educational Resources Information Center

    Kalthoff, Robert J.

    1985-01-01

    Reviews current technologies for electronic imaging and its recording and transmission, including digital recording, optical data disks, automated image-delivery micrographics, high-density-magnetic recording, and new developments in telecommunications and computers. The role of the document automation systems integrator, who will bring these…

  9. A high-level 3D visualization API for Java and ImageJ.

    PubMed

    Schmid, Benjamin; Schindelin, Johannes; Cardona, Albert; Longair, Mark; Heisenberg, Martin

    2010-05-21

    Current imaging methods such as Magnetic Resonance Imaging (MRI), Confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. The reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. Our framework enables biomedical image software development to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.

  10. Image enhancement using the hypothesis selection filter: theory and application to JPEG decoding.

    PubMed

    Wong, Tak-Shing; Bouman, Charles A; Pollak, Ilya

    2013-03-01

    We introduce the hypothesis selection filter (HSF) as a new approach for image quality enhancement. We assume that a set of filters has been selected a priori to improve the quality of a distorted image containing regions with different characteristics. At each pixel, the HSF uses a locally computed feature vector to predict the relative performance of the filters in estimating the corresponding pixel intensity in the original undistorted image. The prediction result then determines the proportion of each filter used to obtain the final processed output. In this way, the HSF serves as a framework for combining the outputs of a number of different user-selected filters, each best suited for a different region of an image. We formulate our scheme in a probabilistic framework where the HSF output is obtained as the Bayesian minimum mean square error estimate of the original image. Maximum likelihood estimates of the model parameters are determined from an offline, fully unsupervised training procedure that is derived from the expectation-maximization algorithm. To illustrate how to apply the HSF and to demonstrate its potential, we apply our scheme as a post-processing step to improve the decoding quality of JPEG-encoded document images. The scheme consistently improves the quality of the decoded image over a variety of image content with different characteristics. We show that our scheme results in quantitative improvements over several other state-of-the-art JPEG decoding methods.
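
    The per-pixel blending at the heart of the HSF can be sketched as below. Here the per-pixel weights are supplied by hand for illustration; in the actual method they come from the trained hypothesis-selection model applied to local feature vectors.

```python
import numpy as np

def hsf_blend(filter_outputs, weights):
    """Combine candidate filter outputs pixel by pixel.

    `weights[k]` gives the proportion of filter k at each pixel; the
    proportions are normalized so they sum to one per pixel.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum(axis=0)
    return (w * np.stack(filter_outputs)).sum(axis=0)

# Two candidate filters on a 1-D "scanline": identity and a smoother.
x = np.array([0.0, 0.0, 10.0, 10.0, 10.0, 0.0, 0.0])
identity = x.copy()
smooth = np.convolve(x, [1/3, 1/3, 1/3], mode="same")
# Trust the identity filter at the step edges, the smoother elsewhere,
# mimicking a predictor that protects text edges while denoising flat areas.
w_identity = np.array([0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0])
w_smooth = 1.0 - w_identity
blended = hsf_blend([identity, smooth], [w_identity, w_smooth])
```

    Soft (fractional) weights give the convex combination that the Bayesian MMSE formulation produces in the paper.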

  11. Font adaptive word indexing of modern printed documents.

    PubMed

    Marinai, Simone; Marino, Emanuele; Soda, Giovanni

    2006-08-01

    We propose an approach for the word-level indexing of modern printed documents which are difficult to recognize using current OCR engines. By means of word-level indexing, it is possible to retrieve the position of words in a document, enabling queries involving proximity of terms. Web search engines implement this kind of indexing, allowing users to retrieve Web pages on the basis of their textual content. Nowadays, digital libraries hold collections of digitized documents that can be retrieved either by browsing the document images or by relying on appropriate metadata assembled by domain experts. Word indexing tools would therefore increase access to these collections. The proposed system is designed to index homogeneous document collections by automatically adapting to different languages and font styles, without relying on OCR engines for character recognition. The approach is based on three main ideas: the use of Self-Organizing Maps (SOM) to perform unsupervised character clustering, the definition of a suitable vector-based word representation whose size depends on the word aspect ratio, and the run-time alignment of the query word with indexed words to deal with broken and touching characters. The most appropriate applications are for processing modern printed documents (17th to 19th centuries) where current OCR engines are less accurate. Our experimental analysis addresses six data sets containing documents ranging from books of the 17th century to contemporary journals.

  12. Patient-generated Digital Images after Pediatric Ambulatory Surgery.

    PubMed

    Miller, Matthew W; Ross, Rachael K; Voight, Christina; Brouwer, Heather; Karavite, Dean J; Gerber, Jeffrey S; Grundmeier, Robert W; Coffin, Susan E

    2016-07-06

    To describe the use of digital images captured by parents or guardians and sent to clinicians for assessment of wounds after pediatric ambulatory surgery. Subjects with digital images of post-operative wounds were identified as part of an ongoing cohort study of infections after ambulatory surgery within a large pediatric healthcare system. We performed a structured review of the electronic health record (EHR) to determine how digital images were documented in the EHR and used in clinical care. We identified 166 patients whose parent or guardian reported sending a digital image of the wound to the clinician after surgery. A corresponding digital image was located in the EHR in only 121 of these encounters. A change in clinical management was documented in 20% of these encounters, including referral for in-person evaluation of the wound and antibiotic prescription. Clinical teams have developed ad hoc workflows to use digital images to evaluate post-operative pediatric surgical patients. Because the use of digital images to support follow-up care after ambulatory surgery is likely to increase, it is important that high-quality images are captured and documented appropriately in the EHR to ensure privacy, security, and a high level of care.

  13. Patient-Generated Digital Images after Pediatric Ambulatory Surgery

    PubMed Central

    Ross, Rachael K.; Voight, Christina; Brouwer, Heather; Karavite, Dean J.; Gerber, Jeffrey S.; Grundmeier, Robert W.; Coffin, Susan E.

    2016-01-01

    Summary Objective To describe the use of digital images captured by parents or guardians and sent to clinicians for assessment of wounds after pediatric ambulatory surgery. Methods Subjects with digital images of post-operative wounds were identified as part of an ongoing cohort study of infections after ambulatory surgery within a large pediatric healthcare system. We performed a structured review of the electronic health record (EHR) to determine how digital images were documented in the EHR and used in clinical care. Results We identified 166 patients whose parent or guardian reported sending a digital image of the wound to the clinician after surgery. A corresponding digital image was located in the EHR in only 121 of these encounters. A change in clinical management was documented in 20% of these encounters, including referral for in-person evaluation of the wound and antibiotic prescription. Conclusion Clinical teams have developed ad hoc workflows to use digital images to evaluate post-operative pediatric surgical patients. Because the use of digital images to support follow-up care after ambulatory surgery is likely to increase, it is important that high-quality images are captured and documented appropriately in the EHR to ensure privacy, security, and a high level of care. PMID:27452477

  14. Writer identification on historical Glagolitic documents

    NASA Astrophysics Data System (ADS)

    Fiel, Stefan; Hollaus, Fabian; Gau, Melanie; Sablatnig, Robert

    2013-12-01

    This work aims at automatically identifying the scribes of historical Slavonic manuscripts. The quality of the ancient documents is partially degraded by faded-out ink or varying background. The writer identification method is based on local image features described with the Scale Invariant Feature Transform (SIFT). A visual vocabulary is used to describe handwriting characteristics: the features are clustered using a Gaussian Mixture Model and encoded with the Fisher kernel. The writer identification approach was originally designed for grayscale images of modern handwriting. In contrast to modern documents, however, the historical manuscripts are partially corrupted by background clutter and water stains; as a result, SIFT features are also found on the background. Since the method also shows good results on binarized images of modern handwriting, the approach was additionally applied to binarized images of the ancient writings. Experiments show that this preprocessing step leads to a significant performance increase: the identification rate on binarized images is 98.9%, compared to an identification rate of 87.6% on grayscale images.
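
    The vocabulary-based description of handwriting can be illustrated with a simplified soft-assignment encoding. The GMM/Fisher-kernel machinery and real SIFT extraction are replaced here by plain Gaussian weights on toy descriptors; everything below is a sketch, not the paper's pipeline.

```python
import numpy as np

def signature(descriptors, vocab, sigma=1.0):
    """Encode local descriptors (e.g. SIFT) against a visual vocabulary by
    soft assignment; a simplified stand-in for the paper's Gaussian Mixture
    Model with Fisher-kernel encoding."""
    d2 = ((descriptors[:, None, :] - vocab[None, :, :]) ** 2).sum(axis=2)
    resp = np.exp(-d2 / (2 * sigma ** 2))
    resp /= resp.sum(axis=1, keepdims=True)   # per-descriptor posteriors
    return resp.mean(axis=0)                  # one fixed-size vector per page

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy descriptors for two document pages; writer retrieval would rank
# known writers by the similarity of their page signatures.
rng = np.random.default_rng(1)
vocab = rng.normal(size=(8, 4))               # hypothetical visual words
sim = cosine(signature(rng.normal(size=(50, 4)), vocab),
             signature(rng.normal(size=(50, 4)), vocab))
```

    Binarization, the preprocessing step the abstract credits with the performance gain, would simply remove background descriptors before encoding.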

  15. Storing and Viewing Electronic Documents.

    ERIC Educational Resources Information Center

    Falk, Howard

    1999-01-01

    Discusses the conversion of fragile library materials to computer storage and retrieval to extend the life of the items and to improve accessibility through the World Wide Web. Highlights include entering the images, including scanning; optical character recognition; full text and manual indexing; and available document- and image-management…

  16. Document Indexing for Image-Based Optical Information Systems.

    ERIC Educational Resources Information Center

    Thiel, Thomas J.; And Others

    1991-01-01

    Discussion of image-based information retrieval systems focuses on indexing. Highlights include computerized information retrieval; multimedia optical systems; optical mass storage and personal computers; and a case study that describes an optical disk system which was developed to preserve, access, and disseminate military documents. (19…

  17. New Software for Ensemble Creation in the Spitzer-Space-Telescope Operations Database

    NASA Technical Reports Server (NTRS)

    Laher, Russ; Rector, John

    2004-01-01

    Some of the computer pipelines used to process digital astronomical images from NASA's Spitzer Space Telescope require multiple input images, in order to generate high-level science and calibration products. The images are grouped into ensembles according to well documented ensemble-creation rules by making explicit associations in the operations Informix database at the Spitzer Science Center (SSC). The advantage of this approach is that a simple database query can retrieve the required ensemble of pipeline input images. New and improved software for ensemble creation has been developed. The new software is much faster than the existing software because it uses pre-compiled database stored-procedures written in Informix SPL (SQL programming language). The new software is also more flexible because the ensemble creation rules are now stored in and read from newly defined database tables. This table-driven approach was implemented so that ensemble rules can be inserted, updated, or deleted without modifying software.
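
    The table-driven idea can be mimicked with an in-memory SQLite database. The schema and rule names below are purely illustrative, not the SSC's Informix schema.

```python
import sqlite3

# In-memory stand-in for the operations database.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE images(id INTEGER, channel TEXT);
    CREATE TABLE ensemble_rules(rule TEXT, channel TEXT);
    INSERT INTO images VALUES (1, 'IRAC1'), (2, 'IRAC1'), (3, 'IRAC2');
    INSERT INTO ensemble_rules VALUES ('irac1_cal', 'IRAC1');
""")

# Because the grouping rules live in a table rather than in code, a single
# query retrieves the ensemble of pipeline inputs, and rules can be
# inserted, updated, or deleted without modifying software.
ensemble = db.execute("""
    SELECT i.id FROM images AS i
    JOIN ensemble_rules AS r ON i.channel = r.channel
    WHERE r.rule = 'irac1_cal'
    ORDER BY i.id
""").fetchall()
```

    The same query shape is what a stored procedure would precompile, which is where the reported speedup comes from.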

  18. Bridging the integration gap between imaging and information systems: a uniform data concept for content-based image retrieval in computer-aided diagnosis.

    PubMed

    Welter, Petra; Riesmeier, Jörg; Fischer, Benedikt; Grouls, Christoph; Kuhl, Christiane; Deserno, Thomas M

    2011-01-01

    It is widely accepted that content-based image retrieval (CBIR) can be extremely useful for computer-aided diagnosis (CAD). However, CBIR has not been established in clinical practice yet. One widely overlooked integration gap is the lack of a unified data concept for CBIR-based CAD results and reporting. Picture archiving and communication systems and the workflow of radiologists must be considered for successful data integration to be achieved. We suggest that CBIR systems applied to CAD should integrate their results into the picture archiving and communication systems environment as Digital Imaging and Communications in Medicine (DICOM) structured reporting documents. A sample DICOM structured reporting template adaptable to CBIR and an appropriate integration scheme are presented. The proposed CBIR data concept may foster the promulgation of CBIR systems in clinical environments and, thereby, improve the diagnostic process.

  19. Bridging the integration gap between imaging and information systems: a uniform data concept for content-based image retrieval in computer-aided diagnosis

    PubMed Central

    Riesmeier, Jörg; Fischer, Benedikt; Grouls, Christoph; Kuhl, Christiane; Deserno (né Lehmann), Thomas M

    2011-01-01

    It is widely accepted that content-based image retrieval (CBIR) can be extremely useful for computer-aided diagnosis (CAD). However, CBIR has not been established in clinical practice yet. One widely overlooked integration gap is the lack of a unified data concept for CBIR-based CAD results and reporting. Picture archiving and communication systems and the workflow of radiologists must be considered for successful data integration to be achieved. We suggest that CBIR systems applied to CAD should integrate their results into the picture archiving and communication systems environment as Digital Imaging and Communications in Medicine (DICOM) structured reporting documents. A sample DICOM structured reporting template adaptable to CBIR and an appropriate integration scheme are presented. The proposed CBIR data concept may foster the promulgation of CBIR systems in clinical environments and, thereby, improve the diagnostic process. PMID:21672913

  20. [A new concept for integration of image databanks into a comprehensive patient documentation].

    PubMed

    Schöll, E; Holm, J; Eggli, S

    2001-05-01

    Image processing and archiving are of increasing importance in the practice of modern medicine. Particularly due to the introduction of computer-based investigation methods, physicians are dealing with a wide variety of analogue and digital picture archives. On the other hand, clinical information is stored in various text-based information systems without integration of image components. The link between such traditional medical databases and picture archives is a prerequisite for efficient data management as well as for continuous quality control and medical education. At the Department of Orthopedic Surgery, University of Berne, a software program was developed to create a complete multimedia electronic patient record. The client-server system contains all patients' data, questionnaire-based quality control, and a digital picture archive. Different interfaces guarantee the integration into the hospital's data network. This article describes our experiences in the development and introduction of a comprehensive image archiving system at a large orthopedic center.

  1. Pigeons (Columba livia) as Trainable Observers of Pathology and Radiology Breast Cancer Images

    PubMed Central

    Levenson, Richard M.; Krupinski, Elizabeth A.; Navarro, Victor M.; Wasserman, Edward A.

    2015-01-01

    Pathologists and radiologists spend years acquiring and refining their medically essential visual skills, so it is of considerable interest to understand how this process actually unfolds and what image features and properties are critical for accurate diagnostic performance. Key insights into human behavioral tasks can often be obtained by using appropriate animal models. We report here that pigeons (Columba livia)—which share many visual system properties with humans—can serve as promising surrogate observers of medical images, a capability not previously documented. The birds proved to have a remarkable ability to distinguish benign from malignant human breast histopathology after training with differential food reinforcement; even more importantly, the pigeons were able to generalize what they had learned when confronted with novel image sets. The birds’ histological accuracy, like that of humans, was modestly affected by the presence or absence of color as well as by degrees of image compression, but these impacts could be ameliorated with further training. Turning to radiology, the birds proved to be similarly capable of detecting cancer-relevant microcalcifications on mammogram images. However, when given a different (and for humans quite difficult) task—namely, classification of suspicious mammographic densities (masses)—the pigeons proved to be capable only of image memorization and were unable to successfully generalize when shown novel examples. The birds’ successes and difficulties suggest that pigeons are well-suited to help us better understand human medical image perception, and may also prove useful in performance assessment and development of medical imaging hardware, image processing, and image analysis tools. PMID:26581091

  2. Pigeons (Columba livia) as Trainable Observers of Pathology and Radiology Breast Cancer Images.

    PubMed

    Levenson, Richard M; Krupinski, Elizabeth A; Navarro, Victor M; Wasserman, Edward A

    2015-01-01

    Pathologists and radiologists spend years acquiring and refining their medically essential visual skills, so it is of considerable interest to understand how this process actually unfolds and what image features and properties are critical for accurate diagnostic performance. Key insights into human behavioral tasks can often be obtained by using appropriate animal models. We report here that pigeons (Columba livia), which share many visual system properties with humans, can serve as promising surrogate observers of medical images, a capability not previously documented. The birds proved to have a remarkable ability to distinguish benign from malignant human breast histopathology after training with differential food reinforcement; even more importantly, the pigeons were able to generalize what they had learned when confronted with novel image sets. The birds' histological accuracy, like that of humans, was modestly affected by the presence or absence of color as well as by degrees of image compression, but these impacts could be ameliorated with further training. Turning to radiology, the birds proved to be similarly capable of detecting cancer-relevant microcalcifications on mammogram images. However, when given a different (and for humans quite difficult) task, namely, classification of suspicious mammographic densities (masses), the pigeons proved to be capable only of image memorization and were unable to successfully generalize when shown novel examples. The birds' successes and difficulties suggest that pigeons are well-suited to help us better understand human medical image perception, and may also prove useful in performance assessment and development of medical imaging hardware, image processing, and image analysis tools.

  3. Use of image registration and fusion algorithms and techniques in radiotherapy: Report of the AAPM Radiation Therapy Committee Task Group No. 132.

    PubMed

    Brock, Kristy K; Mutic, Sasa; McNutt, Todd R; Li, Hua; Kessler, Marc L

    2017-07-01

    Image registration and fusion algorithms exist in almost every software system that creates or uses images in radiotherapy. Most treatment planning systems support some form of image registration and fusion to allow the use of multimodality and time-series image data and even anatomical atlases to assist in target volume and normal tissue delineation. Treatment delivery systems perform registration and fusion between the planning images and the in-room images acquired during the treatment to assist patient positioning. Advanced applications are beginning to support daily dose assessment and enable adaptive radiotherapy using image registration and fusion to propagate contours and accumulate dose between image data taken over the course of therapy to provide up-to-date estimates of anatomical changes and delivered dose. This information aids in the detection of anatomical and functional changes that might elicit changes in the treatment plan or prescription. As the output of the image registration process is always used as the input of another process for planning or delivery, it is important to understand and communicate the uncertainty associated with the software in general and the result of a specific registration. Unfortunately, there is no standard mathematical formalism to perform this for real-world situations where noise, distortion, and complex anatomical variations can occur. Validation of the software systems' performance is also complicated by the lack of documentation available from commercial systems, leading to use of these systems in an undesirable 'black-box' fashion. In view of this situation and the central role that image registration and fusion play in treatment planning and delivery, the Therapy Physics Committee of the American Association of Physicists in Medicine commissioned Task Group 132 to review current approaches and solutions for image registration (both rigid and deformable) in radiotherapy and to provide recommendations for quality assurance and quality control of these clinical processes. © 2017 American Association of Physicists in Medicine.
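
    One common per-case QA number for communicating registration uncertainty, generic rather than specific to TG-132, is the target registration error over corresponding anatomical landmarks, sketched here:

```python
import numpy as np

def target_registration_error(fixed_pts, mapped_pts):
    """Mean 3D distance between landmarks in the fixed image and the same
    landmarks mapped through the registration; a simple per-registration
    QA metric in the spirit of quantifying registration uncertainty."""
    fixed = np.asarray(fixed_pts, dtype=float)
    mapped = np.asarray(mapped_pts, dtype=float)
    return float(np.linalg.norm(fixed - mapped, axis=1).mean())

# A 2 mm systematic offset along x shows up directly as a 2 mm error.
tre = target_registration_error([[0, 0, 0], [10, 5, 2]],
                                [[2, 0, 0], [12, 5, 2]])
```

    Reporting such a number alongside the registration result is one way to avoid the 'black-box' use the report warns against.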

  4. An automated workflow for parallel processing of large multiview SPIM recordings

    PubMed Central

    Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel

    2016-01-01

    Summary: Selective Plane Illumination Microscopy (SPIM) makes it possible to image developing organisms in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data require extensive interactive processing via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated and the individual time points can be processed independently, which lends itself to trivial parallelization on a high performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multiillumination time-lapse SPIM data on a single workstation or in parallel on an HPC cluster. The pipeline relies on snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. Availability and implementation: The code is distributed free and open source under the MIT license http://opensource.org/licenses/MIT. The source code can be downloaded from github: https://github.com/mpicbg-scicomp/snakemake-workflows. Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction. Contact: schmied@mpi-cbg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26628585

  5. An automated workflow for parallel processing of large multiview SPIM recordings.

    PubMed

    Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel

    2016-04-01

    Selective Plane Illumination Microscopy (SPIM) makes it possible to image developing organisms in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data require extensive interactive processing via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated and the individual time points can be processed independently, which lends itself to trivial parallelization on a high performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multiillumination time-lapse SPIM data on a single workstation or in parallel on an HPC cluster. The pipeline relies on snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. The code is distributed free and open source under the MIT license http://opensource.org/licenses/MIT. The source code can be downloaded from github: https://github.com/mpicbg-scicomp/snakemake-workflows. Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction. Contact: schmied@mpi-cbg.de. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
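
    The key property the workflow exploits, independence of the time points, can be illustrated in a few lines of Python; the processing function below is a placeholder for the registration/fusion jobs that snakemake would actually dispatch:

```python
from concurrent.futures import ThreadPoolExecutor

def process_timepoint(t):
    """Placeholder for one time point's registration/fusion step; in the
    real workflow snakemake schedules the corresponding Fiji jobs."""
    return f"tp{t:03d}.fused"

# Each time point depends only on its own raw data, so the per-timepoint
# steps parallelize trivially on a workstation pool or an HPC cluster.
with ThreadPoolExecutor(max_workers=4) as pool:
    fused = list(pool.map(process_timepoint, range(8)))
```

    Snakemake adds on top of this the dependency resolution between consecutive steps (registration before fusion, fusion before deconvolution, and so on).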

  6. From Visual Exploration to Storytelling and Back Again.

    PubMed

    Gratzl, S; Lex, A; Gehlenborg, N; Cosgrove, N; Streit, M

    2016-06-01

    The primary goal of visual data exploration tools is to enable the discovery of new insights. To justify and reproduce insights, the discovery process needs to be documented and communicated. A common approach to documenting and presenting findings is to capture visualizations as images or videos. Images, however, are insufficient for telling the story of a visual discovery, as they lack full provenance information and context. Videos are difficult to produce and edit, particularly due to the non-linear nature of the exploratory process. Most importantly, however, neither approach provides the opportunity to return to any point in the exploration in order to review the state of the visualization in detail or to conduct additional analyses. In this paper we present CLUE (Capture, Label, Understand, Explain), a model that tightly integrates data exploration and presentation of discoveries. Based on provenance data captured during the exploration process, users can extract key steps, add annotations, and author "Vistories", visual stories based on the history of the exploration. These Vistories can be shared for others to view, but also to retrace and extend the original analysis. We discuss how the CLUE approach can be integrated into visualization tools and provide a prototype implementation. Finally, we demonstrate the general applicability of the model in two usage scenarios: a Gapminder-inspired visualization to explore public health data and an example from molecular biology that illustrates how Vistories could be used in scientific journals. (see Figure 1 for visual abstract).

  7. From Visual Exploration to Storytelling and Back Again

    PubMed Central

    Gratzl, S.; Lex, A.; Gehlenborg, N.; Cosgrove, N.; Streit, M.

    2016-01-01

    The primary goal of visual data exploration tools is to enable the discovery of new insights. To justify and reproduce insights, the discovery process needs to be documented and communicated. A common approach to documenting and presenting findings is to capture visualizations as images or videos. Images, however, are insufficient for telling the story of a visual discovery, as they lack full provenance information and context. Videos are difficult to produce and edit, particularly due to the non-linear nature of the exploratory process. Most importantly, however, neither approach provides the opportunity to return to any point in the exploration in order to review the state of the visualization in detail or to conduct additional analyses. In this paper we present CLUE (Capture, Label, Understand, Explain), a model that tightly integrates data exploration and presentation of discoveries. Based on provenance data captured during the exploration process, users can extract key steps, add annotations, and author “Vistories”, visual stories based on the history of the exploration. These Vistories can be shared for others to view, but also to retrace and extend the original analysis. We discuss how the CLUE approach can be integrated into visualization tools and provide a prototype implementation. Finally, we demonstrate the general applicability of the model in two usage scenarios: a Gapminder-inspired visualization to explore public health data and an example from molecular biology that illustrates how Vistories could be used in scientific journals. (see Figure 1 for visual abstract) PMID:27942091
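
    A minimal provenance log in the spirit of CLUE can be sketched as follows; the class and field names are invented for illustration. Every action is captured as a state, any state can be revisited, and a "Vistory" is just an annotated, ordered selection of captured states.

```python
class Provenance:
    """Toy provenance log: capture exploration states, revisit any of them."""

    def __init__(self):
        self.states = []

    def capture(self, **state):
        self.states.append(state)
        return len(self.states) - 1      # id referenced from a story

    def jump(self, state_id):
        return self.states[state_id]     # retrace any point, then extend

log = Provenance()
a = log.capture(view="scatter", filter="year > 2000")
b = log.capture(view="scatter", filter="year > 2000", label="outlier run")

# A "Vistory" pairs captured state ids with the author's annotations.
vistory = [(a, "initial filter"), (b, "annotated finding")]
restored = log.jump(a)
```

    Because the states are kept rather than rendered to an image or video, a reader of the story can return to any step and continue the analysis, which is the capability the abstract says images and videos lack.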

  8. BOREAS RSS-14 Level-2 GOES-7 Shortwave and Longwave Radiation Images

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Gu, Jiujing; Smith, Eric A.

    2000-01-01

    The BOREAS RSS-14 team collected and processed several GOES-7 and GOES-8 image data sets that covered the BOREAS study region. This data set contains images of shortwave and longwave radiation at the surface and top of the atmosphere derived from collected GOES-7 data. The data cover the time period of 05-Feb-1994 to 20-Sep-1994. The images missing from the temporal series were zero-filled to create a consistent sequence of files. The data are stored in binary image format files. Due to the large size of the images, the level-1a GOES-7 data are not contained on the BOREAS CD-ROM set. An inventory listing file is supplied on the CD-ROM to inform users of what data were collected. The level-1a GOES-7 image data are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). See sections 15 and 16 for more information. The data files are available on a CD-ROM (see document number 20010000884).

  9. Choosing a Scanner: Points To Consider before Buying a Scanner.

    ERIC Educational Resources Information Center

    Raby, Chris

    1998-01-01

    Outlines ten factors to consider before buying a scanner: size of document; type of document; color; speed and volume; resolution; image enhancement; image compression; optical character recognition; scanning subsystem; and the option to use a commercial bureau service. The importance of careful analysis of requirements is emphasized. (AEF)

  10. Illinois Occupational Skill Standards: Imaging/Pre-Press Cluster.

    ERIC Educational Resources Information Center

    Illinois Occupational Skill Standards and Credentialing Council, Carbondale.

    This document, which is intended as a guide for work force preparation program providers, details the Illinois occupational skill standards for programs preparing students for employment in occupations in the imaging/pre-press cluster. The document begins with a brief overview of the Illinois perspective on occupational skill standards and…

  11. iPhone 4s and iPhone 5s Imaging of the Eye.

    PubMed

    Jalil, Maaz; Ferenczy, Sandor R; Shields, Carol L

    2017-01-01

    To evaluate the technical feasibility of a consumer-grade cellular iPhone camera as an ocular imaging device compared to existing ophthalmic imaging equipment for documentation purposes. A comparison of iPhone 4s and 5s images was made with external facial images (macrophotography) using Nikon cameras, slit-lamp images (microphotography) using a Zeiss photo slit-lamp camera, and fundus images (fundus photography) using RetCam II. In an analysis of six consecutive patients with ophthalmic conditions, both iPhones achieved documentation of external findings (macrophotography) using the standard camera modality, tap to focus, and built-in flash. Both iPhones achieved documentation of anterior segment findings (microphotography) during slit-lamp examination through the oculars. Both iPhones achieved fundus imaging using the standard video modality with continuous iPhone illumination through an ophthalmic lens. In comparison to standard ophthalmic cameras, macrophotography and microphotography were excellent. In comparison to RetCam fundus photography, iPhone fundus photography revealed a smaller field and was technically more difficult to obtain, but the quality was nearly similar to RetCam. iPhone versions 4s and 5s can provide excellent ophthalmic macrophotography and microphotography and adequate fundus photography. We believe that iPhone imaging could be most useful in settings where expensive, complicated, and cumbersome imaging equipment is unavailable.

  12. Analysis of line structure in handwritten documents using the Hough transform

    NASA Astrophysics Data System (ADS)

    Ball, Gregory R.; Kasiviswanathan, Harish; Srihari, Sargur N.; Narayanan, Aswin

    2010-01-01

    In the analysis of handwriting in documents, a central task is that of determining the line structure of the text, e.g., the number of text lines, the location of their starting and end points, line width, etc. While simple methods can handle ideal images, real-world documents have complexities such as overlapping line structure, variable line spacing, line skew, document skew, and noisy or degraded images. This paper explores the application of the Hough transform method to handwritten documents with the goal of automatically determining global document line structure in a top-down manner, which can then be used in conjunction with a bottom-up method such as connected component analysis. The performance is significantly better than that of other top-down methods, such as the projection profile method. In addition, we evaluate the performance of skew analysis by the Hough transform on handwritten documents.
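
    A compact, NumPy-only version of Hough voting shows how near-collinear foreground pixels concentrate votes in a single (theta, rho) cell; production systems add edge detection, smarter binning, and multi-peak handling on top of this:

```python
import numpy as np

def hough_peak(points, n_theta=180, n_rho=200):
    """Minimal Hough transform: vote each (x, y) foreground pixel into a
    (theta, rho) accumulator and return the dominant line's parameters."""
    pts = np.asarray(points, dtype=float)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.abs(pts).max() * np.sqrt(2) + 1
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in pts:
        rho = x * np.cos(thetas) + y * np.sin(thetas)   # one sinusoid per pixel
        bins = ((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[np.arange(n_theta), bins] += 1
    t, r = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[t], (r / (n_rho - 1)) * 2 * rho_max - rho_max

# Pixels along a horizontal text line y = 5: the peak lands near
# theta = 90 degrees with rho close to 5.
theta, rho = hough_peak([(x, 5) for x in range(30)])
```

    For text lines, each accumulator peak corresponds to one line; its theta gives the skew, and the peak count gives the number of lines, which is the global structure the paper recovers top-down.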

  13. Text-image alignment for historical handwritten documents

    NASA Astrophysics Data System (ADS)

    Zinger, S.; Nerbonne, J.; Schomaker, L.

    2009-01-01

    We describe our work on text-image alignment in the context of building a historical document retrieval system. We aim at aligning images of words in handwritten lines with their text transcriptions. The images of handwritten lines are automatically segmented from the scanned pages of historical documents and then manually transcribed. To train automatic routines to detect words in an image of handwritten text, we need a training set: images of words with their transcriptions. We present our results on aligning words from the images of handwritten lines and their corresponding text transcriptions. Alignment based on the longest spaces between portions of handwriting is a baseline. We then show that relative lengths, i.e. proportions of words in their lines, can be used to improve the alignment results considerably. To take into account the relative word length, we define the expressions for the cost function that has to be minimized for aligning text words with their images. We apply right-to-left alignment as well as alignment based on exhaustive search. The quality assessment of these alignments shows correct results for 69% of words from 100 lines, or 90% of partially correct and correct alignments combined.
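
    The relative-length cue can be sketched as a simple proportional assignment of horizontal spans; the paper's cost-function minimization and exhaustive search refine this kind of initial estimate rather than use it directly:

```python
def align_by_length(words, line_width):
    """Assign each transcribed word a horizontal span proportional to its
    share of the line's total character count (the relative-length cue)."""
    total = sum(len(w) for w in words)
    spans, x = [], 0.0
    for w in words:
        width = line_width * len(w) / total
        spans.append((w, round(x), round(x + width)))
        x += width
    return spans

spans = align_by_length(["old", "handwritten", "line"], 180)
# "old" gets 3/18 of the 180-pixel line, "handwritten" 11/18, "line" 4/18
```

    Broken and touching characters shift the true word boundaries away from these proportional estimates, which is why a search over candidate boundaries is still needed.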

  14. Updated MDRIZTAB Parameters for ACS/WFC

    NASA Astrophysics Data System (ADS)

    Hoffman, S. L.; Avila, R. J.

    2017-03-01

    The Mikulski Archive for Space Telescopes (MAST) pipeline performs geometric distortion corrections, associated image combinations, and cosmic ray rejections with AstroDrizzle. The MDRIZTAB reference table contains a list of relevant parameters that controls this program. This document details our photometric analysis of Advanced Camera for Surveys Wide Field Channel (ACS/WFC) data processed by AstroDrizzle. Based on this analysis, we update the MDRIZTAB table to improve the quality of the drizzled products delivered by MAST.

  15. Instant replay.

    PubMed

    Rosenthal, David I

    2013-06-01

    With widespread adoption of electronic health records (EHRs) and electronic clinical documentation, health care organizations now have greater facility to review clinical data and evaluate the efficacy of quality improvement efforts. Unfortunately, I believe there is a fundamental gap between actual health care delivery and what we document in current EHR systems. This process of capturing the patient encounter, which I'll refer to as transcription, is prone to significant data loss due to inadequate methods of data capture, multiple points of view, and bias and subjectivity in the transcriptional process. Our current EHR, text-based clinical documentation systems are lossy abstractions: one-sided accounts of what takes place between patients and providers. Our clinical notes contain the breadcrumbs of relationships, conversations, physical exams, and procedures but often lack the ability to capture the form, the emotions, the images, the nonverbal communication, and the actual narrative of interactions between human beings. I believe that a video record, in conjunction with objective transcriptional services and other forms of data capture, may provide a closer approximation to the truth of health care delivery and may be a valuable tool for healthcare improvement. Copyright © 2013 Elsevier Inc. All rights reserved.

  16. Electronic Still Camera Project on STS-48

    NASA Technical Reports Server (NTRS)

    1991-01-01

    On behalf of NASA, the Office of Commercial Programs (OCP) has signed a Technical Exchange Agreement (TEA) with Autometric, Inc. (Autometric) of Alexandria, Virginia. The purpose of this agreement is to evaluate and analyze a high-resolution Electronic Still Camera (ESC) for potential commercial applications. During the mission, Autometric will provide unique photo analysis and hard-copy production. Once the mission is complete, Autometric will furnish NASA with an analysis of the ESC's capabilities. Electronic still photography is a developing technology providing the means by which a hand-held camera electronically captures and produces a digital image with resolution approaching film quality. The digital image, stored on removable hard disks or small optical disks, can be converted to a format suitable for downlink transmission, or it can be enhanced using image processing software. The on-orbit ability to enhance or annotate high-resolution images and then downlink these images in real-time will greatly improve Space Shuttle and Space Station capabilities in Earth observations and on-board photo documentation.

  17. Enhancing Web applications in radiology with Java: estimating MR imaging relaxation times.

    PubMed

    Dagher, A P; Fitzpatrick, M; Flanders, A E; Eng, J

    1998-01-01

    Java is a relatively new programming language that has been used to develop a World Wide Web-based tool for estimating magnetic resonance (MR) imaging relaxation times, thereby demonstrating how Java may be used for Web-based radiology applications beyond improving the user interface of teaching files. A standard processing algorithm coded with Java is downloaded along with the hypertext markup language (HTML) document. The user (client) selects the desired pulse sequence and inputs data obtained from a region of interest on the MR images. The algorithm is used to modify selected MR imaging parameters in an equation that models the phenomenon being evaluated. MR imaging relaxation times are estimated, and confidence intervals and a P value expressing the accuracy of the final results are calculated. Design features such as simplicity, object-oriented programming, and security restrictions allow Java to expand the capabilities of HTML by offering a more versatile user interface that includes dynamic annotations and graphics. Java also allows the client to perform more sophisticated information processing and computation than is usually associated with Web applications. Java is likely to become a standard programming option, and the development of stand-alone Java applications may become more common as Java is integrated into future versions of computer operating systems.
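
    The client-side estimation the abstract describes can be illustrated with a log-linear least-squares fit. The sketch below is a minimal Python illustration, not the authors' Java applet, and it assumes a mono-exponential T2 decay model S(TE) = S0·exp(−TE/T2); the function name and inputs are hypothetical.

    ```python
    import math

    def estimate_t2(echo_times, signals):
        """Fit S(TE) = S0 * exp(-TE / T2) by least squares on log(S) vs. TE.

        Returns (t2, s0). Illustration only; a real estimator, like the
        tool described above, would also report confidence intervals.
        """
        ys = [math.log(s) for s in signals]
        n = len(echo_times)
        mx = sum(echo_times) / n
        my = sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in echo_times)
        sxy = sum((x - mx) * (y - my) for x, y in zip(echo_times, ys))
        slope = sxy / sxx
        s0 = math.exp(my - slope * mx)
        return -1.0 / slope, s0
    ```

    For noise-free data the fit recovers the generating parameters exactly; with real region-of-interest data, measurement noise would be propagated into confidence intervals as the paper describes.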

  18. SU-E-CAMPUS-T-01: Automation of the Winston-Lutz Test for Stereotactic Radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Litzenberg, D; Irrer, J; Kessler, M

    Purpose: To optimize clinical efficiency and shorten patient wait time by minimizing the time and effort required to perform the Winston-Lutz test before stereotactic radiosurgery (SRS) through automation of the delivery, analysis, and documentation of results. Methods: The radiation fields of the Winston-Lutz (WL) test were created in a “machine-QA patient” saved in ARIA for use before SRS cases. Images of the BRW target ball placed at mechanical isocenter are captured with the portal imager for each of four, 2cm×2cm, MLC-shaped beams. When the WL plan is delivered and closed, this event is detected by in-house software called EventNet, which automates subsequent processes with the aid of the ARIA web services. Images are automatically retrieved from the ARIA database and analyzed to determine the offset of the target ball from radiation isocenter. The results are posted to a website and a composite summary image of the results is pushed back into ImageBrowser for review and authenticated documentation. Results: The total time to perform the test was reduced from 20-25 minutes to less than 4 minutes. The results were found to be more accurate and consistent than the previous method, which used radiochromic film. The images were also analyzed with DoseLab for comparison. The differences between the film and automated WL results in the X and Y directions and the radius were (−0.17 +/− 0.28) mm, (0.21 +/− 0.20) mm and (−0.14 +/− 0.27) mm, respectively. The differences between the DoseLab and automated WL results were (−0.05 +/− 0.06) mm, (−0.01 +/− 0.02) mm and (0.01 +/− 0.07) mm, respectively. Conclusions: This process reduced patient wait times by 15–20 minutes, making the treatment machine available to treat another patient. Accuracy and consistency of results were improved over the previous method and were comparable to other commercial solutions. Access to the ARIA web services is made possible through an Eclipse co-development agreement with Varian Medical Systems.
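
    The core image reduction in such a test, locating the offset of the ball shadow from the radiation field centre in a portal image, can be sketched in a few lines. This is an illustrative Python reduction, not the in-house EventNet code; the thresholds, pixel size, and grayscale convention (ball attenuates the beam and reads dark, open field reads bright) are assumptions of this sketch.

    ```python
    def centroid(image, keep):
        """Centroid (x, y) of pixels whose value satisfies keep()."""
        pts = [(x, y) for y, row in enumerate(image)
                      for x, v in enumerate(row) if keep(v)]
        n = len(pts)
        return sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n

    def wl_offset_mm(epid, pixel_mm, ball_thresh, field_thresh):
        """Ball-to-field-centre offset in mm for a crop lying fully
        inside the open field: dark pixels are the ball shadow, bright
        pixels the irradiated field."""
        bx, by = centroid(epid, lambda v: v < ball_thresh)
        fx, fy = centroid(epid, lambda v: v > field_thresh)
        return (bx - fx) * pixel_mm, (by - fy) * pixel_mm
    ```

    A production analysis would add sub-pixel edge fitting and per-beam gantry/collimator bookkeeping; the centroid difference above is only the central measurement.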

  19. Various Landscapes and Features on Europa

    NASA Technical Reports Server (NTRS)

    1997-01-01

    These 15 frames show the great variety of surface features on Jupiter's icy moon, Europa, which have been revealed by the Galileo spacecraft Solid State Imaging (CCD) system during its first six orbits around Jupiter from June 1996 to February 1997. North is to the top of each of the images. The features seen on Europa's surface document both internal and external processes shaping the icy crust. Internal processes and the possible presence of liquid water beneath the ice are indicated by features such as 'dark spots', lobe-shaped flow features, 'puddles','mottled terrain', knobs, pits, and the darker areas along ridges and triple bands.

    Europa is subjected to constant tugging from the giant planet, Jupiter, as well as from its neighboring moons, Io and Ganymede. This causes 'tidal' forces that affect Europa's interior and surface. Evidence for such forces includes ridges, fractures, wedge-shaped bands, and areas of 'chaos'. Some of these features result from alternate extension and compression buckling and pulling apart Europa's icy shell.

    Impact craters document external effects on a planet's surface. Although present on Europa, impact craters are relatively scarce compared to the number seen on Ganymede, Callisto, and on the surfaces of most other 'rocky' planets and moons in our solar system. This scarcity of craters suggests that the surface of Europa is very young. 'Maculae' on Europa may be the scars from large impact events.

    These images have resolutions from 27 meters (89 feet) to 7 kilometers (4.3 miles) per picture element (pixel) and were taken by Galileo at ranges of 2,500 kilometers (1,525 miles) to 677,000 kilometers (413,000 miles) from Europa.

    The Jet Propulsion Laboratory, Pasadena, CA manages the Galileo mission for NASA's Office of Space Science, Washington, DC. JPL is an operating division of California Institute of Technology (Caltech).

    This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo

  20. A distributed computing system for magnetic resonance imaging: Java-based processing and binding of XML.

    PubMed

    de Beer, R; Graveron-Demilly, D; Nastase, S; van Ormondt, D

    2004-03-01

    Recently we have developed a Java-based heterogeneous distributed computing system for the field of magnetic resonance imaging (MRI). It is a software system for embedding the various image reconstruction algorithms that we have created for handling MRI data sets with sparse sampling distributions. Since these data sets may result from multi-dimensional MRI measurements our system has to control the storage and manipulation of large amounts of data. In this paper we describe how we have employed the extensible markup language (XML) to realize this data handling in a highly structured way. To that end we have used Java packages, recently released by Sun Microsystems, to process XML documents and to compile pieces of XML code into Java classes. We have effectuated a flexible storage and manipulation approach for all kinds of data within the MRI system, such as data describing and containing multi-dimensional MRI measurements, data configuring image reconstruction methods and data representing and visualizing the various services of the system. We have found that the object-oriented approach, possible with the Java programming environment, combined with the XML technology is a convenient way of describing and handling various data streams in heterogeneous distributed computing systems.
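
    The XML-driven data handling the abstract describes can be illustrated with Python's standard xml.etree module instead of the Sun Java packages the authors used; the element and attribute names below are invented for illustration, not taken from the paper's schemas.

    ```python
    import xml.etree.ElementTree as ET

    # A hypothetical fragment describing one sparse-sampling MRI measurement.
    DOC = """\
    <mri>
      <measurement dims="3" sparse="true">
        <param name="samples">256</param>
      </measurement>
    </mri>"""

    root = ET.fromstring(DOC)
    meas = root.find("measurement")
    dims = int(meas.get("dims"))            # attribute access
    samples = int(meas.find("param").text)  # child element content
    ```

    Binding such documents to typed objects, as the authors did by compiling XML into Java classes, adds validation on top of this plain parsing step.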

  1. A Framework for Integration of Heterogeneous Medical Imaging Networks

    PubMed Central

    Viana-Ferreira, Carlos; Ribeiro, Luís S; Costa, Carlos

    2014-01-01

    Medical imaging is of increasing importance in medical diagnosis and in treatment support. Much of this is due to computers, which have revolutionized medical imaging not only in the acquisition process but also in the way images are visualized, stored, exchanged and managed. Picture Archiving and Communication Systems (PACS) are an example of how medical imaging takes advantage of computers. To solve problems of interoperability between PACS and medical imaging equipment, the Digital Imaging and Communications in Medicine (DICOM) standard was defined and widely implemented in current solutions. More recently, the need to exchange medical data between distinct institutions resulted in the Integrating the Healthcare Enterprise (IHE) initiative, which contains a content profile especially conceived for medical imaging exchange: Cross Enterprise Document Sharing for imaging (XDS-i). Moreover, due to application requirements, many solutions developed private networks to support their services. For instance, some applications support enhanced query and retrieve over DICOM object metadata. This paper proposes an integration framework for medical imaging networks that provides protocol interoperability and data federation services. It is an extensible plugin system that supports standard approaches (DICOM and XDS-I), but is also capable of supporting private protocols. The framework is being used in the Dicoogle Open Source PACS. PMID:25279021

  2. A framework for integration of heterogeneous medical imaging networks.

    PubMed

    Viana-Ferreira, Carlos; Ribeiro, Luís S; Costa, Carlos

    2014-01-01

    Medical imaging is of increasing importance in medical diagnosis and in treatment support. Much of this is due to computers, which have revolutionized medical imaging not only in the acquisition process but also in the way images are visualized, stored, exchanged and managed. Picture Archiving and Communication Systems (PACS) are an example of how medical imaging takes advantage of computers. To solve problems of interoperability between PACS and medical imaging equipment, the Digital Imaging and Communications in Medicine (DICOM) standard was defined and widely implemented in current solutions. More recently, the need to exchange medical data between distinct institutions resulted in the Integrating the Healthcare Enterprise (IHE) initiative, which contains a content profile especially conceived for medical imaging exchange: Cross Enterprise Document Sharing for imaging (XDS-i). Moreover, due to application requirements, many solutions developed private networks to support their services. For instance, some applications support enhanced query and retrieve over DICOM object metadata. This paper proposes an integration framework for medical imaging networks that provides protocol interoperability and data federation services. It is an extensible plugin system that supports standard approaches (DICOM and XDS-I), but is also capable of supporting private protocols. The framework is being used in the Dicoogle Open Source PACS.

  3. Selective document image data compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1998-05-19

    A method of storing information from filled-in form documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. Then a second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel. 10 figs.
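
    The two-threshold step, binarizing the gamma-corrected scan and the filled-edge array separately and then merging them, can be sketched as follows. The thresholds and the grayscale convention (0 = black, 255 = white) are assumptions of this illustration, not values from the patent.

    ```python
    def binarize(gray, threshold):
        """Pixels darker than threshold become black (0); the rest white (255)."""
        return [[0 if v < threshold else 255 for v in row] for row in gray]

    def combine(first, second):
        """A pixel is black in the result if it is black in either image."""
        return [[0 if (a == 0 or b == 0) else 255 for a, b in zip(ra, rb)]
                for ra, rb in zip(first, second)]
    ```

    The combined two-color image would then be smoothed and Huffman-coded, per the method described above.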

  4. Selective document image data compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1998-01-01

    A method of storing information from filled-in form documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. Then a second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel.

  5. Towards Mobile OCR: How To Take a Good Picture of a Document Without Sight.

    PubMed

    Cutter, Michael; Manduchi, Roberto

    The advent of mobile OCR (optical character recognition) applications on regular smartphones holds great promise for enabling blind people to access printed information. Unfortunately, these systems suffer from a problem: in order for OCR output to be meaningful, a well-framed image of the document needs to be taken, something that is difficult to do without sight. This contribution presents an experimental investigation of how blind people position and orient a camera phone while acquiring document images. We developed experimental software to investigate whether verbal guidance aids in the acquisition of OCR-readable images without sight. We report on our participants' feedback and performance before and after assistance from our software.

  6. Towards Mobile OCR: How To Take a Good Picture of a Document Without Sight

    PubMed Central

    Cutter, Michael; Manduchi, Roberto

    2015-01-01

    The advent of mobile OCR (optical character recognition) applications on regular smartphones holds great promise for enabling blind people to access printed information. Unfortunately, these systems suffer from a problem: in order for OCR output to be meaningful, a well-framed image of the document needs to be taken, something that is difficult to do without sight. This contribution presents an experimental investigation of how blind people position and orient a camera phone while acquiring document images. We developed experimental software to investigate whether verbal guidance aids in the acquisition of OCR-readable images without sight. We report on our participants' feedback and performance before and after assistance from our software. PMID:26677461

  7. International Ultraviolet Explorer Final Archive

    NASA Technical Reports Server (NTRS)

    1997-01-01

    CSC processed IUE images through the Final Archive Data Processing System. Raw images were obtained from both NDADS and the IUEGTC optical disk platters for processing on the Alpha cluster, and from the IUEGTC optical disk platters for DECstation processing. Input parameters were obtained from the IUE database. Backup tapes of data to send to VILSPA were routinely made on the Alpha cluster. IPC handled more than 263 requests for priority NEWSIPS processing during the contract. Staff members also answered various questions and requests for information and sent copies of IUE documents to requesters. CSC implemented new processing capabilities into the NEWSIPS processing systems as they became available. In addition, steps were taken to improve efficiency and throughput whenever possible. The node TORTE was reconfigured as the I/O server for Alpha processing in May. The number of Alpha nodes used for the NEWSIPS processing queue was increased to a maximum of six in measured fashion in order to understand the dependence of throughput on the number of nodes and to be able to recognize when a point of diminishing returns was reached. With Project approval, generation of the VD FITS files was dropped in July. This action not only saved processing time but, even more significantly, drastically reduced the archive storage media requirements and the time required to perform the archiving. The throughput of images verified through CDIVS and processed through NEWSIPS for the contract period is summarized below.
The number of images of a given dispersion type and camera that were processed in any given month reflects several factors, including the availability of the required NEWSIPS software system, the availability of the corresponding required calibrations (e.g., the LWR high-dispersion ripple correction and absolute calibration), and the occurrence of reprocessing efforts such as that conducted to incorporate the updated SWP sensitivity-degradation correction in May.

  8. A Multi-Resolution Approach for an Automated Fusion of Different Low-Cost 3D Sensors

    PubMed Central

    Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner

    2014-01-01

    The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the measuring systems used are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laser scanning system, to achieve low-resolution scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolution David objects are automatically assigned to their corresponding Kinect objects by the use of surface feature histograms and SVM classification. The corresponding objects are fitted using an ICP implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory. PMID:24763255

  9. A multi-resolution approach for an automated fusion of different low-cost 3D sensors.

    PubMed

    Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner

    2014-04-24

    The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the measuring systems used are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laser scanning system, to achieve low-resolution scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolution David objects are automatically assigned to their corresponding Kinect objects by the use of surface feature histograms and SVM classification. The corresponding objects are fitted using an ICP implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory.
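
    The registration step that produces the multi-resolution map can be hinted at with the simplest ICP ingredient: aligning the centroids of two point sets. This is only the translation component; a full ICP, like the implementation the paper uses, also estimates rotation and iterates over nearest-neighbour correspondences. The function is a hypothetical sketch, not the authors' code.

    ```python
    def translation_align(src, dst):
        """Shift the src point set so its centroid coincides with dst's.

        Returns (moved_points, translation). Points are lists of equal
        dimension; no rotation is estimated here.
        """
        dim = len(src[0])
        cs = [sum(p[i] for p in src) / len(src) for i in range(dim)]
        cd = [sum(p[i] for p in dst) / len(dst) for i in range(dim)]
        t = [d - s for s, d in zip(cs, cd)]
        return [[c + ti for c, ti in zip(p, t)] for p in src], t
    ```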

  10. Comparative implementation of Handwritten and Machine written Gurmukhi text utilizing appropriate parameters

    NASA Astrophysics Data System (ADS)

    Kaur, Jaswinder; Jagdev, Gagandeep, Dr.

    2018-01-01

    Optical character recognition is concerned with the recognition of optically processed characters. The recognition is done offline after the writing or printing has been completed, unlike online recognition where the computer has to recognize the characters instantly as they are drawn. The performance of character recognition depends upon the quality of the scanned documents. The preprocessing steps are used for removing low-frequency background noise and normalizing the intensity of individual scanned documents. Several filters are used for reducing certain image details and enabling an easier or faster evaluation. The primary aim of the research work is to recognize handwritten and machine-written characters and differentiate them. The language chosen for the research work is Punjabi (Gurmukhi script), and the tool utilized is MATLAB.
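
    One of the standard preprocessing filters mentioned above is a median filter, which suppresses salt-and-pepper noise in scanned documents before recognition. The sketch below is a pure-Python 3×3 version for illustration, not the paper's MATLAB code.

    ```python
    import statistics

    def median_filter3(gray):
        """3x3 median filter over a grayscale image (list of rows).

        Each interior pixel is replaced by the median of its 3x3
        neighbourhood; border pixels are left unchanged.
        """
        h, w = len(gray), len(gray[0])
        out = [row[:] for row in gray]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                window = [gray[y + dy][x + dx]
                          for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
                out[y][x] = statistics.median(window)
        return out
    ```

    An isolated noise spike surrounded by uniform background is removed outright, which is why median filtering is preferred over mean filtering for impulse noise: it does not smear the spike into neighbouring pixels.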

  11. Is it possible to eliminate patient identification errors in medical imaging?

    PubMed

    Danaher, Luke A; Howells, Joan; Holmes, Penny; Scally, Peter

    2011-08-01

    The aim of this article is to review a system that validates and documents the process of ensuring the correct patient, correct site and side, and correct procedure (commonly referred to as the 3 C's) within medical imaging. A 4-step patient identification and procedure matching process was developed using health care and aviation models. The process was established in medical imaging departments after a successful interventional radiology pilot program. The success of the project was evaluated using compliance audit data, incident reporting data before and after the implementation of the process, and a staff satisfaction survey. There was 95% to 100% verification of site and side and 100% verification of correct patient, procedure, and consent. Correct patient data and side markers were present in 82% to 95% of cases. The number of incidents before and after the implementation of the 3 C's was difficult to assess because of a change in reporting systems and incident underreporting. More incidents are being reported, particularly "near misses." All near misses were related to incorrect patient identification stickers being placed on request forms. The majority of staff members surveyed found the process easy (55.8%), quick (47.7%), relevant (51.7%), and useful (60.9%). Although identification error is difficult to eliminate, practical initiatives can engender significant systems improvement in complex health care environments. Crown Copyright © 2011. Published by Elsevier Inc. All rights reserved.

  12. Comparison of 3d Reconstruction Services and Terrestrial Laser Scanning for Cultural Heritage Documentation

    NASA Astrophysics Data System (ADS)

    Rasztovits, S.; Dorninger, P.

    2013-07-01

    Terrestrial Laser Scanning (TLS) is an established method to reconstruct the geometrical surface of given objects. Current systems allow for fast and efficient determination of 3D models with high accuracy and richness in detail. Alternatively, 3D reconstruction services use images to reconstruct the surface of an object. While the instrumental expenses for laser scanning systems are high, upcoming free software services as well as open source software packages enable the generation of 3D models using digital consumer cameras. In addition, processing TLS data still requires an experienced user, while recent web services operate completely automatically. An indisputable advantage of image-based 3D modeling is its implicit capability for model texturing. However, the achievable accuracy and resolution of the 3D models is lower than those of laser scanning data. Within this contribution, we investigate the results of automated web services for image-based 3D model generation with respect to a TLS reference model. For this, a copper sculpture was acquired using a laser scanner and using image series from different digital cameras. Two different web services, namely Arc3D and Autodesk 123D Catch, were used to process the image data. The geometric accuracy was compared for the entire model and for some highly structured details. The results are presented and interpreted based on difference models. Finally, an economic comparison of the generation of the models is given considering the interactive and processing time costs.

  13. Using drone-mounted cameras for on-site body documentation: 3D mapping and active survey.

    PubMed

    Urbanová, Petra; Jurda, Mikoláš; Vojtíšek, Tomáš; Krajsa, Jan

    2017-12-01

    Recent advances in unmanned aerial technology have substantially lowered the cost associated with aerial imagery. As a result, forensic practitioners are today presented with easy low-cost access to aerial photographs at remote locations. The present paper aims to explore the boundaries within which low-end drone technology can operate as professional crime scene equipment, and to test the prospects of aerial 3D modeling in the forensic context. The study was based on recent forensic cases of falls from height admitted for postmortem examinations. Three mock outdoor forensic scenes featuring a dummy, skeletal remains and artificial blood were constructed at an abandoned quarry and subsequently documented using a commercial DJI Phantom 2 drone equipped with a GoPro HERO 4 digital camera. In two of the experiments, the purpose was to conduct aerial and ground-view photography and to process the acquired images with a photogrammetry protocol (using Agisoft PhotoScan® 1.2.6) in order to generate 3D textured models. The third experiment tested the employment of drone-based video recordings in mapping scattered body parts. The results show that drone-based aerial photography is capable of producing high-quality images, which are appropriate for building accurate large-scale 3D models of a forensic scene. If, however, high-resolution top-down three-dimensional scene documentation featuring details on a corpse or other physical evidence is required, we recommend building a multi-resolution model by processing aerial and ground-view imagery separately. The video survey showed that using an overview recording for seeking out scattered body parts was efficient. In contrast, less easy-to-spot evidence, such as bloodstains, was detected only after having been marked properly with crime scene equipment. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. [Development of an ophthalmological clinical information system for inpatient eye clinics].

    PubMed

    Kortüm, K U; Müller, M; Babenko, A; Kampik, A; Kreutzer, T C

    2015-12-01

    In times of increased digitalization in healthcare, departments of ophthalmology are faced with the challenge of introducing electronic health records (EHR); however, specialized software for ophthalmology is not available with most major EHR systems. The aim of this project was to create specific ophthalmological user interfaces for large inpatient eye care providers within a hospital-wide EHR, and additionally to integrate ophthalmic imaging systems, scheduling and surgical documentation. The existing EHR i.s.h.med (Siemens, Germany) was modified using the Advanced Business Application Programming (ABAP) language to create specific ophthalmological user interfaces that reproduce and, moreover, optimize the clinical workflow. A user interface for documentation of ambulatory patients with eight tabs was designed. From June 2013 to October 2014 a total of 61,551 patient contacts were documented. For surgical documentation a separate user interface was set up, as were user interfaces for digital clinical orders, registration documentation and scheduling of operations. A direct integration of ophthalmic imaging modalities could be established. An ophthalmologist-orientated EHR for outpatient and surgical documentation in inpatient clinics was created and successfully implemented. By incorporating imaging procedures, the foundation for future smart/big data analyses was laid.

  15. Data visualization methods, data visualization devices, data visualization apparatuses, and articles of manufacture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, Alan E.; Crow, Vernon L.; Payne, Deborah A.

    Data visualization methods, data visualization devices, data visualization apparatuses, and articles of manufacture are described according to some aspects. In one aspect, a data visualization method includes accessing a plurality of initial documents at a first moment in time, first processing the initial documents providing processed initial documents, first identifying a plurality of first associations of the initial documents using the processed initial documents, generating a first visualization depicting the first associations, accessing a plurality of additional documents at a second moment in time after the first moment in time, second processing the additional documents providing processed additional documents, second identifying a plurality of second associations of the additional documents and at least some of the initial documents, wherein the second identifying comprises identifying using the processed initial documents and the processed additional documents, and generating a second visualization depicting the second associations.
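
    The two-stage flow in the claim, associating an initial document set and then re-associating after new documents arrive, can be mimicked with a deliberately crude association criterion (shared tokens). The patent does not specify how associations are computed; this sketch is illustrative only.

    ```python
    def associations(docs):
        """Pairs of document ids that share at least one token.

        docs maps a document id to its text. A crude stand-in for the
        unspecified association step in the method described above.
        """
        tokens = {doc_id: set(text.lower().split())
                  for doc_id, text in docs.items()}
        ids = sorted(tokens)
        return [(a, b) for i, a in enumerate(ids) for b in ids[i + 1:]
                if tokens[a] & tokens[b]]
    ```

    Running it once on the initial documents and again after merging in the additional documents reproduces the first-visualization / second-visualization structure of the claim.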

  16. Teaching resources for dermatology on the WWW--quiz system and dynamic lecture scripts using a HTTP-database demon.

    PubMed Central

    Bittorf, A.; Diepgen, T. L.

    1996-01-01

    The World Wide Web (WWW) is becoming the major way of acquiring information in all scientific disciplines as well as in business. It is well suited to fast distribution and exchange of up-to-date teaching resources. However, to date most teaching applications on the Web do not use its full power by integrating interactive components. We have set up a computer-based training (CBT) framework for dermatology, which consists of dynamic lecture scripts, case reports, an atlas and a quiz system. All these components rely heavily on an underlying image database that permits the creation of dynamic documents. We used a demon process that keeps the database open and can be accessed using HTTP, to achieve better performance and avoid the overhead involved in starting CGI processes. The result of our evaluation was very encouraging. PMID:8947625
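
    The design choice above, a persistent HTTP demon holding the database open instead of spawning a CGI process per request, can be sketched with Python's standard http.server. This is a minimal stand-in, not the authors' system; the in-memory record and URL path are hypothetical.

    ```python
    import json
    import threading
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # The "database" stays open in the demon process, so each request
    # avoids the cost of launching a new CGI process.
    IMAGE_DB = {"/image/1": {"diagnosis": "eczema"}}

    class DemonHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            record = IMAGE_DB.get(self.path)
            body = json.dumps(record or {}).encode()
            self.send_response(200 if record else 404)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):  # silence per-request logging
            pass

    server = HTTPServer(("127.0.0.1", 0), DemonHandler)  # ephemeral port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    port = server.server_address[1]
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/image/1") as resp:
        data = json.loads(resp.read())
    server.shutdown()
    ```

    Because the handler reads from a dictionary already resident in memory, each request costs a lookup rather than a process start, which is the performance argument the abstract makes for the demon.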

  17. The Direct Lighting Computation in Global Illumination Methods

    NASA Astrophysics Data System (ADS)

    Wang, Changyaw Allen

    1994-01-01

    Creating realistic images is a computationally expensive process, but it is very important for applications such as interior design, product design, education, virtual reality, and movie special effects. To generate realistic images, state-of-the-art rendering techniques are employed to simulate global illumination, which accounts for the interreflection of light among objects. In this document, we formalize the global illumination problem as an eight-dimensional integral and discuss various methods that can accelerate the process of approximating this integral. We focus on the direct lighting computation, which accounts for the light reaching the viewer from the emitting sources after exactly one reflection, on Monte Carlo sampling methods, and on light source simplification. Results include a new sample generation method, a framework for predicting the total number of samples used in a solution, and a generalized Monte Carlo approach for computing the direct lighting from an environment, which for the first time makes ray tracing feasible for highly complex environments.
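
    The direct-lighting term can be illustrated with a minimal Monte Carlo estimator: sample points uniformly on an area light and average the geometric contribution. This is a generic sketch (unshadowed, diffuse receiver, axis-aligned square light facing down), not the dissertation's own sample-generation method:

```python
import math
import random

def direct_lighting(point, normal, light_corner, edge, emit, n_samples=2000):
    """Monte Carlo estimate of unshadowed direct irradiance at `point`
    from a square area light lying in the z = light_corner[2] plane
    and facing straight down (light normal = (0, 0, -1))."""
    area = edge * edge
    total = 0.0
    for _ in range(n_samples):
        # Uniform sample on the light's surface.
        lx = light_corner[0] + random.random() * edge
        ly = light_corner[1] + random.random() * edge
        lz = light_corner[2]
        dx, dy, dz = lx - point[0], ly - point[1], lz - point[2]
        r2 = dx * dx + dy * dy + dz * dz
        r = math.sqrt(r2)
        # Cosine at the receiving surface and at the light
        # (dz/r = dot of the light normal with the light-to-point direction).
        cos_surf = max(0.0, (normal[0]*dx + normal[1]*dy + normal[2]*dz) / r)
        cos_light = max(0.0, dz / r)
        total += emit * cos_surf * cos_light / r2
    # Average sample value times the area of the sampled domain.
    return area * total / n_samples
```

    For a very small light directly overhead this converges to the familiar point-light inverse-square result, which makes it easy to sanity-check.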

  18. The Impact Of Optical Storage Technology On Image Processing Systems

    NASA Astrophysics Data System (ADS)

    Garges, Daniel T.; Durbin, Gerald T.

    1984-09-01

    The recent announcement of commercially available high density optical storage devices will have a profound impact on the information processing industry. Just as the initial introduction of random access storage created entirely new processing strategies, optical technology will allow dramatic changes in the storage, retrieval, and dissemination of engineering drawings and other pictorial or text-based documents. Storage Technology Corporation has assumed a leading role in this arena with the introduction of the 7600 Optical Storage Subsystem, and the formation of StorageTek Integrated Systems, a subsidiary chartered to incorporate this new technology into deliverable total systems. This paper explores the impact of optical storage technology from the perspective of a leading-edge manufacturer and integrator.

  19. Boost OCR accuracy using iVector based system combination approach

    NASA Astrophysics Data System (ADS)

    Peng, Xujun; Cao, Huaigu; Natarajan, Prem

    2015-01-01

    Optical character recognition (OCR) is a challenging task because most existing preprocessing approaches are sensitive to writing style, writing material, noise, and image resolution. Thus, a single recognition system cannot address all factors of real document images. In this paper, we describe an approach to combining diverse recognition systems using iVector-based features, a method recently developed in the field of speaker verification. Prior to system combination, document images are preprocessed and text-line images are extracted with different approaches for each system; an iVector is derived from a high-dimensional supervector of each text line and is used to predict the accuracy of OCR. We merge hypotheses from multiple recognition systems according to the overlap ratio and the predicted OCR score of text-line images. We present evaluation results on an Arabic document database where the proposed method is compared against the single best OCR system using the word error rate (WER) metric.
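
    The merging step, combining text-line hypotheses by bounding-box overlap and predicted score, can be sketched generically. The greedy keep-the-best-scored rule below is an illustrative stand-in for the paper's combination scheme, not its exact algorithm:

```python
def box_overlap(a, b):
    """Intersection-over-union of two (x0, y0, x1, y1) text-line boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    if inter == 0:
        return 0.0
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def combine_systems(hypotheses, min_overlap=0.5):
    """Greedy combination: among text-line hypotheses from different
    recognizers whose boxes overlap, keep the one with the highest
    predicted accuracy score.
    `hypotheses` is a list of (box, text, predicted_score) tuples."""
    remaining = sorted(hypotheses, key=lambda h: -h[2])
    merged = []
    while remaining:
        best = remaining.pop(0)            # highest remaining score
        merged.append(best)
        # Drop every lower-scored hypothesis covering the same line.
        remaining = [h for h in remaining
                     if box_overlap(h[0], best[0]) < min_overlap]
    return merged
```

    In this sketch the predicted score plays the role the iVector-based accuracy predictor plays in the paper.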

  20. Indexing and retrieving DICOM data in disperse and unstructured archives.

    PubMed

    Costa, Carlos; Freitas, Filipe; Pereira, Marco; Silva, Augusto; Oliveira, José L

    2009-01-01

    This paper proposes an indexing and retrieval solution that gathers information from distributed DICOM documents, allowing searches and access to the virtual data repository through a Google-like process. Medical imaging modalities are becoming more powerful and less expensive. The result is a proliferation of equipment acquisition by imaging centers, including small ones. With this dispersion of data, it is not easy to take advantage of all the information that can be retrieved from these studies. Furthermore, many of these small centers do not have requirements large enough to justify the acquisition of a traditional PACS. We propose a peer-to-peer PACS platform that indexes and queries DICOM files over a set of distributed repositories logically viewed as a single federated unit. The solution is based on a public-domain document-indexing engine and extends traditional PACS query and retrieval mechanisms. This proposal deals well with complex searching requirements, from a single desktop environment to distributed scenarios. The solution's performance and robustness were demonstrated in trials. The characteristics of the presented PACS platform make it particularly useful for small institutions, including educational and research groups.
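
    At its core, a Google-like search over DICOM metadata reduces to an inverted index over header text. The sketch below uses plain dictionaries as stand-ins for real DICOM attributes (such as Modality or StudyDescription) and is not the paper's actual indexing engine:

```python
from collections import defaultdict

def build_index(records):
    """Build a simple inverted index over free-text header fields.
    `records` maps a document id to a dict of header fields, a stand-in
    for real DICOM attributes extracted from each file."""
    index = defaultdict(set)
    for doc_id, headers in records.items():
        for value in headers.values():
            for token in str(value).lower().split():
                index[token].add(doc_id)
    return index

def search(index, query):
    """Google-like AND query: return the ids matching every query term."""
    terms = query.lower().split()
    if not terms:
        return set()
    result = set(index.get(terms[0], set()))
    for term in terms[1:]:
        result &= index.get(term, set())
    return result
```

    In a federated deployment each peer would hold such an index for its local repository and the query would be fanned out to all peers, with the unions of their answers presented as one result set.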

  1. Lessons Learned through the Development and Publication of AstroImageJ

    NASA Astrophysics Data System (ADS)

    Collins, Karen

    2018-01-01

    As lead author of the scientific image processing software package AstroImageJ (AIJ), I will discuss the reasoning behind why we decided to release AIJ to the public, and the lessons we learned related to the development, publication, distribution, and support of AIJ. I will also summarize the AIJ code language selection, code documentation and testing approaches, code distribution, update, and support facilities used, and the code citation and licensing decisions. Since AIJ was initially developed as part of my graduate research and was my first scientific open source software publication, many of my experiences and difficulties encountered may parallel those of others new to scientific software publication. Finally, I will discuss the benefits and disadvantages of releasing scientific software that I now recognize after having AIJ in the public domain for more than five years.

  2. A novel method for correction of temporally- and spatially-variant optical distortion in planar particle image velocimetry

    DOE PAGES

    Zha, Kan; Busch, Stephen; Park, Cheolwoong; ...

    2016-06-24

    In-cylinder flow measurements are necessary to gain a fundamental understanding of swirl-supported, light-duty Diesel engine processes for high thermal efficiency and low emissions. Planar particle image velocimetry (PIV) can be used for non-intrusive, in situ measurement of swirl-plane velocity fields through a transparent piston. In order to keep the flow unchanged from all-metal engine operation, the geometry of the transparent piston must adapt to the production-intent metal piston geometry. As a result, a temporally- and spatially-variant optical distortion is introduced to the particle images. To ensure reliable measurement of particle displacements, this work documents a systematic exploration of optical distortion quantification and a hybrid back-projection procedure that combines ray-tracing-based geometric and in situ manual back-projection approaches.

  3. iPhone 4s and iPhone 5s Imaging of the Eye

    PubMed Central

    Jalil, Maaz; Ferenczy, Sandor R.; Shields, Carol L.

    2017-01-01

    Background/Aims To evaluate the technical feasibility of a consumer-grade cellular iPhone camera as an ocular imaging device compared to existing ophthalmic imaging equipment for documentation purposes. Methods A comparison of iPhone 4s and 5s images was made with external facial images (macrophotography) using Nikon cameras, slit-lamp images (microphotography) using a Zeiss photo slit-lamp camera, and fundus images (fundus photography) using RetCam II. Results In an analysis of six consecutive patients with ophthalmic conditions, both iPhones achieved documentation of external findings (macrophotography) using the standard camera modality, tap to focus, and built-in flash. Both iPhones achieved documentation of anterior segment findings (microphotography) during slit-lamp examination through oculars. Both iPhones achieved fundus imaging using the standard video modality with continuous iPhone illumination through an ophthalmic lens. In comparison to standard ophthalmic cameras, macrophotography and microphotography were excellent. In comparison to RetCam fundus photography, iPhone fundus photography revealed a smaller field and was technically more difficult to obtain, but the quality was nearly similar to RetCam. Conclusions iPhone versions 4s and 5s can provide excellent ophthalmic macrophotography and microphotography and adequate fundus photography. We believe that iPhone imaging could be most useful in settings where expensive, complicated, and cumbersome imaging equipment is unavailable. PMID:28275604

  4. MONITORING EROSION OF STONE SURFACES USING TIME-LAPSE AND PTM PHOTOGRAPHY: FIELD STUDY OF A 14TH CENTURY MONASTERY IN YORKSHIRE

    NASA Astrophysics Data System (ADS)

    Doehne, E.; Pinchin, S.

    2009-12-01

    Evaluating stone weathering rates and their relationship to environmental fluctuations is an important challenge in understanding the critical zone and also in efforts to prevent the loss of important cultural heritage in stone, such as monuments, sculpture and archaeological sites. Repeat photography has been widely used to evaluate geological processes such as the retreat of glaciers and the weathering of stone surfaces. However, a fundamental difficulty is that the images are often shot under differing lighting conditions, making the interpretation of stone surface loss particularly challenging. Two developments in photographic documentation show promise for improving the situation. One is the use of digital time-lapse methods to provide more frequent images to correlate stone surface loss with ongoing environmental changes. The other is a relatively new method known as polynomial texture mapping (PTM), which integrates multiple photographs taken at different angles to document more comprehensively the texture of stone surfaces. Using Java-based software, the viewer can control the precise angle of the light source in an interpolated, high-quality image. PTM can produce raking light images from any angle, as well as images with ‘normal’ illumination. We present here results based on several years of macro-photography, time-lapse imaging, and PTM imaging of rapidly eroding stone surfaces at the site of Howden Minster in Yorkshire, UK, which suffers from salt weathering. The images show that surface loss is episodic rather than continuous and in some cases is related to unusual environmental conditions, such as high winds and condensation events. Damage was also found to be synchronous, with surface change (flaking, granular disintegration, and loss of flakes) occurring at the same time in different stone blocks. Crystallization pressure from phase transitions in magnesium sulfate salts appears to be the main cause of the loss of stone surfaces.
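
    PTM works by fitting, for every pixel, a biquadratic polynomial in the projected light-direction components (lu, lv), which is what lets a viewer relight the surface from any angle afterwards. A minimal least-squares version of that per-pixel fit, with illustrative array shapes and the standard PTM basis, might look like this:

```python
import numpy as np

def fit_ptm(intensities, light_dirs):
    """Least-squares fit of the PTM biquadratic model per pixel:
    L(lu, lv) = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5.
    `intensities`: (n_lights, n_pixels) observed brightness values,
    `light_dirs`:  (n_lights, 2) projected light directions (lu, lv)."""
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    basis = np.stack([lu*lu, lv*lv, lu*lv, lu, lv, np.ones_like(lu)], axis=1)
    coeffs, *_ = np.linalg.lstsq(basis, intensities, rcond=None)
    return coeffs  # shape (6, n_pixels)

def relight(coeffs, lu, lv):
    """Evaluate the fitted model under a new, arbitrary light direction."""
    basis = np.array([lu*lu, lv*lv, lu*lv, lu, lv, 1.0])
    return basis @ coeffs
```

    With six coefficients per pixel, any number of raking-light views can be interpolated from a handful of photographs taken under known light directions.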

  5. Efficient Workflows for Curation of Heterogeneous Data Supporting Modeling of U-Nb Alloy Aging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ward, Logan Timothy; Hackenberg, Robert Errol

    These are slides from a presentation summarizing a graduate research associate's summer project. The following topics are covered in these slides: data challenges in materials, aging in U-Nb Alloys, Building an Aging Model, Different Phase Trans. in U-Nb, the Challenge, Storing Materials Data, Example Data Source, Organizing Data: What is a Schema?, What does a "XML Schema" look like?, Our Data Schema: Nice and Simple, Storing Data: Materials Data Curation System (MDCS), Problem with MDCS: Slow Data Entry, Getting Literature into MDCS, Staging Data in Excel Document, Final Result: MDCS Records, Analyzing Image Data, Process for Making TTT Diagram, Bottleneck Number 1: Image Analysis, Fitting a TTP Boundary, Fitting a TTP Curve: Comparable Results, How Does it Compare to Our Data?, Image Analysis Workflow, Curating Hardness Records, Hardness Data: Two Key Decisions, Before Peak Age? - Automation, Interactive Viz, Which Transformation?, Microstructure-Informed Model, Tracking the Entire Process, General Problem with Property Models, Pinyon: Toolkit for Managing Model Creation, Tracking Individual Decisions, Jupyter: Docs and Code in One File, Hardness Analysis Workflow, Workflow for Aging Models, and conclusions.

  6. Magnetospheric Radio Tomography: Observables, Algorithms, and Experimental Analysis

    NASA Technical Reports Server (NTRS)

    Cummer, Steven

    2005-01-01

    This grant supported research towards developing magnetospheric electron density and magnetic field remote sensing techniques via multistatic radio propagation and tomographic image reconstruction. This work was motivated by the need to better develop the basic technique of magnetospheric radio tomography, which holds substantial promise as a technology uniquely capable of imaging magnetic field and electron density in the magnetosphere on large scales with rapid cadence. Such images would provide an unprecedented and needed view into magnetospheric processes. By highlighting the systems-level interconnectedness of different regions, our understanding of space weather processes and ability to predict them would be dramatically enhanced. Three peer-reviewed publications and 5 conference presentations have resulted from this work, which supported 1 PhD student and 1 postdoctoral researcher. One more paper is in progress and will be submitted shortly. Because the main results of this research have been published or are soon to be published in refereed journal articles listed in the reference section of this document, we provide here an overview of the research and accomplishments without describing all of the details that are contained in the articles.

  7. Unified modeling language and design of a case-based retrieval system in medical imaging.

    PubMed

    LeBozec, C; Jaulent, M C; Zapletal, E; Degoulet, P

    1998-01-01

    One goal of artificial intelligence research into case-based reasoning (CBR) systems is to develop approaches for designing useful and practical interactive case-based environments. Explaining each step of the design of the case base and of the retrieval process is critical for the application of case-based systems to the real world. We describe herein our approach to the design of IDEM--Images and Diagnosis from Examples in Medicine--a medical image case-based retrieval system for pathologists. Our approach is based on the expressiveness of an object-oriented modeling language standard: the Unified Modeling Language (UML). We created a set of diagrams in UML notation illustrating the steps of the CBR methodology we used. The key aspect of this approach was selecting the relevant objects of the system according to user requirements and enabling the visualization of cases and of the components of the case retrieval process. Further evaluation of the expressiveness of the design document is required, but UML seems to be a promising formalism that improves communication between developers and users.

  8. 10 CFR 2.1013 - Use of the electronic docket during the proceeding.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... bi-tonal documents. (v) Electronic submissions must be generated in the appropriate PDF output format by using: (A) PDF—Formatted Text and Graphics for textual documents converted from native applications; (B) PDF—Searchable Image (Exact) for textual documents converted from scanned documents; and (C...

  9. Content Recognition and Context Modeling for Document Analysis and Retrieval

    ERIC Educational Resources Information Center

    Zhu, Guangyu

    2009-01-01

    The nature and scope of available documents are changing significantly in many areas of document analysis and retrieval as complex, heterogeneous collections become accessible to virtually everyone via the web. The increasing level of diversity presents a great challenge for document image content categorization, indexing, and retrieval.…

  10. Database Technology Activities and Assessment for Defense Modeling and Simulation Office (DMSO) (August 1991-November 1992). A Documented Briefing

    DTIC Science & Technology

    1994-01-01

    databases and identifying new data entities, data elements, and relationships. - Standard data naming conventions, schema, and definition processes...management system. The use of such a tool could offer: (1) structured support for representation of objects and their relationships to each other (and...their relationships to related multimedia objects such as an engineering drawing of the tank object or a satellite image that contains the installation

  11. Noninvasive Characterization of Indeterminate Pulmonary Nodules Detected on Chest High-Resolution Computed Tomography

    DTIC Science & Technology

    2017-10-01

    author(s) and should not be construed as an official Department of the Army position, policy or decision unless so designated by other documentation...the LTRC database comprised of nodules with a very high pretest probability of malignancy make these results encouraging as we are in the process of...working with the investigators to design the study, establish and support access to the clinical data and images of NLST and DECAMP, develop database

  12. Handwriting Identification, Matching, and Indexing in Noisy Document Images

    DTIC Science & Technology

    2006-01-01

    algorithm to detect all parallel lines simultaneously. Our method can detect 96.8% of the severely broken rule lines in the Arabic database we collected...in the database to guide later processing. It is widely used in banks, post offices, and tax offices where the types of forms are most often pre...be used for different fields), and output the recognition results to a database. Although special anchors may be available to facilitate form

  13. System and method for generating a relationship network

    DOEpatents

    Franks, Kasian; Myers, Cornelia A; Podowski, Raf M

    2015-05-05

    A computer-implemented system and process for generating a relationship network is disclosed. The system provides a set of data items to be related and generates variable length data vectors to represent the relationships between the terms within each data item. The system can be used to generate a relationship network for documents, images, or any other type of file. This relationship network can then be queried to discover the relationships between terms within the set of data items.
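
    The variable-length term vectors described in the patent abstract can be illustrated with simple document co-occurrence counts: a term's vector grows only as it co-occurs with new terms, and the relationship between two terms can be scored by cosine similarity. This is a generic sketch of the idea, not the patented method:

```python
import math
from collections import Counter
from itertools import combinations

def term_vectors(documents):
    """Build a co-occurrence vector for each term: how often it appears
    in the same document as every other term. Each vector's length varies
    with the number of distinct co-occurring terms."""
    vectors = {}
    for doc in documents:
        terms = set(doc.lower().split())
        for a, b in combinations(sorted(terms), 2):
            vectors.setdefault(a, Counter())[b] += 1
            vectors.setdefault(b, Counter())[a] += 1
    return vectors

def relatedness(vectors, t1, t2):
    """Cosine similarity between two term vectors; 0.0 if unrelated."""
    v1 = vectors.get(t1, Counter())
    v2 = vectors.get(t2, Counter())
    dot = sum(v1[k] * v2[k] for k in v1)
    n1 = math.sqrt(sum(c * c for c in v1.values()))
    n2 = math.sqrt(sum(c * c for c in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0
```

    Querying the resulting network then amounts to ranking all terms by their relatedness to the query term.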

  14. System and method for generating a relationship network

    DOEpatents

    Franks, Kasian [Kensington, CA; Myers, Cornelia A [St. Louis, MO; Podowski, Raf M [Pleasant Hill, CA

    2011-07-26

    A computer-implemented system and process for generating a relationship network is disclosed. The system provides a set of data items to be related and generates variable length data vectors to represent the relationships between the terms within each data item. The system can be used to generate a relationship network for documents, images, or any other type of file. This relationship network can then be queried to discover the relationships between terms within the set of data items.

  15. Pre- and postprocessing for reservoir simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rogers, W.L.; Ingalls, L.J.; Prasad, S.J.

    1991-05-01

    This paper describes the functionality and underlying programming paradigms of Shell's simulator-related reservoir-engineering graphics system. This system includes the simulation postprocessing programs Reservoir Display System (RDS) and Fast Reservoir Engineering Displays (FRED), a hypertext-like on-line documentation system (DOC), and a simulator input preprocessor (SIMPLSIM). RDS creates displays of reservoir simulation results. These displays represent the areal or cross-section distribution of computed reservoir parameters, such as pressure, phase saturation, or temperature. Generation of these images at real-time animation rates is discussed. FRED facilitates the creation of plot files from reservoir simulation output. The use of dynamic memory allocation, asynchronous I/O, a table-driven screen manager, and mixed-language (FORTRAN and C) programming are detailed. DOC is used to create and access on-line documentation for the pre- and postprocessing programs and the reservoir simulators. DOC can be run by itself or can be accessed from within any other graphics or nongraphics application program. DOC includes a text editor, which is the basis for a reservoir simulation tutorial and greatly simplifies the preparation of simulator input. The use of sharable images, graphics, and the documentation file network are described. Finally, SIMPLSIM is a suite of programs that uses interactive graphics in the preparation of reservoir description data for input into reservoir simulators. The SIMPLSIM user-interface manager (UIM) and its graphic interface for reservoir description are discussed.

  16. The digital curriculum vitae.

    PubMed

    Galdino, Greg M; Gotway, Michael

    2005-02-01

    The curriculum vitae (CV) has been the traditional method for radiologists to illustrate their accomplishments in the field of medicine. Despite its presence in medicine as a standard, widely accepted means to describe one's professional career and its use for decades as an adjunct to most applications and interviews, relatively little has been written in the medical literature regarding the CV. Misrepresentation on medical students', residents', and fellows' applications has been reported. Using digital technology, CVs have the potential to be much more than printed words on paper and offer a solution to misrepresentation. Digital CVs may incorporate full-length articles, graphics, presentations, clinical images, and video. Common formats for digital CVs include CD-ROMs or DVD-ROMs containing articles (in Adobe Portable Document Format) and presentations (in Microsoft PowerPoint format) accompanying printed CVs, word-processing documents with hyperlinks to articles and presentations either locally (on CD-ROMs or DVD-ROMs) or remotely (via the Internet), or hypertext markup language documents. Digital CVs afford the ability to provide more information that is readily accessible to those receiving and reviewing them. Articles, presentations, videos, images, and Internet links can be included using standard file formats commonly available to all radiologists. They can be easily updated and distributed on inexpensive media, such as a CD-ROM or DVD-ROM. With the availability of electronic articles, presentations, and information via the Internet, traditional paper CVs may soon be superseded by their electronic successors.

  17. Paving the seafloor: Volcanic emplacement processes during the 2005-2006 eruptions at the fast spreading East Pacific Rise, 9°50‧N

    NASA Astrophysics Data System (ADS)

    Fundis, A. T.; Soule, S. A.; Fornari, D. J.; Perfit, M. R.

    2010-08-01

    The 2005-2006 eruptions near 9°50'N at the East Pacific Rise (EPR) marked the first observed repeat eruption at a mid-ocean ridge and provided a unique opportunity to deduce the emplacement dynamics of submarine lava flows. Since the new flows were documented in April 2006, a total of 40 deep-towed imaging surveys have been conducted with the Woods Hole Oceanographic Institution's (WHOI) TowCam system. More than 60,000 digital color images and high-resolution bathymetric profiles of the 2005-2006 flows from the TowCam surveys were analyzed for lava flow morphology and for the presence of kipukas, collapse features, faults, and fissures. We use these data to quantify the spatial distributions of lava flow surface morphologies and to investigate how they relate to the physical characteristics of the ridge crest, such as seafloor slope, and to the inferred dynamics of flow emplacement. We conclude that lava effusion rate was the dominant factor controlling the observed morphological variations in the 2005-2006 flows. We also show that effusion rates were higher than in previously studied eruptions at this site and varied systematically along the length of the eruptive fissure. This is the first study in which variations in seafloor lava morphology can be directly related to a well-documented ridge-crest eruption in which effusion rate varied significantly.

  18. Texture for script identification.

    PubMed

    Busch, Andrew; Boles, Wageeh W; Sridharan, Sridha

    2005-11-01

    The problem of determining the script and language of a document image has a number of important applications in the field of document analysis, such as indexing and sorting of large collections of such images, or as a precursor to optical character recognition (OCR). In this paper, we investigate the use of texture as a tool for determining the script of a document image, based on the observation that text has a distinct visual texture. An experimental evaluation of a number of commonly used texture features is conducted on a newly created script database, providing a qualitative measure of which features are most appropriate for this task. Strategies for improving classification results in situations with limited training data and multiple font types are also proposed.
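
    A common concrete choice for such texture features is the grey-level co-occurrence matrix (GLCM) with Haralick statistics such as contrast and energy. The sketch below (horizontal neighbour at distance 1, 8 grey levels) is one plausible instance of the kind of feature the paper evaluates, not its exact feature set:

```python
import numpy as np

def glcm_features(image, levels=8):
    """Grey-level co-occurrence matrix for the horizontal neighbour at
    distance 1, plus two classic Haralick texture features (contrast and
    energy) of the kind used to characterize text as a visual texture."""
    # Quantize to a small number of grey levels.
    img = (image.astype(float) / (image.max() + 1e-9) * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    left, right = img[:, :-1].ravel(), img[:, 1:].ravel()
    np.add.at(glcm, (left, right), 1)   # count co-occurring level pairs
    glcm /= glcm.sum()                  # normalize to a joint distribution
    i, j = np.indices(glcm.shape)
    contrast = np.sum(glcm * (i - j) ** 2)  # high for busy textures
    energy = np.sum(glcm ** 2)              # high for uniform textures
    return contrast, energy
```

    Feature vectors computed this way over text blocks can then be fed to any standard classifier to discriminate scripts.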

  19. Sub-pixel analysis to support graphic security after scanning at low resolution

    NASA Astrophysics Data System (ADS)

    Haas, Bertrand; Cordery, Robert; Gou, Hongmei; Decker, Steve

    2006-02-01

    Whether in the domain of audio, video, or finance, our world tends to become increasingly digital. However, for diverse reasons, the transition from analog to digital is often protracted and proceeds by long steps (and sometimes never completes). One such step is the conversion of information on analog media to digital information. We focus in this paper on the conversion (scanning) of printed documents to digital images. Analog media have the advantage over digital channels that they can harbor much imperceptible information that can be used for fraud detection and forensic purposes, but this secondary information usually fails to be retrieved during the conversion step. This is particularly relevant since the Check-21 Act (Check Clearing for the 21st Century Act) became effective in 2004 and allows images of checks to be handled by banks as usual paper checks. We use this situation of check scanning as our primary benchmark for graphic security features after scanning. We first present a quick review of the most common graphic security features currently found on checks, with their specific purposes, qualities, and disadvantages, and we demonstrate their poor survivability under the average scanning conditions expected from the Check-21 Act. We then present a novel method for measuring distances between, and rotations of, line elements in a scanned image: based on an appropriate print model, we refine direct measurements to an accuracy beyond the size of a scanning pixel, so we can determine expected distances, periodicity, sharpness, and print quality of known characters, symbols, and other graphic elements in a document image. Finally, we apply our method to fraud detection of documents after gray-scale scanning at 300 dpi resolution. We show in particular that alterations on legitimate checks or copies of checks can be successfully detected by measuring with sub-pixel accuracy the irregularities inherently introduced by the illegitimate process.
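
    Sub-pixel localization of a line element is commonly done by fitting a parabola through the peak sample of an intensity profile and its two neighbours. The estimator below is that standard three-point fit, not necessarily the authors' print-model-based refinement:

```python
def subpixel_peak(profile):
    """Locate the maximum of a sampled 1-D intensity profile with
    sub-pixel accuracy by fitting a parabola through the peak sample
    and its two neighbours -- the classic three-point estimator used
    when measuring line positions in scanned document images."""
    # Integer peak (interior samples only, so both neighbours exist).
    i = max(range(1, len(profile) - 1), key=lambda k: profile[k])
    y0, y1, y2 = profile[i - 1], profile[i], profile[i + 1]
    denom = y0 - 2 * y1 + y2
    if denom == 0:          # flat top: no refinement possible
        return float(i)
    # Vertex of the parabola through the three samples.
    return i + 0.5 * (y0 - y2) / denom
```

    Measuring line centers this way, well below the size of a 300 dpi scanning pixel, is what makes distances and rotations between line elements usable as a forensic signal after scanning.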

  20. Robotic Inspection System for Non-Destructive Evaluation (nde) of Pipes

    NASA Astrophysics Data System (ADS)

    Mackenzie, L. D.; Pierce, S. G.; Hayward, G.

    2009-03-01

    The demand for remote inspection of pipework in the processing cells of nuclear plant provides significant challenges of access, navigation, inspection technique and data communication. Such processing cells typically contain several kilometres of densely packed pipework whose actual physical layout may be poorly documented. Access to these pipes is typically afforded through the radiation shield via a small removable concrete plug which may be several meters from the actual inspection site, thus considerably complicating practical inspection. The current research focuses on the robotic deployment of multiple NDE payloads for weld inspection along non-ferritic steel pipework (thus precluding use of magnetic traction options). A fully wireless robotic inspection platform has been developed that is capable of travelling along the outside of a pipe at any orientation, while avoiding obstacles such as pipe hangers and delivering a variety of NDE payloads. An eddy current array system provides rapid imaging capabilities for surface breaking defects while an on-board camera, in addition to assisting with navigation tasks, also allows real time image processing to identify potential defects. All sensor data can be processed by the embedded microcontroller or transmitted wirelessly back to the point of access for post-processing analysis.
