Sample records for document image content

  1. Adaptive removal of background and white space from document images using seam categorization

    NASA Astrophysics Data System (ADS)

    Fillion, Claude; Fan, Zhigang; Monga, Vishal

    2011-03-01

    Document images are obtained regularly by rasterization of document content and as scans of printed documents. Resizing via background and white space removal is often desired for better consumption of these images, whether on displays or in print. While white space and background are easy to identify in images, existing methods such as naïve removal and content-aware resizing (seam carving) each have limitations that can lead to undesirable artifacts, such as uneven spacing between lines of text or poor arrangement of content. An adaptive method based on image content is hence needed. In this paper we propose an adaptive method to intelligently remove white space and background content from document images. Document images differ from pictorial images in structure. They typically contain objects (text letters, pictures and graphics) separated by uniform background, which includes both white paper space and other uniformly colored regions. Pixels in uniform background regions are excellent candidates for deletion when resizing is required, as their removal changes document content and style less than deletion of object pixels. We propose a background deletion method that exploits both local and global context. The method aims to retain the document's structural information and image quality.
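
    A minimal sketch of the background-deletion idea in Python (numpy only; the thresholds and the per-run cap are illustrative assumptions, not the paper's adaptive rules): rows and columns that are almost entirely near-background are pruned, but a few are kept per run so that spacing shrinks without collapsing.

      import numpy as np

      def keep_mask(is_bg, keep_gap):
          # keep all content rows/columns, plus up to keep_gap
          # background ones per run, so spacing shrinks uniformly
          keep, run = np.ones(is_bg.size, dtype=bool), 0
          for i, bg in enumerate(is_bg):
              run = run + 1 if bg else 0
              if run > keep_gap:
                  keep[i] = False
          return keep

      def remove_uniform_space(gray, bg_thresh=245, keep_gap=8):
          rows = keep_mask((gray > bg_thresh).mean(axis=1) > 0.99, keep_gap)
          out = gray[rows, :]
          cols = keep_mask((out > bg_thresh).mean(axis=0) > 0.99, keep_gap)
          return out[:, cols]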

  2. Content-based retrieval of historical Ottoman documents stored as textual images.

    PubMed

    Saykol, Ediz; Sinop, Ali Kemal; Güdükbay, Ugur; Ulusoy, Ozgür; Cetin, A Enis

    2004-03-01

    There is an accelerating demand to access the visual content of documents stored in historical and cultural archives. The availability of electronic imaging tools and effective image processing techniques makes it feasible to process the multimedia data in large databases. In this paper, a framework for content-based retrieval of historical documents in the Ottoman Empire archives is presented. The documents are stored as textual images, which are compressed by constructing a library of symbols occurring in a document; the symbols in the original image are then replaced with pointers into the codebook to obtain a compressed representation of the image. Features in the wavelet and spatial domains, based on the angular and distance span of shapes, are used to extract the symbols. To support content-based retrieval in historical archives, a query is specified as a rectangular region in an input image, and the same symbol-extraction process is applied to the query region. Queries are processed against the codebook of documents, and the query images are identified in the resulting documents using the pointers in the textual images. The querying process does not require decompression of the images. The content-based retrieval framework is also applicable to many other document archives that use different scripts.
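
    A compact sketch of the codebook construction (Python with scipy; exact bitmap matching stands in for the paper's wavelet and angular/distance-span shape features):

      import numpy as np
      from scipy import ndimage

      def compress_textual_image(binary):
          """Build a library of distinct symbol bitmaps and replace each
          occurrence by a pointer (library index, row, col)."""
          labels, n = ndimage.label(binary)
          library, index, pointers = [], {}, []
          for i, sl in enumerate(ndimage.find_objects(labels), start=1):
              glyph = (labels[sl] == i)             # isolate this symbol
              key = (glyph.shape, glyph.tobytes())  # exact match only;
              if key not in index:                  # real features are fuzzier
                  index[key] = len(library)
                  library.append(glyph)
              pointers.append((index[key], sl[0].start, sl[1].start))
          return library, pointers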

  3. Restoring 2D content from distorted documents.

    PubMed

    Brown, Michael S; Sun, Mingxuan; Yang, Ruigang; Yun, Lin; Seales, W Brent

    2007-11-01

    This paper presents a framework to restore the 2D content printed on documents in the presence of geometric distortion and non-uniform illumination. Compared with text-based document imaging approaches that correct distortion only to a level necessary to obtain sufficiently readable text or to facilitate optical character recognition (OCR), our work targets nontextual documents where the original printed content is desired. To achieve this goal, our framework acquires a 3D scan of the document's surface together with a high-resolution image. Conformal mapping is used to rectify geometric distortion by mapping the 3D surface back to a plane while minimizing angular distortion. This conformal "deskewing" assumes no parametric model of the document's surface and is suitable for arbitrary distortions. Illumination correction is performed by using the 3D shape to distinguish content gradient edges from illumination gradient edges in the high-resolution image. Integration is performed using only the content edges to obtain a reflectance image with significantly fewer illumination artifacts. This approach makes no assumptions about light sources and their positions. The results from the geometric and photometric correction are combined to produce the final output.

  4. Old document image segmentation using the autocorrelation function and multiresolution analysis

    NASA Astrophysics Data System (ADS)

    Mehri, Maroua; Gomez-Krämer, Petra; Héroux, Pierre; Mullot, Rémy

    2013-01-01

    Recent progress in the digitization of heterogeneous collections of ancient documents has raised new challenges in information retrieval in digital libraries and document layout analysis. To control the quality of historical document image digitization and to meet the need for characterizing their content with intermediate-level metadata (between image and document structure), we propose a fast automatic layout segmentation of old document images based on five descriptors. These descriptors, based on the autocorrelation function, are obtained by multiresolution analysis and used afterwards in a specific clustering method. The proposed method has the advantage that it requires no hypothesis on the document structure, either about the document model (physical structure) or the typographical parameters (logical structure). It is also parameter-free, since it adapts automatically to the image content. In this paper, we first detail our proposal to characterize the content of old documents by extracting autocorrelation features in the different areas of a page and at several resolutions. Then, we show that it is possible to find the homogeneous regions defined by similar autocorrelation indices automatically, without knowledge of the number of clusters, using adapted hierarchical ascendant classification and consensus clustering approaches. To assess our method, we apply our algorithm to 316 old document images spanning six centuries (1200-1900) of French history, in order to demonstrate the performance of our proposal in terms of segmentation and characterization of heterogeneous corpus content. Moreover, we define a new evaluation metric, the homogeneity measure, which aims at evaluating the segmentation and characterization accuracy of our methodology. We obtain a mean homogeneity accuracy of 85%. These results help to represent a document by a hierarchy of layout structure and content, and to define one or more signatures for each page, on the basis of a hierarchical representation of homogeneous blocks and their topology.
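
    A sketch of the feature extraction step (Python/numpy): the 2-D autocorrelation of each block is computed via the Wiener-Khinchin theorem, at several resolutions obtained by downsampling. The four statistics below are illustrative stand-ins for the paper's five descriptors.

      import numpy as np

      def autocorrelation(block):
          # autocorrelation = IFFT(|FFT|^2), normalized by zero-lag energy
          f = np.fft.fft2(block - block.mean())
          ac = np.fft.ifft2(np.abs(f) ** 2).real
          ac /= ac[0, 0] + 1e-12
          return np.fft.fftshift(ac)

      def block_descriptors(gray, block=64, levels=3):
          feats, img = [], gray.astype(float)
          for _ in range(levels):
              h, w = img.shape
              for y in range(0, h - block + 1, block):
                  for x in range(0, w - block + 1, block):
                      ac = autocorrelation(img[y:y+block, x:x+block])
                      feats.append([ac.mean(),
                                    ac.sum(axis=0).std(),  # horizontal structure
                                    ac.sum(axis=1).std(),  # vertical structure
                                    np.abs(ac).sum()])
              img = img[::2, ::2]          # next, coarser resolution
          return np.asarray(feats)         # one row per block and level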

  5. Document image retrieval through word shape coding.

    PubMed

    Lu, Shijian; Li, Linlin; Tan, Chew Lim

    2008-11-01

    This paper presents a document retrieval technique that is capable of searching document images without OCR (optical character recognition). The proposed technique retrieves document images by a new word shape coding scheme, which captures the document content through annotating each word image by a word shape code. In particular, we annotate word images by using a set of topological shape features including character ascenders/descenders, character holes, and character water reservoirs. With the annotated word shape codes, document images can be retrieved by either query keywords or a query document image. Experimental results show that the proposed document image retrieval technique is fast, efficient, and tolerant to various types of document degradation.
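
    A toy version of the coding idea (Python/numpy; the "A"/"D"/"x" alphabet and the x-height heuristic are assumptions, and the paper's hole and water-reservoir features are omitted):

      import numpy as np

      def word_shape_code(word_img, ink_thresh=128):
          # assumes the word image contains some ink
          ink = word_img < ink_thresh            # dark pixels are ink
          profile = ink.sum(axis=1)              # ink per row
          xband = np.where(profile > 0.5 * profile.max())[0]
          x_top, x_bot = xband[0], xband[-1]     # estimated x-height band
          code = []
          for col in ink.T:
              ys = np.where(col)[0]
              if ys.size == 0:
                  code.append(" ")               # inter-character gap
              elif ys[0] < x_top - 1:
                  code.append("A")               # ascender above x-height
              elif ys[-1] > x_bot + 1:
                  code.append("D")               # descender below baseline
              else:
                  code.append("x")               # plain x-height stroke
          return "".join(code)                   # comparable across word images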

  6. Content Recognition and Context Modeling for Document Analysis and Retrieval

    ERIC Educational Resources Information Center

    Zhu, Guangyu

    2009-01-01

    The nature and scope of available documents are changing significantly in many areas of document analysis and retrieval as complex, heterogeneous collections become accessible to virtually everyone via the web. The increasing level of diversity presents a great challenge for document image content categorization, indexing, and retrieval.…

  7. Classification of document page images based on visual similarity of layout structures

    NASA Astrophysics Data System (ADS)

    Shin, Christian K.; Doermann, David S.

    1999-12-01

    Searching for documents by their type or genre is a natural way to enhance the effectiveness of document retrieval. The layout of a document contains a significant amount of information that can be used to classify a document's type in the absence of domain-specific models. A document type or genre can be defined by the user based primarily on layout structure. Our classification approach is based on the 'visual similarity' of layout structure, building a supervised classifier from examples of each class. We use image features such as the percentages of text and non-text (graphics, image, table, and ruling) content regions, column structures, variations in the point size of fonts, the density of content area, and various statistics on features of connected components, all of which can be derived from class samples without class knowledge. To obtain class labels for training samples, we conducted a user relevance test in which subjects ranked UW-I document images with respect to 12 representative images. We implemented our classification scheme using OC1, a decision tree classifier, and report our findings.
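
    A minimal illustration of the classification setup (Python with scikit-learn; the feature names and values are invented, and an axis-parallel decision tree stands in for the oblique trees that OC1 actually builds):

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      def layout_features(text_frac, graphics_frac, n_columns,
                          font_size_var, content_density):
          return [text_frac, graphics_frac, n_columns,
                  font_size_var, content_density]

      # toy training set: two genres, labels from a user relevance test
      X = np.array([layout_features(0.85, 0.05, 1, 0.10, 0.60),  # letter-like
                    layout_features(0.50, 0.35, 2, 0.45, 0.80),  # magazine-like
                    layout_features(0.80, 0.10, 1, 0.15, 0.55),
                    layout_features(0.45, 0.40, 3, 0.50, 0.85)])
      y = np.array([0, 1, 0, 1])

      clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
      print(clf.predict([layout_features(0.55, 0.30, 2, 0.40, 0.75)]))  # -> [1]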

  8. Web-based document and content management with off-the-shelf software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schuster, J

    1999-03-18

    This, then, is the current status of the project: Since we made the switch to Intradoc, we are now treating the project as a document and image management system. In reality, it could be considered a document and content management system, since we can manage almost any file input to the system, such as video or audio. At present, however, we are concentrating on images. As mentioned above, my CRADA funding was only targeted at including thumbnails of images in Intradoc. We still had to modify Intradoc so that it would compress images submitted to the system. All processing of files submitted to Intradoc is handled in what is called the Document Refinery. Even though MrSID created thumbnails in the process of compressing an image, work needed to be done to build this capability into the Document Refinery. Therefore we made the decision to contract the Intradoc Engineering Team to perform this custom development work. To make Intradoc even more capable of handling images, we have also contracted for customization of the Document Refinery to accept Adobe Photoshop and Illustrator files in their native formats.

  9. Composition of a dewarped and enhanced document image from two view images.

    PubMed

    Koo, Hyung Il; Kim, Jinho; Cho, Nam Ik

    2009-07-01

    In this paper, we propose an algorithm to compose a geometrically dewarped and visually enhanced image from two document images taken by a digital camera at different angles. Unlike conventional works that require special equipment, assumptions on the contents of books, or complicated image acquisition steps, we estimate the unfolded book or document surface from the corresponding points between the two images. For this purpose, the surface and camera matrices are estimated using structure reconstruction, 3-D projection analysis, and random sample consensus (RANSAC)-based curve fitting with a cylindrical surface model. Because we need no assumptions on the contents of books, the proposed method can be applied not only to optical character recognition (OCR) but also to the high-quality digitization of pictures in documents. In addition to dewarping for a structurally better image, image mosaicking is performed to further improve visual quality. By finding the better parts of the images (with less out-of-focus blur and/or without specular reflections) from either view, we compose a better image by stitching and blending them. These processes are formulated as energy minimization problems that can be solved using a graph cut method. Experiments on many kinds of book and document images show that the proposed algorithm works robustly and yields visually pleasing results. Also, the OCR rate of the resulting image is comparable to that of document images from a flatbed scanner.
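
    The curve-fitting step can be sketched as a generic RANSAC loop (Python/numpy; a polynomial stands in for the cylindrical-surface cross-section, and the tolerance is an assumed pixel threshold):

      import numpy as np

      def ransac_polyfit(x, y, degree=4, iters=500, tol=2.0, seed=0):
          rng = np.random.default_rng(seed)
          best, best_inliers = None, 0
          for _ in range(iters):
              pick = rng.choice(x.size, degree + 1, replace=False)
              coeffs = np.polyfit(x[pick], y[pick], degree)
              inliers = np.abs(np.polyval(coeffs, x) - y) < tol
              if inliers.sum() > best_inliers:
                  best_inliers = inliers.sum()
                  best = np.polyfit(x[inliers], y[inliers], degree)  # refit on consensus
          return best          # coefficients of the consensus curve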

  10. Document cards: a top trumps visualization for documents.

    PubMed

    Strobelt, Hendrik; Oelke, Daniela; Rohrdantz, Christian; Stoffel, Andreas; Keim, Daniel A; Deussen, Oliver

    2009-01-01

    Finding suitable, less space-consuming views of a document's main content is crucial for providing convenient access to large document collections on display devices of different sizes. We present a novel compact visualization which represents the document's key semantics as a mixture of images and important key terms, similar to cards in a top trumps game. The key terms are extracted using an advanced text mining approach based on fully automatic document structure extraction. The images and their captions are extracted using a graphical heuristic, and the captions are used for a semi-semantic image weighting. Furthermore, we use the image color histogram for classification and show at least one representative from each non-empty image class. The approach is demonstrated for the IEEE InfoVis publications of a complete year. The method can easily be applied to other publication collections and sets of documents which contain images.

  11. Dealing with extreme data diversity: extraction and fusion from the growing types of document formats

    NASA Astrophysics Data System (ADS)

    David, Peter; Hansen, Nichole; Nolan, James J.; Alcocer, Pedro

    2015-05-01

    The growth in text data available online is accompanied by a growth in the diversity of available documents. Corpora with extreme heterogeneity in terms of file formats, document organization, page layout, text style, and content are common. The absence of meaningful metadata describing the structure of online and open-source data leads to text extraction results that contain no information about document structure and are cluttered with page headers and footers, web navigation controls, advertisements, and other items that are typically considered noise. We describe an approach to document structure and metadata recovery that uses visual analysis of documents to infer the communicative intent of the author. Our algorithm identifies the components of documents, such as titles, headings, and body content, based on their appearance. Because it operates on an image of a document, our technique can be applied to any type of document, including scanned images. Our approach to document structure recovery considers a finer-grained set of component types than prior approaches. In this initial work, we show that a machine learning approach to document structure recovery, using a feature set based on the geometry and appearance of images of documents, achieves a 60% greater F1-score than a baseline random classifier.

  12. Main image file tape description

    USGS Publications Warehouse

    Warriner, Howard W.

    1980-01-01

    This Main Image File Tape document defines the data content and file structure of the Main Image File Tape (MIFT) produced by the EROS Data Center (EDC). This document also defines an INQUIRY tape, which is just a subset of the MIFT. The format of the INQUIRY tape is identical to the MIFT except for two records; therefore, with the exception of these two records (described elsewhere in this document), every remark made about the MIFT is true for the INQUIRY tape.

  13. Image/text automatic indexing and retrieval system using context vector approach

    NASA Astrophysics Data System (ADS)

    Qing, Kent P.; Caid, William R.; Ren, Clara Z.; McCabe, Patrick

    1995-11-01

    Thousands of documents and images are generated daily, both online and offline, on the information superhighway and other media. Storage technology has improved rapidly to handle these data, but indexing this information is becoming very costly. HNC Software Inc. has developed a technology for automatic indexing and retrieval of free text and images. The technique is based on the concept of 'context vectors', which encode a succinct representation of the associated text and the features of sub-images. In this paper, we describe the Automated Librarian System, which was designed for free-text indexing, and the Image Content Addressable Retrieval System (ICARS), which extends the technique from the text domain into the image domain. Both systems have the ability to automatically assign indices to a new document and/or image based on content similarities in the database. ICARS also has the capability to retrieve images based on similarity of content using index terms, text descriptions, and user-generated images as queries, without performing segmentation or object recognition.
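
    The index-assignment step reduces to nearest-neighbour search in vector space; a small sketch (Python/numpy; the 4-D vectors and index terms are invented, and real context vectors are learned rather than hand-set):

      import numpy as np

      def cosine(a, b):
          return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

      def auto_index(new_vec, indexed, top_k=3):
          # borrow the index terms of the closest already-indexed items
          nearest = sorted(indexed, key=lambda d: -cosine(new_vec, d["vector"]))
          terms = [t for d in nearest[:top_k] for t in d["terms"]]
          return sorted(set(terms))

      indexed = [{"vector": np.array([0.9, 0.1, 0.0, 0.2]), "terms": ["aviation"]},
                 {"vector": np.array([0.1, 0.8, 0.5, 0.0]), "terms": ["maritime"]}]
      print(auto_index(np.array([0.8, 0.2, 0.1, 0.1]), indexed))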

  14. Goal-oriented rectification of camera-based document images.

    PubMed

    Stamatopoulos, Nikolaos; Gatos, Basilis; Pratikakis, Ioannis; Perantonis, Stavros J

    2011-04-01

    Document digitization with either flatbed scanners or camera-based systems results in document images which often suffer from warping and perspective distortions that deteriorate the performance of current OCR approaches. In this paper, we present a goal-oriented rectification methodology to compensate for undesirable document image distortions, aiming to improve the OCR result. Our approach relies upon a coarse-to-fine strategy. First, a coarse rectification is accomplished with the aid of a computationally low-cost transformation which addresses the projection of a curved surface to a 2-D rectangular area. The projection of the curved surface onto the plane is guided only by the appearance of the textual content in the document image, and the transformation does not depend on specific model primitives or camera setup parameters. Second, pose normalization is applied at the word level, aiming to restore all the local distortions of the document image. Experimental results on various document images with a variety of distortions demonstrate the robustness and effectiveness of the proposed rectification methodology, using a consistent evaluation methodology that takes into account OCR accuracy and a newly introduced measure based on a semi-automatic procedure.

  15. Imaged Document Optical Correlation and Conversion System (IDOCCS)

    NASA Astrophysics Data System (ADS)

    Stalcup, Bruce W.; Dennis, Phillip W.; Dydyk, Robert B.

    1999-03-01

    Today, the paper document is fast becoming a thing of the past. With the rapid development of fast, inexpensive computing and storage devices, many government and private organizations are archiving their documents in electronic form (e.g., personnel records, medical records, patents, etc.). In addition, many organizations are converting their paper archives to electronic images, which are stored in a computer database. Because of this, there is a need to efficiently organize this data into comprehensive and accessible information resources. The Imaged Document Optical Correlation and Conversion System (IDOCCS) provides a total solution to the problem of managing and retrieving textual and graphic information from imaged document archives. At the heart of IDOCCS, optical correlation technology provides the search and retrieval capability of document images. The IDOCCS can be used to rapidly search for key words or phrases within the imaged document archives and can even determine the types of languages contained within a document. In addition, IDOCCS can automatically compare an input document with the archived database to determine if it is a duplicate, thereby reducing the overall resources required to maintain and access the document database. Embedded graphics on imaged pages can also be exploited, e.g., imaged documents containing an agency's seal or logo, or documents with a particular individual's signature block, can be singled out. With this dual capability, IDOCCS outperforms systems that rely on optical character recognition as a basis for indexing and storing only the textual content of documents for later retrieval.

  16. Mapping DICOM to OpenDocument format

    NASA Astrophysics Data System (ADS)

    Yu, Cong; Yao, Zhihong

    2009-02-01

    In order to enhance the readability, extensibility and sharing of DICOM files, we previously introduced XML into the DICOM file system (SPIE Volume 5748)[1] and a multilayer tree structure into DICOM (SPIE Volume 6145)[2]. In this paper, we propose mapping DICOM to ODF (OpenDocument Format), since it is also based on XML. As a result, the new format achieves the separation of content (including text and images) and display style. Meanwhile, since OpenDocument files take the form of a ZIP compressed archive, the new kind of DICOM file can benefit from ZIP's lossless compression to reduce file size. Moreover, this open format can also guarantee long-term access to the data without legal or technical barriers, making medical images accessible to various fields.
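
    Because an OpenDocument file is just a ZIP archive of XML parts, the packaging can be sketched with the standard library (Python; the XML payload and image bytes are placeholders for data mapped from DICOM):

      import zipfile

      content_xml = b"<?xml version='1.0'?><office:document-content/>"  # placeholder
      image_bytes = b"..."   # pixel data re-encoded as an ordinary image

      with zipfile.ZipFile("mapped.odt", "w") as odf:
          # per the ODF spec, "mimetype" comes first and is stored uncompressed
          odf.writestr("mimetype", "application/vnd.oasis.opendocument.text",
                       compress_type=zipfile.ZIP_STORED)
          odf.writestr("content.xml", content_xml,
                       compress_type=zipfile.ZIP_DEFLATED)   # lossless DEFLATE
          odf.writestr("Pictures/slice0.png", image_bytes,
                       compress_type=zipfile.ZIP_DEFLATED)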

  17. Scalable ranked retrieval using document images

    NASA Astrophysics Data System (ADS)

    Jain, Rajiv; Oard, Douglas W.; Doermann, David

    2013-12-01

    Despite the explosion of text on the Internet, hard-copy documents that have been scanned as images still play a significant role in some tasks. The best method for performing ranked retrieval on a large corpus of document images, however, remains an open research question. The most common approach has been to perform text retrieval using terms generated by optical character recognition. This paper, by contrast, examines whether a scalable, segmentation-free image retrieval algorithm, which matches sub-images containing text or graphical objects, can provide additional benefit in satisfying a user's information needs on a large, real-world dataset. Results on 7 million scanned pages from the CDIP v1.0 test collection show that content-based image retrieval finds a substantial number of documents that text retrieval misses, and that, when used as a basis for relevance feedback, it can yield improvements in retrieval effectiveness.

  18. Contrast in Terahertz Images of Archival Documents—Part II: Influence of Topographic Features

    NASA Astrophysics Data System (ADS)

    Bardon, Tiphaine; May, Robert K.; Taday, Philip F.; Strlič, Matija

    2017-04-01

    We investigate the potential of terahertz time-domain imaging in reflection mode to reveal archival information in documents in a non-invasive way. In particular, this study explores the parameters and signal processing tools that can be used to produce well-contrasted terahertz images of topographic features commonly found in archival documents, such as indentations left by a writing tool, as well as sieve lines. While the amplitude of the waveforms at a specific time delay can provide the most contrasted and legible images of topographic features on flat paper or parchment sheets, this parameter may not be suitable for documents that have a highly irregular surface, such as water- or fire-damaged documents. For analysis of such documents, cross-correlation of the time-domain signals can instead yield images with good contrast. Analysis of the frequency-domain representation of terahertz waveforms can also provide well-contrasted images of topographic features, with improved spatial resolution when utilising high-frequency content. Finally, we point out some of the limitations of these means of analysis for extracting information relating to topographic features of interest from documents.

  19. Case retrieval in medical databases by fusing heterogeneous information.

    PubMed

    Quellec, Gwénolé; Lamard, Mathieu; Cazuguel, Guy; Roux, Christian; Cochener, Béatrice

    2011-01-01

    A novel content-based heterogeneous information retrieval framework, particularly well suited to browsing medical databases and supporting new-generation computer-aided diagnosis (CADx) systems, is presented in this paper. It was designed to retrieve possibly incomplete documents, each consisting of several images and semantic information, from a database; more complex data types, such as videos, can also be included in the framework. The proposed retrieval method relies on image processing, in order to characterize each individual image in a document by its digital content, and on information fusion. Once the available images in a query document are characterized, a degree of match between the query document and each reference document stored in the database is defined for each attribute (an image feature or a metadata item). A Bayesian network is used to recover missing information if need be. Finally, two novel information fusion methods are proposed to combine these degrees of match, in order to rank the reference documents by decreasing relevance for the query. In the first method, the degrees of match are fused by the Bayesian network itself. In the second method, they are fused by the Dezert-Smarandache theory: this approach lets us model our confidence in each source of information (i.e., each attribute) and take it into account in the fusion process for better retrieval performance. The proposed methods were applied to two heterogeneous medical databases, a diabetic retinopathy database and a mammography screening database, for computer-aided diagnosis. Precision at five of 0.809 ± 0.158 and 0.821 ± 0.177, respectively, was obtained for these two databases, which is very promising.
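
    The fusion step can be illustrated with a simple confidence-weighted combination (Python; attribute values are assumed normalized to [0, 1], and the fixed weights only mimic the per-source confidences that the Bayesian-network and Dezert-Smarandache methods model properly):

      def fuse_degrees_of_match(query, reference, confidence):
          score = total = 0.0
          for attr, w in confidence.items():
              if attr in query and attr in reference:       # tolerate missing data
                  score += w * (1.0 - abs(query[attr] - reference[attr]))
                  total += w
          return score / total if total else 0.0

      # hypothetical attributes and confidences
      confidence = {"lesion_count": 0.9, "age": 0.4, "contrast": 0.7}
      q = {"lesion_count": 0.8, "age": 0.5}
      refs = [{"lesion_count": 0.7, "age": 0.6, "contrast": 0.2},
              {"lesion_count": 0.1, "age": 0.5, "contrast": 0.9}]
      ranked = sorted(refs, key=lambda r: -fuse_degrees_of_match(q, r, confidence))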

  20. Wide-field time-resolved luminescence imaging and spectroscopy to decipher obliterated documents in forensic science

    NASA Astrophysics Data System (ADS)

    Suzuki, Mototsugu; Akiba, Norimitsu; Kurosawa, Kenji; Kuroki, Kenro; Akao, Yoshinori; Higashikawa, Yoshiyasu

    2016-01-01

    We applied a wide-field time-resolved luminescence (TRL) method with a pulsed laser and a gated intensified charge-coupled device (ICCD) to the deciphering of obliterated documents for use in forensic science. The TRL method can nondestructively measure the dynamics of luminescence, including fluorescence and phosphorescence lifetimes, which prove to be useful parameters for image detection. First, we measured the TRL spectra of four brands of black porous-tip pen inks on paper to estimate their luminescence lifetimes. Next, we acquired TRL images of 12 obliterated documents at various delay times and gate times of the ICCD. The obliterated contents were revealed in the TRL images because of the difference in the luminescence lifetimes of the inks. This method requires no pretreatment, is nondestructive, and has the advantage of wide-field imaging, which makes it easy to control the gate timing. This demonstration proves that TRL imaging and spectroscopy are powerful tools for forensic document examination.

  1. Duplicate document detection in DocBrowse

    NASA Astrophysics Data System (ADS)

    Chalana, Vikram; Bruce, Andrew G.; Nguyen, Thien

    1998-04-01

    Duplicate documents are frequently found in large databases of digital documents, such as those found in digital libraries or in the government declassification effort. Efficient duplicate document detection is important not only to allow querying for similar documents, but also to filter out redundant information in large document databases. We have designed three different algorithms to identify duplicate documents. The first algorithm is based on features extracted from the textual content of a document, the second is based on wavelet features extracted from the document image itself, and the third is a combination of the first two. These algorithms are integrated within the DocBrowse system for information retrieval from document images, which is currently under development at MathSoft. DocBrowse supports duplicate document detection by allowing (1) automatic filtering to hide duplicate documents and (2) ad hoc querying for similar or duplicate documents. We have tested the duplicate document detection algorithms on 171 documents and found that the text-based method has an average 11-point precision of 97.7 percent, while the image-based method has an average 11-point precision of 98.9 percent. In general, however, the text-based method performs better when the document contains enough high-quality machine-printed text, while the image-based method performs better when the document contains little or no high-quality machine-readable text.
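
    The text-based branch can be approximated with word-shingle similarity (Python; the 4-gram size and 0.8 threshold are assumptions, and the wavelet image features are not shown):

      def shingle_similarity(text_a, text_b, k=4):
          # Jaccard similarity over word k-grams of the OCR text
          def shingles(text):
              words = text.lower().split()
              return {tuple(words[i:i+k]) for i in range(len(words) - k + 1)}
          a, b = shingles(text_a), shingles(text_b)
          return len(a & b) / len(a | b) if a | b else 0.0

      page_a = "the quick brown fox jumps over the lazy dog again and again"
      page_b = "the quick brown fox jumps over the lazy dog once more"
      is_duplicate = shingle_similarity(page_a, page_b) > 0.8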

  2. Geographical Topics Learning of Geo-Tagged Social Images.

    PubMed

    Zhang, Xiaoming; Ji, Shufan; Wang, Senzhang; Li, Zhoujun; Lv, Xueqiang

    2016-03-01

    With the availability of cheap location sensors, geotagging of images in online social media is very popular. With a large amount of geo-tagged social images, it is interesting to study how these images are shared across geographical regions and how geographical language characteristics and vision patterns are distributed across different regions. Unlike a textual document, a geo-tagged social image contains multiple types of content, i.e., textual description, visual content, and geographical information. Existing approaches usually mine geographical characteristics using a subset of these content types or by combining them linearly, which ignores the correlations between different types of content and their geographical distributions. Therefore, in this paper, we propose a novel method to discover the geographical characteristics of geo-tagged social images using a geographical topic model of social images (GTMSI). GTMSI integrates multiple types of social image content as well as the geographical distributions, modeling image topics on both vocabulary and visual features. In GTMSI, each geographical region has its own topic distribution, and hence its own language model and vision pattern. Experimental results show that GTMSI can identify interesting topics and vision patterns, as well as provide location prediction and image tagging.

  3. High recall document content extraction

    NASA Astrophysics Data System (ADS)

    An, Chang; Baird, Henry S.

    2011-01-01

    We report methodologies for computing high-recall masks for document image content extraction, that is, the location and segmentation of regions containing handwriting, machine-printed text, photographs, blank space, etc. The resulting segmentation is pixel-accurate, which accommodates arbitrary zone shapes (not merely rectangles). We describe experiments showing that iterated classifiers can increase recall for all content types, with little loss of precision. We also introduce two methodological enhancements: (1) a multi-stage voting rule; and (2) a scoring policy that treats blank pixels as a "don't care" class relative to the other content classes. These enhancements improve both recall and precision, achieving at least 89% recall and at least 87% precision for three content types: machine print, handwriting, and photos.
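
    A sketch of a voting rule in this spirit (Python/numpy; the specific "blank never outvotes content" behaviour is an illustrative reading of the don't-care policy, not the paper's exact rule):

      import numpy as np

      def vote_masks(masks, blank=0):
          # masks: per-classifier label images of equal shape (H, W)
          stack = np.stack(masks)
          out = np.zeros(stack.shape[1:], dtype=stack.dtype)
          for y in range(out.shape[0]):
              for x in range(out.shape[1]):
                  votes = stack[:, y, x]
                  content = votes[votes != blank]   # don't-care: prefer content
                  votes = content if content.size else votes
                  vals, counts = np.unique(votes, return_counts=True)
                  out[y, x] = vals[counts.argmax()]  # majority label wins
          return out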

  4. What Can Pictures Tell Us About Web Pages? Improving Document Search Using Images.

    PubMed

    Rodriguez-Vaamonde, Sergio; Torresani, Lorenzo; Fitzgibbon, Andrew W

    2015-06-01

    Traditional Web search engines do not use the images in the HTML pages to find relevant documents for a given query. Instead, they typically operate by computing a measure of agreement between the keywords provided by the user and only the text portion of each page. In this paper we study whether the content of the pictures appearing in a Web page can be used to enrich the semantic description of an HTML document and consequently boost the performance of a keyword-based search engine. We present a Web-scalable system that exploits a pure text-based search engine to find an initial set of candidate documents for a given query. Then, the candidate set is reranked using visual information extracted from the images contained in the pages. The resulting system retains the computational efficiency of traditional text-based search engines with only a small additional storage cost needed to encode the visual information. We test our approach on one of the TREC Million Query Track benchmarks where we show that the exploitation of visual content yields improvement in accuracies for two distinct text-based search engines, including the system with the best reported performance on this benchmark. We further validate our approach by collecting document relevance judgements on our search results using Amazon Mechanical Turk. The results of this experiment confirm the improvement in accuracy produced by our image-based reranker over a pure text-based system.

  5. Extraction and labeling high-resolution images from PDF documents

    NASA Astrophysics Data System (ADS)

    Chachra, Suchet K.; Xue, Zhiyun; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.

    2013-12-01

    The accuracy of content-based image retrieval is affected by image resolution, among other factors. Higher-resolution images enable the extraction of image features that more accurately represent the image content. In order to improve the relevance of search results for our biomedical image search engine, Open-I, we have developed techniques to extract and label high-resolution versions of figures from biomedical articles supplied in the PDF format. Open-I uses the open-access subset of biomedical articles from the PubMed Central repository hosted by the National Library of Medicine. Articles are available in XML and in publisher-supplied PDF formats. As these PDF documents contain little or no metadata to identify the embedded images, the task includes labeling images according to their figure number in the article after they have been successfully extracted. For this purpose we use the labeled small-size images provided with the XML web version of the article. This paper describes the image extraction process and two alternative approaches to image labeling: one measures the similarity between two images based upon the image intensity projections on the coordinate axes, and the other is based upon the normalized cross-correlation between the intensities of the two images. Using image identification based on intensity projection, we were able to achieve a precision of 92.84% and a recall of 82.18% in labeling the extracted images.
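
    The second similarity measure is easy to sketch (Python/numpy; both images are assumed already resized to a common size, e.g. with Pillow or OpenCV):

      import numpy as np

      def normalized_cross_correlation(a, b):
          a = (a - a.mean()) / (a.std() + 1e-12)
          b = (b - b.mean()) / (b.std() + 1e-12)
          return float((a * b).mean())       # 1.0 for identical images

      def label_extracted_images(extracted, labeled_thumbnails):
          # give each extracted figure the label of its most similar thumbnail
          return {name: max(labeled_thumbnails,
                            key=lambda lbl: normalized_cross_correlation(
                                img, labeled_thumbnails[lbl]))
                  for name, img in extracted.items()}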

  6. Uses of software in digital image analysis: a forensic report

    NASA Astrophysics Data System (ADS)

    Sharma, Mukesh; Jha, Shailendra

    2010-02-01

    Forensic image analysis requires expertise to interpret the content of an image, or the image itself, in legal matters. Major sub-disciplines of forensic image analysis with law enforcement applications include photogrammetry, photographic comparison, content analysis, and image authentication. It has wide applications in forensic science, ranging from documenting crime scenes to enhancing faint or indistinct patterns such as partial fingerprints. The process of forensic image analysis can involve several different tasks, regardless of the type of image analysis performed. In this paper, the authors explain these tasks, which fall into three categories: Image Compression, Image Enhancement & Restoration, and Measurement Extraction, with the help of examples such as signature comparison, counterfeit currency comparison, and footwear sole impressions, using the software Canvas and CorelDRAW.

  7. Representation-based user interfaces for the audiovisual library of the year 2000

    NASA Astrophysics Data System (ADS)

    Aigrain, Philippe; Joly, Philippe; Lepain, Philippe; Longueville, Veronique

    1995-03-01

    The audiovisual library of the future will be based on computerized access to digitized documents. In this communication, we address the user interface issues which will arise from this new situation. One cannot simply transfer a user interface designed for the piece-by-piece production of an audiovisual presentation and make it a tool for accessing full-length movies in an electronic library. One cannot take a digital sound editing tool and propose it as a means to listen to a musical recording. In our opinion, when computers are used as mediators to existing content, document representation-based user interfaces are needed. With such user interfaces, a structured visual representation of the document contents is presented to the user, who can then manipulate it to control perception and analysis of these contents. In order to build such manipulable visual representations of audiovisual documents, one needs to automatically extract structural information from the document contents. In this communication, we describe possible visual interfaces for various temporal media, and we propose methods for the economically feasible large-scale processing of documents. The work presented is sponsored by the Bibliotheque Nationale de France: it is part of the program aiming at developing, for image and sound documents, an experimental counterpart to the digitized text reading workstation of this library.

  8. Are Television Commercials Still Achievement Scripts for Women?

    ERIC Educational Resources Information Center

    Yoder, Janice D.; Christopher, Jessica; Holmes, Jeffrey D.

    2008-01-01

    Content analyses of television advertising document the ubiquity of traditional images of women, yet few studies have explored their impact. One noteworthy exception is the experiment by Geis, Brown, Jennings, and Porter (1984). These researchers found that the achievement aspirations of controls and women exposed to traditional images were lower…

  9. Research on robust methods for extracting and recognizing photography management items from various image data of construction

    NASA Astrophysics Data System (ADS)

    Kitagawa, Etsuji; Tanaka, Shigenori; Abiko, Satoshi; Wakabayashi, Katsuma; Jiang, Wenyuan

    Recently, electronic delivery of various documents has been introduced by the Ministry of Land, Infrastructure, Transport and Tourism in the construction field. Among these documents is construction photography image data, which must be delivered together with photography management items such as the construction name or the type of work. However, entering the contents of these items from the characters printed or handwritten on the blackboard shown in the image data is costly. In this research, we develop a system that extracts the contents of these items from construction photography taken in various scenes, by preprocessing the image, recognizing the characters with OCR, and correcting errors with natural language processing. We confirm the effectiveness of the system through experiments on each function and on the entire system.

  10. Multipurpose floating platform for hyperspectral imaging, sampling and sensing of surface water sources used in irrigation and recreation

    USDA-ARS?s Scientific Manuscript database

    The objective of this work was to design, construct, and test the self-propelled aquatic platform for imaging, multi-tier water sampling, water quality sensing, and depth profiling to document microbial content and environmental covariates in the interior of irrigation ponds and reservoirs. The plat...

  11. iScreen: Image-Based High-Content RNAi Screening Analysis Tools.

    PubMed

    Zhong, Rui; Dong, Xiaonan; Levine, Beth; Xie, Yang; Xiao, Guanghua

    2015-09-01

    High-throughput RNA interference (RNAi) screening has opened up a path to investigating functional genomics in a genome-wide pattern. However, such studies are often restricted to assays that have a single readout format. Recently, advanced image technologies have been coupled with high-throughput RNAi screening to develop high-content screening, in which one or more cell image(s), instead of a single readout, were generated from each well. This image-based high-content screening technology has led to genome-wide functional annotation in a wider spectrum of biological research studies, as well as in drug and target discovery, so that complex cellular phenotypes can be measured in a multiparametric format. Despite these advances, data analysis and visualization tools are still largely lacking for these types of experiments. Therefore, we developed iScreen (image-Based High-content RNAi Screening Analysis Tool), an R package for the statistical modeling and visualization of image-based high-content RNAi screening. Two case studies were used to demonstrate the capability and efficiency of the iScreen package. iScreen is available for download on CRAN (http://cran.cnr.berkeley.edu/web/packages/iScreen/index.html). The user manual is also available as a supplementary document. © 2014 Society for Laboratory Automation and Screening.

  12. "I Will Write to You with My Eyes": Reflective Text and Image Journals in the Undergraduate Classroom

    ERIC Educational Resources Information Center

    Hyland-Russell, Tara

    2014-01-01

    This article reports on a case study into students' perspectives on the use of "cahiers", reflective text and image journals. Narrative interviews and document analysis reveal that "cahiers" can be used effectively to engage students in course content and learning processes. Recent work in transformative learning…

  13. Ontology modularization to improve semantic medical image annotation.

    PubMed

    Wennerberg, Pinar; Schulz, Klaus; Buitelaar, Paul

    2011-02-01

    Searching for medical images and patient reports is a significant challenge in a clinical setting. The contents of such documents are often not described in sufficient detail, thus making it difficult to utilize the inherent wealth of information contained within them. Semantic image annotation addresses this problem by describing the contents of images and reports using medical ontologies. Medical images and patient reports are then linked to each other through common annotations. Subsequently, search algorithms can more effectively find related sets of documents on the basis of these semantic descriptions. A prerequisite to realizing such a semantic search engine is that the data contained within should have been previously annotated with concepts from medical ontologies. One major challenge in this regard is the size and complexity of medical ontologies as annotation sources. Manual annotation is particularly time-consuming and labor-intensive in a clinical environment. In this article we propose an approach to reducing the size of clinical ontologies for more efficient manual image and text annotation. More precisely, our goal is to identify smaller fragments of a large anatomy ontology that are relevant for annotating medical images from patients suffering from lymphoma. Our work is in the area of ontology modularization, which is a recent and active field of research. We describe our approach, methods and data set in detail and we discuss our results. Copyright © 2010 Elsevier Inc. All rights reserved.

  14. Document image binarization using "multi-scale" predefined filters

    NASA Astrophysics Data System (ADS)

    Saabni, Raid M.

    2018-04-01

    Reading text or searching for key words within a historical document is a very challenging task. One of the first steps of the complete task is binarization, where we separate foreground such as text, figures and drawings from the background. The success of this important step often determines whether the subsequent steps succeed or fail, so it is vital to the complete task of reading and analyzing the content of a document image. Generally, historical document images are of poor quality due to their storage conditions and degradation over time, which cause varying contrast, stains, dirt and ink seeping through from the reverse side. In this paper, we use banks of anisotropic predefined filters at different scales and orientations to develop a binarization method for degraded documents and manuscripts. Exploiting the fact that handwritten strokes may follow different scales and orientations, we use predefined sets of filter banks with various scales, weights, and orientations to seek a compact set of filters and weights that generate different layers of foreground and background. The results of convolving these filters locally on the gray-level image are weighted and accumulated to enhance the original image. Based on the different layers, seeds of components in the gray-level image, and a learning process, we present an improved binarization algorithm to separate the background from layers of foreground. Different layers of foreground, which may be caused by seeping ink, degradation or other factors, are also separated from the real foreground in a second phase. Promising experimental results were obtained on the DIBCO2011, DIBCO2013 and H-DIBCO2016 data sets, as well as on a collection of images taken from real historical documents.
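
    A rough sketch of the filter-bank idea (Python with scipy; the kernel construction, scales, and global threshold are assumptions, and the seed-based layering and learning steps are omitted):

      import numpy as np
      from scipy import ndimage

      def anisotropic_bank(scales=(1, 2, 4), angles=(0, 45, 90, 135)):
          bank = []
          for s in scales:
              k = np.zeros((8 * s + 1, 8 * s + 1))
              k[k.shape[0] // 2, :] = 1.0                 # horizontal ridge
              k = ndimage.gaussian_filter(k, sigma=(s, 3 * s))
              for a in angles:                            # one kernel per orientation
                  r = ndimage.rotate(k, a, reshape=False)
                  bank.append(r / r.sum())
          return bank

      def binarize(gray, weight=1.0):
          img = gray.astype(float)
          bank = anisotropic_bank()
          acc = sum(ndimage.convolve(img, f) for f in bank) / len(bank)
          enhanced = img - weight * acc      # strokes fall below their surround
          return enhanced < enhanced.mean()  # True = foreground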

  15. Corporate Social Responsibility programs of Big Food in Australia: a content analysis of industry documents.

    PubMed

    Richards, Zoe; Thomas, Samantha L; Randle, Melanie; Pettigrew, Simone

    2015-12-01

    To examine Corporate Social Responsibility (CSR) tactics by identifying the key characteristics of CSR strategies as described in the corporate documents of selected 'Big Food' companies. A mixed methods content analysis was used to analyse the information contained on Australian Big Food company websites. Data sources included company CSR reports and web-based content that related to CSR initiatives employed in Australia. A total of 256 CSR activities were identified across six organisations. Of these, the majority related to the categories of environment (30.5%), responsibility to consumers (25.0%) or community (19.5%). Big Food companies appear to be using CSR activities to: 1) build brand image through initiatives associated with the environment and responsibility to consumers; 2) target parents and children through community activities; and 3) align themselves with respected organisations and events in an effort to transfer their positive image attributes to their own brands. Results highlight the type of CSR strategies Big Food companies are employing. These findings serve as a guide to mapping and monitoring CSR as a specific form of marketing. © 2015 Public Health Association of Australia.

  16. An automatic indexing method for medical documents.

    PubMed Central

    Wagner, M. M.

    1991-01-01

    This paper describes MetaIndex, an automatic indexing program that creates symbolic representations of documents for the purpose of document retrieval. MetaIndex uses a simple transition network parser to recognize a language that is derived from the set of main concepts in the Unified Medical Language System Metathesaurus (Meta-1). MetaIndex uses a hierarchy of medical concepts, also derived from Meta-1, to represent the content of documents. The goal of this approach is to improve document retrieval performance by better representation of documents. An evaluation method is described, and the performance of MetaIndex on the task of indexing the Slice of Life medical image collection is reported. PMID:1807564

  17. Machine printed text and handwriting identification in noisy document images.

    PubMed

    Zheng, Yefeng; Li, Huiping; Doermann, David

    2004-03-01

    In this paper, we address the problem of the identification of text in noisy document images. We focus especially on segmenting and distinguishing between handwriting and machine-printed text because: 1) handwriting in a document often indicates corrections, additions, or other supplemental information that should be treated differently from the main content; and 2) the segmentation and recognition techniques required for machine-printed and handwritten text are significantly different. A novel aspect of our approach is that we treat noise as a separate class and model noise based on selected features. Trained Fisher classifiers are used to identify machine-printed text and handwriting from noise, and we further exploit context to refine the classification. A Markov Random Field (MRF)-based approach is used to model the geometrical structure of the printed text, handwriting, and noise to rectify misclassifications. Experimental results show that our approach is robust and can significantly improve page segmentation in noisy document collections.

  18. Advanced Cell Classifier: User-Friendly Machine-Learning-Based Software for Discovering Phenotypes in High-Content Imaging Data.

    PubMed

    Piccinini, Filippo; Balassa, Tamas; Szkalisity, Abel; Molnar, Csaba; Paavolainen, Lassi; Kujala, Kaisa; Buzas, Krisztina; Sarazova, Marie; Pietiainen, Vilja; Kutay, Ulrike; Smith, Kevin; Horvath, Peter

    2017-06-28

    High-content, imaging-based screens now routinely generate data on a scale that precludes manual verification and interrogation. Software applying machine learning has become an essential tool to automate analysis, but these methods require annotated examples to learn from. Efficiently exploring large datasets to find relevant examples remains a challenging bottleneck. Here, we present Advanced Cell Classifier (ACC), a graphical software package for phenotypic analysis that addresses these difficulties. ACC applies machine-learning and image-analysis methods to high-content data generated by large-scale, cell-based experiments. It features methods to mine microscopic image data, discover new phenotypes, and improve recognition performance. We demonstrate that these features substantially expedite the training process, successfully uncover rare phenotypes, and improve the accuracy of the analysis. ACC is extensively documented, designed to be user-friendly for researchers without machine-learning expertise, and distributed as a free open-source tool at www.cellclassifier.org. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Segmentation-driven compound document coding based on H.264/AVC-INTRA.

    PubMed

    Zaghetto, Alexandre; de Queiroz, Ricardo L

    2007-07-01

    In this paper, we explore H.264/AVC operating in intraframe mode to compress a mixed image, i.e., one composed of text, graphics, and pictures. Even though mixed-content (compound) documents usually require the use of multiple compressors, we apply a single compressor for both text and pictures. For that, distortion is treated differently in text and picture regions. Our approach is to use a segmentation-driven adaptation strategy to change the H.264/AVC quantization parameter on a macroblock-by-macroblock basis, i.e., we divert bits from pictorial regions to text in order to keep text edges sharp. We show results of the segmentation-driven quantizer adaptation method applied to compressed documents. Our reconstructed images have better text sharpness compared to straight unadapted coding, at negligible visual losses in pictorial regions. Our results also highlight the fact that H.264/AVC-INTRA outperforms coders such as JPEG-2000 as a single coder for compound images.
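
    The adaptation itself is a small mapping from a segmentation mask to per-macroblock quantization parameters; a sketch (Python/numpy; the QP values 24/34 and the 10% text-coverage rule are assumptions, not the paper's settings):

      import numpy as np

      def qp_map(text_mask, qp_text=24, qp_picture=34, mb=16):
          # lower QP (finer quantization) where the segmenter finds text
          rows, cols = text_mask.shape[0] // mb, text_mask.shape[1] // mb
          qps = np.empty((rows, cols), dtype=int)
          for r in range(rows):
              for c in range(cols):
                  block = text_mask[r*mb:(r+1)*mb, c*mb:(c+1)*mb]
                  qps[r, c] = qp_text if block.mean() > 0.1 else qp_picture
          return qps    # handed to the encoder macroblock by macroblock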

  20. Authenticity techniques for PACS images and records

    NASA Astrophysics Data System (ADS)

    Wong, Stephen T. C.; Abundo, Marco; Huang, H. K.

    1995-05-01

    Along with the digital radiology environment supported by picture archiving and communication systems (PACS) comes a new problem: how to establish trust in multimedia medical data that exist only in the easily altered memory of a computer. Trust is characterized in terms of the integrity and privacy of digital data. Two major self-enforcing techniques can be used to assure the authenticity of electronic images and text: key-based cryptography and digital time stamping. Key-based cryptography associates the content of an image with the originator using one or two distinct keys and prevents alteration of the document by anyone other than the originator. A digital time stamping algorithm generates a characteristic 'digital fingerprint' for the original document using a mathematical hash function, and checks that it has not been modified. This paper discusses these cryptographic algorithms and their appropriateness for a PACS environment. It also presents experimental results of cryptographic algorithms on several imaging modalities.
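
    Both techniques rest on standard primitives; a minimal sketch (Python standard library; SHA-256 and HMAC are modern stand-ins for whatever hash and keyed scheme a deployment would choose, and the byte strings are placeholders):

      import hashlib, hmac

      def fingerprint(document_bytes):
          # characteristic digest; any later alteration changes it
          return hashlib.sha256(document_bytes).hexdigest()

      def keyed_seal(document_bytes, originator_key):
          # key-based integrity tag binding content to the originator
          return hmac.new(originator_key, document_bytes, hashlib.sha256).hexdigest()

      record = b"CT slice pixel data + diagnostic report text"
      stored = fingerprint(record)
      seal = keyed_seal(record, b"originator-secret-key")
      assert fingerprint(record) == stored      # verify integrity at read time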

  1. Scanning technology selection impacts acceptability and usefulness of image-rich content.

    PubMed

    Alpi, Kristine M; Brown, James C; Neel, Jennifer A; Grindem, Carol B; Linder, Keith E; Harper, James B

    2016-01-01

    The clinical and research usefulness of articles can depend on image quality. This study addressed whether scans of figures in black and white (B&W), grayscale, or color, or portable document format (PDF) to tagged image file format (TIFF) conversions, as provided by interlibrary loan or document delivery, were viewed as acceptable or useful by radiologists or pathologists. Residency coordinators selected eighteen figures from studies in radiology, clinical pathology, and anatomic pathology journals. With the original PDFs as controls, each figure was prepared in three or four experimental conditions: PDF conversion to TIFF, and scans from print in B&W, grayscale, and color. Twelve independent observers indicated whether they could identify the features and whether the image quality was acceptable. They also ranked all the experimental conditions of each figure in terms of usefulness. Of 982 assessments of 87 anatomic pathology, 83 clinical pathology, and 77 radiology images, 471 (48%) were unidentifiable. Unidentifiability of originals (4%) and conversions (10%) was low. For scans, unidentifiability ranged from 53% for color, to 74% for grayscale, to 97% for B&W. Of 987 responses about acceptability, 41% (n=405) rated the image unacceptable: 97% of B&W, 66% of grayscale, 41% of color, and 1% of conversions. The hypothesized order (original, conversion, color, grayscale, B&W) matched 67% of rankings (n=215). PDF-to-TIFF conversion provided acceptable content. Color images are rarely useful in grayscale (12%) or B&W (less than 1%). Acceptability of grayscale scans of non-color originals was 52%. Digital originals are needed for most images. Print images in color or grayscale should be scanned using those modalities.

  2. Visual content highlighting via automatic extraction of embedded captions on MPEG compressed video

    NASA Astrophysics Data System (ADS)

    Yeo, Boon-Lock; Liu, Bede

    1996-03-01

    Embedded captions in TV programs such as news broadcasts, documentaries and coverage of sports events provide important information on the underlying events. In digital video libraries, such captions represent a highly condensed form of key information on the contents of the video. In this paper we propose a scheme to automatically detect the presence of captions embedded in video frames. The proposed method operates on reduced image sequences which are efficiently reconstructed from compressed MPEG video and thus does not require full frame decompression. The detection, extraction and analysis of embedded captions help to capture the highlights of visual contents in video documents for better organization of video, to present succinctly the important messages embedded in the images, and to facilitate browsing, searching and retrieval of relevant clips.

  3. Turning "smoking man" images around: portrayals of smoking in men's magazines as a blueprint for smoking cessation campaigns.

    PubMed

    Dutta, Mohan J; Boyd, Josh

    2007-01-01

    Published scholarship documents the prevalence and health risks of smoking among men. There is also a rich tradition of studying the normative influences of the media in constructing and propagating images of healthy/unhealthy behaviors such as smoking. To understand the construction of these media-propagated smoking images toward male audiences, this article studies all advertising and editorial content of 3 major men's magazines for 2001 using rhetorical and content analyses. The emergent themes construct the smoking man as sensual, in another place, independent, and mysterious. The authors recommend turning around these themes of the masculine "smoking man" for the purpose of strategic media planning and developing message-targeting guidelines for smoking cessation and prevention messages directed at men.

  4. Bridging the integration gap between imaging and information systems: a uniform data concept for content-based image retrieval in computer-aided diagnosis.

    PubMed

    Welter, Petra; Riesmeier, Jörg; Fischer, Benedikt; Grouls, Christoph; Kuhl, Christiane; Deserno, Thomas M

    2011-01-01

    It is widely accepted that content-based image retrieval (CBIR) can be extremely useful for computer-aided diagnosis (CAD). However, CBIR has not yet been established in clinical practice. A widely unattended integration gap is the lack of a unified data concept for CBIR-based CAD results and reporting. Picture archiving and communication systems and the workflow of radiologists must be considered for successful data integration to be achieved. We suggest that CBIR systems applied to CAD should integrate their results into the picture archiving and communication system environment, for example as Digital Imaging and Communications in Medicine (DICOM) structured reporting documents. A sample DICOM structured reporting template adaptable to CBIR and an appropriate integration scheme are presented. The proposed CBIR data concept may foster the promulgation of CBIR systems in clinical environments and, thereby, improve the diagnostic process.
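
    To make the proposed integration concrete, the following Python sketch builds a skeletal DICOM structured reporting object carrying one CBIR result, using the pydicom library (our choice, not the paper's); it is illustrative only and omits many attributes a conformant Basic Text SR would require.

        from datetime import datetime
        from pydicom.dataset import Dataset
        from pydicom.uid import generate_uid

        sr = Dataset()
        sr.SOPClassUID = "1.2.840.10008.5.1.4.1.1.88.11"  # Basic Text SR storage
        sr.SOPInstanceUID = generate_uid()
        sr.Modality = "SR"
        sr.ContentDate = datetime.now().strftime("%Y%m%d")
        sr.ValueType = "CONTAINER"

        finding = Dataset()
        finding.RelationshipType = "CONTAINS"
        finding.ValueType = "TEXT"
        name = Dataset()
        name.CodeValue = "121071"            # DCM "Finding"
        name.CodingSchemeDesignator = "DCM"
        name.CodeMeaning = "Finding"
        finding.ConceptNameCodeSequence = [name]
        finding.TextValue = "CBIR: 5 similar cases retrieved, top similarity 0.91"
        sr.ContentSequence = [finding]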

  5. Bridging the integration gap between imaging and information systems: a uniform data concept for content-based image retrieval in computer-aided diagnosis

    PubMed Central

    Welter, Petra; Riesmeier, Jörg; Fischer, Benedikt; Grouls, Christoph; Kuhl, Christiane; Deserno (né Lehmann), Thomas M

    2011-01-01

    It is widely accepted that content-based image retrieval (CBIR) can be extremely useful for computer-aided diagnosis (CAD). However, CBIR has not yet been established in clinical practice. A widely unattended integration gap is the lack of a unified data concept for CBIR-based CAD results and reporting. Picture archiving and communication systems and the workflow of radiologists must be considered for successful data integration to be achieved. We suggest that CBIR systems applied to CAD should integrate their results into the picture archiving and communication system environment, for example as Digital Imaging and Communications in Medicine (DICOM) structured reporting documents. A sample DICOM structured reporting template adaptable to CBIR and an appropriate integration scheme are presented. The proposed CBIR data concept may foster the promulgation of CBIR systems in clinical environments and, thereby, improve the diagnostic process. PMID:21672913

  6. A Picture is Worth 1,000 Words. The Use of Clinical Images in Electronic Medical Records.

    PubMed

    Ai, Angela C; Maloney, Francine L; Hickman, Thu-Trang; Wilcox, Allison R; Ramelson, Harley; Wright, Adam

    2017-07-12

    To understand how clinicians utilize image-uploading tools in a home-grown electronic health record (EHR) system, a content analysis of patient notes containing non-radiological images from the EHR was conducted. Images from 4,000 random notes from July 1, 2009 - June 30, 2010 were reviewed and manually coded. Codes were assigned to four properties of the image: (1) image type, (2) role of image uploader (e.g., MD, NP, PA, RN), (3) practice type (e.g., internal medicine, dermatology, ophthalmology), and (4) image subject. In total, 3,815 images from image-containing notes stored in the EHR were reviewed and manually coded. Of those images, 32.8% were clinical and 66.2% were non-clinical. The most common types of clinical images were photographs (38.0%), diagrams (19.1%), and scanned documents (14.4%). MDs uploaded 67.9% of clinical images, followed by RNs with 10.2% and genetic counselors with 6.8%. Dermatology (34.9%), ophthalmology (16.1%), and general surgery (10.8%) uploaded the most clinical images. The content of clinical images referencing body parts varied, with 49.8% of those images focusing on the head and neck region, 15.3% on the thorax, and 13.8% on the lower extremities. The diversity of image types, content, and uploaders within a home-grown EHR system reflected the versatility and importance of the image-uploading tool. Understanding how users utilize image-uploading tools in a clinical setting highlights important considerations for designing better EHR tools and underscores the importance of interoperability between EHR systems and other health technology.

  7. Registering parameters and granules of wave observations: IMAGE RPI success story

    NASA Astrophysics Data System (ADS)

    Galkin, I. A.; Charisi, A.; Fung, S. F.; Benson, R. F.; Reinisch, B. W.

    2015-12-01

    Modern metadata systems strive to help scientists locate data relevant to their research and then retrieve them quickly. Success of this mission depends on the organization and completeness of metadata. Each relevant data resource has to be registered; the content of each resource has to be described; each data file has to be accessible. Ultimately, data discoverability is about the practical ability to describe data content and location. Correspondingly, data registration has a "Parameter" level, at which content is specified by listing the available observed properties (parameters), and a "Granule" level, at which download links are given to data records (granules). Until recently, both parameter- and granule-level data registrations were accomplished easily at NASA virtual observatory systems by listing the provided parameters and building Granule documents with URLs to the data file locations, usually those at the NASA CDAWeb data warehouse. With the introduction of the Virtual Wave Observatory (VWO), however, the parameter/granule concept faced a scalability challenge. The wave phenomenon content is rich with descriptors of the wave generation, propagation, interaction with propagation media, and observation processes. Additionally, the wave phenomenon content varies from record to record, reflecting changes in the constituent processes, making it necessary to generate granule documents at sub-minute resolution. We will present the first success story of registering 234,178 records of IMAGE Radio Plasma Imager (RPI) plasmagram data and Level 2 derived data products in ESPAS (near-Earth Space Data Infrastructure for e-Science), using the VWO-inspired wave ontology. The granules are arranged in overlapping display and numerical data collections. Display data include (a) auto-prospected plasmagrams of potential interest, (b) interesting plasmagrams annotated by human analysts or software, and (c) spectacular plasmagrams annotated by analysts as publication-quality examples of RPI science. Numerical data products include plasmagram-derived records containing signatures of local and remote signal propagation, as well as field-aligned profiles of electron density in the plasmasphere. Registered granules of RPI observations are available in ESPAS for content-targeted search and retrieval.

  8. Trends in Library and Information Science: 1989. ERIC Digest.

    ERIC Educational Resources Information Center

    Eisenberg, Michael B.

    Based on a content analysis of professional journals, conference proceedings, ERIC documents, annuals, and dissertations in library and information science, the following current trends in the field are discussed: (1) there are important emerging roles and responsibilities for information professionals; (2) the status and image of librarians…

  9. Recommending images of user interests from the biomedical literature

    NASA Astrophysics Data System (ADS)

    Clukey, Steven; Xu, Songhua

    2013-03-01

    Every year hundreds of thousands of biomedical images are published in journals and conferences. Consequently, finding images relevant to one's interests becomes an ever more daunting task. This vast amount of literature creates a need for intelligent and easy-to-use tools that can help researchers effectively navigate through the content corpus and conveniently locate materials of interest. Traditionally, literature search tools allow users to query content using topic keywords. However, manual query composition is often time- and energy-consuming. A better system would be one that can automatically deliver relevant content to a researcher without requiring the end user to manually express search intent and interests via search queries. Such computer-aided assistance for information access can be provided by a system that first determines a researcher's interests automatically and then recommends images relevant to those interests accordingly. The technology can greatly improve a researcher's ability to stay up to date in their field of study by allowing them to efficiently browse images and documents matching their needs and interests amid the vast biomedical literature. A prototype system implementation of the technology can be accessed via http://www.smartdataware.com.

  10. Building Structured Personal Health Records from Photographs of Printed Medical Records.

    PubMed

    Li, Xiang; Hu, Gang; Teng, Xiaofei; Xie, Guotong

    2015-01-01

    Personal health records (PHRs) provide patient-centric healthcare by making health records accessible to patients. In China, it is very difficult for individuals to access electronic health records. Instead, individuals can easily obtain printed copies of their own medical records, such as prescriptions and lab test reports, from hospitals. In this paper, we propose a practical approach to extract structured data from printed medical records photographed with mobile phones. An optical character recognition (OCR) pipeline is performed to recognize text in a document photo; it addresses the problems of low image quality and content complexity through image pre-processing and multiple-OCR-engine synthesis. A series of annotation algorithms that support flexible layouts are then used to identify the document type, entities of interest, and entity correlations, from which a structured PHR document is built. The proposed approach was applied to real-world medical records, demonstrating its effectiveness and applicability.
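
    As an illustration of such a pipeline's front end, the Python sketch below pairs OpenCV pre-processing with a single OCR engine (pytesseract) and a toy regex annotator; the paper's multi-engine synthesis and layout-aware annotation algorithms are not reproduced, and the file name is hypothetical.

        import re
        import cv2
        import pytesseract

        def preprocess(path):
            """Basic cleanup for a phone photo of a printed record: grayscale,
            denoise, and adaptive threshold to cope with uneven lighting."""
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            gray = cv2.medianBlur(gray, 3)
            return cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                         cv2.THRESH_BINARY, 31, 10)

        def extract_entities(text):
            """Toy annotation: pull lab-test lines such as 'Glucose 5.4 mmol/L'."""
            pattern = re.compile(r"([A-Za-z ]+?)\s+(\d+(?:\.\d+)?)\s*(\S*/L|%)?")
            return [m.groups() for m in pattern.finditer(text)]

        text = pytesseract.image_to_string(preprocess("report.jpg"))
        print(extract_entities(text))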

  11. Building Structured Personal Health Records from Photographs of Printed Medical Records

    PubMed Central

    Li, Xiang; Hu, Gang; Teng, Xiaofei; Xie, Guotong

    2015-01-01

    Personal health records (PHRs) provide patient-centric healthcare by making health records accessible to patients. In China, it is very difficult for individuals to access electronic health records. Instead, individuals can easily obtain printed copies of their own medical records, such as prescriptions and lab test reports, from hospitals. In this paper, we propose a practical approach to extract structured data from printed medical records photographed with mobile phones. An optical character recognition (OCR) pipeline is performed to recognize text in a document photo; it addresses the problems of low image quality and content complexity through image pre-processing and multiple-OCR-engine synthesis. A series of annotation algorithms that support flexible layouts are then used to identify the document type, entities of interest, and entity correlations, from which a structured PHR document is built. The proposed approach was applied to real-world medical records, demonstrating its effectiveness and applicability. PMID:26958219

  12. Scanning technology selection impacts acceptability and usefulness of image-rich content

    PubMed Central

    Alpi, Kristine M.; Brown, James C.; Neel, Jennifer A.; Grindem, Carol B.; Linder, Keith E.; Harper, James B.

    2016-01-01

    Objective Clinical and research usefulness of articles can depend on image quality. This study addressed whether scans of figures in black and white (B&W), grayscale, or color, or portable document format (PDF) to tagged image file format (TIFF) conversions, as provided by interlibrary loan or document delivery, were viewed as acceptable or useful by radiologists or pathologists. Methods Residency coordinators selected eighteen figures from studies from radiology, clinical pathology, and anatomic pathology journals. With the original PDFs as controls, each figure was prepared in three or four experimental conditions: PDF conversion to TIFF, and scans from print in B&W, grayscale, and color. Twelve independent observers indicated whether they could identify the features and whether the image quality was acceptable. They also ranked all the experimental conditions of each figure in terms of usefulness. Results Of 982 assessments of 87 anatomic pathology, 83 clinical pathology, and 77 radiology images, 471 (48%) were unidentifiable. Unidentifiability of originals (4%) and conversions (10%) was low. For scans, unidentifiability ranged from 53% for color, to 74% for grayscale, to 97% for B&W. Of 987 responses about acceptability, 41% (n=405) were deemed unacceptable: 97% of B&W, 66% of grayscale, 41% of color, and 1% of conversions. The hypothesized order (original, conversion, color, grayscale, B&W) matched 67% (n=215) of rankings. Conclusions PDF to TIFF conversion provided acceptable content. Color images are rarely useful in grayscale (12%) or B&W (less than 1%). Acceptability of grayscale scans of noncolor originals was 52%. Digital originals are needed for most images. Print images in color or grayscale should be scanned using those modalities. PMID:26807048

  13. Knowledge and Valorization of Historical Sites Through 3d Documentation and Modeling

    NASA Astrophysics Data System (ADS)

    Farella, E.; Menna, F.; Nocerino, E.; Morabito, D.; Remondino, F.; Campi, M.

    2016-06-01

    The paper presents the first results of an interdisciplinary project on the 3D documentation, dissemination, valorization and digital access of archaeological sites. Besides the mere 3D documentation aim, the project has two goals: (i) to easily explore and share, via the web, the references and results of the interdisciplinary work, including the interpretative process and the final reconstruction of the remains; (ii) to promote and valorize archaeological areas using reality-based 3D data and Virtual Reality devices. The method has been verified on the ruins of the archaeological site of Pausilypon, a maritime villa of the Roman period (Naples, Italy). Using Unity3D, the virtual tour of the heritage site was integrated and enriched with the surveyed 3D data, text documents, CAAD reconstruction hypotheses, drawings, photos, etc. In this way, starting from the actual appearance of the ruins (panoramic images) and passing through the 3D digital survey models and other historical information, the user is able to access virtual contents and reconstructed scenarios, all in a single virtual, interactive and immersive environment. These contents and scenarios allow the user to derive documentation and geometric information, understand the site, perform analyses, follow interpretative processes, communicate historical information and valorize the heritage location.

  14. Content Documents Management

    NASA Technical Reports Server (NTRS)

    Muniz, R.; Hochstadt, J.; Boelke, J.; Dalton, A.

    2011-01-01

    The Content Documents are created and managed under the System Software group within the Launch Control System (LCS) project. The System Software product group is led by the NASA Engineering Control and Data Systems branch (NEC3) at Kennedy Space Center. The team is working on creating Operating System Images (OSI) for different platforms (i.e., AIX, Linux, Solaris and Windows). Before an OSI can be created, the team must create a Content Document, which provides the information for a workstation or server, the list of all the software to be installed on it, and the set to which the hardware belongs; this can be, for example, the LDS, the ADS or the FR-1. The objective of this project is to create a user-interface Web application that can manage the information in the Content Documents, with all the correct validations and filters for administrative purposes. For this project we used one of the most effective tools for agile Web development, Ruby on Rails. This tool helps pragmatic programmers develop Web applications with the Rails framework and the Ruby programming language. It is amazing to see how a student can learn about OOP features with the Ruby language, manage the user interface with HTML and CSS, create associations and queries with gems, manage databases and run a server with MySQL, run shell commands from the command prompt, and create Web frameworks with Rails. All of this in a real-world project and in just fifteen weeks!

  15. Uses of Metaphors & Imagery in Counseling. Instructor's Manual.

    ERIC Educational Resources Information Center

    Gladding, Samuel T.

    This document presents an instructor's manual designed to accompany the videotape, "Uses of Metaphors and Imagery in Counseling," a tool to teach beginning and experienced counselors how to more efficiently help their clients by focusing on the use of non-literal language and thoughts (i.e., metaphors and images). The format and content of the…

  16. A Content Analysis of Images of Novice Teacher Induction: First-Semester Themes

    ERIC Educational Resources Information Center

    Curry, Jennifer R.; Webb, Angela W.; Latham, Samantha J.

    2016-01-01

    The powerful nature of novice teachers' experiences in their first years of teaching has been well documented. However, the variance in novices' initial immersion in the school environment is largely dependent on perceived personal and professional support as well as the environmental inducements that lend to novice teachers' success in the…

  17. STS Case Study Development Support

    NASA Technical Reports Server (NTRS)

    Rosa de Jesus, Dan A.; Johnson, Grace K.

    2013-01-01

    The Shuttle Case Study Collection (SCSC) has been developed using lessons learned documented by NASA engineers, analysts, and contractors. The SCSC provides educators with a new tool to teach real-world engineering processes, with the goal of providing unique educational materials that enhance critical thinking, decision-making and problem-solving skills. During this third phase of the project, responsibilities included revising the Hyper Text Markup Language (HTML) source code to ensure all pages follow World Wide Web Consortium (W3C) standards, and adding and editing website content, including text, documents, and images. Basic HTML knowledge was required, as was basic knowledge of photo-editing software and training in NASA's Content Management System for website design. The outcome of this project was its release to the public.

  18. Internet printing

    NASA Astrophysics Data System (ADS)

    Rahgozar, M. Armon; Hastings, Tom; McCue, Daniel L.

    1997-04-01

    The Internet is rapidly changing the traditional means of creation, distribution and retrieval of information. Today, information publishers leverage the capabilities provided by Internet technologies to rapidly communicate information to a much wider audience in unique, customized ways. As a result, the volume of published content has been increasing astronomically. This, in addition to the ease of distribution afforded by the Internet, has resulted in more and more documents being printed. This paper introduces several axes along which Internet printing may be examined and addresses some of the technological challenges that lie ahead. These axes include: (1) submission--the use of Internet protocols for selecting printers and submitting documents for print; (2) administration--the management and monitoring of printing engines and other print resources via Web pages; and (3) formats--printing document formats whose spectrum now includes HTML documents with simple text, layout-enhanced documents with style sheets, documents that contain audio, graphics and other active objects, as well as the existing desktop and PDL formats. The format axis of Internet printing becomes even more interesting when one considers that Web documents are inherently compound, and traversal into their various pieces may uncover various formats. The paper also examines some imaging-specific issues that are paramount to Internet printing: formats and structures for representing raster documents and images, compression, font rendering and color spaces.

  19. The depiction of medical education in medical school catalogs.

    PubMed

    Kohn, M; Wear, D

    1994-01-01

    Medical educators bear responsibility for the informational materials that their institutions use to communicate with potential applicants. These documents, because they are often the first official correspondence that prospective students receive, may be influential in shaping students' expectations. In March 1990 all North American medical schools that awarded MD or DO degrees were requested to send their catalogs and courses of study to the authors. In response came 175 documents, with nearly all the schools represented at least once. The photographs and other visual images in these documents were then analyzed from the perspective of a hypothetical applicant who perused what his or her initial request for information had produced. Nearly 3,400 images were analyzed and categorized according to content and stylistic approach. Two basic stylistic approaches were found: stylized and documentary. Few documents used exclusively one or the other approach, as the approaches represent poles along a continuum. The stylized approach portrays medical education as a product to be sold, whereas the documentary approach candidly tells the story of medical education. The authors conclude that the documentary approach is a more morally responsible way for schools to communicate with individuals who are in the beginning stages of building their mental images of medical education and medical care.

  20. Globe Teachers Guide and Photographic Data on the Web

    NASA Technical Reports Server (NTRS)

    Kowal, Dan

    2004-01-01

    The task of managing the GLOBE Online Teacher's Guide during this period focused on transforming the technology behind the document's delivery system. The web application was transformed from a flat-file retrieval system to a dynamic database-access approach. The new methodology utilizes Java Server Pages (JSP) on the front end and an Oracle relational database on the back end. This new approach allows users of the web site, mainly teachers, to access content efficiently by grade level and/or by investigation or educational concept area. Moreover, teachers can gain easier access to data sheets and lab and field guides. The new online guide also includes updated content for all GLOBE protocols. The GLOBE web management team was given documentation for maintaining the new application; instructions for modifying the JSP templates and managing database content were included, and the documentation was delivered to the team by the end of October 2003. The National Geophysical Data Center (NGDC) continued to manage the school study-site photos on the GLOBE website: 333 study-site photo images for 64 schools were added to the GLOBE database and posted on the web during this same period. Documentation for processing study-site photos was also delivered to the new GLOBE web management team. Lastly, assistance was provided in transferring reference applications such as the Cloud and Landsat quizzes and the Earth Systems Online Poster from NGDC servers to GLOBE servers, along with documentation for maintaining these applications.

  1. Ubiquitous picture-rich content representation

    NASA Astrophysics Data System (ADS)

    Wang, Wiley; Dean, Jennifer; Muzzolini, Russ

    2010-02-01

    The number of digital images taken by the average consumer is consistently increasing. People enjoy the convenience of storing and sharing their pictures through online (digital) and offline (traditional) media. A set of pictures can be uploaded to online photo services, web blogs and social network websites. Alternatively, these images can be used to generate prints, cards, photo books or other photo products. Through uploading and sharing, images are easily transferred from one format to another, and often a different set of associated content (text, tags) is created in each format. For example, on his web blog, a user may journal the experiences of his recent travel; on his social network website, his friends tag and comment on the pictures; in his online photo album, some pictures are titled and keyword-tagged. When the user wants to tell a complete story, perhaps in a photo book, he must collect the pictures, writings, comments, etc. across all formats and organize them in a book format. The user has to arrange the content of his trip in each format; the arrangement, the associations between the images, tags, keywords and text, cannot be shared with other formats. In this paper, we propose a system that allows content to be easily created and shared across various digital media formats. We define a unified data association structure to connect images, documents, comments, tags, keywords and other data. This content structure allows the user to switch representation formats without re-editing. The framework for each format can emphasize (display or hide) content elements based on preference: for example, a slide-show view will emphasize the display of pictures with limited text; a blog view will display highlighted images and journal text; and the photo book will try to fit in all image and text content. We discuss the strategy of associating pictures with text content so that they naturally tell a story, and we list sample solutions for different formats such as picture view, blog view and photo book view.
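
    A minimal Python sketch of what such a unified association structure might look like follows; the class and field names are our own invention, since the paper does not publish a schema.

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class ContentItem:
            kind: str                 # "image", "text", "comment", "tag", ...
            value: str                # file path, body text, or tag string
            links: List["ContentItem"] = field(default_factory=list)

        @dataclass
        class Story:
            items: List[ContentItem] = field(default_factory=list)

            def render(self, view: str):
                """Each view emphasizes different elements of one structure."""
                if view == "slideshow":
                    return [i for i in self.items if i.kind == "image"]
                if view == "blog":
                    return [i for i in self.items if i.kind in ("image", "text")]
                return self.items  # photo book: everything

        photo = ContentItem("image", "beach.jpg",
                            links=[ContentItem("tag", "vacation")])
        story = Story([photo, ContentItem("text", "Day 1: we reached the coast.")])
        print([i.value for i in story.render("slideshow")])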

  2. Web Prep: How to Prepare NAS Reports For Publication on the Web

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela; Balakrishnan, Prithika; Clucas, Jean; McCabe, R. Kevin; Felchle, Gail; Brickell, Cristy

    1996-01-01

    This document contains specific advice and requirements for NASA Ames Code IN authors of NAS reports; much of the information may be of interest to other authors writing for the Web. WebPrep has a graphic Table of Contents in the form of a WebToon, which simulates a discussion between a scientist and a Web publishing consultant. In the WebToon, Frequently Asked Questions about preparing reports for the Web are linked to relevant text in the body of this document. We also provide a text-only Table of Contents. The text of this document is divided into chapters; each chapter corresponds to one frame of the WebToon. The chapter topics are: converting text to HTML, converting 2D graphic images to GIF, creating imagemaps and tables, converting movie and audio files to Web formats, supplying 3D interactive data, and (briefly) Java capabilities. The last chapter is specifically for NAS staff authors. The Glossary-Index lists web-related words and links to topics covered in the main text.

  3. The Precise and Efficient Identification of Medical Order Forms Using Shape Trees

    NASA Astrophysics Data System (ADS)

    Henker, Uwe; Petersohn, Uwe; Ultsch, Alfred

    A powerful and flexible technique to identify, classify and process documents using images from a scanning process is presented. The types of documents can be described to the system as sets of differentiating features in a case base using shape trees. The features are filtered and abstracted from an extremely reduced scanner image of the document. Classification rules are stored with the cases to enable precise recognition and subsequent mark reading and Optical Character Recognition (OCR) processing. The method is implemented in a system that currently processes the majority of requests for medical lab procedures in Germany. A large practical experiment with data from practitioners was performed: an average of 97% of the forms were correctly identified, and none were identified incorrectly. This meets the quality requirements for most medical applications. The modular description of the recognition process allows for flexible adaptation to future changes in the form and content of the document structures.
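
    The following Python sketch conveys the flavor of case-based form identification on an extremely reduced scan. A coarse ink-density grid stands in for the paper's shape-tree features (an assumption on our part), and the rejection threshold mirrors the paper's requirement that no form be identified incorrectly.

        import cv2
        import numpy as np

        def form_signature(path, size=(64, 64)):
            """Feature vector from a heavily reduced scan: binarize ink and
            pool it into an 8x8 density grid (stand-in for shape trees)."""
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            small = cv2.resize(gray, size, interpolation=cv2.INTER_AREA)
            ink = (small < 128).astype(np.float32)
            return ink.reshape(8, 8, 8, 8).mean(axis=(1, 3)).ravel()

        def identify(query_sig, case_base, reject=0.15):
            """Nearest case wins; distances above `reject` return None so that
            an unknown form is never identified incorrectly."""
            name, sig = min(case_base.items(),
                            key=lambda kv: np.linalg.norm(kv[1] - query_sig))
            return name if np.linalg.norm(sig - query_sig) < reject else None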

  4. Clementine High Resolution Camera Mosaicking Project. Volume 14; CL 6014; 0 deg N to 80 deg N Latitude, 270 deg E to 300 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U.S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap, which approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 0 to 80 degrees North) within the longitude range 270-300 deg E. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. For more information on the contents and organization of the CD volume set, refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".
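
    As a worked example of the stated map geometry, the Python sketch below converts lunar coordinates to pixel offsets under a sinusoidal equal-area projection at 20 m/pixel; the mean lunar radius of 1737.4 km is our assumption, and the authoritative projection parameters are those recorded in each tile's PDS label.

        import math

        MOON_RADIUS_M = 1_737_400.0   # assumed mean lunar radius; PDS labels are authoritative
        SCALE_M_PER_PX = 20.0         # stated mosaic scale

        def sinusoidal_to_pixel(lat_deg, lon_deg, lon0_deg):
            """Pixel offsets from the projection origin for a sinusoidal
            equal-area projection centered on meridian lon0_deg."""
            lat = math.radians(lat_deg)
            x_m = MOON_RADIUS_M * math.radians(lon_deg - lon0_deg) * math.cos(lat)
            y_m = MOON_RADIUS_M * lat
            return x_m / SCALE_M_PER_PX, y_m / SCALE_M_PER_PX

        # One 1.75-degree tile spans roughly 2,653 pixel rows at this scale.
        print(sinusoidal_to_pixel(40.0, 285.5, 285.0))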

  5. Clementine High Resolution Camera Mosaicking Project. Volume 17; CL 6017; 0 deg to 80 deg S Latitude, 330 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U.S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap, which approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 0 to 80 degrees South) within the longitude range indicated in the volume title. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. For more information on the contents and organization of the CD volume set, refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  6. Clementine High Resolution Camera Mosaicking Project. Volume 15; CL 6015; 0 deg S to 80 deg S Latitude, 270 deg E to 300 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U.S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap, which approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 0 to 80 degrees South) within the longitude range 270-300 deg E. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. For more information on the contents and organization of the CD volume set, refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  7. Clementine High Resolution Camera Mosaicking Project. Volume 13; CL 6013; 0 deg S to 80 deg S Latitude, 240 deg to 270 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U.S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap, which approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 0 to 80 degrees South) within the longitude range 240-270 deg E. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. For more information on the contents and organization of the CD volume set, refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  8. Clementine High Resolution Camera Mosaicking Project. Volume 18; CL 6018; 80 deg N to 80 deg S Latitude, 330 deg E to 360 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U.S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap, which approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 80 degrees South to 80 degrees North; -80 to +80) within the longitude range 330-360 deg E. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. For more information on the contents and organization of the CD volume set, refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  9. Clementine High Resolution Camera Mosaicking Project. Volume 12; CL 6012; 0 deg N to 80 deg N Latitude, 240 deg to 270 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U.S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap, which approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 0 to 80 degrees North) within the longitude range 240-270 deg E. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. For more information on the contents and organization of the CD volume set, refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  10. Clementine High Resolution Camera Mosaicking Project. Volume 10; CL 6010; 0 deg N to 80 deg N Latitude, 210 deg E to 240 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U.S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap, which approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 0 to 80 degrees North) within the longitude range 210-240 deg E. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. For more information on the contents and organization of the CD volume set, refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  11. Clementine High Resolution Camera Mosaicking Project. Volume 16; CL 6016; 0 deg N to 80 deg N Latitude, 300 deg E to 330 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U.S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap, which approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 0 to 80 degrees North) within the longitude range 300-330 deg E. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. For more information on the contents and organization of the CD volume set, refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  12. Interactive publications: creation and usage

    NASA Astrophysics Data System (ADS)

    Thoma, George R.; Ford, Glenn; Chung, Michael; Vasudevan, Kirankumar; Antani, Sameer

    2006-02-01

    As envisioned here, an "interactive publication" has similarities to the multimedia documents that have been in existence for a decade or more, but it possesses specific differentiating characteristics. In common usage, the latter refers to online entities that, in addition to text, consist of files of images and video clips residing separately in databases, rarely providing immediate context to the document text. While an interactive publication has many media objects, as does the "traditional" multimedia document, it is a self-contained document, either as a single file with media files embedded within it or as a "folder" containing tightly linked media files. The main characteristic that differentiates an interactive publication from a traditional multimedia document is that the reader is able to reuse the media content for analysis and presentation, and to check the underlying data and possibly derive alternative conclusions, leading, for example, to more in-depth peer reviews. We have created prototype publications containing paginated text and several media types encountered in the biomedical literature: 3D animations of anatomic structures; graphs, charts and tabular data; cell development images (video sequences); and clinical images such as CT, MRI and ultrasound in the DICOM format. This paper presents developments to date, including a tool to convert static tables or graphs into interactive entities, the authoring procedures followed to create the prototypes, and the advantages and drawbacks of each platform. It also outlines future work, including meeting the challenge of network distribution for these large files.
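
    As a toy analogue of the table-conversion tool mentioned above, the Python sketch below turns a static CSV table into an HTML table whose header cells sort their columns on click; this is our illustration, not the authors' tool, and the file names are hypothetical.

        import csv
        import html

        def table_to_html(csv_path, out_path="table.html"):
            """Emit a minimally interactive HTML table from a static CSV:
            clicking a header sorts by that column (inline JS, no deps)."""
            with open(csv_path, newline="") as f:
                rows = list(csv.reader(f))
            head = "".join(f"<th onclick='sort({i})'>{html.escape(c)}</th>"
                           for i, c in enumerate(rows[0]))
            body = "".join("<tr>" + "".join(f"<td>{html.escape(c)}</td>"
                           for c in r) + "</tr>" for r in rows[1:])
            script = ("<script>function sort(i){const t=document.querySelector('tbody');"
                      "[...t.rows].sort((a,b)=>a.cells[i].innerText.localeCompare("
                      "b.cells[i].innerText,undefined,{numeric:true}))"
                      ".forEach(r=>t.appendChild(r));}</script>")
            with open(out_path, "w") as f:
                f.write(f"<table><thead><tr>{head}</tr></thead>"
                        f"<tbody>{body}</tbody></table>{script}")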

  13. Web image retrieval using an effective topic and content-based technique

    NASA Astrophysics Data System (ADS)

    Lee, Ching-Cheng; Prabhakara, Rashmi

    2005-03-01

    There has been an exponential growth in the amount of image data available on the World Wide Web since the early development of the Internet. With such a large amount of image data available, and given its usefulness, an effective image retrieval system is greatly needed. In this paper, we present an effective approach with both image matching and indexing techniques that improves upon existing integrated image retrieval methods. The technique follows a two-phase approach, integrating query-by-topic and query-by-example specification methods. In the first phase, topic-based image retrieval is performed using an improved text information retrieval (IR) technique that makes use of the structured format of HTML documents. This phase employs a focused crawler that lets the user enter not only the keyword for the topic-based search but also the scope in which to find the images. In the second phase, we use the query-by-example specification to perform a low-level content-based image match and retrieve a smaller set of results closer to the example image. Information about image features is automatically extracted from the query image. The main objective of our approach is to develop a functional image search and indexing technique and to demonstrate that better retrieval results can be achieved.
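
    For the second phase, a color-feature match can be as simple as comparing normalized HSV histograms. The Python/OpenCV sketch below is one plausible reading of the low-level match described, not the authors' exact feature set.

        import cv2

        def hsv_histogram(path, bins=(8, 8, 8)):
            """Normalized 3D HSV color histogram, flattened to a feature vector."""
            img = cv2.imread(path)
            hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
            hist = cv2.calcHist([hsv], [0, 1, 2], None, bins,
                                [0, 180, 0, 256, 0, 256])
            cv2.normalize(hist, hist)
            return hist.flatten()

        def rank_by_example(query_path, candidate_paths):
            """Order phase-one candidates by color similarity to the example."""
            q = hsv_histogram(query_path)
            score = lambda p: cv2.compareHist(q, hsv_histogram(p),
                                              cv2.HISTCMP_CORREL)
            return sorted(candidate_paths, key=score, reverse=True)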

  14. Documenting the information content of images.

    PubMed Central

    Bidgood, W. D.

    1997-01-01

    A standards-based message and terminology architecture has been specified to enable large-scale, open and non-proprietary interchange of imaging-procedure descriptions and image-interpretation reports, providing semantically rich linkage of linguistic and non-linguistic information. The DICOM Structured Reporting Supplement, now available for trial use, embodies this interdependent message/terminology architecture. A DICOM structured report object is a self-describing information structure that can be tailored to support diverse clinical observation reporting applications through the use of templates and context-dependent terminology from an external message/terminology mapping resource such as the SNOMED DICOM Microglossary (SDM), HL7 Vocabulary, or Terminology Resource for Message Standards (TeRMS). PMID:9357661

  15. Mariner 9 television pictures: Microfiche library user's guide. MTC/MTVS real-time pictures

    NASA Technical Reports Server (NTRS)

    Becker, R. A.

    1973-01-01

    This document describes the content and organization of the Mariner 9 Mission Test Computer/Mission Test Video System microfiche library. This 775-card library is intended to supply the user with a complete record of the images received from Mars orbit during the Mariner 9 mission operations, from 15 Nov. 1971 to 1 Nov. 1972.

  16. Mobile visual object identification: from SIFT-BoF-RANSAC to Sketchprint

    NASA Astrophysics Data System (ADS)

    Voloshynovskiy, Sviatoslav; Diephuis, Maurits; Holotyak, Taras

    2015-03-01

    Mobile object identification based on visual features finds many applications in interaction with physical objects and in security. Discriminative and robust content representation plays a central role in object and content identification. Complex post-processing methods are used to compress descriptors and their geometrical information, to aggregate them into more compact and discriminative representations, and finally to re-rank the results based on the geometric consistency of the matched descriptors. Unfortunately, most existing descriptors are not very robust and discriminative once applied to varied content such as real images, text, or noise-like microstructures, and they require at least 500-1,000 descriptors per image for reliable identification. At the same time, geometric re-ranking procedures are still too complex to be applied to the numerous candidates obtained from the feature-similarity-based search alone. This restricts the candidate list to fewer than 1,000 entries, which in turn raises the probability of a miss. In addition, the security and privacy of content representation have become a hot research topic in the multimedia and security communities. In this paper, we introduce a new framework for non-local content representation based on SketchPrint descriptors. It extends the properties of local descriptors to a more informative and discriminative, yet geometrically invariant, content representation. In particular, it allows images to be compactly represented by 100 SketchPrint descriptors without being fully dependent on re-ranking methods. We consider several use cases, applying SketchPrint descriptors to natural images, text documents, packages and microstructures, and compare them with traditional local descriptors.
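
    For reference, the classical baseline named in the title can be sketched in a few lines of Python with OpenCV: SIFT descriptors pooled into a bag-of-features histogram over a k-means vocabulary, with the RANSAC geometric re-ranking stage omitted. This is the conventional pipeline the paper argues against, not the SketchPrint method itself.

        import cv2
        import numpy as np

        sift = cv2.SIFT_create()

        def sift_descriptors(path):
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            _, desc = sift.detectAndCompute(gray, None)
            return desc

        def build_vocabulary(stacked_descriptors, k=64):
            """Cluster a descriptor sample into k visual words."""
            criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1e-3)
            _, _, centers = cv2.kmeans(np.float32(stacked_descriptors), k, None,
                                       criteria, 3, cv2.KMEANS_PP_CENTERS)
            return centers

        def bof_histogram(desc, centers):
            """Assign descriptors to nearest words; return a normalized histogram."""
            dists = np.linalg.norm(desc[:, None, :] - centers[None, :, :], axis=2)
            hist = np.bincount(dists.argmin(axis=1), minlength=len(centers))
            return hist / max(hist.sum(), 1)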

  17. A novel methodology for querying web images

    NASA Astrophysics Data System (ADS)

    Prabhakara, Rashmi; Lee, Ching Cheng

    2005-01-01

    Ever since the advent of the Internet, there has been an immense growth in the amount of image data available on the World Wide Web. With such a magnitude of images available, an efficient and effective image retrieval system is required to make use of this information. This research presents an effective image matching and indexing technique that improves upon existing integrated image retrieval methods. The proposed technique follows a two-phase approach, integrating query-by-topic and query-by-example specification methods. The first phase consists of topic-based image retrieval using an improved text information retrieval (IR) technique that makes use of the structured format of HTML documents. It employs a focused crawler that lets the user enter not only the keyword for the topic-based search but also the scope in which to find the images. The second phase uses the query-by-example specification to perform a low-level content-based image match and retrieve a smaller set of results closer to the example image. Information about image features is automatically extracted from the query image by the image processing system. A computationally light technique based on color features is used to perform content-based matching of images. The main goal is to develop a functional image search and indexing system and to demonstrate that better retrieval results can be achieved with the proposed hybrid search technique.

  19. Enabling search over encrypted multimedia databases

    NASA Astrophysics Data System (ADS)

    Lu, Wenjun; Swaminathan, Ashwin; Varna, Avinash L.; Wu, Min

    2009-02-01

    Performing information retrieval tasks while preserving data confidentiality is a desirable capability when a database is stored on a server maintained by a third-party service provider. This paper addresses the problem of enabling content-based retrieval over encrypted multimedia databases. Search indexes, along with multimedia documents, are first encrypted by the content owner and then stored on the server. By jointly applying cryptographic techniques, such as order-preserving encryption and randomized hash functions, with image processing and information retrieval techniques, secure indexing schemes are designed to provide both privacy protection and rank-ordered search capability. Retrieval results on an encrypted color image database and security analysis of the secure indexing schemes under different attack models show that data confidentiality can be preserved while retaining very good retrieval performance. This work has promising applications in secure multimedia management.
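
    One way to picture the "randomized hash functions" ingredient is a keyed random-projection hash: the server stores only key-dependent bits, yet Hamming distance between stored indexes still approximates feature similarity. The sketch below is an illustration under stated assumptions, not the paper's scheme; the secret key, projection count, and feature dimensionality are all invented for the example.

```python
import hashlib
import hmac
import numpy as np

SECRET = b"content-owner-key"  # known only to the content owner (assumed)

def secure_index(feature, n_bits=64):
    """Randomized binary hash of an image feature vector.

    The server stores only the bits, so it can measure Hamming distance
    between indexes without learning the underlying features.
    """
    seed = int.from_bytes(
        hmac.new(SECRET, b"projections", hashlib.sha256).digest()[:8], "big"
    )
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_bits, feature.size))  # keyed projections
    return (planes @ feature > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

f1, f2 = np.random.rand(128), np.random.rand(128)
print(hamming(secure_index(f1), secure_index(f2)))
```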

  20. Experiments with a novel content-based image retrieval software: can we eliminate classification systems in adolescent idiopathic scoliosis?

    PubMed

    Menon, K Venugopal; Kumar, Dinesh; Thomas, Tessamma

    2014-02-01

    Study Design: Preliminary evaluation of a new tool. Objective: To ascertain whether newly developed content-based image retrieval (CBIR) software can successfully retrieve images of similar cases of adolescent idiopathic scoliosis (AIS) from a database, to help plan treatment without adhering to a classification scheme. Methods: Sixty-two operated cases of AIS were entered into the newly developed CBIR database. Five new cases with different curve patterns were used as query images. The images were fed into the CBIR database, which retrieved similar images from the existing cases; these were analyzed by a senior surgeon for conformity to the query image. Results: Within the limits of variability set for the query system, all the retrieved images conformed to the query image. One case had no similar match in the series; the other four retrieved several images that matched the query. No matching case in the series was missed. The postoperative images were then analyzed for surgical strategies, and broad guidelines for treatment could be derived from the results. More precise query settings, inclusion of bending films, and a larger database would further improve retrieval accuracy and decision making. Conclusion: The CBIR system is an effective tool for accurate documentation and retrieval of scoliosis images. Broad guidelines for surgical strategies can be derived from the postoperative images of existing cases without adhering to any classification scheme.

  1. Preparing a collection of radiology examinations for distribution and retrieval.

    PubMed

    Demner-Fushman, Dina; Kohli, Marc D; Rosenman, Marc B; Shooshan, Sonya E; Rodriguez, Laritza; Antani, Sameer; Thoma, George R; McDonald, Clement J

    2016-03-01

    Clinical documents made available for secondary use play an increasingly important role in discovery of clinical knowledge, development of research methods, and education. An important step in facilitating secondary use of clinical document collections is easy access to descriptions and samples that represent the content of the collections. This paper presents an approach to developing a collection of radiology examinations, including both the images and radiologist narrative reports, and making them publicly available in a searchable database. The authors collected 3996 radiology reports from the Indiana Network for Patient Care and 8121 associated images from the hospitals' picture archiving systems. The images and reports were de-identified automatically, and the automatic de-identification was then manually verified. The authors coded the key findings of the reports and empirically assessed the benefits of manual coding on retrieval. The automatic de-identification of the narrative was aggressive and achieved 100% precision at the cost of rendering a few findings uninterpretable. Automatic de-identification of images was not as complete: images for two of 3996 patients (0.05%) still showed protected health information. Manual encoding of findings improved retrieval precision. Stringent de-identification methods can remove all identifiers from text radiology reports. DICOM de-identification of images does not remove all identifying information and needs special attention for images scanned from film. Adding manual coding to the radiologist narrative reports significantly improved the relevancy of the retrieved clinical documents. The de-identified Indiana chest X-ray collection is available for searching and downloading from the National Library of Medicine (http://openi.nlm.nih.gov/). Published by Oxford University Press on behalf of the American Medical Informatics Association 2015. This work is written by US Government employees and is in the public domain in the US.
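
    The paper does not publish its de-identification rules, but an aggressive pattern-based pass over report text that prefers over-redaction to leakage might start like the sketch below; all patterns and the sample report are illustrative, not the authors' actual rules.

```python
import re

# Minimal sketch of aggressive text de-identification: redacting too much
# is preferred over leaking protected health information.
PATTERNS = [
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
    (re.compile(r"\b(?:MRN|ID)[:#]?\s*\d+\b", re.I), "[ID]"),
    (re.compile(r"\bDr\.?\s+[A-Z][a-z]+\b"), "[PHYSICIAN]"),
]

def deidentify(report: str) -> str:
    for pattern, token in PATTERNS:
        report = pattern.sub(token, report)
    return report

print(deidentify("Dr. Smith reviewed MRN: 12345 on 3/14/2015."))
```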

  2. Design of a graphical user interface for an intelligent multimedia information system for radiology research

    NASA Astrophysics Data System (ADS)

    Taira, Ricky K.; Wong, Clement; Johnson, David; Bhushan, Vikas; Rivera, Monica; Huang, Lu J.; Aberle, Denise R.; Cardenas, Alfonso F.; Chu, Wesley W.

    1995-05-01

    With the increase in the volume and distribution of images and text available in PACS and medical electronic health-care environments, it becomes increasingly important to maintain indexes that summarize the content of these multimedia documents. Such indexes are necessary to quickly locate relevant patient cases for research, patient management, and teaching. The goal of this project is to develop an intelligent document retrieval system that allows researchers to request patient cases based on document content. Thus we wish to retrieve patient cases from electronic information archives using a combined specification of patient demographics, low-level radiologic findings (size, shape, number), intermediate-level radiologic findings (e.g., atelectasis, infiltrates), and/or high-level pathology constraints (e.g., well-differentiated small cell carcinoma). The cases may be distributed among multiple heterogeneous databases such as PACS, RIS, and HIS. Content-based retrieval systems go beyond the capabilities of simple keyword or string-matching retrieval systems. These systems require a knowledge base to comprehend the generality and specificity of a concept (thus knowing the subclasses or related concepts of a given concept) and knowledge of the various string representations for each concept (i.e., synonyms, lexical variants, etc.). We have previously reported on a data integration mediation layer that allows transparent access to multiple heterogeneous distributed medical databases (HIS, RIS, and PACS). The data access layer of our architecture currently has limited query processing capabilities: given a patient hospital identification number, the access mediation layer collects all documents in RIS and HIS and returns this information to a specified workstation location. In this paper we report on our efforts to extend the query processing capabilities of the system through the creation of custom query interfaces, an intelligent query processing engine, and a document-content index that can be generated automatically (i.e., with no manual authoring or changes to the normal clinical protocols).
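
    The concept-expansion behavior described above can be illustrated with a toy hierarchy; a production system would draw subclasses and synonyms from a medical ontology such as UMLS. Everything in the dictionaries below is invented for the example.

```python
# Toy concept hierarchy standing in for the knowledge base the authors
# describe; a real system would query a medical ontology.
SUBCLASSES = {
    "carcinoma": ["small cell carcinoma", "adenocarcinoma"],
    "infiltrate": ["interstitial infiltrate", "alveolar infiltrate"],
}
SYNONYMS = {
    "small cell carcinoma": ["oat cell carcinoma"],
}

def expand(term: str) -> set[str]:
    """Expand a query concept to its subclasses and lexical variants."""
    terms = {term}
    for child in SUBCLASSES.get(term, []):
        terms |= expand(child)
    for t in list(terms):
        terms.update(SYNONYMS.get(t, []))
    return terms

print(expand("carcinoma"))
```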

  3. Informatics in radiology: use of CouchDB for document-based storage of DICOM objects.

    PubMed

    Rascovsky, Simón J; Delgado, Jorge A; Sanz, Alexander; Calvo, Víctor D; Castrillón, Gabriel

    2012-01-01

    Picture archiving and communication systems traditionally have depended on schema-based Structured Query Language (SQL) databases for imaging data management. To optimize database size and performance, many such systems store a reduced set of Digital Imaging and Communications in Medicine (DICOM) metadata, discarding informational content that might be needed in the future. As an alternative to traditional database systems, document-based key-value stores recently have gained popularity. These systems store documents containing key-value pairs that facilitate data searches without predefined schemas. Document-based key-value stores are especially suited to archive DICOM objects because DICOM metadata are highly heterogeneous collections of tag-value pairs conveying specific information about imaging modalities, acquisition protocols, and vendor-supported postprocessing options. The authors used an open-source document-based database management system (Apache CouchDB) to create and test two such databases; CouchDB was selected for its overall ease of use, capability for managing attachments, and reliance on HTTP and Representational State Transfer standards for accessing and retrieving data. A large database was created first in which the DICOM metadata from 5880 anonymized magnetic resonance imaging studies (1,949,753 images) were loaded by using a Ruby script. To provide the usual DICOM query functionality, several predefined "views" (standard queries) were created by using JavaScript. For performance comparison, the same queries were executed in both the CouchDB database and a SQL-based DICOM archive. The capabilities of CouchDB for attachment management and database replication were separately assessed in tests of a similar, smaller database. Results showed that CouchDB allowed efficient storage and interrogation of all DICOM objects; with the use of information retrieval algorithms such as map-reduce, all the DICOM metadata stored in the large database were searchable with only a minimal increase in retrieval time over that with the traditional database management system. Results also indicated possible uses for document-based databases in data mining applications such as dose monitoring, quality assurance, and protocol optimization. RSNA, 2012
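
    Because CouchDB is driven entirely over HTTP, the storage model the authors describe is easy to demonstrate. A minimal sketch, assuming a local CouchDB instance, a dicom_headers database, and a pre-installed design document named queries containing a by_modality view (all three names are invented for the example):

```python
import requests

COUCH = "http://localhost:5984"
DB = "dicom_headers"

# Store one DICOM header as a schema-less JSON document. The tag selection
# is illustrative; a loader would emit one document per SOP instance with
# whatever tags the modality supplied.
doc = {
    "_id": "1.2.840.113619.2.5.1762583153.215519.978957063.78",
    "Modality": "MR",
    "StudyDate": "20120101",
    "SeriesDescription": "AX T1",
}
requests.put(f"{COUCH}/{DB}/{doc['_id']}", json=doc).raise_for_status()

# Query a predefined map-reduce view (one of the "standard queries" the
# authors mention), e.g. a view that emits documents keyed by Modality.
resp = requests.get(
    f"{COUCH}/{DB}/_design/queries/_view/by_modality", params={"key": '"MR"'}
)
print(resp.json()["rows"])
```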

  4. Document creation, linking, and maintenance system

    DOEpatents

    Claghorn, Ronald [Pasco, WA

    2011-02-15

    A document creation and citation system designed to maintain a database of reference documents. The content of a selected document may be automatically scanned and indexed by the system. The selected documents may also be manually indexed by a user prior to the upload. The indexed documents may be uploaded and stored within a database for later use. The system allows a user to generate new documents by selecting content within the reference documents stored within the database and inserting the selected content into a new document. The system allows the user to customize and augment the content of the new document. The system also generates citations to the selected content retrieved from the reference documents. The citations may be inserted into the new document in the appropriate location and format, as directed by the user. The new document may be uploaded into the database and included with the other reference documents. The system also maintains the database of reference documents so that when changes are made to a reference document, the author of a document referencing the changed document will be alerted to make appropriate changes to his document. The system also allows visual comparison of documents so that the user may see differences in the text of the documents.

  5. DEVA: An extensible ontology-based annotation model for visual document collections

    NASA Astrophysics Data System (ADS)

    Jelmini, Carlo; Marchand-Maillet, Stephane

    2003-01-01

    The description of visual documents is a fundamental aspect of any efficient information management system, but the process of manually annotating large collections of documents is tedious and far from perfect. The need for a generic and extensible annotation model therefore arises. In this paper, we present DEVA, an open, generic, and expressive multimedia annotation framework. DEVA is an extension of the Dublin Core specification. The model can represent the semantic content of any visual document. It is described in the ontology language DAML+OIL and can easily be extended with external specialized ontologies, adapting the vocabulary to the given application domain. In parallel, we present the Magritte annotation tool, an early prototype that validates the DEVA features. Magritte allows the user to manually annotate image collections. It is designed with a modular and extensible architecture, which enables the user to dynamically adapt the user interface to specialized ontologies merged into DEVA.

  6. Selected time-lapse movies of the east rift zone eruption of Kīlauea Volcano, 2004–2008

    USGS Publications Warehouse

    Orr, Tim R.

    2011-01-01

    Since 2004, the U.S. Geological Survey's Hawaiian Volcano Observatory has used mass-market digital time-lapse cameras and network-enabled Webcams for visual monitoring and research. The 26 time-lapse movies in this report were selected from the vast collection of images acquired by these camera systems during 2004–2008. Chosen for their content and broad aesthetic appeal, these image sequences document a variety of flow-field and vent processes from Kīlauea's east rift zone eruption, which began in 1983 and is still (as of 2011) ongoing.

  7. Plant Phenotyping using Probabilistic Topic Models: Uncovering the Hyperspectral Language of Plants

    PubMed Central

    Wahabzada, Mirwaes; Mahlein, Anne-Katrin; Bauckhage, Christian; Steiner, Ulrike; Oerke, Erich-Christian; Kersting, Kristian

    2016-01-01

    Modern phenotyping and plant disease detection methods, based on optical sensors and information technology, provide promising approaches to plant research and precision farming. In particular, hyperspectral imaging has been found to reveal physiological and structural characteristics in plants and to allow for tracking physiological dynamics due to environmental effects. In this work, we present an approach to plant phenotyping that integrates non-invasive sensors, computer vision, and data mining techniques, and allows for monitoring how plants respond to stress. To uncover latent hyperspectral characteristics of diseased plants reliably and in an easy-to-understand way, we “wordify” the hyperspectral images, i.e., we turn the images into a corpus of text documents. Then, we apply probabilistic topic models, a well-established natural language processing technique that identifies the content and topics of documents. Based on recent regularized topic models, we demonstrate that one can automatically track the development of three foliar diseases of barley. We also present a visualization of the topics that provides plant scientists with an intuitive tool for hyperspectral imaging. In short, our analysis and visualization of characteristic topics found during symptom development and disease progress reveal the hyperspectral language of plant diseases. PMID:26957018
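
    A minimal sketch of the "wordify then topic-model" pipeline: quantize each pixel's spectrum against a learned codebook so every image becomes a bag of spectral words, then fit a topic model over the word counts. The sketch uses k-means for the vocabulary and plain LDA from scikit-learn (the paper uses regularized topic models); the random data and every parameter below are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

# "Wordify" hyperspectral pixels: map each pixel's spectrum to its nearest
# codebook entry. Shapes: (n_images, n_pixels, n_bands), random stand-ins.
rng = np.random.default_rng(0)
cube = rng.random((20, 500, 30))

codebook = KMeans(n_clusters=50, n_init=4, random_state=0).fit(
    cube.reshape(-1, 30)
)
words = codebook.predict(cube.reshape(-1, 30)).reshape(20, 500)

# Bag-of-words counts per image, then a topic model over the corpus.
counts = np.stack([np.bincount(w, minlength=50) for w in words])
lda = LatentDirichletAllocation(n_components=5, random_state=0).fit(counts)
print(lda.transform(counts)[0])  # topic mixture of the first image
```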

  8. Indexing the medical open access literature for textual and content-based visual retrieval.

    PubMed

    Eggel, Ivan; Müller, Henning

    2010-01-01

    Over the past few years an increasing number of scientific journals have been created in an open access format. Particularly in the medical field, the number of openly accessible journals is enormous, making a wide body of knowledge available for analysis and retrieval. Part of the trend towards open access publications can be linked to funding bodies such as the NIH (National Institutes of Health) and the Swiss National Science Foundation (SNF) requiring funded projects to make all articles of funded research publicly available. This article describes an approach to making part of the knowledge in open access journals available for retrieval, including the textual information but also the images contained in the articles. For this goal, all articles of 24 journals related to medical informatics and medical imaging were crawled from the web pages of BioMed Central. Text and images of the PDF (Portable Document Format) files were indexed separately, and a web-based retrieval interface allows for searching via keyword queries or by visual similarity queries. The starting point for a visual similarity query can be an image on the local hard disk that is uploaded, or any image found via the textual search. Search for similar documents is also possible.

  9. Computerization Project of the Archivo General de Indias, Seville, Spain. A Report to the Commission on Preservation and Access.

    ERIC Educational Resources Information Center

    Rutimann, Hans; Lynn, M. Stuart

    The Archivo General de Indias is operating a massive project to preserve and make accessible the contents of the 45 million documents and 7,000 maps and blueprints comprising the written heritage of Spain's 400 years in power in the Americas. The current objective is to scan about 10 percent of the archive (or about 8 million images) in…

  10. Warped document image correction method based on heterogeneous registration strategies

    NASA Astrophysics Data System (ADS)

    Tong, Lijing; Zhan, Guoliang; Peng, Quanyao; Li, Yang; Li, Yifan

    2013-03-01

    With the popularity of digital cameras and the growing need to digitize document images, using digital cameras for document capture has become an irresistible trend. However, warping of the document surface seriously degrades the performance of Optical Character Recognition (OCR) systems. To improve the visual quality and OCR rate of warped document images, this paper proposes a correction method based on heterogeneous registration strategies. The method mosaics two warped images of the same document taken from different viewpoints. First, two feature points are selected from one image. Then the two feature points are registered in the other image based on heterogeneous registration strategies. Finally, the two images are mosaicked, and the best mosaicked image is selected according to OCR recognition results. In the best mosaicked image, the distortions are mostly removed and the OCR results are markedly improved. Experimental results show that the proposed method resolves the problem of warped document image correction effectively.
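
    The mosaicking skeleton can be pictured with a standard feature-based registration pipeline; the paper's heterogeneous registration strategies and OCR-driven selection replace the generic pieces below. A minimal sketch, with the file names, match budget, and brightest-pixel blend all assumed for illustration:

```python
import cv2
import numpy as np

# Register two views of the same page with generic features, then mosaic.
img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
matches = sorted(matches, key=lambda m: m.distance)[:200]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp view 1 into view 2's frame and keep the brighter (paper-white) pixel;
# the paper instead selects the mosaic that maximizes OCR accuracy.
warped = cv2.warpPerspective(img1, H, (img2.shape[1], img2.shape[0]))
mosaic = np.maximum(warped, img2)
cv2.imwrite("mosaic.png", mosaic)
```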

  11. The inclusion of an online journal in PubMed central - a difficult path.

    PubMed

    Grech, Victor

    2016-01-01

    The indexing of a journal in a prominent database (such as PubMed) is an important imprimatur. Journals accepted for inclusion in PubMed Central (PMC) are automatically indexed in PubMed but must provide the entire contents of their publications as XML-tagged (Extensible Markup Language) data files compliant with PubMed's document type definition (DTD). This paper describes the various attempts made by the journal Images in Paediatric Cardiology to convert its contents (including the entire extant backlog) to PMC-compliant XML for archiving and indexing in PubMed after the journal was accepted for inclusion by the database.

  12. Toward image phylogeny forests: automatically recovering semantically similar image relationships.

    PubMed

    Dias, Zanoni; Goldenstein, Siome; Rocha, Anderson

    2013-09-10

    In the past few years, several near-duplicate detection methods have appeared in the literature to identify the cohabiting versions of a given document online. Following this trend, there have been some initial attempts to go beyond the detection task and look into the structure of evolution within a set of related images over time. In this paper, we aim to automatically identify the structure of relationships underlying the images, correctly reconstruct their past history and ancestry information, and group them into distinct trees of processing history. We introduce a new algorithm that automatically handles sets comprising different related images and outputs the phylogeny trees (also known as a forest) associated with them. Image phylogeny algorithms have many applications, such as finding the first image within a set posted online (useful for tracking the perpetrators of copyright infringement), hinting at the creators of child pornography content, and narrowing down a list of suspects for online harassment using photographs. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
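
    A minimal sketch of phylogeny reconstruction in the spirit of the oriented-Kruskal algorithms from this line of work: given an asymmetric dissimilarity matrix, repeatedly attach the cheapest parent-child edge that creates no cycle, and let a dissimilarity threshold split the result into a forest. The function names, threshold value, and matrix below are invented for the example.

```python
import numpy as np

def phylogeny_forest(d, threshold):
    """Greedy sketch of phylogeny reconstruction (akin to oriented Kruskal).

    d[i, j] is the (asymmetric) cost of explaining image j as an edited
    version of image i. Edges costlier than `threshold` are never used,
    which is what splits the collection into a forest rather than one tree.
    """
    n = d.shape[0]
    parent = [None] * n
    for cost, i, j in sorted((d[i, j], i, j)
                             for i in range(n) for j in range(n) if i != j):
        if cost > threshold:
            break
        # Attach j under i if j has no parent yet and this creates no cycle.
        if parent[j] is None and not _is_ancestor(parent, j, i):
            parent[j] = i
    return parent  # roots (candidate original images) keep parent None

def _is_ancestor(parent, candidate, node):
    """True if `candidate` is reached by walking up the parents of `node`."""
    while node is not None:
        if node == candidate:
            return True
        node = parent[node]
    return False

d = np.array([[0.0, 0.2, 0.3], [0.9, 0.0, 0.25], [0.8, 0.9, 0.0]])
print(phylogeny_forest(d, threshold=0.5))  # -> [None, 0, 1]: tree rooted at 0
```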

  13. MToS: A Tree of Shapes for Multivariate Images.

    PubMed

    Carlinet, Edwin; Géraud, Thierry

    2015-12-01

    The topographic map of a gray-level image, also called the tree of shapes, provides a high-level hierarchical representation of the image contents. This representation, invariant to contrast changes and to contrast inversion, has proved very useful for many image processing and pattern recognition tasks. Its definition relies on the total ordering of pixel values, so this representation does not exist for color images or, more generally, multivariate images. Common workarounds, such as marginal processing or imposing a total order on the data, are not satisfactory and yield many problems. This paper presents a method to build a tree-based representation of multivariate images which, marginally, has the same properties as the gray-level tree of shapes. Briefly put, we do not impose an arbitrary ordering on values, but rely only on the inclusion relationship between shapes in the image definition domain. The interest of having a contrast-invariant and self-dual representation of multivariate images is illustrated through several applications (filtering, segmentation, and object recognition) on different types of data: color natural images, document images, satellite hyperspectral imaging, multimodal medical imaging, and videos.

  14. A super resolution framework for low resolution document image OCR

    NASA Astrophysics Data System (ADS)

    Ma, Di; Agam, Gady

    2013-01-01

    Optical character recognition is widely used for converting document images into digital media. Existing OCR algorithms and tools produce good results from high-resolution, good-quality document images. In this paper, we propose a machine-learning-based super-resolution framework for low-resolution document image OCR. Two main techniques are used in our proposed approach: a document page segmentation algorithm and a modified K-means clustering algorithm. By exploiting coherence in the document, we reconstruct from a low-resolution document image a higher-resolution image and improve OCR results. Experimental results show substantial gains on low-resolution documents such as those captured from video.
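
    The coherence idea, that the same glyph recurs many times in a document so clusters of similar patches can be averaged to suppress noise before upscaling, can be sketched as follows. This simplifies the paper's pipeline (no page segmentation, no actual upscaling step); the patch shapes and cluster count are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def coherence_denoise(patches, n_clusters=64):
    """Exploit document coherence: the same glyph recurs many times, so
    averaging each cluster of similar patches suppresses independent noise.
    A full system would upscale the cleaned patches before OCR.

    patches: array of shape (n_patches, h*w), flattened glyph crops.
    """
    km = KMeans(n_clusters=n_clusters, n_init=4, random_state=0).fit(patches)
    return km.cluster_centers_[km.labels_]  # each patch -> its cluster mean

rng = np.random.default_rng(0)
clean = rng.random((10, 64))                    # 10 distinct "glyphs"
noisy = np.repeat(clean, 30, axis=0) + 0.1 * rng.standard_normal((300, 64))
restored = coherence_denoise(noisy, n_clusters=10)
print(np.abs(restored - np.repeat(clean, 30, axis=0)).mean())
```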

  15. Bias in the Flesh

    PubMed Central

    Messing, Solomon; Jabon, Maria; Plaut, Ethan

    2016-01-01

    There is strong evidence linking skin complexion to negative stereotypes and adverse real-world outcomes. We extend these findings to political ad campaigns, in which skin complexion can be easily manipulated in ways that are difficult to detect. Devising a method to measure how dark a candidate appears in an image, this paper examines how complexion varied with ad content during the 2008 presidential election campaign (study 1). Findings show that darker images were more frequent in negative ads—especially those linking Obama to crime—which aired more frequently as Election Day approached. We then conduct an experiment to document how these darker images can activate stereotypes, and show that a subtle darkness manipulation is sufficient to activate the most negative stereotypes about Blacks—even when the candidate is a famous counter-stereotypical exemplar—Barack Obama (study 2). Further evidence of an evaluative penalty for darker skin comes from an observational study measuring affective responses to depictions of Obama with varying skin complexion, presented via the Affect Misattribution Procedure in the 2008 American National Election Study (study 3). This study demonstrates that darker images are used in a way that complements ad content, and shows that doing so can negatively affect how individuals evaluate candidates and think about politics. PMID:27257306
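
    A simple version of the darkness measurement would average the luma inside the candidate's face region; the authors' actual metric may differ, and face localization is assumed to come from elsewhere. The file name and bounding box below are illustrative.

```python
import numpy as np
from PIL import Image

def mean_luminance(path, box):
    """Average Rec. 601 luma inside a bounding box (e.g., the candidate's
    face); lower values indicate a darker depiction.
    """
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    x0, y0, x1, y1 = box
    region = rgb[y0:y1, x0:x1]
    return (region @ np.array([0.299, 0.587, 0.114])).mean()

print(mean_luminance("ad_frame.png", box=(120, 40, 220, 160)))
```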

  16. Limitations and requirements of content-based multimedia authentication systems

    NASA Astrophysics Data System (ADS)

    Wu, Chai W.

    2001-08-01

    Recently, a number of authentication schemes have been proposed for multimedia data such as images and sound data. They include both label-based systems and semifragile watermarks. The main requirement for such authentication systems is that minor modifications, such as lossy compression, which do not alter the content of the data preserve its authenticity, whereas modifications which do alter the content render the data inauthentic. These schemes can be classified into two main classes depending on the model of image authentication they are based on. One purpose of this paper is to examine some of the advantages and disadvantages of these image authentication schemes and their relationship with fundamental limitations of the underlying model of image authentication. In particular, we study feature-based algorithms, which generate an authentication tag based on inherent features of the image such as the location of edges. The main disadvantage of most proposed feature-based algorithms is that similar images generate similar features, and it is therefore possible for a forger to generate dissimilar images that have the same features. On the other hand, the class of hash-based algorithms utilizes a cryptographic hash function or a digital signature scheme to reduce the data and generate an authentication tag; it inherits the security of digital signatures to thwart forgery attacks. The main disadvantage of hash-based algorithms is that the image needs to be modified in order to be made authenticatable, and the amount of modification is on the order of the noise the image can tolerate before it is rendered inauthentic. The other purpose of this paper is to propose a multimedia authentication scheme which combines some of the best features of both classes of algorithms. The proposed scheme utilizes cryptographic hash functions and digital signature schemes, and the data does not need to be modified in order to be made authenticatable. Several applications, including the authentication of images on CD-ROM and handwritten documents, are discussed.
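
    The hash-based end of the spectrum is easy to make concrete: a keyed digest over the exact image bytes inherits cryptographic security but breaks under any re-encoding, which is exactly the trade-off discussed above. A minimal sketch using an HMAC as a stand-in for a digital signature; the key and file name are assumptions.

```python
import hashlib
import hmac

KEY = b"signer-secret"  # stands in for a private signing key (assumed)

def make_tag(image_bytes: bytes) -> str:
    """Hash-based authentication tag over the exact image data.

    Unlike feature-based schemes, any bit change (including benign lossy
    re-compression) invalidates the tag.
    """
    return hmac.new(KEY, image_bytes, hashlib.sha256).hexdigest()

def verify(image_bytes: bytes, tag: str) -> bool:
    return hmac.compare_digest(make_tag(image_bytes), tag)

data = open("photo.png", "rb").read()
tag = make_tag(data)
print(verify(data, tag))         # True
print(verify(data + b"x", tag))  # False: content no longer authentic
```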

  17. Dynamic "inline" images: context-sensitive retrieval and integration of images into Web documents.

    PubMed

    Kahn, Charles E

    2008-09-01

    Integrating relevant images into web-based information resources adds value for research and education. This work sought to evaluate the feasibility of using "Web 2.0" technologies to dynamically retrieve and integrate pertinent images into a radiology web site. An online radiology reference of 1,178 textual web documents was selected as the set of target documents. The ARRS GoldMiner image search engine, which incorporated 176,386 images from 228 peer-reviewed journals, retrieved images on demand and integrated them into the documents. At least one image was retrieved in real-time for display as an "inline" image gallery for 87% of the web documents. Each thumbnail image was linked to the full-size image at its original web site. Review of 20 randomly selected Collaborative Hypertext of Radiology documents found that 69 of 72 displayed images (96%) were relevant to the target document. Users could click on the "More" link to search the image collection more comprehensively and, from there, link to the full text of the article. A gallery of relevant radiology images can be inserted easily into web pages on any web server. Indexing by concepts and keywords allows context-aware image retrieval, and searching by document title and subject metadata yields excellent results. These techniques allow web developers to incorporate easily a context-sensitive image gallery into their documents.

  18. Text, photo, and line extraction in scanned documents

    NASA Astrophysics Data System (ADS)

    Erkilinc, M. Sezer; Jaber, Mustafa; Saber, Eli; Bauer, Peter; Depalov, Dejan

    2012-07-01

    We propose a page layout analysis algorithm to classify a scanned document into different regions such as text, photo, or strong lines. The proposed scheme consists of five modules. The first module performs several image preprocessing techniques such as image scaling, filtering, color space conversion, and gamma correction to enhance the scanned image quality and reduce the computation time in later stages. Text detection is applied in the second module wherein wavelet transform and run-length encoding are employed to generate and validate text regions, respectively. The third module uses a Markov random field based block-wise segmentation that employs a basis vector projection technique with maximum a posteriori probability optimization to detect photo regions. In the fourth module, methods for edge detection, edge linking, line-segment fitting, and Hough transform are utilized to detect strong edges and lines. In the last module, the resultant text, photo, and edge maps are combined to generate a page layout map using K-Means clustering. The proposed algorithm has been tested on several hundred documents that contain simple and complex page layout structures and contents such as articles, magazines, business cards, dictionaries, and newsletters, and compared against state-of-the-art page-segmentation techniques with benchmark performance. The results indicate that our methodology achieves an average of ~89% classification accuracy in text, photo, and background regions.

  19. Interactive degraded document enhancement and ground truth generation

    NASA Astrophysics Data System (ADS)

    Bal, G.; Agam, G.; Frieder, O.; Frieder, G.

    2008-01-01

    Degraded documents are frequently encountered in various situations. Examples of degraded document collections include historical document depositories, documents obtained in legal and security investigations, and legal and medical archives. Degraded document images are hard to read and hard to analyze using computerized techniques. There is hence a need for systems that are capable of enhancing such images. We describe a language-independent semi-automated system for enhancing degraded document images that is capable of exploiting inter- and intra-document coherence. The system can process document images with high levels of degradation and can be used for ground truthing of degraded document images. Ground truthing of degraded document images is extremely important in several respects: it enables quantitative performance measurement of enhancement systems and facilitates model estimation that can be used to improve performance. Performance evaluation is provided using the historical Frieder diaries collection.

  20. PDS MSL Analyst's Notebook: Supporting Active Rover Missions and Adding Value to Planetary Data Archives

    NASA Astrophysics Data System (ADS)

    Stein, Thomas

    Planetary data archives of surface missions contain data from numerous hosted instruments. Because of the nondeterministic nature of surface missions, it is not possible to assess the data without understanding the context in which they were collected. The PDS Analyst’s Notebook (http://an.rsl.wustl.edu) provides access to Mars Science Laboratory (MSL) data archives by integrating sequence information, engineering and science data, observation planning and targeting, and documentation into web-accessible pages to facilitate “mission replay.” In addition, Mars Exploration Rover (MER), Mars Phoenix Lander, Lunar Apollo surface mission, and LCROSS mission data are available in the Analyst’s Notebook concept, and a Notebook is planned for the InSight mission. The MSL Analyst’s Notebook contains data, documentation, and support files for the Curiosity rover. The inputs are incorporated on a daily basis into a science team version of the Notebook. The public version of the Analyst’s Notebook comprises peer-reviewed, released data and is updated coincident with PDS data releases as defined in mission archive plans. The data are provided by the instrument teams and are supported by documentation describing data format, content, and calibration. Both operations and science data products are included. The operations versions are generated to support mission planning and operations on a daily basis. They are geared toward researchers working on machine vision and engineering operations. Science versions of observations from some instruments are provided for those interested in radiometric and photometric analyses. Both data set documentation and sol (i.e., Mars day) documents are included in the Notebook. The sol documents are the mission manager and documentarian reports that provide a view into science operations—insight into why and how particular observations were made. Data set documents contain detailed information regarding the mission, spacecraft, instruments, and data formats. In addition, observation planning and targeting information is extracted from each sol’s tactical science plan. A number of methods allow user access to the Notebook contents. The mission summary provides a high-level overview of science operations. The Sol Summaries are the primary interface to integrated data and documents contained within the Notebooks. Data, documents, and planned observations are grouped for easy scanning. Data products are displayed in order of acquisition and are grouped into logical sequences, such as a series of images. Sequences and the individual products that comprise them may be viewed in detail, manipulated, and downloaded. Color composites and anaglyph stereo images may be created on demand. Graphs of some non-image data, such as spectra, may be viewed. Data may be downloaded as zip or gzip files, or as multiband ENVI image files. The Notebook contains a map with the rover traverse plotted on a HiRISE basemap using the raw and corrected drive telemetry provided by the project. Users may zoom and pan the map. Clicking on a traverse location brings up links to corresponding data. Three types of searching through data and documents are available within the Notebook. Free-text searching of data set and sol documents is supported. Data are searchable by instrument, acquisition time, data type, and product ID. Results may be downloaded in a single collection or selected individually for detailed viewing. Additional resources include data set documents, references to mission publications, links to related web resources, and online help. Finally, feedback is handled through an online forum. Work continues to improve functionality, including locating features of interest and a spectral library search/view/download tool. A number of Notebook functions are based on previous user suggestions, and feedback continues to be sought. The Analyst’s Notebook is available at http://an.rsl.wustl.edu.

  1. Health professionals' use of documents obtained through the Regional Medical Library Network.

    PubMed

    Lovas, I; Graham, E; Flack, V

    1991-01-01

    The Pacific Southwest Regional Medical Library Service (PSRMLS) studied how health professionals use documents obtained through the regional medical library (RML) network and how various factors, such as delivery time, affected that use. A random sample of libraries in Region 7 of the RML network was selected to survey health professionals who had received documents through the interlibrary loan (ILL) network. The survey provided data about the purposes for which health professionals requested documents, how the immediacy of need for the items affected their usefulness, what effect the obtained information had on the health professionals' work, and whether the illustrations represented an important part of the information content of the items. Survey results provided a positive assessment of the ILL network. Results also verified the basic value of the materials provided to health professionals through ILL and identified some areas for consideration in future network development. Users of the documents indicated that the network works efficiently and effectively to provide timely and useful information needed by health professionals. Technological developments in electronic information transmission and imaging will further enhance network operation in the future.

  2. Script identification from images using cluster-based templates

    DOEpatents

    Hochberg, J.G.; Kelly, P.M.; Thomas, T.R.

    1998-12-01

    A computer-implemented method identifies a script used to create a document. A set of training documents for each script to be identified is scanned into the computer to store a series of exemplary images representing each script. Pixels forming the exemplary images are electronically processed to define a set of textual symbols corresponding to the exemplary images. Each textual symbol is assigned to a cluster of textual symbols that most closely represents the textual symbol. The cluster of textual symbols is processed to form a representative electronic template for each cluster. A document having a script to be identified is scanned into the computer to form one or more document images representing the script to be identified. Pixels forming the document images are electronically processed to define a set of document textual symbols corresponding to the document images. The set of document textual symbols is compared to the electronic templates to identify the script. 17 figs.
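
    The cluster-based template idea can be sketched with ordinary k-means: cluster each script's training symbols into centroid templates, then let each symbol of an unknown document vote for the script owning its nearest template. This is an illustrative reconstruction under stated assumptions, not the patented implementation; the array shapes and cluster count are invented.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_templates(symbol_images, n_clusters=40):
    """Cluster one script's training symbols; the centroids act as templates."""
    X = symbol_images.reshape(len(symbol_images), -1)
    return KMeans(n_clusters=n_clusters, n_init=4, random_state=0).fit(X)

def identify_script(symbols, templates_by_script):
    """Each symbol votes for the script owning its nearest template."""
    X = symbols.reshape(len(symbols), -1)
    scripts = list(templates_by_script)
    # Distance from every symbol to the nearest template of each script.
    dists = np.stack([
        np.linalg.norm(
            X[:, None, :] - templates_by_script[s].cluster_centers_[None],
            axis=2,
        ).min(axis=1)
        for s in scripts
    ])
    votes = np.bincount(dists.argmin(axis=0), minlength=len(scripts))
    return scripts[votes.argmax()]
```

    In use, train_templates would run once per script over scanned exemplar symbols, and identify_script over the segmented symbols of the document to be classified.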

  3. Script identification from images using cluster-based templates

    DOEpatents

    Hochberg, Judith G.; Kelly, Patrick M.; Thomas, Timothy R.

    1998-01-01

    A computer-implemented method identifies a script used to create a document. A set of training documents for each script to be identified is scanned into the computer to store a series of exemplary images representing each script. Pixels forming the exemplary images are electronically processed to define a set of textual symbols corresponding to the exemplary images. Each textual symbol is assigned to a cluster of textual symbols that most closely represents the textual symbol. The cluster of textual symbols is processed to form a representative electronic template for each cluster. A document having a script to be identified is scanned into the computer to form one or more document images representing the script to be identified. Pixels forming the document images are electronically processed to define a set of document textual symbols corresponding to the document images. The set of document textual symbols is compared to the electronic templates to identify the script.

  4. Automated search and retrieval of information from imaged documents using optical correlation techniques

    NASA Astrophysics Data System (ADS)

    Stalcup, Bruce W.; Dennis, Phillip W.; Dydyk, Robert B.

    1999-10-01

    Litton PRC and Litton Data Systems Division are developing a system, the Imaged Document Optical Correlation and Conversion System (IDOCCS), to provide a total solution to the problem of managing and retrieving textual and graphic information from imaged document archives. At the heart of IDOCCS, optical correlation technology provides the search and retrieval of information from imaged documents. IDOCCS can be used to rapidly search for key words or phrases within the imaged document archives. In addition, IDOCCS can automatically compare an input document with the archived database to determine if it is a duplicate, thereby reducing the overall resources required to maintain and access the document database. Embedded graphics on imaged pages can also be exploited; e.g., imaged documents containing an agency's seal or logo can be singled out. In this paper, we present a description of IDOCCS as well as preliminary performance results and theoretical projections.
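
    A digital stand-in for the optical correlator is normalized cross-correlation of a rendered keyword or logo template against each page image. A minimal sketch, where the file names and detection threshold are assumptions:

```python
import cv2

# Normalized cross-correlation of a template (e.g., an agency seal) against
# a scanned page; the optical correlator performs this search in hardware.
page = cv2.imread("page_0001.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("agency_seal.png", cv2.IMREAD_GRAYSCALE)

response = cv2.matchTemplate(page, template, cv2.TM_CCOEFF_NORMED)
_, score, _, location = cv2.minMaxLoc(response)

if score > 0.7:  # detection threshold, tuned per collection
    print(f"hit at {location} with correlation {score:.2f}")
```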

  5. Document image database indexing with pictorial dictionary

    NASA Astrophysics Data System (ADS)

    Akbari, Mohammad; Azimi, Reza

    2010-02-01

    In this paper we introduce a new approach for information retrieval from a Persian document image database without using Optical Character Recognition (OCR). First, an attribute called the subword upper contour label is defined; a pictorial dictionary is then constructed based on this attribute for the subwords. With this approach we address two issues in document image retrieval: keyword spotting and retrieval according to document similarity. The proposed methods have been evaluated on a Persian document image database. The results demonstrate the effectiveness of this approach for document image information retrieval.

  6. Guidelines for Documentation of Computer Programs and Automated Data Systems. (Category: Software; Subcategory: Documentation).

    ERIC Educational Resources Information Center

    Federal Information Processing Standards Publication, 1976

    1976-01-01

    These guidelines provide a basis for determining the content and extent of documentation for computer programs and automated data systems. Content descriptions of ten document types plus examples of how management can determine when to use the various types are included. The documents described are (1) functional requirements documents, (2) data…

  7. Word Spotting and Recognition with Embedded Attributes.

    PubMed

    Almazán, Jon; Gordo, Albert; Fornés, Alicia; Valveny, Ernest

    2014-12-01

    This paper addresses the problems of word spotting and word recognition on images. In word spotting, the goal is to find all instances of a query word in a dataset of images. In recognition, the goal is to recognize the content of the word image, usually aided by a dictionary or lexicon. We describe an approach in which both word images and text strings are embedded in a common vectorial subspace. This is achieved by a combination of label embedding and attributes learning, and a common subspace regression. In this subspace, images and strings that represent the same word are close together, allowing one to cast recognition and retrieval tasks as a nearest neighbor problem. Contrary to most other existing methods, our representation has a fixed length, is low dimensional, and is very fast to compute and, especially, to compare. We test our approach on four public datasets of both handwritten documents and natural images showing results comparable or better than the state-of-the-art on spotting and recognition tasks.
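
    The common-subspace idea can be approximated with off-the-shelf canonical correlation analysis: embed image features and string-attribute features so that matching pairs land close together, then answer queries by nearest neighbor. The sketch below substitutes CCA for the paper's learned subspace regression and uses random stand-ins for its Fisher vectors and PHOC-style attribute embeddings.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Random stand-ins for paired image features and string-attribute features.
rng = np.random.default_rng(0)
n = 200
image_feats = rng.random((n, 96))
string_feats = rng.random((n, 60))

# Project both modalities into a common subspace.
cca = CCA(n_components=32).fit(image_feats, string_feats)
img_emb, str_emb = cca.transform(image_feats, string_feats)

def spot(query_string_idx):
    """Nearest word images for a text query, by cosine similarity."""
    q = str_emb[query_string_idx]
    sims = (img_emb @ q) / (
        np.linalg.norm(img_emb, axis=1) * np.linalg.norm(q) + 1e-9
    )
    return np.argsort(-sims)[:5]

print(spot(0))
```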

  8. Secure content objects

    DOEpatents

    Evans, William D [Cupertino, CA

    2009-02-24

    A secure content object protects electronic documents from unauthorized use. The secure content object includes an encrypted electronic document, a multi-key encryption table having at least one multi-key component, an encrypted header and a user interface device. The encrypted document is encrypted using a document encryption key associated with a multi-key encryption method. The encrypted header includes an encryption marker formed by a random number followed by a derivable variation of the same random number. The user interface device enables a user to input a user authorization. The user authorization is combined with each of the multi-key components in the multi-key encryption key table and used to try to decrypt the encrypted header. If the encryption marker is successfully decrypted, the electronic document may be decrypted. Multiple electronic documents or a document and annotations may be protected by the secure content object.
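
    The encryption-marker check is straightforward to sketch: the header stores a random number followed by a derivable variation of it, so a candidate user authorization can be validated before the document itself is touched. A minimal sketch, assuming Fernet from the cryptography package as the cipher and bitwise complement as the "derivable variation" (both choices are illustrative):

```python
import os
from cryptography.fernet import Fernet, InvalidToken

def make_header(key: bytes) -> bytes:
    """Encrypt a marker: a random number plus a derivable variation of it."""
    r = os.urandom(16)
    marker = r + bytes(b ^ 0xFF for b in r)  # variation: bitwise complement
    return Fernet(key).encrypt(marker)

def try_key(key: bytes, header: bytes) -> bool:
    """Validate a candidate key against the header before touching the document."""
    try:
        marker = Fernet(key).decrypt(header)
    except InvalidToken:
        return False
    r, variant = marker[:16], marker[16:]
    return variant == bytes(b ^ 0xFF for b in r)

key = Fernet.generate_key()
header = make_header(key)
print(try_key(key, header))                    # True: document may be decrypted
print(try_key(Fernet.generate_key(), header))  # False: wrong authorization
```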

  9. Exemplary design of a DICOM structured report template for CBIR integration into radiological routine

    NASA Astrophysics Data System (ADS)

    Welter, Petra; Deserno, Thomas M.; Gülpers, Ralph; Wein, Berthold B.; Grouls, Christoph; Günther, Rolf W.

    2010-03-01

    The large and continuously growing amount of medical image data demands access methods based on content rather than simple text-based queries. The potential benefits of content-based image retrieval (CBIR) systems for computer-aided diagnosis (CAD) are evident and have been demonstrated. Still, CBIR is not a well-established part of the daily routine of radiologists. We have already presented a concept of CBIR integration for the radiology workflow in accordance with the Integrating the Healthcare Enterprise (IHE) framework. The retrieval result is composed as a Digital Imaging and Communications in Medicine (DICOM) Structured Reporting (SR) document. The use of DICOM SR provides interchange with the PACS archive and image viewer. It offers the possibility of further data mining and automatic interpretation of CBIR results. However, existing standard templates do not address the domain of CBIR. We present a design of an SR template customized for CBIR. Our approach is based on the DICOM standard templates and makes use of the mammography and chest CAD SR templates. Reuse of approved SR sub-trees promises a reliable design, which is then adapted to the CBIR domain. We analyze the special CBIR requirements and integrate the new concept of similar images into our template. Our approach also includes the new concept of a set of selected images for defining the processed images for CBIR. A commonly accepted pre-defined template for the presentation and exchange of results in a standardized format promotes the widespread application of CBIR in radiological routine.

  10. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  11. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  12. 32 CFR 813.5 - Shipping or transmitting visual information documentation images.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... documentation images. 813.5 Section 813.5 National Defense Department of Defense (Continued) DEPARTMENT OF THE... visual information documentation images. (a) COMCAM images. Send COMCAM images to the DoD Joint Combat... the approval procedures that on-scene and theater commanders set. (b) Other non-COMCAM images. After...

  13. 32 CFR 813.5 - Shipping or transmitting visual information documentation images.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... documentation images. 813.5 Section 813.5 National Defense Department of Defense (Continued) DEPARTMENT OF THE... visual information documentation images. (a) COMCAM images. Send COMCAM images to the DoD Joint Combat... the approval procedures that on-scene and theater commanders set. (b) Other non-COMCAM images. After...

  14. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  15. 32 CFR 813.5 - Shipping or transmitting visual information documentation images.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... documentation images. 813.5 Section 813.5 National Defense Department of Defense (Continued) DEPARTMENT OF THE... visual information documentation images. (a) COMCAM images. Send COMCAM images to the DoD Joint Combat... the approval procedures that on-scene and theater commanders set. (b) Other non-COMCAM images. After...

  16. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  17. 32 CFR 813.5 - Shipping or transmitting visual information documentation images.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... documentation images. 813.5 Section 813.5 National Defense Department of Defense (Continued) DEPARTMENT OF THE... visual information documentation images. (a) COMCAM images. Send COMCAM images to the DoD Joint Combat... the approval procedures that on-scene and theater commanders set. (b) Other non-COMCAM images. After...

  18. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  19. 32 CFR 813.5 - Shipping or transmitting visual information documentation images.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... documentation images. 813.5 Section 813.5 National Defense Department of Defense (Continued) DEPARTMENT OF THE... visual information documentation images. (a) COMCAM images. Send COMCAM images to the DoD Joint Combat... the approval procedures that on-scene and theater commanders set. (b) Other non-COMCAM images. After...

  20. XDS-I Gateway Development for HIE Connectivity with Legacy PACS at Gil Hospital.

    PubMed

    Simalango, Mikael Fernandus; Kim, Youngchul; Seo, Young Tae; Choi, Young Hwan; Cho, Yong Kyun

    2013-12-01

    The ability to support healthcare document sharing is imperative in a health information exchange (HIE). Sharing imaging documents or images, however, can be challenging, especially when they are stored in a picture archiving and communication system (PACS) archive that does not support document sharing via standard HIE protocols. This research proposes a standard-compliant imaging gateway that enables connectivity between a legacy PACS and the entire HIE. Investigation of the PACS solutions used at Gil Hospital was conducted. An imaging gateway application was then developed using a Java technology stack. Imaging document sharing capability enabled by the gateway was tested by integrating it into Gil Hospital's order communication system and its HIE infrastructure. The gateway can acquire radiology images from a PACS storage system, provide and register the images to Gil Hospital's HIE for document sharing purposes, and make the images retrievable by a cross-enterprise document sharing document viewer. Development of an imaging gateway that mediates communication between a PACS and an HIE can be considered a viable option when the PACS does not support the standard protocol for cross-enterprise document sharing for imaging. Furthermore, the availability of common HIE standards expedites the development and integration of the imaging gateway with an HIE.

  1. XDS-I Gateway Development for HIE Connectivity with Legacy PACS at Gil Hospital

    PubMed Central

    Simalango, Mikael Fernandus; Kim, Youngchul; Seo, Young Tae; Cho, Yong Kyun

    2013-01-01

    Objectives The ability to support healthcare document sharing is imperative in a health information exchange (HIE). Sharing imaging documents or images, however, can be challenging, especially when they are stored in a picture archiving and communication system (PACS) archive that does not support document sharing via standard HIE protocols. This research proposes a standard-compliant imaging gateway that enables connectivity between a legacy PACS and the entire HIE. Methods Investigation of the PACS solutions used at Gil Hospital was conducted. An imaging gateway application was then developed using a Java technology stack. Imaging document sharing capability enabled by the gateway was tested by integrating it into Gil Hospital's order communication system and its HIE infrastructure. Results The gateway can acquire radiology images from a PACS storage system, provide and register the images to Gil Hospital's HIE for document sharing purposes, and make the images retrievable by a cross-enterprise document sharing document viewer. Conclusions Development of an imaging gateway that mediates communication between a PACS and an HIE can be considered a viable option when the PACS does not support the standard protocol for cross-enterprise document sharing for imaging. Furthermore, the availability of common HIE standards expedites the development and integration of the imaging gateway with an HIE. PMID:24523994

  2. Bias in the Flesh: Skin Complexion and Stereotype Consistency in Political Campaigns.

    PubMed

    Messing, Solomon; Jabon, Maria; Plaut, Ethan

    2016-01-01

    There is strong evidence linking skin complexion to negative stereotypes and adverse real-world outcomes. We extend these findings to political ad campaigns, in which skin complexion can be easily manipulated in ways that are difficult to detect. Devising a method to measure how dark a candidate appears in an image, this paper examines how complexion varied with ad content during the 2008 presidential election campaign (study 1). Findings show that darker images were more frequent in negative ads-especially those linking Obama to crime-which aired more frequently as Election Day approached. We then conduct an experiment to document how these darker images can activate stereotypes, and show that a subtle darkness manipulation is sufficient to activate the most negative stereotypes about Blacks-even when the candidate is a famous counter-stereotypical exemplar-Barack Obama (study 2). Further evidence of an evaluative penalty for darker skin comes from an observational study measuring affective responses to depictions of Obama with varying skin complexion, presented via the Affect Misattribution Procedure in the 2008 American National Election Study (study 3). This study demonstrates that darker images are used in a way that complements ad content, and shows that doing so can negatively affect how individuals evaluate candidates and think about politics.

  3. LCS Content Document Application

    NASA Technical Reports Server (NTRS)

    Hochstadt, Jake

    2011-01-01

    My project at KSC during my spring 2011 internship was to develop a Ruby on Rails application to manage Content Documents. A Content Document is a collection of documents and information that describes what software is installed on a Launch Control System computer. It's important for us to make sure the tools we use every day are secure, up-to-date, and properly licensed. Previously, keeping track of the information was done by passing Excel and Word files between different personnel. The goal of the new application is to be able to manage and access the Content Documents through a single database-backed web application. Our LCS team will benefit greatly from this app. Admins will be able to log in securely to track and update the software installed on each computer in a timely manner. We also included export features, such as attaching additional documents that can be downloaded from the web application. The finished application will ease the process of managing Content Documents while streamlining the procedure. Ruby on Rails is a very powerful framework, and I am grateful to have the opportunity to build this application.

  4. Interactive visual comparison of multimedia data through type-specific views

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burtner, Edwin R.; Bohn, Shawn J.; Payne, Deborah A.

    2013-02-05

    Analysts who work with collections of multimedia to perform information foraging understand how difficult it is to connect information across diverse sets of mixed media. The wealth of information from blogs, social media, and news sites often can provide actionable intelligence; however, many of the tools used on these sources of content are not capable of multimedia analysis because they only analyze a single media type. As such, analysts are taxed to keep a mental model of the relationships among each of the media types when generating the broader content picture. To address this need, we have developed Canopy, a novel visual analytic tool for analyzing multimedia. Canopy provides insight into the multimedia data relationships by exploiting the linkages found in text, images, and video co-occurring in the same document and across the collection. Canopy connects derived and explicit linkages and relationships through multiple connected visualizations to aid analysts in quickly summarizing, searching, and browsing collected information to explore relationships and align content. In this paper, we will discuss the features and capabilities of the Canopy system and walk through a scenario illustrating how this system might be used in an operational environment. Keywords: Multimedia (Image/Video/Music) Visualization.

  5. Multimedia systems for art and culture: a case study of Brihadisvara Temple

    NASA Astrophysics Data System (ADS)

    Jain, Anil K.; Goel, Sanjay; Agarwal, Sachin; Mittal, Vipin; Sharma, Hariom; Mahindru, Ranjeev

    1997-01-01

    In India a temple is not only a structure of religious significance and celebration, but it also plays an important role in the social, administrative and cultural life of the locality. Temples have served as centers for learning Indian scriptures. Music and dance were fostered and performed in the precincts of the temples. Built at the end of the 10th century, the Brihadisvara temple signified new design methodologies. We have access to a large number of images, audio and video recordings, architectural drawings and scholarly publications of this temple. A multimedia system for this temple is being designed which is intended to be used for the following purposes: (1) to inform and enrich the general public, and (2) to assist the scholars in their research. Such a system will also preserve and archive old historical documents and images. The large database consists primarily of images which can be retrieved using keywords, but the emphasis here is largely on techniques which will allow access using image content. Besides classifying images as either long shots or close-ups, deformable template matching is used for shape-based query by image content, and digital video retrieval. Further, to exploit the non-linear accessibility of video sequences, key frames are determined to aid the domain experts in getting a quick preview of the video. Our database also has images of several old, and rare manuscripts many of which are noisy and difficult to read. We have enhanced them to make them more legible. We are also investigating the optimal trade-off between image quality and compression ratios.

  6. Imaged document information location and extraction using an optical correlator

    NASA Astrophysics Data System (ADS)

    Stalcup, Bruce W.; Dennis, Phillip W.; Dydyk, Robert B.

    1999-12-01

    Today, the paper document is fast becoming a thing of the past. With the rapid development of fast, inexpensive computing and storage devices, many government and private organizations are archiving their documents in electronic form (e.g., personnel records, medical records, patents, etc.). Many of these organizations are converting their paper archives to electronic images, which are then stored in a computer database. Because of this, there is a need to efficiently organize this data into comprehensive and accessible information resources and provide for rapid access to the information contained within these imaged documents. To meet this need, Litton PRC and Litton Data Systems Division are developing a system, the Imaged Document Optical Correlation and Conversion System (IDOCCS), to provide a total solution to the problem of managing and retrieving textual and graphic information from imaged document archives. At the heart of IDOCCS, optical correlation technology provides a means for the search and retrieval of information from imaged documents. IDOCCS can be used to rapidly search for key words or phrases within the imaged document archives and has the potential to determine the types of languages contained within a document. In addition, IDOCCS can automatically compare an input document with the archived database to determine if it is a duplicate, thereby reducing the overall resources required to maintain and access the document database. Embedded graphics on imaged pages can also be exploited, e.g., imaged documents containing an agency's seal or logo can be singled out. In this paper, we present a description of IDOCCS as well as preliminary performance results and theoretical projections.

  7. To Image...or Not to Image?

    ERIC Educational Resources Information Center

    Bruley, Karina

    1996-01-01

    Provides a checklist of considerations for installing document image processing with an electronic document management system. Other topics include scanning; indexing; the image file life cycle; benefits of imaging; document-driven workflow; and planning for workplace changes like postsorting, creating a scanning room, redeveloping job tasks and…

  8. Information visualization: Beyond traditional engineering

    NASA Technical Reports Server (NTRS)

    Thomas, James J.

    1995-01-01

    This presentation addresses a different aspect of the human-computer interface: specifically, the human-information interface. This interface will be dominated by an emerging technology called Information Visualization (IV). IV goes beyond traditional computer graphics and CAD, and enables new approaches to engineering. IV specifically must visualize text, documents, sound, images, and video in such a way that the human can rapidly interact with and understand the content structure of information entities. IV is the interactive visual interface between humans and their information resources.

  9. Retinal slit lamp video mosaicking.

    PubMed

    De Zanet, Sandro; Rudolph, Tobias; Richa, Rogerio; Tappeiner, Christoph; Sznitman, Raphael

    2016-06-01

    To this day, the slit lamp remains the first tool used by an ophthalmologist to examine patient eyes. Imaging of the retina poses, however, a variety of problems, namely a shallow depth of focus, reflections from the optical system, a small field of view and non-uniform illumination. The use of slit lamp images for documentation and analysis purposes thus remains extremely challenging for ophthalmologists due to large image artifacts. For this reason, we propose an automatic retinal slit lamp video mosaicking method, which enlarges the field of view and reduces the amount of noise and reflections, thus enhancing image quality. Our method is composed of three parts: (i) viable content segmentation, (ii) global registration and (iii) image blending. Frame content is segmented using gradient boosting with custom pixel-wise features. Speeded-up robust features are used for finding pair-wise translations between frames, with robust random sample consensus estimation and graph-based simultaneous localization and mapping for global bundle adjustment. Foreground-aware blending based on feathering merges video frames into comprehensive mosaics. Foreground is segmented successfully with an area under the receiver operating characteristic curve of 0.9557. Mosaicking results from our method and state-of-the-art methods were compared and rated by ophthalmologists, who showed a strong preference for the large field of view provided by our method. The proposed method for global registration of retinal slit lamp images into comprehensive mosaics improves over state-of-the-art methods and is preferred qualitatively.

  10. An Introduction to Document Imaging in the Financial Aid Office.

    ERIC Educational Resources Information Center

    Levy, Douglas A.

    2001-01-01

    First describes the components of a document imaging system in general and then addresses this technology specifically in relation to financial aid document management: its uses and benefits, considerations in choosing a document imaging system, and additional sources for information. (EV)

  11. Line Segmentation in Handwritten Assamese and Meetei Mayek Script Using Seam Carving Based Algorithm

    NASA Astrophysics Data System (ADS)

    Kumar, Chandan Jyoti; Kalita, Sanjib Kr.

    Line segmentation is a key stage in an Optical Character Recognition (OCR) system. This paper primarily concerns the problem of text line extraction on color and grayscale manuscript pages of two major North-east Indian regional scripts, Assamese and Meetei Mayek. Line segmentation of handwritten text in Assamese and Meetei Mayek scripts is an uphill task, primarily because of the structural features of both scripts and varied writing styles. In this paper, line segmentation of a document image is achieved using the seam carving technique. This approach was originally used for content-aware resizing of images, but many researchers now apply seam carving to the line segmentation phase of OCR. Although it is a language-independent technique, most experiments have been conducted on Arabic, Greek, German, and Chinese scripts. Two types of seams are generated: medial seams approximate the orientation of each text line, and separating seams separate one line of text from another. Experiments are performed extensively over various types of documents, and detailed analysis of the evaluations shows that the algorithm performs well even for documents with multiple scripts. We also present a comparative study of the accuracy of this method over different types of data.
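
    As a rough illustration of the separating-seam idea, the sketch below computes one minimum-energy left-to-right seam by dynamic programming over an ink-energy map (high on text pixels, low in inter-line gaps), so the cheapest seam threads between text lines. It is a generic seam computation under those assumptions, not the paper's exact medial/separating seam construction.

        import numpy as np

        def separating_seam(energy: np.ndarray) -> np.ndarray:
            """Return, for each column, the row index of a minimum-energy
            horizontal seam through the (rows x cols) energy map."""
            rows, cols = energy.shape
            cost = energy.astype(float).copy()
            for c in range(1, cols):
                up = np.roll(cost[:, c - 1], 1)     # neighbor one row above
                down = np.roll(cost[:, c - 1], -1)  # neighbor one row below
                up[0] = np.inf                      # no neighbor above row 0
                down[-1] = np.inf                   # no neighbor below last row
                cost[:, c] += np.minimum(np.minimum(up, cost[:, c - 1]), down)
            # Backtrack from the cheapest endpoint in the last column.
            seam = np.empty(cols, dtype=int)
            seam[-1] = int(np.argmin(cost[:, -1]))
            for c in range(cols - 2, -1, -1):
                r = seam[c + 1]
                lo, hi = max(r - 1, 0), min(r + 2, rows)
                seam[c] = lo + int(np.argmin(cost[lo:hi, c]))
            return seam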

  12. Robust binarization of degraded document images using heuristics

    NASA Astrophysics Data System (ADS)

    Parker, Jon; Frieder, Ophir; Frieder, Gideon

    2013-12-01

    Historically significant documents are often discovered with defects that make them difficult to read and analyze. This fact is particularly troublesome if the defects prevent software from performing an automated analysis. Image enhancement methods are used to remove or minimize document defects, improve software performance, and generally make images more legible. We describe an automated image enhancement method that is input-page independent and requires no training data. The approach applies to color or greyscale images with handwritten script, typewritten text, images, and mixtures thereof. We evaluated the image enhancement method against the test images provided by the 2011 Document Image Binarization Contest (DIBCO). Our method outperforms all 2011 DIBCO entrants in terms of average F1 measure - doing so with a significantly lower variance than the top contest entrants. The capability of the proposed method is also illustrated using select images from a collection of historic documents stored at Yad Vashem Holocaust Memorial in Israel.
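
    The DIBCO-style F1 comparison reported above is straightforward to reproduce, assuming boolean ink masks for the binarization output and the ground truth (a generic evaluation sketch, not the contest's official scoring code):

        import numpy as np

        def f_measure(binarized: np.ndarray, ground_truth: np.ndarray) -> float:
            """F1 over foreground (ink) pixels; both inputs boolean, True = ink."""
            tp = np.logical_and(binarized, ground_truth).sum()
            fp = np.logical_and(binarized, ~ground_truth).sum()
            fn = np.logical_and(~binarized, ground_truth).sum()
            precision = tp / (tp + fp) if tp + fp else 0.0
            recall = tp / (tp + fn) if tp + fn else 0.0
            denom = precision + recall
            return 2 * precision * recall / denom if denom else 0.0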

  13. Automated document analysis system

    NASA Astrophysics Data System (ADS)

    Black, Jeffrey D.; Dietzel, Robert; Hartnett, David

    2002-08-01

    A software application has been developed to aid law enforcement and government intelligence gathering organizations in the translation and analysis of foreign language documents with potential intelligence content. The Automated Document Analysis System (ADAS) provides the capability to search (data or text mine) documents in English and the most commonly encountered foreign languages, including Arabic. Hardcopy documents are scanned by a high-speed scanner and are optical character recognized (OCR). Documents obtained in an electronic format bypass the OCR and are copied directly to a working directory. For translation and analysis, the script and the language of the documents are first determined. If the document is not in English, the document is machine translated to English. The documents are searched for keywords and key features in either the native language or translated English. The user can quickly review the document to determine if it has any intelligence content and whether detailed, verbatim human translation is required. The documents and document content are cataloged for potential future analysis. The system allows non-linguists to evaluate foreign language documents and allows for the quick analysis of a large quantity of documents. All document processing can be performed manually or automatically on a single document or a batch of documents.

  14. Page layout analysis and classification for complex scanned documents

    NASA Astrophysics Data System (ADS)

    Erkilinc, M. Sezer; Jaber, Mustafa; Saber, Eli; Bauer, Peter; Depalov, Dejan

    2011-09-01

    A framework for region/zone classification in color and gray-scale scanned documents is proposed in this paper. The algorithm includes modules for extracting text, photo, and strong edge/line regions. First, a text detection module based on wavelet analysis and a Run Length Encoding (RLE) technique is employed. Local and global energy maps in high frequency bands of the wavelet domain are generated and used as initial text maps. Further analysis using RLE yields a final text map. The second module is developed to detect image/photo and pictorial regions in the input document. A block-based classifier using basis vector projections is employed to identify photo candidate regions. A final photo map is then obtained by applying a probabilistic model: Markov random field (MRF)-based maximum a posteriori (MAP) optimization with iterated conditional modes (ICM). The final module detects lines and strong edges using the Hough transform and edge-linkage analysis, respectively. The text, photo, and strong edge/line maps are combined to generate a page layout classification of the scanned target document. Experimental results and objective evaluation show that the proposed technique performs very effectively on a variety of simple and complex scanned document types obtained from the MediaTeam Oulu document database. The proposed page layout classifier can be used in systems for efficient document storage, content-based document retrieval, optical character recognition, mobile phone imagery, and augmented reality.
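
    As a hedged illustration of run-length analysis for text detection, a classic run-length smoothing pass (RLSA) merges nearby ink into candidate text blobs. The paper couples RLE with wavelet energy maps, so this stand-alone sketch is a simplification of that module, not its implementation.

        import numpy as np

        def rlsa_horizontal(binary: np.ndarray, threshold: int) -> np.ndarray:
            """Fill horizontal background runs shorter than `threshold` between
            foreground pixels (input is a 0/1 array), merging characters into
            word/line blobs. Rows of the output are modified in place."""
            out = binary.copy()
            for row in out:
                last_fg = -10**9  # column of the last foreground pixel seen
                for j, v in enumerate(row):
                    if v:
                        gap = j - last_fg - 1
                        if 0 < gap < threshold:
                            row[last_fg + 1:j] = 1  # bridge the short gap
                        last_fg = j
            return out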

  15. Font adaptive word indexing of modern printed documents.

    PubMed

    Marinai, Simone; Marino, Emanuele; Soda, Giovanni

    2006-08-01

    We propose an approach for the word-level indexing of modern printed documents which are difficult to recognize using current OCR engines. By means of word-level indexing, it is possible to retrieve the position of words in a document, enabling queries involving proximity of terms. Web search engines implement this kind of indexing, allowing users to retrieve Web pages on the basis of their textual content. Nowadays, digital libraries hold collections of digitized documents that can be retrieved either by browsing the document images or relying on appropriate metadata assembled by domain experts. Word indexing tools would therefore increase the access to these collections. The proposed system is designed to index homogeneous document collections by automatically adapting to different languages and font styles without relying on OCR engines for character recognition. The approach is based on three main ideas: the use of Self Organizing Maps (SOM) to perform unsupervised character clustering, the definition of one suitable vector-based word representation whose size depends on the word aspect-ratio, and the run-time alignment of the query word with indexed words to deal with broken and touching characters. The most appropriate applications are for processing modern printed documents (17th to 19th centuries) where current OCR engines are less accurate. Our experimental analysis addresses six data sets containing documents ranging from books of the 17th century to contemporary journals.

  16. Utilizing the Structure and Content Information for XML Document Clustering

    NASA Astrophysics Data System (ADS)

    Tran, Tien; Kutty, Sangeetha; Nayak, Richi

    This paper reports on the experiments and results of a clustering approach used in the INEX 2008 document mining challenge. The clustering approach utilizes both the structure and content information of the Wikipedia XML document collection. A latent semantic kernel (LSK) is used to measure the semantic similarity between XML documents based on their content features. The construction of a latent semantic kernel involves computing a singular value decomposition (SVD). On a large feature space matrix, the computation of the SVD is very expensive in terms of time and memory requirements. Thus, in this clustering approach, the dimension of the document space of the term-document matrix is reduced before performing the SVD. The document space reduction is based on the common structural information of the Wikipedia XML document collection. The proposed clustering approach has been shown to be effective on the Wikipedia collection in the INEX 2008 document mining challenge.
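
    A minimal sketch of a latent semantic kernel, assuming a dense term-document matrix small enough for exact SVD (the paper's point is precisely that this step is expensive, hence its document-space reduction); the rank parameter k and function names are illustrative.

        import numpy as np

        def latent_semantic_kernel(term_doc: np.ndarray, k: int):
            """Build a rank-k latent projection from a (terms x docs) matrix and
            return a cosine-style similarity on term-frequency vectors."""
            u, s, vt = np.linalg.svd(term_doc, full_matrices=False)
            p = u[:, :k]  # projection onto the latent term space

            def similarity(d1: np.ndarray, d2: np.ndarray) -> float:
                a, b = p.T @ d1, p.T @ d2
                denom = np.linalg.norm(a) * np.linalg.norm(b)
                return float(a @ b / denom) if denom else 0.0

            return similarity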

  17. Clustering XML Documents Using Frequent Subtrees

    NASA Astrophysics Data System (ADS)

    Kutty, Sangeetha; Tran, Tien; Nayak, Richi; Li, Yuefeng

    This paper presents an experimental study conducted over the INEX 2008 Document Mining Challenge corpus using both the structure and the content of XML documents for clustering them. The concise common substructures known as the closed frequent subtrees are generated using the structural information of the XML documents. The closed frequent subtrees are then used to extract the constrained content from the documents. A matrix containing the term distribution of the documents in the dataset is developed using the extracted constrained content. The k-way clustering algorithm is applied to the matrix to obtain the required clusters. In spite of the large number of documents in the INEX 2008 Wikipedia dataset, the proposed frequent subtree-based clustering approach was successful in clustering the documents. This approach significantly reduces the dimensionality of the terms used for clustering without much loss in accuracy.

  18. Digital document imaging systems: An overview and guide

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This is an aid to NASA managers in planning the selection of a Digital Document Imaging System (DDIS) as a possible solution for document information processing and storage. Intended to serve as a manager's guide, this document contains basic information on digital imaging systems, technology, equipment standards, issues of interoperability and interconnectivity, and issues related to selecting appropriate imaging equipment based upon well defined needs.

  19. Adaptive Algorithms for Automated Processing of Document Images

    DTIC Science & Technology

    2011-01-01

    Title of dissertation: Adaptive Algorithms for Automated Processing of Document Images. Mudit Agrawal, Doctor of Philosophy, 2011. Dissertation submitted to the Faculty of the Graduate School of the University…

  20. 48 CFR 1506.303-2 - Content.

    Code of Federal Regulations, 2010 CFR

    1997-10-01

    48 Federal Acquisition Regulations System, 1506.303-2 Content. The documentation requirements in this section apply only to acquisitions processed… synopsis in the JOFOC. (See 1506.371(d) for contents of the evaluation document.) [50 FR 14357, Apr. 11…

  1. A Framework for Integration of Heterogeneous Medical Imaging Networks

    PubMed Central

    Viana-Ferreira, Carlos; Ribeiro, Luís S; Costa, Carlos

    2014-01-01

    Medical imaging is of increasing importance in medical diagnosis and treatment support. Much of this is due to computers, which have revolutionized medical imaging not only in the acquisition process but also in the way images are visualized, stored, exchanged and managed. Picture Archiving and Communication Systems (PACS) are an example of how medical imaging takes advantage of computers. To solve problems of interoperability between PACS and medical imaging equipment, the Digital Imaging and Communications in Medicine (DICOM) standard was defined and is widely implemented in current solutions. More recently, the need to exchange medical data between distinct institutions resulted in the Integrating the Healthcare Enterprise (IHE) initiative, which contains a content profile especially conceived for medical imaging exchange: Cross-Enterprise Document Sharing for Imaging (XDS-I). Moreover, due to application requirements, many solutions developed private networks to support their services. For instance, some applications support enhanced query and retrieve over DICOM object metadata. This paper proposes an integration framework for medical imaging networks that provides protocol interoperability and data federation services. It is an extensible plugin system that supports standard approaches (DICOM and XDS-I), but is also capable of supporting private protocols. The framework is being used in the Dicoogle Open Source PACS. PMID:25279021
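
    The extensible plugin idea can be sketched as a small registry that fans a federated query out to per-protocol plugins. The real framework is Java-based and integrated with Dicoogle, so the Python names here are illustrative assumptions, not the project's API.

        from abc import ABC, abstractmethod

        class QueryPlugin(ABC):
            """One network protocol (DICOM, XDS-I, or a private protocol)."""
            name: str

            @abstractmethod
            def query(self, criteria: dict) -> list:
                ...

        class PluginRegistry:
            def __init__(self):
                self._plugins: dict[str, QueryPlugin] = {}

            def register(self, plugin: QueryPlugin) -> None:
                self._plugins[plugin.name] = plugin

            def federated_query(self, criteria: dict) -> list:
                """Data federation: fan the query out to every registered
                protocol and merge the results."""
                results = []
                for plugin in self._plugins.values():
                    results.extend(plugin.query(criteria))
                return results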

  2. A framework for integration of heterogeneous medical imaging networks.

    PubMed

    Viana-Ferreira, Carlos; Ribeiro, Luís S; Costa, Carlos

    2014-01-01

    Medical imaging is of increasing importance in medical diagnosis and treatment support. Much of this is due to computers, which have revolutionized medical imaging not only in the acquisition process but also in the way images are visualized, stored, exchanged and managed. Picture Archiving and Communication Systems (PACS) are an example of how medical imaging takes advantage of computers. To solve problems of interoperability between PACS and medical imaging equipment, the Digital Imaging and Communications in Medicine (DICOM) standard was defined and is widely implemented in current solutions. More recently, the need to exchange medical data between distinct institutions resulted in the Integrating the Healthcare Enterprise (IHE) initiative, which contains a content profile especially conceived for medical imaging exchange: Cross-Enterprise Document Sharing for Imaging (XDS-I). Moreover, due to application requirements, many solutions developed private networks to support their services. For instance, some applications support enhanced query and retrieve over DICOM object metadata. This paper proposes an integration framework for medical imaging networks that provides protocol interoperability and data federation services. It is an extensible plugin system that supports standard approaches (DICOM and XDS-I), but is also capable of supporting private protocols. The framework is being used in the Dicoogle Open Source PACS.

  3. Geometric rectification of camera-captured document images.

    PubMed

    Liang, Jian; DeMenthon, Daniel; Doermann, David

    2008-04-01

    Compared to typical scanners, handheld cameras offer convenient, flexible, portable, and non-contact image capture, which enables many new applications and breathes new life into existing ones. However, camera-captured documents may suffer from distortions caused by non-planar document shape and perspective projection, which lead to failure of current OCR technologies. We present a geometric rectification framework for restoring the frontal-flat view of a document from a single camera-captured image. Our approach estimates 3D document shape from texture flow information obtained directly from the image without requiring additional 3D/metric data or prior camera calibration. Our framework provides a unified solution for both planar and curved documents and can be applied in many, especially mobile, camera-based document analysis applications. Experiments show that our method produces results that are significantly more OCR compatible than the original images.

  4. Crossroads 2000 proceedings [table of contents hyperlinked to documents

    DOT National Transportation Integrated Search

    1998-08-19

    This document's table of contents hyperlinks to the 76 papers presented at the Crossroads 2000 Conference. The documents are housed at the web site for Iowa State University Center for Transportation Research and Education. A selection of 14 individu...

  5. Document Image Processing: Going beyond the Black-and-White Barrier. Progress, Issues and Options with Greyscale and Colour Image Processing.

    ERIC Educational Resources Information Center

    Hendley, Tom

    1995-01-01

    Discussion of digital document image processing focuses on issues and options associated with greyscale and color image processing. Topics include speed; size of original document; scanning resolution; markets for different categories of scanners, including photographic libraries, publishing, and office applications; hybrid systems; data…

  6. Forensic hash for multimedia information

    NASA Astrophysics Data System (ADS)

    Lu, Wenjun; Varna, Avinash L.; Wu, Min

    2010-01-01

    Digital multimedia such as images and videos are prevalent on today's internet and have significant social impact, as evidenced by the proliferation of social networking sites with user-generated content. Due to the ease of generating and modifying images and videos, it is critical to establish trustworthiness for online multimedia information. In this paper, we propose novel approaches to perform multimedia forensics using compact side information to reconstruct the processing history of a document. We refer to this as FASHION, standing for Forensic hASH for informatION assurance. Based on the Radon transform and scale space theory, the proposed forensic hash is compact and can effectively estimate the parameters of geometric transforms and detect local tampering that an image may have undergone. The forensic hash is designed to answer a broader range of questions regarding the processing history of multimedia data than the simple binary decision from traditional robust image hashing, and also offers more efficient and accurate forensic analysis than multimedia forensic techniques that do not use any side information.
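
    A simplified stand-in for a Radon-based hash, assuming scikit-image is available: it keeps only a coarsely re-binned, energy-normalized sinogram as the compact side information. The actual FASHION hash adds scale-space features and alignment components, so this sketch shows the flavor of the construction, not the published scheme.

        import numpy as np
        from skimage.transform import radon

        def radon_hash(gray: np.ndarray, n_angles: int = 32,
                       n_bins: int = 16) -> np.ndarray:
            """Compact signature from coarsely quantized Radon projections of a
            grayscale image; geometric transforms of the image shift/scale the
            sinogram in predictable ways, which is what enables estimation."""
            theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
            sinogram = radon(gray.astype(float), theta=theta, circle=False)
            # Re-bin each projection down to n_bins values.
            idx = np.linspace(0, sinogram.shape[0], n_bins + 1).astype(int)
            coarse = np.array([[sinogram[a:b, j].sum() for j in range(n_angles)]
                               for a, b in zip(idx[:-1], idx[1:])])
            return coarse / (np.linalg.norm(coarse) + 1e-12)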

  7. Synthesis and Preclinical Characterization of a Cationic Iodinated Imaging Contrast Agent (CA4+) and Its Use for Quantitative Computed Tomography of Ex Vivo Human Hip Cartilage.

    PubMed

    Stewart, Rachel C; Patwa, Amit N; Lusic, Hrvoje; Freedman, Jonathan D; Wathier, Michel; Snyder, Brian D; Guermazi, Ali; Grinstaff, Mark W

    2017-07-13

    Contrast agents that go beyond qualitative visualization and enable quantitative assessments of functional tissue performance represent the next generation of clinically useful imaging tools. An optimized and efficient large-scale synthesis of a cationic iodinated contrast agent (CA4+) is described for imaging articular cartilage. Contrast-enhanced CT (CECT) using CA4+ reveals significantly greater agent uptake of CA4+ in articular cartilage compared to that of similar anionic or nonionic agents, and CA4+ uptake follows Donnan equilibrium theory. The CA4+ CECT attenuation obtained from imaging ex vivo human hip cartilage correlates with the glycosaminoglycan content, equilibrium modulus, and coefficient of friction, which are key indicators of cartilage functional performance and osteoarthritis stage. Finally, preliminary toxicity studies in a rat model show no adverse events, and a pharmacokinetics study documents a peak plasma concentration 30 min after dosing, with the agent no longer present in vivo at 96 h via excretion in the urine.

  8. Effect of Silicon in U-10Mo Alloy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kautz, Elizabeth J.; Devaraj, Arun; Kovarik, Libor

    2017-08-31

    This document details a method for evaluating the effect of silicon impurity content on U-10Mo alloys. Silicon concentration in U-10Mo alloys has been shown to impact the following: volume fraction of precipitate phases, effective density of the final alloy, and 235-U enrichment in the gamma-UMo matrix. This report presents a model for calculating these quantities as a function of silicon concentration, which, along with fuel foil characterization data, will serve as a reference for quality control of the U-10Mo final alloy Si content. Additionally, detailed characterization using scanning electron microscope imaging, transmission electron microscope diffraction, and atom probe tomography showed that silicon impurities present in U-10Mo alloys form a Si-rich precipitate phase.

  9. Impact of incomplete correspondence between document titles and texts on users' representations: a cognitive and linguistic analysis based on 25 technical documents.

    PubMed

    Eyrolle, Hélène; Virbel, Jacques; Lemarié, Julie

    2008-03-01

    Based on previous research in the field of cognitive psychology, highlighting the facilitatory effects of titles on several text-related activities, this paper looks at the extent to which titles reflect text content. An exploratory study of real-life technical documents investigated the content of their Subject lines, which linguistic analyses had led us to regard as titles. The study showed that most of the titles supplied by the writers failed to represent the documents' contents and that most users failed to detect this lack of validity.

  10. Extending the Life of Virtual Heritage: Reuse of Tls Point Clouds in Synthetic Stereoscopic Spherical Images

    NASA Astrophysics Data System (ADS)

    Garcia Fernandez, J.; Tammi, K.; Joutsiniemi, A.

    2017-02-01

    Recent advances in Terrestrial Laser Scanning (TLS), in terms of cost and flexibility, have consolidated this technology as an essential tool for the documentation and digitalization of Cultural Heritage. However, once the TLS data is used, it basically remains stored and left to waste. How can highly accurate and dense point clouds (of the built heritage) be processed for their reuse, especially to engage a broader audience? This paper aims to answer this question through a channel that minimizes the need for expert knowledge while enhancing the interactivity with the as-built digital data: Virtual Heritage dissemination through the production of VR content. Driven by the ProDigiOUs project's guidelines on data dissemination (EU funded), this paper advances a production path to transform the point cloud into virtual stereoscopic spherical images, taking into account the different visual features that produce depth perception, and especially those prompting visual fatigue while experiencing the VR content. Finally, we present the results of the Hiedanranta scans transformed into stereoscopic spherical animations.
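
    The core of such a production path, projecting TLS points into an equirectangular (spherical) image for each eye, can be sketched as below. Real omni-stereo rendering varies the stereo baseline with viewing direction, so the fixed per-eye offset here is a simplifying assumption of this sketch, not the paper's pipeline.

        import numpy as np

        def equirectangular_project(points: np.ndarray, eye_offset: float,
                                    width: int = 4096, height: int = 2048):
            """Map TLS points (N, 3) to pixel coordinates of one equirectangular
            image, with the virtual eye shifted by `eye_offset` along x; rendering
            a +/- half-interocular pair yields the stereoscopic spherical image."""
            p = points - np.array([eye_offset, 0.0, 0.0])
            x, y, z = p[:, 0], p[:, 1], p[:, 2]
            lon = np.arctan2(y, x)                # longitude in [-pi, pi]
            lat = np.arctan2(z, np.hypot(x, y))   # latitude in [-pi/2, pi/2]
            u = (lon / (2 * np.pi) + 0.5) * (width - 1)
            v = (0.5 - lat / np.pi) * (height - 1)
            return u, v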

  11. A Hybrid Digital-Signature and Zero-Watermarking Approach for Authentication and Protection of Sensitive Electronic Documents

    PubMed Central

    Kabir, Muhammad N.; Alginahi, Yasser M.

    2014-01-01

    This paper addresses the problems and threats associated with verification of integrity, proof of authenticity, tamper detection, and copyright protection for digital-text content. Such issues have largely been addressed in the literature for images, audio, and video, with only a few papers addressing the challenge of sensitive plain-text media under known constraints. Specifically, with text as the predominant online communication medium, it becomes crucial that techniques are deployed to protect such information. A number of digital-signature, hashing, and watermarking schemes have been proposed that essentially bind source data or embed invisible data in a cover medium to achieve their goal. While many such complex schemes with resource redundancies are sufficient for offline and less-sensitive texts, this paper proposes a hybrid approach based on zero-watermarking and digital-signature-like manipulations for sensitive text documents in order to achieve content originality and integrity verification without physically modifying the cover text in any way. The proposed algorithm was implemented and shown to be robust against undetected content modifications and is capable of confirming proof of originality whilst detecting and locating deliberate/nondeliberate tampering. Additionally, enhancements in resource utilisation and reduced redundancies were achieved in comparison to traditional encryption-based approaches. Finally, analysis and remarks are made about the current state of the art, and future research issues are discussed under the given constraints. PMID:25254247
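
    A minimal sketch of the zero-watermarking idea, assuming a deliberately simple content feature (word initials and lengths) bound to an owner key; the paper's actual feature extraction and signature scheme are more elaborate. The key point the sketch preserves is that the cover text is never modified: the signature is lodged with a trusted party instead of being embedded.

        import hashlib

        def zero_watermark(text: str, owner_key: str) -> str:
            """Derive a registration signature from the cover text without
            altering it; features here are each word's initial and length."""
            features = "".join(f"{w[0]}{len(w)}" for w in text.split())
            return hashlib.sha256((owner_key + features).encode("utf-8")).hexdigest()

        def verify(text: str, owner_key: str, registered: str) -> bool:
            """Tampering that changes the extracted features breaks the match."""
            return zero_watermark(text, owner_key) == registered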

  12. Influence of Burke and Lessing on the Semiotic Theory of Document Design: Ideologies and Good Visual Images of Documents.

    ERIC Educational Resources Information Center

    Ding, Daniel D.

    2000-01-01

    Presents historical roots of page design principles, arguing that current theories and practices of document design have their roots in gender-related theories of images. Claims visual design should be evaluated regarding the rhetorical situation in which the design is used. Focuses on visual images of documents in professional communication,…

  13. Billet Level Documentation Policy Review

    DTIC Science & Technology

    1991-10-25

    Only fragments of this report's table of contents and executive summary are recoverable: "…rather than substance. The objectives of the NAADS TDA document were: to provide a standard means of recording in one document the mission, capabilities…"

  14. Web-based document image processing

    NASA Astrophysics Data System (ADS)

    Walker, Frank L.; Thoma, George R.

    1999-12-01

    Increasing numbers of research libraries are turning to the Internet for electronic interlibrary loan and for document delivery to patrons. This has been made possible through the widespread adoption of software such as Ariel and DocView. Ariel, a product of the Research Libraries Group, converts paper-based documents to monochrome bitmapped images and delivers them over the Internet. The National Library of Medicine's DocView is primarily designed for library patrons. While libraries and their patrons are beginning to reap the benefits of this new technology, barriers exist, e.g., differences in image file format, that lead to difficulties in the use of library document information. To research how to overcome such barriers, the Communications Engineering Branch of the Lister Hill National Center for Biomedical Communications, an R and D division of NLM, has developed a web site called the DocMorph Server. This is part of an ongoing intramural R and D program in document imaging that has spanned many aspects of electronic document conversion and preservation, Internet document transmission and document usage. The DocMorph Server web site is designed to fill two roles. First, in a role that will benefit both libraries and their patrons, it allows Internet users to upload scanned image files for conversion to alternative formats, thereby enabling wider delivery and easier usage of library document information. Second, the DocMorph Server provides the design team an active test bed for evaluating the effectiveness and utility of new document image processing algorithms and functions, so that they may be evaluated for possible inclusion in other image processing software products being developed at NLM or elsewhere. This paper describes the design of the prototype DocMorph Server and the image processing functions being implemented on it.
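
    An upload-and-convert service of this kind can be approximated in a couple of lines with Pillow. This toy version handles only single-frame images and picks the output codec from the destination extension; it is a stand-in for the idea, not DocMorph's actual implementation.

        from PIL import Image

        def convert_document_image(src_path: str, dst_path: str) -> None:
            """Convert an uploaded scan to another format (e.g. TIFF -> PDF);
            Pillow infers the output format from the file extension."""
            with Image.open(src_path) as im:
                im.convert("RGB").save(dst_path)

        # e.g. convert_document_image("scan.tif", "scan.pdf")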

  15. SharedCanvas: A Collaborative Model for Medieval Manuscript Layout Dissemination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanderson, Robert D.; Albritton, Benjamin; Schwemmer, Rafael

    2011-01-01

    In this paper we present a model based on the principles of Linked Data that can be used to describe the interrelationships of images, texts and other resources to facilitate the interoperability of repositories of medieval manuscripts or other culturally important handwritten documents. The model is designed from a set of requirements derived from the real world use cases of some of the largest digitized medieval content holders, and instantiations of the model are intended as the input to collection-independent page turning and scholarly presentation interfaces. A canvas painting paradigm, such as in PDF and SVG, was selected based on the lack of a one-to-one correlation between image and page, and to fulfill complex requirements such as when the full text of a page is known, but only fragments of the physical object remain. The model is implemented using technologies such as OAI-ORE Aggregations and OAC Annotations, as the fundamental building blocks of emerging Linked Digital Libraries. The model and implementation are evaluated through prototypes of both content providing and consuming applications. Although the system was designed from requirements drawn from the medieval manuscript domain, it is applicable to any layout-oriented presentation of images of text.

  16. Shuttle Case Study Collection Website Development

    NASA Technical Reports Server (NTRS)

    Ransom, Khadijah S.; Johnson, Grace K.

    2012-01-01

    As a continuation from summer 2012, the Shuttle Case Study Collection has been developed using lessons learned documented by NASA engineers, analysts, and contractors. Decades of information related to processing and launching the Space Shuttle is gathered into a single database to provide educators with an alternative means to teach real-world engineering processes. The goal is to provide additional engineering materials that enhance critical thinking, decision making, and problem solving skills. During this second phase of the project, the Shuttle Case Study Collection website was developed. Extensive HTML coding to link downloadable documents, videos, and images was required, as was training to learn NASA's Content Management System (CMS) for website design. As the final stage of the collection development, the website is designed to allow for distribution of information to the public as well as for case study report submissions from other educators online.

  17. Evaluation of image quality of digital photo documentation of female genital injuries following sexual assault.

    PubMed

    Ernst, E J; Speck, Patricia M; Fitzpatrick, Joyce J

    2011-12-01

    With the patient's consent, physical injuries sustained in a sexual assault are evaluated and treated by the sexual assault nurse examiner (SANE) and documented on preprinted traumagrams and with photographs. Digital imaging is now available to the SANE for documentation of sexual assault injuries, but studies of the image quality of forensic digital imaging of female genital injuries after sexual assault were not found in the literature. The Photo Documentation Image Quality Scoring System (PDIQSS) was developed to rate the image quality of digital photo documentation of female genital injuries after sexual assault. Three expert observers performed evaluations on 30 separate images at two points in time. An image quality score, the sum of eight integral technical and anatomical attributes on the PDIQSS, was obtained for each image. Individual image quality ratings, defined by rating image quality for each of the data, were also determined. The results demonstrated a high level of image quality and agreement when measured in all dimensions. For the SANE in clinical practice, the results of this study indicate that a high degree of agreement exists between expert observers when using the PDIQSS to rate image quality of individual digital photographs of female genital injuries after sexual assault.
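
    Scoring with the PDIQSS reduces to summing eight attribute ratings. The abstract does not name the attributes, so the ones below are hypothetical placeholders, not the instrument's actual items.

        # Hypothetical attribute names; the real PDIQSS items are not given
        # in the abstract.
        PDIQSS_ATTRIBUTES = [
            "focus", "exposure", "color_balance", "framing",
            "landmark_visibility", "injury_visibility", "scale", "orientation",
        ]

        def pdiqss_score(ratings: dict) -> int:
            """Sum the eight per-attribute ratings into one image quality score."""
            missing = set(PDIQSS_ATTRIBUTES) - set(ratings)
            if missing:
                raise ValueError(f"unrated attributes: {sorted(missing)}")
            return sum(ratings[a] for a in PDIQSS_ATTRIBUTES)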

  18. Nutritional quality and child-oriented marketing of breakfast cereals in Guatemala.

    PubMed

    Soo, J; Letona, P; Chacon, V; Barnoya, J; Roberto, C A

    2016-01-01

    Food marketing has been implicated as an important driver of obesity. However, few studies have examined food marketing in low- and middle-income countries (LMICs). This study documents the prevalence of advertising on cereal boxes in Guatemala and examines associations between various marketing strategies and nutritional quality. One box from all available cereals was purchased from a supermarket located in an urban area in Guatemala City, Guatemala. A content analysis was performed to document child-oriented marketing practices, product claims and health-evoking images. The Nutrient Profile Model (NPM) was used to calculate an overall nutrition score for each cereal (the higher the score, the lower the nutritional quality). In all, 106 cereals were purchased, and half of the cereals featured child-oriented marketing (54, 50.9%). Cereals had a mean (±s.d.) of 5.10±2.83 product claims per cereal, and most cereals (102, 96.2%) contained health-evoking images. Child-oriented cereals had, on average, higher NPM scores (13.0±0.55 versus 7.90±0.74, P<0.001) and sugar content (10.1±0.48 versus 6.19±0.50 g/30 g, P<0.001) compared with non-child oriented cereals. Cereals with health claims were not significantly healthier than those without claims. In Guatemala, cereals targeting children were generally of poor nutritional quality. Cereals displaying health claims were also not healthier than those without such claims. Our findings support the need for regulations restricting the use of child-oriented marketing and health claims for certain products.

  19. System for information discovery

    DOEpatents

    Pennock, Kelly A [Richland, WA; Miller, Nancy E [Kennewick, WA

    2002-11-19

    A sequence of word filters is used to eliminate terms in the database that do not discriminate document content, resulting in a filtered word set and a topic word set whose members are highly predictive of content. These two word sets are then formed into a two-dimensional matrix, with matrix entries calculated as the conditional probability that a document will contain the word in a row given that it contains the word in a column. The matrix representation allows the resultant vectors to be utilized to interpret document contents.
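
    The matrix construction is direct to sketch given a binary document-word incidence matrix over the filtered and topic word sets; variable and function names are illustrative, not from the patent.

        import numpy as np

        def conditional_probability_matrix(doc_word: np.ndarray) -> np.ndarray:
            """Entry (i, j) = P(document contains word i | it contains word j),
            where `doc_word` is a binary (documents x words) incidence matrix."""
            counts = doc_word.T.astype(float) @ doc_word  # joint document counts
            word_totals = np.diag(counts).copy()          # docs containing word j
            word_totals[word_totals == 0] = 1.0           # avoid division by zero
            return counts / word_totals[np.newaxis, :]    # normalize each column j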

  20. Imaging Modalities Relevant to Intracranial Pressure Assessment in Astronauts: A Case-Based Discussion

    NASA Technical Reports Server (NTRS)

    Sargsyan, Ashot E.; Kramer, Larry A.; Hamilton, Douglas R.; Hamilton, Douglas R.; Fogarty, Jennifer; Polk, J. D.

    2010-01-01

    Introduction: Intracranial pressure (ICP) elevation has been inferred or documented in a number of space crewmembers. Recent advances in noninvasive imaging technology offer new possibilities for ICP assessment. Most International Space Station (ISS) partner agencies have adopted a battery of occupational health monitoring tests including magnetic resonance imaging (MRI) pre- and postflight, and high-resolution sonography of the orbital structures in all mission phases including during flight. We hypothesize that joint consideration of data from the two techniques has the potential to improve quality and continuity of crewmember monitoring and care. Methods: Specially designed MRI and sonographic protocols were used to image eyes and optic nerves (ON), including the meningeal sheaths. Specific crewmembers' multi-modality imaging data were analyzed to identify points of mutual validation as well as unique features of a complementary nature. Results and Conclusion: Magnetic resonance imaging (MRI) and high-resolution sonography are both tomographic methods; however, images obtained by the two modalities are based on different physical phenomena and use different acquisition principles. Consideration of the images acquired by these two modalities allows cross-validating findings related to the volume and fluid content of the ON subarachnoid space, shape of the globe, and other anatomical features of the orbit. Each of the imaging modalities also has unique advantages, making them complementary techniques.

  1. Curating and Preserving the Big Canopy Database System: an Active Curation Approach using SEAD

    NASA Astrophysics Data System (ADS)

    Myers, J.; Cushing, J. B.; Lynn, P.; Weiner, N.; Ovchinnikova, A.; Nadkarni, N.; McIntosh, A.

    2015-12-01

    Modern research is increasingly dependent upon highly heterogeneous data and on the associated cyberinfrastructure developed to organize, analyze, and visualize that data. However, due to the complexity and custom nature of such combined data-software systems, it can be very challenging to curate and preserve them for the long term at reasonable cost and in a way that retains their scientific value. In this presentation, we describe how this challenge was met in preserving the Big Canopy Database (CanopyDB) system using an agile approach and leveraging the Sustainable Environment - Actionable Data (SEAD) DataNet project's hosted data services. The CanopyDB system was developed over more than a decade at Evergreen State College to address the needs of forest canopy researchers. It is an early yet sophisticated exemplar of the type of system that has become common in biological research and science in general, including multiple relational databases for different experiments, a custom database generation tool used to create them, an image repository, and desktop and web tools to access, analyze, and visualize this data. SEAD provides secure project spaces with a semantic content abstraction (typed content with arbitrary RDF metadata statements and relationships to other content), combined with a standards-based curation and publication pipeline resulting in packaged research objects with Digital Object Identifiers. Using SEAD, our cross-project team was able to incrementally ingest CanopyDB components (images, datasets, software source code, documentation, executables, and virtualized services) and to iteratively define and extend the metadata and relationships needed to document them. We believe that both the process, and the richness of the resultant standards-based (OAI-ORE) preservation object, hold lessons for the development of best-practice solutions for preserving scientific data in association with the tools and services needed to derive value from it.

  2. Onboard shuttle on-line software requirements system: Prototype

    NASA Technical Reports Server (NTRS)

    Kolkhorst, Barbara; Ogletree, Barry

    1989-01-01

    The prototype discussed here was developed as a proof of concept for a system that could support high volumes of requirements documents with integrated text and graphics; the solution proposed here could be extended to other projects whose goal is to place paper documents in an electronic system for viewing and printing purposes. The technical problems (such as conversion of documentation between word processors, management of a variety of graphics file formats, and difficulties involved in scanning integrated text and graphics) would be very similar for other systems of this type. Indeed, technological advances in areas such as scanning hardware and software and display terminals ensure that some of the problems encountered here will be solved in the near term (less than five years). Examples of these solvable problems include automated input of integrated text and graphics, errors in the recognition process, and the loss of image information which results from the digitization process. The solution developed for the Online Software Requirements System is modular and allows hardware and software components to be upgraded or replaced as industry solutions mature. The extensive commercial software content allows the NASA customer to apply resources to solving the problem and maintaining documents.

  3. Automated Content Detection for Cassini Images

    NASA Astrophysics Data System (ADS)

    Stanboli, A.; Bue, B.; Wagstaff, K.; Altinok, A.

    2017-06-01

    NASA missions generate numerous images that are organized in increasingly large archives. These image archives are currently not searchable by image content. We present an automated content detection prototype that can enable content search.

  4. Reproducible Bioconductor workflows using browser-based interactive notebooks and containers.

    PubMed

    Almugbel, Reem; Hung, Ling-Hong; Hu, Jiaming; Almutairy, Abeer; Ortogero, Nicole; Tamta, Yashaswi; Yeung, Ka Yee

    2018-01-01

    Bioinformatics publications typically include complex software workflows that are difficult to describe in a manuscript. We describe and demonstrate the use of interactive software notebooks to document and distribute bioinformatics research. We provide a user-friendly tool, BiocImageBuilder, that allows users to easily distribute their bioinformatics protocols through interactive notebooks uploaded to either a GitHub repository or a private server. We present four different interactive Jupyter notebooks using R and Bioconductor workflows to infer differential gene expression, analyze cross-platform datasets, process RNA-seq data and KinomeScan data. These interactive notebooks are available on GitHub. The analytical results can be viewed in a browser. Most importantly, the software contents can be executed and modified. This is accomplished using Binder, which runs the notebook inside software containers, thus avoiding the need to install any software and ensuring reproducibility. All the notebooks were produced using custom files generated by BiocImageBuilder. BiocImageBuilder facilitates the publication of workflows with a point-and-click user interface. We demonstrate that interactive notebooks can be used to disseminate a wide range of bioinformatics analyses. The use of software containers to mirror the original software environment ensures reproducibility of results. Parameters and code can be dynamically modified, allowing for robust verification of published results and encouraging rapid adoption of new methods. Given the increasing complexity of bioinformatics workflows, we anticipate that these interactive software notebooks will become as necessary for documenting software methods as traditional laboratory notebooks have been for documenting bench protocols, and as ubiquitous.

  5. NASA software documentation standard software engineering program

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The NASA Software Documentation Standard (hereinafter referred to as Standard) can be applied to the documentation of all NASA software. This Standard is limited to documentation format and content requirements. It does not mandate specific management, engineering, or assurance standards or techniques. This Standard defines the format and content of documentation for software acquisition, development, and sustaining engineering. Format requirements address where information shall be recorded and content requirements address what information shall be recorded. This Standard provides a framework to allow consistency of documentation across NASA and visibility into the completeness of project documentation. This basic framework consists of four major sections (or volumes). The Management Plan contains all planning and business aspects of a software project, including engineering and assurance planning. The Product Specification contains all technical engineering information, including software requirements and design. The Assurance and Test Procedures contains all technical assurance information, including Test, Quality Assurance (QA), and Verification and Validation (V&V). The Management, Engineering, and Assurance Reports is the library and/or listing of all project reports.

  6. 43 CFR 45.11 - What are the form and content requirements for documents under this subpart?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    43 Public Lands: Interior, § 45.11 What are the form and content requirements for documents under this subpart? Office of the Secretary of the Interior, Conditions and Prescriptions in FERC Hydropower Licenses, Hearing Process, Document…

  7. World Wide Web Based Image Search Engine Using Text and Image Content Features

    NASA Astrophysics Data System (ADS)

    Luo, Bo; Wang, Xiaogang; Tang, Xiaoou

    2003-01-01

    Using both text and image content features, a hybrid image retrieval system for the World Wide Web is developed in this paper. We first use a text-based image meta-search engine to retrieve images from the Web based on the text information on the image host pages, providing an initial image set. Because of the high speed and low cost of the text-based approach, we can easily retrieve a broad coverage of images with a high recall rate and a relatively low precision. An image content based ordering is then performed on the initial image set. All the images are clustered into different folders based on the image content features. In addition, the images can be re-ranked by the content features according to user feedback. Such a design makes it truly practical to use both text and image content for image retrieval over the Internet. Experimental results confirm the efficiency of the system.
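
    The content-based ordering stage can be sketched as clustering the text-retrieved set on a simple visual feature. The abstract does not specify which features the system uses, so a joint color histogram stands in for them here, and the function names are illustrative.

        import numpy as np
        from sklearn.cluster import KMeans

        def color_histogram(rgb: np.ndarray, bins: int = 8) -> np.ndarray:
            """Joint RGB histogram of an (H, W, 3) uint8 image, L1-normalized."""
            h, _ = np.histogramdd(rgb.reshape(-1, 3), bins=(bins,) * 3,
                                  range=((0, 256),) * 3)
            return h.ravel() / max(h.sum(), 1)

        def cluster_initial_results(images: list, n_clusters: int = 5):
            """Group a text-retrieved image set into content-based folders."""
            feats = np.stack([color_histogram(im) for im in images])
            return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)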

  8. Contrast performance modeling of broadband reflective imaging systems with hypothetical tunable filter fore-optics

    NASA Astrophysics Data System (ADS)

    Hodgkin, Van A.

    2015-05-01

    Most mass-produced, commercially available and fielded military reflective imaging systems operate across broad swaths of the visible, near infrared (NIR), and shortwave infrared (SWIR) wavebands without any spectral selectivity within those wavebands. In applications that employ these systems, it is not uncommon to be imaging a scene in which the image contrasts between the objects of interest, i.e., the targets, and the objects of little or no interest, i.e., the backgrounds, are sufficiently low to make target discrimination difficult or uncertain. This can occur even when the spectral distribution of the target and background reflectivity across the given waveband differ significantly from each other, because the fundamental components of broadband image contrast are the spectral integrals of the target and background signatures. Spectral integration by the detectors tends to smooth out any differences. Hyperspectral imaging is one approach to preserving, and thus highlighting, spectral differences across the scene, even when the waveband integrated signatures would be about the same, but it is an expensive, complex, noncompact, and untimely solution. This paper documents a study of how the capability to selectively customize the spectral width and center wavelength with a hypothetical tunable fore-optic filter would allow a broadband reflective imaging sensor to optimize image contrast as a function of scene content and ambient illumination.
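
    The "spectral integrals" argument is easy to make concrete: band-integrated target and background signals through an idealized top-hat filter give a Michelson-style contrast that can be swept over center wavelength and width. The top-hat filter model and the contrast definition are assumptions of this sketch, not the paper's model.

        import numpy as np

        def band_contrast(wl, target_refl, background_refl, illumination,
                          center, width):
            """Contrast of band-integrated target/background signals through an
            ideal band-pass filter [center - width/2, center + width/2]; all
            spectra are sampled on the wavelength grid `wl`."""
            in_band = np.abs(wl - center) <= width / 2.0
            t = np.trapz((target_refl * illumination)[in_band], wl[in_band])
            b = np.trapz((background_refl * illumination)[in_band], wl[in_band])
            return abs(t - b) / (t + b) if (t + b) > 0 else 0.0

        # Sweeping `center` and `width` over the waveband and keeping the argmax
        # mimics the contrast optimization a tunable fore-optic filter enables.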

  9. Goal-oriented evaluation of binarization algorithms for historical document images

    NASA Astrophysics Data System (ADS)

    Obafemi-Ajayi, Tayo; Agam, Gady

    2013-01-01

    Binarization is of significant importance in document analysis systems. It is an essential first step, prior to further stages such as Optical Character Recognition (OCR), document segmentation, or enhancement of readability of the document after some restoration stages. Hence, proper evaluation of binarization methods to verify their effectiveness is of great value to the document analysis community. In this work, we perform a detailed goal-oriented evaluation of the image quality produced by the 18 binarization methods that participated in the DIBCO 2011 competition, using the 16 historical document test images from the contest. We are interested in the image quality of the outputs generated by the different binarization algorithms as well as the OCR performance, where possible. We compare our evaluation of the algorithms, based on human perception of quality, to the DIBCO evaluation metrics. The results obtained provide an insight into the effectiveness of these methods with respect to human perception of image quality as well as OCR performance.
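
    For illustration, two standard DIBCO-style measures, F-measure and PSNR, can be computed for a binarization output against its ground truth as in the minimal sketch below; this is not the contest's full metric suite.

        # Sketch: F-measure and PSNR for a binarized image vs. ground truth.
        # Both inputs are 0/1 numpy arrays with 1 = foreground (ink).
        import numpy as np

        def f_measure(binarized, truth):
            tp = np.logical_and(binarized == 1, truth == 1).sum()
            fp = np.logical_and(binarized == 1, truth == 0).sum()
            fn = np.logical_and(binarized == 0, truth == 1).sum()
            precision = tp / max(tp + fp, 1)
            recall = tp / max(tp + fn, 1)
            return 2 * precision * recall / max(precision + recall, 1e-12)

        def psnr(binarized, truth):
            mse = np.mean((binarized.astype(float) - truth.astype(float)) ** 2)
            return 10 * np.log10(1.0 / max(mse, 1e-12))  # C = 1 for binary images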

  10. IHE cross-enterprise document sharing for imaging: design challenges

    NASA Astrophysics Data System (ADS)

    Noumeir, Rita

    2006-03-01

    Integrating the Healthcare Enterprise (IHE) has recently published a new integration profile for sharing documents between multiple enterprises. The Cross-Enterprise Document Sharing Integration Profile (XDS) lays the basic framework for deploying regional and national Electronic Health Records (EHR). This profile proposes an architecture based on a central Registry that holds metadata describing published Documents residing in one or multiple Document Repositories. As medical images constitute an important part of the patient health record, it is logical to extend the XDS Integration Profile to include images. However, including images in the EHR presents many challenges. The complete image set is very large; it is useful for radiologists and other specialists such as surgeons and orthopedists. The imaging report, on the other hand, is widely needed and its broad accessibility is vital for achieving optimal patient care. Moreover, a subset of relevant images may also be of wide interest along with the report. Therefore, IHE recently published a new integration profile for sharing images and imaging reports between multiple enterprises. This new profile, the Cross-Enterprise Document Sharing for Imaging (XDS-I), is based on the XDS architecture. The XDS-I integration solution published as part of the IHE Technical Framework is the result of an extensive investigation of several design solutions. This paper presents and discusses the design challenges and the rationales behind the design decisions of the IHE XDS-I Integration Profile, for a better understanding and appreciation of the final published solution.

  11. Rectification of curved document images based on single view three-dimensional reconstruction.

    PubMed

    Kang, Lai; Wei, Yingmei; Jiang, Jie; Bai, Liang; Lao, Songyang

    2016-10-01

    Since distortions in camera-captured document images significantly affect the accuracy of optical character recognition (OCR), distortion removal plays a critical role in document digitization systems that use a camera for image capture. This paper proposes a novel framework that performs three-dimensional (3D) reconstruction and rectification of camera-captured document images. While most existing methods rely on additional calibrated hardware or multiple images to recover the 3D shape of a document page, or make a simple but not always valid assumption about the corresponding 3D shape, our framework is more flexible and practical since it requires only a single input image and is able to handle a general locally smooth document surface. The main contributions of this paper include a new iterative refinement scheme for baseline fitting from connected components of text lines, an efficient discrete vertical text direction estimation algorithm based on convex hull projection profile analysis, and a 2D distortion grid construction method based on text direction function estimation using 3D regularization. In order to examine the performance of our proposed method, both qualitative and quantitative evaluations and comparisons with several recent methods are conducted in our experiments. The experimental results demonstrate that the proposed method outperforms relevant approaches for camera-captured document image rectification, in terms of improvements in both visual distortion removal and OCR accuracy.
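
    The baseline-fitting idea can be pictured with the rough sketch below, which fits a quadratic through the bottom points of the connected components of a single binarized text line. The quadratic model and the scipy labeling are assumptions; the paper's iterative refinement scheme is not reproduced here.

        # Sketch: estimate a curved baseline for one binarized text line by
        # fitting a quadratic through the bottom points of its connected
        # components. The quadratic model is an assumption for illustration.
        import numpy as np
        from scipy import ndimage

        def fit_baseline(line_mask, degree=2):
            """line_mask: 2D boolean array, True = ink of a single text line."""
            labels, _ = ndimage.label(line_mask)
            xs, ys = [], []
            for rows, cols in ndimage.find_objects(labels):
                xs.append((cols.start + cols.stop) / 2.0)  # component center x
                ys.append(rows.stop - 1)                   # component bottom y
            coeffs = np.polyfit(xs, ys, deg=degree)        # needs > degree points
            return np.poly1d(coeffs)                       # baseline estimate y(x)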

  12. Beta cells transfer vesicles containing insulin to phagocytes for presentation to T cells.

    PubMed

    Vomund, Anthony N; Zinselmeyer, Bernd H; Hughes, Jing; Calderon, Boris; Valderrama, Carolina; Ferris, Stephen T; Wan, Xiaoxiao; Kanekura, Kohsuke; Carrero, Javier A; Urano, Fumihiko; Unanue, Emil R

    2015-10-06

    Beta cells from nondiabetic mice transfer secretory vesicles to phagocytic cells. The passage was shown in culture studies where the transfer was probed with CD4 T cells reactive to insulin peptides. Two sets of vesicles were transferred, one containing insulin and another containing catabolites of insulin. The passage required live beta cells in a close cell contact interaction with the phagocytes. It was increased by high glucose concentration and required mobilization of intracellular Ca2+. Live images of beta cell-phagocyte interactions documented the intimacy of the membrane contact and the passage of the granules. The passage was found in beta cells isolated from islets of young nonobese diabetic (NOD) mice and nondiabetic mice as well as from nondiabetic humans. Ultrastructural analysis showed intraislet phagocytes containing vesicles having the distinct morphology of dense-core granules. These findings document a process whereby the contents of secretory granules become available to the immune system.

  13. Imaging Systems: What, When, How.

    ERIC Educational Resources Information Center

    Lunin, Lois F.; And Others

    1992-01-01

    The three articles in this special section on document image files discuss intelligent character recognition, including comparison with optical character recognition; selection of displays for document image processing, focusing on paperlike displays; and imaging hardware, software, and vendors, including guidelines for system selection. (MES)

  14. BioSig3D: High Content Screening of Three-Dimensional Cell Culture Models

    PubMed Central

    Bilgin, Cemal Cagatay; Fontenay, Gerald; Cheng, Qingsu; Chang, Hang; Han, Ju; Parvin, Bahram

    2016-01-01

    BioSig3D is a computational platform for high-content screening of three-dimensional (3D) cell culture models that are imaged in full 3D volume. It provides an end-to-end solution for designing high content screening assays, based on colony organization that is derived from segmentation of nuclei in each colony. BioSig3D also enables visualization of raw and processed 3D volumetric data for quality control, and integrates advanced bioinformatics analysis. The system consists of multiple computational and annotation modules that are coupled together with a strong use of controlled vocabularies to reduce ambiguities between different users. It is a web-based system that allows users to: design an experiment by defining experimental variables, upload a large set of volumetric images into the system, analyze and visualize the dataset, and either display computed indices as a heatmap, or phenotypic subtypes for heterogeneity analysis, or download computed indices for statistical analysis or integrative biology. BioSig3D has been used to profile baseline colony formations with two experiments: (i) morphogenesis of a panel of human mammary epithelial cell lines (HMEC), and (ii) heterogeneity in colony formation using an immortalized non-transformed cell line. These experiments reveal intrinsic growth properties of well-characterized cell lines that are routinely used for biological studies. BioSig3D is being released with seed datasets and video-based documentation. PMID:26978075

  15. Nuclear protein accumulation in cellular senescence and organismal aging revealed with a novel single-cell resolution fluorescence microscopy assay.

    PubMed

    De Cecco, Marco; Jeyapalan, Jessie; Zhao, Xiaoai; Tamamori-Adachi, Mimi; Sedivy, John M

    2011-10-01

    Replicative cellular senescence was discovered some 50 years ago. The phenotypes of senescent cells have been investigated extensively in cell culture, and found to affect essentially all aspects of cellular physiology. The relevance of cellular senescence in the context of age-associated pathologies as well as normal aging is a topic of active and ongoing interest. Considerable effort has been devoted to biomarker discovery to enable the microscopic detection of single senescent cells in tissues. One characteristic of senescent cells documented very early in cell culture studies was an increase in cell size and total protein content, but whether this occurs in vivo is not known. A limiting factor for studies of protein content and localization has been the lack of suitable fluorescence microscopy tools. We have developed an easy and flexible method, based on the merocyanine dye known as NanoOrange, to visualize and quantitatively measure total protein levels by high resolution fluorescence microscopy. NanoOrange staining can be combined with antibody-based immunofluorescence, thus providing both specific target and total protein information in the same specimen. These methods are optimally combined with automated image analysis platforms for high throughput analysis. We document here increasing protein content and density in nuclei of senescent human and mouse fibroblasts in vitro, and in liver nuclei of aged mice in vivo. Additionally, in aged liver nuclei NanoOrange revealed protein-dense foci that colocalize with centromeric heterochromatin.

  16. Nuclear protein accumulation in cellular senescence and organismal aging revealed with a novel single-cell resolution fluorescence microscopy assay

    PubMed Central

    De Cecco, Marco; Jeyapalan, Jessie; Zhao, Xiaoai; Tamamori-Adachi, Mimi; Sedivy, John M.

    2011-01-01

    Replicative cellular senescence was discovered some 50 years ago. The phenotypes of senescent cells have been investigated extensively in cell culture, and found to affect essentially all aspects of cellular physiology. The relevance of cellular senescence in the context of age-associated pathologies as well as normal aging is a topic of active and ongoing interest. Considerable effort has been devoted to biomarker discovery to enable the microscopic detection of single senescent cells in tissues. One characteristic of senescent cells documented very early in cell culture studies was an increase in cell size and total protein content, but whether this occurs in vivo is not known. A limiting factor for studies of protein content and localization has been the lack of suitable fluorescence microscopy tools. We have developed an easy and flexible method, based on the merocyanine dye known as NanoOrange, to visualize and quantitatively measure total protein levels by high resolution fluorescence microscopy. NanoOrange staining can be combined with antibody-based immunofluorescence, thus providing both specific target and total protein information in the same specimen. These methods are optimally combined with automated image analysis platforms for high throughput analysis. We document here increasing protein content and density in nuclei of senescent human and mouse fibroblasts in vitro, and in liver nuclei of aged mice in vivo. Additionally, in aged liver nuclei NanoOrange revealed protein-dense foci that colocalize with centromeric heterochromatin. PMID:22006542

  17. Every document and picture tells a story: using internal corporate document reviews, semiotics, and content analysis to assess tobacco advertising.

    PubMed

    Anderson, S J; Dewhirst, T; Ling, P M

    2006-06-01

    In this article we present communication theory as a conceptual framework for conducting documents research on tobacco advertising strategies, and we discuss two methods for analysing advertisements: semiotics and content analysis. We provide concrete examples of how we have used tobacco industry documents archives and tobacco advertisement collections iteratively in our research to yield a synergistic analysis of these two complementary data sources. Tobacco promotion researchers should consider adopting these theoretical and methodological approaches.

  18. Digitization of medical documents: an X-Windows application for fast scanning.

    PubMed

    Muñoz, A; Salvador, C H; Gonzalez, M A; Dueñas, A

    1992-01-01

    This paper deals with digitization, using a commercial scanner, of medical documents as still images for introduction into a computer-based Information System. Document management involves storing, editing and transmission. This task has usually been approached from the perspective of the difficulties posed by radiologic images because of their indisputable qualitative and quantitative significance. However, healthcare activities require the management of many other types of documents and involve the requirements of numerous users. One key to document management will be the availability of a digitizer to deal with the greatest possible number of different types of documents. This paper describes the relevant aspects of documents and the technical specifications that digitizers must fulfill. The concept of document type is introduced as the ideal set of digitizing parameters for a given document. The use of document type parameters can drastically reduce the time the user spends in scanning sessions. Presentation is made of an application based on Unix, X-Windows and OSF/Motif, with a GPIB interface, implemented around the document type concept. Finally, the results of the evaluation of the application are presented, focusing on the user interface, as well as on the viewing of color images in an X-Windows environment and the use of lossy algorithms in the compression of medical images.

  19. SHUTTLE IMAGING RADAR: PHYSICAL CONTROLS ON SIGNAL PENETRATION AND SUBSURFACE SCATTERING IN THE EASTERN SAHARA.

    USGS Publications Warehouse

    Schaber, Gerald G.; McCauley, John F.; Breed, Carol S.; Olhoeft, Gary R.

    1986-01-01

    It is found that the Shuttle Imaging Radar A (SIR-A) signal penetration and subsurface backscatter within the upper meter or so of the sediment blanket in the Eastern Sahara of southern Egypt and northern Sudan are enhanced both by radar sensor parameters and by the physical and chemical characteristics of eolian and alluvial materials. The near-surface stratigraphy, the electrical properties of materials, and the types of radar interfaces found to be responsible for different classes of SIR-A tonal response are summarized. The dominant factors related to efficient microwave signal penetration into the sediment blanket include 1) favorable distribution of particle sizes, 2) extremely low moisture content, and 3) reduced geometric scattering at the SIR-A frequency (1.3 GHz). The depth of signal penetration that results in a recorded backscatter, called radar imaging depth, was documented in the field to be a maximum of 1.5 m, or 0.25 times the calculated skin depth, for the sediment blanket. The radar imaging depth is estimated to be between 2 and 3 m for active sand dune materials.

  20. Medication order communication using fax and document-imaging technologies.

    PubMed

    Simonian, Armen I

    2008-03-15

    The implementation of fax and document-imaging technology to electronically communicate medication orders from nursing stations to the pharmacy is described. The evaluation of a commercially available pharmacy order imaging system to improve order communication and to make document retrieval more efficient led to the selection and customization of a system already licensed and used in seven affiliated hospitals. The system consisted of existing fax machines and document-imaging software that would capture images of written orders and send them from nursing stations to a central database server. Pharmacists would then retrieve the images and enter the orders in an electronic medical record system. The pharmacy representatives from all seven hospitals agreed on the configuration and functionality of the custom application. A 30-day trial of the order imaging system was successfully conducted at one of the larger institutions. The new system was then implemented at the remaining six hospitals over a period of 60 days. The transition from a paper-order system to electronic communication via a standardized pharmacy document management application tailored to the specific needs of this health system was accomplished. A health system with seven affiliated hospitals successfully implemented electronic communication and the management of inpatient paper-chart orders by using faxes and document-imaging technology. This standardized application eliminated the problems associated with the hand delivery of paper orders, the use of the pneumatic tube system, and the printing of traditional faxes.

  1. Fusion of Deep Learning and Compressed Domain features for Content Based Image Retrieval.

    PubMed

    Liu, Peizhong; Guo, Jing-Ming; Wu, Chi-Yi; Cai, Danlin

    2017-08-29

    This paper presents an effective image retrieval method that combines high-level features from a Convolutional Neural Network (CNN) model with low-level features from Dot-Diffused Block Truncation Coding (DDBTC). The low-level features, e.g., texture and color, are constructed by VQ-indexed histograms from the DDBTC bitmap and its maximum and minimum quantizers. Conversely, the high-level features from the CNN can effectively capture human perception. From the fusion of the DDBTC and CNN features, the extended deep learning two-layer codebook features (DL-TLCF) are generated using the proposed two-layer codebook, dimension reduction, and similarity reweighting to improve the overall retrieval rate. Two metrics, average precision rate (APR) and average recall rate (ARR), are employed to examine various datasets. As documented in the experimental results, the proposed schemes achieve superior performance compared to state-of-the-art methods with either low- or high-level features in terms of retrieval rate. Thus, the method is a strong candidate for various image retrieval applications.
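
    A minimal sketch of the fusion idea follows: L2-normalized high-level (CNN) and low-level (histogram-style) feature matrices are concatenated with a weight, and images are ranked by cosine similarity. The weight alpha and plain concatenation are assumptions; the paper's two-layer codebook and similarity reweighting are not reproduced.

        # Sketch: late fusion of high-level (CNN) and low-level (DDBTC-style
        # histogram) features, with retrieval by cosine similarity.
        import numpy as np

        def l2norm(x):
            return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-12)

        def fuse(cnn_feats, lowlevel_feats, alpha=0.5):
            """Weighted concatenation of two per-image feature matrices."""
            return np.hstack([alpha * l2norm(cnn_feats),
                              (1 - alpha) * l2norm(lowlevel_feats)])

        def retrieve(query_vec, database, top_k=10):
            """Return indices of the top_k most similar database rows."""
            sims = l2norm(database) @ l2norm(query_vec)
            return np.argsort(-sims)[:top_k]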

  2. An exponentiation method for XML element retrieval.

    PubMed

    Wichaiwong, Tanakorn

    2014-01-01

    XML documents are now widely used for modelling and storing structured documents. The structure is very rich and carries important information about contents and their relationships, for example, in e-Commerce. XML data-centric collections require query terms that allow users to specify constraints on the document structure; mapping structure queries and assigning weights are significant for identifying the set of possibly relevant documents with respect to structural conditions. In this paper, we present an extension to the MEXIR search system that supports the combination of structural and content queries in the form of content-and-structure queries, which we call the Exponentiation function. The structural information has been shown to improve the effectiveness of the search system by up to 52.60% in MAP over the BM25 baseline.
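
    The abstract does not define the Exponentiation function itself; one plausible reading, sketched below purely for illustration, is that a content score such as BM25 is boosted exponentially by how well an element's path satisfies the structural constraint.

        # Sketch of one possible exponentiation weighting for
        # content-and-structure XML retrieval. The formula and the path
        # matching are assumptions, not the actual MEXIR function.
        def structure_match(element_path, query_path):
            """Fraction of query tags found, in order, in the element path."""
            tags, hits = query_path.split("/"), 0
            for tag in element_path.split("/"):
                if hits < len(tags) and tag == tags[hits]:
                    hits += 1
            return hits / len(tags)

        def exponentiated_score(bm25_score, element_path, query_path, base=2.0):
            return bm25_score * base ** structure_match(element_path, query_path)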

  3. The use of fingerprints available on the web in false identity documents: Analysis from a forensic intelligence perspective.

    PubMed

    Girelli, Carlos Magno Alves

    2016-05-01

    Fingerprints present in false identity documents were found on the web. In some cases, laterally reversed (mirrored) images of the same fingerprint were observed in different documents. In the present work, 100 fingerprint images downloaded from the web, as well as their reversals obtained by image editing, were compared between themselves and against the database of the Brazilian Federal Police AFIS, in order to better understand trends in this kind of forgery in Brazil. Several image editing effects were observed in the analyzed fingerprints: addition of artifacts (such as watermarks), image rotation, image stylization, lateral reversal, and tonal reversal. Detection of lateral reversals is discussed in this article, along with a suggestion for reducing errors due to missed HIT decisions between reversed fingerprints. The present work aims to highlight the importance of fingerprint analysis when performing document examination, especially when only copies of documents are available, something very common in Brazil. Besides the intrinsic features of the fingermarks considered in the three levels of detail of the ACE-V methodology, some visual features of the fingerprint images can be helpful for identifying sources of forgeries and modus operandi, such as: limits and image contours, failures in the friction ridges caused by excess or lack of inking, and the presence of watermarks and artifacts arising from the background. Based on the agreement of such features in fingerprints present in different identity documents, and also on the analysis of the time and location where the documents were seized, it is possible to highlight potential links between apparently unconnected crimes. Fingerprints therefore have the potential to reduce linkage blindness, and the present work suggests the analysis of fingerprints when profiling false identity documents, as well as the inclusion of fingerprint features in the profile of the documents.

  4. 7 CFR 1.611 - What are the form and content requirements for documents under §§ 1.610 through 1.660?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 1 2010-01-01 2010-01-01 false What are the form and content requirements for documents under §§ 1.610 through 1.660? 1.611 Section 1.611 Agriculture Office of the Secretary of Agriculture ADMINISTRATIVE REGULATIONS Conditions in FERC Hydropower Licenses Document Filing and Service § 1...

  5. Tobacco Industry Lifestyle Magazines Targeted to Young Adults

    PubMed Central

    Cortese, Daniel K.; Lewis, M. Jane; Ling, Pamela M.

    2010-01-01

    Purpose This is the first study describing the tobacco industry’s objectives in developing and publishing lifestyle magazines, linking them to tobacco marketing strategies, and how these magazines may encourage smoking. Methods Analysis of previously secret tobacco industry documents and content analysis of 31 lifestyle magazines to understand the motives behind producing these magazines and the role they played in tobacco marketing strategies. Results Philip Morris (PM) debuted Unlimited in 1996 to nearly 2 million readers and RJ Reynolds (RJR) debuted CML in 1999, targeting young adults with their interests. Both magazines were developed as the tobacco companies faced increased advertising restrictions. Unlimited contained few images of smoking, but frequently featured elements of the Marlboro brand identity in both advertising and article content. CML featured more smoking imagery and fewer Camel brand identity elements. Conclusions Lifestyle promotions that lack images of smoking may still promote tobacco use through brand imagery. The tobacco industry still uses the “under the radar” strategies developed for lifestyle magazines in branded websites. Prohibiting lifestyle advertising, including print and electronic media that associate tobacco with recreation, action, pleasures, and risky behaviors or that reinforce tobacco brand identity, may be an effective strategy to curb young adult smoking. PMID:19699423

  6. Predicting floods with Flickr tags.

    PubMed

    Tkachenko, Nataliya; Jarvis, Stephen; Procter, Rob

    2017-01-01

    Increasingly, user generated content (UGC) in social media postings and their associated metadata such as time and location stamps are being used to provide useful operational information during natural hazard events such as hurricanes, storms and floods. The main advantages of these new sources of data are twofold. First, in a purely additive sense, they can provide much denser geographical coverage of the hazard as compared to traditional sensor networks. Second, they provide what physical sensors are not able to do: by documenting personal observations and experiences, they directly record the impact of a hazard on the human environment. For this reason, interpretation of the content (e.g., hashtags, images, text, emojis, etc.) and metadata (e.g., keywords, tags, geolocation) has been a focus of much research into social media analytics. However, as choices of semantic tags in the current methods are usually reduced to the exact name or type of the event (e.g., hashtags '#Sandy' or '#flooding'), the main limitation of such approaches remains their mere nowcasting capacity. In this study we make use of polysemous tags of images posted during several recent flood events and demonstrate how such volunteered geographic data can be used to provide early warning of an event before its outbreak.

  7. Tobacco industry lifestyle magazines targeted to young adults.

    PubMed

    Cortese, Daniel K; Lewis, M Jane; Ling, Pamela M

    2009-09-01

    This is the first study describing the tobacco industry's objectives developing and publishing lifestyle magazines, linking them to tobacco marketing strategies, and how these magazines may encourage smoking. Analysis of previously secret tobacco industry documents and content analysis of 31 lifestyle magazines to understand the motives behind producing these magazines and the role they played in tobacco marketing strategies. Philip Morris (PM) debuted Unlimited in 1996 to nearly 2 million readers and RJ Reynolds (RJR) debuted CML in 1999, targeting young adults with their interests. Both magazines were developed as the tobacco companies faced increased advertising restrictions. Unlimited contained few images of smoking, but frequently featured elements of the Marlboro brand identity in both advertising and article content. CML featured more smoking imagery and fewer Camel brand identity elements. Lifestyle promotions that lack images of smoking may still promote tobacco use through brand imagery. The tobacco industry still uses the "under-the-radar" strategies used in development of lifestyle magazines in branded Websites. Prohibiting lifestyle advertising including print and electronic media that associate tobacco with recreation, action, pleasures, and risky behaviors or that reinforces tobacco brand identity may be an effective strategy to curb young adult smoking.

  8. Landmark Image Retrieval by Jointing Feature Refinement and Multimodal Classifier Learning.

    PubMed

    Zhang, Xiaoming; Wang, Senzhang; Li, Zhoujun; Ma, Shuai

    2018-06-01

    Landmark retrieval is to return a set of images whose landmarks are similar to those of the query images. Existing studies on landmark retrieval focus on exploiting the geometries of landmarks for visual similarity matches. However, the visual content of social images is highly diverse within many landmarks, and some images share common patterns across different landmarks. On the other hand, it has been observed that social images usually contain multimodal contents, i.e., visual content and text tags, and each landmark has unique characteristics in both its visual content and its text content. Therefore, approaches based on similarity matching may not be effective in this environment. In this paper, we investigate whether the geographical correlation between the visual content and the text content could be exploited for landmark retrieval. In particular, we propose an effective multimodal landmark classification paradigm that leverages the multimodal contents of social images for landmark retrieval, integrating feature refinement and a landmark classifier with multimodal contents in a joint model. The geo-tagged images are automatically labeled for classifier learning. Visual features are refined based on low-rank matrix recovery, and a multimodal classifier combined with group sparsity is learned from the automatically labeled images. Finally, candidate images are ranked by combining the classification result with a measure of semantic consistency between the visual content and the text content. Experiments on real-world datasets demonstrate the superiority of the proposed approach as compared to existing methods.

  9. Annotating image ROIs with text descriptions for multimodal biomedical document retrieval

    NASA Astrophysics Data System (ADS)

    You, Daekeun; Simpson, Matthew; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.

    2013-01-01

    Regions of interest (ROIs) that are pointed to by overlaid markers (arrows, asterisks, etc.) in biomedical images are expected to contain more important and relevant information than other regions for biomedical article indexing and retrieval. We have developed several algorithms that localize and extract the ROIs by recognizing markers on images. Cropped ROIs then need to be annotated with the content that best describes them. In most cases accurate textual descriptions of the ROIs can be found in figure captions, and these need to be combined with image ROIs for annotation. The annotated ROIs can then be used, for example, to train classifiers that separate ROIs into known categories (medical concepts), or to build visual ontologies, for indexing and retrieval of biomedical articles. We propose an algorithm that pairs visual and textual ROIs that are extracted from images and figure captions, respectively. This algorithm, based on dynamic time warping (DTW), clusters recognized pointers into groups, each of which contains pointers with identical visual properties (shape, size, color, etc.). A rule-based matching algorithm then finds the best matching group for each textual ROI mention. Our method yields a precision and recall of 96% and 79%, respectively, when ground-truth textual ROI data is used.
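
    The DTW component can be illustrated with the classic dynamic programming recurrence below, which computes an alignment distance between two sequences of pointer feature vectors; the Euclidean local cost is an assumption.

        # Sketch: classic dynamic time warping (DTW) distance between two
        # sequences of feature vectors (2D numpy arrays, one row per step).
        import numpy as np

        def dtw(a, b):
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = np.linalg.norm(a[i - 1] - b[j - 1])  # local cost
                    D[i, j] = cost + min(D[i - 1, j],      # insertion
                                         D[i, j - 1],      # deletion
                                         D[i - 1, j - 1])  # match
            return D[n, m]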

  10. XDS in healthcare: Could it lead to a duplication problem? Field study from GVR Sweden

    NASA Astrophysics Data System (ADS)

    Wintell, M.; Lundberg, N.; Lindsköld, L.

    2011-03-01

    Managing different registries and repositories within healthcare regions increases the risk of holding nearly the same information with different statuses and different contents. Medical information is created in a dynamic process, and its content changes over its lifetime within the "active" healthcare phase. The information needs to be easily accessible, forming the platform that makes medical decisions transparent. In the Region Västra Götaland (VGR), Sweden, data is shared from 29 X-ray departments with different Picture Archive and Communication Systems (PACS) and Radiology Information Systems (RIS) through the Infobroker solution, which acts as a broker between the actors involved. Requests/reports from RIS are stored as DIgital COmmunication in Medicine (DICOM) Structured Report (SR) objects, together with the images. Every status change within these activities is updated within the Information Infrastructure, based on the Integrating the Healthcare Enterprise (IHE) mission. In Cross-enterprise Document Sharing for Imaging (XDS-I), the registry and the central repository are the components used for sharing medical documentation. The VGR strategy was not to apply one regional XDS-I registry and repository; instead, VGR applied an Enterprise Architecture (EA) intertwined with the Information Infrastructure for dynamic delivery to consumers. The upcoming usage of different regional XDS registries and repositories could lead to new ways of carrying out shared work, but it can also lead to problems: XDS and XDS-I implemented without a strategy could increase the number of statuses/versions and duplicate information in the Information Infrastructure.

  11. Clementine High Resolution Camera Mosaicking Project. Volume 21; CL 6021; 80 deg S to 90 deg S Latitude, North Periapsis; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Clementine I high resolution (HiRes) camera lunar image mosaics developed by Malin Space Science Systems (MSSS). These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. The geometric control is provided by the U.S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, obtained by fitting the HiRes data to the corresponding photometrically calibrated UV/Vis basemap, which approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD are compiled from polar data (latitudes greater than 80 degrees), and are presented in the stereographic projection at a scale of 30 m/pixel at the pole, a resolution 5 times greater than that (150 m/pixel) of the corresponding UV/Vis polar basemap. This 5:1 scale ratio is in keeping with the sub-polar mosaic, in which the HiRes and UV/Vis mosaics had scales of 20 m/pixel and 100 m/pixel, respectively. The shape-preserving (conformal) property of the stereographic projection made it preferable for the HiRes polar mosaic over the basemap's orthographic projection. Thus, a necessary first step in constructing the mosaic was the reprojection of the UV/Vis basemap to the stereographic projection. The HiRes polar data can be naturally grouped according to the orbital periapsis, which was in the south during the first half of the mapping mission and in the north during the second half. Images in each group have generally uniform intrinsic resolution, illumination, exposure and gain. Rather than mingle data from the two periapsis epochs, separate mosaics are provided for each, a total of 4 polar mosaics. The mosaics are divided into 100 square tiles of 2250 pixels (approximately 2.2 deg near the pole) on a side. Not all squares of this grid contain HiRes mosaic data: some inevitably, since a square is not a perfect representation of a (latitude) circle, and others due to the lack of HiRes data. This CD also contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. For more information on the contents and organization of the CD volume set, refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  12. Large-scale image region documentation for fully automated image biomarker algorithm development and evaluation.

    PubMed

    Reeves, Anthony P; Xie, Yiting; Liu, Shuang

    2017-04-01

    With the advent of fully automated image analysis and modern machine learning methods, there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. This paper presents a method and implementation for facilitating such datasets that addresses the critical issue of size scaling for algorithm validation and evaluation; current evaluation methods that are usually used in academic studies do not scale to large datasets. This method includes protocols for the documentation of many regions in very large image datasets; the documentation may be incrementally updated by new image data and by improved algorithm outcomes. This method has been used for 5 years in the context of chest health biomarkers from low-dose chest CT images that are now being used with increasing frequency in lung cancer screening practice. The lung scans are segmented into over 100 different anatomical regions, and the method has been applied to a dataset of over 20,000 chest CT images. Using this framework, the computer algorithms have been developed to achieve over 90% acceptable image segmentation on the complete dataset.

  13. Ns-scaled time-gated fluorescence lifetime imaging for forensic document examination

    NASA Astrophysics Data System (ADS)

    Zhong, Xin; Wang, Xinwei; Zhou, Yan

    2018-01-01

    A method of ns-scaled time-gated fluorescence lifetime imaging (TFLI) is proposed to distinguish different fluorescent substances in forensic document examination. Compared with a Video Spectral Comparator (VSC), which can examine only fluorescence intensity images, TFLI can detect questioned-document features such as falsification or alteration. The TFLI system can enhance weak signals by accumulation. Two fluorescence intensity images separated by the gate delay time tg are acquired by an ICCD and fitted into a fluorescence lifetime image. The lifetimes of fluorescent substances are represented by different colors, which makes it easy to detect the fluorescent substances and the sequence of handwritings. This demonstrates that TFLI is a powerful tool for forensic document examination. Further advantages of the TFLI system are its ns-scaled precision and powerful capture capability.
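
    Under a single-exponential decay model I(t) = I0 exp(-t/tau), two gated images separated by the delay tg yield the pixel-wise lifetime via the standard rapid lifetime determination formula tau = tg / ln(I1/I2). The sketch below applies that formula; the actual TFLI fitting pipeline may differ.

        # Sketch: pixel-wise rapid lifetime determination from two gated
        # intensity images separated by the delay t_g, assuming a single
        # exponential decay, so tau = t_g / ln(I1 / I2).
        import numpy as np

        def two_gate_lifetime(i1, i2, t_g_ns):
            """i1, i2: gated intensity images (i1 gated earlier); tau in ns."""
            ratio = np.clip(i1, 1e-6, None) / np.clip(i2, 1e-6, None)
            tau = np.full(ratio.shape, np.nan)
            valid = ratio > 1.0            # a decay requires i1 > i2
            tau[valid] = t_g_ns / np.log(ratio[valid])
            return tau                     # color-code tau to render the image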

  14. Global and Local Features Based Classification for Bleed-Through Removal

    NASA Astrophysics Data System (ADS)

    Hu, Xiangyu; Lin, Hui; Li, Shutao; Sun, Bin

    2016-12-01

    The text on one side of a historical document often seeps through and appears on the other side, so bleed-through is a common problem in historical document images. It makes the document images hard to read and the text difficult to recognize. To improve image quality and readability, the bleed-through has to be removed. This paper proposes a bleed-through removal method based on global and local feature extraction. A Gaussian mixture model is used to obtain the global features of the images. Local features are extracted from the patch around each pixel. Then, an extreme learning machine classifier is used to classify the scanned images into the foreground text and the bleed-through component. Experimental results on real document image datasets show that the proposed method outperforms state-of-the-art bleed-through removal methods and preserves the text strokes well.
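
    The global/local feature recipe can be pictured with the sketch below: per-pixel posteriors from a Gaussian mixture over intensities serve as global features, and the flattened patch around each pixel serves as local features. A scikit-learn MLP is a stand-in for the extreme learning machine classifier, and the patch size is an assumption.

        # Sketch: build global (GMM posterior) and local (patch) features per
        # pixel; an MLP stands in for the extreme learning machine classifier.
        import numpy as np
        from sklearn.mixture import GaussianMixture
        from sklearn.neural_network import MLPClassifier

        def pixel_features(img, patch=5, n_components=3):
            """img: 2D grayscale array in [0, 1]; returns (n_pixels, d)."""
            gmm = GaussianMixture(n_components=n_components)
            gmm.fit(img.reshape(-1, 1))
            global_f = gmm.predict_proba(img.reshape(-1, 1))   # global context
            r = patch // 2
            padded = np.pad(img, r, mode="reflect")
            local_f = np.array([
                padded[i:i + patch, j:j + patch].ravel()       # local context
                for i in range(img.shape[0]) for j in range(img.shape[1])])
            return np.hstack([global_f, local_f])

        # Training: fit MLPClassifier(...) on pixels labeled as text vs.
        # bleed-through vs. background, then classify every pixel of a
        # degraded page and keep only the foreground text.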

  15. A hierarchical SVG image abstraction layer for medical imaging

    NASA Astrophysics Data System (ADS)

    Kim, Edward; Huang, Xiaolei; Tan, Gang; Long, L. Rodney; Antani, Sameer

    2010-03-01

    As medical imaging rapidly expands, there is an increasing need to structure and organize image data for efficient analysis, storage and retrieval. In response, a large fraction of research in the areas of content-based image retrieval (CBIR) and picture archiving and communication systems (PACS) has focused on structuring information to bridge the "semantic gap", a disparity between machine and human image understanding. An additional consideration in medical images is the organization and integration of clinical diagnostic information. As a step towards bridging the semantic gap, we design and implement a hierarchical image abstraction layer using an XML-based language, Scalable Vector Graphics (SVG). Our method encodes features from the raw image and clinical information into an extensible "layer" that can be stored in an SVG document and efficiently searched. Any feature extracted from the raw image, including color, texture, orientation, size, neighbor information, etc., can be combined in our abstraction with high-level descriptions or classifications. Our representation can natively characterize an image in a hierarchical tree structure to support multiple levels of segmentation. Furthermore, being a World Wide Web Consortium (W3C) standard, SVG can be displayed by most web browsers, interacted with by ECMAScript (the standardized scripting language, e.g., JavaScript, JScript), and indexed and retrieved by XML databases and XQuery. Using these open technologies enables straightforward integration into existing systems. Our results show that the flexibility and extensibility of our abstraction facilitate effective storage and retrieval of medical images.
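
    Because SVG is plain XML, an abstraction layer of this kind can be emitted with standard XML tooling. The sketch below attaches feature attributes to region polygons inside an SVG group; the element and attribute names are illustrative, not the paper's actual schema.

        # Sketch: write image-region features into an SVG "layer". The
        # region dictionary keys and data-* attribute names are invented
        # for illustration only.
        import xml.etree.ElementTree as ET

        def make_svg_layer(width, height, regions):
            svg = ET.Element("svg", xmlns="http://www.w3.org/2000/svg",
                             width=str(width), height=str(height))
            layer = ET.SubElement(svg, "g", id="abstraction-layer")
            for r in regions:
                ET.SubElement(
                    layer, "polygon",
                    points=" ".join(f"{x},{y}" for x, y in r["points"]),
                    attrib={"data-label": r["label"],
                            "data-texture": str(r["texture"])})
            return ET.tostring(svg, encoding="unicode")

        print(make_svg_layer(512, 512, [{"points": [(10, 10), (60, 10), (60, 40)],
                                         "label": "lesion", "texture": 0.42}]))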

  16. NASA Software Documentation Standard

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The NASA Software Documentation Standard (hereinafter referred to as "Standard") is designed to support the documentation of all software developed for NASA; its goal is to provide a framework and model for recording the essential information needed throughout the development life cycle and maintenance of a software system. The NASA Software Documentation Standard can be applied to the documentation of all NASA software. The Standard is limited to documentation format and content requirements. It does not mandate specific management, engineering, or assurance standards or techniques. This Standard defines the format and content of documentation for software acquisition, development, and sustaining engineering. Format requirements address where information shall be recorded and content requirements address what information shall be recorded. This Standard provides a framework to allow consistency of documentation across NASA and visibility into the completeness of project documentation. The basic framework consists of four major sections (or volumes). The Management Plan contains all planning and business aspects of a software project, including engineering and assurance planning. The Product Specification contains all technical engineering information, including software requirements and design. The Assurance and Test Procedures contains all technical assurance information, including Test, Quality Assurance (QA), and Verification and Validation (V&V). The Management, Engineering, and Assurance Reports is the library and/or listing of all project reports.

  17. New public dataset for spotting patterns in medieval document images

    NASA Astrophysics Data System (ADS)

    En, Sovann; Nicolas, Stéphane; Petitjean, Caroline; Jurie, Frédéric; Heutte, Laurent

    2017-01-01

    With advances in technology, a large part of our cultural heritage is becoming digitally available. In particular, in the field of historical document image analysis, there is now a growing need for indexing and data mining tools, thus allowing us to spot and retrieve the occurrences of an object of interest, called a pattern, in a large database of document images. Patterns may present some variability in terms of color, shape, or context, making the spotting of patterns a challenging task. Pattern spotting is a relatively new field of research, still hampered by the lack of available annotated resources. We present a new publicly available dataset named DocExplore dedicated to spotting patterns in historical document images. The dataset contains 1500 images and 1464 queries, and allows the evaluation of two tasks: image retrieval and pattern localization. A standardized benchmark protocol along with ad hoc metrics is provided for a fair comparison of the submitted approaches. We also provide some first results obtained with our baseline system on this new dataset, which show that there is room for improvement and should encourage researchers in the document image analysis community to design new systems and submit improved results.

  18. Every document and picture tells a story: using internal corporate document reviews, semiotics, and content analysis to assess tobacco advertising

    PubMed Central

    Anderson, S J; Dewhirst, T; Ling, P M

    2006-01-01

    In this article we present communication theory as a conceptual framework for conducting documents research on tobacco advertising strategies, and we discuss two methods for analysing advertisements: semiotics and content analysis. We provide concrete examples of how we have used tobacco industry documents archives and tobacco advertisement collections iteratively in our research to yield a synergistic analysis of these two complementary data sources. Tobacco promotion researchers should consider adopting these theoretical and methodological approaches. PMID:16728758

  19. An Exponentiation Method for XML Element Retrieval

    PubMed Central

    2014-01-01

    XML documents are now widely used for modelling and storing structured documents. The structure is very rich and carries important information about contents and their relationships, for example, in e-Commerce. XML data-centric collections require query terms that allow users to specify constraints on the document structure; mapping structure queries and assigning weights are significant for identifying the set of possibly relevant documents with respect to structural conditions. In this paper, we present an extension to the MEXIR search system that supports the combination of structural and content queries in the form of content-and-structure queries, which we call the Exponentiation function. The structural information has been shown to improve the effectiveness of the search system by up to 52.60% in MAP over the BM25 baseline. PMID:24696643

  20. Model-based document categorization employing semantic pattern analysis and local structure clustering

    NASA Astrophysics Data System (ADS)

    Fume, Kosei; Ishitani, Yasuto

    2008-01-01

    We propose a document categorization method based on a document model that can be defined externally for each task and that categorizes Web content or business documents into a target category in accordance with their similarity to the model. The proposed method extracts semantics from an input document in two ways: the semantics of terms are extracted by semantic pattern analysis, and implicit meanings of document substructure are identified by a bottom-up text clustering technique focusing on the similarity of text-line attributes. We have constructed a system based on the proposed method for trial purposes. The experimental results show that the system achieves more than 80% classification accuracy in categorizing Web content and business documents into 15 or 70 categories.

  1. HAIC/HIWC field project: characterizing the high ice water content environment

    NASA Astrophysics Data System (ADS)

    Leroy, Delphine; Coutris, Pierre; Fontaine, Emmanuel; Schwarzenboeck, Alfons; Strapp, J. Walter; Korolev, Alexei; McFarquhar, Greg; Gourbeyre, Christophe; Dupuy, Regis; Dezitter, Fabien; Calmels, Alice

    2016-04-01

    High ice water content (IWC) cloud regions in mesoscale convective systems (MCSs) are suspected to cause in-service engine power loss events and air-data probe malfunctions on commercial aircraft. In order to better document this particular environment, a multi-year international HAIC/HIWC (High Altitude Ice Crystals / High Ice Water Content) field project has been designed, including two field campaigns. The first campaign was conducted in Darwin in 2014, while the second took place in Cayenne in May 2015. The French Falcon 20 research aircraft was deployed for both campaigns, with an instrumental payload including an IKP-2 (isokinetic evaporator probe, which provides a reference measurement of IWC), a CDP-2 (cloud droplet spectrometer probe measuring particles in the range 2-50 μm), and the optical array probes 2D-S (2D-Stereo, 10-1280 μm) and PIP (precipitation imaging probe, 100-6400 μm). 23 flights were performed in Darwin and 18 in Cayenne, all sampling MCSs at different flight levels with temperatures from -10°C to -50°C. The study presented here focuses on ice crystal size properties related to IWC, analyzing in detail the 2D image data from the 2D-S and PIP optical array imaging probes. 2D images recorded with the 2D-S and PIP probes were processed to produce particle size distributions (PSDs) and median mass diameters (MMDs). The Darwin results show that ice crystal properties in high-IWC areas are quite different from those in the surrounding cloud regions. In most of the sampled MCSs, the higher the measured IWC, the smaller the corresponding crystal MMD. This effect interferes with a temperature trend, whereby colder temperatures lead to smaller MMDs. A preliminary analysis of the Cayenne data appears consistent with the above trends.

  2. Word spotting for handwritten documents using Chamfer Distance and Dynamic Time Warping

    NASA Astrophysics Data System (ADS)

    Saabni, Raid M.; El-Sana, Jihad A.

    2011-01-01

    A large number of handwritten historical documents are held in libraries around the world. The desire to access, search, and explore these documents paves the way for a new age of knowledge sharing and promotes collaboration and understanding between human societies. Currently, the indexes for these documents are generated manually, which is very tedious and time consuming. Results produced by state-of-the-art techniques for converting complete images of handwritten documents into textual representations are not yet sufficient. Therefore, word-spotting methods have been developed to archive and index images of handwritten documents in order to enable efficient searching within documents. In this paper, we present a new matching algorithm to be used in word-spotting tasks for historical Arabic documents. We present a novel algorithm based on the Chamfer Distance to compute the similarity between shapes of word-parts. Matching results are used to cluster images of Arabic word-parts into different classes using the Nearest Neighbor rule. To compute the distance between two word-part images, the algorithm subdivides each image into equal-sized slices (windows). A modified version of the Chamfer Distance, incorporating geometric gradient features and distance transform data, is used as a similarity distance between the different slices. Finally, the Dynamic Time Warping (DTW) algorithm is used to measure the distance between two images of word-parts. By using DTW we enabled our system to cluster similar word-parts even when they are non-linearly transformed due to the nature of handwriting. We tested our implementation of the presented methods using various documents in different writing styles, taken from the Juma'a Al Majid Center in Dubai, and obtained encouraging results.
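
    A symmetric Chamfer distance between two binary slices can be computed from Euclidean distance transforms, as in the sketch below; the geometric gradient features of the paper's modified version are omitted. Each pair of slices receives such a cost, and DTW then aligns the two slice sequences.

        # Sketch: symmetric Chamfer distance between two binary word-part
        # slices via the Euclidean distance transform.
        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def chamfer(a, b):
            """a, b: 2D boolean arrays of equal shape, True = ink pixels."""
            dt_b = distance_transform_edt(~b)  # distance to nearest ink in b
            dt_a = distance_transform_edt(~a)  # distance to nearest ink in a
            ab = dt_b[a].mean() if a.any() else 0.0  # project a's ink onto b
            ba = dt_a[b].mean() if b.any() else 0.0  # project b's ink onto a
            return 0.5 * (ab + ba)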

  3. Webizing mobile augmented reality content

    NASA Astrophysics Data System (ADS)

    Ahn, Sangchul; Ko, Heedong; Yoo, Byounghyun

    2014-01-01

    This paper presents a content structure for building mobile augmented reality (AR) applications in HTML5 that achieves a clean separation of the mobile AR content from the application logic, so that applications can scale as on the Web. We propose that the content structure represent the physical world as well as virtual assets for mobile AR applications as document object model (DOM) elements, with their behaviour and user interactions controlled through DOM events, and with objects and places identified by uniform resource identifiers. Our content structure enables mobile AR applications to be developed seamlessly as normal HTML documents within the current Web ecosystem.

  4. Ensemble methods with simple features for document zone classification

    NASA Astrophysics Data System (ADS)

    Obafemi-Ajayi, Tayo; Agam, Gady; Xie, Bingqing

    2012-01-01

    Document layout analysis is of fundamental importance for document image understanding and information retrieval. It requires the identification of blocks extracted from a document image via feature extraction and block classification. In this paper, we focus on the classification of the extracted blocks into five classes: text (machine printed), handwriting, graphics, images, and noise. We propose a new set of features for efficient classification of these blocks. We present a comparative evaluation of three ensemble-based classification algorithms (boosting, bagging, and combined model trees) in addition to other known learning algorithms. Experimental results are demonstrated for a set of 36503 zones extracted from 416 document images, which were randomly selected from the tobacco legacy document collection. The results obtained verify the robustness and effectiveness of the proposed set of features in comparison to the commonly used Ocropus recognition features. When used in conjunction with the Ocropus feature set, we further improve the performance of the block classification system to obtain a classification accuracy of 99.21%.
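
    The ensemble comparison can be sketched with off-the-shelf learners, as below, where X holds the zone feature vectors and y the five class labels; the scikit-learn models are stand-ins for the paper's exact learners.

        # Sketch: compare bagging and boosting on zone feature vectors X
        # of shape (n_zones, n_features) with labels y in {text, handwriting,
        # graphics, image, noise}.
        from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
        from sklearn.model_selection import cross_val_score

        def compare_ensembles(X, y):
            models = [("bagging", BaggingClassifier(n_estimators=50)),
                      ("boosting", AdaBoostClassifier(n_estimators=50))]
            for name, model in models:
                scores = cross_val_score(model, X, y, cv=5)
                print(f"{name}: mean accuracy {scores.mean():.4f}")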

  5. NASA IMAGESEER: NASA IMAGEs for Science, Education, Experimentation and Research

    NASA Technical Reports Server (NTRS)

    Le Moigne, Jacqueline; Grubb, Thomas G.; Milner, Barbara C.

    2012-01-01

    A number of web-accessible databases, including medical, military or other image data, offer universities and other users the ability to teach or research new Image Processing techniques on relevant and well-documented data. However, NASA images have traditionally been difficult for researchers to find, are often only available in hard-to-use formats, and do not always provide sufficient context and background for a non-NASA Scientist user to understand their content. The new IMAGESEER (IMAGEs for Science, Education, Experimentation and Research) database seeks to address these issues. Through a graphically-rich web site for browsing and downloading all of the selected datasets, benchmarks, and tutorials, IMAGESEER provides a widely accessible database of NASA-centric, easy to read, image data for teaching or validating new Image Processing algorithms. As such, IMAGESEER fosters collaboration between NASA and research organizations while simultaneously encouraging development of new and enhanced Image Processing algorithms. The first prototype includes a representative sampling of NASA multispectral and hyperspectral images from several Earth Science instruments, along with a few small tutorials. Image processing techniques are currently represented with cloud detection, image registration, and map cover/classification. For each technique, corresponding data are selected from four different geographic regions, i.e., mountains, urban, water coastal, and agriculture areas. Satellite images have been collected from several instruments - Landsat-5 and -7 Thematic Mappers, Earth Observing-1 (EO-1) Advanced Land Imager (ALI) and Hyperion, and the Moderate Resolution Imaging Spectroradiometer (MODIS). After geo-registration, these images are available in simple common formats such as GeoTIFF and raw formats, along with associated benchmark data.

  6. Three-Dimensional Display Of Document Set

    DOEpatents

    Lantrip, David B.; Pennock, Kelly A.; Pottier, Marc C.; Schur, Anne; Thomas, James J.; Wise, James A.

    2003-06-24

    A method for spatializing text content for enhanced visual browsing and analysis. The invention is applied to large text document corpora such as digital libraries, regulations and procedures, archived reports, and the like. The text content from these sources may be transformed to a spatial representation that preserves informational characteristics from the documents. The three-dimensional representation may then be visually browsed and analyzed in ways that avoid language processing and that reduce the analysts' effort.
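
    One simple way to realize such a spatialization is sketched below: each document is reduced to three coordinates for visual browsing. The TF-IDF bag-of-words model and truncated SVD projection are illustrative assumptions, not the projection prescribed by the patent.

        # Sketch: project a document corpus into three dimensions for
        # visual browsing. TF-IDF and truncated SVD are illustrative choices.
        from sklearn.decomposition import TruncatedSVD
        from sklearn.feature_extraction.text import TfidfVectorizer

        def spatialize(docs):
            """docs: list of document strings -> (n_docs, 3) coordinates."""
            tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
            return TruncatedSVD(n_components=3).fit_transform(tfidf)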

  7. Three-dimensional display of document set

    DOEpatents

    Lantrip, David B [Oxnard, CA; Pennock, Kelly A [Richland, WA; Pottier, Marc C [Richland, WA; Schur, Anne [Richland, WA; Thomas, James J [Richland, WA; Wise, James A [Richland, WA

    2006-09-26

    A method for spatializing text content for enhanced visual browsing and analysis. The invention is applied to large text document corpora such as digital libraries, regulations and procedures, archived reports, and the like. The text content from these sources may be transformed to a spatial representation that preserves informational characteristics from the documents. The three-dimensional representation may then be visually browsed and analyzed in ways that avoid language processing and that reduce the analysts' effort.

  8. Three-dimensional display of document set

    DOEpatents

    Lantrip, David B [Oxnard, CA; Pennock, Kelly A [Richland, WA; Pottier, Marc C [Richland, WA; Schur, Anne [Richland, WA; Thomas, James J [Richland, WA; Wise, James A [Richland, WA

    2001-10-02

    A method for spatializing text content for enhanced visual browsing and analysis. The invention is applied to large text document corpora such as digital libraries, regulations and procedures, archived reports, and the like. The text content from these sources may be transformed to a spatial representation that preserves informational characteristics from the documents. The three-dimensional representation may then be visually browsed and analyzed in ways that avoid language processing and that reduce the analysts' effort.

  9. Three-dimensional display of document set

    DOEpatents

    Lantrip, David B [Oxnard, CA; Pennock, Kelly A [Richland, WA; Pottier, Marc C [Richland, WA; Schur, Anne [Richland, WA; Thomas, James J [Richland, WA; Wise, James A [Richland, WA; York, Jeremy [Bothell, WA

    2009-06-30

    A method for spatializing text content for enhanced visual browsing and analysis. The invention is applied to large text document corpora such as digital libraries, regulations and procedures, archived reports, and the like. The text content from these sources may be transformed to a spatial representation that preserves informational characteristics from the documents. The three-dimensional representation may then be visually browsed and analyzed in ways that avoid language processing and that reduce the analysts' effort.

  10. Procedures and Guidelines for Digitization (Scanning)

    EPA Pesticide Factsheets

    These documents establish EPA's approach to creating digitized versions of Agency documents and set standards for capturing digitized content from paper and microform Agency documents and records.

  11. A rapid review of treatment literacy materials for tuberculosis patients.

    PubMed

    Brumwell, A; Noyes, E; Kulkarni, S; Lin, V; Becerra, M C; Yuen, C M

    2018-03-01

    To assess available treatment literacy materials for patients undergoing treatment for tuberculosis (TB). We conducted a rapid review by searching the US Centers for Disease Control's Find TB Resources website and the websites of health departments and TB-focused organizations. We included English-language documents intended to educate TB patients about anti-tuberculosis treatment. We evaluated the format, readability, content, and intended audience of the documents. We defined 12 essential content elements based on those previously identified as facilitating human immunodeficiency virus treatment literacy. Of the 205 documents obtained, 45 were included in our review. The median reading grade level was 7 (IQR 5-8). The median number of essential content elements present was 6 (IQR 4-8), with the most comprehensive document containing 11 of the 12 elements. Only two documents were written for children with TB or their caregivers, and two for patients with drug-resistant TB. Many documents contained paternalistic and non-patient-centered language. We found few examples of comprehensive, patient-centered documents. Work is needed to achieve consensus as to the essential elements of TB treatment literacy and to create additional materials for children, patients with drug-resistant TB, and those with lower literacy levels.

  12. Page segmentation using script identification vectors: A first look

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hochberg, J.; Cannon, M.; Kelly, P.

    1997-07-01

    Document images in which different scripts, such as Chinese and Roman, appear on a single page pose a problem for optical character recognition (OCR) systems. This paper explores the use of script identification vectors in the analysis of multilingual document images. A script identification vector is calculated for each connected component in a document. The vector expresses the closest distance between the component and templates developed for each of thirteen scripts, including Arabic, Chinese, Cyrillic, and Roman. The authors calculate the first three principal components within the resulting thirteen-dimensional space for each image. By mapping these components to red, green, and blue, they can visualize the information contained in the script identification vectors. The visualization of several multilingual images suggests that the script identification vectors can be used to segment images into script-specific regions as large as several paragraphs or as small as a few characters. The visualized vectors also reveal distinctions within scripts, such as font in Roman documents, and kanji vs. kana in Japanese. Results are best for documents containing highly dissimilar scripts such as Roman and Japanese. Documents containing similar scripts, such as Roman and Cyrillic, will require further investigation.
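
    A minimal sketch of the visualization step: project 13-dimensional script identification vectors onto their first three principal components and map those to red, green, and blue. The input vectors here are synthetic stand-ins for the per-component template distances.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        vectors = rng.random((500, 13))    # one 13-D distance vector per component

        components = PCA(n_components=3).fit_transform(vectors)   # shape (500, 3)

        # Rescale each principal component to [0, 255] to drive a color channel.
        lo, hi = components.min(axis=0), components.max(axis=0)
        rgb = ((components - lo) / np.maximum(hi - lo, 1e-9) * 255).astype(np.uint8)
        # rgb[i] is the false color used to paint connected component i.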

  13. Restoring warped document images through 3D shape modeling.

    PubMed

    Tan, Chew Lim; Zhang, Li; Zhang, Zheng; Xia, Tao

    2006-02-01

    Scanning a document page from a thick bound volume often results in two kinds of distortions in the scanned image, i.e., shade along the "spine" of the book and warping in the shade area. In this paper, we propose an efficient restoration method based on the discovery of the 3D shape of a book surface from the shading information in a scanned document image. From a technical point of view, this shape from shading (SFS) problem in real-world environments is characterized by 1) a proximal and moving light source, 2) Lambertian reflection, 3) nonuniform albedo distribution, and 4) document skew. Taking all these factors into account, we first build practical models (consisting of a 3D geometric model and a 3D optical model) for the practical scanning conditions to reconstruct the 3D shape of the book surface. We next restore the scanned document image using this shape based on deshading and dewarping models. Finally, we evaluate the restoration results by comparing our estimated surface shape with the real shape as well as the OCR performance on original and restored document images. The results show that the geometric and photometric distortions are mostly removed and the OCR results are improved markedly.

  14. Requirements for a documentation of the image manipulation processes within PACS

    NASA Astrophysics Data System (ADS)

    Retter, Klaus; Rienhoff, Otto; Karsten, Ch.; Prince, Hazel E.

    1990-08-01

    This paper discusses to what extent manipulation functions applied to images handled in PACS should be documented. After noting the growing number of postprocessing features on PACS consoles, legal, educational and medical reasons for documenting image manipulation processes are presented. Besides legal necessities, aspects of storage capacity, response time, and potential uses determine the extent of this documentation. Is there a specific kind of manipulation function that must always be documented? Should the physician decide which of the various processing pathways he or she tries are recorded by the system? Distinguishing, for example, between reversible and irreversible functions, or between interactive and non-interactive functions, is one step towards a solution. Another step is to establish definitions for terms like "raw" and "final" image. The paper systematizes these questions and offers strategic guidance. The answers will have an important impact on PACS design and functionality.

  15. Image and information management system

    NASA Technical Reports Server (NTRS)

    Robertson, Tina L. (Inventor); Raney, Michael C. (Inventor); Dougherty, Dennis M. (Inventor); Kent, Peter C. (Inventor); Brucker, Russell X. (Inventor); Lampert, Daryl A. (Inventor)

    2009-01-01

    A system and methods through which pictorial views of an object's configuration, arranged in a hierarchical fashion, are navigated by a person to establish a visual context within the configuration. The visual context is automatically translated by the system into a set of search parameters driving retrieval of structured data and content (images, documents, multimedia, etc.) associated with the specific context. The system places "hot spots", or actionable regions, on various portions of the pictorials representing the object. When a user interacts with an actionable region, a more detailed pictorial from the hierarchy is presented representing that portion of the object, along with real-time feedback in the form of a popup pane containing information about that region, and counts-by-type reflecting the number of items that are available within the system associated with the specific context and search filters established at that point in time.

  16. Image and information management system

    NASA Technical Reports Server (NTRS)

    Robertson, Tina L. (Inventor); Kent, Peter C. (Inventor); Raney, Michael C. (Inventor); Dougherty, Dennis M. (Inventor); Brucker, Russell X. (Inventor); Lampert, Daryl A. (Inventor)

    2007-01-01

    A system and methods through which pictorial views of an object's configuration, arranged in a hierarchical fashion, are navigated by a person to establish a visual context within the configuration. The visual context is automatically translated by the system into a set of search parameters driving retrieval of structured data and content (images, documents, multimedia, etc.) associated with the specific context. The system places hot spots, or actionable regions, on various portions of the pictorials representing the object. When a user interacts with an actionable region, a more detailed pictorial from the hierarchy is presented representing that portion of the object, along with real-time feedback in the form of a popup pane containing information about that region, and counts-by-type reflecting the number of items that are available within the system associated with the specific context and search filters established at that point in time.

  17. Method of determining a content of a nuclear waste container

    DOEpatents

    Bernardi, Richard T.; Entwistle, David

    2003-04-22

    A method and apparatus are provided for identifying contents of a nuclear waste container. The method includes the steps of forming an image of the contents of the container using digital radiography, visually comparing contents of the image with expected contents of the container and performing computer tomography on the container when the visual inspection reveals an inconsistency between the contents of the image and the expected contents of the container.

  18. Large-scale image region documentation for fully automated image biomarker algorithm development and evaluation

    PubMed Central

    Reeves, Anthony P.; Xie, Yiting; Liu, Shuang

    2017-01-01

    With the advent of fully automated image analysis and modern machine learning methods, there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. This paper presents a method and implementation for facilitating such datasets that addresses the critical issue of size scaling for algorithm validation and evaluation; current evaluation methods that are usually used in academic studies do not scale to large datasets. This method includes protocols for the documentation of many regions in very large image datasets; the documentation may be incrementally updated by new image data and by improved algorithm outcomes. This method has been used for 5 years in the context of chest health biomarkers from low-dose chest CT images that are now being used with increasing frequency in lung cancer screening practice. The lung scans are segmented into over 100 different anatomical regions, and the method has been applied to a dataset of over 20,000 chest CT images. Using this framework, the computer algorithms have been developed to achieve over 90% acceptable image segmentation on the complete dataset. PMID:28612037

  19. Use of existing patient-reported outcome (PRO) instruments and their modification: the ISPOR Good Research Practices for Evaluating and Documenting Content Validity for the Use of Existing Instruments and Their Modification PRO Task Force Report.

    PubMed

    Rothman, Margaret; Burke, Laurie; Erickson, Pennifer; Leidy, Nancy Kline; Patrick, Donald L; Petrie, Charles D

    2009-01-01

    Patient-reported outcome (PRO) instruments are used to evaluate the effect of medical products on how patients feel or function. This article presents the results of an ISPOR task force convened to address good clinical research practices for the use of existing or modified PRO instruments to support medical product labeling claims. The focus of the article is on content validity, with specific reference to existing or modified PRO instruments, because of the importance of content validity in selecting or modifying an existing PRO instrument and the lack of consensus in the research community regarding best practices for establishing and documenting this measurement property. Topics addressed in the article include: definition and general description of content validity; PRO concept identification as the important first step in establishing content validity; instrument identification and the initial review process; key issues in qualitative methodology; and potential threats to content validity, with three case examples used to illustrate types of threats and how they might be resolved. A table of steps used to identify and evaluate an existing PRO instrument is provided, and figures are used to illustrate the meaning of content validity in relationship to instrument development and evaluation. RESULTS & RECOMMENDATIONS: Four important threats to content validity are identified: unclear conceptual match between the PRO instrument and the intended claim, lack of direct patient input into PRO item content from the target population in which the claim is desired, no evidence that the most relevant and important item content is contained in the instrument, and lack of documentation to support modifications to the PRO instrument. In some cases, careful review of the threats to content validity in a specific application may be reduced through additional well documented qualitative studies that specifically address the issue of concern. Published evidence of the content validity of a PRO instrument for an intended application is often limited. Such evidence is, however, important to evaluating the adequacy of a PRO instrument for the intended application. This article provides an overview of key issues involved in assessing and documenting content validity as it relates to using existing instruments in the drug approval process.

  20. A Sensitive Measurement for Estimating Impressions of Image-Contents

    NASA Astrophysics Data System (ADS)

    Sato, Mie; Matouge, Shingo; Mori, Toshifumi; Suzuki, Noboru; Kasuga, Masao

    We have investigated Kansei Content, content that conveys the maker's intention to the viewer's kansei (sensibility). The semantic differential (SD) method is a good way to evaluate the subjective impression of image content. However, because the SD method is administered after subjects view the content, it is difficult to examine impressions of individual scenes in real time. To measure viewers' impressions of image content in real time, we have developed the Taikan sensor. With the Taikan sensor, we investigate the relations among image content, grip strength, and body temperature. We also explore how to make the sensor's interface easy to use. In our experiment, a horror movie was used because it strongly affects the subjects' emotions. Our results suggest that grip strength may increase when subjects view a tense scene, and that the Taikan sensor is easy to use without the circular base originally installed.

  1. Selection. ERIC Processing Manual, Section III.

    ERIC Educational Resources Information Center

    Brandhorst, Ted, Ed.

    Rules and guidelines are provided governing the selection of documents and journal articles to be included in the ERIC database. Selection criteria are described under the five headings: (1) Appropriateness of content/subject matter; (2) Suitability of format, medium, document type; (3) Quality of content; (4) Legibility and reproducibility; (5)…

  2. Minnesota Academic Standards: Kindergarten

    ERIC Educational Resources Information Center

    Minnesota Department of Education, 2017

    2017-01-01

    This document contains all of the Minnesota kindergarten academic standards in the content areas of Arts, English Language Arts, Mathematics, Science and Social Studies. For each content area there is a short overview followed by a coding diagram of how the standards are organized and displayed. This document is adapted from the official versions…

  3. 47 CFR 1.913 - Application and notification forms; electronic and manual filing.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... notifications whenever possible. The files, other than the ASCII table of contents, should be in Adobe Acrobat... possible. The attachment should be uploaded via ULS in Adobe Acrobat Portable Document Format (PDF... the table of contents, should be in Adobe Acrobat Portable Document Format (PDF) whenever possible...

  4. Method and apparatus for filtering visual documents

    NASA Technical Reports Server (NTRS)

    Rorvig, Mark E. (Inventor); Shelton, Robert O. (Inventor)

    1993-01-01

    A method and apparatus for producing an abstract or condensed version of a visual document is presented. The frames comprising the visual document are first sampled to reduce the number of frames required for processing. The frames are then subjected to a structural decomposition process that reduces all information in each frame to a set of values. These values are in turn normalized and further combined to produce only one information content value per frame. The information content values of these frames are then compared to a selected distribution cutoff point. This effectively selects those values at the tails of a normal distribution, thus filtering key frames from their surrounding frames. The value for each frame is then compared with the value from the previous frame, and the respective frame is finally stored only if the values are significantly different. The method filters or compresses a visual document with a reduction in digital storage at ratios of up to 700 to 1 or more, depending on the content of the visual document being filtered.
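
    A minimal sketch of the tail-selection and change-detection steps, assuming each frame has already been reduced to a single information-content value; the threshold parameters are invented for illustration.

        import numpy as np

        def filter_key_frames(values, tail_fraction=0.05, min_delta=0.5):
            """Keep frames whose values fall in the tails of the value
            distribution and differ significantly from the last kept frame."""
            values = np.asarray(values, dtype=float)
            z = np.abs((values - values.mean()) / (values.std() + 1e-9))
            cutoff = np.quantile(z, 1.0 - tail_fraction)  # distribution cutoff point
            kept, last = [], None
            for i in range(len(values)):
                if z[i] >= cutoff and (last is None or
                                       abs(values[i] - values[last]) >= min_delta):
                    kept.append(i)    # index of a retained key frame
                    last = i
            return kept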

  5. Spatial Paradigm for Information Retrieval and Exploration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The SPIRE system consists of software for visual analysis of primarily text-based information sources. This technology enables content analysis of text documents without reading all the documents. It employs several algorithms for text and word proximity analysis and identifies the key themes within the text documents. From this analysis, it projects the results onto a visual spatial-proximity display (Galaxies or Themescape) in which items (documents and/or themes) that appear close together are known to have similar content. Innovative interaction techniques then allow dynamic visual analysis of large text-based information spaces.

  6. SPIRE1.03. Spatial Paradigm for Information Retrieval and Exploration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, K.J.; Bohn, S.; Crow, V.

    The SPIRE system consists of software for visual analysis of primarily text-based information sources. This technology enables content analysis of text documents without reading all the documents. It employs several algorithms for text and word proximity analysis and identifies the key themes within the text documents. From this analysis, it projects the results onto a visual spatial-proximity display (Galaxies or Themescape) in which items (documents and/or themes) that appear close together are known to have similar content. Innovative interaction techniques then allow dynamic visual analysis of large text-based information spaces.

  7. Method and apparatus for imaging and documenting fingerprints

    DOEpatents

    Fernandez, Salvador M.

    2002-01-01

    The invention relates to a method and apparatus for imaging and documenting fingerprints. A fluorescent dye brought in intimate proximity with the lipid residues of a latent fingerprint is caused to fluoresce on exposure to light energy. The resulting fluorescing image may be recorded photographically.

  8. Low-cost conversion of the Polaroid MD-4 land camera to a digital gel documentation system.

    PubMed

    Porch, Timothy G; Erpelding, John E

    2006-04-30

    A simple, inexpensive design is presented for the rapid conversion of the popular MD-4 Polaroid land camera to a high quality digital gel documentation system. Images of ethidium bromide stained DNA gels captured using the digital system were compared to images captured on Polaroid instant film. Resolution and sensitivity were enhanced using the digital system. In addition to the low cost and superior image quality of the digital system, there is also the added convenience of real-time image viewing through the swivel LCD of the digital camera, wide flexibility of gel sizes, accurate automatic focusing, variable image resolution, and consistent ease of use and quality. Images can be directly imported to a computer by using the USB port on the digital camera, further enhancing the potential of the digital system for documentation, analysis, and archiving. The system is appropriate for use as a start-up gel documentation system and for routine gel analysis.

  9. Signature detection and matching for document image retrieval.

    PubMed

    Zhu, Guangyu; Zheng, Yefeng; Doermann, David; Jaeger, Stefan

    2009-11-01

    As one of the most pervasive methods of individual identification and document authentication, signatures present convincing evidence and provide an important form of indexing for effective document image processing and retrieval in a broad range of applications. However, detection and segmentation of free-form objects such as signatures from a cluttered background is currently an open document analysis problem. In this paper, we focus on two fundamental problems in signature-based document image retrieval. First, we propose a novel multiscale approach to jointly detecting and segmenting signatures from document images. Rather than focusing on local features that typically have large variations, our approach captures the structural saliency using a signature production model and computes the dynamic curvature of 2D contour fragments over multiple scales. This detection framework is general and computationally tractable. Second, we treat the problem of signature retrieval in the unconstrained setting of translation, scale, and rotation invariant nonrigid shape matching. We propose two novel measures of shape dissimilarity based on anisotropic scaling and registration residual error and present a supervised learning framework for combining complementary shape information from different dissimilarity metrics using LDA. We quantitatively study state-of-the-art shape representations, shape matching algorithms, measures of dissimilarity, and the use of multiple instances as query in document image retrieval. We further demonstrate our matching techniques in offline signature verification. Extensive experiments using large real-world collections of English and Arabic machine-printed and handwritten documents demonstrate the excellent performance of our approaches.
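
    A minimal sketch of the supervised-fusion idea: two shape dissimilarity scores (standing in for the anisotropic-scaling and registration-residual measures) are combined with linear discriminant analysis to decide whether a candidate matches the query signature. The training data here are synthetic.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(1)
        # Rows: [scaling_dissimilarity, residual_dissimilarity]; label 1 = same writer.
        X = np.vstack([rng.normal(0.3, 0.1, (100, 2)),   # genuine pairs: low scores
                       rng.normal(0.8, 0.1, (100, 2))])  # impostor pairs: high scores
        y = np.array([1] * 100 + [0] * 100)

        lda = LinearDiscriminantAnalysis().fit(X, y)
        print(lda.predict([[0.35, 0.40]]))   # fused match decision for a new pair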

  10. Search systems and computer-implemented search methods

    DOEpatents

    Payne, Deborah A.; Burtner, Edwin R.; Hampton, Shawn D.; Gillen, David S.; Henry, Michael J.

    2017-03-07

    Search systems and computer-implemented search methods are described. In one aspect, a search system includes a communications interface configured to access a plurality of data items of a collection, wherein the data items include a plurality of image objects individually comprising image data utilized to generate an image of the respective data item. The search system may include processing circuitry coupled with the communications interface and configured to process the image data of the data items of the collection to identify a plurality of image content facets which are indicative of image content contained within the images and to associate the image objects with the image content facets and a display coupled with the processing circuitry and configured to depict the image objects associated with the image content facets.

  11. Search systems and computer-implemented search methods

    DOEpatents

    Payne, Deborah A.; Burtner, Edwin R.; Bohn, Shawn J.; Hampton, Shawn D.; Gillen, David S.; Henry, Michael J.

    2015-12-22

    Search systems and computer-implemented search methods are described. In one aspect, a search system includes a communications interface configured to access a plurality of data items of a collection, wherein the data items include a plurality of image objects individually comprising image data utilized to generate an image of the respective data item. The search system may include processing circuitry coupled with the communications interface and configured to process the image data of the data items of the collection to identify a plurality of image content facets which are indicative of image content contained within the images and to associate the image objects with the image content facets and a display coupled with the processing circuitry and configured to depict the image objects associated with the image content facets.

  12. Content-based image retrieval on mobile devices

    NASA Astrophysics Data System (ADS)

    Ahmad, Iftikhar; Abdullah, Shafaq; Kiranyaz, Serkan; Gabbouj, Moncef

    2005-03-01

    Content-based image retrieval holds tremendous potential for researchers and industry alike because of its promising results. Expeditious retrieval of desired images requires indexing of the content of large-scale databases along with extraction of low-level features based on the content of these images. With the recent advances in wireless communication technology and the availability of multimedia-capable phones, it has become important to enable query operations on image databases and to retrieve results based on image content. In this paper we present a content-based image retrieval system for mobile platforms, providing content-based query capability to any mobile device that supports the Java platform. The system consists of a lightweight client application running on a Java-enabled device and a server containing a servlet running inside a Java-enabled web server. The server responds to an image query using efficient native code over the selected image database. The client application, running on a mobile phone, initiates a query request, which is handled by the servlet to find the closest match to the queried image. The retrieved results are transmitted over the mobile network and the images are displayed on the phone. We conclude that such a system serves as a basis for content-based information retrieval on wireless devices, which must cope with factors such as the constraints of hand-held devices and the reduced network bandwidth available in mobile environments.

  13. Study on Hybrid Image Search Technology Based on Texts and Contents

    NASA Astrophysics Data System (ADS)

    Wang, H. T.; Ma, F. L.; Yan, C.; Pan, H.

    2018-05-01

    We first study image search based on text and on content separately. For text-based search, keyword extraction integrates statistical and topic features, addressing the limitation of extracting keywords by statistical word features alone. For content-based search, a multi-feature fusion method addresses the imprecision of retrieval by means of a single feature. Because the text-based and content-based methods differ and are difficult to fuse directly, we then propose a layered search that relies primarily on the text-based method and secondarily on content-based search. The feasibility and effectiveness of the hybrid search algorithm were verified experimentally.
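
    A minimal sketch of the layered strategy, assuming a text-relevance score and a content feature vector are already available for every candidate image; the shortlist size and the cosine re-ranking are illustrative choices.

        import numpy as np

        def layered_search(text_scores, content_features, query_feature, shortlist=50):
            # Primary layer: top candidates by text relevance.
            order = np.argsort(text_scores)[::-1][:shortlist]
            # Secondary layer: re-rank the shortlist by content (cosine) similarity.
            feats = content_features[order]
            sims = feats @ query_feature / (
                np.linalg.norm(feats, axis=1) * np.linalg.norm(query_feature) + 1e-9)
            return order[np.argsort(sims)[::-1]]   # final ranking of image indices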

  14. Translated and annotated version of the 2015-2020 National Mental Health Work Plan of the People's Republic of China

    PubMed Central

    XIONG, Wei; PHILLIPS, Michael R.

    2016-01-01

    The following document is a translation of the 2015-2020 National Mental Health Work Plan of the People's Republic of China, which was issued by the General Office of China's State Council on June 4, 2015. The original Chinese version of the document is available at the official government website: http://www.gov.cn/gongbao/content/2015/content_2883226.htm. The translators have added annotations at the end of the document that provide background information to help contextualize content that may be unclear to readers unfamiliar with China and explain their decisions when translating terms that can have multiple interpretations. PMID:27688639

  15. Compressibility-aware media retargeting with structure preserving.

    PubMed

    Wang, Shu-Fan; Lai, Shang-Hong

    2011-03-01

    A number of algorithms have been proposed for intelligent image/video retargeting with image content retained as much as possible. However, they usually suffer from artifacts in the results, such as ridges or structure twist. In this paper, we present a structure-preserving media retargeting technique that preserves the content and image structure as well as possible. Unlike previous pixel- or grid-based methods, we estimate the image content saliency from the structure of the content. A block structure energy is introduced with a top-down strategy to constrain the image structure inside to deform uniformly in either the x or the y direction. However, the flexibility for retargeting differs considerably across images. To cope with this problem, we propose a compressibility assessment scheme for media retargeting that combines the entropies of the image gradient magnitude and orientation distributions. Thus, the resized media is produced to preserve the image content and structure as well as possible. Our experiments demonstrate that the proposed method provides resized images/videos with better preservation of content and structure than previous methods.
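
    A minimal sketch of the compressibility assessment, combining the entropies of the gradient-magnitude and gradient-orientation histograms; the bin counts and the unweighted sum are assumptions rather than the paper's exact formulation.

        import numpy as np

        def entropy(hist):
            p = hist / max(hist.sum(), 1)
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())

        def compressibility(gray):
            gy, gx = np.gradient(gray.astype(float))
            mag, ang = np.hypot(gx, gy), np.arctan2(gy, gx)
            h_mag, _ = np.histogram(mag, bins=64)
            h_ang, _ = np.histogram(ang, bins=36, range=(-np.pi, np.pi))
            # Lower combined entropy suggests more homogeneous content, i.e. an
            # image that can tolerate more aggressive resizing.
            return entropy(h_mag) + entropy(h_ang)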

  16. Image enhancement using the hypothesis selection filter: theory and application to JPEG decoding.

    PubMed

    Wong, Tak-Shing; Bouman, Charles A; Pollak, Ilya

    2013-03-01

    We introduce the hypothesis selection filter (HSF) as a new approach for image quality enhancement. We assume that a set of filters has been selected a priori to improve the quality of a distorted image containing regions with different characteristics. At each pixel, HSF uses a locally computed feature vector to predict the relative performance of the filters in estimating the corresponding pixel intensity in the original undistorted image. The prediction result then determines the proportion of each filter used to obtain the final processed output. In this way, the HSF serves as a framework for combining the outputs of a number of different user selected filters, each best suited for a different region of an image. We formulate our scheme in a probabilistic framework where the HSF output is obtained as the Bayesian minimum mean square error estimate of the original image. Maximum likelihood estimates of the model parameters are determined from an offline fully unsupervised training procedure that is derived from the expectation-maximization algorithm. To illustrate how to apply the HSF and to demonstrate its potential, we apply our scheme as a post-processing step to improve the decoding quality of JPEG-encoded document images. The scheme consistently improves the quality of the decoded image over a variety of image content with different characteristics. We show that our scheme results in quantitative improvements over several other state-of-the-art JPEG decoding methods.
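
    The per-pixel fusion step lends itself to a compact sketch: given the outputs of the pre-selected filters and a per-pixel weight map (standing in here for the trained predictor described above), the processed output is their weighted combination. All names and shapes are illustrative assumptions.

        import numpy as np

        def combine_filter_outputs(filter_outputs, weights):
            """filter_outputs and weights both have shape (n_filters, H, W);
            weights sum to 1 across the filter axis at every pixel."""
            return (filter_outputs * weights).sum(axis=0)

        # Two constant "filters" fused with equal weight at every pixel:
        outs = np.stack([np.full((4, 4), 10.0), np.full((4, 4), 20.0)])
        w = np.full((2, 4, 4), 0.5)
        print(combine_filter_outputs(outs, w))   # 15.0 everywhere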

  17. Statistical Techniques for Efficient Indexing and Retrieval of Document Images

    ERIC Educational Resources Information Center

    Bhardwaj, Anurag

    2010-01-01

    We have developed statistical techniques to improve the performance of document image search systems where the intermediate step of OCR based transcription is not used. Previous research in this area has largely focused on challenges pertaining to generation of small lexicons for processing handwritten documents and enhancement of poor quality…

  18. A picture tells a thousand words: A content analysis of concussion-related images online.

    PubMed

    Ahmed, Osman H; Lee, Hopin; Struik, Laura L

    2016-09-01

    Recently, image-sharing social media platforms have become a popular medium for sharing health-related images and associated information. However, within the field of sports medicine, and more specifically sport-related concussion, the content of images and metadata shared through these popular platforms has not been investigated. The aim of this study was to analyse the content of concussion-related images and their accompanying metadata on image-sharing social media platforms. We retrieved 300 images from Pinterest, Instagram and Flickr by using a standardised search strategy. All images were screened and duplicate images were removed. We excluded images if they were: non-static images; illustrations; animations; or screenshots. The content and characteristics of each image were evaluated using a customised coding scheme to determine major content themes, and images were referenced against the current international concussion management guidelines. From 300 potentially relevant images, 176 images were included for analysis; 70 from Pinterest, 63 from Flickr, and 43 from Instagram. Most images were of another person or a scene (64%), with the primary content depicting injured individuals (39%). The primary purposes of the images were to share a concussion-related incident (33%) and to dispense education (19%). For those images where it could be evaluated, the majority (91%) were found to reflect the Sport Concussion Assessment Tool 3 (SCAT3) guidelines. The ability to rapidly disseminate rich information through photos, images, and infographics to a wide-reaching audience suggests that image-sharing social media platforms could be used as an effective communication tool for sports concussion. Public health strategies could direct educative content to targeted populations via the use of image-sharing platforms. Further research is required to understand how image-sharing platforms can be used to effectively relay evidence-based information to patients and sports medicine clinicians. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. One-click scanning of large-size documents using mobile phone camera

    NASA Astrophysics Data System (ADS)

    Liu, Sijiang; Jiang, Bo; Yang, Yuanjie

    2016-07-01

    Current mobile apps for document scanning do not provide convenient operations for tackling large-size documents. In this paper, we present a one-click scanning approach for large-size documents using a mobile phone camera. After capturing a continuous video of a document, our approach automatically extracts several key frames by optical flow analysis. Then, based on the key frames, a mobile GPU-based image stitching method is adopted to generate a complete document image with high detail. No extra manual intervention is required in the process, and experimental results show that our app performs well, demonstrating its convenience and practicality for daily use.
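
    A minimal sketch of key-frame extraction by optical-flow analysis in the spirit of the approach above, using OpenCV's dense Farneback flow; the motion threshold and the accumulation rule are assumptions, not the authors' exact criterion.

        import cv2
        import numpy as np

        def extract_key_frames(video_path, motion_threshold=20.0):
            cap = cv2.VideoCapture(video_path)   # assumes the video opens successfully
            ok, prev = cap.read()
            keys, accumulated = [prev], 0.0
            prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                    0.5, 3, 15, 3, 5, 1.2, 0)
                accumulated += np.linalg.norm(flow, axis=2).mean()  # mean motion
                if accumulated >= motion_threshold:  # enough new content seen
                    keys.append(frame)
                    accumulated = 0.0
                prev_gray = gray
            cap.release()
            return keys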

  20. Prototype for Meta-Algorithmic, Content-Aware Image Analysis

    DTIC Science & Technology

    2015-03-01

    Final technical report, University of Virginia, March 2015 (contract FA8750-12-C-0181; program element 62305E). Meta-algorithmic, content-aware image analysis approaches were studied in detail, and their results on a sample dataset are presented. Subject terms: image analysis, computer vision, content…

  1. Selected Images of the Pu'u 'O'o-Kupaianaha Eruption, 1983-1997

    USGS Publications Warehouse

    Takahashi, Taeko Jane; Heliker, Christina C.; Diggles, Michael F.

    2003-01-01

    The 100 images in this CD-ROM have been selected from the collections of the Hawaiian Volcano Observatory as enduring favorites of the staff, researchers, media, designers, and the public over time. They represent photographs of a variety of geological phenomena and eruptive events, chosen for their content, quality of exposure, and aesthetic appeal. The number was kept to 100 to maintain the desired high resolution. Since 1997, digital imagery has been the predominant mode of photographically documenting the eruption. Many of these photos, from 1998 to the present, are viewable on the website: http://hvo.wr.usgs.gov/kilauea/update/archive/. Episode numbers are given as E-numbers in parentheses before each caption that pertains to the Pu'u 'O'o-Kupaianaha eruption; details of the episodes are given in table 1. Hawaiian words and place names are listed below to facilitate searching. All images included in this collection are owned by the U.S. Geological Survey, Hawaiian Volcano Observatory, and are in the public domain. Therefore, no permission or fee is required for their use. Please include photo credit for the photographer and the U.S. Geological Survey. We assume no responsibility for the modification of these images.

  2. Text extraction method for historical Tibetan document images based on block projections

    NASA Astrophysics Data System (ADS)

    Duan, Li-juan; Zhang, Xi-qun; Ma, Long-long; Wu, Jian

    2017-11-01

    Text extraction is an important initial step in digitizing historical documents. In this paper, we present a text extraction method for historical Tibetan document images based on block projections. The task of text extraction is treated as a text-area detection and location problem. The images are divided equally into blocks, and the blocks are filtered using the categories of connected components and the corner-point density. By analyzing the projections of the filtered blocks, the approximate text areas can be located and the text regions extracted. Experiments on a dataset of historical Tibetan documents demonstrate the effectiveness of the proposed method.

  3. Supporting document for the historical tank content estimate for AY-tank farm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brevick, C H; Stroup, J L; Funk, J. W.

    1997-03-12

    This Supporting Document provides historical in-depth characterization information on AY-Tank Farm, such as historical waste transfer and level data, tank physical information, temperature plots, liquid observation well plots, chemical analyte and radionuclide inventories for the Historical Tank Content Estimate Report for the Southeast Quadrant of the Hanford 200 Areas.

  4. Pedagogical Content Knowledge and Industrial Design Education

    ERIC Educational Resources Information Center

    Phillips, Kenneth R.; De Miranda, Michael A.; Shin, Jinseup

    2009-01-01

    Pedagogical content knowledge (PCK) has been embraced by many of the recent educational reform documents as a way of describing the knowledge possessed by expert teachers. These reform documents have also served as guides for educators to develop models of teacher development. However, in the United States, few if any of the current models…

  5. A model for enhancing Internet medical document retrieval with "medical core metadata".

    PubMed

    Malet, G; Munoz, F; Appleyard, R; Hersh, W

    1999-01-01

    Finding documents on the World Wide Web relevant to a specific medical information need can be difficult. The goal of this work is to define a set of document content description tags, or metadata encodings, that can be used to promote disciplined search access to Internet medical documents. The authors based their approach on a proposed metadata standard, the Dublin Core Metadata Element Set, which has recently been submitted to the Internet Engineering Task Force. Their model also incorporates the National Library of Medicine's Medical Subject Headings (MeSH) vocabulary and MEDLINE-type content descriptions. The model defines a medical core metadata set that can be used to describe the metadata for a wide variety of Internet documents. The authors propose that their medical core metadata set be used to assign metadata to medical documents to facilitate document retrieval by Internet search engines.
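
    A minimal illustration of what one such record might look like, expressed as a Python mapping whose keys loosely follow Dublin Core element names with a MeSH-qualified subject field; every value is invented for the example.

        medical_core_metadata = {
            "DC.title": "Patient guide to anticoagulant therapy",       # hypothetical
            "DC.creator": "Example Clinic Patient Education Office",
            "DC.date": "1999-01-01",
            "DC.type": "patient education handout",
            "DC.subject.MeSH": ["Anticoagulants", "Patient Education as Topic"],
            "DC.description": "MEDLINE-style structured description of the document.",
        }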

  6. A Model for Enhancing Internet Medical Document Retrieval with “Medical Core Metadata”

    PubMed Central

    Malet, Gary; Munoz, Felix; Appleyard, Richard; Hersh, William

    1999-01-01

    Objective: Finding documents on the World Wide Web relevant to a specific medical information need can be difficult. The goal of this work is to define a set of document content description tags, or metadata encodings, that can be used to promote disciplined search access to Internet medical documents. Design: The authors based their approach on a proposed metadata standard, the Dublin Core Metadata Element Set, which has recently been submitted to the Internet Engineering Task Force. Their model also incorporates the National Library of Medicine's Medical Subject Headings (MeSH) vocabulary and Medline-type content descriptions. Results: The model defines a medical core metadata set that can be used to describe the metadata for a wide variety of Internet documents. Conclusions: The authors propose that their medical core metadata set be used to assign metadata to medical documents to facilitate document retrieval by Internet search engines. PMID:10094069

  7. Document Examination: Applications of Image Processing Systems.

    PubMed

    Kopainsky, B

    1989-12-01

    Dealing with images is a familiar business for an expert in questioned documents: microscopic, photographic, infrared, and other optical techniques generate images containing the information he or she is looking for. A recent method for extracting most of this information is digital image processing, ranging from simple contrast and contour enhancement to advanced restoration of blurred texts. When combined with a sophisticated physical imaging system, an image processing system has proven to be a powerful and fast tool for routine non-destructive scanning of suspect documents. This article reviews frequent applications, comprising techniques to increase legibility, two-dimensional spectroscopy (ink discrimination, alterations, erased entries, etc.), comparison techniques (stamps, typescript letters, photo substitution), and densitometry. Computerized comparison of handwriting is not included. Copyright © 1989 Central Police University.

  8. Path Searching Based Crease Detection for Large Scale Scanned Document Images

    NASA Astrophysics Data System (ADS)

    Zhang, Jifu; Li, Yi; Li, Shutao; Sun, Bin; Sun, Jun

    2017-12-01

    Since large-size documents are usually folded for preservation, creases occur in the scanned images. In this paper, a crease detection method is proposed to locate the crease pixels for further processing. According to the imaging process of contactless scanners, the shading on the two sides of a crease usually differs considerably. Based on this observation, a convex-hull-based algorithm is adopted to extract the shading information of the scanned image. Then, a candidate crease path is obtained by applying a vertical filter and morphological operations to the shading image. Finally, the accurate crease is detected via Dijkstra path searching. Experimental results on a dataset of real scanned newspapers demonstrate that the proposed method obtains accurate locations of creases in large-size document images.
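
    The final step maps naturally onto a minimum-cost path search. A minimal sketch follows, using scikit-image's route_through_array (a Dijkstra-style search) on a synthetic cost image in which low cost marks the filtered crease evidence; the cost construction is an assumption for illustration.

        import numpy as np
        from skimage.graph import route_through_array

        cost = np.ones((200, 300))
        cost[:, 150] = 0.01   # cheap vertical corridor standing in for crease evidence
        path, total_cost = route_through_array(cost, start=(0, 150), end=(199, 150),
                                               fully_connected=True)
        crease_pixels = np.array(path)   # (row, col) coordinates of the detected crease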

  9. A Content Markup Language for Data Services

    NASA Astrophysics Data System (ADS)

    Noviello, C.; Acampa, P.; Mango Furnari, M.

    Network content delivery and document sharing are possible using a variety of technologies, such as distributed databases, service-oriented applications, and so forth. The development of such systems is a complex job, because the document life cycle involves strong cooperation between domain experts and software developers. Furthermore, emerging software methodologies, such as service-oriented architecture and knowledge organization (e.g., the semantic web), have not really solved the problems faced in a real distributed, cooperative setting. In this chapter the authors' efforts to design and deploy a distributed, cooperative content management system are described. The main features of the system are a user-configurable document type definition and a management middleware layer, which allows CMS developers to orchestrate the composition of specialized software components around the structure of a document. The chapter also reports some of the experience gained in deploying the framework in a cultural heritage dissemination setting.

  10. RocketCam systems for providing situational awareness on rockets, spacecraft, and other remote platforms

    NASA Astrophysics Data System (ADS)

    Ridenoure, Rex

    2004-09-01

    Space-borne imaging systems derived from commercial technology have been successfully employed on launch vehicles for several years. Since 1997, over sixty such imagers - all in the product family called RocketCam™ - have operated successfully on 29 launches involving most U.S. launch systems. During this time, these inexpensive systems have demonstrated their utility in engineering analysis of liftoff and ascent events, booster performance, separation events and payload separation operations, and have also been employed to support and document related ground-based engineering tests. Such views from various vantage points provide not only visualization of key events but stunning and extremely positive public relations video content. Near-term applications include capturing key events on Earth-orbiting spacecraft and related proximity operations. This paper examines the history to date of RocketCams on expendable and manned launch vehicles, assesses their current utility on rockets, spacecraft and other aerospace vehicles (e.g., UAVs), and provides guidance for their use in selected defense and security applications. Broad use of RocketCams on defense and security projects will provide critical engineering data for developmental efforts, a large database of in-situ measurements onboard and around aerospace vehicles and platforms, compelling public relations content, and new diagnostic information for systems designers and failure-review panels alike.

  11. Portable document format file showing the surface models of cadaver whole body.

    PubMed

    Shin, Dong Sun; Chung, Min Suk; Park, Jin Seo; Park, Hyung Seon; Lee, Sangho; Moon, Young Lae; Jang, Hae Gwon

    2012-08-01

    In the Visible Korean project, 642 three-dimensional (3D) surface models have been built from the sectioned images of a male cadaver. It was recently found that the popular PDF format enables users to access the numerous surface models conveniently in Adobe Reader. The purpose of this study was to present a PDF file that includes systematized surface models of the human body as useful content. To this end, suitable software packages were employed in the following procedure. Two-dimensional (2D) surface models, including the original sectioned images, were embedded into the 3D surface models. The surface models were categorized into systems and then groups. The adjusted surface models were inserted into a PDF file, to which relevant multimedia data were added. The finalized PDF file, containing comprehensive data of a whole body, can be explored in various ways. The PDF file, downloadable freely from the homepage (http://anatomy.co.kr), is expected to serve as a satisfactory self-learning tool for anatomy. Raw data of the surface models can be extracted from the PDF file and employed in various simulations for clinical practice. The technique of organizing the surface models will be applied to the manufacture of other PDF files containing various multimedia contents.

  12. Predicting floods with Flickr tags

    PubMed Central

    Jarvis, Stephen; Procter, Rob

    2017-01-01

    Increasingly, user-generated content (UGC) in social media postings and its associated metadata, such as time and location stamps, is being used to provide useful operational information during natural hazard events such as hurricanes, storms and floods. The main advantages of these new data sources are twofold. First, in a purely additive sense, they can provide much denser geographical coverage of the hazard than traditional sensor networks. Second, they provide what physical sensors cannot: by documenting personal observations and experiences, they directly record the impact of a hazard on the human environment. For this reason, interpretation of the content (e.g., hashtags, images, text, emojis, etc.) and metadata (e.g., keywords, tags, geolocation) has been a focus of much research into social media analytics. However, as choices of semantic tags in the current methods are usually reduced to the exact name or type of the event (e.g., hashtags ‘#Sandy’ or ‘#flooding’), the main limitation of such approaches remains their mere nowcasting capacity. In this study we make use of polysemous tags of images posted during several recent flood events and demonstrate how such volunteered geographic data can be used to provide early warning of an event before its outbreak. PMID:28235035

  13. The element of naturalness when evaluating image quality of digital photo documentation after sexual assault.

    PubMed

    Ernst, E J; Speck, P M; Fitzpatrick, J J

    2012-01-01

    Digital photography is a valuable adjunct to document physical injuries after sexual assault. In order for a digital photograph to have high image quality, there must exist a high level of naturalness. Digital photo documentation has varying degrees of naturalness; however, for a photograph to be natural, specific technical elements for the viewer must be satisfied. No tool was available to rate the naturalness of digital photo documentation of female genital injuries after sexual assault. The Photo Documentation Image Quality Scoring System (PDIQSS) tool was developed to rate technical elements for naturalness. Using this tool, experts evaluated randomly selected digital photographs of female genital injuries captured following sexual assault. Naturalness of female genital injuries following sexual assault was demonstrated when measured in all dimensions.

  14. Kepler Fine Guidance Sensor Data

    NASA Technical Reports Server (NTRS)

    Van Cleve, Jeffrey; Campbell, Jennifer Roseanna

    2017-01-01

    The Kepler and K2 missions collected Fine Guidance Sensor (FGS) data in addition to the science data, as discussed in the Kepler Instrument Handbook (KIH, Van Cleve and Caldwell 2016). The FGS CCDs are frame transfer devices (KIH Table 7) located in the corners of the Kepler focal plane (KIH Figure 24), which are read out 10 times every second. The FGS data are being made available to the user community for scientific analysis as flux and centroid time series, along with a limited number of FGS full-frame images which may be useful for constructing a World Coordinate System (WCS) or otherwise putting the time series data in context. This document describes the data content and file format, and gives example MATLAB scripts to read the time series. Three file types are delivered as the FGS data: (1) Flux and Centroid (FLC) data, time series of star signal and centroid data; (2) Ancillary FGS Reference (AFR) data, a catalog of information about the observed stars in the FLC data; and (3) FGS Full-Frame Image (FGI) data, full-frame image snapshots of the FGS CCDs.

  15. Supporting the education evidence portal via text mining

    PubMed Central

    Ananiadou, Sophia; Thompson, Paul; Thomas, James; Mu, Tingting; Oliver, Sandy; Rickinson, Mark; Sasaki, Yutaka; Weissenbacher, Davy; McNaught, John

    2010-01-01

    The UK Education Evidence Portal (eep) provides a single, searchable, point of access to the contents of the websites of 33 organizations relating to education, with the aim of revolutionizing work practices for the education community. Use of the portal alleviates the need to spend time searching multiple resources to find relevant information. However, the combined content of the websites of interest is still very large (over 500 000 documents and growing). This means that searches using the portal can produce very large numbers of hits. As users often have limited time, they would benefit from enhanced methods of performing searches and viewing results, allowing them to drill down to information of interest more efficiently, without having to sift through potentially long lists of irrelevant documents. The Joint Information Systems Committee (JISC)-funded ASSIST project has produced a prototype web interface to demonstrate the applicability of integrating a number of text-mining tools and methods into the eep, to facilitate an enhanced searching, browsing and document-viewing experience. New features include automatic classification of documents according to a taxonomy, automatic clustering of search results according to similar document content, and automatic identification and highlighting of key terms within documents. PMID:20643679
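
    A minimal sketch of the result-clustering feature, assuming search hits are grouped by TF-IDF similarity with k-means; the documents, cluster count, and vectorizer settings are placeholders rather than the project's actual pipeline.

        from sklearn.cluster import KMeans
        from sklearn.feature_extraction.text import TfidfVectorizer

        hits = [
            "classroom behaviour management strategies",
            "managing pupil behaviour in class",
            "school funding policy reform",
            "education budget and funding policy",
        ]
        tfidf = TfidfVectorizer(stop_words="english").fit_transform(hits)
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tfidf)
        print(labels)   # hits with similar content receive the same cluster label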

  16. Leveraging Text Content for Management of Construction Project Documents

    ERIC Educational Resources Information Center

    Alqady, Mohammed

    2012-01-01

    The construction industry is a knowledge intensive industry. Thousands of documents are generated by construction projects. Documents, as information carriers, must be managed effectively to ensure successful project management. The fact that a single project can produce thousands of documents and that a lot of the documents are generated in a…

  17. Document image cleanup and binarization

    NASA Astrophysics Data System (ADS)

    Wu, Victor; Manmatha, Raghavan

    1998-04-01

    Image binarization is a difficult task for documents with text over textured or shaded backgrounds, poor contrast, and/or considerable noise. Current optical character recognition (OCR) and document analysis technology does not handle such documents well. We have developed a simple yet effective algorithm for document image clean-up and binarization. The algorithm consists of two basic steps. In the first step, the input image is smoothed using a low-pass filter. The smoothing operation enhances the text relative to any background texture. This is because background texture normally has higher frequency than text does. The smoothing operation also removes speckle noise. In the second step, the intensity histogram of the smoothed image is computed and a threshold automatically selected as follows. For black text, the first peak of the histogram corresponds to text. Thresholding the image at the value of the valley between the first and second peaks of the histogram binarizes the image well. In order to reliably identify the valley, the histogram is smoothed by a low-pass filter before the threshold is computed. The algorithm has been applied to some 50 images from a wide variety of sources: digitized video frames, photos, newspapers, advertisements in magazines or sales flyers, personal checks, etc. There are 21820 characters and 4406 words in these images. 91 percent of the characters and 86 percent of the words are successfully cleaned up and binarized. A commercial OCR was applied to the binarized text when it consisted of OCR-recognizable fonts. The recognition rate was 84 percent for the characters and 77 percent for the words.
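
    A minimal sketch of the two-step algorithm for dark text, assuming at least two histogram peaks (text and background); the filter widths are invented parameters, not the authors' values.

        import numpy as np
        from scipy.ndimage import gaussian_filter, gaussian_filter1d

        def binarize(gray):
            smoothed = gaussian_filter(gray.astype(float), sigma=2)   # step 1: low-pass
            hist, _ = np.histogram(smoothed, bins=256, range=(0, 255))
            hist = gaussian_filter1d(hist.astype(float), sigma=3)     # smooth histogram
            # Local maxima of the smoothed histogram; the first peak is text.
            peaks = [i for i in range(1, 255)
                     if hist[i] > hist[i - 1] and hist[i] >= hist[i + 1]]
            first, second = peaks[0], peaks[1]
            valley = first + int(np.argmin(hist[first:second + 1]))   # step 2: threshold
            # Pixels above the valley become white background; text maps to black.
            return np.where(smoothed > valley, 255, 0).astype(np.uint8)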

  18. Contrast in Terahertz Images of Archival Documents—Part I: Influence of the Optical Parameters from the Ink and Support

    NASA Astrophysics Data System (ADS)

    Bardon, Tiphaine; May, Robert K.; Jackson, J. Bianca; Beentjes, Gabriëlle; de Bruin, Gerrit; Taday, Philip F.; Strlič, Matija

    2017-04-01

    This study aims to objectively inform curators when terahertz time-domain (TD) imaging set in reflection mode is likely to give well-contrasted images of inscriptions in a complex archival document and is a useful non-invasive alternative to current digitisation processes. To this end, the dispersive refractive indices and absorption coefficients of various archival materials are assessed and their influence on contrast in terahertz images of historical documents is explored. Sepia ink and inks produced with bistre or verdigris mixed with a solution of Arabic gum or rabbit skin glue are unlikely to lead to well-contrasted images. However, dispersions of bone black, ivory black, iron gall ink, malachite, lapis lazuli, minium and vermilion are likely to lead to well-contrasted images. Inscriptions written with lamp black, carbon black and graphite give the best imaging results. The characteristic spectral signatures of iron gall ink, minium and vermilion pellets between 5 and 100 cm-1 relate to a ringing effect at late collection times in TD waveforms transmitted through these pellets. The same ringing effect can be probed in waveforms reflected from iron gall, minium and vermilion ink deposits at the surface of a document. Since the TD waveforms collected for each scanning pixel can be Fourier-transformed into spectral information, terahertz TD imaging in reflection mode can serve as a hyperspectral imaging tool. However, chemical recognition and mapping of the ink are currently limited by the fact that the morphology of the document influences the terahertz spectral response more than the resonant behaviour of the ink does.

  19. Document authentication at molecular levels using desorption atmospheric pressure chemical ionization mass spectrometry imaging.

    PubMed

    Li, Ming; Jia, Bin; Ding, Liying; Hong, Feng; Ouyang, Yongzhong; Chen, Rui; Zhou, Shumin; Chen, Huanwen; Fang, Xiang

    2013-09-01

    Molecular images of documents were obtained by sequentially scanning the surface of the document using desorption atmospheric pressure chemical ionization mass spectrometry (DAPCI-MS), which was operated in either a gasless, solvent-free or methanol vapor-assisted mode. The decay process of the ink used for handwriting was monitored by following the signal intensities recorded by DAPCI-MS. Handwriting made using four types of inks on four kinds of paper surfaces was tested. By studying the dynamic decay of the inks, DAPCI-MS imaging differentiated a 10-min-old sample from two 4-h-old samples. Non-destructive forensic analysis of forged signatures, either handwritten or computer-assisted, was achieved according to differences in the contours of the DAPCI images, which were attributed to the writing strength characteristic of each writer. Distinction of the order of writing/stamping on documents and detection of illegal printings were accomplished with a spatial resolution of about 140 µm. A Matlab® program was developed to facilitate the visualization of the similarity between signature images obtained by DAPCI-MS. The experimental results show that DAPCI-MS imaging provides rich information at the molecular level and thus can be used for reliable document analysis in forensic applications. © 2013 The Authors. Journal of Mass Spectrometry published by John Wiley & Sons, Ltd.

  20. [The procedure for documentation of digital images in forensic medical histology].

    PubMed

    Putintsev, V A; Bogomolov, D V; Fedulova, M V; Gribunov, Iu P; Kul'bitskiĭ, B N

    2012-01-01

    This paper is devoted to the novel computer technologies employed in the study of histological preparations. These technologies make it possible to visualize digital images, structure the data obtained and store the results in computer memory. The authors emphasize the necessity of properly documenting digital images obtained during forensic-histological studies and propose a procedure for preparing electronic documents in conformity with the relevant technical and legal requirements. It is concluded that the use of digital images as a new study object makes it possible to obviate the drawbacks inherent in working with traditional preparations and to pass from descriptive microscopy to quantitative analysis.

  1. Performance evaluation methodology for historical document image binarization.

    PubMed

    Ntirogiannis, Konstantinos; Gatos, Basilis; Pratikakis, Ioannis

    2013-02-01

    Document image binarization is of great importance in the document image analysis and recognition pipeline since it affects further stages of the recognition process. The evaluation of a binarization method aids in studying its algorithmic behavior, as well as verifying its effectiveness, by providing qualitative and quantitative indication of its performance. This paper addresses a pixel-based binarization evaluation methodology for historical handwritten/machine-printed document images. In the proposed evaluation scheme, the recall and precision evaluation measures are properly modified using a weighting scheme that diminishes any potential evaluation bias. Additional performance metrics of the proposed evaluation scheme consist of the percentage rates of broken and missed text, false alarms, background noise, character enlargement, and merging. Several experiments conducted in comparison with other pixel-based evaluation measures demonstrate the validity of the proposed evaluation scheme.
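
    For reference, the unweighted pixel-based measures that the paper's weighting scheme refines can be written down directly; a minimal sketch (the paper's bias-reducing weights are omitted here):

    ```python
    import numpy as np

    def pixel_recall_precision(result, ground_truth):
        """Plain pixel-based recall/precision for a binarization result,
        where foreground (text) pixels are nonzero in both images."""
        fg, det = ground_truth > 0, result > 0
        tp = np.logical_and(fg, det).sum()        # correctly detected text pixels
        recall = tp / max(fg.sum(), 1)
        precision = tp / max(det.sum(), 1)
        return recall, precision
    ```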

  2. Visible Light Image-Based Method for Sugar Content Classification of Citrus

    PubMed Central

    Wang, Xuefeng; Wu, Chunyan; Hirafuji, Masayuki

    2016-01-01

    Visible light imaging of citrus fruit from Mie Prefecture of Japan was performed to determine whether an algorithm could be developed to predict the sugar content. This nondestructive classification showed that accurate segmentation of different images can be realized by a correlation analysis based on a threshold value of the coefficient of determination. There is an obvious correlation between the sugar content of citrus fruit and certain parameters of the color images. The selected image parameters were combined by an addition algorithm, and the sugar content of citrus fruit was predicted by the dummy variable method. The results showed that the small but orange citrus fruits often have a high sugar content. The study shows that it is possible to predict the sugar content of citrus fruit and to classify fruit by sugar content using light in the visible spectrum, without the need for an additional light source. PMID:26811935

  3. A content analysis of thinspiration images and text posts on Tumblr.

    PubMed

    Wick, Madeline R; Harriger, Jennifer A

    2018-03-01

    Thinspiration is content advocating extreme weight loss by means of images and/or text posts. While past content analyses have examined thinspiration content on social media and other websites, no research to date has examined thinspiration content on Tumblr. Over the course of a week, 222 images and text posts were collected after entering the keyword 'thinspiration' into the Tumblr search bar. These images were then rated on a variety of characteristics. The majority of thinspiration images included a thin woman adhering to culturally based beauty, often posing in a manner that accentuated her thinness or sexuality. The most common themes for thinspiration text posts included dieting/restraint, weight loss, food guilt, and body guilt. The thinspiration content on Tumblr appears to be consistent with that on other mediums. Future research should utilize experimental methods to examine the potential effects of consuming thinspiration content on Tumblr. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Computer software documentation

    NASA Technical Reports Server (NTRS)

    Comella, P. A.

    1973-01-01

    A tutorial in the documentation of computer software is presented. It presents a methodology for achieving an adequate level of documentation as a natural outgrowth of the total programming effort commencing with the initial problem statement and definition and terminating with the final verification of code. It discusses the content of adequate documentation, the necessity for such documentation and the problems impeding achievement of adequate documentation.

  5. Variation in printed handoff documents: Results and recommendations from a multicenter needs assessment.

    PubMed

    Rosenbluth, Glenn; Bale, James F; Starmer, Amy J; Spector, Nancy D; Srivastava, Rajendu; West, Daniel C; Sectish, Theodore C; Landrigan, Christopher P

    2015-08-01

    Handoffs of patient care are a leading root cause of medical errors. Standardized techniques exist to minimize miscommunications during verbal handoffs, but studies to guide standardization of printed handoff documents are lacking. Our objectives were to determine whether variability exists in the content of printed handoff documents and to identify key data elements that should be uniformly included in these documents. The setting comprised pediatric hospitalist services at 9 institutions in the United States and Canada. Sample handoff documents from each institution were reviewed, and structured group interviews were conducted to understand each institution's priorities for written handoffs. An expert panel reviewed all handoff documents and structured group-interview findings, and subsequently made consensus-based recommendations for data elements that were either essential or recommended, including best overall printed handoff practices. Nine sites completed structured group interviews and submitted data. We identified substantial variation in both the structure and content of printed handoff documents. Only 4 of 23 possible data elements (17%) were uniformly present in all sites' handoff documents. The expert panel recommended the following as essential for all printed handoffs: assessment of illness severity, patient summary, action items, situation awareness and contingency plans, allergies, medications, age, weight, date of admission, and patient and hospital service identifiers. Code status and several other elements were also recommended. Wide variation exists in the content of printed handoff documents. Standardizing printed handoff documents has the potential to decrease omissions of key data during patient care transitions, which may decrease the risk of downstream medical errors. © 2015 Society of Hospital Medicine.

  6. X3DOM as Carrier of the Virtual Heritage

    NASA Astrophysics Data System (ADS)

    Jung, Y.; Behr, J.; Graf, H.

    2011-09-01

    Virtual Museums (VM) are a new model of communication that aims at creating a personalized, immersive, and interactive way to enhance our understanding of the world around us. The term "VM" is shorthand that encompasses various types of digital creations. One de-facto standard carrier for communicating virtual heritage on the future internet is the browser front-end presenting the content and assets of museums. A major driving technology for the documentation and presentation of heritage-driven media is real-time 3D content, which imposes new strategies for web inclusion. 3D content must become a first-class web media type that can be created, modified, and shared in the same way as text, images, audio and video are handled on the web right now. A new integration model based on a DOM integration into the web browsers' architecture opens up new possibilities for declarative 3D content on the web and paves the way for new application scenarios for the virtual heritage at future internet level. With special regard to the X3DOM project as an enabling technology for declarative 3D in HTML, this paper describes application scenarios and analyses the technological requirements for an efficient presentation and manipulation of virtual heritage assets on the web.

  7. Electronic Imaging in Admissions, Records & Financial Aid Offices.

    ERIC Educational Resources Information Center

    Perkins, Helen L.

    Over the years, efforts have been made to work more efficiently with the ever increasing number of records and paper documents that cross workers' desks. Filing records on optical disk through electronic imaging is an alternative that many feel is the answer to successful document management. The pioneering efforts in electronic imaging in…

  8. Supporting document for the historical tank content estimate for AX-tank farm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brevick, C.H., Westinghouse Hanford

    This Supporting Document provides historical in-depth characterization information on AX-Tank Farm, such as historical waste transfer and level data, tank physical information, temperature plots, liquid observation well plots, and chemical analyte and radionuclide inventories, for the Historical Tank Content Estimate Report for the northeast quadrant of the Hanford 200 East Area.

  9. Analysis of Documents Published in Scopus Database on Foreign Language Learning through Mobile Learning: A Content Analysis

    ERIC Educational Resources Information Center

    Uzunboylu, Huseyin; Genc, Zeynep

    2017-01-01

    The purpose of this study is to determine the recent trends in foreign language learning through mobile learning. The study was conducted employing document analysis and related content analysis among the qualitative research methodology. Through the search conducted on Scopus database with the key words "mobile learning and foreign language…

  10. 47 CFR 1.913 - Application and notification forms; electronic and manual filing.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Portable Document Format (PDF) whenever possible. (2) Any associated documents submitted with an... possible. The attachment should be uploaded via ULS in Adobe Acrobat Portable Document Format (PDF... the table of contents, should be in Adobe Acrobat Portable Document Format (PDF) whenever possible...

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koeller, E.; Dobmann, G.; Kuhn, W.

    Initial results are presented on the application of NMR techniques to prepregs in order to characterize the cross-link state under exposure to room and elevated (50 C) temperature. The experiments were conducted with a MSL-400 Bruker NMR spectrometer and microimaging system which works at 400 MHz. Aside from the sensitive measurement of the cross-link density, there is also the potential to separate the influence of moisture content as a further parameter contributing to the aging process. It is shown that these experimental results correlate with results of destructive tests and document the potential of NMR as a NDT tool. An NMR image of the moisture distribution in a glassfiber-reinforced epoxy resin sample is shown. 17 refs.

  12. Wavelet domain textual coding of Ottoman script images

    NASA Astrophysics Data System (ADS)

    Gerek, Oemer N.; Cetin, Enis A.; Tewfik, Ahmed H.

    1996-02-01

    Image coding using the wavelet transform, DCT, and similar transform techniques is well established. On the other hand, these coding methods neither take into account the special characteristics of the images in a database nor are they suitable for fast database search. In this paper, the digital archiving of Ottoman printings is considered. Ottoman documents are printed in Arabic letters. Witten et al. describe a scheme based on finding the characters in binary document images and encoding the positions of the repeated characters. This method efficiently compresses document images and is suitable for database search, but it cannot be applied to Ottoman or Arabic documents, as the concept of character is different in Ottoman and Arabic. Typically, one has to deal with compound structures consisting of a group of letters, so the matching criterion must operate on those compound structures. Furthermore, the text images are gray-tone or color images for Ottoman scripts, for reasons that are described in the paper. In our method the compound-structure matching is carried out in the wavelet domain, which reduces the search space and increases the compression ratio. In addition to the wavelet transformation, which corresponds to a linear subband decomposition, we also used a nonlinear subband decomposition. The filters in the nonlinear subband decomposition have the property of preserving edges in the low-resolution subband image.

  13. Content dependent selection of image enhancement parameters for mobile displays

    NASA Astrophysics Data System (ADS)

    Lee, Yoon-Gyoo; Kang, Yoo-Jin; Kim, Han-Eol; Kim, Ka-Hee; Kim, Choon-Woo

    2011-01-01

    Mobile devices such as cellular phones and portable multimedia players capable of playing terrestrial digital multimedia broadcasting (T-DMB) contents have been introduced into the consumer market. In this paper, a content-dependent image quality enhancement method covering sharpness, colorfulness, and noise reduction is presented to improve perceived image quality on mobile displays. Human visual experiments are performed to analyze viewers' preferences. The relationship between the objective measures and the optimal values of the image control parameters is modeled by simple lookup tables based on the results of the human visual experiments. Content-dependent values of the image control parameters are then determined from the calculated measures and the predetermined lookup tables. Experimental results indicate that dynamic selection of image control parameters yields better image quality.

  14. Author name recognition in degraded journal images

    NASA Astrophysics Data System (ADS)

    de Bodard de la Jacopière, Aliette; Likforman-Sulem, Laurence

    2006-01-01

    A method for extracting names from degraded documents is presented in this article. The documents targeted are images of photocopied scientific journals from various scientific domains. Due to the degradation, OCR accuracy is poor, and fragments of other articles appear at the sides of the image. The proposed approach relies on the combination of a low-level textual analysis and an image-based analysis. The textual analysis extracts robust typographic features, while the image analysis selects image regions of interest through anchor components. We report results on the University of Washington benchmark database.

  15. us9805_latilt

    Science.gov Websites

    [HTML metadata residue from the source web page; the only recoverable content is the page title: "Alaska Solar Resource: Flat Plate Collector, Facing…"]

  16. Math: Basic Skills Content Standards

    ERIC Educational Resources Information Center

    CASAS - Comprehensive Adult Student Assessment Systems (NJ1), 2008

    2008-01-01

    This document presents content standards tables for math. [CASAS content standards tables are designed for educators at national, state and local levels to inform the alignment of content standards, instruction and assessment. The Content Standards along with the CASAS Competencies form the basis of the CASAS integrated assessment and curriculum…

  17. Degraded document image enhancement

    NASA Astrophysics Data System (ADS)

    Agam, G.; Bal, G.; Frieder, G.; Frieder, O.

    2007-01-01

    Poor quality documents are obtained in various situations such as historical document collections, legal archives, security investigations, and documents found in clandestine locations. Such documents are often scanned for automated analysis, further processing, and archiving. Due to the nature of such documents, degraded document images are often hard to read, have low contrast, and are corrupted by various artifacts. We describe a novel approach for the enhancement of such documents based on probabilistic models which increases the contrast, and thus, readability of such documents under various degradations. The enhancement produced by the proposed approach can be viewed under different viewing conditions if desired. The proposed approach was evaluated qualitatively and compared to standard enhancement techniques on a subset of historical documents obtained from the Yad Vashem Holocaust museum. In addition, quantitative performance was evaluated based on synthetically generated data corrupted under various degradation models. Preliminary results demonstrate the effectiveness of the proposed approach.

  18. Identification needs in developing, documenting, and indexing WSDOT photographs : research report, February 2010.

    DOT National Transportation Integrated Search

    2010-02-01

    Over time, the Department of Transportation has accumulated image collections, which document important aspects of the transportation infrastructure in the Pacific Northwest, project status and construction details. These images range from paper ...

  19. Structured Forms Reference Set of Binary Images (SFRS)

    National Institute of Standards and Technology Data Gateway

    NIST Structured Forms Reference Set of Binary Images (SFRS) (Web, free access)   The NIST Structured Forms Database (Special Database 2) consists of 5,590 pages of binary, black-and-white images of synthesized documents. The documents in this database are 12 different tax forms from the IRS 1040 Package X for the year 1988.

  20. Detection of text strings from mixed text/graphics images

    NASA Astrophysics Data System (ADS)

    Tsai, Chien-Hua; Papachristou, Christos A.

    2000-12-01

    A robust system for separating text strings from mixed text/graphics images is presented. Based on a union-find (region-growing) strategy, the algorithm classifies text apart from graphics and adapts to changes in document type, language category (e.g., English, Chinese and Japanese), text font style and size, and text string orientation within digital images. In addition, it tolerates the document skew that usually occurs in scanned documents, without requiring skew correction prior to discrimination, whereas methods such as projection profiles or run-length coding are not always suitable under such conditions. The method has been tested with a variety of printed documents from different origins using one common set of parameters, and experimental results demonstrate the algorithm's accuracy and computational efficiency on several test images from the evaluation.
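
    The union-find structure underlying the region-growing step is standard; a minimal sketch (the paper's actual merge criteria for pixels/components are not reproduced here):

    ```python
    class DisjointSet:
        """Union-find with path halving, the backbone of region growing:
        adjacent foreground elements are merged into one region."""
        def __init__(self, n):
            self.parent = list(range(n))

        def find(self, i):
            while self.parent[i] != i:
                self.parent[i] = self.parent[self.parent[i]]  # path halving
                i = self.parent[i]
            return i

        def union(self, a, b):
            ra, rb = self.find(a), self.find(b)
            if ra != rb:
                self.parent[rb] = ra

    # Usage idea: ds = DisjointSet(h * w); call ds.union(p, q) for every pair
    # of 4-connected foreground pixels p, q; regions are the resulting roots.
    ```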

  1. LANDSAT-D accelerated payload correction subsystem output computer compatible tape format

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The NASA GSFC LANDSAT-D Ground Segment (GS) is developing an Accelerated Payload Correction Subsystem (APCS) to provide Thematic Mapper (TM) image correction data to be used outside the GS. This correction data is computed from a subset of the TM Payload Correction Data (PCD), which is downlinked from the spacecraft in a 32 Kbps data stream, and mirror scan correction data (MSCD), which is extracted from the wideband video data. This correction data is generated in the GS Thematic Mapper Mission Management Facility (MMF-T), and is recorded on a 9-track 1600 bit per inch computer compatible tape (CCT). This CCT is known as an APCS Output CCT (AOT). The AOT follows standardized conventions with respect to data formats, record construction and record identification. Applicable documents are delineated; common conventions which are used in further defining the structure, format and content of the AOT are defined; and the structure and content of the AOT are described.

  2. A parieto-medial temporal pathway for the strategic control over working memory biases in human visual attention.

    PubMed

    Soto, David; Greene, Ciara M; Kiyonaga, Anastasia; Rosenthal, Clive R; Egner, Tobias

    2012-12-05

    The contents of working memory (WM) can both aid and disrupt the goal-directed allocation of visual attention. WM benefits attention when its contents overlap with goal-relevant stimulus features, but WM leads attention astray when its contents match features of currently irrelevant stimuli. Recent behavioral data have documented that WM biases of attention may be subject to strategic cognitive control processes whereby subjects are able to either enhance or inhibit the influence of WM contents on attention. However, the neural mechanisms supporting cognitive control over WM biases on attention are presently unknown. Here, we characterize these mechanisms by combining human functional magnetic resonance imaging with a task that independently manipulates the relationship between WM cues and attention targets during visual search (with WM contents matching either search targets or distracters), as well as the predictability of this relationship (100 vs 50% predictability) to assess participants' ability to strategically enhance or inhibit WM biases on attention when WM contents reliably matched targets or distracter stimuli, respectively. We show that cues signaling predictable (> unpredictable) WM-attention relations reliably enhanced search performance, and that this strategic modulation of the interplay between WM contents and visual attention was mediated by a neuroanatomical network involving the posterior parietal cortex, the posterior cingulate, and medial temporal lobe structures, with responses in the hippocampus proper correlating with behavioral measures of strategic control of WM biases. Thus, we delineate a novel parieto-medial temporal pathway implementing cognitive control over WM biases to optimize goal-directed selection.

  3. The Effects of Topic Familiarity, Author Expertise, and Content Relevance on Norwegian Students' Document Selection: A Mixed Methods Study

    ERIC Educational Resources Information Center

    McCrudden, Matthew T.; Stenseth, Tonje; Bråten, Ivar; Strømsø, Helge I.

    2016-01-01

    This mixed methods study investigated the extent to which author expertise and content relevance were salient to secondary Norwegian students (N = 153) when they selected documents that pertained to more familiar and less familiar topics. Quantitative results indicated that author expertise was more salient for the less familiar topic (nuclear…

  4. Using XML to Separate Content from the Presentation Software in eLearning Applications

    ERIC Educational Resources Information Center

    Merrill, Paul F.

    2005-01-01

    This paper has shown how XML (Extensible Markup Language) can be used to mark up content. Since XML documents, with meaningful tags, can be interpreted easily by humans as well as computers, they are ideal for the interchange of information. Because XML tags can be defined by an individual or organization, XML documents have proven useful in a…

  5. A content analysis of Health Technology Assessment programs in Latin America.

    PubMed

    Arellano, Luis E; Reza, Mercedes; Blasco, Juan Antonio; Andradas, Elena

    2009-10-01

    Health Technology Assessment (HTA) is a relatively new concept in Latin America (LA). The objectives of this exploratory study were to identify HTA programs in LA, review HTA documents produced by those programs, and assess the extent to which HTA aims are being achieved. An electronic search through two databases was performed to identify HTA programs in LA. A content analysis was performed on HTA documents (n = 236) produced by six programs between January 2000 and March 2007. Results were analyzed by comparing document content with the main goals of HTA. The number of HTA documents increased incrementally during the study period. The documents produced were mostly short HTA documents (82 percent) that assessed technologies such as drugs (31 percent), diagnostic and/or screening technologies (18 percent), or medical procedures (18 percent). Two-thirds (66 percent) of all HTA documents addressed issues related to clinical effectiveness and economic evaluations. Ethical, social, and/or legal issues were rarely addressed (<1 percent). The two groups most often targeted for dissemination of HTA information were third-party payers (55 percent) or government policy makers (41 percent). This study showed that while HTA programs in LA have attempted to address the main goals of HTA, they have done so through the production of short documents that focus on practical high-technology areas of importance to two specific target groups. Clinical and economic considerations still take precedence over ethical, social, and/or legal issues. Thus, an integrated conceptual framework in LA is wanting.

  6. Toward privacy-preserving JPEG image retrieval

    NASA Astrophysics Data System (ADS)

    Cheng, Hang; Wang, Jingyue; Wang, Meiqing; Zhong, Shangping

    2017-07-01

    This paper proposes a privacy-preserving retrieval scheme for JPEG images based on local variance. Three parties are involved in the scheme: the content owner, the server, and the authorized user. The content owner encrypts JPEG images for privacy protection by jointly using permutation cipher and stream cipher, and then, the encrypted versions are uploaded to the server. With an encrypted query image provided by an authorized user, the server may extract blockwise local variances in different directions without knowing the plaintext content. After that, it can calculate the similarity between the encrypted query image and each encrypted database image by a local variance-based feature comparison mechanism. The authorized user with the encryption key can decrypt the returned encrypted images with plaintext content similar to the query image. The experimental results show that the proposed scheme not only provides effective privacy-preserving retrieval service but also ensures both format compliance and file size preservation for encrypted JPEG images.
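
    The key observation is that a statistic such as the variance of pixel values inside a block is unchanged when pixel positions within that block are permuted, so the server can compute it without decrypting. An illustrative sketch of blockwise variance features (not the paper's exact directional features or similarity measure):

    ```python
    import numpy as np

    def blockwise_variance(img, block=8):
        """Per-block pixel variance; invariant to permutations inside a block."""
        h, w = (d - d % block for d in img.shape)
        tiles = img[:h, :w].reshape(h // block, block, w // block, block)
        return tiles.transpose(0, 2, 1, 3).reshape(-1, block * block).var(axis=1)

    def similarity(f1, f2, bins=32):
        """Negative L1 distance between variance histograms (shared bin range)."""
        top = max(f1.max(), f2.max()) + 1e-9
        h1, _ = np.histogram(f1, bins=bins, range=(0, top), density=True)
        h2, _ = np.histogram(f2, bins=bins, range=(0, top), density=True)
        return -np.abs(h1 - h2).sum()
    ```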

  7. Automatic image enhancement based on multi-scale image decomposition

    NASA Astrophysics Data System (ADS)

    Feng, Lu; Wu, Zhuangzhi; Pei, Luo; Long, Xiong

    2014-01-01

    In image processing and computational photography, automatic image enhancement is one of the long-range objectives. Recent automatic image enhancement methods take account not only of global semantics, such as correcting color hue and brightness imbalances, but also of the local content of the image, such as human faces or the sky in a landscape. In this paper we describe a new scheme for automatic image enhancement that considers both the global semantics and the local content of the image. Our method employs a multi-scale edge-aware image decomposition to detect underexposed regions and enhance the detail of the salient content. The experimental results demonstrate the effectiveness of our approach compared to existing automatic enhancement methods.
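
    As a rough illustration of the base/detail idea (using plain Gaussian filtering rather than the paper's edge-aware operator; the scales and gains are arbitrary assumptions):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def multiscale_enhance(gray, sigmas=(2, 8), gains=(1.5, 1.2)):
        """Split the image into a coarse base plus detail layers,
        then recombine with amplified details."""
        residual, details = gray.astype(float), []
        for s in sigmas:
            base = gaussian_filter(residual, s)
            details.append(residual - base)       # detail lost at this scale
            residual = base                       # keep coarsening
        out = residual + sum(g * d for g, d in zip(gains, details))
        return np.clip(out, 0, 255).astype(np.uint8)
    ```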

  8. Advance Care Planning Documentation in Electronic Health Records: Current Challenges and Recommendations for Change.

    PubMed

    Lamas, Daniela; Panariello, Natalie; Henrich, Natalie; Hammes, Bernard; Hanson, Laura C; Meier, Diane E; Guinn, Nancy; Corrigan, Janet; Hubber, Sean; Luetke-Stahlman, Hannah; Block, Susan

    2018-04-01

    Our objective was to develop a set of clinically relevant recommendations to improve the state of advance care planning (ACP) documentation in the electronic health record (EHR). ACP is a key process that supports goal-concordant care. For preferences to be honored, clinicians must be able to reliably record, find, and use ACP documentation. However, there are no standards to guide ACP documentation in the EHR. We interviewed 21 key informants to understand the strengths and weaknesses of EHR documentation systems for ACP and to identify best practices. We analyzed these interviews using a qualitative content analysis approach and subsequently developed a preliminary set of recommendations. These recommendations were vetted and refined in a second round of input from a national panel of content experts. Informants identified six themes regarding current inadequacies in the documentation and accessibility of ACP information and opportunities for improvement. We offer a set of concise, clinically relevant recommendations, informed by expert opinion, to improve the state of ACP documentation in the EHR.

  9. Interactive visual comparison of multimedia data through type-specific views

    NASA Astrophysics Data System (ADS)

    Burtner, Russ; Bohn, Shawn; Payne, Debbie

    2013-01-01

    Analysts who work with collections of multimedia to perform information foraging understand how difficult it is to connect information across diverse sets of mixed media. The wealth of information from blogs, social media, and news sites often can provide actionable intelligence; however, many of the tools used on these sources of content are not capable of multimedia analysis because they only analyze a single media type. As such, analysts are taxed to keep a mental model of the relationships among each of the media types when generating the broader content picture. To address this need, we have developed Canopy, a novel visual analytic tool for analyzing multimedia. Canopy provides insight into the multimedia data relationships by exploiting the linkages found in text, images, and video co-occurring in the same document and across the collection. Canopy connects derived and explicit linkages and relationships through multiple connected visualizations to aid analysts in quickly summarizing, searching, and browsing collected information to explore relationships and align content. In this paper, we will discuss the features and capabilities of the Canopy system and walk through a scenario illustrating how this system might be used in an operational environment.

  10. 42 CFR 486.346 - Condition: Organ preparation and transport.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... with the identification number, specific contents, and donor's blood type. ... complete documentation of donor information to the transplant center with the organ, including donor evaluation, the complete record of the donor's management, documentation of consent, documentation of the...

  11. "torino 1911" Project: a Contribution of a Slam-Based Survey to Extensive 3d Heritage Modeling

    NASA Astrophysics Data System (ADS)

    Chiabrando, F.; Della Coletta, C.; Sammartano, G.; Spanò, A.; Spreafico, A.

    2018-05-01

    In the framework of the digital documentation of complex environments, advanced Geomatics research offers integrated solutions and multi-sensor strategies for the accurate 3D reconstruction of stratified structures and articulated volumes in the heritage domain. The use of handheld devices for rapid mapping, both image- and range-based, can help produce easy-to-use, easily navigable 3D models suitable for documentation projects. These types of reality-based models, with their tailored integration of geometric and radiometric aspects, could support valorisation and communication projects including virtual reconstructions, interactive navigation settings, and immersive reality for dissemination purposes, evoking past places and atmospheres. This research is situated within the "Torino 1911" project, led by the University of San Diego (California) in cooperation with the PoliTo. The project is conceived for the multi-scale reconstruction of the real and no-longer-existing structures across the whole park space of more than 400,000 m2, for a virtual and immersive visualization of the Turin 1911 International "Fabulous Exposition" event held in the Valentino Park. In the research presented here, a 3D metric documentation workflow is proposed and validated in order to exploit the potential of LiDAR mapping with a handheld SLAM-based device, the ZEB REVO Real Time instrument by GeoSLAM (2017 release), instead of consolidated TLS systems. Starting from these kinds of models, the crucial aspects of trajectory performance in the 3D reconstruction and of the radiometric content from imaging approaches are considered, specifically through the compared use of common DSLR cameras and portable sensors.

  12. Document clustering methods, document cluster label disambiguation methods, document clustering apparatuses, and articles of manufacture

    DOEpatents

    Sanfilippo, Antonio [Richland, WA; Calapristi, Augustin J [West Richland, WA; Crow, Vernon L [Richland, WA; Hetzler, Elizabeth G [Kennewick, WA; Turner, Alan E [Kennewick, WA

    2009-12-22

    Document clustering methods, document cluster label disambiguation methods, document clustering apparatuses, and articles of manufacture are described. In one aspect, a document clustering method includes providing a document set comprising a plurality of documents, providing a cluster comprising a subset of the documents of the document set, using a plurality of terms of the documents, providing a cluster label indicative of subject matter content of the documents of the cluster, wherein the cluster label comprises a plurality of word senses, and selecting one of the word senses of the cluster label.

  13. Automatic detection of hemorrhagic pericardial effusion on PMCT using deep learning - a feasibility study.

    PubMed

    Ebert, Lars C; Heimer, Jakob; Schweitzer, Wolf; Sieberth, Till; Leipner, Anja; Thali, Michael; Ampanozi, Garyfalia

    2017-12-01

    Post mortem computed tomography (PMCT) can be used as a triage tool to better identify cases with a possibly non-natural cause of death, especially when high caseloads make it impossible to perform autopsies on all cases. Substantial data can be generated by modern medical scanners, especially in a forensic setting where the entire body is documented at high resolution. A solution for the resulting issues could be the use of deep learning techniques for automatic analysis of radiological images. In this article, we wanted to test the feasibility of such methods for forensic imaging by hypothesizing that deep learning methods can detect and segment a hemopericardium in PMCT. For the deep learning image analysis software, we used the ViDi Suite 2.0. We retrospectively selected 28 cases with, and 24 cases without, hemopericardium. Based on these data, we trained two separate deep learning networks. The first one classified images into hemopericardium/not hemopericardium, and the second one segmented the blood content. We randomly selected 50% of the data for training and 50% for validation. This process was repeated 20 times. The best-performing classification network classified all cases of hemopericardium in the validation images correctly, with only a few false positives. The best-performing segmentation network tended to underestimate the amount of blood in the pericardium, as did most of the networks. This is the first study to show that deep learning has potential for automated image analysis of radiological images in forensic medicine.

  14. Color transplant for reverse ageing of faded artworks

    NASA Astrophysics Data System (ADS)

    Del Mastio, A.; Piva, A.; Barni, M.; Cappellini, V.; Stefanini, L.

    2008-02-01

    Nowadays, photographs are one of the most used media for communication. Images are used for the representation of documents, cultural goods, and so on: they are used to pass on a wedge of the historical memory of society. Since its origin, the photographic technique has seen several improvements; nevertheless, photos are liable to several kinds of damage, both to the physical support and to the colors and figures depicted in them: consider, for example, scratches or rips in a photo, or the fading or red (or yellow) toning of a photo's colors. In this paper, we propose a novel method that can restore the original beauty of digital reproductions of aged photos, as well as digital reproductions of faded goods. The method is based on the comparison of the degraded image with a non-degraded one showing similar content; the colors of the non-degraded image can thus be transplanted into the degraded one. The key idea is a dualism between analytical mechanics and color theory: for each of the degraded and non-degraded images we first compute a scatter plot of the x and y normalized coordinates of their colors; these scatter diagrams can be regarded as systems of point masses, and are thus provided with inertia axes and an inertia ellipsoid. By moving the scatter diagram of the degraded image onto the one belonging to the non-degraded image, the colors of the degraded image can be restored.
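
    The mechanics/color dualism can be made concrete: the "inertia axes" of a chromaticity scatter are the eigenvectors of its covariance matrix. A minimal sketch of aligning the two scatters by matching means and inertia axes (a plausible reading of the transplant step, not the authors' exact formulation):

    ```python
    import numpy as np

    def transplant_colors(degraded_xy, reference_xy):
        """Map the degraded chromaticity scatter (N x 2 arrays of normalized
        x, y coordinates) onto the reference scatter by matching means and
        principal (inertia) axes."""
        mu_d, mu_r = degraded_xy.mean(axis=0), reference_xy.mean(axis=0)
        wd, Vd = np.linalg.eigh(np.cov((degraded_xy - mu_d).T))
        wr, Vr = np.linalg.eigh(np.cov((reference_xy - mu_r).T))
        scale = np.sqrt(wr / np.maximum(wd, 1e-12))   # match axis lengths
        A = Vr @ np.diag(scale) @ Vd.T                # rotate, rescale, rotate back
        return (degraded_xy - mu_d) @ A.T + mu_r
    ```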

  15. What's in "Your" File Cabinet? Leveraging Technology for Document Imaging and Storage

    ERIC Educational Resources Information Center

    Flaherty, William

    2011-01-01

    Spotsylvania County Public Schools (SCPS) in Virginia uses a document-imaging solution that leverages the features of a multifunction printer (MFP). An MFP is a printer, scanner, fax machine, and copier all rolled into one. It can scan a document and email it all in one easy step. Software is available that allows the MFP to scan bubble sheets and…

  16. Analysis of a risk prevention document using dependability techniques: a first step towards an effectiveness model

    NASA Astrophysics Data System (ADS)

    Ferrer, Laetitia; Curt, Corinne; Tacnet, Jean-Marc

    2018-04-01

    Major hazard prevention is a central challenge, given that it rests specifically on information communicated to the public. In France, preventive information is notably provided by way of local regulatory documents. Unfortunately, the law gives only a few specifications concerning their content; one can therefore question the impact on the general population of the way the document is actually created. The purpose of our work is thus to propose an analytical methodology to evaluate the effectiveness of preventive risk communication documents. The methodology is based on dependability approaches and is applied in this paper to the Document d'Information Communal sur les Risques Majeurs (DICRIM; in English, Municipal Information Document on Major Risks). The DICRIM has to be produced by mayors and addressed to the public to provide information on major hazards affecting their municipalities. An analysis of the document's compliance with the law is carried out thanks to the identification of regulatory detection elements. These are applied to a database of 30 DICRIMs. This analysis leads to a discussion on points such as the usefulness of the missing elements. External and internal function analysis permits the identification of the form and content requirements and the service and technical functions of the document and its components (here its sections). These results are used to carry out an FMEA (failure modes and effects analysis), which allows us to define failure modes and to identify detection elements. This permits the evaluation of the effectiveness of the form and content of each component of the document. The outputs are validated by experts from the different fields investigated. These results will be used to build, in future work, a decision-support model for the municipality (or the specialised consulting firms) in charge of drawing up such documents.

  17. Testing a Nursing-Specific Model of Electronic Patient Record documentation with regard to information completeness, comprehensiveness and consistency.

    PubMed

    von Krogh, Gunn; Nåden, Dagfinn; Aasland, Olaf Gjerløw

    2012-10-01

    To present the results from the test-site application of the documentation model KPO (quality assurance, problem solving and caring), designed to improve the quality of nursing information in the electronic patient record (EPR). The KPO model was developed by means of a consensus group and clinical testing. Four documentation arenas and eight content categories, nursing terminologies and a decision-support system were designed to improve the completeness, comprehensiveness and consistency of nursing information. The testing was performed in a pre-test/post-test time series design, three times at one-year intervals. Content analysis of the nursing documentation was accomplished through the identification, interpretation and coding of information units. Data from the pre-test and post-test 2 were subjected to statistical analyses. To estimate the differences, paired t-tests were used. At post-test 2, the information was found to be more complete, comprehensive and consistent than at pre-test. The findings indicate that documentation arenas combining work flow and content categories deduced from theories on nursing practice can influence the quality of nursing information. The KPO model can be used as a guide when shifting from paper-based to electronic-based nursing documentation with the aim of obtaining complete, comprehensive and consistent nursing information. © 2012 Blackwell Publishing Ltd.

  18. Fast frequency domain method to detect skew in a document image

    NASA Astrophysics Data System (ADS)

    Mehta, Sunita; Walia, Ekta; Dutta, Maitreyee

    2015-12-01

    In this paper, a new fast frequency-domain method based on the Discrete Wavelet Transform and the Fast Fourier Transform has been implemented for determining the skew angle of a document image. Firstly, image size reduction is done using the two-dimensional Discrete Wavelet Transform, and then the skew angle is computed using the Fast Fourier Transform. The skew angle error is almost negligible. The proposed method was tested on a large number of documents having skew between -90° and +90°, and the results are compared with the Moments with Discrete Wavelet Transform method and other commonly used existing methods. The method was found to work more efficiently than the existing methods. Also, it works with typed and picture documents having different fonts and resolutions, and it overcomes the drawback of the recently proposed Moments with Discrete Wavelet Transform method, which does not work with picture documents.
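
    A minimal sketch of the DWT-then-FFT idea (the wavelet choice, level count and peak-picking rule are assumptions, not the paper's settings): shrink the image with the 2-D DWT, then read the text-line direction off the dominant peak of the Fourier magnitude spectrum:

    ```python
    import numpy as np
    import pywt

    def estimate_skew_deg(gray, levels=2):
        """Estimate document skew from the dominant spectral direction."""
        approx = gray.astype(float)
        for _ in range(levels):
            approx = pywt.dwt2(approx, "haar")[0]     # keep approximation band
        spec = np.abs(np.fft.fftshift(np.fft.fft2(approx - approx.mean())))
        cy, cx = spec.shape[0] // 2, spec.shape[1] // 2
        spec[cy, cx] = 0                              # suppress residual DC term
        y, x = np.unravel_index(np.argmax(spec), spec.shape)
        angle = np.degrees(np.arctan2(y - cy, x - cx))
        # text lines run perpendicular to the dominant frequency direction
        return (angle + 90.0) % 180.0 - 90.0          # fold into [-90, 90)
    ```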

  19. Application of content-based image compression to telepathology

    NASA Astrophysics Data System (ADS)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.

  20. CAED Document Repository

    EPA Pesticide Factsheets

    The Compliance Assurance and Enforcement Division Document Repository (CAEDDOCRESP) provides all CAED staff with internal and external access to Inspection Records, Enforcement Actions, and National Environmental Protection Act (NEPA) documents. The repository will also include supporting documents, images, etc.

  1. Refining Presentation Documents with Presentation Schema

    ERIC Educational Resources Information Center

    Obara, Yuki; Kashihara, Akihiro

    2017-01-01

    Presentation is one of the important activities in research to publish research results. When we create presentation documents (P-documents for short), it is important to compose presentation structure (P-structure for short) that represents what to present and how to sequence the contents. To create proper P-documents, we need to learn how to…

  2. Document boundary determination using structural and lexical analysis

    NASA Astrophysics Data System (ADS)

    Taghva, Kazem; Cartright, Marc-Allen

    2009-01-01

    Document boundary determination is the process of identifying individual documents in a stack of papers. In this paper, we report on a classification system for the automation of this process. The system employs features based on document structure and lexical content. We also report experimental results supporting the effectiveness of this system.

  3. From a Content Delivery Portal to a Knowledge Management System for Standardized Cancer Documentation.

    PubMed

    Schlue, Danijela; Mate, Sebastian; Haier, Jörg; Kadioglu, Dennis; Prokosch, Hans-Ulrich; Breil, Bernhard

    2017-01-01

    Heterogeneous tumor documentation and the challenges of interpreting medical terms lead to problems in the analysis of data from clinical and epidemiological cancer registries. The objective of this project was to design, implement and improve a national content delivery portal for oncological terms. Data elements of existing handbooks and documentation sources were analyzed, combined and summarized by medical experts from different comprehensive cancer centers. Informatics experts created a generic data model based on an existing metadata repository. In order to establish a national knowledge management system for standardized cancer documentation, a prototypical tumor wiki was designed and implemented. Requirements engineering techniques were applied to optimize this platform. It targets user groups such as documentation officers, physicians and patients. Linkage to other information sources like PubMed and MeSH was realized.

  4. End-User Imaging DISKussions.

    ERIC Educational Resources Information Center

    McConnell, Pamela Jean

    1993-01-01

    This third in a series of articles on EDIS (Electronic Document Imaging System) technology focuses on organizational issues. Highlights include computer platforms; management information systems; computer-based skills of staff; new technology and change; time factors; financial considerations; document conversion costs; the benefits of EDIS…

  5. Handwritten text line segmentation by spectral clustering

    NASA Astrophysics Data System (ADS)

    Han, Xuecheng; Yao, Hui; Zhong, Guoqiang

    2017-02-01

    Since handwritten text lines are generally skewed and not obviously separated, text line segmentation of handwritten document images is still a challenging problem. In this paper, we propose a novel text line segmentation algorithm based on spectral clustering. Given a handwritten document image, we first convert it to a binary image and then compute the adjacency matrix of the pixel points. We apply spectral clustering to this similarity matrix and use the orthogonal k-means clustering algorithm to group the text lines. Experiments on a Chinese handwritten document database (HIT-MW) demonstrate the effectiveness of the proposed method.
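
    A compact way to experiment with the idea (scikit-learn's RBF affinity stands in for the paper's pixel adjacency matrix, and the vertical-distance weighting is an assumption):

    ```python
    import numpy as np
    from sklearn.cluster import SpectralClustering

    def segment_text_lines(binary_img, n_lines, y_weight=3.0):
        """Cluster foreground pixels of a binary image into text lines."""
        ys, xs = np.nonzero(binary_img)              # foreground coordinates
        pts = np.column_stack([xs, ys * y_weight]).astype(float)
        model = SpectralClustering(n_clusters=n_lines, affinity="rbf",
                                   assign_labels="kmeans")
        return model.fit_predict(pts)                # one label per pixel
    ```

    Spectral clustering scales poorly with the number of points, so in practice one would subsample the foreground pixels or cluster connected components rather than raw pixels.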

  6. An algorithm for encryption of secret images into meaningful images

    NASA Astrophysics Data System (ADS)

    Kanso, A.; Ghebleh, M.

    2017-03-01

    Image encryption algorithms typically transform a plain image into a noise-like cipher image, whose appearance is an indication of encrypted content. Bao and Zhou [Image encryption: Generating visually meaningful encrypted images, Information Sciences 324, 2015] propose encrypting the plain image into a visually meaningful cover image. This improves security by masking existence of encrypted content. Following their approach, we propose a lossless visually meaningful image encryption scheme which improves Bao and Zhou's algorithm by making the encrypted content, i.e. distortions to the cover image, more difficult to detect. Empirical results are presented to show high quality of the resulting images and high security of the proposed algorithm. Competence of the proposed scheme is further demonstrated by means of comparison with Bao and Zhou's scheme.

  7. Document image archive transfer from DOS to UNIX

    NASA Technical Reports Server (NTRS)

    Hauser, Susan E.; Gill, Michael J.; Thoma, George R.

    1994-01-01

    An R&D division of the National Library of Medicine has developed a prototype system for automated document image delivery as an adjunct to the labor-intensive manual interlibrary loan service of the library. The document image archive is implemented by a PC controlled bank of optical disk drives which use 12 inch WORM platters containing bitmapped images of over 200,000 pages of medical journals. Following three years of routine operation which resulted in serving patrons with articles both by mail and fax, an effort is underway to relocate the storage environment from the DOS-based system to a UNIX-based jukebox whose magneto-optical erasable 5 1/4 inch platters hold the images. This paper describes the deficiencies of the current storage system, the design issues of modifying several modules in the system, the alternatives proposed and the tradeoffs involved.

  8. Content-independent embedding scheme for multi-modal medical image watermarking.

    PubMed

    Nyeem, Hussain; Boles, Wageeh; Boyd, Colin

    2015-02-04

    As the increasing adoption of information technology continues to offer better distant medical services, the distribution of, and remote access to, digital medical images over public networks continues to grow significantly. Such use of medical images raises serious concerns for their continuous security protection, which digital watermarking has shown great potential to address. We present a content-independent embedding scheme for medical image watermarking. We observe that the perceptual content of medical images varies widely with their modalities. Recent medical image watermarking schemes are image-content dependent and thus may suffer from inconsistent embedding capacity and visual artefacts. To attain the image-content-independent embedding property, we generalise the RONI (region of non-interest to medical professionals) selection process and use it for embedding by utilising the RONI's least significant bit-planes. The proposed scheme thus avoids the need for RONI segmentation, which incurs capacity and computational overheads. Our experimental results demonstrate that the proposed embedding scheme performs consistently over a dataset of 370 medical images spanning 7 different modalities. Experimental results also show how state-of-the-art reversible schemes can perform inconsistently across different modalities of medical images. Our scheme has an MSSIM (Mean Structural SIMilarity) larger than 0.999 with a deterministically adaptable embedding capacity. Our proposed image-content-independent embedding scheme is consistent across modalities and maintains good image quality in the RONI while keeping all other pixels in the image untouched. Thus, with an appropriate watermarking framework (i.e., with consideration of the watermark generation, embedding and detection functions), our proposed scheme can be viable for multi-modality medical image applications and distant medical services such as teleradiology and eHealth.
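
    A minimal sketch of bit-plane embedding in a RONI (only the least significant plane is used here, the mask is assumed given, and the paper's full generation/detection framework is omitted):

    ```python
    import numpy as np

    def embed_lsb(image, bits, roni_mask):
        """Overwrite the LSB of RONI pixels with watermark bits."""
        out = image.copy()
        idx = np.flatnonzero(roni_mask)[:len(bits)]   # embeddable positions
        payload = np.asarray(bits, dtype=np.uint8)[:idx.size]
        flat = out.ravel()                            # view into out
        flat[idx] = (flat[idx] & 0xFE) | payload      # clear, then set the LSB
        return out

    def extract_lsb(image, roni_mask, n_bits):
        idx = np.flatnonzero(roni_mask)[:n_bits]
        return image.ravel()[idx] & 1
    ```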

  9. Nonlinear filtering for character recognition in low quality document images

    NASA Astrophysics Data System (ADS)

    Diaz-Escobar, Julia; Kober, Vitaly

    2014-09-01

    Optical character recognition in scanned printed documents is a well-studied task in which capture conditions such as sheet position, illumination, contrast and resolution are controlled. Nowadays, it is more practical to use mobile devices than a scanner for document capture. As a consequence, the quality of document images is often poor owing to the presence of geometric distortions, nonhomogeneous illumination, low resolution, etc. In this work we propose to use multiple adaptive nonlinear composite filters for the detection and classification of characters. Computer simulation results obtained with the proposed system are presented and discussed.

  10. Image information content and patient exposure.

    PubMed

    Motz, J W; Danos, M

    1978-01-01

    Presently, patient exposure and x-ray tube kilovoltage are determined by image visibility requirements on x-ray film. With the employment of image-processing techniques, image visibility may be manipulated, and the exposure may be determined only by the desired information content, i.e., by the required degree of tissue-density discrimination and spatial resolution. This work gives quantitative relationships between the image information content and the patient exposure, and gives estimates of the minimum exposures required for the detection of image signals associated with particular radiological exams. Also, for subject thicknesses larger than approximately 5 cm, the results show that the maximum information content may be obtained at a single kilovoltage and filtration with the simultaneous employment of image-enhancement and antiscatter techniques. This optimization may be used either to reduce the patient exposure or to increase the retrieved information.

  11. Standard Health Level Seven for Odontological Digital Imaging

    PubMed Central

    Abril-Gonzalez, Mauricio; Portilla, Fernando A.

    2017-01-01

    Abstract Background: A guide for the implementation of dental digital imaging reports was developed and validated through the International Standard of Health Informatics–Health Level Seven (HL7), achieving interoperability with an electronic system that keeps dental records. Introduction: Digital imaging benefits patients, who can view previous close-ups of dental examinations; providers, because of greater efficiency in managing information; and insurers, because of improved accessibility, patient monitoring, and more efficient cost management. Finally, imaging is beneficial for the dentist, who can be more agile in the diagnosis and treatment of patients using this tool. Materials and Methods: The guide was developed under the parameters of an HL7 standard. It was necessary to create a group of dentists and three experts in information and communication technologies from different institutions. Discussion: Diagnostic images scanned with conventional radiology or from a radiovisiograph can be converted to the Digital Imaging and Communications in Medicine (DICOM) format while retaining patient information. The guide shows how the information of the patient's health record and the information of the dental image can be standardized in a Clinical Dental Record document using an international informatics standard, the HL7-V3-CDA document (dental document Level 2). Since it is a standardized informatics document, it can be sent, stored, or displayed using different devices (personal computers or mobile devices), independent of the platform used. Conclusions: Interoperability using dental images and dental record systems reduces adverse events, increases security for the patient, and makes more efficient use of resources. This article makes a contribution to the field of telemedicine in dental informatics. In addition, the results could be a reference for projects of electronic medical records in which dental documents form a part. PMID:27248059

  12. Standard Health Level Seven for Odontological Digital Imaging.

    PubMed

    Abril-Gonzalez, Mauricio; Portilla, Fernando A; Jaramillo-Mejia, Marta C

    2017-01-01

    A guide for the implementation of dental digital imaging reports was developed and validated through the International Standard of Health Informatics-Health Level Seven (HL7), achieving interoperability with an electronic system that keeps dental records. Digital imaging benefits patients, who can view previous close-ups of dental examinations; providers, because of greater efficiency in managing information; and insurers, because of improved accessibility, patient monitoring, and more efficient cost management. Finally, imaging is beneficial for the dentist, who can be more agile in the diagnosis and treatment of patients using this tool. The guide was developed under the parameters of an HL7 standard. It was necessary to create a group of dentists and three experts in information and communication technologies from different institutions. Diagnostic images scanned with conventional radiology or from a radiovisiograph can be converted to the Digital Imaging and Communications in Medicine (DICOM) format while retaining patient information. The guide shows how the information of the patient's health record and the information of the dental image can be standardized in a Clinical Dental Record document using an international informatics standard, the HL7-V3-CDA document (dental document Level 2). Since it is a standardized informatics document, it can be sent, stored, or displayed using different devices (personal computers or mobile devices), independent of the platform used. Interoperability using dental images and dental record systems reduces adverse events, increases security for the patient, and makes more efficient use of resources. This article makes a contribution to the field of telemedicine in dental informatics. In addition, the results could be a reference for projects of electronic medical records in which dental documents form a part.
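
    As a rough illustration of what a CDA-style XML document carrying a dental image reference might look like, here is a hedged sketch; the element structure is heavily simplified and omits the coded templates, identifiers, and section codes a conformant HL7-V3-CDA Level 2 document requires.

    ```python
    import xml.etree.ElementTree as ET

    NS = "urn:hl7-org:v3"  # HL7 V3 namespace

    def minimal_dental_cda(patient_id: str, image_ref: str) -> bytes:
        """Skeleton of a CDA-style XML document referencing a DICOM image.
        This only shows the general shape, not a conformant Level 2 document."""
        ET.register_namespace("", NS)
        doc = ET.Element(f"{{{NS}}}ClinicalDocument")
        record_target = ET.SubElement(doc, f"{{{NS}}}recordTarget")
        role = ET.SubElement(record_target, f"{{{NS}}}patientRole")
        ET.SubElement(role, f"{{{NS}}}id", extension=patient_id)      # patient identifier
        component = ET.SubElement(doc, f"{{{NS}}}component")
        section = ET.SubElement(component, f"{{{NS}}}section")
        media = ET.SubElement(section, f"{{{NS}}}observationMedia")   # carries the image reference
        ET.SubElement(media, f"{{{NS}}}reference", value=image_ref)
        return ET.tostring(doc, xml_declaration=True, encoding="utf-8")

    # toy usage with a hypothetical patient id and image path
    print(minimal_dental_cda("12345", "dicom/pano_001.dcm").decode())
    ```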

  13. Digital authentication with copy-detection patterns

    NASA Astrophysics Data System (ADS)

    Picard, Justin

    2004-06-01

    Technologies for making high-quality copies of documents are becoming more available, cheaper, and more efficient. As a result, the counterfeiting business engenders huge losses, ranging from 5% to 8% of worldwide sales of brand products, and endangers the reputation and value of the brands themselves. Moreover, the growth of the Internet drives the business of counterfeited documents (fake IDs, university diplomas, checks, and so on), which can be bought easily and anonymously from hundreds of companies on the Web. The incredible progress of digital imaging equipment has put in question the very possibility of verifying the authenticity of documents: how can we discern genuine documents from seemingly "perfect" copies? This paper proposes a solution based on creating digital images with specific properties, called copy-detection patterns (CDPs), which are printed on arbitrary documents, packages, etc. CDPs make optimal use of an "information loss principle": every time an image is printed or scanned, some information is lost about the original digital image. That principle applies even to the highest quality scanning, digital imaging, printing, or photocopying equipment available today, and will likely remain true tomorrow. By measuring the amount of information contained in a scanned CDP, the CDP detector can make a decision on the authenticity of the document.
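
    A toy illustration of the information-loss principle, not Picard's actual detector: a pseudo-random pattern is degraded once for a genuine print-scan and twice for a copy, and a normalized-correlation score separates the two. The degradation model and any decision threshold are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def make_cdp(n=128):
        """A maximum-entropy pseudo-random binary pattern (the printed CDP)."""
        return rng.integers(0, 2, (n, n)).astype(float)

    def print_scan(pattern, blur=1, noise=0.3):
        """Crude stand-in for one print-scan cycle: local averaging plus sensor noise."""
        k = 2 * blur + 1
        pad = np.pad(pattern, blur, mode="edge")
        smooth = sum(pad[i:i + pattern.shape[0], j:j + pattern.shape[1]]
                     for i in range(k) for j in range(k)) / k**2
        return smooth + rng.normal(0, noise, pattern.shape)

    def similarity(original, scanned):
        """Normalized correlation: a proxy for the information retained in the scan."""
        a, b = original - original.mean(), scanned - scanned.mean()
        return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

    cdp = make_cdp()
    genuine = print_scan(cdp)                  # one print-scan cycle
    counterfeit = print_scan(print_scan(cdp))  # a copy suffers a second cycle, losing more detail
    print(similarity(cdp, genuine), similarity(cdp, counterfeit))
    # a detector would compare the score against a threshold calibrated on genuine prints
    ```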

  14. Spotting words in handwritten Arabic documents

    NASA Astrophysics Data System (ADS)

    Srihari, Sargur; Srinivasan, Harish; Babu, Pavithra; Bhole, Chetan

    2006-01-01

    The design and performance of a system for spotting handwritten Arabic words in scanned document images is presented. The three main components of the system are a word segmenter, a shape-based matcher for words, and a search interface. The user types an English query into a search window; the system finds the equivalent Arabic word, e.g., by dictionary look-up, and locates word images in an indexed (segmented) set of documents. A two-step approach is employed in performing the search: (1) prototype selection: the query is used to obtain a set of handwritten samples of that word from a known set of writers (these are the prototypes), and (2) word matching: the prototypes are used to spot each occurrence of those words in the indexed document database. A ranking is performed on the entire set of test word images, where the ranking criterion is a similarity score between each prototype word and the candidate words based on global word shape features. A database of 20,000 word images contained in 100 scanned handwritten Arabic documents written by 10 different writers was used to study retrieval performance. Using five writers to provide prototypes and the other five for testing, on manually segmented documents, 55% precision is obtained at 50% recall. Performance increases as more writers are used for training.
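
    A minimal sketch of the prototype-based ranking step, with the global word-shape features abstracted as plain vectors; the cosine similarity used here is an assumption standing in for the paper's similarity score, which is not specified in the abstract.

    ```python
    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def spot(prototypes, candidates, top_k=10):
        """Rank candidate word images by their best similarity to any prototype.

        prototypes : list of feature vectors for handwritten samples of the query word
        candidates : dict mapping word-image id -> feature vector (the indexed documents)
        """
        scores = {wid: max(cosine(p, v) for p in prototypes)
                  for wid, v in candidates.items()}
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

    # toy usage with random 32-d "shape features"
    rng = np.random.default_rng(1)
    protos = [rng.normal(size=32) for _ in range(5)]
    cands = {f"doc3_word{i}": rng.normal(size=32) for i in range(200)}
    print(spot(protos, cands, top_k=5))
    ```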

  15. Investigating the Link Between Radiologists' Gaze, Diagnostic Decision, and Image Content

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tourassi, Georgia; Voisin, Sophie; Paquit, Vincent C

    2013-01-01

    Objective: To investigate machine learning for linking image content, human perception, cognition, and error in the diagnostic interpretation of mammograms. Methods: Gaze data and diagnostic decisions were collected from six radiologists who reviewed 20 screening mammograms while wearing a head-mounted eye-tracker. Texture analysis was performed in mammographic regions that attracted radiologists' attention and in all abnormal regions. Machine learning algorithms were investigated to develop predictive models that link: (i) image content with gaze, (ii) image content and gaze with cognition, and (iii) image content, gaze, and cognition with diagnostic error. Both group-based and individualized models were explored. Results: By pooling the data from all radiologists, machine learning produced highly accurate predictive models linking image content, gaze, cognition, and error. Merging radiologists' gaze metrics and cognitive opinions with computer-extracted image features identified 59% of the radiologists' diagnostic errors while confirming 96.2% of their correct diagnoses. The radiologists' individual errors could be adequately predicted by modeling the behavior of their peers. However, personalized tuning appears to be beneficial in many cases to capture individual behavior more accurately. Conclusions: Machine learning algorithms combining image features with radiologists' gaze data and diagnostic decisions can be effectively developed to recognize cognitive and perceptual errors associated with the diagnostic interpretation of mammograms.

  16. Interpretive versus noninterpretive content in top-selling radiology textbooks: what are we teaching medical students?

    PubMed

    Webb, Emily M; Vella, Maya; Straus, Christopher M; Phelps, Andrew; Naeger, David M

    2015-04-01

    There are few data on whether appropriate, cost-effective, and safe ordering of imaging examinations is adequately taught in US medical school curricula. We sought to determine the proportion of noninterpretive content (such as appropriate ordering) versus interpretive content (such as reading a chest x-ray) in the top-selling medical student radiology textbooks. We performed an online search to identify a ranked list of the six top-selling general radiology textbooks for medical students. Each textbook was reviewed, including content in the text, tables, images, figures, appendices, practice questions, question explanations, and glossaries. Individual pages of text and individual images were semiquantitatively scored on a six-level scale as to the percentage of material that was interpretive versus noninterpretive. The predominant imaging modality addressed in each was also recorded. Descriptive statistical analysis was performed. All six books had more interpretive content. On average, 1.4 pages of text focused on interpretation for every one page focused on noninterpretive content. Seventeen images/figures were dedicated to interpretive skills for every one focused on noninterpretive skills. In all books, the largest proportion of text and image content was dedicated to plain films (51.2%), with computed tomography (CT) a distant second (16%). The content on radiographs (3.1:1) and CT (1.6:1) was more interpretive than not. The current six top-selling medical student radiology textbooks contain a preponderance of material teaching image interpretation compared to material teaching noninterpretive skills, such as appropriate imaging examination selection, rational utilization, and patient safety. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.

  17. Parallel processing considerations for image recognition tasks

    NASA Astrophysics Data System (ADS)

    Simske, Steven J.

    2011-01-01

    Many image recognition tasks are well-suited to parallel processing. The most obvious example is that many imaging tasks require the analysis of multiple images. From this standpoint, then, parallel processing need be no more complicated than assigning individual images to individual processors. However, there are three less trivial categories of parallel processing that will be considered in this paper: parallel processing (1) by task; (2) by image region; and (3) by meta-algorithm. Parallel processing by task allows the assignment of multiple workflows, as diverse as optical character recognition (OCR), document classification, and barcode reading, to parallel pipelines. This can substantially decrease the time to completion for the document tasks. For this approach, each parallel pipeline is generally performing a different task. Parallel processing by image region allows a larger imaging task to be subdivided into a set of parallel pipelines, each performing the same task but on a different data set. This type of image analysis is readily addressed by a map-reduce approach, as sketched below. Examples include document skew detection and multiple face detection and tracking. Finally, parallel processing by meta-algorithm allows different algorithms to be deployed on the same image simultaneously. This approach may result in improved accuracy.
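
    A small sketch of parallel processing by image region in the map-reduce style described above, using Python's multiprocessing; the per-tile analysis here is a trivial stand-in for a real task such as skew detection.

    ```python
    import numpy as np
    from multiprocessing import Pool

    def analyze_tile(tile):
        """Map step: any per-region analysis; here just a dark-pixel count as a stand-in."""
        return int((tile < 128).sum())

    def split_tiles(image, rows=4, cols=4):
        """Subdivide the image into a grid of regions for the parallel pipelines."""
        for band in np.array_split(image, rows, axis=0):
            yield from np.array_split(band, cols, axis=1)

    if __name__ == "__main__":
        image = np.random.randint(0, 256, (2048, 2048), dtype=np.uint8)
        with Pool() as pool:
            partials = pool.map(analyze_tile, list(split_tiles(image)))  # same task, different data
        total_dark = sum(partials)  # reduce step combines the per-region results
        print(total_dark)
    ```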

  18. Comparison of approaches for mobile document image analysis using server supported smartphones

    NASA Astrophysics Data System (ADS)

    Ozarslan, Suleyman; Eren, P. Erhan

    2014-03-01

    With the recent advances in mobile technologies, new capabilities are emerging, such as mobile document image analysis. However, mobile phones are still less powerful than servers, and they have some resource limitations. One approach to overcoming these limitations is performing the resource-intensive processes of the application on remote servers. In mobile document image analysis, the most resource-consuming process is Optical Character Recognition (OCR), which is used to extract text from images captured by mobile phones. In this study, our goal is to compare the in-phone and remote-server processing approaches for mobile document image analysis in order to explore their trade-offs. In the in-phone approach, all processes required for mobile document image analysis run on the mobile phone. In the remote-server approach, the core OCR process runs on the remote server and the other processes run on the mobile phone. Results of the experiments show that the remote-server approach is considerably faster than the in-phone approach in terms of OCR time, but adds extra delays such as network delay. Since compression and downscaling of images significantly reduce file sizes and extra delays, the remote-server approach overall outperforms the in-phone approach in terms of the selected speed and correct-recognition metrics, provided the gain in OCR time compensates for the extra delays. According to the results of the experiments, using the most preferable settings, the remote-server approach performs better than the in-phone approach in terms of speed with acceptable correct-recognition metrics.
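
    A hedged sketch of the remote-server approach's client side: downscale and JPEG-compress the captured page before uploading it for server-side OCR. The server URL and the response schema are placeholders, not part of the study.

    ```python
    import io
    import requests
    from PIL import Image

    OCR_SERVER = "http://example.com/ocr"  # hypothetical endpoint, not a real service

    def remote_ocr(path, max_side=1024, quality=60):
        """Shrink the captured page image, then send it for server-side OCR."""
        img = Image.open(path)
        img.thumbnail((max_side, max_side))            # downscaling cuts upload size
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=quality)  # compression cuts it further
        resp = requests.post(OCR_SERVER,
                             files={"page": ("page.jpg", buf.getvalue(), "image/jpeg")})
        resp.raise_for_status()
        return resp.json()["text"]                     # assumed response schema
    ```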

  19. The Road to Paperless

    ERIC Educational Resources Information Center

    Villano, Matt

    2006-01-01

    More and more colleges and universities today have discovered electronic record-keeping and record-sharing, made possible by document imaging technology. Across the country, schools such as Monmouth University (New Jersey), Washington State University, the University of Idaho, and Towson University (Maryland) are embracing document imaging. Yet…

  20. Italian Chapter of the International Society of Cardiovascular Ultrasound expert consensus document on coronary computed tomography angiography: overview and new insights.

    PubMed

    Sozzi, Fabiola B; Maiello, Maria; Pelliccia, Francesco; Parato, Vito Maurizio; Canetta, Ciro; Savino, Ketty; Lombardi, Federico; Palmiero, Pasquale

    2016-09-01

    Coronary computed tomography angiography is a noninvasive heart imaging test currently undergoing rapid development and advancement. High-resolution, three-dimensional images of the moving heart and great vessels are acquired during coronary computed tomography to identify coronary artery disease and classify patient risk for atherosclerotic cardiovascular disease. The technique provides useful information about the coronary tree and atherosclerotic plaques beyond simple luminal narrowing and plaque type defined by calcium content. This application will improve image-guided prevention, medical therapy, and coronary interventions. The ability to interpret coronary computed tomography images is of utmost importance as we develop personalized medical care to enable therapeutic interventions stratified on the basis of plaque characteristics. This overview provides available data and experts' recommendations on the utilization of coronary computed tomography findings. We focus on the use of coronary computed tomography to detect coronary artery disease and stratify patients at risk, illustrating the implications of this test for patient management. We describe its diagnostic power in identifying patients at higher risk of developing acute coronary syndrome and its prognostic significance. Finally, we highlight the features of vulnerable plaques imaged by coronary computed tomography angiography. © 2016, Wiley Periodicals, Inc.

  1. Image segmentation evaluation for very-large datasets

    NASA Astrophysics Data System (ADS)

    Reeves, Anthony P.; Liu, Shuang; Xie, Yiting

    2016-03-01

    With the advent of modern machine learning methods and fully automated image analysis, there is a need for very large image datasets with documented segmentations, for both computer algorithm training and evaluation. Current approaches of visual inspection and manual marking do not scale well to big data. We present a new approach that depends on fully automated algorithm outcomes for segmentation documentation, requires no manual marking, and provides quantitative evaluation for computer algorithms. The documentation of new image segmentations and new algorithm outcomes is achieved by visual inspection. The burden of visual inspection on large datasets is minimized by (a) customized visualizations for rapid review and (b) reducing the number of cases to be reviewed through analysis of quantitative segmentation evaluation. This method has been applied to a dataset of 7,440 whole-lung CT images for 6 different segmentation algorithms designed to fully automatically facilitate the measurement of a number of very important quantitative image biomarkers. The results indicate that we could achieve 93% to 99% successful segmentation for these algorithms on this relatively large image database. The presented evaluation method may be scaled to much larger image databases.
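
    One plausible form of the quantitative evaluation used to cut down visual review, assuming two independent algorithms produce a binary mask per case: cases whose Dice overlap falls below a threshold are flagged for inspection. Both the metric and the threshold are illustrative assumptions, not the paper's stated procedure.

    ```python
    import numpy as np

    def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
        """Dice overlap between two binary segmentation masks."""
        inter = np.logical_and(mask_a, mask_b).sum()
        return 2.0 * inter / (mask_a.sum() + mask_b.sum() + 1e-10)  # epsilon guards empty masks

    def cases_needing_review(results, threshold=0.95):
        """results: dict case_id -> (mask from algorithm 1, mask from algorithm 2).
        Cases where independent algorithms disagree are flagged for visual inspection."""
        return [cid for cid, (a, b) in results.items() if dice(a, b) < threshold]
    ```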

  2. Using High-Content Imaging to Analyze Toxicological Tipping ...

    EPA Pesticide Factsheets

    Slide presentation at the International Conference on Toxicological Alternatives & Translational Toxicology (ICTATT), held in China, discussing the possibility of using High Content Imaging to analyze toxicological tipping points.

  3. Evaluation of Advanced Signal Processing Techniques to Improve Detection and Identification of Embedded Defects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clayton, Dwight A.; Santos-Villalobos, Hector J.; Baba, Justin S.

    By the end of 1996, 109 Nuclear Power Plants were operating in the United States, producing 22% of the Nation's electricity [1]. At present, more than two thirds of these power plants are more than 40 years old. The purpose of the U.S. Department of Energy Office of Nuclear Energy's Light Water Reactor Sustainability (LWRS) Program is to develop technologies and other solutions that can improve the reliability, sustain the safety, and extend the operating lifetimes of nuclear power plants (NPPs) beyond 60 years [2]. The most important safety structures in an NPP are constructed of concrete. The structures generally do not allow for destructive evaluation, and access is limited to one side of the concrete element. Therefore, there is a need for techniques and technologies that can assess the internal health of complex, reinforced concrete structures nondestructively. Previously, we documented the challenges associated with Non-Destructive Evaluation (NDE) of thick, reinforced concrete sections and prioritized conceptual designs of specimens that could be fabricated to represent NPP concrete structures [3]. Consequently, a 7-foot-tall by 7-foot-wide by 3-foot-4-inch-thick concrete specimen was constructed with 2.257-inch- and 1-inch-diameter rebar every 6 to 12 inches. In addition, defects were embedded in the specimen to assess the performance of existing and future NDE techniques. The defects were designed to give a mix of realistic and controlled defects for assessment of the measures needed to overcome the challenges with more heavily reinforced concrete structures. Information on the embedded defects is documented in [4]. We also documented the superiority of Frequency Banded Decomposition (FBD) Synthetic Aperture Focusing Technique (SAFT) over conventional SAFT when probing defects under deep concrete cover. Improvements include revealing an intensity corresponding to a defect that is not visible at all in regular, full-frequency-content SAFT, or showing it with better contrast than conventional SAFT reconstructed images. This report documents our efforts on four fronts: 1) a comparative study between traditional SAFT and FBD SAFT for concrete specimens with and without Alkali-Silica Reaction (ASR) damage, 2) improvement of our Model-Based Iterative Reconstruction (MBIR) for thick reinforced concrete [5], 3) development of a universal framework for sharing, reconstruction, and visualization of ultrasound NDE datasets, and 4) application of machine learning techniques for automated detection of ASR inside concrete. Our comparative study between FBD and traditional SAFT reconstruction images shows a clear difference between images of ASR and non-ASR specimens. In particular, the left first harmonic shows increased contrast and sensitivity to ASR damage. For MBIR, we show the superiority of model-based techniques over delay-and-sum techniques such as SAFT. Improvements include elimination of artifacts caused by direct-arrival signals, and increased contrast and signal-to-noise ratio. For the universal framework, we document a format for data storage based on the HDF5 file format, and also propose a modular Graphic User Interface (GUI) for easy customization of data conversion, reconstruction, and visualization routines. Finally, two techniques for automated ASR detection are presented.
The first technique is based on an analysis of the frequency content using a Hilbert Transform Indicator (HTI), and the second technique employs Artificial Neural Network (ANN) techniques for training and classification of ultrasound data into ASR-damaged and non-ASR-damaged classes. The ANN technique shows great potential, with classification accuracy above 95%. These approaches are extensible to the detection of additional defects and damage in reinforced, thick concrete.
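
    A speculative sketch of a Hilbert-transform-based frequency-content indicator of the general kind named above; the report documents the actual HTI definition elsewhere, so the feature and the threshold here are stand-ins.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def envelope_feature(ascan: np.ndarray) -> float:
        """One crude frequency-content indicator for an ultrasound A-scan:
        ratio of late-time to early-time analytic-signal envelope energy.
        Distributed ASR-type damage tends to scatter energy into the coda."""
        env = np.abs(hilbert(ascan))   # instantaneous amplitude via the analytic signal
        half = env.size // 2
        return float(env[half:].sum() / env[:half].sum())

    def classify(ascan: np.ndarray, threshold=0.6) -> str:
        """Toy decision rule; a trained ANN replaces this in the report's second technique."""
        return "ASR-suspect" if envelope_feature(ascan) > threshold else "sound"
    ```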

  4. Document Monitor

    NASA Technical Reports Server (NTRS)

    1988-01-01

    The Charters of Freedom Monitoring System will periodically assess the physical condition of the U.S. Constitution, Declaration of Independence, and Bill of Rights. Although protected in helium-filled glass cases, the documents are subject to damage from light, vibration, and humidity. The photometer is a CCD detector used as the electronic film for the system's scanning camera, which mechanically scans the document line by line and acquires a series of images, each representing a one-square-inch portion of the document. Perkin-Elmer Corporation's photometer is capable of detecting changes in contrast, shape, or other indicators of degradation with 5 to 10 times the sensitivity of the human eye. A Vicom image processing computer receives the data from the photometer, stores it, and manipulates it, allowing comparison of electronic images over time to detect changes.

  5. Evaluation of pleural and pericardial effusions by magnetic resonance imaging.

    PubMed

    Tscholakoff, D; Sechtem, U; de Geer, G; Schmidt, H; Higgins, C B

    1987-08-01

    MR examinations of 36 patients with pleural and/or pericardial effusions were retrospectively evaluated. The purpose of this study was to determine if MR imaging is capable of differentiating between pleural and pericardial effusions of different compositions using standard electrocardiogram (ECG)-gated and non-gated spin echo pulse sequences. Additional data were obtained from experimental pleural effusions in 10 dogs. The results of this study indicate that old hemorrhages into the pleural or pericardial space can be differentiated from other pleural or pericardial effusions. However, further differentiation between transudates, exudates, and sanguinous effusions is not possible on MR images acquired with standard spin echo pulse sequences. Respiratory and cardiac motion are responsible for signal loss, particularly on first echo images. This was documented in experiments in dogs with induced effusions of known composition; "negative" T2 values consistent with fluid motion during imaging sequences were observed in 80% of cases. However, postmortem studies of the dogs with experimental effusions showed differences between effusions with low protein concentrations and those with higher protein concentrations. We conclude from our study that characterization of pleural and pericardial effusions on standard ECG-gated and non-gated MR examinations is limited to the positive identification of hemorrhage. Motion of the fluid due to cardiac and respiratory activity causes artifactual and unpredictable changes in intensity values, negating the more subtle differences in intensity associated with increasing protein content.

  6. Structured Forms Reference Set of Binary Images II (SFRS2)

    National Institute of Standards and Technology Data Gateway

    NIST Structured Forms Reference Set of Binary Images II (SFRS2) (Web, free access)   The second NIST database of structured forms (Special Database 6) consists of 5,595 pages of binary, black-and-white images of synthesized documents containing hand-print. The documents in this database are 12 different tax forms from the IRS 1040 Package X for the year 1988.

  7. Outpatients flow management and ophthalmic electronic medical records system in university hospital using Yahgee Document View.

    PubMed

    Matsuo, Toshihiko; Gochi, Akira; Hirakawa, Tsuyoshi; Ito, Tadashi; Kohno, Yoshihisa

    2010-10-01

    General electronic medical records systems remain insufficient for ophthalmology outpatient clinics from the viewpoint of dealing with many ophthalmic examinations and images in a large number of patients. Filing systems for documents and images by Yahgee Document View (Yahgee, Inc.) were introduced on the platform of a general electronic medical records system (Fujitsu, Inc.). An outpatient flow management system and an electronic medical records system for ophthalmology were constructed. All images from ophthalmic appliances were transported to Yahgee Image by the MaxFile gateway system (P4 Medic, Inc.). The flow of outpatients going through examinations such as visual acuity testing was monitored by the list "Ophthalmology Outpatients List" of Yahgee Workflow, in addition to the list "Patients Reception List" of Fujitsu. Patients' identification numbers were scanned with bar code readers attached to the ophthalmic appliances. Dual monitors were placed in doctors' rooms to show the Fujitsu medical records on the left-hand monitor and the ophthalmic charts of Yahgee Document on the right-hand monitor. The data of manually inputted visual acuity, and of automatically exported autorefractometry and non-contact tonometry on a new template, MaxFile ED, were again automatically transported to designated boxes on the ophthalmic charts of Yahgee Document. Images such as fundus photographs, fluorescein angiograms, and optical coherence tomographic and ultrasound scans were viewed with Yahgee Image and copy-and-pasted to assigned boxes on the ophthalmic charts. Ordering functions such as appointments, drug prescriptions, fees and diagnoses input, central laboratory tests, and surgical theater and ward room reservations were handled by the Fujitsu electronic medical records system. The combination of the Fujitsu electronic medical records and Yahgee Document View systems enabled the University Hospital to examine the same number of outpatients as prior to the implementation of the computerized filing system.

  8. Research on Generating Method of Embedded Software Test Document Based on Dynamic Model

    NASA Astrophysics Data System (ADS)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying

    2018-03-01

    This paper provides a dynamic-model-based test document generation method for embedded software that automatically generates two documents: the test requirements specification and the configuration item test documentation. The method captures dynamic test requirements in dynamic models, so that dynamic test demand tracking can be generated easily. It automatically produces standardized test requirements and test documentation, addresses inconsistency and incompleteness in document-related content, and improves efficiency.

  9. 16 CFR 436.4 - Table of contents.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Table of contents. 436.4 Section 436.4 Commercial Practices FEDERAL TRADE COMMISSION TRADE REGULATION RULES DISCLOSURE REQUIREMENTS AND PROHIBITIONS CONCERNING FRANCHISING Contents of a Disclosure Document § 436.4 Table of contents. Include the following...

  10. An Open Source Agenda for Research Linking Text and Image Content Features.

    ERIC Educational Resources Information Center

    Goodrum, Abby A.; Rorvig, Mark E.; Jeong, Ki-Tai; Suresh, Chitturi

    2001-01-01

    Proposes methods to utilize image primitives to support term assignment for image classification. Proposes to release code for image analysis in a common tool set for other researchers to use. Of particular focus is the expansion of work by researchers in image indexing to include image content-based feature extraction capabilities in their work.…

  11. Web Content Management Systems: An Analysis of Forensic Investigatory Challenges.

    PubMed

    Horsman, Graeme

    2018-02-26

    With an increase in the creation and maintenance of personal websites, web content management systems are now frequently utilized. Such systems offer a low-cost and simple solution for those seeking to develop an online presence, and subsequently provide a platform from which reported defamatory content, abuse, and copyright infringement have been witnessed. This article provides an introductory forensic analysis of the three most popular web content management systems currently available: WordPress, Drupal, and Joomla!. Test platforms have been created, and their site structures have been examined to provide guidance for forensic practitioners facing investigations of this type. Results document available metadata for establishing site ownership, user interactions, and stored content, following analysis of artifacts including WordPress's wp_users and wp_comments tables, Drupal's "watchdog" records, and Joomla!'s _users and _content tables. Finally, investigatory limitations documenting the difficulties of investigating WCMS usage are noted, and analysis recommendations are offered. © 2018 American Academy of Forensic Sciences.

  12. A Visible Ideology: A Document Series in a Women's Clothing Company.

    ERIC Educational Resources Information Center

    Cronn-Mills, Kirstin

    2000-01-01

    Notes that corporate documents of a women's clothing company changed in one season from relatively outdated designs to more updated, professional layouts but the content changed very little. Contends that the document redesign indicates a move to a more feminist outlook for the company. Describes how the document design represents a slow change…

  13. 42 CFR 420.304 - Procedures for obtaining access to books, documents, and records.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 3 2012-10-01 2012-10-01 false Procedures for obtaining access to books, documents... Access to Books, Documents, and Records of Subcontractors § 420.304 Procedures for obtaining access to books, documents, and records. (a) Contents of the request. Requests for access will be in writing and...

  14. 42 CFR 420.304 - Procedures for obtaining access to books, documents, and records.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 3 2014-10-01 2014-10-01 false Procedures for obtaining access to books, documents... Access to Books, Documents, and Records of Subcontractors § 420.304 Procedures for obtaining access to books, documents, and records. (a) Contents of the request. Requests for access will be in writing and...

  15. 42 CFR 420.304 - Procedures for obtaining access to books, documents, and records.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 3 2013-10-01 2013-10-01 false Procedures for obtaining access to books, documents... Access to Books, Documents, and Records of Subcontractors § 420.304 Procedures for obtaining access to books, documents, and records. (a) Contents of the request. Requests for access will be in writing and...

  16. Large-Scale Document Automation: The Systems Integration Issue.

    ERIC Educational Resources Information Center

    Kalthoff, Robert J.

    1985-01-01

    Reviews current technologies for electronic imaging and its recording and transmission, including digital recording, optical data disks, automated image-delivery micrographics, high-density-magnetic recording, and new developments in telecommunications and computers. The role of the document automation systems integrator, who will bring these…

  17. Description of the MHS Health Level 7 Chemistry Laboratory for Public Health Surveillance

    DTIC Science & Technology

    2012-09-01

    document provides a history of the HL7 chemistry database and its contents, explains the creation of chemistry/serology records, describes the pathway...in health surveillance activities. This technical document discusses the chemistry database by providing a history of the dataset and its contents...source for its usefulness in public health surveillance. While HL7 data also includes radiology, anatomic pathology reports and pharmacy transactions

  18. 40 CFR 300.810 - Contents of the administrative record file.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... typically, but not in all cases, will contain the following types of documents: (1) Documents containing... determination of imminent and substantial endangerment, public health evaluations, and technical and engineering... investigation/feasibility study, state documentation of applicable or relevant and appropriate requirements, and...

  19. Shoulder dystocia documentation: an evaluation of a documentation training intervention.

    PubMed

    LeRiche, Tammy; Oppenheimer, Lawrence; Caughey, Sharon; Fell, Deshayne; Walker, Mark

    2015-03-01

    To evaluate the quality and content of nurse and physician shoulder dystocia delivery documentation before and after MORE training in shoulder dystocia management skills and documentation. Approximately 384 charts at the Ottawa Hospital General Campus involving a diagnosis of shoulder dystocia between the years 2000 and 2006, excluding the training year of 2003, were identified. The charts were evaluated for 14 key components derived from a validated instrument. The delivery notes were then scored based on these components by 2 separate investigators who were blinded to delivery note author, date, and patient identification to further quantify delivery record quality. Approximately 346 charts were reviewed for physician and nurse delivery documentation. The average score for physician notes was 6 (out of a maximum possible score of 14) both before and after the training intervention. The nurses' average score was 5 before and after the training intervention. Negligible improvement was observed in the content and quality of shoulder dystocia documentation before and after nurse and physician training.

  20. Liposomes versus metallic nanostructures: differences in the process of knowledge translation in cancer.

    PubMed

    Fajardo-Ortiz, David; Duran, Luis; Moreno, Laura; Ochoa, Héctor; Castaño, Víctor M

    2014-01-01

    This research maps the knowledge translation process for two different types of nanotechnologies applied to cancer: liposomes and metallic nanostructures (MNs). We performed a structural analysis of citation networks and text mining supported in controlled vocabularies. In the case of liposomes, our results identify subnetworks (invisible colleges) associated with different therapeutic strategies: nanopharmacology, hyperthermia, and gene therapy. Only in the pharmacological strategy was an organized knowledge translation process identified, which, however, is monopolized by the liposomal doxorubicins. In the case of MNs, subnetworks are not differentiated by the type of therapeutic strategy, and the content of the documents is still basic research. Research on MNs is highly focused on developing a combination of molecular imaging and photothermal therapy.

  1. Liposomes versus metallic nanostructures: differences in the process of knowledge translation in cancer

    PubMed Central

    Fajardo-Ortiz, David; Duran, Luis; Moreno, Laura; Ochoa, Héctor; Castaño, Víctor M

    2014-01-01

    This research maps the knowledge translation process for two different types of nanotechnologies applied to cancer: liposomes and metallic nanostructures (MNs). We performed a structural analysis of citation networks and text mining supported in controlled vocabularies. In the case of liposomes, our results identify subnetworks (invisible colleges) associated with different therapeutic strategies: nanopharmacology, hyperthermia, and gene therapy. Only in the pharmacological strategy was an organized knowledge translation process identified, which, however, is monopolized by the liposomal doxorubicins. In the case of MNs, subnetworks are not differentiated by the type of therapeutic strategy, and the content of the documents is still basic research. Research on MNs is highly focused on developing a combination of molecular imaging and photothermal therapy. PMID:24920900

  2. The impact of image-size manipulation and sugar content on children's cereal consumption.

    PubMed

    Neyens, E; Aerts, G; Smits, T

    2015-12-01

    Previous studies have demonstrated that portion sizes and food energy density influence children's eating behavior. However, the potential effects of front-of-pack image sizes of serving suggestions and of sugar content have not been tested. Using a mixed experimental design among young children, this study examines the effects of image-size manipulation and sugar content on cereal and milk consumption. Children poured and consumed significantly more cereal and drank significantly more milk when exposed to a larger image of the serving suggestion as compared to a smaller one. Sugar content showed no main effects. Nevertheless, cereal consumption differed significantly between small and large image sizes only when sugar content was low. An advantage of this study was the mundane setting in which the data were collected: a school's dining room instead of an artificial lab. Future studies should include a control condition, with children eating by themselves, to reflect an even more natural context. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Patient-generated Digital Images after Pediatric Ambulatory Surgery.

    PubMed

    Miller, Matthew W; Ross, Rachael K; Voight, Christina; Brouwer, Heather; Karavite, Dean J; Gerber, Jeffrey S; Grundmeier, Robert W; Coffin, Susan E

    2016-07-06

    To describe the use of digital images captured by parents or guardians and sent to clinicians for assessment of wounds after pediatric ambulatory surgery. Subjects with digital images of post-operative wounds were identified as part of an ongoing cohort study of infections after ambulatory surgery within a large pediatric healthcare system. We performed a structured review of the electronic health record (EHR) to determine how digital images were documented in the EHR and used in clinical care. We identified 166 patients whose parent or guardian reported sending a digital image of the wound to the clinician after surgery. A corresponding digital image was located in the EHR in only 121 of these encounters. A change in clinical management was documented in 20% of these encounters, including referral for in-person evaluation of the wound and antibiotic prescription. Clinical teams have developed ad hoc workflows to use digital images to evaluate post-operative pediatric surgical patients. Because the use of digital images to support follow-up care after ambulatory surgery is likely to increase, it is important that high-quality images are captured and documented appropriately in the EHR to ensure privacy, security, and a high level of care.

  4. Patient-Generated Digital Images after Pediatric Ambulatory Surgery

    PubMed Central

    Ross, Rachael K.; Voight, Christina; Brouwer, Heather; Karavite, Dean J.; Gerber, Jeffrey S.; Grundmeier, Robert W.; Coffin, Susan E.

    2016-01-01

    Summary Objective To describe the use of digital images captured by parents or guardians and sent to clinicians for assessment of wounds after pediatric ambulatory surgery. Methods Subjects with digital images of post-operative wounds were identified as part of an ongoing cohort study of infections after ambulatory surgery within a large pediatric healthcare system. We performed a structured review of the electronic health record (EHR) to determine how digital images were documented in the EHR and used in clinical care. Results We identified 166 patients whose parent or guardian reported sending a digital image of the wound to the clinician after surgery. A corresponding digital image was located in the EHR in only 121 of these encounters. A change in clinical management was documented in 20% of these encounters, including referral for in-person evaluation of the wound and antibiotic prescription. Conclusion Clinical teams have developed ad hoc workflows to use digital images to evaluate post-operative pediatric surgical patients. Because the use of digital images to support follow-up care after ambulatory surgery is likely to increase, it is important that high-quality images are captured and documented appropriately in the EHR to ensure privacy, security, and a high level of care. PMID:27452477

  5. Fast words boundaries localization in text fields for low quality document images

    NASA Astrophysics Data System (ADS)

    Ilin, Dmitry; Novikov, Dmitriy; Polevoy, Dmitry; Nikolaev, Dmitry

    2018-04-01

    The paper examines the problem of precise localization of word boundaries in document text zones. Document processing on a mobile device consists of document localization, perspective correction, localization of individual fields, finding words in separate zones, segmentation, and recognition. When an image is captured with a mobile digital camera under uncontrolled conditions, digital noise, perspective distortions, or glares may occur. Further document processing is complicated by document specifics: layout elements, complex backgrounds, static text, document security elements, and a variety of text fonts. Moreover, the problem of word boundaries localization has to be solved at runtime on a mobile CPU with limited computing capabilities under the specified restrictions. At the moment, there are several groups of methods optimized for different conditions. Methods for scanned printed text are quick but limited to images of high quality. Methods for text in the wild have an excessively high computational complexity and thus are hardly suitable for running on mobile devices as part of a mobile document recognition system. The method presented in this paper solves a more specialized problem than the task of finding text in natural images. It uses local features, a sliding window, and a lightweight neural network in order to achieve an optimal speed-precision ratio. The duration of the algorithm is 12 ms per field running on an ARM processor of a mobile device. The error rate for boundaries localization on a test sample of 8000 fields is 0.3
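
    A toy stand-in for the sliding-window stage: it applies a fixed ink-density threshold over a projection profile where the paper applies a lightweight neural network, and the window size and threshold are assumptions.

    ```python
    import numpy as np

    def word_boundaries(field: np.ndarray, win=5, ink_thresh=0.02):
        """Toy boundary locator for a binarized text-field image (True = ink).

        A window slides over the vertical projection profile; columns whose
        local ink density falls below a threshold are treated as inter-word
        gaps. The paper scores windows with a trained network instead of this
        fixed rule."""
        profile = field.mean(axis=0)                       # ink density per column
        kernel = np.ones(win) / win
        local = np.convolve(profile, kernel, mode="same")  # sliding-window average
        gaps = local < ink_thresh
        # boundaries are the edges of gap runs
        edges = np.flatnonzero(np.diff(gaps.astype(int)))
        return edges.tolist()
    ```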

  6. Writer identification on historical Glagolitic documents

    NASA Astrophysics Data System (ADS)

    Fiel, Stefan; Hollaus, Fabian; Gau, Melanie; Sablatnig, Robert

    2013-12-01

    This work aims at automatically identifying the scribes of historical Slavonic manuscripts. The quality of the ancient documents is partially degraded by faded-out ink or varying background. The writer identification method used is based on local image features described with the Scale-Invariant Feature Transform (SIFT). A visual vocabulary is used for the description of handwriting characteristics, whereby the features are clustered using a Gaussian Mixture Model and encoded with the Fisher kernel. The writer identification approach was originally designed for grayscale images of modern handwriting. But contrary to modern documents, the historical manuscripts are partially corrupted by background clutter and water stains. As a result, SIFT features are also found on the background. Since the method also shows good results on binarized images of modern handwriting, the approach was additionally applied to binarized images of the ancient writings. Experiments show that this preprocessing step leads to a significant performance increase: the identification rate on binarized images is 98.9%, compared to an identification rate of 87.6% on grayscale images.
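
    A condensed sketch of the vocabulary step using OpenCV and scikit-learn: SIFT descriptors are pooled and modeled with a GMM, and each page is encoded by soft assignment. The soft-assignment histogram is a simplification of the Fisher-kernel encoding the paper uses, and the component count is an assumption.

    ```python
    import cv2
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def sift_descriptors(gray: np.ndarray) -> np.ndarray:
        """Extract 128-d SIFT descriptors from a (possibly binarized) page image."""
        sift = cv2.SIFT_create()
        _, desc = sift.detectAndCompute(gray, None)
        return desc if desc is not None else np.empty((0, 128), np.float32)

    def fit_vocabulary(all_desc: np.ndarray, k=32) -> GaussianMixture:
        """Visual vocabulary: a GMM over SIFT descriptors pooled from training pages."""
        return GaussianMixture(n_components=k, covariance_type="diag").fit(all_desc)

    def page_signature(gmm: GaussianMixture, desc: np.ndarray) -> np.ndarray:
        """Soft-assignment histogram of a page's descriptors over the vocabulary
        (a simplification; the paper encodes pages with the Fisher kernel)."""
        return gmm.predict_proba(desc).mean(axis=0)
    ```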

  7. Content based image retrieval using local binary pattern operator and data mining techniques.

    PubMed

    Vatamanu, Oana Astrid; Frandeş, Mirela; Lungeanu, Diana; Mihalaş, Gheorghe-Ioan

    2015-01-01

    Content-based image retrieval (CBIR) concerns the retrieval of similar images from image databases using feature vectors extracted from the images. These feature vectors globally describe the visual content present in an image, defined by, e.g., texture, colour, shape, and spatial relations. Herein, we propose the definition of feature vectors using the Local Binary Pattern (LBP) operator. A study was performed in order to determine the optimum LBP variant for the general definition of image feature vectors. The chosen LBP variant is then used to build an ultrasound image database and a database of images obtained from Wireless Capsule Endoscopy. The image indexing process is optimized using data clustering techniques for images belonging to the same class. Finally, the proposed indexing method is compared to the classical, widely used indexing technique.
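
    A minimal sketch of LBP-based feature vectors for CBIR, using the "uniform" LBP variant from scikit-image as one plausible choice (the study compares several variants), with a chi-square distance for retrieval; the distance measure is an assumption.

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern

    def lbp_feature_vector(gray: np.ndarray, P=8, R=1) -> np.ndarray:
        """Normalized histogram of uniform LBP codes as a global texture descriptor.
        Uniform LBP with P neighbors yields P + 2 distinct codes."""
        codes = local_binary_pattern(gray, P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        return hist

    def query(db_vectors: dict, q_vector: np.ndarray, top_k=5):
        """Retrieve the most similar images by chi-square distance between histograms."""
        def chi2(a, b):
            return float(((a - b) ** 2 / (a + b + 1e-10)).sum())
        ranked = sorted(db_vectors.items(), key=lambda kv: chi2(kv[1], q_vector))
        return ranked[:top_k]
    ```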

  8. Mobile and web-based education: delivering emergency department discharge and aftercare instructions.

    PubMed

    Saidinejad, Mohsen; Zorc, Joseph

    2014-03-01

    Prior research has identified deficiencies in the standard process of providing instructions for care at discharge from the emergency department (ED). Patients typically receive a brief verbal instruction, along with preformatted written discharge documents. Studies have found that understanding and retention of such information by families are very poor, leading to nonadherence in follow-up care, unnecessary return visits to the ED, and poor health outcomes. The combination of systems factors (information content, delivery methods, and timing) and patient factors (health literacy, language proficiency, and cultural factors) contributes to the challenge of providing successful discharge communication. The Internet and mobile devices provide a novel opportunity to better engage families in this process. Mobile health can address both system- and patient-level challenges. By incorporating images, animation, and full Web-based video content, more comprehensible content that is better suited for patients with lower health literacy and today's visual learners can be created. Information can also be delivered both synchronously and asynchronously, enabling health care providers to deliver health education to patients electronically at home, where much health care actually occurs. Furthermore, providers can track information access by patients, customize content to individual patients, and reach other caregivers who may not be present during the ED visit. Further research is needed to develop the systems and best practices for incorporating mobile health in the ED setting.

  9. The paper crisis: from hospitals to medical practices.

    PubMed

    Park, Gregory; Neaveill, Rodney S

    2009-01-01

    Hospitals, not unlike physician practices, are faced with an increasing burden of managing piles of hard copy documents including insurance forms, requests for information, and advance directives. Healthcare organizations are moving to transform paper-based forms and documents into digitized files in order to save time and money and to have those documents available at a moment's notice. The cost of these document management/imaging systems can be easily justified with the significant savings of resources realized from the implementation of these systems. This article illustrates the enormity of the "paper problem" in healthcare and outlines just a few of the required processes that could be improved with the use of automated document management/imaging systems.

  10. Musepick: AN Integrated Technological Framework to Present the Complex of Santissima Annunziata in Ascoli Piceno (italy)

    NASA Astrophysics Data System (ADS)

    Petrucci, E.; Rossi, D.

    2017-05-01

    Nowadays, digital media play a central role in a shift towards updated modes of communicating knowledge. In addition, the tragic recent events related to the long series of earthquakes that have taken place in central Italy have, unfortunately, reiterated the need to document and preserve not only the material value of the architectural heritage but also the intangible values related to the events and people that have characterized its history. In this framework, the paper investigates some of the opportunities offered by technological innovations, in particular by the specific application areas of augmented reality and augmented virtuality. The historical site chosen as a case study is the complex of Santissima Annunziata, which has played a very important role in the city of Ascoli Piceno (Italy) for centuries. The objective was to develop a low-cost web-based platform to serve as a place to gather cultural content related to the diffuse cultural heritage, organized in applications regarding graphical and 3D models as well as 360° images and archival documents.

  11. Storing and Viewing Electronic Documents.

    ERIC Educational Resources Information Center

    Falk, Howard

    1999-01-01

    Discusses the conversion of fragile library materials to computer storage and retrieval to extend the life of the items and to improve accessibility through the World Wide Web. Highlights include entering the images, including scanning; optical character recognition; full text and manual indexing; and available document- and image-management…

  12. Document Indexing for Image-Based Optical Information Systems.

    ERIC Educational Resources Information Center

    Thiel, Thomas J.; And Others

    1991-01-01

    Discussion of image-based information retrieval systems focuses on indexing. Highlights include computerized information retrieval; multimedia optical systems; optical mass storage and personal computers; and a case study that describes an optical disk system which was developed to preserve, access, and disseminate military documents. (19…

  13. Method for the reduction of image content redundancy in large image databases

    DOEpatents

    Tobin, Kenneth William; Karnowski, Thomas P.

    2010-03-02

    A method of increasing information content for content-based image retrieval (CBIR) systems includes the steps of providing a CBIR database, the database having an index for a plurality of stored digital images using a plurality of feature vectors, the feature vectors corresponding to distinct descriptive characteristics of the images. A visual similarity parameter value is calculated based on a degree of visual similarity between the feature vectors of an incoming image being considered for entry into the database and the feature vectors associated with the most similar of the stored images. Based on said visual similarity parameter value, it is determined whether to store, or how long to store, the feature vectors associated with the incoming image in the database.
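
    An illustrative reduction of the claimed idea to code: admit an incoming image's feature vector only if its similarity to the nearest stored vector falls below a threshold. The cosine measure and the threshold value are assumptions, not the patent's specification.

    ```python
    import numpy as np

    def visual_similarity(v: np.ndarray, db: list) -> float:
        """Similarity of an incoming image's feature vector to its nearest stored neighbour."""
        def cos(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
        return max(cos(v, u) for u in db)

    def admit(v: np.ndarray, db: list, threshold=0.98) -> bool:
        """Skip storage when the incoming image is visually redundant with the database."""
        if visual_similarity(v, db) < threshold:
            db.append(v)   # novel enough: index its feature vector
            return True
        return False       # redundant: do not grow the database
    ```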

  14. Interpreting the ASTM 'content standard for digital geospatial metadata'

    USGS Publications Warehouse

    Nebert, Douglas D.

    1996-01-01

    ASTM and the Federal Geographic Data Committee have developed a content standard for spatial metadata to facilitate documentation, discovery, and retrieval of digital spatial data using vendor-independent terminology. Spatial metadata elements are identifiable quality and content characteristics of a data set that can be tied to a geographic location or area. Several Office of Management and Budget Circulars and initiatives have been issued that specify improved cataloguing of and accessibility to federal data holdings. An Executive Order further requires the use of the metadata content standard to document digital spatial data sets. Collection and reporting of spatial metadata for field investigations performed for the federal government is an anticipated requirement. This paper provides an overview of the draft spatial metadata content standard and a description of how the standard could be applied to investigations collecting spatially-referenced field data.

  15. A Novel Navigation Paradigm for XML Repositories.

    ERIC Educational Resources Information Center

    Azagury, Alain; Factor, Michael E.; Maarek, Yoelle S.; Mandler, Benny

    2002-01-01

    Discusses data exchange over the Internet and describes the architecture and implementation of an XML document repository that promotes a navigation paradigm for XML documents based on content and context. Topics include information retrieval and semistructured documents; and file systems as information storage infrastructure, particularly XMLFS.…

  16. New Tools to Convert PDF Math Contents into Accessible e-Books Efficiently.

    PubMed

    Suzuki, Masakazu; Terada, Yugo; Kanahori, Toshihiro; Yamaguchi, Katsuhito

    2015-01-01

    New features in our math-OCR software to convert PDF math contents into accessible e-books are shown. A method for recognizing PDF is thoroughly improved. In addition, content in any selected area of a PDF file, including math formulas, can be cut and pasted into a document in various accessible formats; in this process it is automatically recognized and converted into text and accessible math formulas. Combining this with our authoring tool for technical documents, one can easily produce accessible e-books in various formats such as DAISY, accessible EPUB3, DAISY-like HTML5, Microsoft Word with math objects, and so on. Such contents are useful for various print-disabled students, ranging from the blind to the dyslexic.

  17. Emergency Response Capability Baseline Needs Assessment - Requirements Document

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharry, John A.

    This document was prepared by John A. Sharry, LLNL Fire Marshal and LLNL Division Leader for Fire Protection and reviewed by LLNL Emergency Management Department Head James Colson. The document follows and expands upon the format and contents of the DOE Model Fire Protection Baseline Capabilities Assessment document contained on the DOE Fire Protection Web Site, but only addresses emergency response.

  18. Corneal tissue water content mapping with THz imaging: preliminary clinical results (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Sung, Shijun; Bajwa, Neha; Deng, Sophie X.; Taylor, Zachary; Grundfest, Warren

    2016-03-01

    Well-regulated corneal water content is critical for ocular health and function and can be adversely affected by a number of diseases and injuries. Current clinical practice limits detection of unhealthy corneal water content levels to central corneal thickness measurements performed by ultrasound or optical coherence tomography. Trends revealing increasing or decreasing corneal thickness are fair indicators of corneal water content, but individual measurements are highly inaccurate due to the poorly understood relationship between corneal thickness and natural physiologic variation. Recently, the utility of THz imaging for accurately measuring corneal water content has been explored with rabbit models. Preliminary experiments revealed that contact with dielectric windows confounded imaging data and made it nearly impossible to deconvolve thickness variations due to contact from thickness variations due to water content variation. A follow-up study with a new optical design allowed the acquisition of rabbit data, and the results suggest that the observed, time-varying contrast was due entirely to the water dynamics of the cornea. This paper presents the first in vivo THz images of the human cornea. Five volunteers with healthy corneas were recruited, and their eyes were imaged three times over the course of a few minutes with our novel imaging system. Noticeable changes in corneal reflectivity were observed and attributed to the drying of the tear film. The results suggest that clinically compatible, non-contact corneal imaging is feasible and indicate that the signal acquired from non-contact imaging of the cornea is a complicated coupling of stromal water content and tear film.

  19. Choosing a Scanner: Points To Consider before Buying a Scanner.

    ERIC Educational Resources Information Center

    Raby, Chris

    1998-01-01

    Outlines ten factors to consider before buying a scanner: size of document; type of document; color; speed and volume; resolution; image enhancement; image compression; optical character recognition; scanning subsystem; and the option to use a commercial bureau service. The importance of careful analysis of requirements is emphasized. (AEF)

  20. Illinois Occupational Skill Standards: Imaging/Pre-Press Cluster.

    ERIC Educational Resources Information Center

    Illinois Occupational Skill Standards and Credentialing Council, Carbondale.

    This document, which is intended as a guide for work force preparation program providers, details the Illinois occupational skill standards for programs preparing students for employment in occupations in the imaging/pre-press cluster. The document begins with a brief overview of the Illinois perspective on occupational skill standards and…

  1. KAT: A Flexible XML-based Knowledge Authoring Environment

    PubMed Central

    Hulse, Nathan C.; Rocha, Roberto A.; Del Fiol, Guilherme; Bradshaw, Richard L.; Hanna, Timothy P.; Roemer, Lorrie K.

    2005-01-01

    As part of an enterprise effort to develop new clinical information systems at Intermountain Health Care, the authors have built a knowledge authoring tool that facilitates the development and refinement of medical knowledge content. At present, users of the application can compose order sets and an assortment of other structured clinical knowledge documents based on XML schemas. The flexible nature of the application allows the immediate authoring of new types of documents once an appropriate XML schema and accompanying Web form have been developed and stored in a shared repository. The need for a knowledge acquisition tool stems largely from the desire for medical practitioners to be able to write their own content for use within clinical applications. We hypothesize that medical knowledge content for clinical use can be successfully created and maintained through XML-based document frameworks containing structured and coded knowledge. PMID:15802477

  2. iPhone 4s and iPhone 5s Imaging of the Eye.

    PubMed

    Jalil, Maaz; Ferenczy, Sandor R; Shields, Carol L

    2017-01-01

    To evaluate the technical feasibility of a consumer-grade iPhone camera as an ocular imaging device, compared to existing ophthalmic imaging equipment, for documentation purposes. A comparison of iPhone 4s and 5s images was made with external facial images (macrophotography) using Nikon cameras, slit-lamp images (microphotography) using a Zeiss photo slit-lamp camera, and fundus images (fundus photography) using RetCam II. In an analysis of six consecutive patients with ophthalmic conditions, both iPhones achieved documentation of external findings (macrophotography) using the standard camera modality, tap to focus, and the built-in flash. Both iPhones achieved documentation of anterior segment findings (microphotography) during slit-lamp examination through the oculars. Both iPhones achieved fundus imaging using the standard video modality with continuous iPhone illumination through an ophthalmic lens. In comparison to standard ophthalmic cameras, macrophotography and microphotography were excellent. In comparison to RetCam fundus photography, iPhone fundus photography revealed a smaller field and was technically more difficult to obtain, but the quality was nearly comparable to RetCam. iPhone versions 4s and 5s can provide excellent ophthalmic macrophotography and microphotography and adequate fundus photography. We believe that iPhone imaging could be most useful in settings where expensive, complicated, and cumbersome imaging equipment is unavailable.

  3. Data Mining and Knowledge Discovery tools for exploiting big Earth-Observation data

    NASA Astrophysics Data System (ADS)

    Espinoza Molina, D.; Datcu, M.

    2015-04-01

    The continuous increase in the size of the archives and in the variety and complexity of Earth-Observation (EO) sensors requires new methodologies and tools that allow the end-user to access a large image repository, to extract and infer knowledge about the patterns hidden in the images, to retrieve dynamically a collection of relevant images, and to support the creation of emerging applications (e.g., change detection, global monitoring, disaster and risk management, image time series, etc.). In this context, we are concerned with providing a platform for data mining and knowledge discovery from the content of EO archives. The platform's goal is to implement a communication channel between Payload Ground Segments and the end-user, who receives the content of the data coded in an understandable format associated with semantics that is ready for immediate exploitation. It will provide the user with automated tools to explore and understand the content of highly complex image archives. The challenge lies in the extraction of meaningful information and understanding observations of large extended areas, over long periods of time, with a broad variety of EO imaging sensors in synergy with other related measurements and data. The platform is composed of several components: 1) ingestion of EO images and related data, providing basic features for image analysis; 2) a query engine based on metadata, semantics and image content; 3) data mining and knowledge discovery tools for supporting the interpretation and understanding of image content; 4) semantic definition of the image content via machine learning methods. All these components are integrated and supported by a relational database management system, ensuring the integrity and consistency of Terabytes of Earth Observation data.
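
    A minimal sketch of the kind of combined query such a query engine might answer, filtering first on metadata and then ranking by content similarity; the catalogue structure, field names, and feature vectors are illustrative assumptions, not the platform's actual interface:

    ```python
    import numpy as np

    # Hypothetical in-memory catalogue: metadata plus a content feature vector.
    catalogue = [
        {"sensor": "SAR", "date": "2014-07-01", "features": np.array([0.2, 0.8, 0.1])},
        {"sensor": "SAR", "date": "2014-08-15", "features": np.array([0.7, 0.1, 0.3])},
        {"sensor": "optical", "date": "2014-08-20", "features": np.array([0.6, 0.2, 0.4])},
    ]

    def query(catalogue, sensor, query_features, top_k=2):
        """Metadata filter first, then rank the survivors by content similarity."""
        candidates = [e for e in catalogue if e["sensor"] == sensor]
        candidates.sort(key=lambda e: np.linalg.norm(e["features"] - query_features))
        return candidates[:top_k]

    print(query(catalogue, "SAR", np.array([0.65, 0.15, 0.35])))
    ```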

  4. Effectiveness of image features and similarity measures in cluster-based approaches for content-based image retrieval

    NASA Astrophysics Data System (ADS)

    Du, Hongbo; Al-Jubouri, Hanan; Sellahewa, Harin

    2014-05-01

    Content-based image retrieval is an automatic process of retrieving images according to image visual contents instead of textual annotations. It has many areas of application, from automatic image annotation and archiving, image classification and categorization, to homeland security and law enforcement. The key issues affecting the performance of such retrieval systems include sensible image features that can effectively capture the right amount of visual content, and suitable similarity measures to find similar and relevant images ranked in a meaningful order. Many different approaches, methods and techniques have been developed as a result of very intensive research in the past two decades. Among the many existing approaches is a cluster-based approach, where clustering methods are used to group local feature descriptors into homogeneous regions, and search is conducted by comparing the regions of the query image against those of the stored images. This paper serves as a review of works in this area. The paper first summarizes the existing work reported in the literature and then presents the authors' own investigations in this field. The paper intends to highlight not only the achievements made by recent research but also the challenges and difficulties still remaining in this area.
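
    As an illustration of the clustering step such approaches rely on, here is a plain k-means over synthetic local descriptors in NumPy; the descriptor dimensionality and cluster count are arbitrary choices, not values from any reviewed system:

    ```python
    import numpy as np

    def kmeans(descriptors, k, iters=20, seed=0):
        """Plain k-means: group local feature descriptors into k homogeneous clusters."""
        rng = np.random.default_rng(seed)
        centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
        for _ in range(iters):
            # Assign each descriptor to its nearest center.
            d = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
            labels = d.argmin(axis=1)
            # Move each center to the mean of its members.
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = descriptors[labels == j].mean(axis=0)
        return centers, labels

    # Cluster 500 fake 8-D local descriptors into 5 regions; an image is then
    # represented by its cluster centers and matched region-to-region.
    rng = np.random.default_rng(1)
    centers, labels = kmeans(rng.random((500, 8)), k=5)
    print(centers.shape, np.bincount(labels))
    ```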

  5. 49 CFR 511.21 - Prehearing conferences.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... fact and of the content and authenticity of documents; (v) Oppositions to notices of oral examination... the appearance of witnesses and the production of documents; (viii) Limitation of the number of...

  6. Automated classification of Acid Rock Drainage potential from Corescan drill core imagery

    NASA Astrophysics Data System (ADS)

    Cracknell, M. J.; Jackson, L.; Parbhakar-Fox, A.; Savinova, K.

    2017-12-01

    Classification of the acid forming potential of waste rock is important for managing environmental hazards associated with mining operations. Current methods for the classification of acid rock drainage (ARD) potential usually involve labour intensive and subjective assessment of drill core and/or hand specimens. Manual methods are subject to operator bias and human error, and the amount of material that can be assessed within a given time frame is limited. The automated classification of ARD potential documented here is based on the ARD Index developed by Parbhakar-Fox et al. (2011). This ARD Index involves the combination of five indicators: A - sulphide content; B - sulphide alteration; C - sulphide morphology; D - primary neutraliser content; and E - sulphide mineral association. Several components of the ARD Index require accurate identification of sulphide minerals. This is achieved by classifying Corescan Red-Green-Blue true colour images into the presence or absence of sulphide minerals using supervised classification. Subsequently, sulphide classification images are processed and combined with Corescan SWIR-based mineral classifications to obtain information on sulphide content, indices representing sulphide textures (disseminated versus massive and degree of veining), and spatially associated minerals. This information is combined to calculate ARD Index indicator values that feed into the classification of ARD potential. Automated ARD potential classifications of drill core samples associated with a porphyry Cu-Au deposit are compared to manually derived classifications and those obtained by standard static geochemical testing and X-ray diffractometry analyses. Results indicate a high degree of similarity between automated and manual ARD potential classifications. Major differences between the approaches are observed in sulphide and neutraliser mineral percentages, likely due to the subjective nature of manual estimates of mineral content. The automated approach presented here for the classification of ARD potential offers rapid, repeatable and accurate outcomes comparable to manually derived classifications. Methods for automated ARD classification from digital drill core data represent a step-change for geoenvironmental management practices in the mining industry.
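
    To make the indicator combination concrete, the sketch below sums five pre-scored indicators into an illustrative ARD Index; the 0-10 scoring scale and the class thresholds are assumptions for demonstration only, and the actual scoring and weighting should be taken from Parbhakar-Fox et al. (2011):

    ```python
    def ard_index(a_sulphide_content, b_sulphide_alteration, c_sulphide_morphology,
                  d_neutraliser_content, e_mineral_association):
        """Illustrative combination of the five ARD Index indicators (A-E).

        Each indicator is assumed pre-scored on a 0-10 scale from the
        image-derived measurements; thresholds below are invented for
        demonstration, not taken from the published scheme."""
        total = (a_sulphide_content + b_sulphide_alteration + c_sulphide_morphology
                 + d_neutraliser_content + e_mineral_association)
        if total < 15:
            return total, "low ARD potential"
        if total < 30:
            return total, "moderate ARD potential"
        return total, "high ARD potential"

    print(ard_index(8, 6, 5, 2, 7))  # -> (28, 'moderate ARD potential')
    ```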

  7. Topographical Variation of Human Femoral Articular Cartilage Thickness, T1rho and T2 Relaxation Times Is Related to Local Loading during Walking.

    PubMed

    Van Rossom, Sam; Wesseling, Mariska; Van Assche, Dieter; Jonkers, Ilse

    2018-01-01

    Objective: Early detection of degenerative changes in the cartilage matrix composition is essential for evaluating early interventions that slow down osteoarthritis (OA) initiation. T1rho and T2 relaxation times were found to be effective for detecting early changes in proteoglycan and collagen content. To use these magnetic resonance imaging (MRI) methods, it is important to document the topographical variation in cartilage thickness, T1rho and T2 relaxation times in a healthy population. As OA is partially mechanically driven, the relation between these MRI-based parameters and localized mechanical loading during walking was investigated. Design: MR images were acquired in 14 healthy adults and cartilage thickness and T1rho and T2 relaxation times were determined. Experimental gait data was collected and processed using musculoskeletal modeling to identify weight-bearing zones and estimate the contact force impulse during gait. Variation of the cartilage properties (i.e., thickness, T1rho, and T2) over the femoral cartilage was analyzed and compared between the weight-bearing and non-weight-bearing zone of the medial and lateral condyle as well as the trochlea. Results: Medial condyle cartilage thickness was correlated to the contact force impulse (r = 0.78). Lower T1rho, indicating increased proteoglycan content, was found in the medial weight-bearing zone. T2 was higher in all weight-bearing zones compared with the non-weight-bearing zones, indicating lower relative collagen content. Conclusions: The current results suggest that medial condyle cartilage is adapted as a long-term protective response to localized loading during a frequently performed task and that the weight-bearing zone of the medial condyle has superior weight bearing capacities compared with the non-weight-bearing zones.

  8. Analysis of line structure in handwritten documents using the Hough transform

    NASA Astrophysics Data System (ADS)

    Ball, Gregory R.; Kasiviswanathan, Harish; Srihari, Sargur N.; Narayanan, Aswin

    2010-01-01

    In the analysis of handwriting in documents, a central task is determining the line structure of the text, e.g., the number of text lines, the locations of their start and end points, line width, etc. While simple methods can handle ideal images, real-world documents have complexities such as overlapping line structure, variable line spacing, line skew, document skew, and noisy or degraded images. This paper explores the application of the Hough transform method to handwritten documents, with the goal of automatically determining global document line structure in a top-down manner, which can then be used in conjunction with a bottom-up method such as connected component analysis. The performance is significantly better than that of other top-down methods, such as the projection profile method. In addition, we evaluate the performance of skew analysis by the Hough transform on handwritten documents.
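
    A minimal sketch of Hough-based text-line detection with OpenCV, not the paper's exact pipeline; the vote threshold and the ±10° horizontal band are illustrative parameters:

    ```python
    import cv2
    import numpy as np

    def detect_text_lines(path):
        """Estimate dominant text-line directions in a handwritten page
        with the standard Hough transform (a sketch, not the paper's method)."""
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        # Ink as foreground: invert and binarize.
        _, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        # Accumulate votes; near-horizontal peaks correspond to text lines.
        lines = cv2.HoughLines(binary, 1, np.pi / 180, 300)
        if lines is None:
            return []
        # Keep lines within ~10 degrees of horizontal (theta near 90 degrees).
        return [(rho, theta) for rho, theta in lines[:, 0]
                if abs(theta - np.pi / 2) < np.deg2rad(10)]

    # Each (rho, theta) pair is a candidate text line; the median theta also
    # gives a global skew estimate for the page.
    ```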

  9. Topic Models for Link Prediction in Document Networks

    ERIC Educational Resources Information Center

    Kataria, Saurabh

    2012-01-01

    Recent explosive growth of interconnected document collections such as citation networks, network of web pages, content generated by crowd-sourcing in collaborative environments, etc., has posed several challenging problems for data mining and machine learning community. One central problem in the domain of document networks is that of "link…

  10. In-service documentation tools and statements on palliative sedation in Germany--do they meet the EAPC framework recommendations? A qualitative document analysis.

    PubMed

    Stiel, Stephanie; Heckel, Maria; Christensen, Britta; Ostgathe, Christoph; Klein, Carsten

    2016-01-01

    Numerous (inter-)national guidelines and frameworks have been developed to provide recommendations for the application of palliative sedation (PS). However, they are still not widely known, and large variations in PS clinical practice can be found. This study aims to collect and describe contents from documents used in clinical practice and to compare to what extent they match the European Association for Palliative Care (EAPC) framework recommendations. In a national survey on PS in Germany in 2012, participants were asked to upload their in-service templates, assessment tools, specific protocols, and in-service statements for the application and documentation of PS. These documents were analyzed using systematic structured content analysis. Three hundred seven content units of 52 provided documents were coded. The analyzed templates are very heterogeneous and also contain items not mentioned in the EAPC framework. Among 11 scales for the evaluation of sedation level, the Ramsay Sedation Scale (n = 5) and the Richmond Agitation-Sedation Scale (n = 2) were found most often. For symptom assessment, three different scales were each provided once. In all six PS statements, the common core elements were possible indications for PS, instructions on dose titration, patient monitoring, and care. Wide congruency exists for physical and psychological indications. Most documents coincide on midazolam as a preferred drug and basic monitoring at regular intervals. Aspects such as pre-emptive discussion of the potential role of sedation, the informational needs of relatives, and care for the medical professionals are mentioned rarely. The analyzed templates neglect some points of the EAPC recommendations; however, they expand the ten-point scheme of the framework in some details. The findings may facilitate the development of a standardized consensus draft for documentation and monitoring as an operational statement.

  11. Text-image alignment for historical handwritten documents

    NASA Astrophysics Data System (ADS)

    Zinger, S.; Nerbonne, J.; Schomaker, L.

    2009-01-01

    We describe our work on text-image alignment in the context of building a historical document retrieval system. We aim at aligning images of words in handwritten lines with their text transcriptions. The images of handwritten lines are automatically segmented from the scanned pages of historical documents and then manually transcribed. To train automatic routines to detect words in an image of handwritten text, we need a training set: images of words with their transcriptions. We present our results on aligning words from the images of handwritten lines with their corresponding text transcriptions. Alignment based on the longest spaces between portions of handwriting serves as a baseline. We then show that relative lengths, i.e., the proportions of words in their lines, can be used to improve the alignment results considerably. To take the relative word length into account, we define expressions for the cost function that has to be minimized for aligning text words with their images. We apply right-to-left alignment as well as alignment based on exhaustive search. The quality assessment of these alignments shows correct results for 69% of words from 100 lines, rising to 90% when correct and partially correct alignments are combined.
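
    The relative-length idea can be made concrete with a small sketch: predict word boundaries from transcript character counts and score them against gap positions detected in the line image. The squared-error cost below is an illustrative stand-in for the paper's cost-function expressions:

    ```python
    import numpy as np

    def expected_boundaries(words, line_width_px):
        """Predict word-boundary positions in a line image from transcript
        word lengths, assuming width is roughly proportional to character count."""
        lengths = np.array([len(w) for w in words], dtype=float)
        proportions = np.cumsum(lengths) / lengths.sum()
        return proportions[:-1] * line_width_px  # boundaries between words

    def alignment_cost(predicted, detected):
        """Cost to be minimized: squared mismatch between boundaries predicted
        from relative word lengths and gap positions detected in the image."""
        return float(np.sum((np.asarray(predicted) - np.asarray(detected)) ** 2))

    words = ["the", "quick", "brown", "fox"]
    pred = expected_boundaries(words, line_width_px=400)
    detected_gaps = [80, 210, 330]          # hypothetical gap centres (pixels)
    print(pred, alignment_cost(pred, detected_gaps))
    ```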

  12. Policy environment for prevention, control and management of cardiovascular diseases in primary health care in Kenya.

    PubMed

    Asiki, Gershim; Shao, Shuai; Wainana, Carol; Khayeka-Wandabwa, Christopher; Haregu, Tilahun N; Juma, Pamela A; Mohammed, Shukri; Wambui, David; Gong, Enying; Yan, Lijing L; Kyobutungi, Catherine

    2018-05-09

    In Kenya, cardiovascular diseases (CVDs) accounted for more than 10% of total deaths and 4% of total Disability-Adjusted Life Years (DALYs) in 2015, with a steady increase over the past decade. The main objective of this paper was to review the existing policies and their content in relation to prevention, control and management of CVDs at the primary health care (PHC) level in Kenya. A targeted document search in the Google engine using the keywords "Kenya national policy on cardiovascular diseases" and "Kenya national policy on non-communicable diseases (NCDs)" was conducted, in addition to key informant interviews with Kenyan policy makers. Relevant regional and international policy documents were also included. The contents of the documents identified were reviewed to assess how well they aligned with global health policies on CVD prevention, control and management. Thematic content analysis of the key informant interviews was also conducted to supplement the document reviews. A total of 17 documents were reviewed and three key informants interviewed. Besides the Tobacco Control Act (2007), all policy documents for CVD prevention, control and management were developed after 2013. The national policies were preceded by global initiatives and guidelines and were similar in content to the global policies. The Kenya Health Policy (2014-2030), the Kenya Health Sector Strategic and Investment Plan (2014-2018) and the Kenya National Strategy for the Prevention and Control of Non-communicable Diseases (2015-2020) had strategies on NCDs, including CVDs. Other policy documents for behavioral risk factors (the Tobacco Control Act (2007), the Alcoholic Drinks Control (Licensing) Regulations (2010)) were available. The National Nutrition Action Plan (2012-2017) was available as a draft. Although Kenya has a tiered health care system comprising primary healthcare, integration of CVD prevention and control at the PHC level was not explicitly mentioned in the policy documents. This review revealed important gaps in the policy environment for prevention, control and management of CVDs in PHC settings in Kenya. There is a need to continuously engage the Ministry of Health and other sectors to prioritize the inclusion of CVD services in PHC.

  13. TU-B-19A-01: Image Registration II: TG132-Quality Assurance for Image Registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brock, K; Mutic, S

    2014-06-15

    AAPM Task Group 132 was charged with a review of the current approaches and solutions for image registration in radiotherapy and with providing recommendations for quality assurance and quality control of these clinical processes. As the results of image registration are always used as the input of another process for planning or delivery, it is important for the user to understand and document the uncertainty associated with the algorithm in general and with the result of a specific registration. The recommendations of this task group, which at the time of abstract submission were being reviewed by the AAPM, include the following components. The user should understand the basic image registration techniques and methods of visualizing image fusion. The disclosure of the basic components of the image registration by commercial vendors is critical in this respect. The physicist should perform end-to-end tests of imaging, registration, and planning/treatment systems if image registration is performed on a stand-alone system. A comprehensive commissioning process should be performed and documented by the physicist prior to clinical use of the system. As documentation is important to the safe implementation of this process, a request-and-report system should be integrated into the clinical workflow. Finally, a patient-specific QA practice should be established for efficient evaluation of image registration results. The implementation of these recommendations will be described and illustrated during this educational session. Learning Objectives: Highlight the importance of understanding the image registration techniques used in the clinic. Describe the end-to-end tests needed for stand-alone registration systems. Illustrate a comprehensive commissioning program using both phantom data and clinical images. Describe a request-and-report system to ensure communication and documentation. Demonstrate a clinically efficient patient QA practice for evaluation of image registration.

  14. Selective document image data compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1998-05-19

    A method of storing information from filled-in form-documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. Then a second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel. 10 figs.
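
    A rough sketch of the two-threshold binarization and combination steps in NumPy; the threshold values and the way the two binary images are combined here are illustrative guesses, not the patent's specified procedure:

    ```python
    import numpy as np

    def two_color(image, threshold):
        """Map pixels darker than the threshold to black (0), the rest to white (255)."""
        return np.where(image < threshold, 0, 255).astype(np.uint8)

    def extract_user_info(scanned, filled_edges, t1=128, t2=100):
        """Sketch of the combination step: binarize the scanned image and the
        filled-edge image with separate thresholds, then keep a pixel black
        only where both images agree, approximating foreground (user-entered)
        content. Thresholds are illustrative, not from the patent."""
        first = two_color(scanned, t1)
        second = two_color(filled_edges, t2)
        return np.maximum(first, second)   # black only where both are black

    scanned = np.random.default_rng(0).integers(0, 256, (64, 64))
    edges = np.random.default_rng(1).integers(0, 256, (64, 64))
    print(extract_user_info(scanned, edges).shape)
    ```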

  15. Novel computer-based endoscopic camera

    NASA Astrophysics Data System (ADS)

    Rabinovitz, R.; Hai, N.; Abraham, Martin D.; Adler, Doron; Nissani, M.; Fridental, Ron; Vitsnudel, Ilia

    1995-05-01

    We have introduced a computer-based endoscopic camera which includes (a) unique real-time digital image processing to optimize image visualization by reducing overexposed, glared areas and brightening dark areas, and by accentuating sharpness and fine structures, and (b) patient data documentation and management. The image processing is based on i Sight's iSP1000™ digital video processor chip and the Adaptive Sensitivity™ patented scheme for capturing and displaying images with a wide dynamic range of light, taking into account local neighborhood image conditions and global image statistics. It provides the medical user with the ability to view images under difficult lighting conditions, without losing details 'in the dark' or in completely saturated areas. The patient data documentation and management allows storage of images (approximately 1 MB per image for a full 24-bit color image) to any storage device installed in the camera, or to an external host media via a network. The patient data included with every image describes essential information on the patient and procedure. The operator can assign custom data descriptors, and can search for a stored image and its data by typing any image descriptor. The camera optics has an extended zoom range of f = 20-45 mm, allowing control of the diameter of the field displayed on the monitor such that the complete field of view of the endoscope can be displayed on the whole area of the screen. All these features provide a versatile endoscopic camera with excellent image quality and documentation capabilities.

  16. Selective document image data compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1998-01-01

    A method of storing information from filled-in form-documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. Then a second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel.

  17. Complex Event Processing for Content-Based Text, Image, and Video Retrieval

    DTIC Science & Technology

    2016-06-01

    ARL-TR-7705, June 2016, US Army Research Laboratory: Complex Event Processing for Content-Based Text, Image, and Video Retrieval. (Only title-page and reference fragments are present in this record, including Feldman R, Sanger J., The Text Mining Handbook: Advanced Approaches in Analyzing Unstructured Data, and a Wiley-Interscience (2000) citation.)

  18. Towards Mobile OCR: How To Take a Good Picture of a Document Without Sight.

    PubMed

    Cutter, Michael; Manduchi, Roberto

    The advent of mobile OCR (optical character recognition) applications on regular smartphones holds great promise for enabling blind people to access printed information. Unfortunately, these systems suffer from a problem: in order for OCR output to be meaningful, a well-framed image of the document needs to be taken, something that is difficult to do without sight. This contribution presents an experimental investigation of how blind people position and orient a camera phone while acquiring document images. We developed experimental software to investigate if verbal guidance aids in the acquisition of OCR-readable images without sight. We report on our participants' feedback and performance before and after assistance from our software.

  19. Towards Mobile OCR: How To Take a Good Picture of a Document Without Sight

    PubMed Central

    Cutter, Michael; Manduchi, Roberto

    2015-01-01

    The advent of mobile OCR (optical character recognition) applications on regular smartphones holds great promise for enabling blind people to access printed information. Unfortunately, these systems suffer from a problem: in order for OCR output to be meaningful, a well-framed image of the document needs to be taken, something that is difficult to do without sight. This contribution presents an experimental investigation of how blind people position and orient a camera phone while acquiring document images. We developed experimental software to investigate if verbal guidance aids in the acquisition of OCR-readable images without sight. We report on our participants' feedback and performance before and after assistance from our software. PMID:26677461

  20. Towards a better understanding of the nomenclature used in information-packaging efforts to support evidence-informed policymaking in low- and middle-income countries

    PubMed Central

    2014-01-01

    Background The growing recognition of the importance of concisely communicating research evidence and other policy-relevant information to policymakers has underpinned the development of several information-packaging efforts over the past decade. This has led to a wide variability in the types of documents produced, which is at best confusing and at worst discouraging for those they intend to reach. This paper has two main objectives: to develop a better understanding of the range of documents and document names used by the organizations preparing them; and to assess whether there are any consistencies in the characteristics of sampled documents across the names employed to label (in the title) or describe (in the document or website) them. Methods We undertook a documentary analysis of web-published document series that are prepared by a variety of organizations with the primary intention of providing information to health systems policymakers and stakeholders, and addressing questions related to health policy and health systems with a focus on low- and middle-income countries. No time limit was set. Results In total, 109 individual documents from 24 series produced by 16 different organizations were included. The name ‘policy brief/briefing’ was the most frequently used (39%) to label or describe a document, and was used in all eight broad content areas that we identified, even though they did not have obviously common traits among them. In terms of document characteristics, most documents (90%) used skimmable formats that are easy to read, with understandable, jargon-free, language (80%). Availability of information on the methods (47%) or the quality of the presented evidence (27%) was less common. One-third (32%) chose the topic based on an explicit process to assess the demand for information from policy makers and even fewer (19%) engaged with policymakers to discuss the content of these documents such as through merit review. Conclusions This study highlights the need for organizations embarking on future information-packaging efforts to be more thoughtful when deciding how to name these documents and the need for greater transparency in describing their content, purpose and intended audience. PMID:24889015

  1. Content metamorphosis in synthetic holography

    NASA Astrophysics Data System (ADS)

    Desbiens, Jacques

    2013-02-01

    A synthetic hologram is an optical system made of hundreds of images amalgamated in a structure of holographic cells. Each of these images represents a point of view on a three-dimensional space, which makes us consider synthetic holography a multiple-point-of-view perspective system. In composing a computer graphics scene for a synthetic hologram, the field of view of the holographic image can be divided into several viewing zones. We can attribute these divisions to any object or image feature independently and operate different transformations on image content. In computer-generated holography, we tend to consider content variations as a continuous animation, much like a short movie. However, by composing sequential variations of image features in relation to spatial divisions, we can build new narrative forms distinct from linear cinematographic narration. When observers move freely and change their viewing positions, they travel from one field-of-view division to another. In synthetic holography, metamorphoses of image content lie within the observer's path. In all imaging media, the transformation of image features in synchronization with the observer's position is a rare occurrence; however, it is a predominant characteristic of synthetic holography. This paper describes some of my experimental works in the development of metamorphic holographic images.

  2. QBIC project: querying images by content, using color, texture, and shape

    NASA Astrophysics Data System (ADS)

    Niblack, Carlton W.; Barber, Ron; Equitz, Will; Flickner, Myron D.; Glasman, Eduardo H.; Petkovic, Dragutin; Yanker, Peter; Faloutsos, Christos; Taubin, Gabriel

    1993-04-01

    In the query by image content (QBIC) project we are studying methods to query large on-line image databases using the images' content as the basis of the queries. Examples of the content we use include color, texture, and shape of image objects and regions. Potential applications include medicine ('Give me other images that contain a tumor with a texture like this one'), photo-journalism ('Give me images that have blue at the top and red at the bottom'), and many others in art, fashion, cataloging, retailing, and industry. Key issues include the derivation and computation of attributes of images and objects that provide useful query functionality, retrieval methods based on similarity as opposed to exact match, query by image example or user-drawn image, the user interfaces, query refinement and navigation, high-dimensional database indexing, and automatic and semi-automatic database population. We currently have a prototype system written in X/Motif and C running on an RS/6000 that allows a variety of queries, and a test database of over 1000 images and 1000 objects populated from commercially available photo clip-art images. In this paper we present the main algorithms for color, texture, shape and sketch queries that we use, show example query results, and discuss future directions.
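
    A toy version of a QBIC-style color query, ranking images by histogram intersection rather than exact match; the bin count and the synthetic database are illustrative:

    ```python
    import numpy as np

    def color_histogram(image, bins=8):
        """Per-channel color histogram, normalized; a simple QBIC-style feature."""
        hist = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
                for c in range(3)]
        h = np.concatenate(hist).astype(float)
        return h / h.sum()

    def query_by_color(query_img, database):
        """Rank database images by histogram intersection with the query
        (similarity-based retrieval rather than exact match)."""
        q = color_histogram(query_img)
        sims = [(name, float(np.minimum(q, color_histogram(img)).sum()))
                for name, img in database]
        return sorted(sims, key=lambda t: -t[1])

    rng = np.random.default_rng(0)
    db = [(f"img{i}", rng.integers(0, 256, (32, 32, 3))) for i in range(3)]
    print(query_by_color(db[0][1], db))  # the query image ranks itself first
    ```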

  3. Kingfisher: a system for remote sensing image database management

    NASA Astrophysics Data System (ADS)

    Bruzzo, Michele; Giordano, Ferdinando; Dellepiane, Silvana G.

    2003-04-01

    At present, retrieval methods in remote sensing image databases are mainly based on spatial-temporal information. The increasing amount of images to be collected by the ground stations of earth observing systems emphasizes the need for database management with intelligent data retrieval capabilities. The purpose of the proposed method is to realize a new content-based retrieval system for remote sensing image databases with an innovative search tool based on image similarity. This methodology is quite innovative for this application: many systems exist for photographic images, for example QBIC and IKONA, but they cannot properly extract and describe remote sensing image content. The target database is an archive of images originated from an X-SAR sensor (spaceborne mission, 1994). The best content descriptors, mainly texture parameters, guarantee high retrieval performance and can be extracted without losses independently of image resolution. The latter property allows the DBMS (Database Management System) to process a small amount of information, as in the case of quick-look images, improving time performance and memory access without reducing retrieval accuracy. The matching technique has been designed to enable image management (database population and retrieval) independently of dimensions (width and height). Local and global content descriptors are compared, during the retrieval phase, with the query image, and the results seem to be very encouraging.

  4. Measurement of food-related approach-avoidance biases: Larger biases when food stimuli are task relevant.

    PubMed

    Lender, Anja; Meule, Adrian; Rinck, Mike; Brockmeyer, Timo; Blechert, Jens

    2018-06-01

    Strong implicit responses to food have evolved to avoid energy depletion but contribute to overeating in today's affluent environments. The Approach-Avoidance Task (AAT) supposedly assesses implicit biases in response to food stimuli: Participants push pictures on a monitor "away" or pull them "near" with a joystick that controls a corresponding image zoom. One version of the task couples movement direction with image content-independent features, for example, pulling blue-framed images and pushing green-framed images regardless of content ('irrelevant feature version'). However, participants might selectively attend to this feature and ignore image content and, thus, such a task setup might underestimate existing biases. The present study tested this attention account by comparing two irrelevant feature versions of the task with either a more peripheral (image frame color: green vs. blue) or central (small circle vs. cross overlaid over the image content) image feature as response instruction to a 'relevant feature version', in which participants responded to the image content, thus making it impossible to ignore that content. Images of chocolate-containing foods and of objects were used, and several trait and state measures were acquired to validate the obtained biases. Results revealed a robust approach bias towards food only in the relevant feature condition. Interestingly, a positive correlation with state chocolate craving during the task was found when all three conditions were combined, indicative of criterion validity of all three versions. However, no correlations were found with trait chocolate craving. Results provide a strong case for the relevant feature version of the AAT for bias measurement. They also point to several methodological avenues for future research around selective attention in the irrelevant versions and task validity regarding trait vs. state variables. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. Human Systems Engineering and Program Success - A Retrospective Content Analysis

    DTIC Science & Technology

    2016-01-01

    Data were collected from the 546 documents and entered into SPSS Statistics Version 22.0 for Windows. HSI words within the sampled documents ranged from zero to... The approach used a retrospective content analysis of documents from weapon systems acquisition programs, namely Major Defense Acquisition... (January 2016, Vol. 23 No. 1: 78-101.) The interaction between humans and the systems they use affects program success, as well as life-cycle...

  6. 75 FR 32860 - Regulatory Guidance Concerning the Preparation of Drivers' Record of Duty Status To Document...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-10

    ... motor carrier of a scanned image of the original record; the driver would retain the original while the carrier maintains the scanned electronic image along with any supporting documents. ... plans to implement a new approach for receiving and processing RODS. Its drivers would complete their...

  7. Multispectral image restoration of historical documents based on LAAMs and mathematical morphology

    NASA Astrophysics Data System (ADS)

    Lechuga-S., Edwin; Valdiviezo-N., Juan C.; Urcid, Gonzalo

    2014-09-01

    This research introduces an automatic technique designed for the digital restoration of the damaged parts of historical documents. For this purpose, an imaging spectrometer is used to acquire a set of images in the wavelength interval from 400 to 1000 nm. Assuming the presence of linearly mixed spectral pixels registered in the multispectral image, our technique uses two lattice autoassociative memories to extract the set of pure pigments composing a given document. Through a spectral unmixing analysis, our method produces fractional abundance maps indicating the distribution of each pigment in the scene. These maps are then used to locate cracks and holes in the document under study. The restoration process is performed by the application of a region-filling algorithm based on morphological dilation, followed by a color interpolation to restore the original appearance of the filled areas. This procedure has been successfully applied to the analysis and restoration of three multispectral data sets: two corresponding to artificially superimposed scripts and one acquired from a Mexican pre-Hispanic codex, whose restoration results are presented.
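
    A sketch of region filling by iterative morphological dilation using SciPy; it propagates intact pixel values inward one border ring at a time. The color-interpolation step the paper applies afterwards is omitted, and the grey-level maximum used here is a simplification of whatever dilation the authors employ:

    ```python
    import numpy as np
    from scipy import ndimage

    def fill_damaged_regions(image, damage_mask, iterations=50):
        """Fill holes by repeatedly dilating intact pixel values into the
        damaged region (a sketch of dilation-based region filling)."""
        filled = image.copy()
        mask = damage_mask.copy()
        for _ in range(iterations):
            if not mask.any():
                break
            # Max-filter propagates intact values one step into the hole.
            grown = ndimage.grey_dilation(filled, size=(3, 3))
            border = mask & ndimage.binary_dilation(~mask)
            filled[border] = grown[border]
            mask &= ~border
        return filled

    img = np.random.default_rng(0).integers(1, 256, (32, 32)).astype(float)
    hole = np.zeros_like(img, dtype=bool)
    hole[10:20, 10:20] = True
    img[hole] = 0
    print(fill_damaged_regions(img, hole)[12, 12])  # now a propagated value
    ```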

  8. [Development of an ophthalmological clinical information system for inpatient eye clinics].

    PubMed

    Kortüm, K U; Müller, M; Babenko, A; Kampik, A; Kreutzer, T C

    2015-12-01

    In times of increasing digitalization in healthcare, departments of ophthalmology are faced with the challenge of introducing electronic health records (EHR); however, specialized software for ophthalmology is not available with most major EHR systems. The aim of this project was to create specific ophthalmological user interfaces for large inpatient eye care providers within a hospital-wide EHR, and additionally to integrate ophthalmic imaging systems, scheduling, and surgical documentation. The existing EHR i.s.h.med (Siemens, Germany) was modified using the Advanced Business Application Programming (ABAP) language to create specific ophthalmological user interfaces that reproduce and, moreover, optimize the clinical workflow. A user interface for documentation of ambulatory patients with eight tabs was designed. From June 2013 to October 2014, a total of 61,551 patient contacts were documented. For surgical documentation, a separate user interface was set up. User interfaces for digital clinical orders, registration documentation, and scheduling of operations were also set up. Direct integration of ophthalmic imaging modalities could be established. An ophthalmologist-orientated EHR for outpatient and surgical documentation in inpatient clinics was created and successfully implemented. By incorporating imaging procedures, the foundation for future smart/big data analyses was laid.

  9. 10 CFR 110.103 - Acceptance of hearing documents.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... address and date of signature indicated. The signature is a representation that the document is submitted with full authority, the signer knows its contents, and that, to the best of his knowledge, the...

  10. 10 CFR 110.103 - Acceptance of hearing documents.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... address and date of signature indicated. The signature is a representation that the document is submitted with full authority, the signer knows its contents, and that, to the best of his knowledge, the...

  11. 10 CFR 110.103 - Acceptance of hearing documents.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... address and date of signature indicated. The signature is a representation that the document is submitted with full authority, the signer knows its contents, and that, to the best of his knowledge, the...

  12. 10 CFR 110.103 - Acceptance of hearing documents.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... address and date of signature indicated. The signature is a representation that the document is submitted with full authority, the signer knows its contents, and that, to the best of his knowledge, the...

  13. 10 CFR 110.103 - Acceptance of hearing documents.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... address and date of signature indicated. The signature is a representation that the document is submitted with full authority, the signer knows its contents, and that, to the best of his knowledge, the...

  14. 45 CFR 170.205 - Content exchange standards and implementation specifications for exchanging electronic health...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    .... The Healthcare Information Technology Standards Panel (HITSP) Summary Documents Using HL7 CCD... Guide for Ambulatory Healthcare Provider Reporting to Central Cancer Registries, HL7 Clinical Document...

  15. 14 CFR 302.3 - Filing of documents.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... set at the DOT Dockets Management System (DMS) internet website. (2) Such documents will be deemed to... as to the contents and style of briefs. (2) Papers may be reproduced by any duplicating process...

  16. Bibliography on contaminants and solubility of organic compounds in oxygen

    NASA Technical Reports Server (NTRS)

    Ordin, P. M. (Compiler)

    1975-01-01

    A compilation of document citations containing information on contaminants in oxygen is presented. Topics covered include contaminants and the solubility of organic compounds in oxygen, reaction characteristics of organic compounds with oxygen, and sampling and detection limits of impurities. Each citation in the data bank contains many items of information about the document, including title, author, abstract, corporate source, descriptions of figures pertinent to hazards or safety, key references, and descriptors (keywords) by which the document can be retrieved. Each citation includes an evaluation of the technical content as good/excellent, acceptable, or poor. The descriptors used to define the contents of the documents, and subsequently used in the computerized search operations, were developed for cryogenic fluid safety by experts in the cryogenics field.

  17. Commercial applications for optical data storage

    NASA Astrophysics Data System (ADS)

    Tas, Jeroen

    1991-03-01

    Optical data storage has spurred the market for document imaging systems. These systems are increasingly being used to electronically manage the processing, storage and retrieval of documents. Applications range from straightforward archives to sophisticated workflow management systems. The technology is developing rapidly and within a few years optical imaging facilities will be incorporated in most of the office information systems. This paper gives an overview of the status of the market, the applications and the trends of optical imaging systems.

  18. Documents, Practices and Policy

    ERIC Educational Resources Information Center

    Freeman, Richard; Maybin, Jo

    2011-01-01

    What are the practices of policy making? In this paper, we seek to identify and understand them by attending to one of the principal artefacts--the document--through which they are organised. We review the different ways in which researchers have understood documents and their function in public policy, endorsing a focus on content but noting that…

  19. A Comparison of State Advance Directive Documents

    ERIC Educational Resources Information Center

    Gunter-Hunt, Gail; Mahoney, Jane E.; Sieger, Carol E.

    2002-01-01

    Purpose: Advance directive (AD) documents are based on state-specific statutes and vary in terms of content. These differences can create confusion and inconsistencies resulting in a possible failure to honor the health care wishes of people who execute health care documents for one state and receive health care in another state. The purpose of…

  20. Waukesha County Technical College Budget Document, Fiscal Year 2000-2001.

    ERIC Educational Resources Information Center

    Waukesha County Technical Coll., Pewaukee, WI.

    This report presents Waukesha County Area Technical College District's (Wisconsin) fiscal year 2000-2001 budget document. It contains the following sections: table of contents; a reader's guide to the budget document; a quick reference guide; an introduction section, which contains a transmittal letter, a budget message for 2000-2001 combining…

  1. 75 FR 10182 - Approval and Promulgation of Implementation Plans; State of Iowa

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-05

    ... INFORMATION: Throughout this document "we," "us," or "our" refer to the EPA. Table of Contents I. What is being addressed in this document? II. What revisions is EPA approving? III. What action is EPA taking? IV. Statutory and Executive Order Reviews I. What is being addressed in this document? The State...

  2. Correcting geometric and photometric distortion of document images on a smartphone

    NASA Astrophysics Data System (ADS)

    Simon, Christian; Williem; Park, In Kyu

    2015-01-01

    A set of document image processing algorithms for improving the optical character recognition (OCR) capability of smartphone applications is presented. The scope of the problem covers the geometric and photometric distortion correction of document images. The proposed framework was developed to satisfy industrial requirements. It is implemented on an off-the-shelf smartphone with limited resources in terms of speed and memory. Geometric distortions, i.e., skew and perspective distortion, are corrected by sending horizontal and vertical vanishing points toward infinity in a downsampled image. Photometric distortion includes image degradation from moiré pattern noise and specular highlights. Moiré pattern noise is removed using low-pass filters with different sizes independently applied to the background and text region. The contrast of the text in a specular highlighted area is enhanced by locally enlarging the intensity difference between the background and text while the noise is suppressed. Intensive experiments indicate that the proposed methods show a consistent and robust performance on a smartphone with a runtime of less than 1 s.
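
    For the geometric part, a common concrete realization is a homography that maps the four detected page corners to a rectangle, which sends both vanishing points to infinity; the corner coordinates below are made up, and corner detection itself is assumed to happen upstream:

    ```python
    import cv2
    import numpy as np

    def rectify_document(image, corners):
        """Correct perspective distortion given four detected page corners
        (ordered TL, TR, BR, BL); a sketch, not the paper's exact pipeline."""
        (tl, tr, br, bl) = corners.astype(np.float32)
        w = int(max(np.linalg.norm(tr - tl), np.linalg.norm(br - bl)))
        h = int(max(np.linalg.norm(bl - tl), np.linalg.norm(br - tr)))
        dst = np.array([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]],
                       dtype=np.float32)
        H = cv2.getPerspectiveTransform(np.array([tl, tr, br, bl]), dst)
        return cv2.warpPerspective(image, H, (w, h))

    img = np.zeros((480, 640, 3), dtype=np.uint8)
    corners = np.array([[100, 80], [540, 60], [560, 420], [90, 400]])
    print(rectify_document(img, corners).shape)  # rectified page raster
    ```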

  3. Boost OCR accuracy using iVector based system combination approach

    NASA Astrophysics Data System (ADS)

    Peng, Xujun; Cao, Huaigu; Natarajan, Prem

    2015-01-01

    Optical character recognition (OCR) is a challenging task because most existing preprocessing approaches are sensitive to writing style, writing material, noise and image resolution. Thus, a single recognition system cannot address all factors of real document images. In this paper, we describe an approach to combining diverse recognition systems using iVector-based features, a method newly developed in the field of speaker verification. Prior to system combination, document images are preprocessed and text line images are extracted with different approaches for each system; an iVector is derived from the high-dimensional supervector of each text line and is used to predict the accuracy of OCR. We merge hypotheses from multiple recognition systems according to the overlap ratio and the predicted OCR score of the text line images. We present evaluation results on an Arabic document database where the proposed method is compared against the single best OCR system using the word error rate (WER) metric.
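
    A simplified sketch of the merging step: hypotheses whose text-line extents overlap beyond a ratio are deduplicated in favor of the higher predicted score. The data structures and the vertical-extent overlap measure are assumptions for illustration, not the paper's exact formulation:

    ```python
    def merge_hypotheses(hypotheses, min_overlap=0.5):
        """Merge text-line hypotheses from multiple OCR systems: where two
        line extents overlap enough, keep the higher predicted OCR score.
        Score prediction (e.g., from iVectors) is assumed done upstream."""
        def overlap_ratio(a, b):
            # 1-D vertical overlap of (top, bottom) extents, as a fraction
            # of the smaller line height.
            inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
            return inter / min(a[1] - a[0], b[1] - b[0])

        merged = []
        for hyp in sorted(hypotheses, key=lambda h: -h["score"]):
            if all(overlap_ratio(hyp["extent"], kept["extent"]) < min_overlap
                   for kept in merged):
                merged.append(hyp)
        return merged

    hyps = [
        {"system": "A", "extent": (100, 140), "text": "cairo", "score": 0.91},
        {"system": "B", "extent": (102, 141), "text": "ciro", "score": 0.74},
        {"system": "B", "extent": (150, 190), "text": "1994", "score": 0.88},
    ]
    print([h["text"] for h in merge_hypotheses(hyps)])  # ['cairo', '1994']
    ```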

  4. Effects of Improved Content Knowledge on Pedagogical Content Knowledge and Student Performance in Physical Education

    ERIC Educational Resources Information Center

    Iserbyt, Peter; Ward, Phillip; Li, Weidong

    2017-01-01

    Background: Pedagogical content knowledge (PCK) is an interaction of several knowledge bases upon which the teacher makes decisions about what and how to teach. To date, there are no studies in physical education directly documenting relationships between specialized content knowledge (SCK) and PCK. Such relationships have not been empirically…

  5. Identification of the Most Critical Content Knowledge Base for Middle School Science Teachers

    ERIC Educational Resources Information Center

    Saderholm, Jon C.; Tretter, Thomas R.

    2008-01-01

    Much has been said about what science content students need to learn (e.g., "Benchmarks for Science Literacy, National Science Education Standards"). Less has been said about what science content teachers need to know to teach the content students are expected to learn. This study analyzed four standards documents and assessment frameworks to…

  6. M68000 RNF text formatter user's manual

    NASA Technical Reports Server (NTRS)

    Will, R. W.; Grantham, C.

    1985-01-01

    A powerful, flexible text formatting program, RNF, is described. It is designed to automate many of the tedious elements of typing, including breaking a document into pages with titles and page numbers, formatting chapter and section headings, keeping track of page numbers for use in a table of contents, justifying lines by inserting blanks to give an even right margin, and inserting figures and footnotes at appropriate places on the page. The RNF program greatly facilitates both preparing and modifying a document because it allows you to concentrate your efforts on the content of the document instead of its appearance and because it removes the necessity of retyping text that has not changed.

  7. Documentation of Nursing Practice Using a Computerized Medical Information System

    PubMed Central

    Romano, Carol

    1981-01-01

    This paper discusses a definition of the content of the computerized nursing data base developed by the Nursing Department for the Clinical Center Medical Information System at the National Institutes of Health in Bethesda, Maryland. The author describes the theoretical framework for the content and presents a model to describe the organization of the nursing data components in relation to the process of nursing care delivery. Nursing documentation requirements of Nurse Practice Acts, American Nurses Association Standards of Practice and the Joint Commission on Accreditation of Hospitals are also addressed as they relate to this data base. The advantages and disadvantages of such an approach to computerized documentation are discussed.

  8. Video document

    NASA Astrophysics Data System (ADS)

    Davies, Bob; Lienhart, Rainer W.; Yeo, Boon-Lock

    1999-08-01

    The metaphor of film and TV permeates the design of software to support video on the PC. Simply transplanting the non-interactive, sequential experience of film to the PC fails to exploit the virtues of the new context. Video on the PC should be interactive and non-sequential. This paper experiments with a variety of tools for using video on the PC that exploit the new context of the PC. Some features are more successful than others. Applications that use these tools are explored, including primarily the home video archive but also streaming video servers on the Internet. The ability to browse, edit, abstract and index large volumes of video content such as home video and corporate video is a problem without an appropriate solution in today's market. The current tools available are complex, unfriendly video editors, requiring hours of work to prepare a short home video, far more work than a typical home user can be expected to provide. Our proposed solution treats video like a text document, providing functionality similar to a text editor. Users can browse, interact, edit and compose one or more video sequences with the same ease and convenience as handling text documents. With this level of text-like composition, we call what is normally a sequential medium a 'video document'. An important component of the proposed solution is shot detection, the ability to detect when a shot started or stopped. When combined with a spreadsheet of key frames, the result becomes a grid of pictures that can be manipulated and viewed in the same way that a spreadsheet can be edited. Multiple video documents may be viewed, joined, manipulated, and seamlessly played back. Abstracts of unedited video content can be produced automatically to create novel video content for export to other venues. Edited and raw video content can be published to the net or burned to a CD-ROM with a self-installing viewer for Windows 98 and Windows NT 4.0.
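    Shot detection, the component singled out above, is commonly approximated by thresholding the change in gray-level histograms between successive frames. The sketch below shows that baseline approach; the library calls are standard OpenCV, but the threshold and histogram size are illustrative guesses, not values from the paper.

```python
# A minimal shot-boundary sketch under the assumption that a cut shows up
# as a large jump in the gray-level histogram difference between frames.
# Requires numpy and opencv-python; the threshold is an illustrative guess.
import cv2
import numpy as np

def detect_shots(path, thresh=0.5):
    cap = cv2.VideoCapture(path)
    cuts, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None:
            # L1 distance between successive normalized histograms
            if np.abs(hist - prev_hist).sum() > thresh:
                cuts.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return cuts
```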

  9. A neotropical Miocene pollen database employing image-based search and semantic modeling.

    PubMed

    Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W; Jaramillo, Carlos; Shyu, Chi-Ren

    2014-08-01

    Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community with a computational framework for information sharing and knowledge transfer. • Mathematical methods were used to assign trait semantics (abstract morphological representations) to the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content. • Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. • Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery.
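    A minimal version of the visual-content search described above can be built from a global color histogram and a histogram-intersection similarity. The sketch below uses only numpy and Pillow; the feature is deliberately simple and is an assumption for illustration, not the system's actual descriptor.

```python
# Toy content-based retrieval: index images by a 3D color histogram and
# rank database entries by histogram intersection with the query.
import numpy as np
from PIL import Image

def color_histogram(path, bins=8):
    img = np.asarray(Image.open(path).convert("RGB"))
    hist, _ = np.histogramdd(img.reshape(-1, 3),
                             bins=(bins,) * 3, range=((0, 256),) * 3)
    hist = hist.flatten()
    return hist / hist.sum()

def retrieve(query_path, database_paths, k=5):
    q = color_histogram(query_path)
    scored = [(np.minimum(q, color_histogram(p)).sum(), p)  # intersection
              for p in database_paths]
    return [p for _, p in sorted(scored, reverse=True)[:k]]
```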

  10. Quality of nursing documentation: Paper-based health records versus electronic-based health records.

    PubMed

    Akhu-Zaheya, Laila; Al-Maaitah, Rowaida; Bany Hani, Salam

    2018-02-01

    To assess and compare the quality of paper-based and electronic-based health records. The comparison examined three criteria: content, documentation process and structure. Nursing documentation is a significant indicator of the quality of patient care delivery. It can be either paper-based or organised within the system known as the electronic health record. Nursing documentation must be completed to the highest standards, to ensure the safety and quality of healthcare services. However, the evidence is not clear on which of the two forms of documentation (paper-based versus electronic health records) is of higher quality. A retrospective, descriptive, comparative design was used to address the study's purposes. A convenience sample of patients' records, from two public hospitals, was audited using the Cat-ch-Ing audit instrument. The sample size consisted of 434 records for both paper-based health records and electronic health records from medical and surgical wards. Electronic health records were better than paper-based health records in terms of process and structure. In terms of quantity and quality of content, paper-based records were better than electronic health records. The study affirmed the poor quality of nursing documentation and the lack of nurses' knowledge and skills in the nursing process and its application in both paper-based and electronic-based systems. Both forms of documentation revealed drawbacks in terms of content, process and structure. This study provided important information, which can guide policymakers and administrators in identifying effective strategies aimed at enhancing the quality of nursing documentation. Policies and actions to ensure quality nursing documentation at the national level should focus on improving nursing knowledge, competencies and practice in the nursing process; enhancing the work environment and managing nursing workload; and strengthening the capacity building of nursing practice to improve the quality of nursing care and patients' outcomes. © 2017 John Wiley & Sons Ltd.

  11. iPhone 4s and iPhone 5s Imaging of the Eye

    PubMed Central

    Jalil, Maaz; Ferenczy, Sandor R.; Shields, Carol L.

    2017-01-01

    Background/Aims To evaluate the technical feasibility of a consumer-grade cellular iPhone camera as an ocular imaging device compared to existing ophthalmic imaging equipment for documentation purposes. Methods A comparison of iPhone 4s and 5s images was made with external facial images (macrophotography) using Nikon cameras, slit-lamp images (microphotography) using a Zeiss photo slit-lamp camera, and fundus images (fundus photography) using RetCam II. Results In an analysis of six consecutive patients with ophthalmic conditions, both iPhones achieved documentation of external findings (macrophotography) using the standard camera modality, tap to focus, and built-in flash. Both iPhones achieved documentation of anterior segment findings (microphotography) during slit-lamp examination through the oculars. Both iPhones achieved fundus imaging using the standard video modality with continuous iPhone illumination through an ophthalmic lens. In comparison to standard ophthalmic cameras, macrophotography and microphotography were excellent. In comparison to RetCam fundus photography, iPhone fundus photography revealed a smaller field and was technically more difficult to obtain, but the quality was similar to that of RetCam. Conclusions iPhone versions 4s and 5s can provide excellent ophthalmic macrophotography and microphotography and adequate fundus photography. We believe that iPhone imaging could be most useful in settings where expensive, complicated, and cumbersome imaging equipment is unavailable. PMID:28275604

  12. Shuttle Imaging Radar - Physical controls on signal penetration and subsurface scattering in the Eastern Sahara

    NASA Technical Reports Server (NTRS)

    Schaber, G. G.; Mccauley, J. F.; Breed, C. S.; Olhoeft, G. R.

    1986-01-01

    Interpretation of Shuttle Imaging Radar-A (SIR-A) images by McCauley et al. (1982) dramatically changed previous concepts of the role that fluvial processes have played over the past 10,000 to 30 million years in shaping this now extremely flat, featureless, and hyperarid landscape. In the present paper, the near-surface stratigraphy, the electrical properties of materials, and the types of radar interfaces found to be responsible for different classes of SIR-A tonal response are summarized. The dominant factors related to efficient microwave signal penetration into the sediment blanket include (1) favorable distribution of particle sizes, (2) extremely low moisture content and (3) reduced geometric scattering at the SIR-A frequency (1.3 GHz). The depth of signal penetration that results in a recorded backscatter, here called 'radar imaging depth', was documented in the field to be a maximum of 1.5 m, or 0.25 of the calculated 'skin depth', for the sediment blanket. Radar imaging depth is estimated to be between 2 and 3 m for active sand dune materials. Diverse permittivity interfaces and volume scatterers within the shallow subsurface are responsible for most of the observed backscatter not directly attributable to grazing outcrops. Calcium carbonate nodules and rhizoliths concentrated in sandy alluvium of Pleistocene age south of Safsaf oasis in south Egypt provide effective contrast in permittivity and thus act as volume scatterers that enhance SIR-A portrayal of younger inset stream channels.
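    For reference, the "skin depth" against which the radar imaging depth is compared is conventionally written, for a low-loss dielectric, as

\[
\delta_s \;\approx\; \frac{1}{\alpha} \;=\; \frac{2}{\omega \sqrt{\mu \varepsilon'}\,\tan\delta}, \qquad \tan\delta = \frac{\varepsilon''}{\varepsilon'},
\]

    where ε′ and ε″ are the real and imaginary parts of the permittivity. This is the standard low-loss expression, stated here as an assumption since the paper's exact formulation is not reproduced in the abstract. In this notation, the field observation above reads d_imaging ≈ 0.25 δ_s for the sediment blanket.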

  13. Extra dimensions: 3D in PDF documentation

    DOE PAGES

    Graf, Norman A.

    2011-01-11

    Experimental science is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. By providing support for scripting and animation, temporal data can also be easily distributed to a wide, non-technical audience. We discuss how the field of radiation imaging could benefit from incorporating full 3D information about not only the detectors, but also the results of the experimental analyses, in its electronic publications. In this article, we present examples drawn from high-energy physics, mathematics and molecular biology which take advantage of this functionality. Furthermore, we demonstrate how 3D detector elements can be documented, using either CAD drawings or other sources such as GEANT visualizations as input.

  14. Extra dimensions: 3D and time in PDF documentation

    NASA Astrophysics Data System (ADS)

    Graf, N. A.

    2011-01-01

    Experimental science is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. By providing support for scripting and animation, temporal data can also be easily distributed to a wide, non-technical audience. We discuss how the field of radiation imaging could benefit from incorporating full 3D information about not only the detectors, but also the results of the experimental analyses, in its electronic publications. In this article, we present examples drawn from high-energy physics, mathematics and molecular biology which take advantage of this functionality. We demonstrate how 3D detector elements can be documented, using either CAD drawings or other sources such as GEANT visualizations as input.

  15. Extra Dimensions: 3D and Time in PDF Documentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graf, N.A.; /SLAC

    2012-04-11

    Experimental science is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. By providing support for scripting and animation, temporal data can also be easily distributed to a wide, non-technical audience. We discuss how the field of radiation imaging could benefit from incorporating full 3D information about not only the detectors, but also the results of the experimental analyses, in its electronic publications. In this article, we present examples drawn from high-energy physics, mathematics and molecular biology which take advantage of this functionality. We demonstrate how 3D detector elements can be documented, using either CAD drawings or other sources such as GEANT visualizations as input.

  16. Combining Digital Archives Content with Serious Game Approach to Create a Gamified Learning Experience

    NASA Astrophysics Data System (ADS)

    Shih, D.-T.; Lin, C. L.; Tseng, C.-Y.

    2015-08-01

    This paper presents an interdisciplinary approach to developing a content-aware application that combines gaming with learning on specific categories of digital archives. The employment of a content-oriented game enhances the gamification and efficacy of learning in cultural education on the architecture and history of Hsinchu County, Taiwan. The gamified form of the application is used as a backbone to support and provide a strong stimulus to engage users in learning art and culture; this research is therefore implemented under the goal of "The Digital ARt/ARchitecture Project". The purpose of the abovementioned project is to develop interactive serious game approaches and applications for Hsinchu County historical archives and architectures. We present two applications, "3D AR for Hukou Old Street" and "Hsinchu County History Museum AR Tour", which take the form of augmented reality (AR). By using AR imaging techniques to blend real objects and virtual content, users can immerse themselves in virtual exhibitions of Hukou Old Street and the Hsinchu County History Museum, and learn in a ubiquitous computing environment. This paper proposes a content system that includes tools and materials used to create representations of digitized cultural archives, including historical artifacts, documents, customs, religion, and architecture. The Digital ARt/ARchitecture Project is based on the concept of the serious game and consists of three aspects: content creation, target management, and AR presentation. The project focuses on developing a proper approach to serve as an interactive game and to offer a learning opportunity for appreciating historic architecture by playing AR cards. Furthermore, the card game aims to provide multi-faceted understanding and learning experiences, helping users learn through 3D objects, hyperlinked web data, and the manipulation of learning modes, thereby effectively developing their learning of the cultural and historical archives of Hsinchu County.

  17. Deformable image registration with content mismatch: a demons variant to account for added material and surgical devices in the target image

    NASA Astrophysics Data System (ADS)

    Nithiananthan, S.; Uneri, A.; Schafer, S.; Mirota, D.; Otake, Y.; Stayman, J. W.; Zbijewski, W.; Khanna, A. J.; Reh, D. D.; Gallia, G. L.; Siewerdsen, J. H.

    2013-03-01

    Fast, accurate, deformable image registration is an important aspect of image-guided interventions. Among the factors that can confound registration is the presence of additional material in the intraoperative image - e.g., contrast bolus or a surgical implant - that was not present in the prior image. Existing deformable registration methods generally fail to account for tissue excised between image acquisitions and typically simply "move" voxels within the images with no ability to account for tissue that is removed or introduced between scans. We present a variant of the Demons algorithm to accommodate such content mismatch. The approach combines segmentation of mismatched content with deformable registration featuring an extra pseudo-spatial dimension representing a reservoir from which material can be drawn into the registered image. Previous work tested the registration method in the presence of tissue excision ("missing tissue"). The current paper tests the method in the presence of additional material in the target image and presents a general method by which either missing or additional material can be accommodated. The method was tested in phantom studies, simulations, and cadaver models in the context of intraoperative cone-beam CT with three examples of content mismatch: a variable-diameter bolus (contrast injection); surgical device (rod), and additional material (bone cement). Registration accuracy was assessed in terms of difference images and normalized cross correlation (NCC). We identify the difficulties that traditional registration algorithms encounter when faced with content mismatch and evaluate the ability of the proposed method to overcome these challenges.
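    For context, a single iteration of the classic Demons algorithm that the paper's variant extends looks roughly like the sketch below (2D case). This is the standard Thirion update with Gaussian regularization, stated as an assumption; the reservoir/extra-dimension machinery of the proposed variant is not shown.

```python
# One classic Demons step: push the moving image toward the fixed image
# along the fixed-image gradient, then smooth the displacement field.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_step(fixed, moving, u, v, sigma=2.0, eps=1e-9):
    """Update displacement field (u, v) pulling `moving` toward `fixed`."""
    yy, xx = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1]].astype(float)
    warped = map_coordinates(moving, [yy + v, xx + u], order=1, mode='nearest')
    gy, gx = np.gradient(fixed)
    diff = warped - fixed
    denom = gx**2 + gy**2 + diff**2 + eps
    u = u - diff * gx / denom
    v = v - diff * gy / denom
    # Gaussian regularization of the field, as in the classic algorithm
    return gaussian_filter(u, sigma), gaussian_filter(v, sigma)
```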

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lundergan, C. D.; Mead, P. L.

    This report is a compilation of 17 individual documents that together summarize the technical capabilities of Sandia Laboratories. Each document in this compilation contains details about a specific area of capability. Examples of application of the capability to research and development problems are provided. An eighteenth document summarizes the content of the other seventeen. Each of these documents was issued with a separate report number (SAND 74-0073A through SAND 74-0091, except -0078). (RWR)

  19. 5 CFR 293.403 - Contents of employee performance files.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... REGULATIONS PERSONNEL RECORDS Employee Performance File System Records § 293.403 Contents of employee performance files. (a) A decision on what constitutes a performance-related document within the meaning of... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Contents of employee performance files...

  20. 5 CFR 293.403 - Contents of employee performance files.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... REGULATIONS PERSONNEL RECORDS Employee Performance File System Records § 293.403 Contents of employee performance files. (a) A decision on what constitutes a performance-related document within the meaning of... 5 Administrative Personnel 1 2013-01-01 2013-01-01 false Contents of employee performance files...

  1. 5 CFR 293.403 - Contents of employee performance files.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... REGULATIONS PERSONNEL RECORDS Employee Performance File System Records § 293.403 Contents of employee performance files. (a) A decision on what constitutes a performance-related document within the meaning of... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Contents of employee performance files...

  2. 5 CFR 293.403 - Contents of employee performance files.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... REGULATIONS PERSONNEL RECORDS Employee Performance File System Records § 293.403 Contents of employee performance files. (a) A decision on what constitutes a performance-related document within the meaning of... 5 Administrative Personnel 1 2012-01-01 2012-01-01 false Contents of employee performance files...

  3. Content-Based Instruction in Higher Education Settings

    ERIC Educational Resources Information Center

    Crandall, JoAnn, Ed.; Kaufman, Dorit, Ed.

    2002-01-01

    Content-based instruction (CBI) challenges ESOL teachers to teach language through specialist content in institutional settings. This volume addresses CBI negotiation between ESOL teachers and subject specialists in higher education. Writers document and evaluate courses that support the subject discipline and meet the language needs of EFL and…

  4. 10 CFR 2.1013 - Use of the electronic docket during the proceeding.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... bi-tonal documents. (v) Electronic submissions must be generated in the appropriate PDF output format by using: (A) PDF—Formatted Text and Graphics for textual documents converted from native applications; (B) PDF—Searchable Image (Exact) for textual documents converted from scanned documents; and (C...

  5. New concept high-speed and high-resolution color scanner

    NASA Astrophysics Data System (ADS)

    Nakashima, Keisuke; Shinoda, Shin'ichi; Konishi, Yoshiharu; Sugiyama, Kenji; Hori, Tetsuya

    2003-05-01

    We have developed a new concept high-speed, high-resolution color scanner (Blinkscan) using digital camera technology. With advanced sub-pixel image processing, it captures image data of approximately 12 million pixels. This high-resolution imaging capability supports various uses such as OCR, color document reading, and service as a document camera. The scan time is only about 3 seconds for a letter-size sheet. Blinkscan scans documents placed "face up" on its scan stage, without any special illumination. With Blinkscan, a high-resolution color document can be input to a PC easily and at high speed, so a paperless system can be built with little effort. The unit is small and occupies little area, so it can be placed on an individual desk. Blinkscan offers the usability of a digital camera and the accuracy of a flatbed scanner with high-speed processing. Several hundred Blinkscan units are now shipping, mainly for receptionist operations at banks and securities firms. We describe the high-speed, high-resolution architecture of Blinkscan, compare its operation time against conventional image capture devices to make its advantages clear, and evaluate image quality under a variety of conditions, such as geometric distortion and non-uniformity of brightness.

  6. Remote sensing assessment of oil lakes and oil-polluted surfaces at the Greater Burgan oil field, Kuwait

    NASA Astrophysics Data System (ADS)

    Kwarteng, Andy Yaw

    A heinous catastrophe imposed on Kuwait's desert environment during the 1990 to 1991 Arabian Gulf War was the formation of oil lakes and oil-contaminated surfaces. Presently, the affected areas consist of oil lakes, thick light and disintegrated tarmats, black soil and vegetation. In this study, Landsat TM, Spot, colour aerial photographs and IRS-1D digital image data acquired between 1989 and 1998 were used to monitor the spatial and temporal changes of the oil lakes and polluted surfaces at the Greater Burgan oil field. The use of multisensor datasets provided the opportunity to observe the polluted areas in different wavelengths, look angles and resolutions. The images were digitally enhanced to optimize the visual outlook and improve the information content. The data documented the gradual disappearance of smaller oil lakes and soot/black soil from the surface with time. Even though some of the contaminants were obscured by sand and vegetation and not readily observed on the surface or from satellite images, the harmful chemicals still remain in the soil. Some of the contaminated areas displayed a remarkable ability to support vegetation growth during the higher than average rainfall that occurred between 1992 and 1998. The total area of oil lakes calculated from an IRS-1D panchromatic image acquired on 16 February 1998, using supervised classification applied separately to different parts, was 24.13 km².

  7. NASA Glenn Propulsion Systems Lab: 2012 Inaugural Ice Crystal Cloud Calibration Procedure and Results

    NASA Technical Reports Server (NTRS)

    VanZante, Judith F.; Rosine, Bryan M.

    2014-01-01

    The inaugural calibration of the ice crystal and supercooled liquid water clouds generated in NASA Glenn's engine altitude test facility, the Propulsion Systems Lab (PSL) is reported herein. This calibration was in support of the inaugural engine ice crystal validation test. During the Fall of 2012 calibration effort, cloud uniformity was documented via an icing grid, laser sheet and cloud tomography. Water content was measured via multi-wire and robust probes, and particle sizes were measured with a Cloud Droplet Probe and Cloud Imaging Probe. The environmental conditions ranged from 5,000 to 35,000 ft, Mach 0.15 to 0.55, temperature from +50 to -35 F and relative humidities from less than 1 percent to 75 percent in the plenum.

  8. [English translation of the title of ancient Chinese medical books and documents].

    PubMed

    Zeng, Fang; Shao, Xin; Zhang, Pei-Hai

    2008-11-01

    The title of a book is, generally, a high concentration of the writer's intention and the theme of its content. Translating the title of an ancient Chinese medical book or document accurately and plainly is meaningful for exhibiting the style of the book and for promoting the international communication of TCM. The principle to be followed is to choose translated terms accurately, so as to reveal the theme of the content and express the cultural connotation of the book.

  9. Texture for script identification.

    PubMed

    Busch, Andrew; Boles, Wageeh W; Sridharan, Sridha

    2005-11-01

    The problem of determining the script and language of a document image has a number of important applications in the field of document analysis, such as indexing and sorting of large collections of such images, or as a precursor to optical character recognition (OCR). In this paper, we investigate the use of texture as a tool for determining the script of a document image, based on the observation that text has a distinct visual texture. An experimental evaluation of a number of commonly used texture features is conducted on a newly created script database, providing a qualitative measure of which features are most appropriate for this task. Strategies for improving classification results in situations with limited training data and multiple font types are also proposed.
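    As a sketch of the general approach, a text block can be summarized by gray-level co-occurrence (GLCM) statistics and handed to any classifier. The feature set below is illustrative only (the paper evaluates several commonly used texture features); the calls are from scikit-image.

```python
# Hedged sketch: describe a grayscale text region by GLCM texture features.
# The distances, angles, and property choices are illustrative assumptions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(block):
    """block: 2D uint8 grayscale image of a text region."""
    glcm = graycomatrix(block, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    feats = [graycoprops(glcm, p).ravel()
             for p in ("contrast", "homogeneity", "energy")]
    return np.concatenate(feats)   # feed to any classifier, e.g. an SVM
```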

  10. Imaging Water in Deformed Quartzites: Examples from Caledonian and Himalayan Shear Zones

    NASA Astrophysics Data System (ADS)

    Kronenberg, Andreas; Ashley, Kyle; Hasnan, Hasnor; Holyoke, Caleb; Jezek, Lynna; Law, Richard; Thomas, Jay

    2016-04-01

    Infrared (IR) measurements of OH absorption bands due to water in deformed quartz grains have been collected from major shear zones of the Caledonian and Himalayan orogens. Mean intragranular water contents were determined from the magnitude of the broad OH absorption at 3400 cm⁻¹ as a function of structural position, averaging over multiple grains, using an IR microscope coupled to a conventional FTIR spectrometer with apertures of 50-100 μm. Images of water content were generated by scanning areas of up to 4 mm² of individual specimens with a 10 μm synchrotron-generated IR beam and contouring OH absorptions. Water contents vary with structural level relative to the central cores of shear zones and they vary at the grain scale corresponding to deformation and recrystallization microstructures. Gradients in quartz water content expressed over structural distances of 10 to 400 m from the centers of the Moine Thrust (Stack of Glencoul, NW Scotland), the Main Central Thrust (Sutlej valley of NW India), and the South Tibetan Detachment System (Rongbuk valley north of Mount Everest) indicate that these shear zones functioned as fluid conduits. However, the gradients differ substantially: in some cases, enhanced fluid fluxes appear to have increased quartz water contents, while in others, they served to decrease water contents. Water contents of Moine thrust quartzites appear to have been reduced during shear at greenschist facies by processes of regime II BLG/SGR dislocation creep. Intragranular water contents of the protolith 70 m below the central fault core are large (4078 ± 247 ppm, H/10⁶ Si) while mylonites within 5 mm of the Moine hanging wall rocks have water contents of only 1570 (± 229) ppm. Water contents between these extremes vary systematically with structural level and correlate inversely with the extent of dynamic recrystallization (20 to 100%). Quartz intragranular water contents of Himalayan thrust and low-angle detachment zones sheared at upper amphibolite conditions by regime III GBM creep show varying trends with structural level. Water contents increase toward the Lhotse detachment of the Rongbuk valley, reaching 11,350 (± 1095) ppm, whereas they decrease toward the Main Central Thrust exposed in the western part of the Sutlej valley to values as low as 170 (± 25) ppm. Maps of intragranular water content correspond to populations of fluid inclusions, which depend on the history of deformation and dynamic recrystallization. Increases in water content require the introduction of secondary fluid inclusions, generally by brittle microcracking followed by crack healing and processes of inclusion redistribution documented in milky quartz experiments. Decreases in water content result from dynamic recrystallization, as mobile grain boundaries sweep through wet porphyroclasts, leaving behind dry recrystallized grains. Intragranular water contents throughout greenschist mylonites of the Moine thrust are comparable to those of quartz weakened by water in laboratory experiments. However, water contents of upper amphibolite mylonites of the Main Central Thrust are far below those required for water weakening at experimental strain rates and offer challenges to our understanding of quartz rheology.
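    Water contents like those quoted above are typically derived from the infrared absorbance through a Beer–Lambert calibration of the integrated 3400 cm⁻¹ band. A generic form, stated as an assumption since the paper's specific calibration is not given here, is

\[
c_{\mathrm{OH}} \;=\; \frac{1}{\varepsilon\, t} \int A(\nu)\, d\nu,
\]

    where A(ν) is absorbance, t the sample thickness, and ε an integral molar absorption coefficient calibrated for quartz.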

  11. The MVACS Surface Stereo Imager on Mars Polar Lander

    NASA Astrophysics Data System (ADS)

    Smith, P. H.; Reynolds, R.; Weinberg, J.; Friedman, T.; Lemmon, M. T.; Tanner, R.; Reid, R. J.; Marcialis, R. L.; Bos, B. J.; Oquest, C.; Keller, H. U.; Markiewicz, W. J.; Kramm, R.; Gliem, F.; Rueffer, P.

    2001-08-01

    The Surface Stereo Imager (SSI), a stereoscopic, multispectral camera on the Mars Polar Lander, is described in terms of its capabilities for studying the Martian polar environment. The camera's two eyes, separated by 15.0 cm, provide the camera with range-finding ability. Each eye illuminates half of a single CCD detector with a field of view of 13.8° high by 14.3° wide and has 12 selectable filters between 440 and 1000 nm. The f/18 optics have a large depth of field, and no focusing mechanism is required; a mechanical shutter is avoided by using the frame transfer capability of the 528 × 512 CCD. The resolving power of the camera, 0.975 mrad/pixel, is the same as that of the Imager for Mars Pathfinder camera, of which it is nearly an exact copy. Specially designed targets are positioned on the Lander; they provide information on the magnetic properties of wind-blown dust, and radiometric standards for calibration. Several experiments beyond the requisite color panorama are described in detail: contour mapping of the local terrain, multispectral imaging of interesting features (possibly with ice or frost in shaded spots) to study local mineralogy, and atmospheric imaging to constrain the properties of the haze and clouds. Eight low-transmission filters are included for imaging the Sun directly at multiple wavelengths to give SSI the ability to measure dust opacity and potentially the water vapor content. This paper is intended to document the functionality and calibration of the SSI as flown on the failed lander.
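    The range-finding capability follows from standard stereo triangulation. With baseline B = 15.0 cm and the stated pixel scale θ ≈ 0.975 mrad/pixel, a feature with disparity d pixels lies at approximate range

\[
Z \;\approx\; \frac{B}{d\,\theta} \;=\; \frac{0.15\ \mathrm{m}}{d \times 9.75 \times 10^{-4}} \;\approx\; \frac{154\ \mathrm{m}}{d},
\]

    so a one-pixel disparity corresponds to roughly 150 m. This small-angle relation and the arithmetic are supplied here for illustration; the paper itself documents the camera's calibration in detail.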

  12. Near-infrared imaging of water in human hair.

    PubMed

    Egawa, Mariko; Hagihara, Motofumi; Yanai, Motohiro

    2013-02-01

    The water content of hair can be evaluated by weighing, the Karl Fischer method, and from electrical properties. However, these methods cannot be used to study the distribution of water in the hair. Imaging techniques are required for this purpose. In this study, a highly sensitive near-infrared (NIR) imaging system was developed for evaluating water in human hair. The results obtained from NIR imaging and conventional methods were compared. An extended indium-gallium-arsenide NIR camera (detection range: 1100-2200 nm) and diffuse illumination unit developed in our laboratory were used to obtain a NIR image of hair. A water image was obtained using a 1950-nm interference filter and polarization filter. Changes in the hair water content with relative humidity (20-95% RH) and after immersion in a 7% (w/w) sorbitol solution were measured using the NIR camera and an insulation resistance tester. The changes in the water content after treatment with two types of commercially available shampoo were also measured using the NIR camera. As the water content increased with changes in the relative humidity, the brightness of the water image decreased and the insulation resistance decreased. The brightness in the NIR image of hair treated with sorbitol solution was lower than that in the image of hair treated with water. This shows the sorbitol-treated hair contains more water than water-treated hair. The sorbitol-treated hair had a lower resistance after treatment than before, which also shows that sorbitol treatment increases the water content. With this system, we could detect a difference in the moisturizing effect between two commercially available shampoos. The highly sensitive imaging system could be used to study water in human hair. Changes in the water content of hair depended on the relative humidity and treatment with moisturizer. The results obtained using the NIR imaging system were similar to those obtained using a conventional method. Our system could detect differences in the moisturizing effects of two commercially available shampoos. © 2012 John Wiley & Sons A/S.

  13. Current approaches and future role of high content imaging in safety sciences and drug discovery.

    PubMed

    van Vliet, Erwin; Daneshian, Mardas; Beilmann, Mario; Davies, Anthony; Fava, Eugenio; Fleck, Roland; Julé, Yvon; Kansy, Manfred; Kustermann, Stefan; Macko, Peter; Mundy, William R; Roth, Adrian; Shah, Imran; Uteng, Marianne; van de Water, Bob; Hartung, Thomas; Leist, Marcel

    2014-01-01

    High content imaging combines automated microscopy with image analysis approaches to simultaneously quantify multiple phenotypic and/or functional parameters in biological systems. The technology has become an important tool in the fields of safety sciences and drug discovery, because it can be used for mode-of-action identification, determination of hazard potency and the discovery of toxicity targets and biomarkers. In contrast to conventional biochemical endpoints, high content imaging provides insight into the spatial distribution and dynamics of responses in biological systems. This allows the identification of signaling pathways underlying cell defense, adaptation, toxicity and death. Therefore, high content imaging is considered a promising technology to address the challenges for the "Toxicity testing in the 21st century" approach. Currently, high content imaging technologies are frequently applied in academia for mechanistic toxicity studies and in pharmaceutical industry for the ranking and selection of lead drug compounds or to identify/confirm mechanisms underlying effects observed in vivo. A recent workshop gathered scientists working on high content imaging in academia, pharmaceutical industry and regulatory bodies with the objective to compile the state-of-the-art of the technology in the different institutions. Together they defined technical and methodological gaps, proposed quality control measures and performance standards, highlighted cell sources and new readouts and discussed future requirements for regulatory implementation. This review summarizes the discussion, proposed solutions and recommendations of the specialists contributing to the workshop.

  15. 48 CFR 1506.303-2 - Content.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 6 2012-10-01 2012-10-01 false Content. 1506.303-2 Section 1506.303-2 Federal Acquisition Regulations System ENVIRONMENTAL PROTECTION AGENCY ACQUISITION PLANNING COMPETITION REQUIREMENTS Other Than Full and Open Competition 1506.303-2 Content. The documentation requirements in this section apply only to...

  16. 48 CFR 1506.303-2 - Content.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 6 2013-10-01 2013-10-01 false Content. 1506.303-2 Section 1506.303-2 Federal Acquisition Regulations System ENVIRONMENTAL PROTECTION AGENCY ACQUISITION PLANNING COMPETITION REQUIREMENTS Other Than Full and Open Competition 1506.303-2 Content. The documentation requirements in this section apply only to...

  17. 48 CFR 1506.303-2 - Content.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 6 2014-10-01 2014-10-01 false Content. 1506.303-2 Section 1506.303-2 Federal Acquisition Regulations System ENVIRONMENTAL PROTECTION AGENCY ACQUISITION PLANNING COMPETITION REQUIREMENTS Other Than Full and Open Competition 1506.303-2 Content. The documentation requirements in this section apply only to...

  18. 48 CFR 1506.303-2 - Content.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Content. 1506.303-2 Section 1506.303-2 Federal Acquisition Regulations System ENVIRONMENTAL PROTECTION AGENCY ACQUISITION PLANNING COMPETITION REQUIREMENTS Other Than Full and Open Competition 1506.303-2 Content. The documentation requirements in this section apply only to...

  19. 43 CFR 1862.1 - Contents.

    Code of Federal Regulations, 2010 CFR

    1997-10-01

    ... 43 Public Lands: Interior 2 1997-10-01 1997-10-01 false Contents. 1862.1 Section 1862.1 GENERAL MANAGEMENT (1000) CONVEYANCES, DISCLAIMERS AND CORRECTION DOCUMENTS Patent Preparation and Issuance § 1862.1 Contents. (a) Patents for lands entered or located under general laws can be issued only in the name of the...

  20. 48 CFR 1506.303-2 - Content.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 6 2011-10-01 2011-10-01 false Content. 1506.303-2 Section 1506.303-2 Federal Acquisition Regulations System ENVIRONMENTAL PROTECTION AGENCY ACQUISITION PLANNING COMPETITION REQUIREMENTS Other Than Full and Open Competition 1506.303-2 Content. The documentation requirements in this section apply only to...

  1. Undergraduate Professors' Pedagogical Content Knowledge: The Case of "Amount of Substance"

    ERIC Educational Resources Information Center

    Padilla, Kira; Ponce-de-Leon, Ana Maria; Rembado, Florencia Mabel; Garritz, Andoni

    2008-01-01

    This paper documents the pedagogical content knowledge (PCK) of four university professors in General Chemistry for the topic "amount of substance"; a fundamental quantity of the International System of Units (SI). The research method involved the development of a Content Representation and the application of Mortimer's Conceptual…

  2. 49 CFR 659.23 - System security plan: contents.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 7 2010-10-01 2010-10-01 false System security plan: contents. 659.23 Section 659... State Oversight Agency § 659.23 System security plan: contents. The system security plan must, at a... system security plan; and (e) Document the rail transit agency's process for making its system security...

  3. 46 CFR 298.30 - Nature and content of Obligations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 8 2011-10-01 2011-10-01 false Nature and content of Obligations. 298.30 Section 298.30 Shipping MARITIME ADMINISTRATION, DEPARTMENT OF TRANSPORTATION VESSEL FINANCING ASSISTANCE OBLIGATION GUARANTEES Documentation § 298.30 Nature and content of Obligations. (a) Single page. An Obligation, in the...

  4. 46 CFR 298.30 - Nature and content of Obligations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 8 2010-10-01 2010-10-01 false Nature and content of Obligations. 298.30 Section 298.30 Shipping MARITIME ADMINISTRATION, DEPARTMENT OF TRANSPORTATION VESSEL FINANCING ASSISTANCE OBLIGATION GUARANTEES Documentation § 298.30 Nature and content of Obligations. (a) Single page. An Obligation, in the...

  5. 46 CFR 298.30 - Nature and content of Obligations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 8 2012-10-01 2012-10-01 false Nature and content of Obligations. 298.30 Section 298.30 Shipping MARITIME ADMINISTRATION, DEPARTMENT OF TRANSPORTATION VESSEL FINANCING ASSISTANCE OBLIGATION GUARANTEES Documentation § 298.30 Nature and content of Obligations. (a) Single page. An Obligation, in the...

  6. 46 CFR 298.30 - Nature and content of Obligations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 8 2014-10-01 2014-10-01 false Nature and content of Obligations. 298.30 Section 298.30 Shipping MARITIME ADMINISTRATION, DEPARTMENT OF TRANSPORTATION VESSEL FINANCING ASSISTANCE OBLIGATION GUARANTEES Documentation § 298.30 Nature and content of Obligations. (a) Single page. An Obligation, in the...

  7. 46 CFR 298.30 - Nature and content of Obligations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 8 2013-10-01 2013-10-01 false Nature and content of Obligations. 298.30 Section 298.30 Shipping MARITIME ADMINISTRATION, DEPARTMENT OF TRANSPORTATION VESSEL FINANCING ASSISTANCE OBLIGATION GUARANTEES Documentation § 298.30 Nature and content of Obligations. (a) Single page. An Obligation, in the...

  8. Detecting mineral content in turbid medium using nonlinear Raman imaging: feasibility study

    PubMed Central

    Arora, Rajan; Petrov, Georgi I.; Noojin, Gary D.; Thomas, Patrick A.; Denton, Michael L.; Rockwell, Benjamin A.; Thomas, Robert J.; Yakovlev, Vladislav V.

    2012-01-01

    Osteoporosis is a bone disease characterized by reduced mineral content with resulting changes in bone architecture, which in turn increases the risk of bone fracture. Raman spectroscopy has an intrinsic sensitivity to the chemical content of the bone, but its application to studying bones in vivo is limited due to strong optical scattering in tissue. It has been proposed that Raman excitation with photoacoustic detection can successfully address the problem of chemically specific imaging in deep tissue. In this report, the feasibility of photoacoustic imaging for detecting mineral content is evaluated. PMID:22337734

  9. Method for indexing and retrieving manufacturing-specific digital imagery based on image content

    DOEpatents

    Ferrell, Regina K.; Karnowski, Thomas P.; Tobin, Jr., Kenneth W.

    2004-06-15

    A method for indexing and retrieving manufacturing-specific digital images based on image content comprises three steps. First, at least one feature vector can be extracted from a manufacturing-specific digital image stored in an image database. In particular, each extracted feature vector corresponds to a particular characteristic of the manufacturing-specific digital image, for instance, a digital image modality and overall characteristic, a substrate/background characteristic, and an anomaly/defect characteristic. Notably, the extracting step includes generating a defect mask using a detection process. Second, using an unsupervised clustering method, each extracted feature vector can be indexed in a hierarchical search tree. Third, a manufacturing-specific digital image associated with a feature vector stored in the hierarchical search tree can be retrieved, wherein the manufacturing-specific digital image has image content comparably related to the image content of the query image. More particularly, the retrieving step can include two data reductions, the first performed based upon a query vector extracted from a query image. Subsequently, a user can select relevant images resulting from the first data reduction. From the selection, a prototype vector can be calculated, from which a second-level data reduction can be performed. The second-level data reduction can result in a subset of feature vectors comparable to the prototype vector, and further comparable to the query vector. An additional fourth step can include managing the hierarchical search tree by substituting a vector average for several redundant feature vectors encapsulated by nodes in the hierarchical search tree.
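    The indexing and query steps named in the claims can be sketched generically: cluster the feature vectors recursively to build the tree, then descend toward the nearest cluster center at query time. The sketch below uses scikit-learn's KMeans; the branching factor, leaf size, and all names are illustrative assumptions, not the patent's specification.

```python
# Generic hierarchical index over feature vectors via recursive k-means.
import numpy as np
from sklearn.cluster import KMeans

def build_tree(vectors, branch=4, leaf_size=16, depth=0, max_depth=6):
    if len(vectors) <= leaf_size or depth >= max_depth:
        return {"leaf": True, "vectors": vectors}
    km = KMeans(n_clusters=branch, n_init=4).fit(vectors)
    children = [build_tree(vectors[km.labels_ == c],
                           branch, leaf_size, depth + 1, max_depth)
                for c in range(branch)]
    return {"leaf": False, "centers": km.cluster_centers_,
            "children": children}

def query(tree, q, k=5):
    node = tree
    while not node["leaf"]:
        # descend into the child whose center is nearest the query vector
        i = np.argmin(np.linalg.norm(node["centers"] - q, axis=1))
        node = node["children"][i]
    d = np.linalg.norm(node["vectors"] - q, axis=1)
    return node["vectors"][np.argsort(d)[:k]]
```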

  10. Attention to local and global levels of hierarchical Navon figures affects rapid scene categorization.

    PubMed

    Brand, John; Johnson, Aaron P

    2014-01-01

    In four experiments, we investigated how attention to local and global levels of hierarchical Navon figures affected the selection of diagnostic spatial scale information used in scene categorization. We explored this issue by asking observers to classify hybrid images (i.e., images that contain the low spatial frequency (LSF) content of one image and the high spatial frequency (HSF) content of a second image) immediately following global and local Navon tasks. Hybrid images can be classified according to either their LSF or HSF content, making them ideal for investigating diagnostic spatial scale preference. Although observers were sensitive to both spatial scales (Experiment 1), they overwhelmingly preferred to classify hybrids based on LSF content (Experiment 2). In Experiment 3, we demonstrated that LSF-based hybrid categorization was faster following global Navon tasks, suggesting that the LSF processing associated with global Navon tasks primed the selection of LSFs in hybrid images. Experiment 4 examined this hypothesis by replicating Experiment 3 while suppressing the LSF information in the Navon letters by contrast balancing the stimuli. As in Experiment 3, observers preferred to classify hybrids based on LSF content; in contrast, however, LSF-based hybrid categorization was slower following global than local Navon tasks.
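    Hybrid images of the kind used here are straightforward to construct: low-pass filter one image and add the high-pass residual of another. A minimal sketch, with an illustrative Gaussian cutoff:

```python
# Build a hybrid image: LSF content from img_lsf, HSF content from img_hsf.
# The cutoff sigma is an illustrative choice, not the study's parameter.
from scipy.ndimage import gaussian_filter

def hybrid(img_lsf, img_hsf, sigma=8.0):
    """Both inputs: 2D float arrays of the same shape."""
    low = gaussian_filter(img_lsf, sigma)             # keep LSF content
    high = img_hsf - gaussian_filter(img_hsf, sigma)  # keep HSF content
    return low + high
```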

  11. Attention to local and global levels of hierarchical Navon figures affects rapid scene categorization

    PubMed Central

    Brand, John; Johnson, Aaron P.

    2014-01-01

    In four experiments, we investigated how attention to local and global levels of hierarchical Navon figures affected the selection of diagnostic spatial scale information used in scene categorization. We explored this issue by asking observers to classify hybrid images (i.e., images that contain the low spatial frequency (LSF) content of one image and the high spatial frequency (HSF) content of a second image) immediately following global and local Navon tasks. Hybrid images can be classified according to either their LSF or HSF content, making them ideal for investigating diagnostic spatial scale preference. Although observers were sensitive to both spatial scales (Experiment 1), they overwhelmingly preferred to classify hybrids based on LSF content (Experiment 2). In Experiment 3, we demonstrated that LSF-based hybrid categorization was faster following global Navon tasks, suggesting that the LSF processing associated with global Navon tasks primed the selection of LSFs in hybrid images. Experiment 4 examined this hypothesis by replicating Experiment 3 while suppressing the LSF information in the Navon letters by contrast balancing the stimuli. As in Experiment 3, observers preferred to classify hybrids based on LSF content; in contrast, however, LSF-based hybrid categorization was slower following global than local Navon tasks. PMID:25520675

  12. Practice-Based Measures of Elementary Science Teachers' Content Knowledge for Teaching: Initial Item Development and Validity Evidence. Research Report. ETS RR-17-43

    ERIC Educational Resources Information Center

    Mikeska, Jamie N.; Phelps, Geoffrey; Croft, Andrew J.

    2017-01-01

    This report describes efforts by a group of science teachers, teacher educators, researchers, and content specialists to conceptualize, develop, and pilot practice-based assessment items designed to measure elementary science teachers' content knowledge for teaching (CKT). The report documents the framework used to specify the content-specific…

  13. 17 CFR 4.1 - Requirements as to form.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... table of contents is required, the electronic document must either include page numbers in the text or... as to form. (a) Each document distributed pursuant to this part 4 must be: (1) Clear and legible; (2...” disclosed under this part 4 must be displayed in capital letters and in boldface type. (c) Where a document...

  14. 17 CFR 4.1 - Requirements as to form.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... table of contents is required, the electronic document must either include page numbers in the text or... as to form. (a) Each document distributed pursuant to this part 4 must be: (1) Clear and legible; (2...” disclosed under this part 4 must be displayed in capital letters and in boldface type. (c) Where a document...

  15. 17 CFR 4.1 - Requirements as to form.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... table of contents is required, the electronic document must either include page numbers in the text or... as to form. (a) Each document distributed pursuant to this part 4 must be: (1) Clear and legible; (2...” disclosed under this part 4 must be displayed in capital letters and in boldface type. (c) Where a document...

  16. 17 CFR 4.1 - Requirements as to form.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... table of contents is required, the electronic document must either include page numbers in the text or... as to form. (a) Each document distributed pursuant to this part 4 must be: (1) Clear and legible; (2...” disclosed under this part 4 must be displayed in capital letters and in boldface type. (c) Where a document...

  17. 17 CFR 4.1 - Requirements as to form.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... table of contents is required, the electronic document must either include page numbers in the text or... as to form. (a) Each document distributed pursuant to this part 4 must be: (1) Clear and legible; (2...” disclosed under this part 4 must be displayed in capital letters and in boldface type. (c) Where a document...

  18. Messages discriminated from the media about illicit drugs.

    PubMed

    Patterson, S J

    1994-01-01

    The electronic media have been an instrumental tool in the most recent efforts to address the issue of illicit drug abuse in the United States. Messages about illicit drugs appear in three places in the media: advertising content, news content, and entertainment content. Many studies have documented the amount and types of messages that appear on the electronic media, but few have asked the audience how they interpret these messages. The purpose of this study is to investigate how much and what type of information college students receive from the media about drugs. Interviews were conducted with 228 students using the message discrimination protocol. The messages were then content analyzed into theme areas. Results indicate the majority of messages discriminated from advertising content were fear appeals; that the majority of messages discriminated from news content documented the enforcement efforts in the war on drugs; and that messages about drugs in entertainment content were more likely to provide clear accurate information about drugs than the other two content sources. The results are discussed in terms of the audience receiving fear and fight messages from the electronic media rather than clear, accurate information necessary to make informed decisions about drugs.

  19. Summary of Expansions, Updates, and Results in GREET® 2016 Suite of Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    2016-10-01

    This report documents the technical content of the expansions and updates in Argonne National Laboratory’s GREET® 2016 release and provides references and links to key documents related to these expansions and updates.

  20. Exploring Models and Data for Remote Sensing Image Caption Generation

    NASA Astrophysics Data System (ADS)

    Lu, Xiaoqiang; Wang, Binqiang; Zheng, Xiangtao; Li, Xuelong

    2018-04-01

    Inspired by recent developments in artificial satellites, remote sensing images have attracted extensive attention. Recently, noticeable progress has been made in scene classification and target detection. However, it is still not clear how to describe remote sensing image content with accurate and concise sentences. In this paper, we investigate how to describe remote sensing images with accurate and flexible sentences. First, some annotation instructions are presented to better describe the remote sensing images, considering their special characteristics. Second, in order to exhaustively exploit the contents of remote sensing images, a large-scale aerial image data set is constructed for remote sensing image captioning. Finally, a comprehensive review is presented on the proposed data set to fully advance the task of remote sensing image captioning. Extensive experiments on the proposed data set demonstrate that the content of a remote sensing image can be completely described by generating language descriptions. The data set is available at https://github.com/201528014227051/RSICD_optimal
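    Captioning models for this task are typically encoder-decoder networks: an image encoder feeds a recurrent language decoder. The schematic sketch below (PyTorch) shows the shape of such a model; the architecture and sizes are illustrative assumptions, not the models evaluated in the paper.

```python
# Schematic encoder-decoder captioner: a toy CNN encodes the image into an
# embedding used as the first "token" of an LSTM decoder over the caption.
import torch
import torch.nn as nn

class Captioner(nn.Module):
    def __init__(self, vocab_size, embed=256, hidden=512):
        super().__init__()
        self.cnn = nn.Sequential(               # toy image encoder
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, embed))
        self.embed = nn.Embedding(vocab_size, embed)
        self.rnn = nn.LSTM(embed, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, images, captions):
        feats = self.cnn(images).unsqueeze(1)   # (B, 1, embed)
        words = self.embed(captions)            # (B, T, embed)
        seq = torch.cat([feats, words], dim=1)  # image as first token
        h, _ = self.rnn(seq)
        return self.out(h)                      # next-word logits
```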

  1. XML DTD and Schemas for HDF-EOS

    NASA Technical Reports Server (NTRS)

    Ullman, Richard; Yang, Jingli

    2008-01-01

    An Extensible Markup Language (XML) document type definition (DTD) standard for the structure and contents of HDF-EOS files, and an equivalent standard in the form of schemas, have been developed.

  2. The Virtual Seismic Atlas Project: sharing the interpretation of seismic data

    NASA Astrophysics Data System (ADS)

    Butler, R.; Mortimer, E.; McCaffrey, B.; Stuart, G.; Sizer, M.; Clayton, S.

    2007-12-01

    Through the activities of academic research programs, national institutions and corporations, especially oil and gas companies, there is a substantial volume of seismic reflection data. Although the majority is proprietary and confidential, there are significant volumes of data that are potentially within the public domain and available for research. Yet the community is poorly connected to these data and consequently geological and other research using seismic reflection data is limited to very few groups of researchers. This is about to change. The Virtual Seismic Atlas (VSA) is generating an independent, free-to-use, community based internet resource that captures and shares the geological interpretation of seismic data globally. Images and associated documents are explicitly indexed using not only existing survey and geographical data but also on the geology they portray. By using "Guided Navigation" to search, discover and retrieve images, users are exposed to arrays of geological analogues that provide novel insights and opportunities for research and education. The VSA goes live, with evolving content and functionality, through 2008. There are opportunities for designed integration with other global data programs in the earth sciences.

  3. Fair Balance and Adequate Provision in Direct-to-Consumer Prescription Drug Online Banner Advertisements: A Content Analysis.

    PubMed

    Adams, Crystal

    2016-02-18

    The current direct-to-consumer advertising (DTCA) guidelines were developed with print, television, and radio media in mind, and there are no specific guidelines for online banner advertisements. This study evaluates how well Internet banner ads comply with existing Food and Drug Administration (FDA) guidelines for DTCA in other media. A content analysis was performed of 68 banner advertisements. A coding sheet was developed based on (1) FDA guidance documents for consumer-directed prescription drug advertisements and (2) previous DTCA content analyses. Specifically, the presence of a brief summary detailing the drug's risks and side effects or of a "major statement" identifying the drug's major risks, and the number and type of provisions made available to consumers for comprehensive information about the drug, were coded. In addition, the criterion of "fair balance," the FDA's requirement that prescription drug ads balance information relating to the drug's risks with information relating to its benefits, was measured by counting the benefit and risk facts identified in the ads and by examining the presentation of risk and benefit information. Every ad in the sample included a brief summary of risk information and at least one form of adequate provision, as required by the FDA for broadcast ads that do not give audiences a brief summary of a drug's risks. No ads included a major statement. There were approximately 7.18 risk facts for every benefit fact. Most of the risks (98.85%, 1292/1307) were presented in the scroll portion of the ad, whereas most of the benefits (66.5%, 121/182) were presented in the main part of the ad. Out of 1307 risk facts, 1292 were qualitative and 15 were quantitative. Out of 182 benefit facts, 181 were qualitative and 1 was quantitative. The majority of ads showed neutral images during the disclosure of benefit and risk facts. Only 9% (6/68) of the ads displayed positive images, and none displayed negative images, when presenting risk facts. When benefit facts were being presented, 7% (5/68) showed only positive images. No ads showed negative images when benefit facts were being presented. In the face of ambiguous regulatory guidelines for online banner promotion, drug companies appear to make an attempt to adapt to regulatory guidelines designed for traditional media. However, banner ads use various presentation techniques to present the advertised drug in the best possible light. The FDA should formalize requirements that drug companies provide a brief summary and include multiple forms of adequate provision in banner ads.

  4. Informatics in radiology: A prototype Web-based reporting system for onsite-offsite clinician communication.

    PubMed

    Arnold, Corey W; Bui, Alex A T; Morioka, Craig; El-Saden, Suzie; Kangarloo, Hooshang

    2007-01-01

    The communication of imaging findings to a referring physician is an important role of the radiologist. However, communication between onsite and offsite physicians is a time-consuming process that can obstruct work flow and frequently involves no exchange of visual information, which is especially problematic given the importance of radiologic images for diagnosis and treatment. A prototype World Wide Web-based image documentation and reporting system was developed to support a "communication loop" based on the concept of a classic "wet-read" system. The proposed system represents an attempt to address many of the problems seen in current communication work flows by implementing a well-documented and easily accessible communication loop that is adaptable to different types of imaging study evaluation. Images are displayed in the native Digital Imaging and Communications in Medicine (DICOM) format with a Java applet, which allows accurate presentation along with use of various image manipulation tools. The Web-based infrastructure consists of a server that stores imaging studies and reports, with Web browsers that download and install the necessary client software on demand. Application logic consists of a set of PHP (hypertext preprocessor) modules that are accessible through an application programming interface. The system may be adapted to any clinician-specialist communication loop and, because it integrates radiologic standards with Web-based technologies, can more effectively communicate and document imaging data. RSNA, 2007

  5. Adaptive optics imaging of geographic atrophy.

    PubMed

    Gocho, Kiyoko; Sarda, Valérie; Falah, Sabrina; Sahel, José-Alain; Sennlaub, Florian; Benchaboune, Mustapha; Ullern, Martine; Paques, Michel

    2013-05-01

    To report the findings of en face adaptive optics (AO) near-infrared (NIR) reflectance fundus flood imaging in eyes with geographic atrophy (GA). An observational clinical study of AO NIR fundus imaging was performed in 12 eyes of nine patients with GA, and in seven controls, using a flood illumination camera operating at 840 nm, in addition to routine clinical examination. To document short-term and midterm changes, AO imaging sessions were repeated in four patients (mean interval between sessions 21 days; median follow-up 6 months). Compared with scanning laser ophthalmoscope imaging, AO NIR imaging improved the resolution of the changes affecting the RPE. Multiple hyporeflective clumps were seen within and around GA areas. Time-lapse imaging revealed micrometric-scale details of the emergence and progression of areas of atrophy as well as the complex kinetics of some hyporeflective clumps. Such dynamic changes were observed within as well as outside atrophic areas. In eyes affected by GA, AO NIR imaging allows high-resolution documentation of the extent of RPE damage. It also revealed that a complex, dynamic process of redistribution of hyporeflective clumps throughout the posterior pole precedes and accompanies the emergence and progression of atrophy; these clumps are therefore probably also a biomarker of RPE damage. AO NIR imaging may thus be of interest to detect the earliest stages, to document the retinal pathology, and to monitor the progression of GA. (ClinicalTrials.gov number, NCT01546181.)

  6. Flickr's Potential as an Academic Image Resource: An Exploratory Study

    ERIC Educational Resources Information Center

    Angus, Emma; Stuart, David; Thelwall, Mike

    2010-01-01

    Many web 2.0 sites are extremely popular and contain vast amounts of content, but how much of this content is useful in academia? This exploratory paper investigates the potential use of the popular web 2.0 image site Flickr as an academic image resource. The study identified images tagged with any one of 12 subject names derived from recognized…

  7. Reading and Writing in the 21st Century.

    ERIC Educational Resources Information Center

    Soloway, Elliot; And Others

    1993-01-01

    Describes MediaText, a multimedia document processor developed at the University of Michigan that allows the incorporation of video, music, sound, animations, still images, and text into one document. Interactive documents are discussed, and the need for users to be able to write documents as well as read them is emphasized. (four references) (LRW)

  8. Embedding the shapes of regions of interest into a Clinical Document Architecture document.

    PubMed

    Minh, Nguyen Hai; Yi, Byoung-Kee; Kim, Il Kon; Song, Joon Hyun; Binh, Pham Viet

    2015-03-01

    Sharing a medical image visually annotated with a region of interest with a remotely located specialist for consultation is good practice. It may, however, require a special-purpose (and most likely expensive) system to send and view the images, which is an infeasible solution in developing countries such as Vietnam. In this study, we design and implement interoperable methods based on the HL7 Clinical Document Architecture and Extensible Stylesheet Language Transformations (XSLT) standards to seamlessly exchange and visually present the shapes of regions of interest using web browsers. We also propose a new integration architecture for a Clinical Document Architecture generator that enables embedding of regions of interest and simultaneous auto-generation of corresponding style sheets. Using the Clinical Document Architecture document and style sheet, a sender can transmit clinical documents and medical images together with the coordinate values of regions of interest to recipients. Recipients can easily view the documents and display the embedded regions of interest by rendering them in the web browser of their choice. © The Author(s) 2014.
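
    The essence of the approach is that the region-of-interest geometry travels inside the XML document itself, so any browser armed with the corresponding style sheet can draw it. A hedged sketch of that idea follows; the element and attribute names are illustrative, not the exact HL7 CDA schema.

    ```python
    # Embed ROI coordinates in an XML clinical-document fragment (illustrative).
    import xml.etree.ElementTree as ET

    obs = ET.Element("observationMedia", ID="roi-1")
    ET.SubElement(obs, "reference", value="chest-xray-001.png")
    # Polygon ROI as a flat list of x,y pairs, similar in spirit to the
    # coordinate lists of CDA's regionOfInterest class.
    ET.SubElement(obs, "regionOfInterest", shape="POLY",
                  value="120,80 180,80 180,140 120,140")
    print(ET.tostring(obs, encoding="unicode"))
    ```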

  9. CAMEL: concept annotated image libraries

    NASA Astrophysics Data System (ADS)

    Natsev, Apostol; Chadha, Atul; Soetarman, Basuki; Vitter, Jeffrey S.

    2001-01-01

    The problem of content-based image searching has received considerable attention in the last few years. Thousands of images are now available on the Internet, and many important applications require searching of images in domains such as E-commerce, medical imaging, weather prediction, satellite imagery, and so on. Yet content-based image querying is still largely unestablished as a mainstream field and is not widely used by search engines. We believe that two of the major hurdles behind this poor acceptance are poor retrieval quality and poor usability.

  10. CAMEL: concept annotated image libraries

    NASA Astrophysics Data System (ADS)

    Natsev, Apostol; Chadha, Atul; Soetarman, Basuki; Vitter, Jeffrey S.

    2000-12-01

    The problem of content-based image searching has received considerable attention in the last few years. Thousands of images are now available on the Internet, and many important applications require searching of images in domains such as E-commerce, medical imaging, weather prediction, satellite imagery, and so on. Yet content-based image querying is still largely unestablished as a mainstream field and is not widely used by search engines. We believe that two of the major hurdles behind this poor acceptance are poor retrieval quality and poor usability.

  11. Web Mining for Web Image Retrieval.

    ERIC Educational Resources Information Center

    Chen, Zheng; Wenyin, Liu; Zhang, Feng; Li, Mingjing; Zhang, Hongjiang

    2001-01-01

    Presents a prototype system for image retrieval from the Internet using Web mining. Discusses the architecture of the Web image retrieval prototype; document space modeling; user log mining; and image retrieval experiments to evaluate the proposed system. (AEF)

  12. Quality Evaluation of Nursing Observation Based on a Survey of Nursing Documents Using NursingNAVI.

    PubMed

    Tsuru, Satoko; Omori, Miho; Inoue, Manami; Wako, Fumiko

    2016-01-01

    We have identified three foci each for nursing observation and nursing action. Using these frameworks, we have developed a structured knowledge model for a number of diseases and medical interventions, and built the structure-based NursingNAVI® contents in collaboration with several quality-centred hospitals. The authors analysed the nursing care documentation of post-gastrectomy patients against the standardized nursing care plan in NursingNAVI® and revealed "failure to observe" and "failure to document" problems, which led to the loss of data about patients' conditions and circumstances. This loss could have been avoided if nurses had employed a standardized nursing care plan. We therefore developed a thinking-process support system for planning, delivering, recording and evaluating daily nursing care using the NursingNAVI® contents. Because it is important to identify where patients' data and condition information go unrecorded, we developed a survey tool for nursing documents using the NursingNAVI® content to evaluate the quality of nursing observation, and recommended its use to hospitals. Fifteen hospitals participated in the survey, which allowed the extent of unrecorded observations to be estimated. One hospital that did not participate in the survey learned of the results and decided to adopt the NursingNAVI® contents in its hospital information system. The findings suggest that the system is useful for nursing on-the-job training and reduces planning and recording time without loss of observational data.

  13. Post-prandial reflux suppression by a raft-forming alginate (Gaviscon Advance) compared to a simple antacid documented by magnetic resonance imaging and pH-impedance monitoring: mechanistic assessment in healthy volunteers and randomised, controlled, double-blind study in reflux patients.

    PubMed

    Sweis, R; Kaufman, E; Anggiansah, A; Wong, T; Dettmar, P; Fried, M; Schwizer, W; Avvari, R K; Pal, A; Fox, M

    2013-06-01

    Alginates form a raft above the gastric contents, which may suppress gastro-oesophageal reflux; however, inconsistent effects have been reported in mechanistic and clinical studies. To visualise reflux suppression by an alginate-antacid [Gaviscon Advance (GA), Reckitt Benckiser, UK] compared with a nonraft-forming antacid using magnetic resonance imaging (MRI), and to determine the feasibility of pH-impedance monitoring for assessment of reflux suppression by alginates. Two studies were performed: (i) GA and antacid (Alucol, Wander Ltd, Switzerland) were visualised by MRI in the stomachs of 12 healthy volunteers over 30 min after a meal, with reflux events documented by manometry; and (ii) a randomised, controlled, double-blind cross-over trial of post-prandial reflux suppression documented by pH-impedance in 20 patients randomised to GA or antacid (Milk of Magnesia; Boots, UK) after two meals taken 24 h apart. MRI visualised a 'mass' of GA formed at the oesophago-gastric junction (OGJ); the simple antacid sank to the distal stomach. The number of post-prandial common cavity reflux events was lower with GA than antacid [median 2 (0-5) vs. 5 (1-11); P < 0.035]. Distal reflux events and acid exposure measured by pH-impedance were similar after GA and antacid. There was a trend to reduced proximal reflux events with GA compared with antacid [10.5 (8.9) vs. 13.9 (8.3); P = 0.070]. Gaviscon Advance forms a 'mass' close to the OGJ and significantly suppresses reflux compared with a nonraft-forming antacid. Standard pH-impedance monitoring is suitable for clinical studies of GA in gastro-oesophageal reflux disease patients where proximal reflux is the primary outcome. © 2013 Blackwell Publishing Ltd.

  14. 36 CFR § 1238.14 - What are the microfilming requirements for permanent and unscheduled records?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... processing procedures in ANSI/AIIM MS1 and ANSI/AIIM MS23 (both incorporated by reference, see § 1238.5). (d... reference, see § 1238.5). (2) Background density of images. Agencies must use the background ISO standard... densities for images of documents are as follows: Classification Description of document Background density...

  15. "That's in the Time of the Romans!" Knowledge and Strategies Students Use to Contextualize Historical Images and Documents

    ERIC Educational Resources Information Center

    van Boxtel, Carla; van Drie, Jannet

    2012-01-01

    An important goal of history education is the development of a chronological frame of reference that can be used to interpret and date historical images and documents. Despite the importance of this contextualization goal, little is known about the knowledge and strategies that allow students to situate information historically. Two studies were…

  16. Electronic Document Imaging and Optical Storage Systems for Local Governments: An Introduction. Local Government Records Technical Information Series. Number 21.

    ERIC Educational Resources Information Center

    Schwartz, Stanley F.

    This publication introduces electronic document imaging systems and provides guidance for local governments in New York in deciding whether such systems should be adopted for their own records and information management purposes. It advises local governments on how to develop plans for using such technology by discussing its advantages and…

  17. Excellence in Physics Education Award Talk: Evolving Evaluation and Evidence

    NASA Astrophysics Data System (ADS)

    Matsler, Karen

    2011-04-01

    AAPT/PTRA institutes were part of the first NSF projects encouraged to design rigorous evaluations to determine the characteristics of effective professional development. The evaluation of the AAPT/PTRA program has evolved from documenting the number of teachers attending daily workshops to documenting gains in content understanding and confidence by conducting comparison study groups for over 30 institutes across the nation. Components of the current AAPT/PTRA evaluation model include documentation of teacher gains in content understanding, confidence, use of technology, changes in classroom practice, and student achievement. This talk will reflect on the evaluation components, the inherent challenges, the components that were successful, and lessons learned. Results of the data collected on over 1000 teachers since 2003 will be shared.

  18. Residential Energy Consumption Survey (RECS): Household screener survey, 1979-1980, household characteristics and annualized consumption

    NASA Astrophysics Data System (ADS)

    Windell, P.

    1981-08-01

    This document provides basic information and technical specifications necessary for using the machine-readable magnetic tape containing data from the Household Screener Survey of the Residential Energy Consumption Survey (RECS). Included in this document are an overview of the RECS and a brief description of the Household Screener Survey. The next section contains technical specifications for reading the tape and descriptions of the contents of each of the files contained on the tape. The remaining four sections are devoted to technical topics of special interest to users of the data. Appended to this document are copies of the fieldwork instruments used in the survey and a listing of the contents of a portion of the SPSS labels information.

  19. Facing the Limitations of Electronic Document Handling.

    ERIC Educational Resources Information Center

    Moralee, Dennis

    1985-01-01

    This essay addresses problems associated with technology used in the handling of high-resolution visual images in electronic document delivery. Highlights include visual fidelity, laser-driven optical disk storage, electronics versus micrographics for document storage, videomicrographics, and system configurations and peripherals. (EJS)

  20. Preparing PNNL Reports with LaTeX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waichler, Scott R.

    2005-06-01

    LaTeX is a mature document preparation system that is the standard in many scientific and academic workplaces. It has been used extensively by scattered individuals and research groups within PNNL for years, but until now there have been no centralized or lab-focused resources to help authors and editors. PNNL authors and editors can produce correctly formatted PNNL or PNWD reports using the LaTeX document preparation system and the available template files. Please visit the PNNL-LaTeX Project (http://stidev.pnl.gov/resources/latex/, inside the PNNL firewall) for additional information and files. In LaTeX, document content is maintained separately from document structure for the most part. This means that the author can easily produce the same content in different formats and, more importantly, can focus on the content and write it in a plain text file that doesn't go awry, is easily transferable, and won't become obsolete due to software changes. LaTeX produces the finest print quality output; its typesetting is noticeably better than that of MS Word. This is particularly true for mathematics, tables, and other types of special text. Other benefits of LaTeX: easy handling of large numbers of figures and tables; automatic and error-free captioning, citation, cross-referencing, hyperlinking, and indexing; excellent published and online documentation; free or low-cost distributions for Windows, Linux, Unix, and Mac OS X. This document serves two purposes: (1) it provides instructions for producing reports formatted to PNNL requirements using LaTeX, and (2) the document itself is in the form of a PNNL report, providing examples of many solved formatting challenges. Authors can use this document or its skeleton version (with formatting examples removed) as the starting point for their own reports. The pnnreport.cls class file and pnnl.bst bibliography style file contain the required formatting specifications for reports to the Department of Energy. Options are also provided for formatting PNWD (non-1830) reports. This documentation and the referenced files are meant to provide a complete package of PNNL particulars for authors and editors who wish to prepare technical reports using LaTeX. The example material in this document was borrowed from real reports and edited for demonstration purposes. The subject matter of the example material is not relevant here and generally does not make literal sense in the context of this document. Brackets "[ ]" are used to denote large blocks of example text. The PDF file for this report contains hyperlinks to facilitate navigation. Hyperlinks are provided for all cross-referenced material, including section headings, figures, tables, and references. Not all hyperlinks are colored, but they will be obvious when you move your mouse over them.

  1. A neotropical Miocene pollen database employing image-based search and semantic modeling

    PubMed Central

    Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W.; Jaramillo, Carlos; Shyu, Chi-Ren

    2014-01-01

    • Premise of the study: Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community a computational framework for information sharing and knowledge transfer.
    • Methods: Mathematical methods were used to assign trait semantics (abstract morphological representations) to the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content.
    • Results: Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture.
    • Discussion: Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery. PMID:25202648
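
    To make retrieval by visual content concrete, here is a deliberately simple sketch; it is not the project's actual indexing structure, which the abstract only names. Each image is reduced to a gray-level histogram feature vector, and a query returns its nearest neighbours in feature space.

    ```python
    # Toy content-based retrieval: histogram features + nearest neighbours.
    import numpy as np

    def feature(img):
        # img: 2-D grayscale array with values in [0, 255]
        hist, _ = np.histogram(img, bins=32, range=(0, 256))
        return hist / hist.sum()

    rng = np.random.default_rng(0)
    library = [rng.integers(0, 256, (64, 64)) for _ in range(100)]
    index = np.stack([feature(im) for im in library])

    query = library[42]
    dists = np.linalg.norm(index - feature(query), axis=1)
    print(dists.argsort()[:5])  # image 42 ranks first (distance 0)
    ```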

  2. Visual Based Retrieval Systems and Web Mining--Introduction.

    ERIC Educational Resources Information Center

    Iyengar, S. S.

    2001-01-01

    Briefly discusses Web mining and image retrieval techniques, and then presents a summary of articles in this special issue. Articles focus on Web content mining, artificial neural networks as tools for image retrieval, content-based image retrieval systems, and personalizing the Web browsing experience using media agents. (AEF)

  3. Spatial assessment of soluble solid contents on apple slices using hyperspectral imaging

    USDA-ARS?s Scientific Manuscript database

    A partial least squares regression (PLSR) model to map internal soluble solids content (SSC) of apples using visible/near-infrared (VNIR) hyperspectral imaging was developed. The reflectance spectra of sliced apples were extracted from hyperspectral absorbance images obtained in the 400–1000 nm rang...
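
    As a hedged sketch of the modelling step only (synthetic data; the band count, number of latent components, and variable names are assumptions rather than the manuscript's settings), a PLSR model can be fit to per-pixel spectra like so:

    ```python
    # Fit PLS regression mapping spectra to a soluble-solids-like response.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 150))  # 200 pixels x 150 spectral bands (toy)
    y = 2.0 * X[:, 40] + X[:, 90] + rng.normal(scale=0.1, size=200)  # toy SSC

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
    print(round(pls.score(X_te, y_te), 3))  # R^2 on held-out pixels
    ```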

  4. Use of Image Based Modelling for Documentation of Intricately Shaped Objects

    NASA Astrophysics Data System (ADS)

    Marčiš, M.; Barták, P.; Valaška, D.; Fraštia, M.; Trhan, O.

    2016-06-01

    In the documentation of cultural heritage, we can encounter three-dimensional shapes and structures that are complicated to measure. Such objects include spiral staircases, timber roof trusses, historical furniture, or folk costumes, where it is nearly impossible to use traditional surveying or terrestrial laser scanning effectively due to the shape of the object, its dimensions, and the crowded environment. Current methods of digital photogrammetry can be very helpful in such cases, with the emphasis on automated processing of the extensive image data. The resulting high-resolution 3D models and 2D orthophotos are very important for the documentation of architectural elements, and they can serve as an ideal base for vectorization and 2D drawing documentation. This contribution describes various uses of image-based modelling for specific interior spaces and specific objects. The advantages and disadvantages of photogrammetric measurement of such objects in comparison to other surveying methods are reviewed.

  5. Standardization of left atrial, right ventricular, and right atrial deformation imaging using two-dimensional speckle tracking echocardiography: a consensus document of the EACVI/ASE/Industry Task Force to standardize deformation imaging.

    PubMed

    Badano, Luigi P; Kolias, Theodore J; Muraru, Denisa; Abraham, Theodore P; Aurigemma, Gerard; Edvardsen, Thor; D'Hooge, Jan; Donal, Erwan; Fraser, Alan G; Marwick, Thomas; Mertens, Luc; Popescu, Bogdan A; Sengupta, Partho P; Lancellotti, Patrizio; Thomas, James D; Voigt, Jens-Uwe

    2018-03-27

    The EACVI/ASE/Industry Task Force to standardize deformation imaging prepared this consensus document to standardize definitions and techniques for using two-dimensional (2D) speckle tracking echocardiography (STE) to assess left atrial, right ventricular, and right atrial myocardial deformation. This document is intended for both the technical engineering community and the clinical community at large to provide guidance on selecting the functional parameters to measure and how to measure them using 2D STE. This document aims to represent a significant step forward in the collaboration between the scientific societies and industry, since the technical specifications of the software packages designed to post-process echocardiographic datasets have been agreed and shared before their actual development. Hopefully, this will lead to more clinically oriented software packages that are better tailored to clinical needs and will allow industry to save time and resources in their development.

  6. Paper trails. Document management is no silver bullet, but it can patch holes as hospitals transition to paperless.

    PubMed

    Gamble, Kate Huvane

    2009-10-01

    Hospitals are leveraging content management to ease the transition from a paper-based to an electronic environment. Document management is used to scan, index and archive medical records and financial documents. Even fully integrated health systems receive outside documents, such as lab results and referrals, that must be incorporated into the patient record. The data in scanned documents cannot be used for trending purposes without manual work. The market for natural language processing, a tool used to extract data elements from scanned documents, could ramp up significantly in the near future.

  7. Ancient administrative handwritten documents: X-ray analysis and imaging

    PubMed Central

    Albertin, F.; Astolfo, A.; Stampanoni, M.; Peccenini, Eva; Hwu, Y.; Kaplan, F.; Margaritondo, G.

    2015-01-01

    Handwritten characters in administrative antique documents from three centuries have been detected using different synchrotron X-ray imaging techniques. Heavy elements in ancient inks, present even for everyday administrative manuscripts as shown by X-ray fluorescence spectra, produce attenuation contrast. In most cases the image quality is good enough for tomography reconstruction in view of future applications to virtual page-by-page ‘reading’. When attenuation is too low, differential phase contrast imaging can reveal the characters from refractive index effects. The results are potentially important for new information harvesting strategies, for example from the huge Archivio di Stato collection, objective of the Venice Time Machine project. PMID:25723946

  8. Ancient administrative handwritten documents: X-ray analysis and imaging.

    PubMed

    Albertin, F; Astolfo, A; Stampanoni, M; Peccenini, Eva; Hwu, Y; Kaplan, F; Margaritondo, G

    2015-03-01

    Handwritten characters in administrative antique documents from three centuries have been detected using different synchrotron X-ray imaging techniques. Heavy elements in ancient inks, present even for everyday administrative manuscripts as shown by X-ray fluorescence spectra, produce attenuation contrast. In most cases the image quality is good enough for tomography reconstruction in view of future applications to virtual page-by-page 'reading'. When attenuation is too low, differential phase contrast imaging can reveal the characters from refractive index effects. The results are potentially important for new information harvesting strategies, for example from the huge Archivio di Stato collection, objective of the Venice Time Machine project.

  9. Faxed document image restoration method based on local pixel patterns

    NASA Astrophysics Data System (ADS)

    Akiyama, Teruo; Miyamoto, Nobuo; Oguro, Masami; Ogura, Kenji

    1998-04-01

    A method is proposed for restoring degraded faxed document images using the patterns of pixels that make up small areas of a document. The method effectively restores faxed images containing halftone textures and/or high-density salt-and-pepper noise, both of which degrade OCR system performance. In the halftone restoration process, white-centered 3 × 3 pixel regions in which black and white pixels alternate are first identified as halftone textures using the distribution of pixel values, and the white center pixels are then inverted to black. To remove high-density salt-and-pepper noise, it is assumed that the degradation is caused by ill-balanced bias and inappropriate thresholding of the sensor output, which results in the addition of random noise. A restored image can be estimated using an approximation that inverts the assumed degradation process. To process degraded faxed images, the algorithms mentioned above are combined. An experiment was conducted on 24 especially poor-quality examples selected from data sets that exemplify what practical fax-based OCR systems cannot handle. The maximum recovery rate in terms of mean square error was 98.8 percent.
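
    The halftone step described above is easy to picture in code. The sketch below is a hedged reconstruction, not the paper's exact rule: it flags 3 x 3 blocks of a binary image (1 = white, 0 = black) that form a checkerboard with a white center and flips those centers to black.

    ```python
    # Detect white-centered 3x3 checkerboard blocks and invert their centers.
    import numpy as np

    def remove_halftone(img):
        out = img.copy()
        pattern = np.array([[1, 0, 1],
                            [0, 1, 0],
                            [1, 0, 1]])
        h, w = img.shape
        for y in range(h - 2):
            for x in range(w - 2):
                if np.array_equal(img[y:y+3, x:x+3], pattern):
                    out[y+1, x+1] = 0  # invert the white center to black
        return out

    demo = np.tile(np.array([[1, 0], [0, 1]]), (4, 4))  # synthetic halftone
    print(remove_halftone(demo).sum() < demo.sum())     # True: centers flipped
    ```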

  10. 78 FR 23918 - Request for Information Regarding Third Party Testing for Lead Content, Phthalate Content, and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-23

    ... CONSUMER PRODUCT SAFETY COMMISSION [Docket No. CPSC 2011-0081] Request for Information Regarding Third Party Testing for Lead Content, Phthalate Content, and the Solubility of the Eight Elements Listed in ASTM F963-11 Correction In notice document 2013-8858 appearing on pages 22518-22520 in the issue of Tuesday, April 16, 2013, make the followin...

  11. Resolution analysis of archive films for the purpose of their optimal digitization and distribution

    NASA Astrophysics Data System (ADS)

    Fliegel, Karel; Vítek, Stanislav; Páta, Petr; Myslík, Jiří; Pecák, Josef; Jícha, Marek

    2017-09-01

    With the recent high demand for ultra-high-definition (UHD) content to be screened in high-end digital movie theaters and in the home environment, film archives full of movies in high definition and above are in the scope of UHD content providers. Movies captured with traditional film technology represent a virtually unlimited source of UHD content. The goal of maintaining complete image information is also related to the choice of scanning resolution and of the spatial resolution used for further distribution. It might seem that scanning the film material at the highest possible resolution using state-of-the-art film scanners, and distributing it at that resolution, is the right choice. The information content of the digitized images is, however, limited, and various degradations lead to its further reduction. Digital distribution of the content at the highest image resolution might therefore be unnecessary or uneconomical. In other cases, the highest possible resolution is inevitable if we want to preserve fine scene details or film grain structure for archiving purposes. This paper deals with the analysis of image detail content in archive film records. The resolution limit of the captured scene image and the factors that lower the final resolution are discussed. Methods are proposed to determine the spatial detail of the film picture based on analysis of its digitized image data. These procedures make it possible to derive recommendations for optimal distribution of digitized video content intended for various display devices with lower resolutions. The obtained results are illustrated on a spatial downsampling use case, and a performance evaluation of the proposed techniques is presented.
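
    One simple, hedged way to gauge how much fine detail a digitized frame actually carries (an illustration of the general idea, not the authors' proposed method) is to measure the share of Fourier-spectrum energy above a normalized frequency cutoff; a blurred copy of the same frame scores lower.

    ```python
    # Compare high-frequency spectral energy of a sharp and a blurred frame.
    import numpy as np

    def high_freq_energy_ratio(frame, cutoff=0.5):
        power = np.abs(np.fft.fftshift(np.fft.fft2(frame))) ** 2
        h, w = frame.shape
        yy, xx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
        r = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)
        return power[r > cutoff].sum() / power.sum()

    rng = np.random.default_rng(0)
    sharp = rng.normal(size=(256, 256))
    blurred = sharp.copy()
    for axis in (0, 1):  # crude box blur: average with a shifted copy
        blurred = 0.5 * (blurred + np.roll(blurred, 1, axis=axis))
    print(high_freq_energy_ratio(sharp) > high_freq_energy_ratio(blurred))  # True
    ```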

  12. Nanoparticulate NaA zeolite composites for MRI: Effect of iron oxide content on image contrast

    NASA Astrophysics Data System (ADS)

    Gharehaghaji, Nahideh; Divband, Baharak; Zareei, Loghman

    2018-06-01

    In the current study, Fe3O4/NaA nanocomposites with various amounts of Fe3O4 (3.4, 6.8 and 10.2 wt%) were synthesized and characterized to study the effect of nano iron oxide content on magnetic resonance (MR) image contrast. The cell viability of the nanocomposites was investigated by the MTT assay method. T2 values as well as r2 relaxivities were determined with a 1.5 T MRI scanner. The results of the MTT assay confirmed the cytocompatibility of the nanocomposites up to 6.8% iron oxide content. Although the magnetization saturations and susceptibility values of the nanocomposites increased as a function of iron oxide content, their relaxivity decreased from 921.78 mM-1 s-1 for the nanocomposite with the lowest iron oxide content to 380.16 mM-1 s-1 for the highest one. Therefore, the Fe3O4/NaA nanocomposite with 3.4% iron oxide content led to the best MR image contrast. The nano iron oxide content and its dispersion in the nanocomposite structure play an important role in the r2 relaxivity and the MR image contrast. Aggregation of the iron oxide nanoparticles is a limiting factor in the use of high iron oxide content nanocomposites.
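
    For context, the quoted r2 values relate the observed transverse relaxation rate to contrast-agent concentration through the standard linear relaxivity model (a textbook relation, not a formula stated in this record):

    ```latex
    % Observed transverse relaxation rate: intrinsic rate plus relaxivity
    % times iron concentration, with r_2 in mM^{-1} s^{-1} and [Fe] in mM.
    \[
      \frac{1}{T_2^{\mathrm{obs}}} = \frac{1}{T_2^{0}} + r_2\,[\mathrm{Fe}]
    \]
    ```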

  13. Clinical use of intracoronary imaging. Part 1: guidance and optimization of coronary interventions. An expert consensus document of the European Association of Percutaneous Cardiovascular Interventions: Endorsed by the Chinese Society of Cardiology.

    PubMed

    Räber, Lorenz; Mintz, Gary S; Koskinas, Konstantinos C; Johnson, Thomas W; Holm, Niels R; Onuma, Yoshinubo; Radu, Maria D; Joner, Michael; Yu, Bo; Jia, Haibo; Menevau, Nicolas; de la Torre Hernandez, Jose M; Escaned, Javier; Hill, Jonathan; Prati, Francesco; Colombo, Antonio; di Mario, Carlo; Regar, Evelyn; Capodanno, Davide; Wijns, William; Byrne, Robert A; Guagliumi, Giulio

    2018-05-22

    This Consensus Document is the first of two reports summarizing the views of an expert panel organized by the European Association of Percutaneous Cardiovascular Interventions (EAPCI) on the clinical use of intracoronary imaging including intravascular ultrasound (IVUS) and optical coherence tomography (OCT). The first document appraises the role of intracoronary imaging to guide percutaneous coronary interventions (PCIs) in clinical practice. Current evidence regarding the impact of intracoronary imaging guidance on cardiovascular outcomes is summarized, and patients or lesions most likely to derive clinical benefit from an imaging-guided intervention are identified. The relevance of the use of IVUS or OCT prior to PCI for optimizing stent sizing (stent length and diameter) and planning the procedural strategy is discussed. Regarding post-implantation imaging, the consensus group recommends key parameters that characterize an optimal PCI result and provides cut-offs to guide corrective measures and optimize the stenting result. Moreover, routine performance of intracoronary imaging in patients with stent failure (restenosis or stent thrombosis) is recommended. Finally, strengths and limitations of IVUS and OCT for guiding PCI and assessing stent failures and areas that warrant further research are critically discussed.

  14. Sub-word image clustering in Farsi printed books

    NASA Astrophysics Data System (ADS)

    Soheili, Mohammad Reza; Kabir, Ehsanollah; Stricker, Didier

    2015-02-01

    Most OCR systems are designed for the recognition of a single page. In the case of unfamiliar typefaces, low-quality paper, and degraded prints, the performance of these products drops sharply. However, an OCR system can use the redundancy of word occurrences in large documents to improve recognition results. In this paper, we propose a sub-word image clustering method for applications dealing with large printed documents. We assume that the whole document is printed in a single unknown font with low print quality. Our proposed method finds clusters of equivalent sub-word images with an incremental algorithm. Because of the low print quality, we propose an image matching algorithm for measuring the distance between two sub-word images based on the Hamming distance and the ratio of the area to the perimeter of the connected components. We built a ground-truth dataset of more than 111000 sub-word images to evaluate our method. All of these images were extracted from an old Farsi book. We cluster all of these sub-words, including isolated letters and even punctuation marks. All centers of the created clusters are then labeled manually. We show that all sub-words of the book can be recognized with more than 99.7% accuracy by assigning the label of each cluster center to all of its members.
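
    A hedged reconstruction of the incremental clustering idea follows; the paper's exact thresholds are not given in the abstract, and the area-to-perimeter shape cue is omitted here for brevity. Each binary sub-word image joins the first cluster whose representative is within a Hamming-distance threshold, or else founds a new cluster.

    ```python
    # Incremental clustering of binary sub-word images by Hamming distance.
    import numpy as np

    def hamming(a, b):
        return np.count_nonzero(a != b) / a.size

    def cluster(images, threshold=0.05):
        centers, clusters = [], []
        for img in images:
            for i, c in enumerate(centers):
                if img.shape == c.shape and hamming(img, c) < threshold:
                    clusters[i].append(img)
                    break
            else:  # no existing cluster matched: start a new one
                centers.append(img)
                clusters.append([img])
        return clusters

    rng = np.random.default_rng(0)
    base = (rng.random((16, 32)) > 0.5).astype(np.uint8)
    noisy = base.copy(); noisy[0, 0] ^= 1  # near-duplicate occurrence
    other = (rng.random((16, 32)) > 0.5).astype(np.uint8)
    print(len(cluster([base, noisy, other])))  # 2 clusters
    ```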

  15. IHE profiles applied to regional PACS.

    PubMed

    Fernandez-Bayó, Josep

    2011-05-01

    PACS has been widely adopted as an image storage solution that perfectly fits the radiology department workflow and that can be easily extended to other hospital departments. Integration with other hospital systems, like the Radiology Information System, the Hospital Information System and the Electronic Patient Record, is largely achieved but remains a challenging aim. PACS also creates the perfect environment for teleradiology and teleworking setups. One step further is the regional PACS concept, where different hospitals or health care enterprises share images in an integrated Electronic Patient Record. Among the different solutions available to share images between hospitals, the IHE (Integrating the Healthcare Enterprise) organization presents the Cross-Enterprise Document Sharing (XDS) profile, which allows sharing images from different hospitals even if they have different PACS vendors. Adopting XDS has multiple advantages: images do not need to be duplicated in a central archive to be shared among the different healthcare enterprises; they only need to be indexed and published in a central document registry. In the XDS profile, IHE defines the mechanisms to publish and index the images in the central document registry. It also defines the mechanisms that each hospital will use to retrieve those images regardless of the PACS in which they are stored. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
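
    The division of labour that XDS prescribes can be modelled in a few lines: pixel data stay in each hospital's own repository, and only index entries are published to the shared registry, which resolves queries to (repository, document) pointers. The toy below is a gross simplification of the IHE transaction set, for intuition only; all identifiers are invented.

    ```python
    # Toy XDS-style registry: metadata pointers only, no pixel data.
    class Registry:
        def __init__(self):
            self.entries = []  # published index entries

        def publish(self, patient_id, repository, document_id):
            self.entries.append({"patient": patient_id,
                                 "repository": repository,
                                 "document": document_id})

        def query(self, patient_id):
            return [e for e in self.entries if e["patient"] == patient_id]

    registry = Registry()
    registry.publish("P001", "hospital-A-PACS", "CT-2024-17")
    registry.publish("P001", "hospital-B-PACS", "MR-2024-03")
    for pointer in registry.query("P001"):
        print(pointer["repository"], pointer["document"])
    ```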

  16. Content standards for medical image metadata

    NASA Astrophysics Data System (ADS)

    d'Ornellas, Marcos C.; da Rocha, Rafael P.

    2003-12-01

    Medical images are at the heart of healthcare diagnostic procedures. They have provided not only a noninvasive means to view anatomical cross-sections of internal organs but also a means for physicians to evaluate the patient's diagnosis and monitor the effects of treatment. For a medical center, the emphasis may shift from the generation of images to post-processing and data management, since the medical staff may generate even more processed images and other data from the original image after various analyses and post-processing. A medical image data repository for health care information systems is becoming a critical need. This data repository would contain comprehensive patient records, including information such as clinical data, related diagnostic images, and post-processed images. Due to the large volume and complexity of the data as well as the diversified user access requirements, the implementation of a medical image archive system is a complex and challenging task. This paper discusses content standards for medical image metadata. It also focuses on the evaluation of image metadata content and metadata quality management.

  17. Changing the Attitudes of Pre-Service Teachers toward Content Literacy Strategies

    ERIC Educational Resources Information Center

    Warren-Kring, Bonnie Z.; Warren, Grace A.

    2013-01-01

    The purpose of this research was to study the impact of an adolescent literacy education course on content area education students' attitudes toward implementing adolescent literacy strategies within their content lessons. Longitudinal data were gathered over five years and then analyzed. The researcher documented changes in the attitudes of the…

  18. Refreshing the "Voluntary National Content Standards in Economics"

    ERIC Educational Resources Information Center

    MacDonald, Richard A.; Siegfried, John J.

    2012-01-01

    The second edition of the "Voluntary National Content Standards in Economics" was published by the Council for Economic Education in 2010. The authors examine the process for revising these precollege content standards and highlight several changes that appear in the new document. They also review the impact the standards have had on precollege…

  19. Ill-informed consent? A content analysis of physical risk disclosure in school-based HPV vaccine programs.

    PubMed

    Steenbeek, Audrey; Macdonald, Noni; Downie, Jocelyn; Appleton, Mary; Baylis, Françoise

    2012-01-01

    This study examines the accuracy, completeness, and consistency of human papilloma virus (HPV) vaccine related physical risks disclosed in documents available to parents, legal guardians, and girls in Canadian jurisdictions with school-based HPV vaccine programs. We conducted an online search for program related HPV vaccine risk/benefit documents for all 13 Canadian jurisdictions between July 2008 and May 2009 including follow-up by e-mail and telephone requests for relevant documents from the respective Ministries or Departments of Health. The physical risks listed in the documents were compared across jurisdictions and against documents prepared by the vaccine manufacturer (Merck Frosst Canada), the National Advisory Committee on Immunization (NACI), the Society of Obstetricians and Gynecologists of Canada (SOGC), and a 2007 article in Maclean's Magazine. No jurisdiction provided the same list of vaccine related physical risks as any other jurisdiction. Major discrepancies were identified. Inaccurate, incomplete, and inconsistent information can threaten the validity of consent/authorization and potentially undermine trust in the vaccine program and the vaccine itself. Efforts are needed to improve the quality, clarity, and standardization of the content of written documents used in school-based HPV vaccine programs across Canada. © 2011 Wiley Periodicals, Inc.

  20. Query-Biased Preview over Outsourced and Encrypted Data

    PubMed Central

    Luo, Guangchun; Qin, Ke; Chen, Aiguo

    2013-01-01

    For both convenience and security, more and more users encrypt their sensitive data before outsourcing it to a third party such as cloud storage service. However, searching for the desired documents becomes problematic since it is costly to download and decrypt each possibly needed document to check if it contains the desired content. An informative query-biased preview feature, as applied in modern search engine, could help the users to learn about the content without downloading the entire document. However, when the data are encrypted, securely extracting a keyword-in-context snippet from the data as a preview becomes a challenge. Based on private information retrieval protocol and the core concept of searchable encryption, we propose a single-server and two-round solution to securely obtain a query-biased snippet over the encrypted data from the server. We achieve this novel result by making a document (plaintext) previewable under any cryptosystem and constructing a secure index to support dynamic computation for a best matched snippet when queried by some keywords. For each document, the scheme has O(d) storage complexity and O(log(d/s) + s + d/s) communication complexity, where d is the document size and s is the snippet length. PMID:24078798
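
    Stripped of the cryptography, the preview functionality itself is just keyword-in-context snippet extraction: return roughly s characters of the document centred on the best keyword match. The toy below shows only that plaintext behaviour; the paper's contribution is computing such a snippet securely over encrypted data.

    ```python
    # Plaintext keyword-in-context snippet (the functionality, not the scheme).
    def snippet(document, keyword, s=60):
        pos = document.lower().find(keyword.lower())
        if pos < 0:
            return ""
        start = max(0, pos - (s - len(keyword)) // 2)
        return document[start:start + s]

    doc = "Searching encrypted cloud storage is costly without a preview feature."
    print(snippet(doc, "cloud"))
    ```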
