Sample records for image processing library

  1. Integrating digital topology in image-processing libraries.

    PubMed

    Lamy, Julien

    2007-01-01

    This paper describes a method to integrate digital topology information in image-processing libraries. This additional information allows a library user to write algorithms that respect topological constraints, for example a seed fill or a skeletonization algorithm. Because digital topology is absent from most image-processing libraries, such constraints otherwise cannot be fulfilled. We describe and give code samples for all the structures necessary for this integration, and show a use case in the form of a homotopic thinning filter inside ITK. The resulting filter can be up to a hundred times as fast as ITK's thinning filter and works for any image dimension. This paper mainly deals with integration within ITK, but the approach can be adapted with only minor modifications to other image-processing libraries.
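
    The paper's homotopic thinning filter itself lives inside ITK and is not reproduced here; as a rough, library-agnostic illustration of topology-preserving thinning, the following Python sketch uses scikit-image's skeletonize, which likewise shrinks an object without changing its connectivity (synthetic input; not the authors' code):

        import numpy as np
        from skimage.morphology import skeletonize

        # Binary image: a thick diagonal bar (True = foreground).
        img = np.zeros((64, 64), dtype=bool)
        for i in range(60):
            img[i, max(0, i - 3):i + 4] = True

        # Topology-preserving thinning: the skeleton keeps the connectivity
        # of the original object while shrinking it to unit width -- the
        # kind of constraint the paper integrates into the library.
        skel = skeletonize(img)
        print(img.sum(), "foreground pixels ->", skel.sum(), "skeleton pixels")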

  2. SPARX, a new environment for Cryo-EM image processing.

    PubMed

    Hohn, Michael; Tang, Grant; Goodyear, Grant; Baldwin, P R; Huang, Zhong; Penczek, Pawel A; Yang, Chao; Glaeser, Robert M; Adams, Paul D; Ludtke, Steven J

    2007-01-01

    SPARX (single particle analysis for resolution extension) is a new image processing environment with a particular emphasis on transmission electron microscopy (TEM) structure determination. It includes a graphical user interface that provides a complete graphical programming environment with a novel data/process-flow infrastructure, an extensive library of Python scripts that perform specific TEM-related computational tasks, and a core library of fundamental C++ image processing functions. In addition, SPARX relies on the EMAN2 library and cctbx, the open-source computational crystallography library from PHENIX. The design of the system is such that future inclusion of other image processing libraries is a straightforward task. The SPARX infrastructure intelligently handles retention of intermediate values, even those inside programming structures such as loops and function calls. SPARX and all dependencies are free for academic use and available with complete source.

  3. Development of a Reference Image Collection Library for Histopathology Image Processing, Analysis and Decision Support Systems Research.

    PubMed

    Kostopoulos, Spiros; Ravazoula, Panagiota; Asvestas, Pantelis; Kalatzis, Ioannis; Xenogiannopoulos, George; Cavouras, Dionisis; Glotsos, Dimitris

    2017-06-01

    Histopathology image processing, analysis and computer-aided diagnosis have been shown to be effective assisting tools towards reliable and intra-/inter-observer invariant decisions in traditional pathology. Especially for cancer patients, decisions need to be as accurate as possible in order to increase the probability of optimal treatment planning. In this study, we propose a new image collection library (HICL-Histology Image Collection Library) comprising 3831 histological images of three different diseases, for fostering research in histopathology image processing, analysis and computer-aided diagnosis. Raw data comprised 93, 116 and 55 cases of brain, breast and laryngeal cancer, respectively, collected from the archives of the University Hospital of Patras, Greece. The 3831 images were generated from the most representative regions of the pathology, specified by an experienced histopathologist. The HICL Image Collection is freely accessible under an academic license at http://medisp.bme.teiath.gr/hicl/. Potential exploitations of the proposed library span a broad spectrum, such as in image processing to improve visualization, in segmentation for nuclei detection, in decision support systems for second opinion consultations, in statistical analysis for investigation of potential correlations between clinical annotations and imaging findings and, generally, in fostering research on histopathology image processing and analysis. To the best of our knowledge, the HICL constitutes the first attempt towards creation of a reference image collection library in the field of traditional histopathology, publicly and freely available to the scientific community.

  4. Screening of a virtual mirror-image library of natural products.

    PubMed

    Noguchi, Taro; Oishi, Shinya; Honda, Kaori; Kondoh, Yasumitsu; Saito, Tamio; Ohno, Hiroaki; Osada, Hiroyuki; Fujii, Nobutaka

    2016-06-08

    We established a facile access to an unexplored mirror-image library of chiral natural product derivatives using d-protein technology. In this process, two chemical syntheses of mirror-image substances including a target protein and hit compound(s) allow the lead discovery from a virtual mirror-image library without the synthesis of numerous mirror-image compounds.

  5. IJ-OpenCV: Combining ImageJ and OpenCV for processing images in biomedicine.

    PubMed

    Domínguez, César; Heras, Jónathan; Pascual, Vico

    2017-05-01

    The effective processing of biomedical images usually requires the interoperability of diverse software tools that have different aims but are complementary. The goal of this work is to develop a bridge to connect two of those tools: ImageJ, a program for image analysis in life sciences, and OpenCV, a computer vision and machine learning library. Based on a thorough analysis of ImageJ and OpenCV, we detected the features of these systems that could be enhanced, and developed a library to combine both tools, taking advantage of the strengths of each system. The library was implemented on top of the SciJava converter framework. We also provide a methodology to use this library. We have developed the publicly available library IJ-OpenCV that can be employed to create applications combining features from both ImageJ and OpenCV. From the perspective of ImageJ developers, they can use IJ-OpenCV to easily create plugins that use any functionality provided by the OpenCV library and explore different alternatives. From the perspective of OpenCV developers, this library provides a link to the ImageJ graphical user interface and all its features to handle regions of interest. The IJ-OpenCV library bridges the gap between ImageJ and OpenCV, allowing the connection and the cooperation of these two systems. Copyright © 2017 Elsevier Ltd. All rights reserved.
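
    IJ-OpenCV itself is a Java library built on SciJava converters, so the sketch below is only a Python analogy: a shared array representation (NumPy) plays the role of the converter layer, letting OpenCV routines operate on an image that another toolkit produced. The image here is synthetic.

        import numpy as np
        import cv2  # OpenCV's Python bindings

        # In IJ-OpenCV the SciJava converters translate between ImageJ's
        # ImagePlus and OpenCV's Mat. In Python the analogous bridge is a
        # plain NumPy array, which both ecosystems accept natively.
        img = (np.random.rand(128, 128) * 255).astype(np.uint8)  # stand-in image

        # Hand the array to OpenCV, get a result back as the same type.
        blurred = cv2.GaussianBlur(img, (5, 5), sigmaX=1.5)
        _, mask = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        print(mask.dtype, mask.shape)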

  6. The Convergence of Information Technology, Data, and Management in a Library Imaging Program

    ERIC Educational Resources Information Center

    France, Fenella G.; Emery, Doug; Toth, Michael B.

    2010-01-01

    Integrating advanced imaging and processing capabilities in libraries, archives, and museums requires effective systems and information management to ensure that the large amounts of digital data about cultural artifacts can be readily acquired, stored, archived, accessed, processed, and linked to other data. The Library of Congress is developing…

  7. Target recognition for ladar range image using slice image

    NASA Astrophysics Data System (ADS)

    Xia, Wenze; Han, Shaokun; Wang, Liang

    2015-12-01

    A shape descriptor and a complete shape-based recognition system using slice images as the geometric feature descriptor for ladar range images are introduced. A slice image is a two-dimensional image generated by a three-dimensional Hough transform and the corresponding mathematical transformation. The system consists of two processes: model library construction and recognition. In the model library construction process, a series of range images is obtained by sampling the model object at preset attitude angles. Then, all the range images are converted into slice images. The number of slice images is reduced by clustering analysis and selecting representatives, to reduce the size of the model library. In the recognition process, the slice image of the scene is compared with the slice images in the model library, and recognition is based on this comparison. Simulated ladar range images are used to analyze the recognition and misjudgment rates, and a comparison between the slice-image representation and the moment-invariants representation is performed. The experimental results show that, both in noise-free conditions and in the presence of ladar noise, the system has a high recognition rate and a low misjudgment rate. The comparison experiment demonstrates that the slice image has better representation ability than moment invariants.

  8. Research on fast Fourier transforms algorithm of huge remote sensing image technology with GPU and partitioning technology.

    PubMed

    Yang, Xue; Li, Xue-You; Li, Jia-Guo; Ma, Jun; Zhang, Li; Yang, Jan; Du, Quan-Ye

    2014-02-01

    Fast Fourier transforms (FFT) are a basic approach to remote sensing image processing. With the improvement of remote sensing image capture, featuring hyperspectral data, high spatial resolution and high temporal resolution, how to use FFT technology to efficiently process huge remote sensing images has become a critical step and research hot spot in current image processing technology. The FFT, one of the basic algorithms of image processing, can be used for stripe-noise removal, image compression, image registration, etc. in remote sensing image processing. The CUFFT library is a GPU-based FFT implementation, while FFTW is an FFT library developed for CPUs on the PC platform and is currently the fastest CPU-based FFT implementation. However, both share a common problem: once the available video memory or main memory is smaller than the image, an out-of-memory failure or memory overflow occurs when using either method to compute the FFT of the image. To address this problem, a GPU- and partitioning-technology-based Huge Remote-sensing Fast Fourier Transform (HRFFT) algorithm is proposed in this paper. By improving the FFT algorithm in the CUFFT library, the problem of out-of-memory failures and memory overflow is solved. The method is validated by experiments on CCD images from the HJ-1A satellite. When applied to practical image processing, it improves the quality of the results and speeds up the processing, saving computation time and achieving sound results.
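
    The HRFFT source is not shown in the record; the following NumPy sketch illustrates only the underlying partitioning idea, computing a 2D FFT slab by slab via row/column separability so that each FFT call touches a bounded amount of data:

        import numpy as np

        def blocked_fft2(img, block=256):
            """2D FFT via separability: FFT all rows, then all columns,
            processing `block` lines at a time. Illustrates how a huge
            image can be transformed piecewise (here everything stays in
            RAM, but each FFT call only touches one slab)."""
            out = img.astype(np.complex128)
            for r in range(0, out.shape[0], block):          # row passes
                out[r:r + block, :] = np.fft.fft(out[r:r + block, :], axis=1)
            for c in range(0, out.shape[1], block):          # column passes
                out[:, c:c + block] = np.fft.fft(out[:, c:c + block], axis=0)
            return out

        img = np.random.rand(1024, 1024)
        assert np.allclose(blocked_fft2(img), np.fft.fft2(img))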

  9. scikit-image: image processing in Python.

    PubMed

    van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.
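
    A minimal example of the API the abstract describes, using only documented scikit-image calls and a bundled sample image:

        from skimage import data, filters, measure

        # Load a bundled sample image, threshold it, and count objects --
        # a typical three-line scikit-image workflow.
        coins = data.coins()                        # 8-bit grayscale image
        thresh = filters.threshold_otsu(coins)      # global Otsu threshold
        labels = measure.label(coins > thresh)      # connected components
        print("objects found:", labels.max())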

  10. Web-based document image processing

    NASA Astrophysics Data System (ADS)

    Walker, Frank L.; Thoma, George R.

    1999-12-01

    Increasing numbers of research libraries are turning to the Internet for electronic interlibrary loan and for document delivery to patrons. This has been made possible through the widespread adoption of software such as Ariel and DocView. Ariel, a product of the Research Libraries Group, converts paper-based documents to monochrome bitmapped images and delivers them over the Internet. The National Library of Medicine's DocView is primarily designed for library patrons to receive and display documents delivered this way. Although libraries and their patrons are beginning to reap the benefits of this new technology, barriers exist, e.g., differences in image file format, that lead to difficulties in the use of library document information. To research how to overcome such barriers, the Communications Engineering Branch of the Lister Hill National Center for Biomedical Communications, an R and D division of NLM, has developed a web site called the DocMorph Server. This is part of an ongoing intramural R and D program in document imaging that has spanned many aspects of electronic document conversion and preservation, Internet document transmission and document usage. The DocMorph Server web site is designed to fill two roles. First, in a role that will benefit both libraries and their patrons, it allows Internet users to upload scanned image files for conversion to alternative formats, thereby enabling wider delivery and easier usage of library document information. Second, the DocMorph Server provides the design team an active test bed for evaluating the effectiveness and utility of new document image processing algorithms and functions, so that they may be evaluated for possible inclusion in other image processing software products being developed at NLM or elsewhere. This paper describes the design of the prototype DocMorph Server and the image processing functions being implemented on it.
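
    The DocMorph code is not public here; the core service it offers, converting an uploaded scanned image to another format, reduces to something like this Pillow sketch (file names are placeholders):

        from PIL import Image

        # Minimal format conversion of a scanned page, the kind of service
        # DocMorph exposes over the web (file names are placeholders).
        page = Image.open("scan.tif")          # e.g. monochrome TIFF from Ariel
        page.convert("L").save("scan.png")     # grayscale PNG for easier viewing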

  11. scikit-image: image processing in Python

    PubMed Central

    Schönberger, Johannes L.; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D.; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org. PMID:25024921

  12. AutoCNet: A Python library for sparse multi-image correspondence identification for planetary data

    NASA Astrophysics Data System (ADS)

    Laura, Jason; Rodriguez, Kelvin; Paquette, Adam C.; Dunn, Evin

    2018-01-01

    In this work we describe the AutoCNet library, written in Python, to support the application of computer vision techniques for n-image correspondence identification in remotely sensed planetary images and subsequent bundle adjustment. The library is designed to support exploratory data analysis, algorithm and processing pipeline development, and application at scale in High Performance Computing (HPC) environments for processing large data sets and generating foundational data products. We also present a brief case study illustrating high level usage for the Apollo 15 Metric camera.
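
    AutoCNet's own API is not reproduced here; as a generic stand-in for the correspondence-identification stage, the sketch below matches ORB features between two overlapping frames with OpenCV (file names are placeholders):

        import cv2

        # Pairwise correspondence between two overlapping images using ORB
        # keypoints -- a generic stand-in for the feature-matching stage of
        # a sparse n-image control network (file names are placeholders).
        img1 = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

        orb = cv2.ORB_create(nfeatures=2000)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)

        # Hamming distance suits ORB's binary descriptors; cross-checking
        # keeps only mutual best matches, a cheap outlier filter.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
        print(len(matches), "candidate correspondences")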

  13. VIP: Vortex Image Processing Package for High-contrast Direct Imaging

    NASA Astrophysics Data System (ADS)

    Gomez Gonzalez, Carlos Alberto; Wertz, Olivier; Absil, Olivier; Christiaens, Valentin; Defrère, Denis; Mawet, Dimitri; Milli, Julien; Absil, Pierre-Antoine; Van Droogenbroeck, Marc; Cantalloube, Faustine; Hinz, Philip M.; Skemer, Andrew J.; Karlsson, Mikael; Surdej, Jean

    2017-07-01

    We present the Vortex Image Processing (VIP) library, a python package dedicated to astronomical high-contrast imaging. Our package relies on the extensive python stack of scientific libraries and aims to provide a flexible framework for high-contrast data and image processing. In this paper, we describe the capabilities of VIP related to processing image sequences acquired using the angular differential imaging (ADI) observing technique. VIP implements functionalities for building high-contrast data processing pipelines, encompassing pre- and post-processing algorithms, potential source position and flux estimation, and sensitivity curve generation. Among the reference point-spread function subtraction techniques for ADI post-processing, VIP includes several flavors of principal component analysis (PCA) based algorithms, such as annular PCA and incremental PCA algorithms capable of processing big datacubes (of several gigabytes) on a computer with limited memory. Also, we present a novel ADI algorithm based on non-negative matrix factorization, which comes from the same family of low-rank matrix approximations as PCA and provides fairly similar results. We showcase the ADI capabilities of the VIP library using a deep sequence on HR 8799 taken with the LBTI/LMIRCam and its recently commissioned L-band vortex coronagraph. Using VIP, we investigated the presence of additional companions around HR 8799 and did not find any significant additional point source beyond the four known planets. VIP is available at http://github.com/vortex-exoplanet/VIP and is accompanied by Jupyter notebook tutorials illustrating the main functionalities of the library.
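
    VIP's function names are not shown in the record; the following NumPy sketch implements only the mathematical core of full-frame PCA speckle subtraction on an ADI cube, the step the package wraps with annular and incremental variants:

        import numpy as np

        def pca_psf_subtract(cube, ncomp=5):
            """Classical PCA speckle subtraction for an ADI cube of shape
            (nframes, ny, nx): project each frame onto the first `ncomp`
            principal components of the stack and subtract that model.
            A bare-bones sketch; a real ADI chain then derotates and
            combines the residual frames."""
            nf = cube.shape[0]
            mat = cube.reshape(nf, -1)
            mat = mat - mat.mean(axis=0)            # center the data
            _, _, vt = np.linalg.svd(mat, full_matrices=False)
            basis = vt[:ncomp]                      # principal components
            model = (mat @ basis.T) @ basis         # low-rank speckle model
            return (mat - model).reshape(cube.shape)

        residuals = pca_psf_subtract(np.random.rand(30, 64, 64), ncomp=5)
        print(residuals.shape)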

  14. RVC-CAL library for endmember and abundance estimation in hyperspectral image analysis

    NASA Astrophysics Data System (ADS)

    Lazcano López, R.; Madroñal Quintín, D.; Juárez Martínez, E.; Sanz Álvaro, C.

    2015-10-01

    Hyperspectral imaging (HI) collects information from across the electromagnetic spectrum, covering a wide range of wavelengths. Although this technology was initially developed for remote sensing and earth observation, its multiple advantages - such as high spectral resolution - led to its application in other fields, such as cancer detection. However, this new field imposes specific requirements; for instance, strict timing constraints must be met, since the potential applications - such as surgical guidance or in vivo tumor detection - carry real-time requirements. Meeting these timing requirements is a great challenge, as hyperspectral images generate extremely high volumes of data to process. Thus, new processing techniques are being studied, the most relevant of which rely on system parallelization. Along that line, this paper describes the construction of a new hyperspectral processing library for RVC-CAL, a language specifically designed for multimedia applications that allows multithreaded compilation and system parallelization. This paper presents the development of the library functions required to implement two of the four stages of the hyperspectral imaging processing chain: endmember and abundance estimation. The results obtained show that the library achieves speedups of approximately 30% compared with existing hyperspectral image analysis software; specifically, the endmember estimation step reaches an average speedup of 27.6%, which saves almost 8 seconds of execution time. The results also reveal some bottlenecks, such as the communication interfaces among the different actors, due to the volume of data to transfer. Finally, the library is shown to considerably simplify the implementation process. Overall, the experimental results show the potential of an RVC-CAL library for analyzing hyperspectral images in real time, as it provides enough resources to study system performance.
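
    The library itself is written in RVC-CAL and is not reproduced here; as a hedged illustration of the abundance-estimation stage only, the following Python sketch unmixes a synthetic pixel spectrum against known endmembers with non-negative least squares:

        import numpy as np
        from scipy.optimize import nnls

        # Toy unmixing: 3 endmembers, 50 spectral bands. Each pixel spectrum
        # is modeled as a non-negative combination of endmember spectra --
        # the "abundance estimation" stage of the hyperspectral chain
        # (synthetic data; the real chain also estimates the endmembers).
        rng = np.random.default_rng(0)
        E = rng.random((50, 3))                 # endmember signatures (bands x members)
        true_abund = np.array([0.6, 0.3, 0.1])
        pixel = E @ true_abund + 0.01 * rng.standard_normal(50)

        abund, resid = nnls(E, pixel)           # non-negative least squares
        print(abund / abund.sum())              # approx. the true abundances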

  15. Analyzing microtomography data with Python and the scikit-image library.

    PubMed

    Gouillart, Emmanuelle; Nunez-Iglesias, Juan; van der Walt, Stéfan

    2017-01-01

    The exploration and processing of images is a vital aspect of the scientific workflows of many X-ray imaging modalities. Users require tools that combine interactivity, versatility, and performance. scikit-image is an open-source image processing toolkit for the Python language that supports a large variety of file formats and is compatible with 2D and 3D images. The toolkit exposes a simple programming interface, with thematic modules grouping functions according to their purpose, such as image restoration, segmentation, and measurements. scikit-image users benefit from a rich scientific Python ecosystem that contains many powerful libraries for tasks such as visualization or machine learning. scikit-image combines a gentle learning curve, versatile image processing capabilities, and the scalable performance required for the high-throughput analysis of X-ray imaging data.
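
    A short example of the 2D/3D compatibility the abstract emphasizes: the same documented scikit-image calls applied to a synthetic 3D volume standing in for a reconstructed tomogram.

        import numpy as np
        from skimage import filters, measure

        # The same calls work unchanged on a 3D volume, which is what makes
        # scikit-image convenient for tomography (synthetic volume here).
        volume = np.random.rand(64, 64, 64)
        volume[20:40, 20:40, 20:40] += 1.0          # a bright inclusion

        thresh = filters.threshold_otsu(volume)     # works on nD arrays
        labels = measure.label(volume > thresh)     # 3D connected components
        props = measure.regionprops(labels)
        print("regions:", len(props), "largest volume:", max(p.area for p in props))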

  16. Document Image Processing: Going beyond the Black-and-White Barrier. Progress, Issues and Options with Greyscale and Colour Image Processing.

    ERIC Educational Resources Information Center

    Hendley, Tom

    1995-01-01

    Discussion of digital document image processing focuses on issues and options associated with greyscale and color image processing. Topics include speed; size of original document; scanning resolution; markets for different categories of scanners, including photographic libraries, publishing, and office applications; hybrid systems; data…

  17. DOCLIB: a software library for document processing

    NASA Astrophysics Data System (ADS)

    Jaeger, Stefan; Zhu, Guangyu; Doermann, David; Chen, Kevin; Sampat, Summit

    2006-01-01

    Most researchers would agree that research in the field of document processing can benefit tremendously from a common software library through which institutions are able to develop and share research-related software and applications across academic, business, and government domains. However, despite several attempts in the past, the research community still lacks a widely-accepted standard software library for document processing. This paper describes a new library called DOCLIB, which tries to overcome the drawbacks of earlier approaches. Many of DOCLIB's features are unique either in themselves or in their combination with others, e.g. the factory concept for support of different image types, the juxtaposition of image data and metadata, or the add-on mechanism. We cherish the hope that DOCLIB serves the needs of researchers better than previous approaches and will readily be accepted by a larger group of scientists.

  18. Rapid development of medical imaging tools with open-source libraries.

    PubMed

    Caban, Jesus J; Joshi, Alark; Nagy, Paul

    2007-11-01

    Rapid prototyping is an important element in researching new imaging analysis techniques and developing custom medical applications. In the last ten years, the open source community and the number of open source libraries and freely available frameworks for biomedical research have grown significantly. What they offer are now considered standards in medical image analysis, computer-aided diagnosis, and medical visualization. A cursory review of the peer-reviewed literature in imaging informatics (indeed, in almost any information technology-dependent scientific discipline) indicates the current reliance on open source libraries to accelerate development and validation of processes and techniques. In this survey paper, we review and compare a few of the most successful open source libraries and frameworks for medical application development. Our dual intentions are to provide evidence that these approaches already constitute a vital and essential part of medical image analysis, diagnosis, and visualization and to motivate the reader to use open source libraries and software for rapid prototyping of medical applications and tools.

  19. Image processing and pattern recognition with CVIPtools MATLAB toolbox: automatic creation of masks for veterinary thermographic images

    NASA Astrophysics Data System (ADS)

    Mishra, Deependra K.; Umbaugh, Scott E.; Lama, Norsang; Dahal, Rohini; Marino, Dominic J.; Sackman, Joseph

    2016-09-01

    CVIPtools is a software package for the exploration of computer vision and image processing developed in the Computer Vision and Image Processing Laboratory at Southern Illinois University Edwardsville. CVIPtools is available in three variants - a) CVIPtools Graphical User Interface, b) CVIPtools C library and c) CVIPtools MATLAB toolbox - which makes it accessible to a variety of different users. It offers students, faculty, researchers and any user a free and easy way to explore computer vision and image processing techniques. Many functions have been implemented and are updated on a regular basis, and the library has reached a level of sophistication that makes it suitable for both educational and research purposes. In this paper, a detailed list of the functions available in the CVIPtools MATLAB toolbox is presented, along with how these functions can be used in image analysis and computer vision applications. The CVIPtools MATLAB toolbox allows the user to gain practical experience to better understand underlying theoretical problems in image processing and pattern recognition. As an example application, an algorithm for the automatic creation of masks for veterinary thermographic images is presented.
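
    CVIPtools is a C/MATLAB package and its function list is not reproduced here; the sketch below shows a comparable automatic-mask pipeline (global threshold plus morphological cleanup) in Python, on a synthetic stand-in for a thermographic image:

        import numpy as np
        from skimage import filters, morphology

        # A generic automatic-mask pipeline of the kind described: global
        # threshold, then morphological cleanup to remove speckle and close
        # gaps (synthetic stand-in for a thermographic image).
        thermo = np.random.rand(256, 256)
        thermo[60:200, 80:180] += 0.8               # warm subject region

        mask = thermo > filters.threshold_otsu(thermo)
        mask = morphology.remove_small_objects(mask, min_size=64)
        mask = morphology.binary_closing(mask, morphology.disk(3))
        print("mask covers", mask.mean() * 100, "percent of the image")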

  20. AstroCV: Astronomy computer vision library

    NASA Astrophysics Data System (ADS)

    González, Roberto E.; Muñoz, Roberto P.; Hernández, Cristian A.

    2018-04-01

    AstroCV processes and analyzes big astronomical datasets, and is intended to provide a community repository of high performance Python and C++ algorithms used for image processing and computer vision. The library offers methods for object recognition, segmentation and classification, with emphasis in the automatic detection and classification of galaxies.

  1. Java Library for Input and Output of Image Data and Metadata

    NASA Technical Reports Server (NTRS)

    Deen, Robert; Levoe, Steven

    2003-01-01

    A Java-language library supports input and output (I/O) of image data and metadata (label data) in the format of the Video Image Communication and Retrieval (VICAR) image-processing software and in several similar formats, including a subset of the Planetary Data System (PDS) image file format. The library does the following: It provides a low-level direct-access layer, enabling an application subprogram to read and write specific image files, lines, or pixels and manipulate metadata directly. Two coding/decoding subprograms ("codecs" for short) based on the Java Advanced Imaging (JAI) software provide access to VICAR and PDS images in a file-format-independent manner. The VICAR and PDS codecs enable any program that conforms to the specification of the JAI codec to use VICAR or PDS images automatically, without specific knowledge of the VICAR or PDS format. The library also includes Image I/O plug-in subprograms for the VICAR and PDS formats. Application programs that conform to the Image I/O specification of Java version 1.4 can utilize any image format for which such a plug-in subprogram exists, without specific knowledge of the format itself. Like the aforementioned codecs, the VICAR and PDS Image I/O plug-in subprograms support reading and writing of metadata.

  2. Java Image I/O for VICAR, PDS, and ISIS

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.; Levoe, Steven R.

    2011-01-01

    This library, written in Java, supports input and output of images and metadata (labels) in the VICAR, PDS image, and ISIS-2 and ISIS-3 file formats. Three levels of access exist. The first level comprises low-level, direct access to the file. This allows an application to read and write specific image tiles, lines, or pixels and to manipulate the label data directly. This layer is analogous to the C-language "VICAR Run-Time Library" (RTL), which is the image I/O library for the (C/C++/Fortran) VICAR image processing system from JPL MIPL (Multimission Image Processing Lab). This low-level library can also be used to read and write labeled, uncompressed images stored in formats similar to VICAR, such as ISIS-2 and -3, and a subset of PDS (image format). The second level of access involves two codecs based on Java Advanced Imaging (JAI) that provide access to VICAR and PDS images in a file-format-independent manner. JAI is supplied by Sun Microsystems as an extension to desktop Java, and has a number of codecs for formats such as GIF, TIFF, JPEG, etc. Although Sun has deprecated the codec mechanism (replaced by IIO), it is still used in many places. The VICAR and PDS codecs allow any program written to the JAI codec spec to use VICAR or PDS images automatically, with no specific knowledge of the VICAR or PDS formats. Support for metadata (labels) is included, but is format-dependent. The PDS codec, when processing PDS images with an embedded VICAR label ("dual-labeled images," such as those used for MER), presents the VICAR label in a new way that is compatible with the VICAR codec. The third level of access involves VICAR, PDS, and ISIS Image I/O plug-ins. The Java core includes an "Image I/O" (IIO) package that is similar in concept to the JAI codec, but is newer and more capable. Applications written to the IIO specification can use any image format for which a plug-in exists, with no specific knowledge of the format itself.

  3. Exploration of Mars by Mariner 9 - Television sensors and image processing.

    NASA Technical Reports Server (NTRS)

    Cutts, J. A.

    1973-01-01

    Two cameras equipped with selenium sulfur slow-scan vidicons were used in the orbital reconnaissance of Mars by the U.S. spacecraft Mariner 9, and the performance characteristics of these devices are presented. Digital image processing techniques have been widely applied in the analysis of images of Mars and its satellites. Photometric and geometric distortion corrections, image detail enhancement, and transformation to standard map projections have been routinely employed. More specialized applications included picture differencing, limb profiling, solar lighting corrections, noise removal, line plots, and computer mosaics. Information on enhancements, as well as important picture geometric information, was stored in a master library. Display of the library data in graphic or numerical form was accomplished by a data management computer program.

  4. A hybrid algorithm for the segmentation of books in libraries

    NASA Astrophysics Data System (ADS)

    Hu, Zilong; Tang, Jinshan; Lei, Liang

    2016-05-01

    This paper proposes an algorithm for book segmentation based on bookshelf images. The algorithm can be separated into three parts. The first part is pre-processing, aiming at eliminating or decreasing the effect of image noise and illumination conditions. The second part is near-horizontal line detection based on the Canny edge detector, separating a bookshelf image into multiple sub-images so that each sub-image contains an individual shelf. The last part is book segmentation: in each shelf image, near-vertical lines are detected and used to segment individual books. The proposed algorithm was tested with bookshelf images taken from the OPIE library at MTU, and the experimental results demonstrate good performance.
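
    The paper's exact parameters are not given in the record; the two line-detection stages it names can be sketched with OpenCV's Canny and probabilistic Hough transform as follows ("bookshelf.png" is a placeholder):

        import numpy as np
        import cv2

        # The two detection stages the paper describes, sketched with OpenCV:
        # near-horizontal lines separate shelves, near-vertical lines
        # separate books ("bookshelf.png" is a placeholder).
        img = cv2.imread("bookshelf.png", cv2.IMREAD_GRAYSCALE)
        edges = cv2.Canny(img, 50, 150)

        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                                threshold=80, minLineLength=100, maxLineGap=10)
        vertical, horizontal = [], []
        for x1, y1, x2, y2 in lines[:, 0]:
            angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
            if angle > 80:                 # near-vertical: book boundaries
                vertical.append((x1, y1, x2, y2))
            elif angle < 10:               # near-horizontal: shelf boundaries
                horizontal.append((x1, y1, x2, y2))
        print(len(horizontal), "shelf lines,", len(vertical), "book edges")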

  5. Digitizing an Analog Radiography Teaching File Under Time Constraint: Trade-Offs in Efficiency and Image Quality.

    PubMed

    Loehfelm, Thomas W; Prater, Adam B; Debebe, Tequam; Sekhar, Aarti K

    2017-02-01

    We digitized the radiography teaching file at Black Lion Hospital (Addis Ababa, Ethiopia) during a recent trip, using a standard digital camera and a fluorescent light box. Our goal was to photograph every radiograph in the existing library while optimizing the final image size to the maximum resolution of a high quality tablet computer, preserving the contrast resolution of the radiographs, and minimizing total library file size. A secondary important goal was to minimize the cost and time required to take and process the images. Three workers were able to efficiently remove the radiographs from their storage folders, hang them on the light box, operate the camera, catalog the image, and repack the radiographs back to the storage folder. Zoom, focal length, and film speed were fixed, while aperture and shutter speed were manually adjusted for each image, allowing for efficiency and flexibility in image acquisition. Keeping zoom and focal length fixed, which kept the view box at the same relative position in all of the images acquired during a single photography session, allowed unused space to be batch-cropped, saving considerable time in post-processing, at the expense of final image resolution. We present an analysis of the trade-offs in workflow efficiency and final image quality, and demonstrate that a few people with minimal equipment can efficiently digitize a teaching file library.

  6. Mechanization in a New Medical School Library II. Serials and Circulation

    PubMed Central

    Payne, Ladye Margarete; Small, Louise; Divett, Robert T.

    1966-01-01

    The serials and circulation phases of the data-processing system in use at the University of New Mexico Library of the Medical Sciences are described. The development of the programs is also reported. The serials program uses simple punched card equipment. The circulation program uses the IBM 357 Data Collection System and punched card data-processing equipment. PMID:5921473

  7. SAR target recognition using behaviour library of different shapes in different incidence angles and polarisations

    NASA Astrophysics Data System (ADS)

    Fallahpour, Mojtaba Behzad; Dehghani, Hamid; Jabbar Rashidi, Ali; Sheikhi, Abbas

    2018-05-01

    Target recognition is one of the most important issues in the interpretation of synthetic aperture radar (SAR) images. Modelling, analysis, and recognition of the effects of influential parameters in the SAR can provide a better understanding of SAR imaging systems, and therefore facilitates the interpretation of the produced images. Influential parameters in SAR images can be divided into five general categories - radar, radar platform, channel, imaging region, and processing section - each of which has different physical, structural, hardware, and software sub-parameters with clear roles in the finally formed images. In this paper, for the first time, a behaviour library that includes the effects of polarisation, incidence angle, and target shape, as radar and imaging-region sub-parameters, on SAR images is extracted. This library shows that the pattern created by each of the cylindrical, conical, and cubic shapes is unique, and due to their unique properties these shapes can be recognised in SAR images. This capability is applied to data acquired with the Canadian RADARSAT-1 satellite.

  8. EMAN2: an extensible image processing suite for electron microscopy.

    PubMed

    Tang, Guang; Peng, Liwei; Baldwin, Philip R; Mann, Deepinder S; Jiang, Wen; Rees, Ian; Ludtke, Steven J

    2007-01-01

    EMAN is a scientific image processing package with a particular focus on single particle reconstruction from transmission electron microscopy (TEM) images. It was first released in 1999, and new versions have been released typically 2-3 times each year since that time. EMAN2 has been under development for the last two years, with a completely refactored image processing library, and a wide range of features to make it much more flexible and extensible than EMAN1. The user-level programs are better documented, more straightforward to use, and written in the Python scripting language, so advanced users can modify the programs' behavior without any recompilation. A completely rewritten 3D transformation class simplifies translation between Euler angle standards and symmetry conventions. The core C++ library has over 500 functions for image processing and associated tasks, and it is modular with introspection capabilities, so programmers can add new algorithms with minimal effort and programs can incorporate new capabilities automatically. Finally, a flexible new parallelism system has been designed to address the shortcomings in the rigid system in EMAN1.

  9. Iplt--image processing library and toolkit for the electron microscopy community.

    PubMed

    Philippsen, Ansgar; Schenk, Andreas D; Stahlberg, Henning; Engel, Andreas

    2003-01-01

    We present the foundation for establishing a modular, collaborative, integrated, open-source architecture for image processing of electron microscopy images, named iplt. It is designed around object oriented paradigms and implemented using the programming languages C++ and Python. In many aspects it deviates from classical image processing approaches. This paper intends to motivate developers within the community to participate in this on-going project. The iplt homepage can be found at http://www.iplt.org.

  10. Speech Recognition for A Digital Video Library.

    ERIC Educational Resources Information Center

    Witbrock, Michael J.; Hauptmann, Alexander G.

    1998-01-01

    Production of the meta-data supporting the Informedia Digital Video Library interface is automated using techniques derived from artificial intelligence research. Speech recognition and natural-language processing, information retrieval, and image analysis are applied to produce an interface that helps users locate information and navigate more…

  11. The Vector, Signal, and Image Processing Library (VSIPL): an Open Standard for Astronomical Data Processing

    NASA Astrophysics Data System (ADS)

    Kepner, J. V.; Janka, R. S.; Lebak, J.; Richards, M. A.

    1999-12-01

    The Vector/Signal/Image Processing Library (VSIPL) is a DARPA-initiated effort made up of industry, government, and academic representatives who have defined an industry-standard API for vector, signal, and image processing primitives for real-time signal processing on high-performance systems. VSIPL supports a wide range of data types (int, float, complex, ...) and layouts (vectors, matrices and tensors) and is ideal for astronomical data processing. The VSIPL API is intended to serve as an open, vendor-neutral, industry standard interface. The object-based VSIPL API abstracts the memory architecture of the underlying machine by using the concept of memory blocks and views. Early experiments with VSIPL code conversions have been carried out by the High Performance Computing Program team at UCSD. Commercially, several major vendors of signal processors are actively developing implementations. VSIPL has also been explicitly required as part of a recent Rome Labs teraflop procurement. This poster presents the VSIPL API, its functionality, and the status of various implementations.
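
    VSIPL is a C API and its function names are not reproduced here; the block/view abstraction the abstract describes can be illustrated in Python, where one flat buffer backs several strided views:

        import numpy as np

        # VSIPL separates storage ("blocks") from access patterns ("views").
        # NumPy's strided views capture the same idea: one flat buffer, many
        # vector/matrix views over it, no copies.
        block = np.arange(24, dtype=np.float32)     # the memory block

        vec_view = block[::2]                       # strided vector view
        mat_view = block.reshape(4, 6)              # matrix view, same buffer

        mat_view[0, 0] = 99.0                       # writes through to the block
        print(block[0], vec_view[0])                # both see 99.0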

  12. Graph-based active learning of agglomeration (GALA): a Python library to segment 2D and 3D neuroimages

    PubMed Central

    Nunez-Iglesias, Juan; Kennedy, Ryan; Plaza, Stephen M.; Chakraborty, Anirban; Katz, William T.

    2014-01-01

    The aim in high-resolution connectomics is to reconstruct complete neuronal connectivity in a tissue. Currently, the only technology capable of resolving the smallest neuronal processes is electron microscopy (EM). Thus, a common approach to network reconstruction is to perform (error-prone) automatic segmentation of EM images, followed by manual proofreading by experts to fix errors. We have developed an algorithm and software library to not only improve the accuracy of the initial automatic segmentation, but also point out the image coordinates where it is likely to have made errors. Our software, called gala (graph-based active learning of agglomeration), improves the state of the art in agglomerative image segmentation. It is implemented in Python and makes extensive use of the scientific Python stack (numpy, scipy, networkx, scikit-learn, scikit-image, and others). We present here the software architecture of the gala library, and discuss several designs that we consider would be generally useful for other segmentation packages. We also discuss the current limitations of the gala library and how we intend to address them. PMID:24772079
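
    gala's own API is not shown in the record; the sketch below reproduces only the agglomerative skeleton of the approach with scikit-image - oversegment, build a region adjacency graph, merge similar neighbors - noting that gala replaces the fixed merge criterion with a learned, uncertainty-aware one (the RAG module path has moved between scikit-image versions, hence the guarded import):

        import numpy as np
        from skimage import data, segmentation
        try:                                        # module moved across versions
            from skimage import graph
        except ImportError:
            from skimage.future import graph

        # Agglomerative segmentation in miniature: oversegment, build a
        # region adjacency graph, then merge similar neighbors.
        img = data.coffee()
        labels = segmentation.slic(img, n_segments=400, compactness=30)
        rag = graph.rag_mean_color(img, labels)
        merged = graph.cut_threshold(labels, rag, thresh=29)
        print("regions:", labels.max(), "->", len(np.unique(merged)))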

  13. Production of the next-generation library virtual tour.

    PubMed

    Duncan, J M; Roth, L K

    2001-10-01

    While many libraries offer overviews of their services through their Websites, only a small number of health sciences libraries provide Web-based virtual tours. These tours typically feature photographs of major service areas along with textual descriptions. This article describes the process for planning, producing, and implementing a next-generation virtual tour in which a variety of media elements are integrated: photographic images, 360-degree "virtual reality" views, textual descriptions, and contextual floor plans. Hardware and software tools used in the project are detailed, along with a production timeline and budget, tips for streamlining the process, and techniques for improving production. This paper is intended as a starting guide for other libraries considering an investment in such a project.

  14. In vitro selection using a dual RNA library that allows primerless selection

    PubMed Central

    Jarosch, Florian; Buchner, Klaus; Klussmann, Sven

    2006-01-01

    High-affinity target-binding aptamers are identified from random oligonucleotide libraries by an in vitro selection process called Systematic Evolution of Ligands by EXponential enrichment (SELEX). Since the SELEX process includes a PCR amplification step, the randomized region of the oligonucleotide libraries needs to be flanked by two fixed primer-binding sequences. These primer-binding sites are often difficult to truncate because they may be necessary to maintain the structure of the aptamer or may even be part of the target-binding motif. We designed a novel type of RNA library that carries fixed sequences which constrain the oligonucleotides into a partly double-stranded structure, thereby minimizing the risk that the primer-binding sequences become part of the target-binding motif. Moreover, the specific design of the library, including the use of tandem RNA Polymerase promoters, allows the selection of oligonucleotides without any primer-binding sequences. The library was used to select aptamers to the mirror-image peptide of ghrelin. Ghrelin is a potent stimulator of growth-hormone release and food intake. After selection, the identified aptamer sequences were directly synthesized in their mirror-image configuration. The final 44-nt Spiegelmer, named NOX-B11-3, blocks ghrelin action in a cell culture assay, displaying an IC50 of 4.5 nM at 37°C. PMID:16855281

  15. SCIFIO: an extensible framework to support scientific image formats.

    PubMed

    Hiner, Mark C; Rueden, Curtis T; Eliceiri, Kevin W

    2016-12-07

    No gold standard exists in the world of scientific image acquisition; a proliferation of instruments each with its own proprietary data format has made out-of-the-box sharing of that data nearly impossible. In the field of light microscopy, the Bio-Formats library was designed to translate such proprietary data formats to a common, open-source schema, enabling sharing and reproduction of scientific results. While Bio-Formats has proved successful for microscopy images, the greater scientific community was lacking a domain-independent framework for format translation. SCIFIO (SCientific Image Format Input and Output) is presented as a freely available, open-source library unifying the mechanisms of reading and writing image data. The core of SCIFIO is its modular definition of formats, the design of which clearly outlines the components of image I/O to encourage extensibility, facilitated by the dynamic discovery of the SciJava plugin framework. SCIFIO is structured to support coexistence of multiple domain-specific open exchange formats, such as Bio-Formats' OME-TIFF, within a unified environment. SCIFIO is a freely available software library developed to standardize the process of reading and writing scientific image formats.

  16. The Cutting Edge: Satellite Chamber, Lasers Spur LC Preservation Effort.

    ERIC Educational Resources Information Center

    Brandehoff, Susan E.

    1982-01-01

    Describes efforts to preserve important library materials at the Library of Congress through the use of two new technologies: a patented deacidification process in which books are placed in a vacuum chamber, and the use of optical disc recording techniques to miniaturize and store print and nonprint images. (JL)

  17. Uranus: a rapid prototyping tool for FPGA embedded computer vision

    NASA Astrophysics Data System (ADS)

    Rosales-Hernández, Victor; Castillo-Jimenez, Liz; Viveros-Velez, Gilberto; Zuñiga-Grajeda, Virgilio; Treviño Torres, Abel; Arias-Estrada, M.

    2007-01-01

    The starting point for all successful system development is simulation. Performing high-level simulation of a system can help to identify, isolate, and fix design problems. This work presents Uranus, a software tool for the simulation and evaluation of image processing algorithms, with support for migrating them to an FPGA environment for algorithm acceleration and embedded processing. The tool includes an integrated library of operators previously coded in software and provides the necessary support to read and display image sequences as well as video files. The user can employ the previously compiled soft-operators in a high-level processing chain and code his own operators. In addition to the prototyping tool, Uranus offers an FPGA-based hardware architecture with the same organization as the software prototyping part. The hardware architecture contains a library of FPGA IP cores for image processing that are connected to a PowerPC-based system. The Uranus environment is intended for rapid prototyping of machine vision algorithms and their migration to an FPGA accelerator platform, and it is distributed for academic purposes.

  18. JDiffraction: A GPGPU-accelerated JAVA library for numerical propagation of scalar wave fields

    NASA Astrophysics Data System (ADS)

    Piedrahita-Quintero, Pablo; Trujillo, Carlos; Garcia-Sucerquia, Jorge

    2017-05-01

    JDiffraction, a GPGPU-accelerated JAVA library for the numerical propagation of scalar wave fields, is presented. Angular spectrum, Fresnel transform, and Fresnel-Bluestein transform are the numerical algorithms implemented in the methods and functions of the library to compute the scalar propagation of a complex wavefield. The functionality of the library is tested by modeling easy-to-forecast numerical experiments and also by the numerical reconstruction of a digitally recorded hologram. The performance of JDiffraction is contrasted with a library written in C++, showing great competitiveness in the apparently less complex environment of the JAVA language. JDiffraction also includes easy-to-use JAVA methods and functions that take advantage of the computation power of graphics processing units to accelerate processing times for 2048×2048-pixel images up to 74 frames per second.
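
    JDiffraction itself is JAVA/GPU code; the angular spectrum algorithm it implements is standard, and a plain NumPy reference version looks like this:

        import numpy as np

        def angular_spectrum(field, wavelength, dx, z):
            """Propagate a complex field a distance z using the angular
            spectrum method -- the first of the three algorithms the
            library implements, written as a plain NumPy reference."""
            ny, nx = field.shape
            fx = np.fft.fftfreq(nx, d=dx)           # spatial frequencies
            fy = np.fft.fftfreq(ny, d=dx)
            FX, FY = np.meshgrid(fx, fy)
            arg = 1.0 / wavelength**2 - FX**2 - FY**2
            # Propagating components get the phase factor; evanescent
            # components (arg < 0) are dropped.
            H = np.where(arg > 0,
                         np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0))),
                         0.0)
            return np.fft.ifft2(np.fft.fft2(field) * H)

        aperture = np.zeros((512, 512), dtype=complex)
        aperture[236:276, 236:276] = 1.0            # square aperture
        out = angular_spectrum(aperture, wavelength=633e-9, dx=10e-6, z=0.05)
        print(np.abs(out).max())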

  19. Production of the next-generation library virtual tour

    PubMed Central

    Duncan, James M.; Roth, Linda K.

    2001-01-01

    While many libraries offer overviews of their services through their Websites, only a small number of health sciences libraries provide Web-based virtual tours. These tours typically feature photographs of major service areas along with textual descriptions. This article describes the process for planning, producing, and implementing a next-generation virtual tour in which a variety of media elements are integrated: photographic images, 360-degree “virtual reality” views, textual descriptions, and contextual floor plans. Hardware and software tools used in the project are detailed, along with a production timeline and budget, tips for streamlining the process, and techniques for improving production. This paper is intended as a starting guide for other libraries considering an investment in such a project. PMID:11837254

  20. Distributed data collection for a database of radiological image interpretations

    NASA Astrophysics Data System (ADS)

    Long, L. Rodney; Ostchega, Yechiam; Goh, Gin-Hua; Thoma, George R.

    1997-01-01

    The National Library of Medicine, in collaboration with the National Center for Health Statistics and the National Institute for Arthritis and Musculoskeletal and Skin Diseases, has built a system for collecting radiological interpretations for a large set of x-ray images acquired as part of the data gathered in the second National Health and Nutrition Examination Survey. This system is capable of delivering across the Internet 5- and 10-megabyte x-ray images to Sun workstations equipped with X Window-based 2048 × 2560 image displays, for the purpose of having these images interpreted for the degree of presence of particular osteoarthritic conditions in the cervical and lumbar spines. The collected interpretations can then be stored in a database at the National Library of Medicine, under control of the Illustra DBMS. This system is a client/server database application which integrates (1) distributed server processing of client requests, (2) a customized image transmission method for faster Internet data delivery, (3) distributed client workstations with high-resolution displays, image processing functions and an on-line digital atlas, and (4) relational database management of the collected data.

  1. Chimenea and other tools: Automated imaging of multi-epoch radio-synthesis data with CASA

    NASA Astrophysics Data System (ADS)

    Staley, T. D.; Anderson, G. E.

    2015-11-01

    In preparing the way for the Square Kilometre Array and its pathfinders, there is a pressing need to begin probing the transient sky in a fully robotic fashion using the current generation of radio telescopes. Effective exploitation of such surveys requires a largely automated data-reduction process. This paper introduces an end-to-end automated reduction pipeline, AMIsurvey, used for calibrating and imaging data from the Arcminute Microkelvin Imager Large Array. AMIsurvey makes use of several component libraries which have been packaged separately for open-source release. The most scientifically significant of these is chimenea, which implements a telescope-agnostic algorithm for automated imaging of pre-calibrated multi-epoch radio-synthesis data, of the sort typically acquired for transient surveys or follow-up. The algorithm aims to improve upon standard imaging pipelines by utilizing iterative RMS-estimation and automated source-detection to avoid the so-called 'CLEAN bias', and makes use of CASA subroutines for the underlying image-synthesis operations. At a lower level, AMIsurvey relies upon two libraries, drive-ami and drive-casa, built to allow use of mature radio-astronomy software packages from within Python scripts. While targeted at automated imaging, the drive-casa interface can also be used to automate interaction with any of the CASA subroutines from a generic Python process. Additionally, these packages may be of wider technical interest beyond radio-astronomy, since they demonstrate use of the Python library pexpect to emulate terminal interaction with an external process, as sketched below. This approach allows for rapid development of a Python interface to any legacy or externally-maintained pipeline which accepts command-line input, without requiring alterations to the original code.
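
    A minimal example of the pexpect pattern the abstract credits, with an interactive Python interpreter standing in for the CASA shell:

        import pexpect

        # The drive-casa pattern in miniature: spawn an interactive
        # command-line tool, wait for its prompt, send commands, and capture
        # the output. Here a plain Python REPL stands in for the CASA shell.
        child = pexpect.spawn("python3", encoding="utf-8", timeout=10)
        child.expect(">>> ")                 # wait for the interpreter prompt
        child.sendline("print(6 * 7)")
        child.expect(">>> ")                 # output accumulates in .before
        print(child.before.strip())          # the echoed command and "42"
        child.sendline("exit()")
        child.close()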

  2. A survey of medical students on the impact of a new digital imaging library in the dissection room.

    PubMed

    Turmezei, T D; Tam, M D B S; Loughna, S

    2009-09-01

    Radiology has a recognised role in undergraduate anatomy education. The recent digitalisation of radiology has created new learning opportunities involving techniques such as image labelling, 3D reconstruction, and multiplanar reformatting. An opportunity was identified at the University of Nottingham to create a digital library of normal radiology images as a learner-driven adjunct to anatomy dissection sessions. We describe the process of creating a de novo digital library by sourcing images for presentation at computer workstations. Students' attitudes towards this new resource were assessed using a questionnaire based on a 5-point Likert scale that also offered free-text responses. One hundred and forty-one out of 260 students (54%) completed the questionnaire. The most notable findings were: a positive response to the relevance of imaging to the session topics (median score 4), strong agreement that images should be available on the university website (median score 5), and disagreement that enough workstations were available (median score 2). About 24% of respondents independently suggested that images needed more labelling to help with orientation and identification. This first phase of supplying a comprehensive imaging library can be regarded as a success. Increasing availability and incorporating dynamic labelling are well recognised as important design concepts for electronic learning resources, and these will be improved in the second phase of delivery as a direct result of student feedback. Hopefully other centres can benefit from this experience and will consider such a venture to be worthwhile.

  3. Image acquisition context: procedure description attributes for clinically relevant indexing and selective retrieval of biomedical images.

    PubMed

    Bidgood, W D; Bray, B; Brown, N; Mori, A R; Spackman, K A; Golichowski, A; Jones, R H; Korman, L; Dove, B; Hildebrand, L; Berg, M

    1999-01-01

    To support clinically relevant indexing of biomedical images and image-related information based on the attributes of image acquisition procedures and the judgments (observations) expressed by observers in the process of image interpretation. The authors introduce the notion of "image acquisition context," the set of attributes that describe image acquisition procedures, and present a standards-based strategy for utilizing the attributes of image acquisition context as indexing and retrieval keys for digital image libraries. The authors' indexing strategy is based on an interdependent message/terminology architecture that combines the Digital Imaging and Communication in Medicine (DICOM) standard, the SNOMED (Systematized Nomenclature of Human and Veterinary Medicine) vocabulary, and the SNOMED DICOM microglossary. The SNOMED DICOM microglossary provides context-dependent mapping of terminology to DICOM data elements. The capability of embedding standard coded descriptors in DICOM image headers and image-interpretation reports improves the potential for selective retrieval of image-related information. This favorably affects information management in digital libraries.
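
    The DICOM machinery described above can be explored with pydicom; a hedged sketch of reading the acquisition-context attributes and coded descriptors from a header ("study.dcm" is a placeholder file, and the sequence shown is one of several that carry coded entries):

        import pydicom

        # Reading the acquisition-context attributes that make an image
        # findable: modality, body part, and coded concepts in the header.
        ds = pydicom.dcmread("study.dcm")
        print(ds.Modality, ds.get("BodyPartExamined", "unknown"))

        # Standard coded descriptors live in sequences of coded entries;
        # each item pairs a code value with its coding scheme (e.g. SNOMED).
        for item in ds.get("AnatomicRegionSequence", []):
            print(item.CodeValue, item.CodingSchemeDesignator, item.CodeMeaning)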

  4. Image Acquisition Context

    PubMed Central

    Bidgood, W. Dean; Bray, Bruce; Brown, Nicolas; Mori, Angelo Rossi; Spackman, Kent A.; Golichowski, Alan; Jones, Robert H.; Korman, Louis; Dove, Brent; Hildebrand, Lloyd; Berg, Michael

    1999-01-01

    Objective: To support clinically relevant indexing of biomedical images and image-related information based on the attributes of image acquisition procedures and the judgments (observations) expressed by observers in the process of image interpretation. Design: The authors introduce the notion of “image acquisition context,” the set of attributes that describe image acquisition procedures, and present a standards-based strategy for utilizing the attributes of image acquisition context as indexing and retrieval keys for digital image libraries. Methods: The authors' indexing strategy is based on an interdependent message/terminology architecture that combines the Digital Imaging and Communication in Medicine (DICOM) standard, the SNOMED (Systematized Nomenclature of Human and Veterinary Medicine) vocabulary, and the SNOMED DICOM microglossary. The SNOMED DICOM microglossary provides context-dependent mapping of terminology to DICOM data elements. Results: The capability of embedding standard coded descriptors in DICOM image headers and image-interpretation reports improves the potential for selective retrieval of image-related information. This favorably affects information management in digital libraries. PMID:9925229

  5. Rugged: an operational, open-source solution for Sentinel-2 mapping

    NASA Astrophysics Data System (ADS)

    Maisonobe, Luc; Seyral, Jean; Prat, Guylaine; Guinet, Jonathan; Espesset, Aude

    2015-10-01

    When you map the entire Earth every 5 days with the aim of generating high-quality time series over land, there is no room for geometrical error: the algorithms have to be stable, reliable, and precise. Rugged, a new open-source library for pixel geolocation, is at the geometrical heart of the operational processing for Sentinel-2. Rugged performs sensor-to-terrain mapping taking into account ground Digital Elevation Models, Earth rotation with all its small irregularities, on-board sensor pixel individual lines-of-sight, spacecraft motion and attitude, and all significant physical effects. It provides direct and inverse location, i.e. it allows the accurate computation of which ground point is viewed from a specific pixel in a spacecraft instrument, and conversely which pixel will view a specified ground point. Direct and inverse location can be used to perform full ortho-rectification of images and correlation between sensors observing the same area. Implemented as an add-on for Orekit (Orbits Extrapolation KIT; a low-level space dynamics library), Rugged also offers the possibility of simulating satellite motion and attitude auxiliary data using Orekit's full orbit propagation capability. This is a considerable advantage for test data generation and mission simulation activities. Together with the Orfeo ToolBox (OTB) image processing library, Rugged provides the algorithmic core of the Sentinel-2 Instrument Processing Facilities. The complex S2 viewing model - with 12 staggered push-broom detectors and 13 spectral bands - is built using Rugged objects, enabling the computation of rectification grids for mapping between cartographic and focal plane coordinates. These grids are passed to the OTB library for further image resampling, thus completing the ortho-rectification chain. Sentinel-2's stringent operational requirement to process several terabytes of data per week represented a tough challenge, though one that Rugged met well in terms of robustness and performance.
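
    Rugged's API is not reproduced here; the geometric heart of direct location - intersecting a pixel's line of sight with the Earth - can be sketched in NumPy for a spherical Earth (Rugged itself uses an ellipsoid, a DEM, and full time-dependent geometry):

        import numpy as np

        R_EARTH = 6371e3   # spherical Earth; Rugged uses an ellipsoid + DEM

        def direct_location(sat_pos, los_dir):
            """Intersect a pixel line of sight with the sphere: solve
            |p + t*d|^2 = R^2 for the smallest positive t, the heart of
            direct location (all vectors in an Earth-centered frame)."""
            d = los_dir / np.linalg.norm(los_dir)
            b = 2.0 * np.dot(sat_pos, d)
            c = np.dot(sat_pos, sat_pos) - R_EARTH**2
            disc = b * b - 4.0 * c
            if disc < 0:
                return None                         # line of sight misses Earth
            t = (-b - np.sqrt(disc)) / 2.0          # nearer intersection
            return sat_pos + t * d

        sat = np.array([R_EARTH + 700e3, 0.0, 0.0])       # 700 km altitude
        ground = direct_location(sat, np.array([-1.0, 0.02, 0.0]))
        print(ground, np.linalg.norm(ground))             # point on the sphere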

  6. Global Learning Spectral Archive- A new Way to deal with Unknown Urban Spectra -

    NASA Astrophysics Data System (ADS)

    Jilge, M.; Heiden, U.; Habermeyer, M.; Jürgens, C.

    2015-12-01

    Rapid urbanization processes and the need to identify urban materials have challenged urban planners and the remote sensing community for years. Urban planners cannot keep information on urban materials up to date because fieldwork is time-intensive. Hyperspectral remote sensing can help by interpreting spectral signals to provide information on the materials present. However, the complexity of urban areas and the occurrence of diverse urban materials vary with regional and cultural factors as well as city size, which makes identification of surface materials a challenging analysis task. The various surface-material identification approaches commonly use spectral libraries containing pure material spectra, derived from the field, the laboratory, or the hyperspectral image itself. One requirement for successful image analysis is that all spectrally different surface materials are represented by the library. A universal library, applicable in every urban area worldwide and accounting for all spectral variability, does not exist and is unlikely ever to exist. In this study, the issue of unknown surface-material spectra and the need for a site-specific urban spectral library are tackled by developing a learning spectral archive tool. Starting with an incomplete library of labelled image spectra from several German cities, surface materials of pure image pixels are identified in a hyperspectral image based on a similarity measure (e.g., SID-SAM). Additionally, unknown image spectra of urban objects are identified based on an object- and spectral-based rule set. The detected unknown surface-material spectra are entered into the existing spectral library with additional metadata, such as regional occurrence, and are thus reusable in further studies. Our approach is suitable for pure surface-material detection in urban hyperspectral images and is globally applicable because it takes library incompleteness into account. Its generic design enables implementation with different hyperspectral sensors.
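
    A minimal sketch of the two spectral similarity measures behind the SID-SAM score mentioned above: the Spectral Angle Mapper (SAM) and the Spectral Information Divergence (SID). The hybrid combination shown, SID weighted by tan(SAM), is one published variant; treat the exact combination used by the authors as unspecified.

    ```python
    # Spectral similarity measures on reflectance spectra.
    import numpy as np

    def sam(x, y):
        """Spectral angle (radians) between two spectra."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        cos = x.dot(y) / (np.linalg.norm(x) * np.linalg.norm(y))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    def sid(x, y):
        """Spectral information divergence (symmetrised KL divergence)."""
        p = np.asarray(x, float); p = p / p.sum()
        q = np.asarray(y, float); q = q / q.sum()
        return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

    def sid_sam(x, y):
        """One common hybrid: SID weighted by tan(SAM)."""
        return sid(x, y) * np.tan(sam(x, y))

    spectrum_a = [0.12, 0.15, 0.22, 0.30, 0.28]   # toy 5-band spectra
    spectrum_b = [0.11, 0.16, 0.20, 0.33, 0.27]
    print(sam(spectrum_a, spectrum_b), sid_sam(spectrum_a, spectrum_b))
    ```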

  7. Using a Web OPAC To Deliver Digital Collections.

    ERIC Educational Resources Information Center

    Mathias, Eileen C.

    2003-01-01

    Describes a major digital imaging project just completed at the Ewell Sale Steward Library of the Academy of Natural Sciences (Philadelphia, PA). Discusses options that were considered for Web delivery of images and text, and reasons for choosing Innovative Interfaces, Inc.'s image management function. Describes the data entry process and reviews…

  8. The Control Point Library Building System. [for Landsat MSS and RBV geometric image correction

    NASA Technical Reports Server (NTRS)

    Niblack, W.

    1981-01-01

    The Earth Resources Observation System (EROS) Data Center in Sioux Falls, South Dakota distributes precision corrected Landsat MSS and RBV data. These data are derived from master data tapes produced by the Master Data Processor (MDP), NASA's system for computing and applying corrections to the data. Included in the MDP is the Control Point Library Building System (CPLBS), an interactive, menu-driven system which permits a user to build and maintain libraries of control points. The control points are required to achieve the high geometric accuracy desired in the output MSS and RBV data. This paper describes the processing performed by CPLBS, the accuracy of the system, and the host computer and special image viewing equipment employed.

  9. The new library building at the University of Texas Health Science Center at San Antonio.

    PubMed Central

    Kronick, D A; Bowden, V M; Olivier, E R

    1985-01-01

    The new University of Texas Health Science Center at San Antonio Library opened in June 1983, replacing the 1968 library building. Planning a new library building provides an opportunity for the staff to rethink their philosophy of service. Of paramount concern and importance is the need to convey this philosophy to the architects. This paper describes the planning process and the building's external features, interior layouts, and accommodations for technology. Details of the move to the building are considered and various aspects of the building are reviewed. PMID:3995205

  10. A library for the fifteenth through the twenty-first centuries.

    PubMed Central

    Cooper, R S

    1991-01-01

    The University of California, San Francisco (UCSF), began developing a program for a new library in 1977, started the design in 1985, began construction in 1988, and opened the library in September 1990. The primary objectives were to design and build a facility that would house print collections under optimal conditions, allow for ten years' growth, be flexible enough to permit future reconfiguration, support present and future technologies, and provide beautiful spaces in which to study. The planning process is summarized, planning concepts are outlined, and considerations for the electronic library are briefly reviewed. PMID:2039900

  11. Flightspeed Integral Image Analysis Toolkit

    NASA Technical Reports Server (NTRS)

    Thompson, David R.

    2009-01-01

    The Flightspeed Integral Image Analysis Toolkit (FIIAT) is a C library that provides image analysis functions in a single, portable package. It provides basic low-level filtering, texture analysis, and subwindow descriptors for applications dealing with image interpretation and object recognition. Designed with spaceflight in mind, it addresses ease of integration (minimal external dependencies); fast, real-time operation using integer arithmetic where possible (useful for platforms lacking a dedicated floating-point processor); implementation entirely in C (easily modified); mostly static memory allocation; and 8-bit image data. The basic goal of the FIIAT library is to compute meaningful numerical descriptors for images or rectangular image regions. These n-vectors can then be used directly for novelty detection or pattern recognition, or as a feature space for higher-level pattern recognition tasks. The library provides routines for leveraging training data to derive descriptors that are most useful for a specific data set. Its runtime algorithms exploit a structure known as the "integral image." This is a caching method that permits fast summation of values within rectangular regions of an image. This integral frame facilitates a wide range of fast image-processing functions. This toolkit has applicability to a wide range of autonomous image analysis tasks in the space-flight domain, including novelty detection, object and scene classification, target detection for autonomous instrument placement, and science analysis of geomorphology. It makes real-time texture and pattern recognition possible for platforms with severe computational constraints. The software provides an order of magnitude speed increase over alternative software libraries currently in use by the research community. FIIAT can commercially support intelligent video cameras used in intelligent surveillance. It is also useful for object recognition by robots or other autonomous vehicles.
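
    The integral-image trick described above is easy to sketch: after one pass to build the cumulative table, the sum over any rectangle costs four lookups. FIIAT implements this in C with integer arithmetic; the numpy sketch below conveys the idea only.

    ```python
    # Integral image: O(1) rectangular sums after one cumulative pass.
    import numpy as np

    def integral_image(img):
        """Cumulative sum over rows and columns, zero-padded on top/left."""
        ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
        ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
        return ii

    def box_sum(ii, top, left, bottom, right):
        """Sum of img[top:bottom, left:right] from the integral image."""
        return (ii[bottom, right] - ii[top, right]
                - ii[bottom, left] + ii[top, left])

    img = np.arange(16, dtype=np.int64).reshape(4, 4)
    ii = integral_image(img)
    assert box_sum(ii, 1, 1, 3, 3) == img[1:3, 1:3].sum()
    ```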

  12. Platform for intraoperative analysis of video streams

    NASA Astrophysics Data System (ADS)

    Clements, Logan; Galloway, Robert L., Jr.

    2004-05-01

    Interactive, image-guided surgery (IIGS) has proven to increase the specificity of a variety of surgical procedures. However, current IIGS systems do not compensate for changes that occur intraoperatively and are not reflected in preoperative tomograms. Endoscopes and intraoperative ultrasound, used in minimally invasive surgery, provide real-time (RT) information in a surgical setting. Combining the information from RT imaging modalities with traditional IIGS techniques will further increase surgical specificity by providing enhanced anatomical information. In order to merge these techniques and obtain quantitative data from RT imaging modalities, a platform was developed to allow both the display and processing of video streams in RT. Using a Bandit-II CV frame grabber board (Coreco Imaging, St. Laurent, Quebec) and the associated library API, a dynamic link library was created in Microsoft Visual C++ 6.0 such that the platform could be incorporated into the IIGS system developed at Vanderbilt University. Performance characterization, using two relatively inexpensive host computers, has shown the platform capable of performing simple image processing operations on frames captured from a CCD camera and displaying the processed video data at near RT rates both independent of and while running the IIGS system.

  13. A low-cost vector processor boosting compute-intensive image processing operations

    NASA Technical Reports Server (NTRS)

    Adorf, Hans-Martin

    1992-01-01

    Low-cost vector processing (VP) is within reach of everyone seriously engaged in scientific computing. The advent of affordable add-on VP-boards for standard workstations, complemented by mathematical/statistical libraries, is beginning to impact compute-intensive tasks such as image processing. A case in point is the restoration of distorted images from the Hubble Space Telescope. A low-cost implementation of the standard Tarasko-Richardson-Lucy restoration algorithm is presented on an Intel i860-based VP-board that is seamlessly interfaced to a commercial, interactive image processing system. First experience is reported (including some benchmarks for standalone FFTs) and some conclusions are drawn.

  14. ARTIP: Automated Radio Telescope Image Processing Pipeline

    NASA Astrophysics Data System (ADS)

    Sharma, Ravi; Gyanchandani, Dolly; Kulkarni, Sarang; Gupta, Neeraj; Pathak, Vineet; Pande, Arti; Joshi, Unmesh

    2018-02-01

    The Automated Radio Telescope Image Processing Pipeline (ARTIP) automates the entire process of flagging, calibrating, and imaging for radio-interferometric data. ARTIP starts with raw data, i.e. a measurement set and goes through multiple stages, such as flux calibration, bandpass calibration, phase calibration, and imaging to generate continuum and spectral line images. Each stage can also be run independently. The pipeline provides continuous feedback to the user through various messages, charts and logs. It is written using standard python libraries and the CASA package. The pipeline can deal with datasets with multiple spectral windows and also multiple target sources which may have arbitrary combinations of flux/bandpass/phase calibrators.
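
    A hypothetical sketch of the stage structure the abstract describes: each calibration or imaging step is an independently runnable stage. Stage names follow the abstract; in the real pipeline the bodies would wrap CASA tasks, which are not reproduced here, and none of this is ARTIP's actual code.

    ```python
    # Registry of independently runnable pipeline stages (hypothetical).
    STAGES = {}

    def stage(name):
        def register(fn):
            STAGES[name] = fn
            return fn
        return register

    @stage("flux_cal")
    def flux_cal(ms): print(f"flux calibration of {ms}")

    @stage("bandpass_cal")
    def bandpass_cal(ms): print(f"bandpass calibration of {ms}")

    @stage("phase_cal")
    def phase_cal(ms): print(f"phase calibration of {ms}")

    @stage("imaging")
    def imaging(ms): print(f"continuum/spectral-line imaging of {ms}")

    def run(ms, stages=None):
        """Run the whole chain, or any single stage independently."""
        for name in stages or ["flux_cal", "bandpass_cal", "phase_cal", "imaging"]:
            STAGES[name](ms)

    run("target.ms")                  # full pipeline
    run("target.ms", ["imaging"])     # one stage on its own
    ```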

  15. A method to incorporate interstitial components into the TPS gynecologic rigid applicator library.

    PubMed

    Otal, Antonio; Richart, Jose; Rodriguez, Silvia; Santos, Manuel; Perez-Calatayud, Jose

    2017-02-01

    T2 magnetic resonance imaging (MRI) is recommended as the imaging modality for image-guided brachytherapy. In locally advanced cervical carcinoma, combined endocavitary and interstitial applicators (Vienna or Utrecht) are appropriate. To cover extensive disease, the Template Benidorm (TB) was developed. Treatment planning system (TPS) applicator libraries are currently unavailable for the Utrecht applicator or the TB. The purpose of this work is to develop an applicator library for both applicators. The library developed in this work has been used in the Oncentra Brachytherapy TPS, version 4.3.0, which has a brachytherapy module that includes a library of rigid applicators. To add the needles of the Utrecht applicator and to model the TB, we used FreeCAD and MeshLab. The reconstruction process was based on the points that the rigid section and the interstitial part have in common. This, together with the free length, allowed us to ascertain the position of the tip. In the case of the Utrecht applicator, one source of uncertainty in the reconstruction was determining the distance of the needle tip from the ovoid. In the case of the TB, the large number of needles involved made their identification time-consuming. The developed library resolved both issues. The developed library for the Utrecht applicator and the TB is feasible and efficient, improving accuracy. It allows all the required treatment planning to proceed using just a T2 MRI sequence. The additional use of freely available software applications makes it possible to add this information to the already existing library of the Oncentra Brachytherapy TPS. Specific details not included in this manuscript are available upon request. This library is currently also being implemented in the Sagiplan v2.0 TPS.

  16. Placement on the Hierarchy.

    ERIC Educational Resources Information Center

    Woolls, Blanche

    1991-01-01

    Discusses the image of school library media specialists, particularly their self-image. Topics discussed include reasons for poor self-image; comparisons with other libraries, including number of professionals, education level, and budget; similarities and differences among libraries; and services of school library media centers. (eight…

  17. Planning a new library in an age of transition: the Washington University School of Medicine Library and Biomedical Communications Center.

    PubMed Central

    Crawford, S; Halbrook, B

    1990-01-01

    In an era of great technological and socioeconomic changes, the Washington University School of Medicine conceptualized and built its first Library and Biomedical Communications Center in seventy-eight years. The planning process, evolution of the electronic library, and translation of functions into operating spaces are discussed. Since 1983, when the project was approved, a whole range of information technologies and services have emerged. The authors consider the kind of library that would operate in a setting where people can do their own searches, order data and materials through an electronic network, analyze and manage information, and use software to create their own publications. PMID:2393757

  18. A nanobuffer reporter library for fine-scale imaging and perturbation of endocytic organelles | Office of Cancer Genomics

    Cancer.gov

    Endosomes, lysosomes and related catabolic organelles are a dynamic continuum of vacuolar structures that impact a number of cell physiological processes such as protein/lipid metabolism, nutrient sensing and cell survival. Here we develop a library of ultra-pH-sensitive fluorescent nanoparticles with chemical properties that allow fine-scale, multiplexed, spatio-temporal perturbation and quantification of catabolic organelle maturation at single organelle resolution to support quantitative investigation of these processes in living cells.

  19. Feature Matching of Historical Images Based on Geometry of Quadrilaterals

    NASA Astrophysics Data System (ADS)

    Maiwald, F.; Schneider, D.; Henze, F.; Münster, S.; Niebling, F.

    2018-05-01

    This contribution shows an approach to matching historical images from the photo library of the Saxon State and University Library Dresden (SLUB) in the context of a historical three-dimensional city model of Dresden. Compared to recent images, historical photographs present diverse factors that make automatic image analysis (feature detection, feature matching, and relative orientation of images) difficult. Due to, e.g., film grain, dust particles, or the digitization process, historical images are often covered by noise interfering with the image signal needed for robust feature matching. The presented approach uses quadrilaterals in image space, as these are commonly available in man-made structures and façade images (windows, stones, claddings). It is explained how to detect quadrilaterals in images in general. The properties of the quadrilaterals, as well as their relationships to neighbouring quadrilaterals, are then used for the description and matching of feature points. The results show that most of the matches are robust and correct but still small in number.

  20. Astronomical Image Processing with Hadoop

    NASA Astrophysics Data System (ADS)

    Wiley, K.; Connolly, A.; Krughoff, S.; Gardner, J.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.

    2011-07-01

    In the coming decade astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. With a requirement that these images be analyzed in real time to identify moving sources such as potentially hazardous asteroids or transient objects such as supernovae, these data streams present many computational challenges. In the commercial world, new techniques that utilize cloud computing have been developed to handle massive data streams. In this paper we describe how cloud computing, and in particular the map-reduce paradigm, can be used in astronomical data processing. We will focus on our experience implementing a scalable image-processing pipeline for the SDSS database using Hadoop (http://hadoop.apache.org). This multi-terabyte imaging dataset approximates future surveys such as those which will be conducted with the LSST. Our pipeline performs image coaddition in which multiple partially overlapping images are registered, integrated and stitched into a single overarching image. We will first present our initial implementation, then describe several critical optimizations that have enabled us to achieve high performance, and finally describe how we are incorporating a large in-house existing image processing library into our Hadoop system. The optimizations involve prefiltering of the input to remove irrelevant images from consideration, grouping individual FITS files into larger, more efficient indexed files, and a hybrid system in which a relational database is used to determine the input images relevant to the task. The incorporation of an existing image processing library, written in C++, presented difficult challenges since Hadoop is programmed primarily in Java. We will describe how we achieved this integration and the sophisticated image processing routines that were made feasible as a result. We will end by briefly describing the longer term goals of our work, namely detection and classification of transient objects and automated object classification.
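
    A minimal sketch of the coaddition step described above: registered, partially overlapping frames are accumulated onto a common grid and each output pixel is averaged over the frames that cover it. The real Hadoop pipeline distributes this over many nodes and handles FITS projections; none of that is modelled here.

    ```python
    # Coadd registered frames onto a common grid, averaging overlaps.
    import numpy as np

    def coadd(frames, offsets, shape):
        """frames: list of 2D arrays; offsets: (row, col) placement of each
        frame on an output grid of the given shape."""
        acc = np.zeros(shape)
        cov = np.zeros(shape)          # per-pixel coverage count
        for f, (r, c) in zip(frames, offsets):
            h, w = f.shape
            acc[r:r + h, c:c + w] += f
            cov[r:r + h, c:c + w] += 1
        with np.errstate(invalid="ignore"):
            return np.where(cov > 0, acc / cov, np.nan)

    a = np.ones((4, 4)); b = 3 * np.ones((4, 4))
    print(coadd([a, b], [(0, 0), (2, 2)], (6, 6)))  # overlap averages to 2
    ```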

  1. Historical Collections | Alaska State Library

    Science.gov Websites


  2. Automatic Georeferencing of Astronaut Auroral Photography: Providing a New Dataset for Space Physics

    NASA Astrophysics Data System (ADS)

    Riechert, Maik; Walsh, Andrew P.; Taylor, Matt

    2014-05-01

    Astronauts aboard the International Space Station (ISS) have taken tens of thousands of photographs showing the aurora at high temporal and spatial resolution. The use of these images in research is limited, though, as they often lack accurate pointing and scale information. In this work we develop techniques and software libraries to automatically georeference such images, and provide a time- and location-searchable database and website of those images. Aurora photographs very often include a visible starfield due to the necessarily long camera exposure times. We extend the proof-of-concept of Walsh et al. (2012), who used starfield recognition software, Astrometry.net, to reconstruct the pointing and scale information. Previously a manual pre-processing step, the starfield can now in most cases be separated from Earth and spacecraft structures successfully using image recognition. Once the pointing and scale of an image are known, latitudes and longitudes can be calculated for each pixel corner for an assumed auroral emission height. As part of this work, an open-source Python library is developed which automates the georeferencing process and aids in visualization tasks. The library facilitates the resampling of the resulting data from an irregular to a regular coordinate grid at a given pixel-per-degree density, it supports the export of data in CDF and NetCDF formats, and it generates polygons for drawing graphs and stereographic maps. In addition, the THEMIS all-sky imager web archive has been included as a first transparently accessible imaging source, which in this case is useful when drawing maps of ISS passes over North America. The database and website are in development and will use the Python library as their base. Through this work, georeferenced auroral ISS photography is made available as a continuously extended and easily accessible dataset. This provides potential not only for new studies of the aurora australis, as there are few all-sky imagers in the southern hemisphere, but also for multi-point observations of the aurora borealis in combination with THEMIS and other imager arrays.
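
    A sketch of the irregular-to-regular resampling step described above, using scipy's griddata. In the real library each pixel corner carries a georeferenced latitude/longitude for an assumed emission height; here random points stand in for that irregular grid, and the density parameter is in pixels per degree.

    ```python
    # Resample irregularly georeferenced samples onto a regular lat/lon grid.
    import numpy as np
    from scipy.interpolate import griddata

    rng = np.random.default_rng(0)
    lat = rng.uniform(60, 70, 5000)        # irregular sample coordinates
    lon = rng.uniform(-150, -130, 5000)
    val = np.sin(np.radians(lat)) * np.cos(np.radians(lon))  # toy intensity

    pix_per_deg = 4
    glat, glon = np.mgrid[60:70:complex(0, 10 * pix_per_deg),
                          -150:-130:complex(0, 20 * pix_per_deg)]
    regular = griddata((lat, lon), val, (glat, glon), method="linear")
    print(regular.shape)   # (40, 80) regular grid, NaN outside the convex hull
    ```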

  3. For State Employees | Alaska State Library

    Science.gov Websites


  4. Medical school libraries in the United States and Canada built between 1961 and 1971.

    PubMed Central

    Beatty, W K; Beatty, V L

    1975-01-01

    Twenty-four medical school libraries in the United States and Canada built between 1961 and 1971 were surveyed by means of questionnaires and visits. Results indicated that half of these libraries will have reached maximum functional capacity approximately six years after they moved into their new quarters. Space for technical processing is generally much less than required. Special features and examples of effective planning are described, and problems in arrangement, traffic patterns for people and materials, and the lack of logical expansion space are discussed. Comparisons are made with a similar survey of twenty medical school libraries made in 1961. PMID:1191825

  5. Managing complex processing of medical image sequences by program supervision techniques

    NASA Astrophysics Data System (ADS)

    Crubezy, Monica; Aubry, Florent; Moisan, Sabine; Chameroy, Virginie; Thonnat, Monique; Di Paola, Robert

    1997-05-01

    Our objective is to offer clinicians wider access to evolving medical image processing (MIP) techniques, which are crucial to improve assessment and quantification of physiological processes but difficult for non-specialists in MIP to handle. Based on artificial intelligence techniques, our approach consists of the development of a knowledge-based program supervision system that automates the management of MIP libraries. It comprises a library of programs, a knowledge base capturing the expertise about programs and data, and a supervision engine. It selects, organizes, and executes the appropriate MIP programs given a goal to achieve and a data set, with dynamic feedback based on the results obtained. It also advises users in the development of new procedures chaining MIP programs. We have experimented with the approach in an application of factor analysis of medical image sequences as a means of predicting the response of osteosarcoma to chemotherapy, with both MRI and NM dynamic image sequences. As a result, our program supervision system frees clinical end-users from performing tasks outside their competence, permitting them to concentrate on clinical issues. Our approach therefore enables better exploitation of the possibilities offered by MIP and higher-quality results, in terms of both robustness and reliability.

  6. Automation of Hessian-Based Tubularity Measure Response Function in 3D Biomedical Images.

    PubMed

    Dzyubak, Oleksandr P; Ritman, Erik L

    2011-01-01

    Blood vessels and nerve trees consist of tubular objects interconnected into a complex tree- or web-like structure, with structural scales ranging from 5 μm diameter capillaries to the 3 cm aorta. This large scale range presents two major problems: one is just making the measurements, and the other is the exponential increase in the number of components with decreasing scale. With the remarkable increase in the volume imaged by, and resolution of, modern-day 3D imagers, it is almost impossible to manually track the complex multiscale parameters from such large image data sets. In addition, manual tracking is quite subjective and unreliable. We propose a solution for automating an adaptive, nonsupervised system for tracking tubular objects based on a multiscale framework and the use of a Hessian-based object shape detector incorporating the National Library of Medicine's Insight Segmentation and Registration Toolkit (ITK) image processing libraries.
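
    A sketch of a Hessian-based tubularity response in 2D (the paper works in 3D with ITK). Second derivatives come from Gaussian filtering at a chosen scale; a bright tubular structure gives one strongly negative eigenvalue and one near zero. The Frangi-style measure below is a standard formulation, not necessarily the authors' exact one, and the parameter values are illustrative.

    ```python
    # Hessian eigenvalue analysis for tubularity (vesselness) in 2D.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def tubularity_2d(img, sigma=2.0, beta=0.5, c=15.0):
        img = img.astype(float)
        # Hessian entries at scale sigma (normalised by sigma^2).
        hxx = sigma**2 * gaussian_filter(img, sigma, order=(0, 2))
        hyy = sigma**2 * gaussian_filter(img, sigma, order=(2, 0))
        hxy = sigma**2 * gaussian_filter(img, sigma, order=(1, 1))
        # Eigenvalues of the symmetric 2x2 Hessian, sorted |l1| <= |l2|.
        tmp = np.sqrt((hxx - hyy) ** 2 + 4 * hxy**2)
        l1 = (hxx + hyy + tmp) / 2
        l2 = (hxx + hyy - tmp) / 2
        swap = np.abs(l1) > np.abs(l2)
        l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
        rb = np.abs(l1) / (np.abs(l2) + 1e-10)    # blob vs. line ratio
        s = np.sqrt(l1**2 + l2**2)                # second-order structure
        v = np.exp(-rb**2 / (2 * beta**2)) * (1 - np.exp(-s**2 / (2 * c**2)))
        v[l2 > 0] = 0.0                           # keep bright-on-dark lines
        return v

    # A horizontal bright line should respond strongly along its axis.
    img = np.zeros((64, 64)); img[32, :] = 100.0
    print(tubularity_2d(img)[32, 16] > tubularity_2d(img)[10, 16])
    ```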

  7. A flexible software architecture for scalable real-time image and video processing applications

    NASA Astrophysics Data System (ADS)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2012-06-01

    Real-time image and video processing applications require skilled architects, and recent trends in the hardware platform make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility because they are normally oriented towards particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty for reuse and inefficient execution on multicore processors. This paper presents a novel software architecture for real-time image and video processing applications which addresses these issues. The architecture is divided into three layers: the platform abstraction layer, the messaging layer, and the application layer. The platform abstraction layer provides a high level application programming interface for the rest of the architecture. The messaging layer provides a message passing interface based on a dynamic publish/subscribe pattern. A topic-based filtering in which messages are published to topics is used to route the messages from the publishers to the subscribers interested in a particular type of messages. The application layer provides a repository for reusable application modules designed for real-time image and video processing applications. These modules, which include acquisition, visualization, communication, user interface and data processing modules, take advantage of the power of other well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, we present different prototypes and applications to show the possibilities of the proposed architecture.
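
    A minimal sketch of the topic-based publish/subscribe routing that the messaging layer provides: subscribers register handlers for topics, and a published message is routed only to the handlers of its topic. Real implementations add threading, queues, and wildcard topics; this shows the dispatch pattern alone, with hypothetical topic names.

    ```python
    # Topic-based publish/subscribe dispatch.
    from collections import defaultdict
    from typing import Any, Callable

    class MessageBus:
        def __init__(self):
            self._subs: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

        def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
            self._subs[topic].append(handler)

        def publish(self, topic: str, message: Any) -> None:
            # Only subscribers registered for this topic receive the message.
            for handler in self._subs.get(topic, []):
                handler(message)

    bus = MessageBus()
    bus.subscribe("frames/raw", lambda m: print("processing", m))
    bus.subscribe("frames/stats", lambda m: print("stats", m))
    bus.publish("frames/raw", {"id": 1, "shape": (480, 640)})  # routed once
    ```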

  8. Image-analysis library

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The MATHPAC image-analysis library is a collection of general-purpose mathematical and statistical routines and special-purpose data-analysis and pattern-recognition routines for image analysis. The MATHPAC library consists of Linear Algebra, Optimization, Statistical-Summary, Densities and Distribution, Regression, and Statistical-Test packages.

  9. The ELISE II Project: A Digital Image Library for Europe.

    ERIC Educational Resources Information Center

    Strunz, Bob; Waters, Mairead

    This paper describes the progress made under the ELISE II electronic image library project from a technical standpoint. The ELISE II project is a European-wide initiative that aims to provide a comprehensive electronic image library service for Europe. It is funded under the European Commission, DG XIII-E, Telematics for Libraries Initiative. The…

  10. The visible human project®: From body to bits.

    PubMed

    Ackerman, Michael J

    2016-08-01

    In the mid-1990s the U.S. National Library of Medicine sponsored the acquisition and development of the Visible Human Project® database. This image database contains anatomical cross-sectional images which allow the reconstruction of three-dimensional male and female anatomy to an accuracy of less than 1.0 mm. The male anatomy is contained in a 15 gigabyte database, the female in a 39 gigabyte database. This talk will describe why and how this project was accomplished and demonstrate some of the products which the Visible Human dataset has made possible. I will conclude by describing how the Visible Human Project, completed over 20 years ago, has led the National Library of Medicine to a series of image research projects, including an open-source image processing toolkit which is included in several commercial products.

  11. Development of a DICOM library

    NASA Astrophysics Data System (ADS)

    Kim, Dongsun; Shin, Dongkyu M.; Kim, Dongyoun M.

    2001-08-01

    An object-oriented DICOM decoding library was developed as a DLL for the MS-Windows environment. It supports all DICOM standard Transfer Syntaxes, multi-frame images, RLE decoding, and window-level adjustment. An image library for medical applications was also developed, as a DLL and an ActiveX control, using the proposed DICOM library. It supports display of DICOM images, cine mode, and basic manipulations. As applications of the proposed image library, two DICOM viewers were developed: one can be used as an off-line DICOM workstation, and the other for browsing local DICOM files.

  12. Graphical user interface for image acquisition and processing

    DOEpatents

    Goldberg, Kenneth A.

    2002-01-01

    An event-driven GUI-based image acquisition interface for the IDL programming environment, designed for CCD camera control and image acquisition directly into the IDL environment, where image manipulation and data analysis can be performed, together with a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the necessity of first saving images in one program and then importing the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity for the implementation of IDL image processing and display functions in real time. The program allows control over the available charge-coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.

  13. The Visible Human Project of the National Library of Medicine: Remote access and distribution of a multi-gigabyte data set

    NASA Technical Reports Server (NTRS)

    Ackerman, Michael J.

    1993-01-01

    As part of the 1986 Long-Range Plan for the National Library of Medicine (NLM), the Planning Panel on Medical Education wrote that NLM should '...thoroughly and systematically investigate the technical requirements for and feasibility of instituting a biomedical images library.' The panel noted the increasing use of images in clinical practice and biomedical research. An image library would complement NLM's existing bibliographic and factual database services and would ideally be available through the same computer networks as these current NLM services. Early in 1989, NLM's Board of Regents convened an ad hoc planning panel to explore possible roles for the NLM in the area of electronic image libraries. In its report to the Board of Regents, the NLM Planning Panel on Electronic Image Libraries recommended that 'NLM should undertake a first project building a digital image library of volumetric data representing a complete, normal adult male and female. This Visible Human Project will include digitized photographic images from cryosectioning, digital images derived from computerized tomography, and digital magnetic resonance images of cadavers.' The panel also noted that the technologies needed to support digital high-resolution image libraries were undergoing rapid development, and recommended that NLM encourage investigator-initiated research into methods for representing and linking spatial and textual information (structural informatics). The first part of the Visible Human Project is the acquisition of cross-sectional CT and MRI digital images and cross-sectional cryosectional photographic images of a representative male and female cadaver at an average of one millimeter intervals. The corresponding cross-sections in each of the three modalities are to be registerable with one another.

  14. Image Processing Occupancy Sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The Image Processing Occupancy Sensor, or IPOS, is a novel sensor technology developed at the National Renewable Energy Laboratory (NREL). The sensor is based on low-cost embedded microprocessors widely used by the smartphone industry and leverages mature open-source computer vision software libraries. Compared to traditional passive infrared and ultrasonic-based motion sensors currently used for occupancy detection, IPOS has shown the potential for improved accuracy and a richer set of feedback signals for occupant-optimized lighting, daylighting, temperature setback, ventilation control, and other occupancy- and location-based uses. Unlike traditional passive infrared (PIR) or ultrasonic occupancy sensors, which infer occupancy based only on motion, IPOS uses digital image-based analysis to detect and classify various aspects of occupancy, including the presence of occupants regardless of motion, their number, location, and activity levels, as well as the illuminance properties of the monitored space. The IPOS software leverages the recent availability of low-cost embedded computing platforms, computer vision software libraries, and camera elements.

  15. Dedicated computer system AOTK for image processing and analysis of horse navicular bone

    NASA Astrophysics Data System (ADS)

    Zaborowicz, M.; Fojud, A.; Koszela, K.; Mueller, W.; Górna, K.; Okoń, P.; Piekarska-Boniecka, H.

    2017-07-01

    The aim of the research was to develop the dedicated application AOTK (Polish: Analiza Obrazu Trzeszczki Kopytowej) for image processing and analysis of the horse navicular bone. The application was built with Visual Studio 2013 on the .NET platform; the image processing and analysis algorithms were implemented using the AForge.NET libraries. The implemented algorithms enable accurate extraction of navicular bone characteristics and saving of the data to external files. Modules implemented in AOTK allow calculation of user-selected distances and preliminary assessment of the structural integrity of the examined objects. The application interface is designed in a way that ensures the user the best possible view of the analyzed images.

  16. Fiji: an open-source platform for biological-image analysis.

    PubMed

    Schindelin, Johannes; Arganda-Carreras, Ignacio; Frise, Erwin; Kaynig, Verena; Longair, Mark; Pietzsch, Tobias; Preibisch, Stephan; Rueden, Curtis; Saalfeld, Stephan; Schmid, Benjamin; Tinevez, Jean-Yves; White, Daniel James; Hartenstein, Volker; Eliceiri, Kevin; Tomancak, Pavel; Cardona, Albert

    2012-06-28

    Fiji is a distribution of the popular open-source software ImageJ focused on biological-image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image-processing algorithms. Fiji facilitates the transformation of new algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities.

  17. Image analysis library software development

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Bryant, J.

    1977-01-01

    The Image Analysis Library consists of a collection of general purpose mathematical/statistical routines and special purpose data analysis/pattern recognition routines basic to the development of image analysis techniques for support of current and future Earth Resources Programs. Work was done to provide a collection of computer routines and associated documentation which form a part of the Image Analysis Library.

  18. SkinScan©: A PORTABLE LIBRARY FOR MELANOMA DETECTION ON HANDHELD DEVICES

    PubMed Central

    Wadhawan, Tarun; Situ, Ning; Lancaster, Keith; Yuan, Xiaojing; Zouridakis, George

    2011-01-01

    We have developed a portable library for automated detection of melanoma termed SkinScan© that can be used on smartphones and other handheld devices. Compared to desktop computers, embedded processors have limited processing speed, memory, and power, but they have the advantage of portability and low cost. In this study we explored the feasibility of running a sophisticated application for automated skin cancer detection on an Apple iPhone 4. Our results demonstrate that the proposed library with the advanced image processing and analysis algorithms has excellent performance on handheld and desktop computers. Therefore, deployment of smartphones as screening devices for skin cancer and other skin diseases can have a significant impact on health care delivery in underserved and remote areas. PMID:21892382

  19. JAtlasView: a Java atlas-viewer for browsing biomedical 3D images and atlases.

    PubMed

    Feng, Guangjie; Burton, Nick; Hill, Bill; Davidson, Duncan; Kerwin, Janet; Scott, Mark; Lindsay, Susan; Baldock, Richard

    2005-03-09

    Many three-dimensional (3D) images are routinely collected in biomedical research, and a number of digital atlases with associated anatomical and other information have been published. A number of tools are available for viewing these data, ranging from commercial visualization packages to freely available, typically system-architecture-dependent, solutions. Here we discuss an atlas viewer implemented to run on any workstation using the architecture-neutral Java programming language. We report the development of a freely available Java-based viewer for 3D image data, describe the structure and functionality of the viewer, and show how automated tools can be developed to manage the Java Native Interface code. The viewer allows arbitrary re-sectioning of the data and interactive browsing through the volume. With appropriately formatted data, for example as provided for the Electronic Atlas of the Developing Human Brain, a 3D surface view and anatomical browsing are available. The interface is developed in Java, with Java3D providing the 3D rendering. For efficiency, the image data are manipulated using the Woolz image-processing library, provided as a dynamically linked module for each machine architecture. We conclude that Java provides an appropriate environment for efficient development of these tools, and techniques exist to allow computationally efficient image-processing libraries to be integrated relatively easily.

  20. A digital library of radiology images.

    PubMed

    Kahn, Charles E

    2006-01-01

    A web-based virtual library of peer-reviewed radiological images was created for use in education and clinical decision support. Images were obtained from open-access content of five online radiology journals and one e-learning web site. Figure captions were indexed by Medical Subject Heading (MeSH) codes, imaging modality, and patient age and sex. This digital library provides a new, valuable online resource.

  1. IPLIB (Image processing library) user's manual

    NASA Technical Reports Server (NTRS)

    Faulcon, N. D.; Monteith, J. H.; Miller, K.

    1985-01-01

    IPLIB is a collection of HP FORTRAN 77 subroutines and functions that facilitate the use of a COMTAL image processing system driven by an HP-1000 computer. It is intended for programmers who want to use the HP 1000 to drive the COMTAL Vision One/20 system. It is assumed that the programmer knows HP 1000 FORTRAN 77 or at least one FORTRAN dialect. It is also assumed that the programmer has some familiarity with the COMTAL Vision One/20 system.

  2. A high-level 3D visualization API for Java and ImageJ.

    PubMed

    Schmid, Benjamin; Schindelin, Johannes; Cardona, Albert; Longair, Mark; Heisenberg, Martin

    2010-05-21

    Current imaging methods such as Magnetic Resonance Imaging (MRI), Confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. The reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. Our framework enables biomedical image software development to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.

  3. Wavelet library for constrained devices

    NASA Astrophysics Data System (ADS)

    Ehlers, Johan Hendrik; Jassim, Sabah A.

    2007-04-01

    The wavelet transform is a powerful tool for image and video processing, useful in a range of applications. This paper is concerned with the efficiency of a certain fast-wavelet-transform (FWT) implementation and several wavelet filters suited to constrained devices. Such constraints are typically found on mobile (cell) phones and personal digital assistants (PDAs), and can be a combination of limited memory, slow floating-point operations (compared to integer operations, most often as a result of no hardware support), and limited local storage. Yet these devices are burdened with demanding tasks such as processing a live video or audio signal through on-board capturing sensors. In this paper we present a new wavelet software library, HeatWave, that can be used efficiently for image/video processing and analysis tasks on mobile phones and PDAs. We demonstrate that HeatWave is suitable for real-time applications, with fine control and range to suit transform demands. We present experimental results to substantiate these claims. Finally, since this library is intended for real use, we considered several well-known differences among common embedded operating system platforms, such as a lack of common routines or functions, stack limitations, etc. This makes HeatWave suitable for a range of applications and research projects.
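
    A sketch of the kind of integer-to-integer lifting step that makes wavelet transforms cheap on devices without a floating-point unit: only shifts, adds, and integer rounding. Whether HeatWave uses exactly this filter is an assumption; the LeGall 5/3 lifting scheme shown here is the textbook choice under such constraints.

    ```python
    # Integer LeGall 5/3 lifting: perfect reconstruction with integer ops.
    def lift_53_forward(x):
        """One level of the 5/3 transform on an even-length integer list."""
        even, odd = x[0::2], x[1::2]
        n = len(odd)
        # Predict: detail = odd minus rounded average of neighbouring evens.
        d = [odd[i] - ((even[i] + even[min(i + 1, n - 1)]) >> 1)
             for i in range(n)]
        # Update: approximation = even plus rounded average of details.
        s = [even[i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(n)]
        return s, d

    def lift_53_inverse(s, d):
        n = len(d)
        even = [s[i] - ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(n)]
        odd = [d[i] + ((even[i] + even[min(i + 1, n - 1)]) >> 1)
               for i in range(n)]
        out = []
        for e, o in zip(even, odd):
            out += [e, o]
        return out

    sig = [10, 12, 14, 200, 15, 14, 12, 10]
    s, d = lift_53_forward(sig)
    assert lift_53_inverse(s, d) == sig   # exact integer reconstruction
    ```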

  4. The new Health Sciences Library at the State University of New York at Buffalo.

    PubMed Central

    Fabrizio, N; Huang, C K

    1988-01-01

    The new Health Sciences Library at the State University of New York at Buffalo is a harmonious and functional blend of the old and the new. The old is a renovated Georgian style building with formal rooms containing fireplaces, carved woodwork and English oak paneling. The new is a contemporary four-story addition. Through the arrangement of space and the interior design, the new library offers users easy access to services and resources; accommodates the heavy daily flow of users and library materials; provides an environment of comfort, quiet, and safety; and promotes efficient communication among all segments of the library staff. This was accomplished through sound architectural design which included close consultation with the library director and staff during the planning process. The new library is equipped to face the challenge of meeting the needs of biomedical education, research, and clinical programs of the institution and its constituents in the years to come. PMID:3370382

  5. Image Algebra Matlab language version 2.3 for image processing and compression research

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Hayden, Eric

    2010-08-01

    Image algebra is a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Image algebra was developed under DARPA and US Air Force sponsorship at University of Florida for over 15 years beginning in 1984. Image algebra has been implemented in a variety of programming languages designed specifically to support the development of image processing and computer vision algorithms and software. The University of Florida has been associated with development of the languages FORTRAN, Ada, Lisp, and C++. The latter implementation involved a class library, iac++, that supported image algebra programming in C++. Since image processing and computer vision are generally performed with operands that are array-based, the Matlab™ programming language is ideal for implementing the common subset of image algebra. Objects include sets and set operations, images and operations on images, as well as templates and image-template convolution operations. This implementation, called Image Algebra Matlab (IAM), has been found to be useful for research in data, image, and video compression, as described herein. Due to the widespread acceptance of the Matlab programming language in the computing community, IAM offers exciting possibilities for supporting a large group of users. The control over an object's computational resources provided to the algorithm designer by Matlab means that IAM programs can employ versatile representations for the operands and operations of the algebra, which are supported by the underlying libraries written in Matlab. In a previous publication, we showed how the functionality of IAC++ could be carried forth into a Matlab implementation, and provided practical details of a prototype implementation called IAM Version 1. In this paper, we further elaborate the purpose and structure of image algebra, then present a maturing implementation of Image Algebra Matlab called IAM Version 2.3, which extends the previous implementation of IAM to include polymorphic operations over different point sets, as well as recursive convolution operations and functional composition. We also show how image algebra and IAM can be employed in image processing and compression research, as well as algorithm development and analysis.

  6. Design and implementation of a cloud based lithography illumination pupil processing application

    NASA Astrophysics Data System (ADS)

    Zhang, Youbao; Ma, Xinghua; Zhu, Jing; Zhang, Fang; Huang, Huijie

    2017-02-01

    Pupil parameters are important for evaluating the quality of a lithography illumination system. In this paper, a cloud-based, full-featured pupil processing application is implemented. A web browser is used for the UI (User Interface); the websocket protocol and the JSON format are used for communication between the client and the server; and the computation runs on the server side, where the application integrates a variety of high-quality professional libraries, such as the image processing libraries libvips and ImageMagick and the automatic reporting system LaTeX. The cloud-based framework takes advantage of the server's superior computing power and rich software collections, and the program can run anywhere there is a modern browser thanks to its web UI design. Compared to the traditional software delivery model (purchased, licensed, shipped, downloaded, installed, maintained, and upgraded), the new cloud-based approach, which requires no installation and is easy to use and maintain, opens up a new way of working. Cloud-based applications are probably the future of software development.
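
    A hypothetical sketch of the client/server split described above: the browser UI sends a JSON request over a websocket and the server performs the computation. It uses the third-party `websockets` package (v11+ calling convention); the toy "analysis" (a mean intensity) stands in for the real pupil-parameter computation, which is not reproduced here.

    ```python
    # JSON-over-websocket compute server (toy analysis, hypothetical schema).
    import asyncio
    import json
    import websockets  # pip install websockets

    async def handle(ws):
        async for raw in ws:
            req = json.loads(raw)              # e.g. {"pixels": [0.1, 0.4, ...]}
            pixels = req.get("pixels", [])
            if pixels:
                result = {"mean_intensity": sum(pixels) / len(pixels)}
            else:
                result = {"error": "empty request"}
            await ws.send(json.dumps(result))  # reply in JSON as well

    async def main():
        async with websockets.serve(handle, "localhost", 8765):
            await asyncio.Future()             # serve forever

    if __name__ == "__main__":
        asyncio.run(main())
    ```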

  7. Hello World Deep Learning in Medical Imaging.

    PubMed

    Lakhani, Paras; Gray, Daniel L; Pett, Carl R; Nagy, Paul; Shih, George

    2018-05-03

    There is recent popularity in applying machine learning to medical imaging, notably deep learning, which has achieved state-of-the-art performance in image analysis and processing. The rapid adoption of deep learning may be attributed to the availability of machine learning frameworks and libraries to simplify their use. In this tutorial, we provide a high-level overview of how to build a deep neural network for medical image classification, and provide code that can help those new to the field begin their informatics projects.
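
    A "hello world" image classifier in the spirit of the tutorial, sketched with TensorFlow/Keras on random stand-in data; the tutorial's actual dataset, architecture, and framework choices may differ.

    ```python
    # Minimal binary image classifier on synthetic 64x64 grayscale data.
    import numpy as np
    import tensorflow as tf

    # Stand-in for a two-class medical imaging set.
    x = np.random.rand(100, 64, 64, 1).astype("float32")
    y = np.random.randint(0, 2, size=(100,))

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(64, 64, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # binary output
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(x, y, epochs=2, batch_size=16)   # trains, but on noise
    ```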

  8. The comparative effectiveness of conventional and digital image libraries.

    PubMed

    McColl, R I; Johnson, A

    2001-03-01

    Before introducing a hospital-wide image database to improve access, navigation and retrieval speed, a comparative study between a conventional slide library and a matching image database was undertaken to assess its relative benefits. Paired time trials and personal questionnaires revealed faster retrieval rates, higher image quality, and easier viewing for the pilot digital image database. Analysis of confidentiality, copyright and data protection exposed similar issues for both systems, thus concluding that the digital image database is a more effective library system. The authors suggest that in the future, medical images will be stored on large, professionally administered, centrally located file servers, allowing specialist image libraries to be tailored locally for individual users. The further integration of the database with web technology will enable cheap and efficient remote access for a wide range of users.

  9. Visual Image Transmission. An Examination of Electronic Delivery of Visual Images and Text from the Library to the Academic Community. Final Report.

    ERIC Educational Resources Information Center

    Smith, Merrill W.; And Others

    Designed to examine the potential for delivering images stored on videodisc and other optical media from the library to the classroom, the pilot project described in this report has focused on ways to transmit still color or black and white images from the library's collection to a constituent academic unit. This report discusses analog and…

  10. NOAA Photo Library

    Science.gov Websites

    Image: "...-Welt...." by Erasmus Francisci, 1680. Library Call Number QC859 .F72 1680. Image ID: wea02217.

  11. NOAA Photo Library

    Science.gov Websites

    Image: "...unserer Nider-Welt...." by Erasmus Francisci, 1680. Library Call Number QC859 .F72 1680. Image ID: wea02217.

  12. Image management research

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1988-01-01

    Two types of research issues are involved in image management systems with space station applications: image processing research and image perception research. The image processing issues are the traditional ones of digitizing, coding, compressing, storing, analyzing, and displaying, but with a new emphasis on the constraints imposed by the human perceiver. Two image coding algorithms have been developed that may increase the efficiency of image management systems (IMS). Image perception research involves a study of the theoretical and practical aspects of visual perception of electronically displayed images. Issues include how rapidly a user can search through a library of images, how to make this search more efficient, and how to present images in terms of resolution and split screens. Other issues include optimal interface to an IMS and how to code images in a way that is optimal for the human perceiver. A test-bed within which such issues can be addressed has been designed.

  13. Digital Images over the Internet: Rome Reborn at the Library of Congress.

    ERIC Educational Resources Information Center

    Valauskas, Edward J.

    1994-01-01

    Describes digital images of incunabula from the Library of the Vatican that are available over the Internet based on an actual exhibit that was displayed at the Library of Congress. Viewers, i.e., compression routines created to efficiently send color images, are explained; and other digital exhibits are described. (Contains three references.)…

  14. Library based x-ray scatter correction for dedicated cone beam breast CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Linxi; Zhu, Lei, E-mail: leizhu@gatech.edu

    Purpose: The image quality of dedicated cone beam breast CT (CBBCT) is limited by substantial scatter contamination, resulting in cupping artifacts and contrast-loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose a library-based software approach to suppress scatter on CBBCT images with high efficiency, accuracy, and reliability. Methods: The authors precompute a scatter library on simplified breast models with different sizes using the GEANT4-based Monte Carlo (MC) toolkit. The breast is approximated as a semiellipsoid with homogeneous glandular/adipose tissue mixture. For scatter correction on real clinical data, the authors estimate the breast size from a first-pass breast CT reconstruction and then select the corresponding scatter distribution from the library. The selected scatter distribution from simplified breast models is spatially translated to match the projection data from the clinical scan and is subtracted from the measured projection for effective scatter correction. The method performance was evaluated using 15 sets of patient data, with a wide range of breast sizes representing about 95% of general population. Spatial nonuniformity (SNU) and contrast to signal deviation ratio (CDR) were used as metrics for evaluation. Results: Since the time-consuming MC simulation for library generation is precomputed, the authors’ method efficiently corrects for scatter with minimal processing time. Furthermore, the authors find that a scatter library on a simple breast model with only one input parameter, i.e., the breast diameter, sufficiently guarantees improvements in SNU and CDR. For the 15 clinical datasets, the authors’ method reduces the average SNU from 7.14% to 2.47% in coronal views and from 10.14% to 3.02% in sagittal views. On average, the CDR is improved by a factor of 1.49 in coronal views and 2.12 in sagittal views. Conclusions: The library-based scatter correction does not require increase in radiation dose or hardware modifications, and it improves over the existing methods on implementation simplicity and computational efficiency. As demonstrated through patient studies, the authors’ approach is effective and stable, and is therefore clinically attractive for CBBCT imaging.

  15. Library based x-ray scatter correction for dedicated cone beam breast CT

    PubMed Central

    Shi, Linxi; Karellas, Andrew; Zhu, Lei

    2016-01-01

    Purpose: The image quality of dedicated cone beam breast CT (CBBCT) is limited by substantial scatter contamination, resulting in cupping artifacts and contrast-loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose a library-based software approach to suppress scatter on CBBCT images with high efficiency, accuracy, and reliability. Methods: The authors precompute a scatter library on simplified breast models with different sizes using the geant4-based Monte Carlo (MC) toolkit. The breast is approximated as a semiellipsoid with homogeneous glandular/adipose tissue mixture. For scatter correction on real clinical data, the authors estimate the breast size from a first-pass breast CT reconstruction and then select the corresponding scatter distribution from the library. The selected scatter distribution from simplified breast models is spatially translated to match the projection data from the clinical scan and is subtracted from the measured projection for effective scatter correction. The method performance was evaluated using 15 sets of patient data, with a wide range of breast sizes representing about 95% of general population. Spatial nonuniformity (SNU) and contrast to signal deviation ratio (CDR) were used as metrics for evaluation. Results: Since the time-consuming MC simulation for library generation is precomputed, the authors’ method efficiently corrects for scatter with minimal processing time. Furthermore, the authors find that a scatter library on a simple breast model with only one input parameter, i.e., the breast diameter, sufficiently guarantees improvements in SNU and CDR. For the 15 clinical datasets, the authors’ method reduces the average SNU from 7.14% to 2.47% in coronal views and from 10.14% to 3.02% in sagittal views. On average, the CDR is improved by a factor of 1.49 in coronal views and 2.12 in sagittal views. Conclusions: The library-based scatter correction does not require increase in radiation dose or hardware modifications, and it improves over the existing methods on implementation simplicity and computational efficiency. As demonstrated through patient studies, the authors’ approach is effective and stable, and is therefore clinically attractive for CBBCT imaging. PMID:27487870
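
    A sketch of the correction step described in the two records above: pick the precomputed scatter map whose breast diameter is closest to the estimate from the first-pass reconstruction, then subtract it from the measured projection. The library contents here are fabricated stand-ins, and the spatial-translation step of the real method is only stubbed.

    ```python
    # Library-based scatter subtraction (hypothetical library contents).
    import numpy as np

    # Precomputed library: breast diameter (cm) -> scatter map (toy values).
    scatter_library = {d: np.full((128, 128), 0.02 * d)
                       for d in (8, 10, 12, 14, 16)}

    def correct_projection(projection, estimated_diameter_cm):
        # Select the library entry with the nearest breast diameter.
        nearest = min(scatter_library,
                      key=lambda d: abs(d - estimated_diameter_cm))
        scatter = scatter_library[nearest]
        # Subtract scatter, clipping so corrected intensities stay physical.
        return np.clip(projection - scatter, a_min=0.0, a_max=None)

    measured = np.random.rand(128, 128) + 0.25
    corrected = correct_projection(measured, estimated_diameter_cm=11.3)
    print(corrected.mean() < measured.mean())   # scatter removed on average
    ```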

  16. Strain Library Imaging Protocol for high-throughput, automated single-cell microscopy of large bacterial collections arrayed on multiwell plates.

    PubMed

    Shi, Handuo; Colavin, Alexandre; Lee, Timothy K; Huang, Kerwyn Casey

    2017-02-01

    Single-cell microscopy is a powerful tool for studying gene functions using strain libraries, but it suffers from throughput limitations. Here we describe the Strain Library Imaging Protocol (SLIP), which is a high-throughput, automated microscopy workflow for large strain collections that requires minimal user involvement. SLIP involves transferring arrayed bacterial cultures from multiwell plates onto large agar pads using inexpensive replicator pins and automatically imaging the resulting single cells. The acquired images are subsequently reviewed and analyzed by custom MATLAB scripts that segment single-cell contours and extract quantitative metrics. SLIP yields rich data sets on cell morphology and gene expression that illustrate the function of certain genes and the connections among strains in a library. For a library arrayed on 96-well plates, image acquisition can be completed within 4 min per plate.
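
    The segmentation and metric-extraction step SLIP performs in custom MATLAB scripts can be sketched, for illustration only, with scikit-image in Python; the thresholding choice and the minimum cell area below are assumptions, not part of the published protocol.

        # Minimal sketch of SLIP-style single-cell segmentation and metric
        # extraction, using scikit-image in place of the custom MATLAB scripts.
        from skimage import io, filters, measure, morphology

        def segment_cells(image_path, min_area=50):
            """Segment cell contours and return per-cell morphology metrics."""
            img = io.imread(image_path, as_gray=True)
            # Otsu threshold separates cells from the agar-pad background
            # (dark cells on bright background assumed; adjust per modality).
            mask = img < filters.threshold_otsu(img)
            mask = morphology.remove_small_objects(mask, min_size=min_area)
            labels = measure.label(mask)
            # Extract quantitative metrics per cell, as SLIP does per strain.
            return [
                {"area": r.area,
                 "length": r.major_axis_length,
                 "width": r.minor_axis_length,
                 "mean_intensity": r.mean_intensity}
                for r in measure.regionprops(labels, intensity_image=img)
            ]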

  17. High Res at High Speed: Automated Delivery of High-Resolution Images from Digital Library Collections

    ERIC Educational Resources Information Center

    Westbrook, R. Niccole; Watkins, Sean

    2012-01-01

    As primary source materials in the library are digitized and made available online, the focus of related library services is shifting to include new and innovative methods of digital delivery via social media, digital storytelling, and community-based and consortial image repositories. Most images on the Web are not of sufficient quality for most…

  18. The IDL astronomy user's library

    NASA Technical Reports Server (NTRS)

    Landsman, W. B.

    1992-01-01

    IDL (Interactive Data Language) is a commercial programming, plotting, and image display language, which is widely used in astronomy. The IDL Astronomy User's Library is a central repository of over 400 astronomy-related IDL procedures accessible via anonymous FTP. The author will give an overview of the use of IDL within the astronomical community and discuss recent enhancements at the IDL astronomy library. These enhancements include a fairly complete I/O package for FITS images and tables, an image deconvolution package, an image mosaic package, and access to the IDL OpenWindows/Motif widgets interface. The IDL Astronomy Library is funded by NASA through the Astrophysics Software and Research Aids Program.

  19. TOASTing Your Images With Montage

    NASA Astrophysics Data System (ADS)

    Berriman, G. Bruce; Good, John

    2017-01-01

    The Montage image mosaic engine is a scalable toolkit for creating science-grade mosaics of FITS files, according to the user's specifications of coordinates, projection, sampling, and image rotation. It is written in ANSI-C and runs on all common *nix-based platforms. The code is freely available and is released with a BSD 3-clause license. Version 5 is a major upgrade to Montage, and provides support for creating images that can be consumed by the World Wide Telescope (WWT). Montage treats the TOAST sky tessellation scheme, used by the WWT, as a spherical projection like those in the WCStools library. Thus images in any projection can be converted to the TOAST projection by Montage’s reprojection services. These reprojections can be performed at scale on high-performance platforms and on desktops. WWT consumes PNG or JPEG files, organized according to WWT’s tiling and naming scheme. Montage therefore provides a set of dedicated modules to create the required files from FITS images that contain the TOAST projection. Version 5 has two other major features: it supports processing of HEALPix files to any projection in the WCStools library, and it can be built as a library that can be called from other languages, primarily Python. Website: http://montage.ipac.caltech.edu. GitHub download page: https://github.com/Caltech-IPAC/Montage. ASCL record: ascl:1010.036. DOI: dx.doi.org/10.5281/zenodo.49418. Montage is funded by the National Science Foundation under Grant Number ACI-1440620.

  20. Research as Repatriation.

    ERIC Educational Resources Information Center

    Plum, Terry; Smalley, Topsy N.

    1994-01-01

    Discussion of humanities research focuses on the humanist patron as author of the text. Highlights include the research process; style of expression; interpretation; multivocality; reflexivity; social validation; repatriation; the image of the library for the author; patterns of searching behavior; and reference librarian responses. (37…

  1. Large-scale automated image analysis for computational profiling of brain tissue surrounding implanted neuroprosthetic devices using Python.

    PubMed

    Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri

    2014-01-01

    In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.
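
    As an illustration of the orchestration pattern described (multi-threaded execution with logging of every processing step), here is a minimal Python sketch using only the standard library; the stage function and file names are hypothetical stand-ins, not FARSIGHT code.

        # Illustrative sketch (not the FARSIGHT code) of the orchestration
        # pattern described: run a pipeline stage over many image tiles in
        # parallel while logging every processing step.
        import logging
        from concurrent.futures import ThreadPoolExecutor, as_completed

        logging.basicConfig(filename="pipeline.log", level=logging.INFO,
                            format="%(asctime)s %(message)s")

        def process_tile(tile_path):
            """Placeholder for mosaicking/pre-processing/segmentation of one tile."""
            logging.info("start %s", tile_path)
            # ... call wrapped C++ modules here ...
            logging.info("done %s", tile_path)
            return tile_path

        tiles = ["tile_%03d.tif" % i for i in range(8)]  # hypothetical file names
        with ThreadPoolExecutor(max_workers=4) as pool:
            futures = [pool.submit(process_tile, t) for t in tiles]
            for f in as_completed(futures):
                f.result()  # re-raise any worker exception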

  2. IFLA General Conference, 1992. Presession Seminar on the Status, Reputation, and Image of the Library and Information Profession. Papers.

    ERIC Educational Resources Information Center

    International Federation of Library Associations and Institutions, London (England).

    Seven papers are presented from the presession of the 1992 International Federation of Library Associations and Institutions (IFLA) conference dealing with the status and reputation of the library and information professions, which continue to suffer from a poor image in society. Suggestions for improving the status of the library and information…

  3. Real-time image processing for non-contact monitoring of dynamic displacements using smartphone technologies

    NASA Astrophysics Data System (ADS)

    Min, Jae-Hong; Gelo, Nikolas J.; Jo, Hongki

    2016-04-01

    The smartphone application newly developed in this study, named RINO, allows measuring absolute dynamic displacements and processing them in real time using state-of-the-art smartphone technologies, such as a high-performance graphics processing unit (GPU), in addition to an already powerful CPU and memory, an embedded high-speed/high-resolution camera, and open-source computer vision libraries. A carefully designed color-patterned target and a user-adjustable crop filter enable accurate and fast image processing, allowing up to 240 fps for complete displacement calculation and real-time display. The performance of the developed smartphone application is experimentally validated, showing accuracy comparable with that of a conventional laser displacement sensor.
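
    A minimal sketch of the color-target tracking idea, using OpenCV's Python bindings rather than the RINO implementation; the HSV bounds, input file name, and mm-per-pixel calibration are illustrative assumptions.

        # Sketch of color-target displacement tracking in the spirit of the
        # application described; thresholds and scale are not from the paper.
        import cv2
        import numpy as np

        LOWER, UPPER = np.array([40, 80, 80]), np.array([80, 255, 255])  # green-ish target
        MM_PER_PIXEL = 0.2  # assumed calibration from the target geometry

        cap = cv2.VideoCapture("shake_test.mp4")  # hypothetical input video
        ref = None
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            m = cv2.moments(cv2.inRange(hsv, LOWER, UPPER))
            if m["m00"] == 0:
                continue  # target not found in this frame
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # target centroid
            if ref is None:
                ref = (cx, cy)  # first frame defines the zero position
            print((cx - ref[0]) * MM_PER_PIXEL, (cy - ref[1]) * MM_PER_PIXEL)
        cap.release()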

  4. The Heinz Electronic Library Interactive Online System (HELIOS): Building a Digital Archive Using Imaging, OCR, and Natural Language Processing Technologies.

    ERIC Educational Resources Information Center

    Galloway, Edward A.; Michalek, Gabrielle V.

    1995-01-01

    Discusses the conversion project of the congressional papers of Senator John Heinz into digital format and the provision of electronic access to these papers by Carnegie Mellon University. Topics include collection background, project team structure, document processing, scanning, use of optical character recognition software, verification…

  5. A CT and MRI scan to MCNP input conversion program.

    PubMed

    Van Riper, Kenneth A

    2005-01-01

    We describe a new program to read a sequence of tomographic scans and prepare the geometry and material sections of an MCNP input file. Image processing techniques include contrast controls and mapping of grey scales to colour. The user interface provides several tools with which the user can associate a range of image intensities to an MCNP material. Materials are loaded from a library. A separate material assignment can be made to a pixel intensity or range of intensities when that intensity dominates the image boundaries; this material is assigned to all pixels with that intensity contiguous with the boundary. Material fractions are computed in a user-specified voxel grid overlaying the scans. New materials are defined by mixing the library materials using the fractions. The geometry can be written as an MCNP lattice or as individual cells. A combination algorithm can be used to join neighbouring cells with the same material.
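
    The two core steps described, mapping intensity ranges to materials and computing material fractions on a user-specified voxel grid, might look roughly like the following numpy sketch; the intensity ranges and material names are hypothetical, and the real program works from its material library and user interface.

        # Sketch of intensity-to-material mapping and voxel-grid material
        # fractions; ranges and material IDs are illustrative only.
        import numpy as np

        ranges = [(-1000, -200, "air"), (-200, 100, "soft_tissue"),
                  (100, 3000, "bone")]  # (lo, hi, material), user-defined

        def material_map(ct):
            """Assign a material index to every pixel by intensity range."""
            out = np.zeros(ct.shape, dtype=np.int8)
            for idx, (lo, hi, _) in enumerate(ranges):
                out[(ct >= lo) & (ct < hi)] = idx
            return out

        def voxel_fractions(mat, factor=4):
            """Fraction of each material inside factor^3 voxel blocks."""
            nz, ny, nx = (s // factor for s in mat.shape)
            m = mat[:nz * factor, :ny * factor, :nx * factor]
            blocks = m.reshape(nz, factor, ny, factor, nx, factor)
            fracs = [(blocks == i).mean(axis=(1, 3, 5)) for i in range(len(ranges))]
            return np.stack(fracs, axis=-1)  # shape (nz, ny, nx, n_materials)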

  6. Design and clinical evaluation of a high-capacity digital image archival library and high-speed network for the replacement of cinefilm in the cardiac angiography environment

    NASA Astrophysics Data System (ADS)

    Cusma, Jack T.; Spero, Laurence A.; Groshong, Bennett R.; Cho, Teddy; Bashore, Thomas M.

    1993-09-01

    An economical and practical digital solution for the replacement of 35 mm cine film as the archive media in the cardiac x-ray imaging environment has remained lacking to date due to the demanding requirements of high capacity, high acquisition rate, high transfer rate, and a need for application in a distributed environment. A clinical digital image library and network based on the D2 digital video format has been installed in the Duke University Cardiac Catheterization Laboratory. The system architecture includes a central image library with digital video recorders and robotic tape retrieval, three acquisition stations, and remote review stations connected via a serial image network. The library has a capacity for over 20,000 Gigabytes of uncompressed image data, equivalent to records for approximately 20,000 patients. Image acquisition in the clinical laboratories is via a real-time digital interface between the digital angiography system and a local digital recorder. Images are transferred to the library over the serial network at a rate of 14.3 Mbytes/sec and permanently stored for later review. The image library and network are currently undergoing a clinical comparison with cine film for visual and quantitative assessment of coronary artery disease. At the conclusion of the evaluation, the configuration will be expanded to include four additional catheterization laboratories and remote review stations throughout the hospital.

  7. Digital image processing of bone - Problems and potentials

    NASA Technical Reports Server (NTRS)

    Morey, E. R.; Wronski, T. J.

    1980-01-01

    The development of a digital image processing system for bone histomorphometry and fluorescent marker monitoring is discussed. The system in question is capable of making measurements of UV or light microscope features on a video screen with either video or computer-generated images, and comprises a microscope, low-light-level video camera, video digitizer and display terminal, color monitor, and PDP 11/34 computer. Capabilities demonstrated in the analysis of an undecalcified rat tibia include the measurement of perimeter and total bone area, and the generation of microscope images, false color images, digitized images and contoured images for further analysis. Software development will be based on an existing software library, specifically the mini-VICAR system developed at JPL. It is noted that the potential of the system in terms of speed and reliability far exceeds any problems associated with hardware and software development.

  8. Associative architecture for image processing

    NASA Astrophysics Data System (ADS)

    Adar, Rutie; Akerib, Avidan

    1997-09-01

    This article presents a new generation of parallel processing architecture for real-time image processing. The approach is implemented in a real-time image processor chip, called the XiumTM-2, based on combining a fully associative array, which provides the parallel engine, with a serial RISC core on the same die. The architecture is fully programmable and can implement a wide range of color image processing, computer vision, and media processing functions in real time. The associative part of the chip is based on patent-pending methodology of Associative Computing Ltd. (ACL), which condenses 2048 associative processors, each of 128 'intelligent' bits. Each bit can be a processing bit or a memory bit. At only 33 MHz, in a 0.6 micron manufacturing process, the chip has a computational power of 3 billion ALU operations per second and 66 billion string search operations per second. The fully programmable nature of the XiumTM-2 chip enables developers to use ACL tools to write their own proprietary algorithms combined with existing image processing and analysis functions from ACL's extended set of libraries.

  9. Hyperspectral Imaging Sensors and the Marine Coastal Zone

    NASA Technical Reports Server (NTRS)

    Richardson, Laurie L.

    2000-01-01

    Hyperspectral imaging sensors greatly expand the potential of remote sensing to assess, map, and monitor marine coastal zones. Each pixel in a hyperspectral image contains an entire spectrum of information. As a result, hyperspectral image data can be processed in two very different ways: by image classification techniques, to produce mapped outputs of features in the image on a regional scale; and by spectral analysis of the data embedded within each pixel of the image. The latter is particularly useful in marine coastal zones because of the spectral complexity of suspended as well as benthic features found in these environments. Spectral-based analysis of hyperspectral (AVIRIS) imagery was carried out to investigate a marine coastal zone of South Florida, USA. Florida Bay is a phytoplankton-rich estuary characterized by taxonomically distinct phytoplankton assemblages and extensive seagrass beds. End-member spectra were extracted from AVIRIS image data corresponding to ground-truth sample stations and well-known field sites. Spectral libraries were constructed from the AVIRIS end-member spectra and used to classify images using the Spectral Angle Mapper (SAM) algorithm, a spectral-based approach that compares the spectrum in each pixel of an image with each spectrum in a spectral library. Using this approach, different phytoplankton assemblages containing diatoms, cyanobacteria, and green microalgae, as well as benthic communities (seagrasses), were mapped.
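
    The SAM comparison described above reduces to computing the angle between spectra, the arccos of their normalized dot product, and assigning each pixel to the closest library end-member. A minimal numpy sketch:

        # Minimal numpy implementation of the Spectral Angle Mapper (SAM)
        # classification step: each pixel spectrum is assigned to the library
        # end-member with the smallest spectral angle.
        import numpy as np

        def sam_classify(cube, library):
            """cube: (rows, cols, bands); library: (n_endmembers, bands)."""
            pixels = cube.reshape(-1, cube.shape[-1])
            # Normalize so the dot product equals the cosine of the angle.
            p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
            l = library / np.linalg.norm(library, axis=1, keepdims=True)
            angles = np.arccos(np.clip(p @ l.T, -1.0, 1.0))  # (n_pixels, n_endmembers)
            return angles.argmin(axis=1).reshape(cube.shape[:2])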

  10. Appropriateness of the food-pics image database for experimental eating and appetite research with adolescents.

    PubMed

    Jensen, Chad D; Duraccio, Kara M; Barnett, Kimberly A; Stevens, Kimberly S

    2016-12-01

    Research examining effects of visual food cues on appetite-related brain processes and eating behavior has proliferated. Recently investigators have developed food image databases for use across experimental studies examining appetite and eating behavior. The food-pics image database represents a standardized, freely available image library originally validated in a large sample primarily comprised of adults. The suitability of the images for use with adolescents has not been investigated. The aim of the present study was to evaluate the appropriateness of the food-pics image library for appetite and eating research with adolescents. Three hundred and seven adolescents (ages 12-17) provided ratings of recognizability, palatability, and desire to eat, for images from the food-pics database. Moreover, participants rated the caloric content (high vs. low) and healthiness (healthy vs. unhealthy) of each image. Adolescents rated approximately 75% of the food images as recognizable. Approximately 65% of recognizable images were correctly categorized as high vs. low calorie and 63% were correctly classified as healthy vs. unhealthy in 80% or more of image ratings. These results suggest that a smaller subset of the food-pics image database is appropriate for use with adolescents. With some modifications to included images, the food-pics image database appears to be appropriate for use in experimental appetite and eating-related research conducted with adolescents. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Electronic Document Supply Systems.

    ERIC Educational Resources Information Center

    Cawkell, A. E.

    1991-01-01

    Describes electronic document delivery systems used by libraries and document image processing systems used for business purposes. Topics discussed include technical specifications; analogue read-only laser videodiscs; compact discs and CD-ROM; WORM; facsimile; ADONIS (Article Delivery over Network Information System); DOCDEL; and systems at the…

  12. Developing a culture of lifelong learning in a library environment.

    PubMed Central

    Giuse, N B; Kafantaris, S R; Huber, J T; Lynch, F; Epelbaum, M; Pfeiffer, J

    1999-01-01

    Between 1995 and 1996, the Annette and Irwin Eskind Biomedical Library (EBL) at Vanderbilt University Medical Center (VUMC) radically revised the model of service it provides to the VUMC community. An in-depth training program was developed for librarians, who began to migrate to clinical settings and establish clinical librarianship and information brokerage services beyond the library's walls. To ensure that excellent service would continue within the library, EBL's training program was adapted for library assistants, providing them with access to information about a wide variety of work roles and processes over a four to eight-month training period. Concurrently, customer service areas were reorganized so that any question--whether reference or circulation--could be answered at any of four service points, eliminating the practice of passing customers from person to person between the reference and circulation desks. To provide an incentive for highly trained library assistants to remain at EBL, management and library assistants worked together to redesign the career pathway based on defined stages of achievement, self-directed participation in library-wide projects, and demonstrated commitment to lifelong learning. Education and training were the fundamental principles at the center of all this activity. PMID:9934526

  13. Betty Petersen Memorial Library

    Science.gov Websites

    Betty Petersen Memorial Library is a jointly funded branch of the NOAA Central Library, located at 5830 University Research Court, Room 1650.

  14. Phage display and molecular imaging: expanding fields of vision in living subjects.

    PubMed

    Cochran, R; Cochran, Frank

    2010-01-01

    In vivo molecular imaging enables non-invasive visualization of biological processes within living subjects, and holds great promise for diagnosis and monitoring of disease. The ability to create new agents that bind to molecular targets and deliver imaging probes to desired locations in the body is critically important to further advance this field. To address this need, phage display, an established technology for the discovery and development of novel binding agents, is increasingly becoming a key component of many molecular imaging research programs. This review discusses the expanding role played by phage display in the field of molecular imaging with a focus on in vivo applications. Furthermore, new methodological advances in phage display that can be directly applied to the discovery and development of molecular imaging agents are described. Various phage library selection strategies are summarized and compared, including selections against purified target, intact cells, and ex vivo tissue, plus in vivo homing strategies. An outline of the process for converting polypeptides obtained from phage display library selections into successful in vivo imaging agents is provided, including strategies to optimize in vivo performance. Additionally, the use of phage particles as imaging agents is also described. In the latter part of the review, a survey of phage-derived in vivo imaging agents is presented, and important recent examples are highlighted. Other imaging applications are also discussed, such as the development of peptide tags for site-specific protein labeling and the use of phage as delivery agents for reporter genes. The review concludes with a discussion of how phage display technology will continue to impact both basic science and clinical applications in the field of molecular imaging.

  15. Imaging Technology in Libraries: Photo CD Offers New Possibilities.

    ERIC Educational Resources Information Center

    Beiser, Karl

    1993-01-01

    Describes Kodak's Photo CD technology, a format for the storage and retrieval of photographic images in electronic form. Highlights include current and future Photo CD formats; computer imaging technology; ownership issues; hardware for using Photo CD; software; library and information center applications, including image collections and…

  16. Parallel Processing of Images in Mobile Devices using BOINC

    NASA Astrophysics Data System (ADS)

    Curiel, Mariela; Calle, David F.; Santamaría, Alfredo S.; Suarez, David F.; Flórez, Leonardo

    2018-04-01

    Medical image processing helps health professionals make decisions for the diagnosis and treatment of patients. Since some image processing algorithms require substantial amounts of resources, one can take advantage of distributed or parallel computing. A mobile grid can be an adequate computing infrastructure for this problem. A mobile grid is a grid that includes mobile devices as resource providers. In a previous step of this research, we selected BOINC as the infrastructure to build our mobile grid. However, parallel processing of images on mobile devices poses at least two important challenges: the execution of standard image processing libraries and obtaining adequate performance compared to desktop computer grids. At the time we started our research, the use of BOINC on mobile devices also involved two issues: (a) executing programs on mobile devices required modifying their code to insert calls to the BOINC API, and (b) dividing the image among the mobile devices, as well as merging the results, required additional code in some BOINC components. This article presents answers to these four challenges.
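
    The image-division challenge mentioned above amounts to splitting an image into work units and merging the processed pieces; a minimal numpy sketch (the strip count and the processing stand-in are arbitrary):

        # Sketch of the split/merge step that work-unit generation needs:
        # divide an image into horizontal strips for the devices, then
        # reassemble the processed strips in order.
        import numpy as np

        def split_strips(img, n):
            """Cut an image into n nearly equal horizontal strips."""
            return np.array_split(img, n, axis=0)

        def merge_strips(strips):
            """Reassemble processed strips in their original order."""
            return np.concatenate(strips, axis=0)

        img = np.random.rand(1024, 768)          # stand-in for a medical image
        strips = split_strips(img, 4)            # one work unit per device
        processed = [s * 2.0 for s in strips]    # stand-in for device-side work
        assert merge_strips(processed).shape == img.shape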

  17. Augmenting atlas-based liver segmentation for radiotherapy treatment planning by incorporating image features proximal to the atlas contours

    NASA Astrophysics Data System (ADS)

    Li, Dengwang; Liu, Li; Chen, Jinhu; Li, Hongsheng; Yin, Yong; Ibragimov, Bulat; Xing, Lei

    2017-01-01

    Atlas-based segmentation utilizes a library of previously delineated contours of similar cases to facilitate automatic segmentation. The problem, however, remains challenging because of the limited information carried by the contours in the library. In this study, we developed a narrow-shell strategy to enhance the information of each contour in the library and to improve the accuracy of the existing atlas-based approach. This study presents a new concept for atlas-based segmentation: instead of using the complete volume of the target organs, only information along the organ contours from the atlas images is used to guide segmentation of the new image. In setting up the atlas library, we included not only the coordinates of contour points, but also the image features adjacent to the contour. In this work, 139 CT images with normal-appearing livers collected for radiotherapy treatment planning were used to construct the library. The CT images within the library were first registered to each other using affine registration. A nonlinear narrow shell was generated alongside the object contours of the registered images. Matching voxels were selected inside the common narrow-shell image features of a library case and a new case using a speeded-up robust features (SURF) strategy. A deformable registration was then performed using a thin plate splines (TPS) technique. The contour associated with the library case was propagated automatically onto the new image by exploiting the deformation field vectors. The liver contour was finally obtained by employing level set based energy optimization within the narrow shell. The performance of the proposed method was evaluated by quantitatively comparing the auto-segmentation results with those delineated by physicians. A novel atlas-based segmentation technique with inclusion of neighborhood image features through the introduction of a narrow shell surrounding the target objects was established. Application of the technique to 30 liver cases suggested that it can reliably segment livers from CT, 4D-CT, and CBCT images with little human interaction. The accuracy and speed of the proposed method are quantitatively validated by comparing automatic segmentation results with the manual delineation results. The Jaccard similarity metric between the automatically generated liver contours and the physician-delineated results is on average 90%-96% for planning images. Incorporation of image features into the library contours improves the currently available atlas-based auto-contouring techniques and provides a clinically practical solution for auto-segmentation. The proposed narrow-shell atlas-based method achieves efficient automatic liver contour propagation for CT, 4D-CT, and CBCT images for subsequent treatment planning and should find widespread application in future treatment planning systems.

  18. Augmenting atlas-based liver segmentation for radiotherapy treatment planning by incorporating image features proximal to the atlas contours.

    PubMed

    Li, Dengwang; Liu, Li; Chen, Jinhu; Li, Hongsheng; Yin, Yong; Ibragimov, Bulat; Xing, Lei

    2017-01-07

    Atlas-based segmentation utilizes a library of previously delineated contours of similar cases to facilitate automatic segmentation. The problem, however, remains challenging because of the limited information carried by the contours in the library. In this study, we developed a narrow-shell strategy to enhance the information of each contour in the library and to improve the accuracy of the existing atlas-based approach. This study presents a new concept for atlas-based segmentation: instead of using the complete volume of the target organs, only information along the organ contours from the atlas images is used to guide segmentation of the new image. In setting up the atlas library, we included not only the coordinates of contour points, but also the image features adjacent to the contour. In this work, 139 CT images with normal-appearing livers collected for radiotherapy treatment planning were used to construct the library. The CT images within the library were first registered to each other using affine registration. A nonlinear narrow shell was generated alongside the object contours of the registered images. Matching voxels were selected inside the common narrow-shell image features of a library case and a new case using a speeded-up robust features (SURF) strategy. A deformable registration was then performed using a thin plate splines (TPS) technique. The contour associated with the library case was propagated automatically onto the new image by exploiting the deformation field vectors. The liver contour was finally obtained by employing level set based energy optimization within the narrow shell. The performance of the proposed method was evaluated by quantitatively comparing the auto-segmentation results with those delineated by physicians. A novel atlas-based segmentation technique with inclusion of neighborhood image features through the introduction of a narrow shell surrounding the target objects was established. Application of the technique to 30 liver cases suggested that it can reliably segment livers from CT, 4D-CT, and CBCT images with little human interaction. The accuracy and speed of the proposed method are quantitatively validated by comparing automatic segmentation results with the manual delineation results. The Jaccard similarity metric between the automatically generated liver contours and the physician-delineated results is on average 90%-96% for planning images. Incorporation of image features into the library contours improves the currently available atlas-based auto-contouring techniques and provides a clinically practical solution for auto-segmentation. The proposed narrow-shell atlas-based method achieves efficient automatic liver contour propagation for CT, 4D-CT, and CBCT images for subsequent treatment planning and should find widespread application in future treatment planning systems.
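
    The Jaccard similarity quoted in both records is, for binary segmentation masks, simply intersection over union; a short numpy sketch for clarity:

        # Jaccard index of an automatic mask against a manual delineation.
        import numpy as np

        def jaccard(auto_mask, manual_mask):
            """Jaccard index of two boolean segmentation masks."""
            a, b = auto_mask.astype(bool), manual_mask.astype(bool)
            inter = np.logical_and(a, b).sum()
            union = np.logical_or(a, b).sum()
            return inter / union if union else 1.0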

  19. The Keck keyword layer

    NASA Technical Reports Server (NTRS)

    Conrad, A. R.; Lupton, W. F.

    1992-01-01

    Each Keck instrument presents a consistent software view to the user interface programmer. The view consists of a small library of functions, which are identical for all instruments, and a large set of keywords that vary from instrument to instrument. All knowledge of the underlying task structure is hidden from the application programmer by the keyword layer. Image capture software uses the same function library to collect data for the image header. Because the image capture software and the instrument control software are built on top of the same keyword layer, a given observation can be 'replayed' by extracting keyword-value pairs from the image header and passing them back to the control system. The keyword layer features non-blocking as well as blocking I/O. A non-blocking keyword write operation (such as setting a filter position) specifies a callback to be invoked when the operation is complete. A non-blocking keyword read operation specifies a callback to be invoked whenever the keyword changes state. The keyword-callback style meshes well with the widget-callback style commonly used in X window programs. The first keyword library was built for the two Keck optical instruments. More recently, keyword libraries have been developed for the infrared instruments and for telescope control. Although the underlying mechanisms used for inter-process communication by each of these systems vary widely (Lick MUSIC, Sun RPC, and direct socket I/O, respectively), a basic user interface has been written that can be used with any of these systems. Since the keyword libraries are bound to user interface programs dynamically at run time, only a single set of user interface executables is needed. For example, the same program, 'xshow', can be used to display continuously the telescope's position, the time left in an instrument's exposure, or both values simultaneously. Less generic tools that operate on specific keywords, for example an X display that controls optical instrument exposures, have also been written using the keyword layer.
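
    The keyword layer's write-with-completion-callback and watch-on-state-change semantics can be illustrated with a small Python sketch; the class and method names below are hypothetical, and the real layer hides per-instrument inter-process communication behind the same calls.

        # Hedged sketch of the keyword-layer idea: a uniform read/write API
        # with callbacks fired on completion and on state change.
        class KeywordLayer:
            def __init__(self):
                self._values = {}
                self._watchers = {}  # keyword -> list of callbacks

            def write(self, keyword, value, callback=None):
                """Non-blocking write; 'callback' fires when the op completes."""
                self._values[keyword] = value          # real layer: send to task
                for cb in self._watchers.get(keyword, []):
                    cb(keyword, value)                 # state-change notification
                if callback:
                    callback(keyword)                  # completion notification

            def read(self, keyword):
                """Blocking read of the current keyword value."""
                return self._values[keyword]

            def watch(self, keyword, callback):
                """Register a callback invoked on every state change."""
                self._watchers.setdefault(keyword, []).append(callback)

        kw = KeywordLayer()
        kw.watch("FILTER", lambda k, v: print(k, "->", v))
        kw.write("FILTER", "B", callback=lambda k: print(k, "move complete"))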

  20. Cognitive engineering of film library transition from film medium to digital environment in a Texas teaching hospital.

    PubMed

    Koperwhats, Martha A; Chang, Wei-Chih; Xiao, Jianguo

    2002-01-01

    Digital imaging technology promises efficient, economical, and fast service for patient care, but the challenges are great in the transition from film to a filmless (digital) environment. This change has a significant impact on the film library's personnel (film librarians), who play a leading role in the storage, classification, and retrieval of images. The objectives of this project were to study film library errors and the usability of a physical computerized system that could not be changed, while developing an intervention to reduce errors and test the usability of the intervention. Cognitive and human factors analyses were used to evaluate human-computer interaction. A workflow analysis was performed to understand the film and digital imaging processes. User and task analyses were applied to account for all behaviors involved in interaction with the system. A heuristic evaluation was used to probe the usability issues in the picture archiving and communication systems (PACS) modules. Simplified paper-based instructions were designed to familiarize the film librarians with the digital system. A usability survey evaluated the effectiveness of the instruction. The user and task analyses indicated that different users faced challenges based on their computer literacy, education, roles, and frequency of use of diagnostic imaging. The workflow analysis showed that the approaches to using the digital library differ among the various departments. The heuristic evaluation of the PACS modules showed the human-computer interface to have usability issues that prevented easy operation. Simplified instructions were designed for operation of the modules. Usability surveys conducted before and after revision of the instructions showed that performance improved. Cognitive and human factor analysis can help film librarians and other users adapt to the filmless system. Use of cognitive science tools will aid in successful transition of the film library from a film environment to a digital environment.

  1. The preprocessed connectomes project repository of manually corrected skull-stripped T1-weighted anatomical MRI data.

    PubMed

    Puccio, Benjamin; Pooley, James P; Pellman, John S; Taverna, Elise C; Craddock, R Cameron

    2016-10-25

    Skull-stripping is the procedure of removing non-brain tissue from anatomical MRI data. This procedure can be useful for calculating brain volume and for improving the quality of other image processing steps. Developing new skull-stripping algorithms and evaluating their performance requires gold standard data from a variety of different scanners and acquisition methods. We complement existing repositories with manually corrected brain masks for 125 T1-weighted anatomical scans from the Nathan Kline Institute Enhanced Rockland Sample Neurofeedback Study. Skull-stripped images were obtained using a semi-automated procedure that involved skull-stripping the data using the brain extraction based on nonlocal segmentation technique (BEaST) software, and manually correcting the worst results. Corrected brain masks were added into the BEaST library and the procedure was repeated until acceptable brain masks were available for all images. In total, 85 of the skull-stripped images were hand-edited and 40 were deemed to not need editing. The results are brain masks for the 125 images along with a BEaST library for automatically skull-stripping other data. Skull-stripped anatomical images from the Neurofeedback sample are available for download from the Preprocessed Connectomes Project. The resulting brain masks can be used by researchers to improve preprocessing of the Neurofeedback data, as training and testing data for developing new skull-stripping algorithms, and for evaluating the impact on other aspects of MRI preprocessing. We have illustrated the utility of these data as a reference for comparing various automatic methods and evaluated the performance of the newly created library on independent data.

  2. Practical Approach for Hyperspectral Image Processing in Python

    NASA Astrophysics Data System (ADS)

    Annala, L.; Eskelinen, M. A.; Hämäläinen, J.; Riihinen, A.; Pölönen, I.

    2018-04-01

    Python is a very popular programming language among data scientists around the world, and it can also be used in hyperspectral data analysis. There are some toolboxes designed for spectral imaging, such as Spectral Python and HyperSpy, but there is a need for an analysis pipeline that is easy to use and agile enough for different solutions. We propose a Python pipeline built on the packages xarray, Holoviews, and scikit-learn. We have also developed tools of our own: MaskAccessor, VisualisorAccessor, and a spectral index library. They likewise fulfill our goal of easy and agile data processing. In this paper we present our processing pipeline and demonstrate it in practice.
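
    The accessor tools named above build on xarray's accessor-registration API; the MaskAccessor body below is a guess at the idea for illustration, not the authors' implementation.

        # Minimal example of attaching a named helper namespace to xarray
        # DataArrays; only the registration decorator is the real xarray API.
        import numpy as np
        import xarray as xr

        @xr.register_dataarray_accessor("mask")
        class MaskAccessor:
            def __init__(self, da):
                self._da = da  # DataArray with a trailing 'band' dimension

            def threshold(self, band, value):
                """Boolean mask of pixels whose given band exceeds 'value'."""
                return self._da.sel(band=band, method="nearest") > value

        cube = xr.DataArray(np.random.rand(64, 64, 100),
                            dims=("y", "x", "band"),
                            coords={"band": np.linspace(400, 1000, 100)})
        veg = cube.mask.threshold(band=800.0, value=0.5)  # NIR-bright pixels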

  3. Content-based retrieval of historical Ottoman documents stored as textual images.

    PubMed

    Saykol, Ediz; Sinop, Ali Kemal; Güdükbay, Ugur; Ulusoy, Ozgür; Cetin, A Enis

    2004-03-01

    There is an accelerating demand to access the visual content of documents stored in historical and cultural archives. The availability of electronic imaging tools and effective image processing techniques makes it feasible to process the multimedia data in large databases. In this paper, a framework for content-based retrieval of historical documents in the Ottoman Empire archives is presented. The documents are stored as textual images, which are compressed by constructing a library of symbols occurring in a document; the symbols in the original image are then replaced with pointers into the codebook to obtain a compressed representation of the image. Features in the wavelet and spatial domains, based on the angular and distance span of shapes, are used to extract the symbols. To perform content-based retrieval in historical archives, a query is specified as a rectangular region in an input image and the same symbol-extraction process is applied to the query region. The queries are processed against the codebook of documents, and the query images are identified in the resulting documents using the pointers in the textual images. The querying process does not require decompression of the images. The new content-based retrieval framework is also applicable to many other document archives using different scripts.

  4. Spectral Reconstruction for Obtaining Virtual Hyperspectral Images

    NASA Astrophysics Data System (ADS)

    Perez, G. J. P.; Castro, E. C.

    2016-12-01

    Hyperspectral sensors have demonstrated their capability to identify materials and detect processes in a satellite scene. However, the availability of hyperspectral images is limited due to the high development cost of these sensors. Currently, most of the readily available data are from multi-spectral instruments. Spectral reconstruction is an alternative method to address the need for hyperspectral information. The spectral reconstruction technique has been shown to provide quick and accurate detection of defects in an integrated circuit, to recover damaged parts of frescoes, and to aid in converting a microscope into an imaging spectrometer. By using several spectral bands together with a spectral library, a spectrum acquired by a sensor can be expressed as a linear superposition of elementary signals. In this study, spectral reconstruction is used to estimate the spectra of different surfaces imaged by Landsat 8. Four atmospherically corrected surface reflectance bands of Landsat 8, three visible (499 nm, 585 nm, 670 nm) and one near-infrared (872 nm), and a spectral library of ground elements acquired from the United States Geological Survey (USGS) are used. The spectral library is limited to the 420-1020 nm spectral range and is interpolated at one nanometer resolution. Singular Value Decomposition (SVD) is used to calculate the basis spectra, which are then applied to reconstruct the spectrum. The spectral reconstruction is applied to test cases within the library consisting of vegetation communities. The technique was successful in reconstructing a hyperspectral signal with an error of less than 12% for most of the test cases. Hence, this study demonstrates the potential of simulating information at any desired wavelength, creating a virtual hyperspectral sensor without the need for additional satellite bands.
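
    A numpy sketch of the reconstruction pipeline as described: compute basis spectra from the SVD of the library, fit them to the four measured band reflectances, and synthesize the full spectrum. Array shapes and the number of basis vectors are assumptions:

        # Sketch of SVD-based spectral reconstruction from a few bands.
        import numpy as np

        def build_basis(library, k=4):
            """library: (n_materials, n_wavelengths). Returns k basis spectra."""
            _, _, vt = np.linalg.svd(library, full_matrices=False)
            return vt[:k]                      # (k, n_wavelengths)

        def reconstruct(band_values, band_idx, basis):
            """Fit the basis at the measured wavelengths, extrapolate the rest."""
            a = basis[:, band_idx].T           # (n_bands, k) design matrix
            coeffs, *_ = np.linalg.lstsq(a, band_values, rcond=None)
            return coeffs @ basis              # virtual hyperspectral spectrum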

  5. Digital Image Access & Retrieval.

    ERIC Educational Resources Information Center

    Heidorn, P. Bryan, Ed.; Sandore, Beth, Ed.

    Recent technological advances in computing and digital imaging technology have had immediate and permanent consequences for visual resource collections. Libraries are involved in organizing and managing large visual resource collections. The central challenges in working with digital image collections mirror those that libraries have sought to…

  6. Public Relations in Special Libraries.

    ERIC Educational Resources Information Center

    Rutkowski, Hollace Ann; And Others

    1991-01-01

    This theme issue includes 11 articles on public relations (PR) in special libraries. Highlights include PR at the Special Libraries Association (SLA); sources for marketing research for libraries; developing a library image; sample PR releases; brand strategies for libraries; case studies; publicizing a consortium; and a bibliography of pertinent…

  7. OsiriX: an open-source software for navigating in multidimensional DICOM images.

    PubMed

    Rosset, Antoine; Spadola, Luca; Ratib, Osman

    2004-09-01

    A multidimensional image navigation and display software was designed for display and interpretation of large sets of multidimensional and multimodality images such as combined PET-CT studies. The software is developed in Objective-C on a Macintosh platform under the MacOS X operating system using the GNUstep development environment. It also benefits from the extremely fast and optimized 3D graphics capabilities of the OpenGL graphics standard, which is widely used in computer games and takes advantage of any available hardware graphics accelerator boards. In the design of the software, special attention was given to adapting the user interface to the specific and complex tasks of navigating through large sets of image data. An interactive jog-wheel device, widely used in the video and movie industry, was implemented to allow users to navigate in the different dimensions of an image set much faster than with a traditional mouse or on-screen cursors and sliders. The program can easily be adapted for very specific tasks that require a limited number of functions, by adding and removing tools from the program's toolbar and avoiding an overwhelming number of unnecessary tools and functions. The processing and image rendering tools of the software are based on the open-source libraries ITK and VTK. This ensures that all new developments in image processing that emerge from other academic institutions using these libraries can be directly ported to the OsiriX program. OsiriX is provided free of charge under the GNU open-source licensing agreement at http://homepage.mac.com/rossetantoine/osirix.

  8. Massively parallel data processing for quantitative total flow imaging with optical coherence microscopy and tomography

    NASA Astrophysics Data System (ADS)

    Sylwestrzak, Marcin; Szlag, Daniel; Marchand, Paul J.; Kumar, Ashwin S.; Lasser, Theo

    2017-08-01

    We present an application of massively parallel processing of quantitative flow measurement data acquired using spectral optical coherence microscopy (SOCM). The need for massive signal processing of these particular datasets has been a major hurdle for many applications based on SOCM. In view of this difficulty, we implemented and adapted quantitative total flow estimation algorithms on graphics processing units (GPU) and achieved a 150-fold reduction in processing time when compared to a former CPU implementation. As SOCM constitutes the microscopy counterpart to spectral optical coherence tomography (SOCT), the developed processing procedure can be applied to both imaging modalities. We present the developed DLL library integrated in MATLAB (with an example) and have included the source code for adaptations and future improvements.
    Catalogue identifier: AFBT_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AFBT_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU GPLv3
    No. of lines in distributed program, including test data, etc.: 913552
    No. of bytes in distributed program, including test data, etc.: 270876249
    Distribution format: tar.gz
    Programming language: CUDA/C, MATLAB
    Computer: Intel x64 CPU, GPU supporting CUDA technology
    Operating system: 64-bit Windows 7 Professional
    Has the code been vectorized or parallelized?: Yes, CPU code has been vectorized in MATLAB, CUDA code has been parallelized
    RAM: Dependent on user's parameters, typically between several gigabytes and several tens of gigabytes
    Classification: 6.5, 18
    Nature of problem: Speed up of data processing in optical coherence microscopy
    Solution method: Utilization of GPU for massively parallel data processing
    Additional comments: Compiled DLL library with source code and documentation, example of utilization (MATLAB script with raw data)
    Running time: 1.8 s for one B-scan (150× faster in comparison to the CPU data processing time)

  9. Interpretation of AIS Images of Cuprite, Nevada Using Constraints of Spectral Mixtures

    NASA Technical Reports Server (NTRS)

    Smith, M. O.; Adams, J. B.

    1985-01-01

    A technique is outlined that tests the hypothesis that Airborne Imaging Spectrometer (AIS) image spectra are produced by mixtures of surface materials. This technique allows separation of AIS images into concentration images of spectral endmembers (e.g., surface materials causing spectral variation). Using a spectral reference library, it was possible to uniquely identify these spectral endmembers with respect to the reference library and to calibrate the AIS images.
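
    The mixture hypothesis test can be illustrated with non-negative least squares unmixing, a standard formulation that may differ in detail from the authors' technique:

        # Express a pixel spectrum as a non-negative combination of library
        # end-members and inspect the residual; a low residual supports the
        # mixture hypothesis.
        from scipy.optimize import nnls

        def unmix(pixel, endmembers):
            """pixel: (bands,); endmembers: (n_endmembers, bands)."""
            abundances, residual = nnls(endmembers.T, pixel)
            return abundances, residual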

  10. The NJOY Nuclear Data Processing System, Version 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macfarlane, Robert; Muir, Douglas W.; Boicourt, R. M.

    The NJOY Nuclear Data Processing System, version 2016, is a comprehensive computer code package for producing pointwise and multigroup cross sections and related quantities from evaluated nuclear data in the ENDF-4 through ENDF-6 legacy card-image formats. NJOY works with evaluated files for incident neutrons, photons, and charged particles, producing libraries for a wide variety of particle transport and reactor analysis codes.

  11. NOAA Photo Library

    Science.gov Websites

    NOAA Photo Library image ID spac0020: ESSA I, a TIROS cartwheel satellite launched on February 3, 1966.

  12. ImgLib2--generic image processing in Java.

    PubMed

    Pietzsch, Tobias; Preibisch, Stephan; Tomancák, Pavel; Saalfeld, Stephan

    2012-11-15

    ImgLib2 is an open-source Java library for n-dimensional data representation and manipulation with focus on image processing. It aims at minimizing code duplication by cleanly separating pixel-algebra, data access and data representation in memory. Algorithms can be implemented for classes of pixel types and generic access patterns by which they become independent of the specific dimensionality, pixel type and data representation. ImgLib2 illustrates that an elegant high-level programming interface can be achieved without sacrificing performance. It provides efficient implementations of common data types, storage layouts and algorithms. It is the data model underlying ImageJ2, the KNIME Image Processing toolbox and an increasing number of Fiji-Plugins. ImgLib2 is licensed under BSD. Documentation and source code are available at http://imglib2.net and in a public repository at https://github.com/imagej/imglib. Supplementary data are available at Bioinformatics Online. saalfeld@mpi-cbg.de

  13. A New Effort for Atmospherical Forecast: Meteorological Image Processing Software (MIPS) for Astronomical Observations

    NASA Astrophysics Data System (ADS)

    Shameoni Niaei, M.; Kilic, Y.; Yildiran, B. E.; Yüzlükoglu, F.; Yesilyaprak, C.

    2016-12-01

    We describe a new software package (MIPS) for the analysis and image processing of meteorological satellite (Meteosat) data for an astronomical observatory. This software will help make atmospheric forecasts (cloud, humidity, rain) from Meteosat data for robotic telescopes. MIPS uses a Python library for EUMETSAT data that aims to be completely open source and is licensed under the GNU General Public License (GPL). MIPS is platform independent and uses h5py, numpy, and PIL with the general-purpose, high-level programming language Python and the Qt framework.
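
    A hedged sketch of the data-access step with h5py; the file name and dataset path are hypothetical, since real EUMETSAT products structure their groups differently:

        # Read one Meteosat channel from an HDF5 product and derive a crude
        # cold-cloud proxy; dataset path and threshold are illustrative only.
        import h5py
        import numpy as np

        with h5py.File("meteosat_product.h5", "r") as f:
            channel = f["IR_108/image_data"][:]        # hypothetical dataset path
        cloud_mask = channel < np.percentile(channel, 20)
        print("cloudy fraction:", cloud_mask.mean())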

  14. The clinical information system GastroBase: integration of image processing and laboratory communication.

    PubMed

    Kocna, P

    1995-01-01

    GastroBase, a clinical information system, incorporates patient identification, medical records, images, laboratory data, patient history, physical examination, and other patient-related information. Program modules are written in C; all data is processed using the Novell Btrieve data manager. The patient identification database represents the main core of this information system. A graphics library developed in the past year and graphic modules with a special video card enable the storing, archiving, and linking of different images to the electronic patient medical record. GastroBase has been running for more than four years in daily routine, and the database contains more than 25,000 medical records and 1,500 images. This new version of GastroBase is now incorporated into the clinical information system of the University Clinic in Prague.

  15. Close the Textbook & Open "The Cell: An Image Library"

    ERIC Educational Resources Information Center

    Saunders, Cheston; Taylor, Amy

    2014-01-01

    Many students leave the biology classroom with misconceptions centered on cellular structure. This article presents an activity in which students utilize images from an online database called "The Cell: An Image Library" (http://www.cellimagelibrary.org/) to gain a greater understanding of the diversity of cellular structure and the…

  16. Reducing uncertainty in wind turbine blade health inspection with image processing techniques

    NASA Astrophysics Data System (ADS)

    Zhang, Huiyi

    Structural health inspection has been widely applied in the operation of wind farms to find early cracks in wind turbine blades (WTBs). Increased numbers of turbines and expanded rotor diameters are driving up the workloads and safety risks for site employees. Therefore, it is important to automate the inspection process as well as minimize the uncertainties involved in routine blade health inspection. In addition, crack documentation and trending are vital to assess rotor blade and turbine reliability over the 20 year design life span. A new crack recognition and classification algorithm is described that can support automated structural health inspection of the surface of large composite WTBs. The first part of the study investigated the feasibility of digital image processing in WTB health inspection and defined the capability of numerically detecting cracks as small as hairline thickness. The second part of the study identified and analyzed the uncertainty of the digital image processing method. A self-learning algorithm was proposed to recognize and classify cracks without comparing a blade image to a library of crack images. The last part of the research quantified the uncertainty in the field conditions and the image processing methods.
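
    A baseline sketch of edge-based crack screening with OpenCV, for orientation only; it is not the paper's self-learning classifier, and the file name and thresholds are illustrative:

        # Flag long, thin edge features as crack candidates.
        import cv2

        img = cv2.imread("blade_surface.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
        edges = cv2.Canny(cv2.GaussianBlur(img, (5, 5), 0), 50, 150)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        cracks = []
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            elongation = max(w, h) / max(1, min(w, h))
            if elongation > 5 and cv2.arcLength(c, False) > 40:
                cracks.append((x, y, w, h))  # long thin features flagged
        print(len(cracks), "crack candidates")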

  17. Use for Teachers and Students | Galaxy of Images

    Science.gov Websites

    Frequently asked questions by students and teachers about reusing unaltered images, text, or content from the website; reuse should include a link back to Smithsonian Libraries (http://www.sil.si.edu).

  18. Remote Sensing and Imaging Physics

    DTIC Science & Technology

    2012-03-07

    The model analysis process uses a wire-frame shape model with assumed a priori knowledge; no material BRDF library is employed in retrieval. For imaging estimation problems, properties of local maxima can be derived from the Kolmogorov model of atmospheric turbulence.

  19. Virtual reality for spherical images

    NASA Astrophysics Data System (ADS)

    Pilarczyk, Rafal; Skarbek, Władysław

    2017-08-01

    This paper presents a virtual reality application framework and application concept for mobile devices. The framework uses the Google Cardboard library for the Android operating system and allows creating a virtual reality 360 video player using standard OpenGL ES rendering methods. It provides network methods to connect to a web server acting as the application resource provider. Resources are delivered as JSON responses to HTTP requests. The web server also uses the Socket.IO library for synchronous communication between the application and the server. The framework implements methods to render additional content in an event-driven process based on the video timestamp and the virtual reality head point of view.

  20. The Open Microscopy Environment: open image informatics for the biological sciences

    NASA Astrophysics Data System (ADS)

    Blackburn, Colin; Allan, Chris; Besson, Sébastien; Burel, Jean-Marie; Carroll, Mark; Ferguson, Richard K.; Flynn, Helen; Gault, David; Gillen, Kenneth; Leigh, Roger; Leo, Simone; Li, Simon; Lindner, Dominik; Linkert, Melissa; Moore, Josh; Moore, William J.; Ramalingam, Balaji; Rozbicki, Emil; Rustici, Gabriella; Tarkowska, Aleksandra; Walczysko, Petr; Williams, Eleanor; Swedlow, Jason R.

    2016-07-01

    Despite significant advances in biological imaging and analysis, major informatics challenges remain unsolved: file formats are proprietary, storage and analysis facilities are lacking, as are standards for sharing image data and results. While the open FITS file format is ubiquitous in astronomy, astronomical imaging shares many challenges with biological imaging, including the need to share large image sets using secure, cross-platform APIs, and the need for scalable applications for processing and visualization. The Open Microscopy Environment (OME) is an open-source software framework developed to address these challenges. OME tools include: an open data model for multidimensional imaging (OME Data Model); an open file format (OME-TIFF) and library (Bio-Formats) enabling free access to images (5D+) written in more than 145 formats from many imaging domains, including FITS; and a data management server (OMERO). The Java-based OMERO client-server platform comprises an image metadata store, an image repository, visualization and analysis by remote access, allowing sharing and publishing of image data. OMERO provides a means to manage the data through a multi-platform API. OMERO's model-based architecture has enabled its extension into a range of imaging domains, including light and electron microscopy, high content screening, digital pathology and recently into applications using non-image data from clinical and genomic studies. This is made possible using the Bio-Formats library. The current release includes a single mechanism for accessing image data of all types, regardless of original file format, via Java, C/C++ and Python and a variety of applications and environments (e.g. ImageJ, Matlab and R).
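
    Remote access through OMERO's Python API (omero-py) looks roughly like the following; the host, credentials, and image ID are placeholders:

        # Connect to an OMERO server and fetch one image plane as a numpy array.
        from omero.gateway import BlitzGateway

        conn = BlitzGateway("user", "password", host="omero.example.org", port=4064)
        conn.connect()
        try:
            image = conn.getObject("Image", 123)       # placeholder image ID
            pixels = image.getPrimaryPixels()
            plane = pixels.getPlane(0, 0, 0)           # z=0, c=0, t=0
            print(image.getName(), plane.shape)
        finally:
            conn.close()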

  1. SimITK: visual programming of the ITK image-processing library within Simulink.

    PubMed

    Dickinson, Andrew W L; Abolmaesumi, Purang; Gobbi, David G; Mousavi, Parvin

    2014-04-01

    The Insight Segmentation and Registration Toolkit (ITK) is a software library used for image analysis, visualization, and image-guided surgery applications. ITK is a collection of C++ classes that poses the challenge of a steep learning curve should the user not have appropriate C++ programming experience. To remove the programming complexities and facilitate rapid prototyping, an implementation of ITK within a higher-level visual programming environment is presented: SimITK. ITK functionalities are automatically wrapped into "blocks" within Simulink, the visual programming environment of MATLAB, where these blocks can be connected to form workflows: visual schematics that closely represent the structure of a C++ program. The heavily templated C++ nature of ITK does not facilitate direct interaction between Simulink and ITK; an intermediary is required to convert respective data types and allow intercommunication. As such, a SimITK "Virtual Block" has been developed that serves as a wrapper around an ITK class and resolves the ITK data types to native Simulink data types. Part of the challenge surrounding this implementation involves automatically capturing and storing the pertinent class information that needs to be refined from an initial state prior to being reflected within the final block representation. The primary result from the SimITK wrapping procedure is multiple Simulink block libraries. From these libraries, blocks are selected and interconnected to demonstrate two examples: a 3D segmentation workflow and a 3D multimodal registration workflow. Compared to their pure-code equivalents, the workflows highlight ITK usability through an alternative visual interpretation of the code that abstracts away potentially confusing technicalities.
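
    For readers without Simulink, the kind of pipeline a SimITK block diagram represents can be sketched in ITK's scripting interfaces. The example below uses SimpleITK (a separate, simplified wrapping of ITK, not SimITK itself); the file names, seed point, and thresholds are placeholders.

        import SimpleITK as sitk

        # A minimal segmentation pipeline of the kind SimITK expresses as connected blocks.
        image = sitk.ReadImage("volume.nii.gz", sitk.sitkFloat32)   # placeholder input

        # Smooth, then region-grow from a seed voxel (placeholder seed and thresholds).
        smoothed = sitk.CurvatureFlow(image, timeStep=0.125, numberOfIterations=5)
        segmented = sitk.ConnectedThreshold(smoothed, seedList=[(64, 64, 20)],
                                            lower=100.0, upper=300.0)

        sitk.WriteImage(segmented, "segmentation.nii.gz")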

  2. Planetary Image Geometry Library

    NASA Technical Reports Server (NTRS)

    Deen, Robert C.; Pariser, Oleg

    2010-01-01

    The Planetary Image Geometry (PIG) library is a multi-mission library used for projecting images (EDRs, or Experiment Data Records) and managing their geometry for in-situ missions. A collection of models describes cameras and their articulation, allowing application programs such as mosaickers, terrain generators, and pointing correction tools to be written in a multi-mission manner, without any knowledge of parameters specific to the supported missions. Camera model objects allow transformation of image coordinates to and from view vectors in XYZ space. Pointing models, specific to each mission, describe how to orient the camera models based on telemetry or other information. Surface models describe the surface in general terms. Coordinate system objects manage the various coordinate systems involved in most missions. File objects manage access to metadata (labels, including telemetry information) in the input EDRs and RDRs (Reduced Data Records). Label models manage metadata information in output files. Site objects keep track of different locations where the spacecraft might be at a given time. Radiometry models allow correction of radiometry for an image. Mission objects contain basic mission parameters. Pointing adjustment ("nav") files allow pointing to be corrected. The object-oriented structure (C++) makes it easy to subclass just the pieces of the library that are truly mission-specific. Typically, this involves just the pointing model and coordinate systems, and parts of the file model. Once the library was developed (initially for Mars Polar Lander, MPL), adding new missions ranged from two days to a few months, resulting in significant cost savings as compared to rewriting all the application programs for each mission. Currently supported missions include Mars Pathfinder (MPF), MPL, Mars Exploration Rover (MER), Phoenix, and Mars Science Lab (MSL). Applications based on this library create the majority of operational image RDRs for those missions. A Java wrapper around the library allows parts of it to be used from Java code (via a native JNI interface). Future conversions of all or part of the library to Java are contemplated.
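
    The camera-model abstraction described above (image coordinates to and from view vectors in XYZ space) can be illustrated with a bare-bones pinhole model. This is an illustrative sketch, not PIG's actual C++ interface; the class and all parameters are hypothetical.

        import numpy as np

        class PinholeCameraModel:
            """Toy camera model mapping image coordinates to view vectors (hypothetical sketch)."""
            def __init__(self, focal_length_px, cx, cy):
                self.f = focal_length_px   # focal length in pixels
                self.cx, self.cy = cx, cy  # principal point

            def image_to_view_vector(self, u, v):
                # Unit vector in the camera's XYZ frame pointing through pixel (u, v).
                vec = np.array([u - self.cx, v - self.cy, self.f], dtype=float)
                return vec / np.linalg.norm(vec)

            def view_vector_to_image(self, vec):
                # Project a camera-frame direction back to pixel coordinates.
                x, y, z = vec
                return self.cx + self.f * x / z, self.cy + self.f * y / z

        cam = PinholeCameraModel(focal_length_px=1000.0, cx=512.0, cy=512.0)
        print(cam.view_vector_to_image(cam.image_to_view_vector(600.0, 400.0)))  # ~ (600, 400)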

  3. BgCut: automatic ship detection from UAV images.

    PubMed

    Xu, Chao; Zhang, Dongping; Zhang, Zhengning; Feng, Zhiyong

    2014-01-01

    Ship detection in static UAV aerial images is a fundamental challenge in sea target detection and precise positioning. In this paper, an improved universal background model based on the GrabCut algorithm is proposed to segment foreground objects from the sea automatically. First, a sea template library including images captured under different natural conditions is built to provide an initial template to the model. Then the background trimap is obtained by combining template matching with a region-growing algorithm. The output trimap initializes the GrabCut background without manual intervention, and the segmentation process requires no iteration. The effectiveness of the proposed model is demonstrated by extensive experiments on real UAV aerial images of a certain area taken with an airborne Canon 5D Mark. The proposed algorithm is not only adaptive but also achieves good segmentation. Furthermore, the model can be applied to the automated processing of industrial images in related research.
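
    The standard GrabCut call for which the automatically generated trimap replaces manual initialization looks like the following in OpenCV's Python binding. This is a generic, rectangle-initialized example, not the authors' automated pipeline; the file name and rectangle are placeholders.

        import cv2
        import numpy as np

        img = cv2.imread("uav_frame.png")            # placeholder aerial image
        mask = np.zeros(img.shape[:2], np.uint8)     # per-pixel GrabCut labels

        # Models GrabCut updates internally; their shapes are fixed by the API.
        bgd_model = np.zeros((1, 65), np.float64)
        fgd_model = np.zeros((1, 65), np.float64)

        rect = (50, 50, 200, 150)                    # placeholder ROI around a ship
        cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

        # Pixels marked definite/probable foreground form the ship segment.
        foreground = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
        result = cv2.bitwise_and(img, img, mask=foreground)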

  4. BgCut: Automatic Ship Detection from UAV Images

    PubMed Central

    Zhang, Zhengning; Feng, Zhiyong

    2014-01-01

    Ship detection in static UAV aerial images is a fundamental challenge in sea target detection and precise positioning. In this paper, an improved universal background model based on the GrabCut algorithm is proposed to segment foreground objects from the sea automatically. First, a sea template library including images captured under different natural conditions is built to provide an initial template to the model. Then the background trimap is obtained by combining template matching with a region-growing algorithm. The output trimap initializes the GrabCut background without manual intervention, and the segmentation process requires no iteration. The effectiveness of the proposed model is demonstrated by extensive experiments on real UAV aerial images of a certain area taken with an airborne Canon 5D Mark. The proposed algorithm is not only adaptive but also achieves good segmentation. Furthermore, the model can be applied to the automated processing of industrial images in related research. PMID:24977182

  5. Imaging ATUM ultrathin section libraries with WaferMapper: a multi-scale approach to EM reconstruction of neural circuits

    PubMed Central

    Hayworth, Kenneth J.; Morgan, Josh L.; Schalek, Richard; Berger, Daniel R.; Hildebrand, David G. C.; Lichtman, Jeff W.

    2014-01-01

    The automated tape-collecting ultramicrotome (ATUM) makes it possible to collect large numbers of ultrathin sections quickly—the equivalent of a petabyte of high resolution images each day. However, even high throughput image acquisition strategies generate images far more slowly (at present ~1 terabyte per day). We therefore developed WaferMapper, a software package that takes a multi-resolution approach to mapping and imaging select regions within a library of ultrathin sections. This automated method selects and directs imaging of corresponding regions within each section of an ultrathin section library (UTSL) that may contain many thousands of sections. Using WaferMapper, it is possible to map thousands of tissue sections at low resolution and target multiple points of interest for high resolution imaging based on anatomical landmarks. The program can also be used to expand previously imaged regions, acquire data under different imaging conditions, or re-image after additional tissue treatments. PMID:25018701

  6. DEA Multimedia Drug Library: Marijuana

    MedlinePlus

    Image gallery from the DEA Multimedia Drug Library covering marijuana. Gallery images include Indoor Marijuana Grow, Loose Marijuana, and Marinol 10mg; the site provides instructions for saving images from the thumbnails.

  7. School and Library Media. Introduction; The Uniform Computer Information Transactions Act (UCITA): More Critical for Educators than Copyright Law?; Redefining Professional Growth: New Attitudes, New Tools--A Case Study; Diversity in School Library Media Center Resources; Image-Text Relationships in Web Pages; Aiming for Effective Student Learning in Web-Based Courses: Insights from Student Experiences.

    ERIC Educational Resources Information Center

    Fitzgerald, Mary Ann; Gregory, Vicki L.; Brock, Kathy; Bennett, Elizabeth; Chen, Shu-Hsien Lai; Marsh, Emily; Moore, Joi L.; Kim, Kyung-Sun; Esser, Linda R.

    2002-01-01

    Chapters in this section of "Educational Media and Technology Yearbook" examine important trends prominent in the landscape of the school library media profession in 2001. Themes include mandated educational reform; diversity in school library resources; communication through image-text juxtaposition in Web pages; and professional development and…

  8. 76 FR 53500 - Notice of the Nuclear Regulatory Commission Issuance of Materials License SUA-1598 and Record of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-26

    ... (ADAMS), which provides text and image files of the NRC's public documents in the NRC Library at http... considered, but eliminated from detailed analysis, include conventional uranium mining and milling, conventional mining and heap leach processing, alternate lixiviants, and alternative wastewater disposal...

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Chao

    SPARX, a new environment for cryo-EM image processing; cryo-EM, single-particle reconstruction, principal component analysis. Hardware requirements: PC, Mac, supercomputer, mainframe, multiplatform, workstation. Software requirements: Unix operating system; C++ compiler; file types: source code, object library, executable modules, compilation instructions, sample problem input data. Location/transmission: http://sparx-em.org; user manual and paper: http://sparx-em.org

  10. The virtual library: Coming of age

    NASA Technical Reports Server (NTRS)

    Hunter, Judy F.; Cotter, Gladys A.

    1994-01-01

    With the high-speed networking capabilities, multiple media options, and massive amounts of information that exist in electronic format today, the concept of a 'virtual' library or 'library without walls' is becoming viable. In a virtual library environment, the information processed goes beyond the traditional definition of documents to include the results of scientific and technical research and development (reports, software, data) recorded in any format or medium: electronic, audio, video, or scanned images. Network access to information must include tools to help locate information sources and navigate the networks to connect to the sources, as well as methods to extract the relevant information. Intuitive Graphical User Interfaces (GUIs) and navigational tools such as Intelligent Gateway Processors (IGPs) will provide users with seamless and transparent use of high-speed networks to access, organize, and manage information. Traditional libraries will become points of electronic access to information in multiple media. The emphasis will shift toward unique collections of information at each library rather than entire collections at every library. It is no longer a question of whether there is enough information available; it is more a question of how to manage the vast volumes of information. The future equation will involve being able to organize knowledge, manage information, and provide access at the point of origin.

  11. Life After Press: The Role of the Picture Library in Communicating Astronomy to the Public

    NASA Astrophysics Data System (ADS)

    Evans, G. S.

    2005-12-01

    Science communication is increasingly led by the image, providing opportunities for 'visual' disciplines such as astronomy to receive greater public exposure. In consequence, there is a huge demand for good and exciting images within the publishing media. The picture library is a conduit linking image makers of all kinds to image buyers of all kinds. The image maker benefits from the exposure of their pictures to the people who want to use them, with minimal time investment, and with the safeguards of effective rights management. The image buyer benefits from a wide choice of images available from a single point of contact, stored in a database that offers a choice between subject-based and conceptual searches. By forming this link between astronomer, professional or amateur, and the publishing media, the picture library helps to make the wonders of astronomy visible to a wider public audience.

  12. Development of a large peptoid-DOTA combinatorial library.

    PubMed

    Singh, Jaspal; Lopes, Daniel; Gomika Udugamasooriya, D

    2016-09-01

    Conventional one-bead one-compound (OBOC) library synthesis is typically used to identify molecules with therapeutic value. The design and synthesis of OBOC libraries that contain molecules with imaging or even potentially therapeutic and diagnostic capacities (e.g. theranostic agents) has been overlooked. The development of a therapeutically active molecule with a built-in imaging component for a certain target is a daunting task, and structure-based rational design might not be the best approach. We hypothesize to develop a combinatorial library with potentially therapeutic and imaging components fused together in each molecule. Such molecules in the library can be used to screen, identify, and validate as direct theranostic candidates against targets of interest. As the first step in achieving that aim, we developed an on-bead library of 153,600 Peptoid-DOTA compounds in which the peptoids are the target-recognizing and potentially therapeutic components and the DOTA is the imaging component. We attached the DOTA scaffold to TentaGel beads using one of the four arms of DOTA, and we built a diversified 6-mer peptoid library on the remaining three arms. We evaluated both the synthesis and the mass spectrometric sequencing capacities of the test compounds and of the final library. The compounds displayed unique ionization patterns including direct breakages of the DOTA scaffold into two units, allowing clear decoding of the sequences. Our approach provides a facile synthesis method for the complete on-bead development of large peptidomimetic-DOTA libraries for screening against biological targets for the identification of potential theranostic agents in the future. © 2016 The Authors. Biopolymers Published by Wiley Periodicals, Inc. Biopolymers (Pept Sci) 106: 673-684, 2016.

  13. Health sciences library building projects, 1998 survey.

    PubMed Central

    Bowden, V M

    1999-01-01

    Twenty-eight health sciences library building projects are briefly described, including twelve new buildings and sixteen additions, remodelings, and renovations. The libraries range in size from 2,144 square feet to 190,000 gross square feet. Twelve libraries are described in detail. These include three hospital libraries, one information center sponsored by ten institutions, and eight academic health sciences libraries. PMID:10550027

  14. NOAA Photo Library

    Science.gov Websites

    NOAA Photo Library image from the National Severe Storms Laboratory (NSSL) Collection. Credit: NOAA Photo Library, NOAA Central Library; OAR/ERL/National Severe Storms Laboratory (NSSL).

  15. NOAA Photo Library

    Science.gov Websites

    NOAA Photo Library image. Photographer: Jim Leonard. Credit: NOAA Photo Library, NOAA Central Library; OAR/ERL/National Severe Storms Laboratory (NSSL).

  16. NOAA Photo Library

    Science.gov Websites

    NOAA Photo Library image. Credit: NOAA Photo Library, NOAA Central Library; OAR/ERL/National Severe Storms Laboratory (NSSL).

  17. NOAA Photo Library

    Science.gov Websites

    NOAA Photo Library image. Ainsworth. Credit: NOAA Photo Library, NOAA Central Library; OAR/ERL/National Severe Storms Laboratory (NSSL).

  18. 3-D Medicine.

    ERIC Educational Resources Information Center

    Reese, Susan

    2001-01-01

    Describes the Visible Human Project of the National Library of Medicine that links the print library of functional-physiological knowledge with the image library of structural-anatomical knowledge into one unified resource. (JOW)

  19. Oil spill characterization thanks to optical airborne imagery during the NOFO campaign 2015

    NASA Astrophysics Data System (ADS)

    Viallefont-Robinet, F.; Ceamanos, X.; Angelliaume, S.; Miegebielle, V.

    2017-10-01

    One of the objectives of the NAOMI (New Advanced Observation Method Integration) research project, fruit of a partnership between Total and ONERA, is to work on the detection, quantification, and characterization of offshore hydrocarbon at the sea surface using airborne remote sensing. In this framework, work has been done to characterize the spectral signature of hydrocarbons in the laboratory in order to build a database of oil spectral signatures. The main objective of this database is to provide spectral libraries for data processing algorithms to be applied to airborne VNIR-SWIR hyperspectral images. A campaign run by NOFO (the Norwegian Clean Seas Association for Operating Companies) took place in 2015 to test anti-pollution equipment. During this campaign, several hydrocarbon products, including an oil emulsion, were released into the sea off the Norwegian coast. The NOFO team allowed the NAOMI project to acquire data over the resulting oil slicks using the SETHI system, which is an airborne remote sensing imaging system developed by ONERA. SETHI integrates a new generation of optoelectronic and radar payloads and can operate over a wide range of frequency bands. SETHI is a pod-based system operating onboard a Dassault Falcon 20 aircraft owned by AvDEF. For these experiments, the imaging payload consisted of two synthetic aperture radars (SAR), working at X and L bands in full polarimetric mode (HH, HV, VH, VV), and two HySpex hyperspectral cameras working in the VNIR (0.4 to 1 μm) and SWIR (1 to 2.5 μm) spectral ranges. A sample of the oil emulsion that was used during the campaign was sent to our laboratory for analysis. Measurements of its transmission and of its reflectance in the VNIR and SWIR spectral domains have been performed at ONERA with a Perkin Elmer spectroradiometer and a spectrogoniometer. Several samples of the oil emulsion were prepared in order to measure spectral variations according to oil thickness, illumination angle, and aging. These measurements have been used to build spectral libraries. Spectral matching techniques relying on these libraries have been applied to the airborne hyperspectral acquisitions. These data-processing approaches make it possible to characterize the oil emulsion by estimating the properties used to build the spectral library, thus going further than unsupervised spectral indices, which can only detect the presence of oil. The paper will describe the airborne hyperspectral data, the measurements performed in the laboratory, and the processing of the optical images with spectral indices for oil detection and with spectral matching techniques for oil characterization. Furthermore, the issue of mixed oil-water pixels in the hyperspectral images due to limited spatial resolution will be addressed by estimating the areal fraction of each within a pixel.
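
    Spectral matching against a library is often done with measures such as the spectral angle. The sketch below shows a generic spectral angle mapper in NumPy; it is an assumption-laden illustration, not the NAOMI processing chain, and the array names and data are hypothetical.

        import numpy as np

        def spectral_angle(pixel, reference):
            # Angle (radians) between a pixel spectrum and a library reference spectrum.
            cos_theta = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
            return np.arccos(np.clip(cos_theta, -1.0, 1.0))

        def best_match(pixel, library):
            # Index of the library spectrum with the smallest spectral angle to the pixel.
            angles = [spectral_angle(pixel, ref) for ref in library]
            return int(np.argmin(angles))

        # Hypothetical data: a 120-band pixel spectrum and a 3-entry spectral library.
        rng = np.random.default_rng(0)
        library = rng.random((3, 120))
        pixel = library[1] + 0.01 * rng.standard_normal(120)
        print(best_match(pixel, library))   # -> 1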

  20. The software and algorithms for hyperspectral data processing

    NASA Astrophysics Data System (ADS)

    Shyrayeva, Anhelina; Martinov, Anton; Ivanov, Victor; Katkovsky, Leonid

    2017-04-01

    Hyperspectral remote sensing technique is widely used for collecting and processing information about the Earth's surface objects. Hyperspectral data are combined to form a three-dimensional (x, y, λ) data cube. The Department of Aerospace Research of the Institute of Applied Physical Problems of the Belarusian State University presents a general model of the software for hyperspectral image data analysis and processing. The software runs in a Windows XP/7/8/8.1/10 environment on any personal computer. It has been written in C++ using the Qt framework and OpenGL for graphical data visualization. The software has a flexible structure that consists of a set of independent plugins. Each plugin is compiled as a Qt plugin in the form of a Windows dynamic-link library (DLL). Plugins can be categorized in terms of data reading types, data visualization (3D, 2D, 1D), and data processing. The software has various built-in functions for statistical and mathematical analysis, and signal-processing functions such as moving-average smoothing, Savitzky-Golay smoothing, RGB correction, histogram transformation, and atmospheric correction. The software provides two of the authors' engineering techniques for the solution of the atmospheric correction problem: an iterative method for refining spectral albedo parameters using libRadtran, and an analytical least-squares method. The main advantages of these methods are a high processing rate (several minutes for 1 GB of data) and a low relative error in albedo retrieval (less than 15%). Also, the software supports work with spectral libraries, region of interest (ROI) selection, and spectral analysis such as cluster-type image classification and automatic comparison of hypercube spectra, by a similarity criterion, with similar ones from spectral libraries, and vice versa. The software deals with different kinds of spectral information in order to identify and distinguish spectrally unique materials. The following advantages should also be noted: fast and low-memory hypercube manipulation features, a user-friendly interface, modularity, and expandability.
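
    Savitzky-Golay smoothing of a single spectrum, one of the built-in functions mentioned above, looks like this with SciPy. This is a generic illustration, not the C++ implementation; the spectrum is synthetic and the window length and polynomial order are arbitrary.

        import numpy as np
        from scipy.signal import savgol_filter

        # Hypothetical noisy reflectance spectrum sampled over 200 bands.
        wavelengths = np.linspace(400.0, 2500.0, 200)
        spectrum = np.exp(-((wavelengths - 1200.0) / 400.0) ** 2)
        noisy = spectrum + 0.02 * np.random.default_rng(1).standard_normal(200)

        # Fit a 3rd-order polynomial in a sliding 11-sample window.
        smoothed = savgol_filter(noisy, window_length=11, polyorder=3)
        print(float(np.abs(smoothed - spectrum).mean()))   # smaller than the noise level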

  1. Mapping species distribution of Canarian Monteverde forest by field spectroradiometry and satellite imagery

    NASA Astrophysics Data System (ADS)

    Martín-Luis, Antonio; Arbelo, Manuel; Hernández-Leal, Pedro; Arbelo-Bayó, Manuel

    2016-10-01

    Reliable and updated maps of vegetation in protected natural areas are essential for proper management and conservation. Remote sensing is a valid tool for this purpose. In this study, a methodology based on a WorldView-2 (WV-2) satellite image and in situ spectral signature measurements was applied to map the Canarian Monteverde ecosystem located in the north of Tenerife (Canary Islands, Spain). Due to the high spectral similarity of vegetation species in the study zone, a Multiple Endmember Spectral Mixture Analysis (MESMA) was performed. MESMA determines the fractional cover of different components within one pixel and allows for a pixel-by-pixel variation of endmembers. Two libraries of endmembers were collected for the most abundant species in the test area. The first library was collected from in situ spectral signatures measured with an ASD spectroradiometer during a field campaign in June 2015. The second library was obtained from pure pixels identified in the satellite image for the same species. The accuracy of the mapping process was assessed from a set of independent validation plots. The overall accuracy for the ASD-based method was 60.51%, compared to the 86.67% reached with the WV-2-based mapping. The results suggest the possibility of using WV-2 images for monitoring and regularly updating the maps of the Monteverde forest on the island of Tenerife.
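
    In its simplest linear form, the fractional-cover estimation that MESMA performs reduces to a constrained least-squares problem per pixel. The sketch below uses non-negative least squares with hypothetical endmember spectra; full MESMA additionally varies the endmember set from pixel to pixel.

        import numpy as np
        from scipy.optimize import nnls

        # Hypothetical endmember library: 3 species spectra over 8 WV-2-like bands.
        E = np.array([[0.12, 0.15, 0.18, 0.22, 0.35, 0.42, 0.44, 0.45],
                      [0.05, 0.07, 0.09, 0.11, 0.28, 0.33, 0.35, 0.36],
                      [0.20, 0.22, 0.25, 0.27, 0.30, 0.31, 0.32, 0.33]]).T  # bands x endmembers

        # Simulated mixed pixel: 60% / 30% / 10% cover.
        true_fractions = np.array([0.6, 0.3, 0.1])
        pixel = E @ true_fractions

        fractions, residual = nnls(E, pixel)    # least squares with non-negativity enforced
        fractions /= fractions.sum()            # renormalize so the fractions sum to one
        print(np.round(fractions, 3))           # ~ [0.6, 0.3, 0.1]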

  2. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    PubMed

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly a half-million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was native (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX that isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from the system level to the application level, (2) flexible and dynamic software development and expansion, and (3) scalable spider deployment compatible with HPC clusters and local workstations.

  3. Hyperspectral imaging of polymer banknotes for building and analysis of spectral library

    NASA Astrophysics Data System (ADS)

    Lim, Hoong-Ta; Murukeshan, Vadakke Matham

    2017-11-01

    The use of counterfeit banknotes increases crime rates and cripples the economy. New countermeasures are required to stop counterfeiters who use advancing technologies with criminal intent. Many countries have started adopting polymer banknotes to replace paper notes, as polymer notes are more durable and of better quality. Research on authenticating such banknotes is of much interest to forensic investigators. Hyperspectral imaging can be employed to build a spectral library of polymer notes, which can then be used for classification to authenticate these notes. This, however, is not widely reported and has become a research interest in forensic identification. This paper focuses on the use of hyperspectral imaging on polymer notes to build spectral libraries, using a pushbroom hyperspectral imager which has been previously reported. As an initial study, a spectral library is built from three arbitrarily chosen regions of interest of five circulated genuine polymer notes. Principal component analysis is used for dimension reduction and to convert the information in the spectral library to principal components. A 99% confidence ellipse is formed around the cluster of principal component scores of each class and then used as the classification criterion. The potential of the adopted methodology is demonstrated by the classification of the imaged regions as training samples.
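
    A minimal version of the described classification pipeline (PCA for dimension reduction, then a 99% confidence ellipse per class in principal-component space) can be sketched with scikit-learn and SciPy. All data here are synthetic stand-ins for banknote spectra, not the paper's measurements.

        import numpy as np
        from scipy.stats import chi2
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(2)
        # Synthetic stand-in: 50 spectra x 100 bands for one banknote region (class).
        spectra = rng.normal(0.5, 0.05, (50, 100))

        pca = PCA(n_components=2)
        scores = pca.fit_transform(spectra)           # per-spectrum principal-component scores

        # 99% confidence ellipse: Mahalanobis-distance threshold from the chi-square law.
        mean = scores.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(scores, rowvar=False))
        threshold = chi2.ppf(0.99, df=2)

        def inside_ellipse(sample_scores):
            d = sample_scores - mean
            return float(d @ cov_inv @ d) <= threshold

        test = pca.transform(rng.normal(0.5, 0.05, (1, 100)))[0]
        print(inside_ellipse(test))   # True if the test spectrum falls inside the class ellipse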

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tomopy is a Python toolbox to perform x-ray data processing, image reconstruction, and data exchange tasks at synchrotron facilities. The dependencies of the software are currently as follows. Python-related: the Python standard library (http://docs.python.org/2/library/), numpy (http://www.numpy.org/), scipy (http://scipy.org/), matplotlib (http://matplotlib.org/), sphinx (http://sphinx-doc.org), pil (http://www.pythonware.com/products/pil/), pyhdf (http://pysclint.sourceforge.net/pyhdf/), h5py (http://www.h5py.org), pywt (http://www.pybytes.com/pywavelets/), and files.py (https://pyspec.svn.sourceforge.net/svnroot/pyspec/trunk/pyspec/ccd/files.py). C/C++-related: gridrec (anonymous C code written back in 1997 that uses the standard C library), fftw (http://www.fftw.org/), tomoRecon (a multi-threaded C++ version of gridrec by Mark Rivers of the APS; http://cars9.uchicago.edu/software/epics/tomoRecon.html), and EPICS (http://www.aps.anl.gov/epics/).

  5. Image 100 procedures manual development: Applications system library definition and Image 100 software definition

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Decell, H. P., Jr.

    1975-01-01

    An outline for an Image 100 procedures manual for Earth Resources Program image analysis was developed which sets forth guidelines that provide a basis for the preparation and updating of an Image 100 Procedures Manual. The scope of the outline was limited to definition of general features of a procedures manual together with special features of an interactive system. Computer programs were identified which should be implemented as part of an applications oriented library for the system.

  6. Use of OsiriX in developing a digital radiology teaching library.

    PubMed

    Shamshuddin, S; Matthews, H R

    2014-10-01

    Widespread adoption of digital imaging in clinical practice and for the image-based examinations of the Royal College of Radiologists has created a desire to provide a digital radiology teaching library in many hospital departments around the UK. This article describes our experience of using OsiriX software in developing digital radiology teaching libraries. Copyright © 2014 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  7. MCR Container Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haas, Nicholas Q; Gillen, Robert E; Karnowski, Thomas P

    MathWorks' MATLAB is widely used in academia and industry for prototyping, data analysis, data processing, etc. Many users compile their programs using the MATLAB Compiler to run on workstations/computing clusters via the free MATLAB Compiler Runtime (MCR). The MCR facilitates the execution of code calling Application Programming Interface (API) functions from both base MATLAB and MATLAB toolboxes. In a Linux environment, a sizable number of third-party runtime dependencies (i.e., shared libraries) are necessary. Unfortunately, to the MATLAB community's knowledge, these dependencies are not documented, leaving system administrators and/or end-users to find and install the necessary libraries either in response to runtime errors resulting from them being missing or by inspecting the header information of the Executable and Linkable Format (ELF) libraries of the MCR to determine which ones are missing from the system. To address these shortcomings, Docker images based on Community Enterprise Operating System (CentOS) 7, a derivative of Redhat Enterprise Linux (RHEL) 7, containing recent (2015-2017) MCR releases and their dependencies were created. These images, along with a provided sample Docker Compose YAML script, can be used to create a simulated computing cluster where binaries created by the MATLAB Compiler can be executed using a sample Slurm Workload Manager script.

  8. NOAA Photo Library

    Science.gov Websites

    NOAA Photo Library image. Location: Near Shamrock, Texas. Photo Date: May 16, 1977. Credit: NOAA Photo Library, NOAA Central Library.

  9. NOAA Photo Library

    Science.gov Websites

    NOAA Photo Library image. Location: Union City, Oklahoma. Photo Date: May 24, 1973. Credit: NOAA Photo Library, NOAA Central Library.

  10. NOAA Photo Library - Treasures of the Library

    Science.gov Websites

    The "Treasures of the Library" album and collection has been developed to share images from rare ...

  11. NOAA Photo Library

    Science.gov Websites

    NOAA Photo Library image. Location: Near Mayfield, Oklahoma. Photo Date: May 16, 1977. Credit: NOAA Photo Library, NOAA Central Library.

  12. NOAA Photo Library

    Science.gov Websites

    NOAA Photo Library image. Location: Near Lakeview, Texas. Photo Date: April 19, 1977. Credit: NOAA Photo Library, NOAA Central Library.

  13. New library buildings: the Health Sciences Library, Memorial University of Newfoundland, St. John's.

    PubMed Central

    Fredericksen, R B

    1979-01-01

    The new Health Sciences Library of Memorial University of Newfoundland is described and illustrated. A library facility that forms part of a larger health sciences center, this is a medium-sized academic health sciences library built on a single level. Along with a physical description of the library and its features, the concepts of single-level libraries, phased occupancy, and the project management approach to building a large health center library are discussed in detail. PMID:476319

  14. Picture This... Developing Standards for Electronic Images at the National Library of Medicine

    PubMed Central

    Masys, Daniel R.

    1990-01-01

    New computer technologies have made it feasible to represent, store, and communicate high resolution biomedical images via electronic means. Traditional two dimensional medical images such as those on printed pages have been supplemented by three dimensional images which can be rendered, rotated, and “dissected” from any point of view. The library of the future will provide electronic access not only to words and numbers, but to pictures, sounds, and other nontextual information. There currently exist few widely-accepted standards for the representation and communication of complex images, yet such standards will be critical to the feasibility and usefulness of digital image collections in the life sciences. The National Library of Medicine is embarked on a project to develop a complete digital volumetric representation of an adult human male and female. This “Visible Human Project” will address the issue of standards for computer representation of biological structure.

  15. The library without walls: images, medical dictionaries, atlases, medical encyclopedias free on web.

    PubMed

    Giglia, E

    2008-09-01

    The aim of this article was to present the "reference room" of the Internet, a real library without walls. The reader will find medical encyclopedias, dictionaries, atlases, e-books, and images, and will also learn something useful about the use and reuse of images in a text and in a web site, according to copyright law.

  16. Digitization and the Creation of Virtual Libraries: The Princeton University Image Card Catalog--Reaping the Benefits of Imaging.

    ERIC Educational Resources Information Center

    Henthorne, Eileen

    1995-01-01

    Describes a project at the Princeton University libraries that converted the pre-1981 public card catalog, using digital imaging and optical character recognition technology, to fully tagged and indexed records of text in MARC format that are available on an online database and will be added to the online catalog. (LRW)

  17. Ensemble learning and model averaging for material identification in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Basener, William F.

    2017-05-01

    In this paper we present a method for identifying the material contained in a pixel or region of pixels in a hyperspectral image. An identification process can be performed on a spectrum from image pixels that have been pre-determined to be of interest, generally comparing the spectrum from the image to spectra in an identification library. The metric for comparison used in this paper is a Bayesian probability for each material. This probability can be computed either from Bayes' theorem applied to normal distributions for each library spectrum or using model averaging. Using probabilities has the advantage that they can be summed over spectra for any material class to obtain a class probability. For example, the probability that the spectrum of interest is a fabric is equal to the sum of all probabilities for fabric spectra in the library. We can do the same to determine the probability for a specific type of fabric, or any level of specificity contained in our library. Probabilities not only tell us which material is most likely, they tell us how confident we can be in the material's presence; a probability close to 1 indicates near certainty of the presence of a material in the given class, and a probability close to 0.5 indicates that we cannot know if the material is present at the given level of specificity. This is much more informative than a detection score from a target detection algorithm or a label from a classification algorithm. In this paper we present results in the form of a hierarchical tree with probabilities for each node. We use Forest Radiance imagery with 159 bands.
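
    The class-probability idea (summing per-spectrum posteriors within each material class) can be sketched as follows with Gaussian likelihoods. This is a generic illustration under strong simplifying assumptions (a shared isotropic covariance and equal priors), not the paper's exact model; all data are synthetic.

        import numpy as np

        def class_probabilities(pixel, library, labels, sigma=0.05):
            # Posterior probability per material class, summed over library spectra.
            # Assumes an isotropic Gaussian likelihood around each library spectrum
            # and equal priors; 'labels' gives the class name of each library entry.
            diffs = library - pixel
            log_lik = -0.5 * np.sum(diffs ** 2, axis=1) / sigma ** 2
            post = np.exp(log_lik - log_lik.max())
            post /= post.sum()                       # posterior per library spectrum
            classes = {}
            for p, label in zip(post, labels):
                classes[label] = classes.get(label, 0.0) + p
            return classes                           # e.g. {"fabric": 0.91, "paint": 0.09, ...}

        rng = np.random.default_rng(3)
        library = rng.random((4, 159))               # 4 spectra, 159 bands (as in the paper)
        labels = ["fabric", "fabric", "paint", "metal"]
        pixel = library[0] + 0.01 * rng.standard_normal(159)
        print(class_probabilities(pixel, library, labels))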

  18. NOAA Photo Library - Credits

    Science.gov Websites

    Credits: Skip Theberge (NOAA Central Library) -- collection development, site content, image digitization, and database construction. Kristin Ward (NOAA Central Library) -- HTML page construction. Without the generosity ...

  19. Tumor-targeting peptides from combinatorial libraries*

    PubMed Central

    Liu, Ruiwu; Li, Xiaocen; Xiao, Wenwu; Lam, Kit S.

    2018-01-01

    Cancer is one of the major and leading causes of death worldwide. Two of the greatest challenges in fighting cancer are early detection and effective treatments with no or minimum side effects. Widespread use of targeted therapies and molecular imaging in clinics requires high affinity, tumor-specific agents as effective targeting vehicles to deliver therapeutics and imaging probes to the primary or metastatic tumor sites. Combinatorial libraries such as phage-display and one-bead one-compound (OBOC) peptide libraries are powerful approaches in discovering tumor-targeting peptides. This review gives an overview of different combinatorial library technologies that have been used for the discovery of tumor-targeting peptides. Examples of tumor-targeting peptides identified from each combinatorial library method will be discussed. Published tumor-targeting peptide ligands and their applications will also be summarized by the combinatorial library methods and their corresponding binding receptors. PMID:27210583

  20. Rapid development of image analysis research tools: Bridging the gap between researcher and clinician with pyOsiriX.

    PubMed

    Blackledge, Matthew D; Collins, David J; Koh, Dow-Mu; Leach, Martin O

    2016-02-01

    We present pyOsiriX, a plugin built for the already popular DICOM viewer OsiriX that provides users with the ability to extend the functionality of OsiriX through simple Python scripts. This approach allows users to integrate the many cutting-edge scientific/image-processing libraries created for Python into a powerful DICOM visualisation package that is intuitive to use and already familiar to many clinical researchers. Using pyOsiriX we hope to bridge the apparent gap between basic imaging scientists and clinical practice in a research setting and thus accelerate the development of advanced clinical image processing. We provide arguments for the use of Python as a robust scripting language for incorporation into larger software solutions, outline the structure of pyOsiriX and how it may be used to extend the functionality of OsiriX, and we provide three case studies that exemplify its utility. For our first case study we use pyOsiriX to provide a tool for smooth histogram display of voxel values within a user-defined region of interest (ROI) in OsiriX. We used a kernel density estimation (KDE) method available in Python through the scikit-learn library, where the total number of lines of Python code required to generate this tool was 22. Our second example presents a scheme for segmentation of the skeleton from CT datasets. We have demonstrated that good segmentation can be achieved for two example CT studies by using a combination of Python libraries including scikit-learn, scikit-image, SimpleITK and matplotlib. Furthermore, this segmentation method was incorporated into an automatic analysis of quantitative PET-CT in a patient with bone metastases from primary prostate cancer. This enabled repeatable statistical evaluation of PET uptake values for each lesion, before and after treatment, providing estimates of the maximum and median standardised uptake values (SUVmax and SUVmed, respectively). Following treatment we observed a reduction in lesion volume, SUVmax and SUVmed for all lesions, in agreement with a reduction in concurrent measures of serum prostate-specific antigen (PSA). Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
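
    The smooth-histogram tool described in the first case study rests on kernel density estimation. A generic scikit-learn version is sketched below; it is not the pyOsiriX plugin code, and the voxel values are synthetic stand-ins.

        import numpy as np
        from sklearn.neighbors import KernelDensity

        # Synthetic stand-in for voxel values inside a region of interest.
        rng = np.random.default_rng(4)
        voxels = np.concatenate([rng.normal(40, 5, 500), rng.normal(80, 8, 300)])

        # Gaussian KDE over the voxel intensities (bandwidth chosen by eye here).
        kde = KernelDensity(kernel="gaussian", bandwidth=3.0).fit(voxels[:, None])

        grid = np.linspace(voxels.min(), voxels.max(), 200)[:, None]
        density = np.exp(kde.score_samples(grid))    # smooth histogram estimate
        print(grid[np.argmax(density)][0])           # intensity at the density peak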

  1. High content screening of defined chemical libraries using normal and glioma-derived neural stem cell lines.

    PubMed

    Danovi, Davide; Folarin, Amos A; Baranowski, Bart; Pollard, Steven M

    2012-01-01

    Small molecules with potent biological effects on the fate of normal and cancer-derived stem cells represent both useful research tools and new drug leads for regenerative medicine and oncology. Long-term expansion of mouse and human neural stem cells is possible using adherent monolayer culture. These cultures represent a useful cellular resource to carry out image-based high content screening of small chemical libraries. Improvements in automated microscopy, desktop computational power, and freely available image processing tools now mean that such chemical screens are realistic to undertake in individual academic laboratories. Here we outline a cost-effective and versatile time-lapse imaging strategy suitable for chemical screening. Protocols are described for the handling and screening of human fetal Neural Stem (NS) cell lines and their malignant counterparts, Glioblastoma-derived neural stem cells (GNS). We focus on identification of cytostatic and cytotoxic "hits" and discuss future possibilities and challenges for extending this approach to assay lineage commitment and differentiation. Copyright © 2012 Elsevier Inc. All rights reserved.

  2. Smart cloud system with image processing server in diagnosing brain diseases dedicated for hospitals with limited resources.

    PubMed

    Fahmi, Fahmi; Nasution, Tigor H; Anggreiny, Anggreiny

    2017-01-01

    The use of medical imaging in diagnosing brain disease is growing. The challenges are related to the large size of the data and the complexity of the image processing. High standards of hardware and software are demanded, which can only be provided in big hospitals. Our purpose was to provide a smart cloud system to help diagnose brain diseases in hospitals with limited infrastructure. The expertise of neurologists was first encoded in the cloud server to conduct automatic diagnosis in real time, using an image processing technique developed with the ITK library and a web service. Users upload images through a website, and the result, in this case the size of the tumor, is sent back immediately. A specific image compression technique was developed for this purpose. The smart cloud system was able to measure the area and location of tumors, with an average size of 19.91 ± 2.38 cm2 and an average response time of 7.0 ± 0.3 s. The capability of the server decreased when multiple clients accessed the system simultaneously: 14 ± 0 s (5 parallel clients) and 27 ± 0.2 s (10 parallel clients). The cloud system was successfully developed to process and analyze medical images for diagnosing brain diseases, in this case tumors.

  3. Two-dimensional simulation and modeling in scanning electron microscope imaging and metrology research.

    PubMed

    Postek, Michael T; Vladár, András E; Lowney, Jeremiah R; Keery, William J

    2002-01-01

    Traditional Monte Carlo modeling of the electron beam-specimen interactions in a scanning electron microscope (SEM) produces information about electron beam penetration and output signal generation at either a single beam-landing location, or multiple landing positions. If the multiple landings lie on a line, the results can be graphed in a line scan-like format. Monte Carlo results formatted as line scans have proven useful in providing one-dimensional information about the sample (e.g., linewidth). When used this way, this process is called forward line scan modeling. In the present work, the concept of image simulation (or the first step in the inverse modeling of images) is introduced where the forward-modeled line scan data are carried one step further to construct theoretical two-dimensional (2-D) micrographs (i.e., theoretical SEM images) for comparison with similar experimentally obtained micrographs. This provides an ability to mimic and closely match theory and experiment using SEM images. Calculated and/or measured libraries of simulated images can be developed with this technique. The library concept will prove to be very useful in the determination of dimensional and other properties of simple structures, such as integrated circuit parts, where the shape of the features is preferably measured from a single top-down image or a line scan. This paper presents one approach to the generation of 2-D simulated images and presents some suggestions as to their application to critical dimension metrology.

  4. NOAA Photo Library

    Science.gov Websites

    NOAA Photo Library image. Credit: NOAA Central Library.

  5. NOAA Photo Library

    Science.gov Websites

    NOAA Photo Library image.

  6. NOAA Photo Library

    Science.gov Websites

    NOAA Photo Library image. Credit: Department of Commerce, National Oceanic & Atmospheric Administration (NOAA), NOAA Central Library.

  7. Art Libraries Section. Special Libraries Division. Papers.

    ERIC Educational Resources Information Center

    International Federation of Library Associations, The Hague (Netherlands).

    Papers on art libraries and information services for the arts, which were presented at the 1983 International Federation of Library Associations (IFLA) conference, include: (1) "'I See All': Information Technology and the Universal Availability of Images" by Philip Pacey (United Kingdom); (2) "Online Databases in the Fine Arts"…

  8. Communication in science.

    PubMed

    Deda, H; Yakupoglu, H

    2002-01-01

    Science must have a common language. For centuries the Latin language served this role, but progress in computer technology and the internet world over the last 20 years has begun to produce a new language for the new century: the computer language. The bodies of information that need data-language standardization are the following: digital libraries and medical education systems, consumer health informatics, World Wide Web applications, database systems, medical language processing, automatic indexing systems, image processing units, telemedicine, and the New Generation Internet (NGI).

  9. Software architecture for time-constrained machine vision applications

    NASA Astrophysics Data System (ADS)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2013-01-01

    Real-time image and video processing applications require skilled architects, and recent trends in hardware platforms make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility, because they are normally oriented toward particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty of reuse, and inefficient execution on multicore processors. We present a novel software architecture for time-constrained machine vision applications that addresses these issues. The architecture is divided into three layers. The platform abstraction layer provides a high-level application programming interface for the rest of the architecture. The messaging layer provides a message-passing interface based on a dynamic publish/subscribe pattern. Topic-based filtering, in which messages are published to topics, is used to route messages from the publishers to the subscribers interested in a particular type of message. The application layer provides a repository for reusable application modules designed for machine vision applications. These modules, which include acquisition, visualization, communication, user interface, and data processing, take advantage of the power of well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, the proposed architecture is applied to a real machine vision application: a jam detector for steel pickling lines.
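
    Topic-based publish/subscribe routing of the kind the messaging layer uses can be sketched in a few lines. This is a generic single-process illustration in Python, not the authors' C++ implementation; the topic names are hypothetical.

        from collections import defaultdict
        from typing import Any, Callable

        class TopicBus:
            """Minimal topic-based publish/subscribe message bus (illustrative sketch)."""
            def __init__(self):
                self._subscribers = defaultdict(list)   # topic -> list of callbacks

            def subscribe(self, topic: str, callback: Callable[[Any], None]):
                self._subscribers[topic].append(callback)

            def publish(self, topic: str, message: Any):
                # Route the message only to subscribers of this topic.
                for callback in self._subscribers[topic]:
                    callback(message)

        bus = TopicBus()
        bus.subscribe("frames/acquired", lambda msg: print("processing", msg))
        bus.subscribe("frames/dropped", lambda msg: print("logging", msg))
        bus.publish("frames/acquired", {"id": 42})      # only the first subscriber fires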

  10. A new hospital library: a marketing opportunity.

    PubMed Central

    Walker, M E

    1995-01-01

    A new or remodeled library presents a unique marketing opportunity for the hospital librarian. Furthermore, a well-designed library markets itself through its convenience, attractiveness, and ease of use. A marketing approach to library planning takes into account needs of users and of library staff and considers the librarian's relations with the architect as well as with hospital employees. This paper describes ways to combine library planning with marketing techniques and specifies aspects of the library that contribute to its good image. PMID:7581190

  11. Images of Kilauea East Rift Zone eruption, 1983-1993

    USGS Publications Warehouse

    Takahashi, Taeko Jane; Abston, C.C.; Heliker, C.C.

    1995-01-01

    This CD-ROM disc contains 475 scanned photographs from the U.S. Geological Survey Hawaii Observatory Library. The collection represents a comprehensive range of the best photographic images of volcanic phenomena for Kilauea's East Rift eruption, which continues as of September 1995. Captions of the images present information on location, geologic feature or process, and date. Short documentations of work by the USGS Hawaiian Volcano Observatory in geology, seismology, ground deformation, geophysics, and geochemistry are also included, along with selected references. The CD-ROM was produced in accordance with the ISO 9660 standard; however, it is intended for use only on DOS-based computer systems.

  12. Using Six Sigma to improve the film library.

    PubMed

    Benedetto, Anthony R; Dunnington, Joel S; Oxford-Zelenske, Deborah

    2002-01-01

    The film library of a film-based radiology department is a mission-critical component of the department that is frequently underappreciated and under-staffed. A poorly performing film library causes operational problems for not only the radiology department, but for the institution as a whole. Since Six Sigma techniques had proved successful in an earlier CT throughput improvement project, the University of Texas M.D. Anderson Cancer Center Division of Diagnostic Imaging decided to use Six Sigma techniques to dramatically improve the performance of its film library. Nine mini-project teams were formed to address the basic operating functions of the film library. The teams included film library employees, employees from other sections of radiology, employees from stakeholders outside of radiology, and radiologists and referring physicians, as appropriate to the team's mission. Each Six Sigma team developed a process map of the current process, reviewed or acquired baseline quantitative data to assess the current level of performance, and then modified the process map to incorporate their recommendations for improving the process. An overall project steering committee reviewed recommendations from each Six Sigma team to assure that all of the teams' efforts were coordinated and aligned with the overall project goals. The steering committee also provided advice on implementation strategies, particularly for changes that would have an immediate effect on stakeholders outside of the radiology department. After implementation of recommendations, quantitative data were collected again to determine if the changes were having the desired effect. Improvements in both quantitative metrics and employee morale have been experienced. We continue to collect data as assurance that the improvements are being sustained over the long haul. Six Sigma techniques, which are as quantitatively-based as possible, are useful in a service-oriented organization, such as a film library. The primary problem we encountered was that most of the important film library customer services are not automatically captured in the RIS or in any other information system. We had to develop manual data collection methods for most of our performance metrics. These collection methods were burdensome to the frontline employees who were required to collect the data. Additionally, we had to invest many hours of effort into assuring that the data were valid since film library employees rarely have the educational background to readily grasp the importance of the statistical methods employed in Six Sigma. One of the most important lessons we learned was that film library employees, including supervisory personnel, must be held accountable for their performance in a manner that is objective, fair and constructive. The best methods involved feedback collected by the employees themselves in the ordinary course of their duties. Another important lesson we learned was that film library employees, as well as stakeholders outside of the film library, need to understand how important the film library is to the smooth functioning of the entire institution. Significant educational efforts must be expended to show film library employees how their duties affect their film library co-workers and the institution's patients. Physicians, nurses and employees outside of the film library must do their part too, which requires educational efforts that highlight the importance of compliance with film library policies.

  13. Tumor-targeting peptides from combinatorial libraries.

    PubMed

    Liu, Ruiwu; Li, Xiaocen; Xiao, Wenwu; Lam, Kit S

    2017-02-01

    Cancer is one of the leading causes of death worldwide. Two of the greatest challenges in fighting cancer are early detection and effective treatments with no or minimal side effects. Widespread use of targeted therapies and molecular imaging in clinics requires high affinity, tumor-specific agents as effective targeting vehicles to deliver therapeutics and imaging probes to the primary or metastatic tumor sites. Combinatorial libraries such as phage-display and one-bead one-compound (OBOC) peptide libraries are powerful approaches in discovering tumor-targeting peptides. This review gives an overview of different combinatorial library technologies that have been used for the discovery of tumor-targeting peptides. Examples of tumor-targeting peptides identified from each combinatorial library method will be discussed. Published tumor-targeting peptide ligands and their applications will also be summarized by the combinatorial library methods and their corresponding binding receptors. Copyright © 2017. Published by Elsevier B.V.

  14. Ensuring quality Website redesign: the University of Maryland's experience.

    PubMed

    Fuller, D M; Hinegardner, P G

    2001-10-01

    The Web Redesign Committee at the Health Sciences and Human Services Library (HS/HSL) of the University of Maryland was formed to evaluate its site and oversee the site's redesign. The committee's goal was to design a site that would be functional, be usable, and provide the library with a more current image. Based on a literature review and discussions with colleagues, a usability study was conducted to gain a better understanding of how the Website was used. Volunteers from across the campus participated in the study. A Web-based survey was also used to gather feedback. To complement user input, library staff were asked to review the existing site. A prototype site was developed incorporating suggestions obtained from the evaluation mechanisms. The usability study was particularly useful because it identified problem areas, including terminology, which would have been overlooked by library staff. A second usability study was conducted to refine the prototype. The new site was launched in the spring of 2000. The usability studies were valuable mechanisms in designing the site. Users felt invested in the project, and the committee received valuable feedback. This process led to an improved Website and higher visibility for the library on campus.

  15. Evaluation of methods to produce an image library for automatic patient model localization for dose mapping during fluoroscopically guided procedures

    NASA Astrophysics Data System (ADS)

    Kilian-Meneghin, Josh; Xiong, Z.; Rudin, S.; Oines, A.; Bednarek, D. R.

    2017-03-01

    The purpose of this work is to evaluate methods for producing a library of 2D-radiographic images to be correlated to clinical images obtained during a fluoroscopically-guided procedure for automated patient-model localization. The localization algorithm will be used to improve the accuracy of the skin-dose map superimposed on the 3D patient-model of the real-time Dose-Tracking-System (DTS). For the library, 2D images were generated from CT datasets of the SK-150 anthropomorphic phantom using two methods: Schmid's 3D-visualization tool and Plastimatch's digitally-reconstructed-radiograph (DRR) code. Those images, as well as a standard 2D-radiographic image, were correlated to a 2D-fluoroscopic image of a phantom, which represented the clinical-fluoroscopic image, using the Corr2 function in Matlab. The Corr2 function takes two images and outputs the relative correlation between them, which is fed into the localization algorithm. Higher correlation means better alignment of the 3D patient-model with the patient image. In this instance, it was determined that the localization algorithm will succeed when Corr2 returns a correlation of at least 50%. The 3D-visualization tool images returned 55-80% correlation relative to the fluoroscopic-image, which was comparable to the correlation for the radiograph. The DRR images returned 61-90% correlation, again comparable to the radiograph. Both methods prove to be sufficient for the localization algorithm and can be produced quickly; however, the DRR method produces more accurate grey-levels. Using the DRR code, a library at varying angles can be produced for the localization algorithm.
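
    MATLAB's Corr2 computes the 2-D correlation coefficient between two equally sized grayscale images. As a hedged illustration of the matching step described above, a minimal NumPy equivalent might look like this (the library/fluoroscopic array names are hypothetical):

    ```python
    import numpy as np

    def corr2(a, b):
        """2-D correlation coefficient, analogous to MATLAB's corr2.

        Returns a value in [-1, 1]; values near 1 indicate the two
        images are close to linearly related (well aligned)."""
        a = a.astype(float) - a.mean()
        b = b.astype(float) - b.mean()
        return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

    # Hypothetical usage: pick the library projection that best matches
    # the clinical fluoroscopic frame.
    # best_angle = max(library, key=lambda k: corr2(library[k], fluoro))
    ```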

  16. Designing an Academic Library as a Place and a Space: How a Robotic Storage System Will Create a 21st Century Library Design

    ERIC Educational Resources Information Center

    Bostick, Sharon L.; Irwin, Bryan

    2012-01-01

    Renovating, expanding or building new libraries today is a challenge on several levels. Libraries in general are faced with image issues, such as the assumption that a library building exists only to house print material followed by the equally erroneous assumption that everything is now available online, thus rendering a physical building…

  17. Optical Disc Technology and the Cooperative Television Library.

    ERIC Educational Resources Information Center

    Kranch, Douglas

    1989-01-01

    Discusses the feasibility of individual television film libraries combining film holdings onto optical disks and developing networks that would allow online searching of, access to, and transmission of video images. It is concluded that recent advances in technology would support fast and cost effective image retrieval with no loss in video…

  18. Enhancing Your Library's Public Relations with "Lunch-and-Learn" Workshops.

    ERIC Educational Resources Information Center

    Godbey, Ruth

    The staff of the Creighton University (Omaha, Nebraska) Health Sciences Library has been able to improve not only the library's public relations but also the image of the library by presenting weekly "Lunch-and-Learn" workshops. Since 1990, approximately 15 workshops have been presented each semester with topics ranging from cancer…

  19. NOAA Photo Library

    Science.gov Websites

    NOAA Photo Library Image, Enid. Photo Date: June 5, 1966. Photographer: Leo Ainsworth. Credit: NOAA Photo Library, NOAA Central

  20. NOAA Photo Library

    Science.gov Websites

    NOAA Photo Library Image from "Waterspouts" by Joseph H. Golden, NOAA Technical Memorandum ERL NSSL-70, 1974. Library Call Number

  1. NOAA Photo Library

    Science.gov Websites

    NOAA Photo Library Image. Location: Oklahoma, Altus. Photo Date: May 20, 1977. Photographer: D. Burgess. Credit: NOAA Photo Library

  2. NOAA Photo Library

    Science.gov Websites

    NOAA Photo Library Image. Location: Oklahoma, Arcadia. Photo Date: June 8, 1974. Photographer: D. Burgess. Credit: NOAA Photo Library

  3. NOAA Photo Library

    Science.gov Websites

    NOAA Photo Library Image. Location: SW of Cheyenne, Oklahoma. Photo Date: May 16, 1977. Credit: NOAA Photo Library, NOAA Central

  4. NOAA Photo Library

    Science.gov Websites

    NOAA Photo Library Image. Location: Texas, Wichita Falls. Photo Date: April 10, 1979. Credit: NOAA Photo Library, NOAA Central

  5. Open source pipeline for ESPaDOnS reduction and analysis

    NASA Astrophysics Data System (ADS)

    Martioli, Eder; Teeple, Doug; Manset, Nadine; Devost, Daniel; Withington, Kanoa; Venne, Andre; Tannock, Megan

    2012-09-01

    OPERA is a Canada-France-Hawaii Telescope (CFHT) open-source collaborative software project currently under development for an ESPaDOnS echelle spectro-polarimetric image reduction pipeline. OPERA is designed to be fully automated, performing calibrations and reduction, producing one-dimensional intensity and polarimetric spectra. The calibrations are performed on two-dimensional images. Spectra are extracted using an optimal extraction algorithm. While primarily designed for CFHT ESPaDOnS data, the pipeline is being written to be extensible to other echelle spectrographs. A primary design goal is to make use of fast, modern object-oriented technologies. Processing is controlled by a harness, which manages a set of processing modules that make use of a collection of native OPERA software libraries and standard external software libraries. The harness and modules are completely parametrized by site configuration and instrument parameters. The software is open-ended, permitting users of OPERA to extend the pipeline capabilities. All these features have been designed to provide a portable infrastructure that facilitates collaborative development, code re-usability and extensibility. OPERA is free software with support for both GNU/Linux and MacOSX platforms. The pipeline is hosted on SourceForge under the name "opera-pipeline".

  6. Automatic Image Processing Workflow for the Keck/NIRC2 Vortex Coronagraph

    NASA Astrophysics Data System (ADS)

    Xuan, Wenhao; Cook, Therese; Ngo, Henry; Zawol, Zoe; Ruane, Garreth; Mawet, Dimitri

    2018-01-01

    The Keck/NIRC2 camera, equipped with the vortex coronagraph, is an instrument targeted at the high contrast imaging of extrasolar planets. To uncover a faint planet signal from the overwhelming starlight, we utilize the Vortex Image Processing (VIP) library, which carries out principal component analysis to model and remove the stellar point spread function. To bridge the gap between data acquisition and data reduction, we implement a workflow that 1) downloads, sorts, and processes data with VIP, 2) stores the analysis products into a database, and 3) displays the reduced images, contrast curves, and auxiliary information on a web interface. Both angular differential imaging and reference star differential imaging are implemented in the analysis module. A real-time version of the workflow runs during observations, allowing observers to make educated decisions about time distribution on different targets, hence optimizing science yield. The post-night version performs a standardized reduction after the observation, building up a valuable database that not only helps uncover new discoveries, but also enables a statistical study of the instrument itself. We present the workflow, and an examination of the contrast performance of the NIRC2 vortex with respect to factors including target star properties and observing conditions.
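
    The core of this reduction, PCA-based subtraction of the stellar point spread function, can be sketched generically in NumPy. This is an illustration of the technique only, not the actual VIP API; array shapes and the number of components are assumptions:

    ```python
    import numpy as np

    def pca_psf_subtract(cube, ncomp=5):
        """Subtract a low-rank stellar PSF model from a registered image cube.

        cube: (nframes, ny, nx) coronagraphic frames. Returns residual
        frames; for angular differential imaging these would then be
        derotated to their parallactic angles and combined."""
        nfr, ny, nx = cube.shape
        X = cube.reshape(nfr, ny * nx)
        X = X - X.mean(axis=0)                    # remove the mean frame
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        basis = Vt[:ncomp]                        # leading principal components
        recon = (X @ basis.T) @ basis             # projection onto PC subspace
        return (X - recon).reshape(nfr, ny, nx)   # starlight-subtracted residuals
    ```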

  7. Exploiting range imagery: techniques and applications

    NASA Astrophysics Data System (ADS)

    Armbruster, Walter

    2009-07-01

    Practically no applications exist for which automatic processing of 2D intensity imagery can equal human visual perception. This is not the case for range imagery. The paper gives examples of 3D laser radar applications, for which automatic data processing can exceed human visual cognition capabilities and describes basic processing techniques for attaining these results. The examples are drawn from the fields of helicopter obstacle avoidance, object detection in surveillance applications, object recognition at high range, multi-object-tracking, and object re-identification in range image sequences. Processing times and recognition performances are summarized. The techniques used exploit the bijective continuity of the imaging process as well as its independence of object reflectivity, emissivity and illumination. This allows precise formulations of the probability distributions involved in figure-ground segmentation, feature-based object classification and model based object recognition. The probabilistic approach guarantees optimal solutions for single images and enables Bayesian learning in range image sequences. Finally, due to recent results in 3D-surface completion, no prior model libraries are required for recognizing and re-identifying objects of quite general object categories, opening the way to unsupervised learning and fully autonomous cognitive systems.

  8. Accessing seismic data through geological interpretation: Challenges and solutions

    NASA Astrophysics Data System (ADS)

    Butler, R. W.; Clayton, S.; McCaffrey, B.

    2008-12-01

    Between them, the world's research programs, national institutions and corporations, especially oil and gas companies, have acquired substantial volumes of seismic reflection data. Although the vast majority are proprietary and confidential, significant data are released and available for research, including those in public data libraries. The challenge now is to maximise use of these data by providing routes to seismic data not simply on the basis of acquisition or processing attributes but via the geology they image. The Virtual Seismic Atlas (VSA: www.seismicatlas.org) meets this challenge by providing an independent, free-to-use, community-based internet resource that captures and shares the geological interpretation of seismic data globally. Images and associated documents are explicitly indexed by extensive metadata trees, using not only existing survey and geographical data but also the geology they portray. The solution uses a Documentum database interrogated through Endeca Guided Navigation to search, discover and retrieve images. The VSA allows users to compare contrasting interpretations of clean data, thereby exploring the ranges of uncertainty in the geometric interpretation of subsurface structure. The metadata structures can be used to link reports and published research together with other data types such as wells. And the VSA can link to existing data libraries. Searches can take different paths, revealing arrays of geological analogues and new datasets while providing entirely novel insights and genuine surprises. This can then drive new creative opportunities for research and training, and expose the contents of seismic data libraries to the world.

  9. apART: system for the acquisition, processing, archiving, and retrieval of digital images in an open, distributed imaging environment

    NASA Astrophysics Data System (ADS)

    Schneider, Uwe; Strack, Ruediger

    1992-04-01

    apART reflects the structure of an open, distributed environment. According to the general trend in the area of imaging, network-capable, general-purpose workstations with capabilities of open system image communication and image input are used. Several heterogeneous components like CCD cameras, slide scanners, and image archives can be accessed. The system is driven by an object-oriented user interface where devices (image sources and destinations), operators (derived from a commercial image processing library), and images (of different data types) are managed and presented uniformly to the user. Browsing mechanisms are used to traverse devices, operators, and images. An audit trail mechanism is offered to record interactive operations on low-resolution image derivatives. These operations are processed off-line on the original image. Thus, the processing of extremely high-resolution raster images is possible, and the performance of resolution-dependent operations is enhanced significantly during interaction. An object-oriented database system (APRIL), which can be browsed, is integrated into the system. Attribute retrieval is supported by the user interface. Other essential features of the system include: implementation on top of the X Window System (X11R4) and the OSF/Motif widget set; a SUN4 general-purpose workstation, including Ethernet, a magneto-optical disc, etc., as the hardware platform for the user interface; complete graphical-interactive parametrization of all operators; support of different image interchange formats (GIF, TIFF, IIF, etc.); consideration of current IPI standard activities within ISO/IEC for further refinement and extensions.

  10. SU-F-J-72: A Clinical Usable Integrated Contouring Quality Evaluation Software for Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, S; Dolly, S; Cai, B

    Purpose: To introduce the Auto Contour Evaluation (ACE) software, a clinically usable, user-friendly, efficient, all-in-one toolbox for automatically identifying common contouring errors in radiotherapy treatment planning using supervised machine learning techniques. Methods: ACE is developed in C# using the Microsoft .Net framework and Windows Presentation Foundation (WPF) for elegant GUI design and smooth GUI transition animations, through the integration of graphics engines and high dots-per-inch (DPI) settings on modern high-resolution monitors. The industry-standard Model-View-ViewModel (MVVM) software design pattern is chosen as the major architecture of ACE for a neat coding structure, deep modularization, easy maintainability and seamless communication with other clinical software. ACE consists of 1) a patient data importing module integrated with the clinical patient database server, 2) a module that simultaneously displays 2D DICOM images and RT structures, 3) a 3D RT structure visualization module using the Visualization Toolkit (VTK) library and 4) a contour evaluation module using supervised pattern recognition algorithms to detect contouring errors and display detection results. ACE relies on supervised learning algorithms to handle all image processing and data processing jobs. Implementations of the related algorithms are powered by the Accord.Net scientific computing library for better efficiency and effectiveness. Results: ACE can take a patient's CT images and RT structures from commercial treatment planning software via direct user input or from the patient database. All functionalities, including 2D and 3D image visualization and RT contour error detection, have been demonstrated with real clinical patient cases. Conclusion: ACE implements supervised learning algorithms and combines image processing and graphical visualization modules for RT contour verification. ACE has great potential for automated radiotherapy contouring quality verification. Structured with the MVVM pattern, it is highly maintainable and extensible, and supports smooth connections with other clinical software tools.

  11. GNU Data Language (GDL) - a free and open-source implementation of IDL

    NASA Astrophysics Data System (ADS)

    Arabas, Sylwester; Schellens, Marc; Coulais, Alain; Gales, Joel; Messmer, Peter

    2010-05-01

    GNU Data Language (GDL) is developed with the aim of providing an open-source drop-in replacement for the ITTVIS's Interactive Data Language (IDL). It is free software developed by an international team of volunteers led by Marc Schellens - the project's founder (a list of contributors is available on the project's website). The development is hosted on SourceForge where GDL continuously ranks in the 99th percentile of most active projects. GDL with its library routines is designed as a tool for numerical data analysis and visualisation. Like its proprietary counterparts (IDL and PV-WAVE), GDL is used particularly in geosciences and astronomy. GDL is dynamically-typed, vectorized and has object-oriented programming capabilities. The library routines handle numerical calculations, data visualisation, signal/image processing, interaction with the host OS and data input/output. GDL supports several data formats such as netCDF, HDF4, HDF5, GRIB, PNG, TIFF, DICOM, etc. Graphical output is handled by X11, PostScript, SVG or z-buffer terminals, the last one allowing output to be saved in a variety of raster graphics formats. GDL is an incremental compiler with integrated debugging facilities. It is written in C++ using the ANTLR language-recognition framework. Most of the library routines are implemented as interfaces to open-source packages such as GNU Scientific Library, PLPlot, FFTW, ImageMagick, and others. GDL features a Python bridge (Python code can be called from GDL; GDL can be compiled as a Python module). Extensions to GDL can be written in C++, GDL, and Python. A number of open software libraries written in IDL, such as the NASA Astronomy Library, MPFIT, CMSVLIB and TeXtoIDL are fully or partially functional under GDL. Packaged versions of GDL are available for several Linux distributions and Mac OS X. The source code compiles on some other UNIX systems, including BSD and OpenSolaris. The presentation will cover the current status of the project, the key accomplishments, and the weaknesses - areas where contributions and users' feedback are welcome! While still in the beta stage of development, GDL has proved to be a useful tool for classroom work on data analysis. Its usage for teaching meteorological-data processing at the University of Warsaw will serve as an example.

  12. NOAA Photo Library

    Science.gov Websites

    NOAA Photo Library Image: "...the late Dr. Benjamin Franklin ....", 1806. Volume II, p. 26. Library Call Number PS745 .A2 1806

  13. Range Image Flow using High-Order Polynomial Expansion

    DTIC Science & Technology

    2013-09-01

    included as a default algorithm in the OpenCV library [2]. The research of estimating the motion between range images, or range flow, is much more... Journal of Computer Vision, vol. 92, no. 1, pp. 1-31. 2. G. Bradski and A. Kaehler. 2008. Learning OpenCV: Computer Vision with the OpenCV Library
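
    The polynomial-expansion optical flow referenced here (Farneback's method) ships with OpenCV's Python bindings as cv2.calcOpticalFlowFarneback. A minimal sketch, with placeholder filenames and illustrative parameters:

    ```python
    import cv2
    import numpy as np

    # Two consecutive frames; range images would first be normalized to
    # 8-bit grayscale for this illustration.
    prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
    nxt = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

    # Dense flow via polynomial expansion: pyramid scale 0.5, 3 levels,
    # 15-px window, 3 iterations, 5-px polynomial neighborhood, sigma 1.2.
    flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    print("median displacement (px):", np.median(mag))
    ```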

  14. MLS data segmentation using Point Cloud Library procedures. (Polish Title: Segmentacja danych MLS z użyciem procedur Point Cloud Library)

    NASA Astrophysics Data System (ADS)

    Grochocka, M.

    2013-12-01

    Mobile laser scanning (MLS) is a dynamically developing measurement technology that is becoming increasingly widespread for acquiring three-dimensional spatial information. Continuous technical progress, based on new tools and better use of existing resources, opens new horizons for the extensive use of MLS technology. Mobile laser scanning systems are usually used for mapping linear objects, in particular for the inventory of roads, railways, bridges, shorelines, shafts, tunnels, and even geometrically complex urban spaces. The measurement is made from the perspective from which the object is used and does not interfere with movement or work around it. This paper presents initial results of the segmentation of data acquired by MLS. The data used in this work were obtained as part of an inventory measurement of railway-line infrastructure. Point clouds were measured using profile scanners installed on a railway platform. To process the data, the tools of the open-source Point Cloud Library (PCL) were used; these tools are provided as programming library templates. PCL is an open, independent, large-scale project for 2D/3D image and point-cloud processing. PCL is released under the terms of the BSD license (Berkeley Software Distribution License), which means it is free for commercial and research use. The article presents a number of issues related to the use of this software and its capabilities. Data segmentation is based on the pcl_segmentation template library, which contains segmentation algorithms that separate clusters. These algorithms are best suited to processing point clouds consisting of a number of spatially isolated regions. The library performs cluster extraction based on model fitting by the sample consensus method for various parametric models (planes, cylinders, spheres, lines, etc.). Most of the mathematical operations are carried out using the Eigen library, a set of templates for linear algebra.
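
    The cluster-extraction idea behind pcl_segmentation (grouping points whose mutual distance falls below a tolerance) can be sketched in Python with a SciPy KD-tree. This is a stand-in for illustration, not PCL itself, and the tolerance and minimum cluster size are assumed values:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def euclidean_clusters(points, tol=0.25, min_size=50):
        """Greedy Euclidean cluster extraction on an (N, 3) point array:
        points closer than `tol` end up in the same cluster."""
        tree = cKDTree(points)
        unvisited = set(range(len(points)))
        clusters = []
        while unvisited:
            seed = unvisited.pop()
            frontier, cluster = [seed], [seed]
            while frontier:                       # flood-fill through neighbors
                idx = frontier.pop()
                for n in tree.query_ball_point(points[idx], tol):
                    if n in unvisited:
                        unvisited.remove(n)
                        frontier.append(n)
                        cluster.append(n)
            if len(cluster) >= min_size:          # drop sparse noise clusters
                clusters.append(np.array(cluster))
        return clusters
    ```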

  15. An integrated one-step system to extract, analyze and annotate all relevant information from image-based cell screening of chemical libraries.

    PubMed

    Rabal, Obdulia; Link, Wolfgang; Serelde, Beatriz G; Bischoff, James R; Oyarzabal, Julen

    2010-04-01

    Here we report the development and validation of a complete solution to manage and analyze the data produced by image-based phenotypic screening campaigns of small-molecule libraries. In one step initial crude images are analyzed for multiple cytological features, statistical analysis is performed and molecules that produce the desired phenotypic profile are identified. A naïve Bayes classifier, integrating chemical and phenotypic spaces, is built and utilized during the process to assess those images initially classified as "fuzzy"-an automated iterative feedback tuning. Simultaneously, all this information is directly annotated in a relational database containing the chemical data. This novel fully automated method was validated by conducting a re-analysis of results from a high-content screening campaign involving 33 992 molecules used to identify inhibitors of the PI3K/Akt signaling pathway. Ninety-two percent of confirmed hits identified by the conventional multistep analysis method were identified using this integrated one-step system as well as 40 new hits, 14.9% of the total, originally false negatives. Ninety-six percent of true negatives were properly recognized too. A web-based access to the database, with customizable data retrieval and visualization tools, facilitates the posterior analysis of annotated cytological features which allows identification of additional phenotypic profiles; thus, further analysis of original crude images is not required.
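
    A toy sketch of such a classifier, concatenating the chemical and phenotypic feature spaces and training scikit-learn's GaussianNB on them (all data below are synthetic placeholders, not the screen's actual features):

    ```python
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)
    chem = rng.normal(size=(200, 16))      # chemical descriptors per compound
    pheno = rng.normal(size=(200, 24))     # image-derived cytological features
    y = rng.integers(0, 2, 200)            # 1 = desired phenotypic profile

    X = np.hstack([chem, pheno])           # joint chemical/phenotypic space
    clf = GaussianNB().fit(X[:150], y[:150])

    # Re-score wells initially classified as "fuzzy" with the posterior
    # probability of being a hit.
    proba = clf.predict_proba(X[150:])[:, 1]
    print("re-scored fuzzy wells:", np.round(proba[:5], 2))
    ```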

  16. On-bead combinatorial synthesis and imaging of chemical exchange saturation transfer magnetic resonance imaging agents to identify factors that influence water exchange.

    PubMed

    Napolitano, Roberta; Soesbe, Todd C; De León-Rodríguez, Luis M; Sherry, A Dean; Udugamasooriya, D Gomika

    2011-08-24

    The sensitivity of magnetic resonance imaging (MRI) contrast agents is highly dependent on the rate of water exchange between the inner sphere of a paramagnetic ion and bulk water. Normally, identifying a paramagnetic complex that has optimal water exchange kinetics is done by synthesizing and testing one compound at a time. We report here a rapid, economical on-bead combinatorial synthesis of a library of imaging agents. Eighty different 1,4,7,10-tetraazacyclododecan-1,4,7,10-tetraacetic acid (DOTA)-tetraamide peptoid derivatives were prepared on beads using a variety of charged, uncharged but polar, hydrophobic, and variably sized primary amines. A single chemical exchange saturation transfer image of the on-bead library easily distinguished those compounds having the most favorable water exchange kinetics. This combinatorial approach will allow rapid screening of libraries of imaging agents to identify the chemical characteristics of a ligand that yield the most sensitive imaging agents. This technique could be automated and readily adapted to other types of MRI or magnetic resonance/positron emission tomography agents as well.

  17. Vaccine-Preventable Disease Photos

    MedlinePlus

    Photo library of vaccine-preventable disease images (typhoid fever, HPV, polio, whooping cough, influenza, rabies, yellow fever, and others), with photographs accompanied by text. Per-disease image counts include Pneumococcus (three images), Polio (twenty-six images), Rabies (ten images), Rotavirus (two images), and Rubella (fifteen images).

  18. VICAR - VIDEO IMAGE COMMUNICATION AND RETRIEVAL

    NASA Technical Reports Server (NTRS)

    Wall, R. J.

    1994-01-01

    VICAR (Video Image Communication and Retrieval) is a general purpose image processing software system that has been under continuous development since the late 1960's. Originally intended for data from the NASA Jet Propulsion Laboratory's unmanned planetary spacecraft, VICAR is now used for a variety of other applications including biomedical image processing, cartography, earth resources, and geological exploration. The development of this newest version of VICAR emphasized a standardized, easily-understood user interface, a shield between the user and the host operating system, and a comprehensive array of image processing capabilities. Structurally, VICAR can be divided into roughly two parts; a suite of applications programs and an executive which serves as the interfaces between the applications, the operating system, and the user. There are several hundred applications programs ranging in function from interactive image editing, data compression/decompression, and map projection, to blemish, noise, and artifact removal, mosaic generation, and pattern recognition and location. An information management system designed specifically for handling image related data can merge image data with other types of data files. The user accesses these programs through the VICAR executive, which consists of a supervisor and a run-time library. From the viewpoint of the user and the applications programs, the executive is an environment that is independent of the operating system. VICAR does not replace the host computer's operating system; instead, it overlays the host resources. The core of the executive is the VICAR Supervisor, which is based on NASA Goddard Space Flight Center's Transportable Applications Executive (TAE). Various modifications and extensions have been made to optimize TAE for image processing applications, resulting in a user friendly environment. The rest of the executive consists of the VICAR Run-Time Library, which provides a set of subroutines (image I/O, label I/O, parameter I/O, etc.) to facilitate image processing and provide the fastest I/O possible while maintaining a wide variety of capabilities. The run-time library also includes the Virtual Raster Display Interface (VRDI) which allows display oriented applications programs to be written for a variety of display devices using a set of common routines. (A display device can be any frame-buffer type device which is attached to the host computer and has memory planes for the display and manipulation of images. A display device may have any number of separate 8-bit image memory planes (IMPs), a graphics overlay plane, pseudo-color capabilities, hardware zoom and pan, and other features). The VRDI supports the following display devices: VICOM (Gould/Deanza) IP8500, RAMTEK RM-9465, ADAGE (Ikonas) IK3000 and the International Imaging Systems IVAS. VRDI's purpose is to provide a uniform operating environment not only for an application programmer, but for the user as well. The programmer is able to write programs without being concerned with the specifics of the device for which the application is intended. The VICAR Interactive Display Subsystem (VIDS) is a collection of utilities for easy interactive display and manipulation of images on a display device. VIDS has characteristics of both the executive and an application program, and offers a wide menu of image manipulation options. VIDS uses the VRDI to communicate with display devices. 
The first step in using VIDS to analyze and enhance an image (one simple example of VICAR's numerous capabilities) is to examine the histogram of the image. The histogram is a plot of frequency of occurrence for each pixel value (0 - 255) loaded in the image plane. If, for example, the histogram shows that there are no pixel values below 64 or above 192, the histogram can be "stretched" so that the value of 64 is mapped to zero and 192 is mapped to 255. Now the user can use the full dynamic range of the display device to display the data and better see its contents. Another example of a VIDS procedure is the JMOVIE command, which allows the user to run animations interactively on the display device. JMOVIE uses the concept of "frames", the individual frames that comprise the animation to be viewed. The user loads images into the frames after the size and number of frames have been selected. VICAR's source languages are primarily FORTRAN and C, with some VAX Assembler and array processor code. The VICAR run-time library is designed to work equally easily from either FORTRAN or C. The program was implemented on a DEC VAX series computer operating under VMS 4.7. The virtual memory required is 1.5MB. Approximately 180,000 blocks of storage are needed for the saveset. VICAR (version 2.3A/3G/13H) is a copyrighted work with all copyright vested in NASA and is available by license for a period of ten (10) years to approved licensees. This program was developed in 1989.
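
    The stretch described above is a plain linear remapping of gray levels. A NumPy sketch (not VICAR/VIDS code) that maps 64 to 0 and 192 to 255:

    ```python
    import numpy as np

    def stretch(img, lo=64, hi=192):
        """Linear contrast stretch: map `lo` -> 0 and `hi` -> 255,
        clipping anything outside the input range."""
        out = (img.astype(float) - lo) * 255.0 / (hi - lo)
        return np.clip(out, 0, 255).astype(np.uint8)
    ```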

  19. Remote sensing based on hyperspectral data analysis

    NASA Astrophysics Data System (ADS)

    Sharifahmadian, Ershad

    In remote sensing, accurate identification of far objects, especially concealed objects, is difficult. In this study, to improve object detection from a distance, hyperspectral imaging and wideband technology are employed, with the emphasis on wideband radar. As the wideband data include a broad range of frequencies, they can reveal information about both the surface of the object and its content. Two main contributions are made in this study: 1) Developing the concept of return loss for target detection: Unlike typical radar detection methods, which use the radar cross section to detect an object, it is possible to enhance the detection and identification of concealed targets using wideband radar based on the electromagnetic characteristics --conductivity, permeability, permittivity, and return loss-- of materials. During the identification process, collected wideband data are evaluated against information from a wideband signature library which has already been built. Several classes (e.g., metal, wood) and subclasses (e.g., metals with high conductivity) have been defined based on their electromagnetic characteristics, and materials in a scene are then classified into these classes. As an example, materials with high electrical conductivity can be conveniently detected: increasing relative conductivity leads to a reduction in the return loss, so metals with high conductivity (e.g., copper) show stronger radar reflections than metals with low conductivity (e.g., stainless steel). Thus, it is possible to appropriately discriminate copper from stainless steel. 2) Target recognition techniques: To detect and identify targets, several techniques have been proposed, in particular the Multi-Spectral Wideband Radar Image (MSWRI), which is able to localize and identify concealed targets. The MSWRI is based on the theory of the robust Capon beamformer. During the identification process, information from the wideband (WB) signature library is utilized; the library includes such parameters as conductivity, permeability, permittivity, and return loss at different frequencies for possible target materials. In the MSWRI approach, identification is performed by calculating the return losses (RLs) at selected frequencies; based on the similarity of the calculated RLs to the RLs in the WB signature library, targets are detected and identified. Based on the simulation and experimental results, it is concluded that the MSWRI technique is a promising approach for standoff target detection.
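
    The signature-library lookup in the identification procedure amounts to matching a measured return-loss profile against stored profiles. A nearest-neighbour sketch in Python (all numbers below are invented placeholders, not measured return losses):

    ```python
    import numpy as np

    # Hypothetical wideband signature library: return loss (dB) at a few
    # selected frequencies for each material class.
    library = {
        "copper":          np.array([-28.0, -30.5, -31.2, -29.8]),
        "stainless steel": np.array([-14.1, -15.0, -13.7, -14.6]),
        "wood":            np.array([-3.2, -3.8, -4.1, -3.5]),
    }

    def identify(measured_rl):
        """Return the library material whose return-loss profile is most
        similar (smallest Euclidean distance) to the measurement."""
        return min(library, key=lambda m: np.linalg.norm(library[m] - measured_rl))

    print(identify(np.array([-27.0, -29.9, -30.8, -30.1])))  # -> copper
    ```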

  20. Combining fluorescence imaging with Hi-C to study 3D genome architecture of the same single cell.

    PubMed

    Lando, David; Basu, Srinjan; Stevens, Tim J; Riddell, Andy; Wohlfahrt, Kai J; Cao, Yang; Boucher, Wayne; Leeb, Martin; Atkinson, Liam P; Lee, Steven F; Hendrich, Brian; Klenerman, Dave; Laue, Ernest D

    2018-05-01

    Fluorescence imaging and chromosome conformation capture assays such as Hi-C are key tools for studying genome organization. However, traditionally, they have been carried out independently, making integration of the two types of data difficult to perform. By trapping individual cell nuclei inside a well of a 384-well glass-bottom plate with an agarose pad, we have established a protocol that allows both fluorescence imaging and Hi-C processing to be carried out on the same single cell. The protocol identifies 30,000-100,000 chromosome contacts per single haploid genome in parallel with fluorescence images. Contacts can be used to calculate intact genome structures to better than 100-kb resolution, which can then be directly compared with the images. Preparation of 20 single-cell Hi-C libraries using this protocol takes 5 d of bench work by researchers experienced in molecular biology techniques. Image acquisition and analysis require basic understanding of fluorescence microscopy, and some bioinformatics knowledge is required to run the sequence-processing tools described here.

  1. Looking backward, 1984-1959: twenty-five years of library automation--a personal view.

    PubMed Central

    Pizer, I H

    1984-01-01

    A brief profile of Janet Doe is given. Twenty-five years of library automation are reviewed from the author's point of view. Major projects such as the SUNY Biomedical Communication Network and the Regional Online Union Catalog of the Greater Midwest Regional Medical Library Network are discussed. Important figures in medical library automation are considered, as is the major role played by the National Library of Medicine. PMID:6388691

  2. Programmability in AIPS++

    NASA Technical Reports Server (NTRS)

    Hjellming, R. M.

    1992-01-01

    AIPS++ is an Astronomical Information Processing System being designed and implemented by an international consortium of NRAO and six other radio astronomy institutions in Australia, India, the Netherlands, the United Kingdom, Canada, and the USA. AIPS++ is intended to replace the functionality of AIPS, to be more easily programmable, and will be implemented in C++ using object-oriented techniques. Programmability in AIPS++ is planned at three levels. The first level will be that of a command-line interpreter with characteristics similar to IDL and PV-Wave, but with an intensive set of operations appropriate to telescope data handling, image formation, and image processing. The second level will be in C++ with extensive use of class libraries for both basic operations and advanced applications. The third level will allow input and output of data between external FORTRAN programs and AIPS++ telescope and image databases. In addition to summarizing the above programmability characteristics, this talk will give an overview of the classes currently being designed for telescope data calibration and editing, image formation, and the 'toolkit' of mathematical 'objects' that will perform most of the processing in AIPS++.

  3. Back to the Future, circa 1907.

    ERIC Educational Resources Information Center

    Sullivan, Peggy

    1987-01-01

    Commemorates this journal's 80th anniversary by providing historical background and comparing issues of early library science with those of today--censorship; library education and administration; technical services; librarian image; copyrights; and outreach programs. Reactions to the founding of the Bulletin of the American Library Association in…

  4. Automated microscopy for high-content RNAi screening

    PubMed Central

    2010-01-01

    Fluorescence microscopy is one of the most powerful tools to investigate complex cellular processes such as cell division, cell motility, or intracellular trafficking. The availability of RNA interference (RNAi) technology and automated microscopy has opened the possibility to perform cellular imaging in functional genomics and other large-scale applications. Although imaging often dramatically increases the content of a screening assay, it poses new challenges to achieve accurate quantitative annotation and therefore needs to be carefully adjusted to the specific needs of individual screening applications. In this review, we discuss principles of assay design, large-scale RNAi, microscope automation, and computational data analysis. We highlight strategies for imaging-based RNAi screening adapted to different library and assay designs. PMID:20176920

  5. Building a Digital Library for Multibeam Data, Images and Documents

    NASA Astrophysics Data System (ADS)

    Miller, S. P.; Staudigel, H.; Koppers, A.; Johnson, C.; Cande, S.; Sandwell, D.; Peckman, U.; Becker, J. J.; Helly, J.; Zaslavsky, I.; Schottlaender, B. E.; Starr, S.; Montoya, G.

    2001-12-01

    The Scripps Institution of Oceanography, the UCSD Libraries and the San Diego Supercomputing Center have joined forces to establish a digital library for accessing a wide range of multibeam and marine geophysical data, to a community that ranges from the MGG researcher to K-12 outreach clients. This digital library collection will include 233 multibeam cruises with grids, plots, photographs, station data, technical reports, planning documents and publications, drawn from the holdings of the Geological Data Center and the SIO Archives. Inquiries will be made through an Ocean Exploration Console, reminiscent of a cockpit display where a multitude of data may be displayed individually or in two or three-dimensional projections. These displays will provide access to cruise data as well as global databases such as Global Topography, crustal age, and sediment thickness, thus meeting the day-to-day needs of researchers as well as educators, students, and the public. The prototype contains a few selected expeditions, and a review of the initial approach will be solicited from the user community during the poster session. The search process can be focused by a variety of constraints: geospatial (lat-lon box), temporal (e.g., since 1996), keyword (e.g., cruise, place name, PI, etc.), or expert-level (e.g., K-6 or researcher). The Storage Resource Broker (SRB) software from the SDSC manages the evolving collection as a series of distributed but related archives in various media, from shipboard data through processing and final archiving. The latest version of MB-System provides for the systematic creation of standard metadata, and for the harvesting of metadata from multibeam files. Automated scripts will be used to load the metadata catalog to enable queries with an Oracle database management system. These new efforts to bridge the gap between libraries and data archives are supported by the NSF Information Technology and National Science Digital Library (NSDL) programs, augmented by UC funds, and closely coordinated with Digital Library for Earth System Education (DLESE) activities.

  6. ChRIS--A web-based neuroimaging and informatics system for collecting, organizing, processing, visualizing and sharing of medical data.

    PubMed

    Pienaar, Rudolph; Rannou, Nicolas; Bernal, Jorge; Hahn, Daniel; Grant, P Ellen

    2015-01-01

    The utility of web browsers for general-purpose computing, long anticipated, is only now coming to fruition. In this paper we present a web-based medical image data and information management software platform called ChRIS ([Boston] Children's Research Integration System). ChRIS' deep functionality allows for easy retrieval of medical image data from resources typically found in hospitals, organizes and presents information in a modern feed-like interface, provides access to a growing library of plugins that process these data (typically on a connected High Performance Compute Cluster), allows for easy data sharing between users and instances of ChRIS, and provides powerful 3D visualization and real-time collaboration.

  7. Measuring charged particle multiplicity with early ATLAS public data

    NASA Astrophysics Data System (ADS)

    Üstün, G.; Barut, E.; Bektaş, E.; Özcan, V. E.

    2017-07-01

    We study 100 images of early LHC collisions that were recorded by the ATLAS experiment and made public for outreach purposes, and extract the charged particle multiplicity as a function of momentum for proton-proton collisions at a centre-of-mass energy of 7 TeV. As these collisions have already been pre-processed by the ATLAS Collaboration, the particle tracks are visible, but are available to the public only in the form of low-resolution bitmaps. We describe two separate image processing methods, one based on the industry-standard OpenCV library and C++, another based on self-developed algorithms in Python. We present our analysis of the transverse momentum and azimuthal angle distributions of the particles, in agreement with the literature.
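
    As a rough illustration of the image-processing step (not the paper's actual algorithms), straight track candidates can be pulled from a thresholded event-display bitmap with OpenCV's probabilistic Hough transform; the filename and parameters below are placeholders:

    ```python
    import cv2
    import numpy as np

    img = cv2.imread("atlas_event.png", cv2.IMREAD_GRAYSCALE)

    # Binarize the pre-rendered tracks, then extract line segments.
    _, bw = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)
    segments = cv2.HoughLinesP(bw, rho=1, theta=np.pi / 360, threshold=30,
                               minLineLength=25, maxLineGap=5)
    print("track candidates:", 0 if segments is None else len(segments))
    ```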

  8. Data Manipulation in an XML-Based Digital Image Library

    ERIC Educational Resources Information Center

    Chang, Naicheng

    2005-01-01

    Purpose: To help to clarify the role of XML tools and standards in supporting transition and migration towards a fully XML-based environment for managing access to information. Design/methodology/approach: The Ching Digital Image Library, built on a three-tier architecture, is used as a source of examples to illustrate a number of methods of data…

  9. Classification of high-resolution multispectral satellite remote sensing images using extended morphological attribute profiles and independent component analysis

    NASA Astrophysics Data System (ADS)

    Wu, Yu; Zheng, Lijuan; Xie, Donghai; Zhong, Ruofei

    2017-07-01

    In this study, extended morphological attribute profiles (EAPs) and independent component analysis (ICA) were combined for feature extraction from high-resolution multispectral satellite remote sensing images, and the regularized least squares (RLS) approach with the radial basis function (RBF) kernel was further applied for the classification. Based on the two major independent components, geometrical features were extracted using the EAPs method. Three morphological attributes were calculated for each independent component: area, standard deviation, and moment of inertia. The extracted geometrical features were then classified using the RLS approach and the commonly used LIB-SVM support vector machine library. Worldview-3 and Chinese GF-2 multispectral images were tested, and the results showed that the features extracted by EAPs and ICA can effectively improve the accuracy of high-resolution multispectral image classification: 2% higher than the EAPs and principal component analysis (PCA) method, and 6% higher than APs on the original high-resolution multispectral data. Moreover, the results suggest that both the GURLS and LIB-SVM libraries are well suited for multispectral remote sensing image classification. The GURLS library is easy to use, with automatic parameter selection, but its computation time may be longer than that of the LIB-SVM library. This study should be helpful for classification applications of high-resolution multispectral satellite remote sensing images.
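
    A compact sketch of the ICA-plus-RLS idea with scikit-learn, on synthetic data. Kernel ridge regression on +/-1 labels, with the class read off the sign of the prediction, is one standard way to realize RLS classification with an RBF kernel; the EAP attribute computation is omitted here:

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.kernel_ridge import KernelRidge

    rng = np.random.default_rng(1)
    X = rng.random((500, 8))                         # 8-band multispectral pixels
    y = np.where(rng.random(500) > 0.5, 1.0, -1.0)   # two land-cover classes

    # Two leading independent components; the paper computes EAP attributes
    # (area, std. dev., moment of inertia) on these component images.
    Z = FastICA(n_components=2, random_state=1).fit_transform(X)

    # RLS classification with an RBF kernel: ridge regression on +/-1
    # labels, class decided by the sign of the prediction.
    rls = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5).fit(Z[:400], y[:400])
    pred = np.sign(rls.predict(Z[400:]))
    print("held-out accuracy:", (pred == y[400:]).mean())
    ```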

  10. Pattern centric design based sensitive patterns and process monitor in manufacturing

    NASA Astrophysics Data System (ADS)

    Hsiang, Chingyun; Cheng, Guojie; Wu, Kechih

    2017-03-01

    As design rules migrate to smaller dimensions, process variation requirements are tighter than ever and challenge the limits of device yield. Masks, lithography, etching and other processes have to meet very tight specifications in order to keep defects and CD within the margins of the process window. Conventionally, inspection and metrology equipment is utilized to monitor and control wafer quality in-line. In high-throughput optical inspection, nuisance filtering and review classification become a tedious, labor-intensive job in manufacturing. High-resolution SEM images are taken to validate defects after optical inspection. These SEM images capture not only the point highlighted by optical inspection but also its surrounding patterns. However, this pattern information is not well utilized in conventional quality control methods. A complementary design-based pattern monitor not only tracks and analyzes variation in pattern sensitivity but also reduces nuisance and highlights defective patterns or killer defects. After grouping in either single or multiple layers, systematic defects can be identified quickly in this flow. In this paper, we applied the design-based pattern monitor to different layers to monitor the impact of process variation on all kinds of patterns. First, the contour of the high-resolution SEM image is extracted and aligned to the design with offset adjustment and fine alignment [1]. Second, specified pattern rules can be applied to the design clip area, the same size as the SEM image, to form POI (pattern of interest) areas. Third, the discrepancy between contour and design is measured for different pattern types in measurement blocks. Fourth, defective patterns are reported by discrepancy detection criteria and pattern grouping [4]; reported pattern defects are ranked by count and by severity of discrepancy. In this step, process-sensitive, highly repeatable systematic defects can be identified quickly. Through this design-based process pattern monitoring method, most optical inspection nuisances can be filtered out at the contour-to-design discrepancy measurement. Daily analysis results are stored in a database as a reference for comparison with incoming data. The defective pattern library contains existing and known systematic defect patterns, which helps to catch and identify new pattern defects or process impacts. This defect pattern library also provides valuable additional information for mask, pattern and defect verification, inspection care-area generation, further OPC fixes, and process enhancement and investigation.
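
    The contour-to-design discrepancy measurement in the third step can be illustrated with OpenCV's signed point-to-polygon distance; the inputs here are hypothetical, and the alignment and measurement-block logic are omitted:

    ```python
    import cv2
    import numpy as np

    # Binarized SEM contour image and the design polygon, assumed to be in
    # the same pixel coordinates after alignment.
    sem = cv2.imread("sem_clip.png", cv2.IMREAD_GRAYSCALE)
    design = np.array([[20, 20], [220, 20], [220, 120], [20, 120]], np.int32)

    contours, _ = cv2.findContours(sem, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    # A large |signed distance| between a contour point and the design
    # edge flags a potential pattern defect.
    worst = 0.0
    for c in contours:
        for x, y in c.reshape(-1, 2):
            d = cv2.pointPolygonTest(design, (float(x), float(y)), True)
            worst = max(worst, abs(d))
    print("max contour-to-design deviation (px):", worst)
    ```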

  11. Health sciences library building projects, 1996-1997 survey.

    PubMed Central

    Bowden, V M

    1998-01-01

    Nine building projects are briefly described, including four new libraries, two renovations, and three combined renovations and additions. The libraries range in size from 657 square feet to 136,832 square feet, with seating varying from 14 to 635. Three hospital libraries and four academic health sciences libraries are described in more detail. In each case an important consideration was the provision for computer access. Two of the libraries expanded their space for historical collections. Three of the libraries added mobile shelving as a way of storing print materials while providing space for other activities. PMID:9549012

  12. Characteristics of knowledge content in a curated online evidence library.

    PubMed

    Varada, Sowmya; Lacson, Ronilda; Raja, Ali S; Ip, Ivan K; Schneider, Louise; Osterbur, David; Bain, Paul; Vetrano, Nicole; Cellini, Jacqueline; Mita, Carol; Coletti, Margaret; Whelan, Julia; Khorasani, Ramin

    2018-05-01

    To describe types of recommendations represented in a curated online evidence library, report on the quality of evidence-based recommendations pertaining to diagnostic imaging exams, and assess underlying knowledge representation. The evidence library is populated with clinical decision rules, professional society guidelines, and locally developed best practice guidelines. Individual recommendations were graded based on a standard methodology and compared using a chi-square test. Strength of evidence ranged from grade 1 (systematic review) through grade 5 (recommendations based on expert opinion). Finally, variations in the underlying representation of these recommendations were identified. The library contains 546 individual imaging-related recommendations. Only 15% (16/106) of recommendations from clinical decision rules were grade 5 vs 83% (526/636) from professional society practice guidelines and local best practice guidelines that cited grade 5 studies (P < .0001). Minor head trauma, pulmonary embolism, and appendicitis were topic areas supported by the highest quality of evidence. Three main variations in underlying representations of recommendations were "single-decision," "branching," and "score-based." Most recommendations were grade 5, largely because studies to test and validate many recommendations were absent. Recommendation types vary in amount and complexity and, accordingly, in the structure and syntax of the statements they generate. However, they can be represented in single-decision, branching, and score-based representations. In a curated evidence library with graded imaging-based recommendations, evidence quality varied widely, with decision rules providing the highest-quality recommendations. The library may be helpful in highlighting evidence gaps, comparing recommendations from varied sources on similar clinical topics, and prioritizing imaging recommendations to inform clinical decision support implementation.

  13. Vector-Based Ground Surface and Object Representation Using Cameras

    DTIC Science & Technology

    2009-12-01

    representations and it is a digital data structure used for the representation of a ground surface in geographical information systems (GIS). Figure... Vision API library, and the OpenCV library. Also, the Posix thread library was utilized to quickly capture the source images from cameras. Both

  14. Health sciences library building projects: 1994 survey.

    PubMed Central

    Ludwig, L

    1995-01-01

    Designing and building new or renovated space is time consuming and requires politically sensitive discussions concerning a number of both long-term and immediate planning issues. The Medical Library Association's fourth annual survey of library building projects identified ten health sciences libraries that are planning, expanding, or constructing new facilities. Two projects are in predesign stages, four represent new construction, and four involve renovations to existing libraries. The Texas Medical Association Library, the King Faisal Specialist Hospital and Research Centre Library, and the Northwestern University Galter Health Sciences Library illustrate how these libraries are being designed for the future and take into account areas of change produced by new information technologies, curricular trends, and new ways to deliver library services. PMID:7599586

  15. Detecting brain tumor in pathological slides using hyperspectral imaging

    PubMed Central

    Ortega, Samuel; Fabelo, Himar; Camacho, Rafael; de la Luz Plaza, María; Callicó, Gustavo M.; Sarmiento, Roberto

    2018-01-01

    Hyperspectral imaging (HSI) is an emerging technology for medical diagnosis. This research work presents a proof-of-concept on the use of HSI data to automatically detect human brain tumor tissue in pathological slides. The samples, consisting of hyperspectral cubes collected from 400 nm to 1000 nm, were acquired from ten different patients diagnosed with high-grade glioma. Based on the diagnosis provided by pathologists, a spectral library of normal and tumor tissues was created and processed using three different supervised classification algorithms. Results prove that HSI is a suitable technique to automatically detect high-grade tumors from pathological slides. PMID:29552415

  16. Detecting brain tumor in pathological slides using hyperspectral imaging.

    PubMed

    Ortega, Samuel; Fabelo, Himar; Camacho, Rafael; de la Luz Plaza, María; Callicó, Gustavo M; Sarmiento, Roberto

    2018-02-01

    Hyperspectral imaging (HSI) is an emerging technology for medical diagnosis. This research work presents a proof-of-concept on the use of HSI data to automatically detect human brain tumor tissue in pathological slides. The samples, consisting of hyperspectral cubes collected from 400 nm to 1000 nm, were acquired from ten different patients diagnosed with high-grade glioma. Based on the diagnosis provided by pathologists, a spectral library of normal and tumor tissues was created and processed using three different supervised classification algorithms. Results prove that HSI is a suitable technique to automatically detect high-grade tumors from pathological slides.
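
    The abstract does not name the three supervised classifiers; as one stand-in illustration, an RBF-kernel SVM can be trained on labeled library spectra and then applied pixel-by-pixel (all data below are synthetic placeholders):

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)
    spectra = rng.random((300, 128))       # library spectra, 400-1000 nm bands
    labels = rng.integers(0, 2, 300)       # 0 = normal tissue, 1 = tumor

    clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(spectra[:240], labels[:240])
    print("held-out accuracy:", clf.score(spectra[240:], labels[240:]))

    # A whole slide would then be classified pixel-by-pixel, e.g.:
    # tumor_mask = clf.predict(cube.reshape(-1, 128)).reshape(h, w)
    ```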

  17. Development of measurement system for gauge block interferometer

    NASA Astrophysics Data System (ADS)

    Chomkokard, S.; Jinuntuya, N.; Wongkokua, W.

    2017-09-01

    We developed a measurement system for collecting and analyzing the fringe-pattern images from a gauge block interferometer. The system was based on the Raspberry Pi, an open-source system, with Python programming and the OpenCV image-manipulation library. The images were recorded by the five-megapixel Raspberry Pi camera. Image noise was suppressed to obtain the best results in the analyses. The low-noise images were then processed to find the edges of the fringe patterns using the contour technique for the phase-shift analyses. We tested our system with the phase-shift patterns between a gauge block and a reference plate. The phase-shift patterns were measured by a Twyman-Green type interferometer using a He-Ne laser with the temperature controlled at 20.0 °C. The results of the measurement will be presented and discussed.
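
    The fringe-edge step can be sketched with standard OpenCV calls (the filename is a placeholder, and the blur/threshold settings are assumptions, not the authors' exact pipeline):

    ```python
    import cv2

    img = cv2.imread("fringes.png", cv2.IMREAD_GRAYSCALE)

    # Suppress noise, binarize with Otsu's threshold so each bright fringe
    # becomes a blob, then trace the fringe boundaries as contours.
    img = cv2.GaussianBlur(img, (5, 5), 0)
    _, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    print("fringe edges found:", len(contours))
    ```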

  18. A game-based platform for crowd-sourcing biomedical image diagnosis and standardized remote training and education of diagnosticians

    NASA Astrophysics Data System (ADS)

    Feng, Steve; Woo, Minjae; Chandramouli, Krithika; Ozcan, Aydogan

    2015-03-01

    Over the past decade, crowd-sourcing complex image analysis tasks to a human crowd has emerged as an alternative to energy-inefficient and difficult-to-implement computational approaches. Following this trend, we have developed a mathematical framework for statistically combining human crowd-sourcing of biomedical image analysis and diagnosis through games. Using a web-based smart game (BioGames), we demonstrated this platform's effectiveness for telediagnosis of malaria from microscopic images of individual red blood cells (RBCs). After public release in early 2012 (http://biogames.ee.ucla.edu), more than 3000 gamers (experts and non-experts) used this BioGames platform to diagnose over 2800 distinct RBC images, marking them as positive (infected) or negative (non-infected). Furthermore, we asked expert diagnosticians to tag the same set of cells with labels of positive, negative, or questionable (insufficient information for a reliable diagnosis) and statistically combined their decisions to generate a gold standard malaria image library. Our framework utilized minimally trained gamers' diagnoses to generate a set of statistical labels with an accuracy that is within 98% of our gold standard image library, demonstrating the "wisdom of the crowd". Using the same image library, we have recently launched a web-based malaria training and educational game allowing diagnosticians to compare their performance with their peers. After diagnosing a set of ~500 cells per game, diagnosticians can compare their quantified scores against a leaderboard and view their misdiagnosed cells. Using this platform, we aim to expand our gold standard library with new RBC images and provide a quantified digital tool for measuring and improving diagnostician training globally.
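
    The statistical combination of gamer diagnoses can be illustrated with a simple per-cell vote tally. This is a minimal sketch under the assumption of plain majority voting; the platform's actual mathematical framework combines labels more elaborately.

```python
# Minimal sketch of combining crowd diagnoses per cell, assuming `ballots`
# holds (cell_id, diagnosis) pairs with 1 = infected; names are illustrative.
from collections import defaultdict

ballots = [("cell_17", 1), ("cell_17", 1), ("cell_17", 0),
           ("cell_42", 0), ("cell_42", 0)]

votes = defaultdict(list)
for cell, diagnosis in ballots:
    votes[cell].append(diagnosis)

for cell, v in votes.items():
    p_infected = sum(v) / len(v)          # crowd estimate of infection
    label = "positive" if p_infected > 0.5 else "negative"
    print(cell, label, round(p_infected, 2))
```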

  19. Detecting Unknown Artificial Urban Surface Materials Based on Spectral Dissimilarity Analysis.

    PubMed

    Jilge, Marianne; Heiden, Uta; Habermeyer, Martin; Mende, André; Juergens, Carsten

    2017-08-08

    High resolution imaging spectroscopy data have been recognised as a valuable data resource for augmenting detailed material inventories that serve as input for various urban applications. Image-specific urban spectral libraries are successfully used in urban imaging spectroscopy studies. However, the regional- and sensor-specific transferability of such libraries is limited due to the wide range of different surface materials. With the developed methodology, incomplete urban spectral libraries can be utilised by assuming that unknown surface material spectra are dissimilar to the known spectra in a basic spectral library (BSL). The similarity measure SID-SCA (Spectral Information Divergence-Spectral Correlation Angle) is applied to detect image-specific unknown urban surfaces while avoiding spectral mixtures. These detected unknown materials are categorised into distinct and identifiable material classes based on their spectral and spatial metrics. Experimental results demonstrate a successful redetection of material classes that had been previously erased in order to simulate an incomplete BSL. Additionally, completely new materials, e.g., solar panels, were identified in the data. It is further shown that the level of incompleteness of the BSL and the defined dissimilarity threshold are decisive for the detection of unknown material classes and the degree of spectral intra-class variability. A detailed accuracy assessment of the pre-classification results, aiming to separate natural and artificial materials, demonstrates spectral confusion between spectrally similar materials when utilizing SID-SCA. However, most spectral confusion occurs within the natural or within the artificial materials, which does not affect the overall aim. The dissimilarity analysis overcomes the limitations of working with incomplete urban spectral libraries and enables the generation of image-specific training databases.
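
    A minimal sketch of the SID-SCA measure follows, assuming the definitions common in the literature: spectral information divergence over band-normalized spectra, a spectral correlation angle arccos((R + 1) / 2) from the Pearson correlation R, and their combination SID * tan(SCA). The example spectra are invented.

```python
# Minimal sketch of the SID-SCA dissimilarity measure under the common
# literature definitions; not necessarily the authors' exact formulation.
import numpy as np

def sid_sca(x, y, eps=1e-12):
    p = x / (x.sum() + eps) + eps          # spectra as probability vectors
    q = y / (y.sum() + eps) + eps
    sid = np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p))
    r = np.corrcoef(x, y)[0, 1]            # Pearson correlation of the spectra
    sca = np.arccos((r + 1.0) / 2.0)       # spectral correlation angle
    return sid * np.tan(sca)

known = np.array([0.10, 0.22, 0.35, 0.30, 0.18])   # library (BSL) spectrum
pixel = np.array([0.40, 0.10, 0.05, 0.09, 0.36])   # image spectrum
# Large values flag spectra dissimilar to everything in the BSL (unknowns).
print(sid_sca(pixel, known))
```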

  20. Detecting Unknown Artificial Urban Surface Materials Based on Spectral Dissimilarity Analysis

    PubMed Central

    Jilge, Marianne; Heiden, Uta; Habermeyer, Martin; Mende, André; Juergens, Carsten

    2017-01-01

    High resolution imaging spectroscopy data have been recognised as a valuable data resource for augmenting detailed material inventories that serve as input for various urban applications. Image-specific urban spectral libraries are successfully used in urban imaging spectroscopy studies. However, the regional- and sensor-specific transferability of such libraries is limited due to the wide range of different surface materials. With the developed methodology, incomplete urban spectral libraries can be utilised by assuming that unknown surface material spectra are dissimilar to the known spectra in a basic spectral library (BSL). The similarity measure SID-SCA (Spectral Information Divergence-Spectral Correlation Angle) is applied to detect image-specific unknown urban surfaces while avoiding spectral mixtures. These detected unknown materials are categorised into distinct and identifiable material classes based on their spectral and spatial metrics. Experimental results demonstrate a successful redetection of material classes that had been previously erased in order to simulate an incomplete BSL. Additionally, completely new materials, e.g., solar panels, were identified in the data. It is further shown that the level of incompleteness of the BSL and the defined dissimilarity threshold are decisive for the detection of unknown material classes and the degree of spectral intra-class variability. A detailed accuracy assessment of the pre-classification results, aiming to separate natural and artificial materials, demonstrates spectral confusion between spectrally similar materials when utilizing SID-SCA. However, most spectral confusion occurs within the natural or within the artificial materials, which does not affect the overall aim. The dissimilarity analysis overcomes the limitations of working with incomplete urban spectral libraries and enables the generation of image-specific training databases. PMID:28786947

  1. Processing of food, body and emotional stimuli in anorexia nervosa: a systematic review and meta-analysis of functional magnetic resonance imaging studies.

    PubMed

    Zhu, Yikang; Hu, Xiaochen; Wang, Jijun; Chen, Jue; Guo, Qian; Li, Chunbo; Enck, Paul

    2012-11-01

    The characteristics of the cognitive processing of food, body and emotional information in patients with anorexia nervosa (AN) are debatable. We reviewed functional magnetic resonance imaging studies to assess whether a consistent neural basis and network emerged across the studies to date. Searching PubMed, Ovid, Web of Science, The Cochrane Library and Google Scholar between January 1980 and May 2012, we identified 17 relevant studies. Activation likelihood estimation was used to perform a quantitative meta-analysis of functional magnetic resonance imaging studies. For both food stimuli and body stimuli, AN patients showed increased hemodynamic responses in emotion-related regions (frontal, caudate, uncus, insula and temporal) and decreased activation in the parietal region. Although no robust brain activation has been found in response to emotional stimuli, emotion-related neural networks are involved in the processing of food and body stimuli in AN. This suggests that negative emotional arousal is related to a cognitive processing bias for food and body stimuli in AN. Copyright © 2012 John Wiley & Sons, Ltd and Eating Disorders Association.

  2. Trainable Cataloging for Digital Image Libraries with Applications to Volcano Detection

    NASA Technical Reports Server (NTRS)

    Burl, M. C.; Fayyad, U. M.; Perona, P.; Smyth, P.

    1995-01-01

    Users of digital image libraries are often not interested in image data per se but in derived products such as catalogs of objects of interest. Converting an image database into a usable catalog is typically carried out manually at present. For many larger image databases, the purely manual approach is completely impractical. In this paper we describe the development of a trainable cataloging system: the user indicates the location of the objects of interest for a number of training images, and the system learns to detect and catalog these objects in the rest of the database. In particular we describe the application of this system to the cataloging of small volcanoes in radar images of Venus. The volcano problem is of interest because of its scale (30,000 images, on the order of 1 million detectable volcanoes), its technical difficulty (the variability of the volcanoes in appearance) and its scientific importance. The problem of uncertain or subjective ground truth is of fundamental importance in cataloging problems of this nature and is discussed in some detail. Experimental results are presented which quantify and compare the detection performance of the system relative to human detection performance. The paper concludes by discussing the limitations of the proposed system and the lessons learned that are of general relevance to the development of digital image libraries.

  3. Building a Hypertextual Digital Library in the Humanities: A Case Study on London.

    ERIC Educational Resources Information Center

    Crane, Gregory; Smith, David A.; Wulfman, Clifford E.

    This paper describes the creation of a new humanities digital library collection: 11,000,000 words and 10,000 images representing books, images, and maps on pre-twentieth century London and its environs. The London collection contained far more dense and precise information than the materials from the Greco-Roman world. The London collection thus…

  4. In flight image processing on multi-rotor aircraft for autonomous landing

    NASA Astrophysics Data System (ADS)

    Henry, Richard, Jr.

    An estimated $6.4 billion was spent worldwide during 2013 on developing drone technology, a figure expected to double in the next decade. However, drone applications typically demand strong pilot skills, safety, responsibility, and adherence to regulations during flight. If the flight control process could be made safer and more reliable in terms of landing, it would be possible to develop a wider range of applications. The objective of this research effort is to describe the design and evaluation of a fully autonomous Unmanned Aerial System (UAS), specifically a four-rotor aircraft commonly known as a quadcopter, for precise landing applications. Full landing autonomy is achieved through in-flight image processing for target recognition, employing the open-source library OpenCV. In addition, all imaging data are processed by a single embedded computer that estimates a relative position with respect to the target landing pad. Results show a reduction in the average offset error of 67.88% in comparison to the current return-to-launch (RTL) method, which relies only on GPS positioning. The present work validates the need for image processing in precise landing applications, in place of the inexact method of relying on commercial low-cost GPS.

  5. Angular relational signature-based chest radiograph image view classification.

    PubMed

    Santosh, K C; Wendling, Laurent

    2018-01-22

    In a computer-aided diagnosis (CAD) system, especially for chest radiograph or chest X-ray (CXR) screening, CXR image view information is required. Automatically separating CXR image views, frontal and lateral, can ease the subsequent CXR screening process, since the techniques may not work equally well for both views. We present a novel technique to classify frontal and lateral CXR images, where we introduce an angular relational signature, computed through the force histogram, to extract features, and apply three different state-of-the-art classifiers: multi-layer perceptron, random forest, and support vector machine to make a decision. We validated our fully automatic technique on a set of 8100 images hosted by the U.S. National Library of Medicine (NLM), National Institutes of Health (NIH), and achieved an accuracy close to 100%. Our method outperforms the state-of-the-art methods in terms of processing time (less than or close to 2 s for the whole test data) while achieving comparable accuracy, which justifies its practicality. Graphical Abstract: Interpreting chest X-ray (CXR) images through the angular relational signature.
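
    The decision stage maps naturally onto scikit-learn. The sketch below assumes the angular relational signatures have already been extracted; the feature dimensions and data are random placeholders, with only the three classifier families taken from the abstract.

```python
# Minimal sketch of the view-classification decision stage, assuming
# `features` holds precomputed angular relational signatures and `views`
# holds the frontal(0)/lateral(1) ground truth.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(1)
features = rng.random((300, 180))      # placeholder 180-bin signatures
views = rng.integers(0, 2, 300)

# The three classifier families named in the abstract.
for clf in (MLPClassifier(max_iter=1000),
            RandomForestClassifier(n_estimators=100),
            SVC(kernel="rbf")):
    score = cross_val_score(clf, features, views, cv=5).mean()
    print(type(clf).__name__, round(score, 3))
```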

  6. SU-E-J-131: Augmenting Atlas-Based Segmentation by Incorporating Image Features Proximal to the Atlas Contours

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Dengwang; Liu, Li; Kapp, Daniel S.

    2015-06-15

    Purpose: To facilitate current automatic segmentation, in this work we propose a narrow-shell strategy to enhance the information of each contour in the library and to improve the accuracy of the existing atlas-based approach. Methods: In setting up the atlas-based library, we include not only the coordinates of contour points, but also the image features adjacent to the contour. 139 planning CT scans with normal-appearing livers obtained during radiotherapy treatment planning were used to construct the library. The CT images within the library were registered to each other using affine registration. A nonlinear narrow shell was automatically constructed both inside and outside of the liver contours, with its regional thickness determined by the distance between two vertices alongside the contour. The common image features within the narrow shell between a new case and a library case were first selected by a Speeded-Up Robust Features (SURF) strategy. A deformable registration was then performed using a thin-plate-splines (TPS) technique. The contour associated with the library case was propagated automatically onto the images of the new patient by exploiting the deformation field vectors. The liver contour was finally obtained by employing a level-set-based energy function within the narrow shell. The performance of the proposed method was evaluated by quantitatively comparing the auto-segmentation results with those delineated by a physician. Results: Application of the technique to 30 liver cases suggested that the technique was capable of reliably segmenting organs such as the liver with little human intervention. Compared with the manual segmentation results by a physician, the average volumetric overlap percentage (VOP) was found to be 92.43% ± 2.14%. Conclusion: Incorporation of image features into the library contours improves the currently available atlas-based auto-contouring techniques and provides a clinically practical solution for auto-segmentation. This work is supported by NIH/NIBIB (1R01-EB016777), the National Natural Science Foundation of China (No.61471226 and No.61201441), research funding from Shandong Province (No.BS2012DX038 and No.J12LN23), and research funding from Jinan City (No.201401221 and No.20120109).
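
    The feature-matching step can be sketched with OpenCV. ORB stands in for SURF below, since SURF is patented and requires a non-free OpenCV build; the file names are assumptions, and the narrow-shell restriction and TPS warp themselves are omitted.

```python
# Minimal sketch of matching image features between a library case and a
# new case, assuming grayscale CT slices "library_case.png" and
# "new_case.png" exist; ORB is a stand-in for the paper's SURF strategy.
import cv2

lib_img = cv2.imread("library_case.png", cv2.IMREAD_GRAYSCALE)
new_img = cv2.imread("new_case.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(lib_img, None)
kp2, des2 = orb.detectAndCompute(new_img, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# The matched point pairs (restricted to the narrow shell in the paper)
# would drive the thin-plate-spline deformable registration step.
src = [kp1[m.queryIdx].pt for m in matches[:50]]
dst = [kp2[m.trainIdx].pt for m in matches[:50]]
print("matched point pairs for TPS registration:", len(src))
```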

  7. Application specific serial arithmetic arrays

    NASA Technical Reports Server (NTRS)

    Winters, K.; Mathews, D.; Thompson, T.

    1990-01-01

    High performance systolic arrays of serial-parallel multiplier elements may be rapidly constructed for specific applications by applying hardware description language techniques to a library of full-custom CMOS building blocks. Single-clock precharged circuits have been implemented for these arrays at clock rates in excess of 100 MHz using economical 2-micron (minimum feature size) CMOS processes; the arrays may be quickly configured for a variety of applications. A number of application-specific arrays are presented, including a 2-D convolver for image processing, an integer polynomial solver, and a finite-field polynomial solver.

  8. Interactive Light Stimulus Generation with High Performance Real-Time Image Processing and Simple Scripting.

    PubMed

    Szécsi, László; Kacsó, Ágota; Zeck, Günther; Hantz, Péter

    2017-01-01

    Light stimulation with precise and complex spatial and temporal modulation is demanded by research fields such as visual neuroscience, optogenetics, ophthalmology, and visual psychophysics. We developed a user-friendly and flexible stimulus-generating framework (GEARS: GPU-based Eye And Retina Stimulation Software), which offers access to GPU computing power and allows interactive modification of stimulus parameters during experiments. Furthermore, it has built-in support for driving external equipment, as well as for synchronization tasks, via USB ports. The use of GEARS does not require elaborate programming skills. The necessary scripting is visually aided by an intuitive interface, while the details of the underlying software and hardware components remain hidden. Internally, the software is a C++/Python hybrid using OpenGL graphics. Computations are performed on the GPU and are defined in the GLSL shading language. However, all GPU settings, including the GPU shader programs, are automatically generated by GEARS. This is configured through a method encountered in game programming, which allows high flexibility: stimuli are straightforwardly composed using a broad library of basic components. Stimulus rendering is implemented solely in C++, so intermediary libraries for interfacing could be omitted. This enables the program to perform computationally demanding tasks such as en masse random number generation or real-time image processing by local and global operations.

  9. Development of High Throughput Process for Constructing 454 Titanium and Illumina Libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deshpande, Shweta; Hack, Christopher; Tang, Eric

    2010-05-28

    We have developed two processes with the Biomek FX robot to construct 454 Titanium and Illumina libraries in order to meet increasing library demands. All modifications to the library construction steps were made to adapt the entire process to the 96-well plate format. The key modifications include shearing DNA with the Covaris E210 and performing enzymatic reaction cleanup and fragment size selection with SPRI beads and magnetic plate holders. The construction of 96 Titanium libraries takes about 8 hours from sheared DNA to ssDNA recovery. The processing of 96 Illumina libraries takes less time than the Titanium library process. Although both processes still require manual transfer of plates from the robot to other workstations such as thermocyclers, these robotic processes represent roughly a 12- to 24-fold increase in library capacity compared to the manual processes. To enable the sequencing of many libraries in parallel, we have also developed sets of molecular barcodes for both library types. The requirements for the 454 library barcodes include 10 bases, 40-60% GC content, no consecutive identical bases, and no fewer than 3 base differences between barcodes. We have used 96 of the resulting 270 barcodes to construct libraries and pooled them to test the ability to accurately assign reads to the right samples. When allowing one base error in the 10-base barcodes, we could assign 99.6% of the total reads, and 100% of them were uniquely assigned. As for the Illumina barcodes, the requirements include 4 bases, balanced GC content, and at least 2 base differences between barcodes. We have begun to assess the ability to assign reads after pooling different numbers of libraries. We will discuss the progress and the challenges of these scale-up processes.
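
    The 454 barcode design rules translate directly into a screening loop. The sketch below assumes rejection sampling of random candidates and uses Hamming distance for the between-barcode difference; it is not the authors' generation procedure.

```python
# Minimal sketch of screening 454 barcodes against the stated rules:
# 10 bases, 40-60% GC, no two identical adjacent bases, and >= 3 base
# differences (Hamming distance) from every barcode already kept.
import random

random.seed(0)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def valid(bc):
    gc = sum(base in "GC" for base in bc) / len(bc)
    no_repeat = all(x != y for x, y in zip(bc, bc[1:]))
    return 0.4 <= gc <= 0.6 and no_repeat

kept = []
while len(kept) < 96:                   # one plate's worth of barcodes
    cand = "".join(random.choice("ACGT") for _ in range(10))
    if valid(cand) and all(hamming(cand, k) >= 3 for k in kept):
        kept.append(cand)

print(len(kept), "barcodes, e.g.", kept[:3])
```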

  10. Education Schools and Library Schools: A Comparison of Their Perceptions by Academia.

    ERIC Educational Resources Information Center

    Lorenzen, Michael

    2000-01-01

    Compares the similarities of education and library schools in regard to status. Topics include image problems of education and library schools; and reasons they are held in low esteem in higher education, including gender bias, low pay, social bias, practical versus theoretical orientation, and a lack of research. (LRW)

  11. Digitizing Technologies for Preservation. SPEC Kit 214.

    ERIC Educational Resources Information Center

    Kellerman, L. Suzanne, Comp.; Wilson, Rebecca, Comp.

    The Association of Research Libraries distributed a survey to its 119 member libraries to assess the use of state-of-the-art digital technologies as a preservation method. Libraries were asked to report detailed data on all projects designed specifically to: (1) enhance images of faded or brittle originals, (2) provide access to digital images…

  12. USGS Spectral Library Version 7

    USGS Publications Warehouse

    Kokaly, Raymond F.; Clark, Roger N.; Swayze, Gregg A.; Livo, K. Eric; Hoefen, Todd M.; Pearson, Neil C.; Wise, Richard A.; Benzel, William M.; Lowers, Heather A.; Driscoll, Rhonda L.; Klein, Anna J.

    2017-04-10

    We have assembled a library of spectra measured with laboratory, field, and airborne spectrometers. The instruments used cover wavelengths from the ultraviolet to the far infrared (0.2 to 200 microns [μm]). Laboratory samples of specific minerals, plants, chemical compounds, and manmade materials were measured. In many cases, samples were purified, so that unique spectral features of a material can be related to its chemical structure. These spectro-chemical links are important for interpreting remotely sensed data collected in the field or from an aircraft or spacecraft. This library also contains physically constructed as well as mathematically computed mixtures. Four different spectrometer types were used to measure spectra in the library: (1) Beckman™ 5270 covering the spectral range 0.2 to 3 µm, (2) standard, high resolution (hi-res), and high-resolution Next Generation (hi-resNG) models of Analytical Spectral Devices (ASD) field portable spectrometers covering the range from 0.35 to 2.5 µm, (3) Nicolet™ Fourier Transform Infra-Red (FTIR) interferometer spectrometers covering the range from about 1.12 to 216 µm, and (4) the NASA Airborne Visible/Infra-Red Imaging Spectrometer AVIRIS, covering the range 0.37 to 2.5 µm. Measurements of rocks, soils, and natural mixtures of minerals were made in laboratory and field settings. Spectra of plant components and vegetation plots, comprising many plant types and species with varying backgrounds, are also in this library. Measurements by airborne spectrometers are included for forested vegetation plots, in which the trees are too tall for measurement by a field spectrometer. This report describes the instruments used, the organization of materials into chapters, metadata descriptions of spectra and samples, and possible artifacts in the spectral measurements. To facilitate greater application of the spectra, the library has also been convolved to the sampling and bandpasses of selected spectrometers and imaging spectrometers, and resampled to selected broadband multispectral sensors. The native file format of the library is the SPECtrum Processing Routines (SPECPR) data format. This report describes how to access freely available software to read the SPECPR format. To facilitate broader access to the library, we produced generic formats of the spectra and metadata in text files. The library is provided on digital media and online at https://speclab.cr.usgs.gov/spectral-lib.html. A long-term archive of these data is stored on the USGS ScienceBase data server (https://dx.doi.org/10.5066/F7RR1WDJ).
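
    Convolving library spectra to a sensor's bandpasses can be sketched as response-weighted averaging. The Gaussian band model, band centers, and FWHM values below are illustrative assumptions, not the parameters used for the USGS products.

```python
# Minimal sketch of convolving a library spectrum to a sensor band,
# assuming a Gaussian spectral response defined by band center and FWHM.
import numpy as np

wavelengths = np.linspace(0.35, 2.5, 2151)          # microns
reflectance = 0.3 + 0.1 * np.sin(8 * wavelengths)   # placeholder spectrum

def band_convolve(center, fwhm):
    sigma = fwhm / 2.3548                            # FWHM -> Gaussian sigma
    response = np.exp(-0.5 * ((wavelengths - center) / sigma) ** 2)
    return np.sum(response * reflectance) / np.sum(response)

# Resample to a hypothetical 4-band multispectral sensor.
bands = [(0.49, 0.065), (0.56, 0.035), (0.66, 0.030), (0.83, 0.125)]
print([round(band_convolve(c, f), 4) for c, f in bands])
```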

  13. A virtual phantom library for the quantification of deformable image registration uncertainties in patients with cancers of the head and neck.

    PubMed

    Pukala, Jason; Meeks, Sanford L; Staton, Robert J; Bova, Frank J; Mañon, Rafael R; Langen, Katja M

    2013-11-01

    Deformable image registration (DIR) is being used increasingly in various clinical applications. However, the underlying uncertainties of DIR are not well-understood and a comprehensive methodology has not been developed for assessing a range of interfraction anatomic changes during head and neck cancer radiotherapy. This study describes the development of a library of clinically relevant virtual phantoms for the purpose of aiding clinicians in the QA of DIR software. These phantoms will also be available to the community for the independent study and comparison of other DIR algorithms and processes. Each phantom was derived from a pair of kVCT volumetric image sets. The first images were acquired of head and neck cancer patients prior to the start-of-treatment and the second were acquired near the end-of-treatment. A research algorithm was used to autosegment and deform the start-of-treatment (SOT) images according to a biomechanical model. This algorithm allowed the user to adjust the head position, mandible position, and weight loss in the neck region of the SOT images to resemble the end-of-treatment (EOT) images. A human-guided thin-plate splines algorithm was then used to iteratively apply further deformations to the images with the objective of matching the EOT anatomy as closely as possible. The deformations from each algorithm were combined into a single deformation vector field (DVF) and a simulated end-of-treatment (SEOT) image dataset was generated from that DVF. Artificial noise was added to the SEOT images and these images, along with the original SOT images, created a virtual phantom where the underlying "ground-truth" DVF is known. Images from ten patients were deformed in this fashion to create ten clinically relevant virtual phantoms. The virtual phantoms were evaluated to identify unrealistic DVFs using the normalized cross correlation (NCC) and the determinant of the Jacobian matrix. A commercial deformation algorithm was applied to the virtual phantoms to show how they may be used to generate estimates of DIR uncertainty. The NCC showed that the simulated phantom images had greater similarity to the actual EOT images than the images from which they were derived, supporting the clinical relevance of the synthetic deformation maps. Calculation of the Jacobian of the "ground-truth" DVFs resulted in only positive values. As an example, mean error statistics are presented for all phantoms for the brainstem, cord, mandible, left parotid, and right parotid. It is essential that DIR algorithms be evaluated using a range of possible clinical scenarios for each treatment site. This work introduces a library of virtual phantoms intended to resemble real cases for interfraction head and neck DIR that may be used to estimate and compare the uncertainty of any DIR algorithm.
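
    The Jacobian check mentioned above has a compact numerical form: a deformation x + u(x) is free of folding wherever det(I + du/dx) stays positive. The sketch below evaluates this with finite differences on a random placeholder DVF, not the study's phantom data.

```python
# Minimal sketch of the folding check: compute the determinant of the
# Jacobian of a 3D deformation vector field `dvf` of shape (z, y, x, 3)
# in voxel units and verify it stays positive everywhere.
import numpy as np

rng = np.random.default_rng(2)
dvf = 0.1 * rng.standard_normal((20, 20, 20, 3))    # placeholder DVF

# Spatial gradients of each displacement component (du_i / dx_j).
grads = [np.gradient(dvf[..., i], axis=(0, 1, 2)) for i in range(3)]
jac = np.zeros(dvf.shape[:3] + (3, 3))
for i in range(3):
    for j in range(3):
        jac[..., i, j] = grads[i][j]
jac += np.eye(3)                        # Jacobian of x + u(x)

det = np.linalg.det(jac)
print("min det(J):", det.min(), "| folding voxels:", int((det <= 0).sum()))
```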

  14. GPU-based real-time trinocular stereo vision

    NASA Astrophysics Data System (ADS)

    Yao, Yuanbin; Linton, R. J.; Padir, Taskin

    2013-01-01

    Most stereovision applications are binocular, using information from a two-camera array to perform stereo matching and compute the depth image. Trinocular stereovision with a three-camera array has been proven to provide higher accuracy in stereo matching, which can benefit applications such as distance finding, object recognition, and detection. This paper presents a real-time stereovision algorithm implemented on a GPGPU (general-purpose graphics processing unit) using a trinocular stereovision camera array. The algorithm employs a winner-take-all method to fuse the disparities computed in different directions, following various image processing techniques, to obtain the depth information. The goal of the algorithm is to achieve real-time processing speed with the help of a GPGPU, using the Open Source Computer Vision Library (OpenCV) in C++ and the NVIDIA CUDA GPGPU solution. The results are compared in accuracy and speed to verify the improvement.
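
    The winner-take-all fusion step can be sketched with NumPy. The cost volumes below are random placeholders standing in for the per-direction matching costs; a GPU implementation would map the same argmin over CUDA threads.

```python
# Minimal sketch of winner-take-all fusion of disparities computed along
# different camera-pair directions, assuming per-pixel matching cost
# volumes are already available.
import numpy as np

rng = np.random.default_rng(3)
h, w, dmax = 48, 64, 32
cost_h = rng.random((h, w, dmax))       # horizontal camera-pair costs
cost_v = rng.random((h, w, dmax))       # vertical camera-pair costs

# Sum the costs across directions, then take the winner per pixel.
fused_cost = cost_h + cost_v
disparity = np.argmin(fused_cost, axis=2)       # winner-take-all
print("disparity map:", disparity.shape,
      "range:", disparity.min(), "-", disparity.max())
```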

  15. Image-algebraic design of multispectral target recognition algorithms

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.

    1994-06-01

    In this paper, we discuss methods for multispectral ATR (Automated Target Recognition) of small targets that are sensed under suboptimal conditions, such as haze, smoke, and low light levels. In particular, we discuss our ongoing development of algorithms and software that effect intelligent object recognition by selecting ATR filter parameters according to ambient conditions. Our algorithms are expressed in terms of IA (image algebra), a concise, rigorous notation that unifies linear and nonlinear mathematics in the image processing domain. IA has been implemented on a variety of parallel computers, with preprocessors available for the Ada and FORTRAN languages. An image algebra C++ class library has recently been made available. Thus, our algorithms are both feasible implementationally and portable to numerous machines. Analyses emphasize the aspects of image algebra that aid the design of multispectral vision algorithms, such as parameterized templates that facilitate the flexible specification of ATR filters.

  16. A pipeline for comprehensive and automated processing of electron diffraction data in IPLT.

    PubMed

    Schenk, Andreas D; Philippsen, Ansgar; Engel, Andreas; Walz, Thomas

    2013-05-01

    Electron crystallography of two-dimensional crystals allows the structural study of membrane proteins in their native environment, the lipid bilayer. Determining the structure of a membrane protein at near-atomic resolution by electron crystallography remains, however, a very labor-intensive and time-consuming task. To simplify and accelerate the data processing aspect of electron crystallography, we implemented a pipeline for the processing of electron diffraction data using the Image Processing Library and Toolbox (IPLT), which provides a modular, flexible, integrated, and extendable cross-platform, open-source framework for image processing. The diffraction data processing pipeline is organized as several independent modules implemented in Python. The modules can be accessed either from a graphical user interface or through a command line interface, thus meeting the needs of both novice and expert users. The low-level image processing algorithms are implemented in C++ to achieve optimal processing performance, and their interface is exported to Python using a wrapper. For enhanced performance, the Python processing modules are complemented with a central data managing facility that provides a caching infrastructure. The validity of our data processing algorithms was verified by processing a set of aquaporin-0 diffraction patterns with the IPLT pipeline and comparing the resulting merged data set with that obtained by processing the same diffraction patterns with the classical set of MRC programs. Copyright © 2013 Elsevier Inc. All rights reserved.

  17. A pipeline for comprehensive and automated processing of electron diffraction data in IPLT

    PubMed Central

    Schenk, Andreas D.; Philippsen, Ansgar; Engel, Andreas; Walz, Thomas

    2013-01-01

    Electron crystallography of two-dimensional crystals allows the structural study of membrane proteins in their native environment, the lipid bilayer. Determining the structure of a membrane protein at near-atomic resolution by electron crystallography remains, however, a very labor-intensive and time-consuming task. To simplify and accelerate the data processing aspect of electron crystallography, we implemented a pipeline for the processing of electron diffraction data using the Image Processing Library & Toolbox (IPLT), which provides a modular, flexible, integrated, and extendable cross-platform, open-source framework for image processing. The diffraction data processing pipeline is organized as several independent modules implemented in Python. The modules can be accessed either from a graphical user interface or through a command line interface, thus meeting the needs of both novice and expert users. The low-level image processing algorithms are implemented in C++ to achieve optimal processing performance, and their interface is exported to Python using a wrapper. For enhanced performance, the Python processing modules are complemented with a central data managing facility that provides a caching infrastructure. The validity of our data processing algorithms was verified by processing a set of aquaporin-0 diffraction patterns with the IPLT pipeline and comparing the resulting merged data set with that obtained by processing the same diffraction patterns with the classical set of MRC programs. PMID:23500887

  18. Synthesis and systematic evaluation of dark resonance energy transfer (DRET)-based library and its application in cell imaging.

    PubMed

    Su, Dongdong; Teoh, Chai Lean; Kang, Nam-Young; Yu, Xiaotong; Sahu, Srikanta; Chang, Young-Tae

    2015-03-01

    In this paper, we report a new strategy for constructing a dye library with large Stokes shifts. By coupling a dark donor with BODIPY acceptors of tunable high quantum yield, a novel dark resonance energy transfer (DRET)-based library, named BNM, has been synthesized. Upon excitation of the dark donor (BDN) at 490 nm, the absorbed energy is transferred to the acceptor (BDM) with high efficiency, tunable over a broad range from 557 nm to 716 nm, with a high quantum yield of up to 0.8. Notably, the majority of the donor's non-radiative energy loss was converted into the acceptor's fluorescence output with minimal leakage of donor emission. Fluorescence imaging tested in live cells showed that the BNM compounds are cell-permeable and can also be employed for live-cell imaging. This is a new library that can be excited through a dark donor, allowing strong fluorescence emission over a wide range of wavelengths. Thus, the BNM library is well suited for high-throughput screening or multiplex experiments in biological applications using a single laser excitation source. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Adding and Deleting Images

    EPA Pesticide Factsheets

    Images are added via the Drupal WebCMS Editor. Once an image is uploaded onto a page, it is available via the Library and your files. You can edit the metadata, delete the image permanently, and/or replace images on the Files tab.

  20. Recombination walking: genetic selection of clones from pooled libraries of yeast artificial chromosomes by homologous recombination.

    PubMed Central

    Miller, A M; Savinelli, E A; Couture, S M; Hannigan, G M; Han, Z; Selden, R F; Treco, D A

    1993-01-01

    Recombination walking is based on the genetic selection of specific human clones from a yeast artificial chromosome (YAC) library by homologous recombination. The desired clone is selected from a pooled (unordered) YAC library, eliminating labor-intensive steps typically used in organizing and maintaining ordered YAC libraries. Recombination walking represents an efficient approach to library screening and is well suited for chromosome-walking approaches to the isolation of genes associated with common diseases. PMID:8367472

  1. Introducing ORACLE: Library Processing in a Multi-User Environment.

    ERIC Educational Resources Information Center

    Queensland Library Board, Brisbane (Australia).

    Currently being developed by the State Library of Queensland, Australia, ORACLE (On-Line Retrieval of Acquisitions, Cataloguing, and Circulation Details for Library Enquiries) is a computerized library system designed to provide rapid processing of library materials in a multi-user environment. It is based on the Australian MARC format and fully…

  2. Connecting the Library's Patron Database to Campus Administrative Software: Simplifying the Library's Accounts Receivable Process

    ERIC Educational Resources Information Center

    Oliver, Astrid; Dahlquist, Janet; Tankersley, Jan; Emrich, Beth

    2010-01-01

    This article discusses the processes that occurred when the Library, Controller's Office, and Information Technology Department agreed to create an interface between the Library's Innovative Interfaces patron database and campus administrative software, Banner, using file transfer protocol, in an effort to streamline the Library's accounts…

  3. NOAA Photo Library

    Science.gov Websites

    NOAA Photo Library Image - nssl0059: Tornado in mature stage of development. Photo #3 of a series of classic photographs of this…

  4. United States: Reaching out with Library Services for GLBTQ Teens

    ERIC Educational Resources Information Center

    Carter, Julie

    2005-01-01

    Although major strides toward the inclusion of GLBT images in North American libraries have indeed been achieved, censorship is alive and well where children's literature is concerned. It is plain to see that libraries have become a major battleground for the free flow of information about and for GLBTQ children and families. This article…

  5. A 3D Kinematic Measurement of Knee Prosthesis Using X-ray Projection Images

    NASA Astrophysics Data System (ADS)

    Hirokawa, Shunji; Ariyoshi, Shogo; Hossain, Mohammad Abrar

    We have developed a technique for estimating the 3D motion of a knee prosthesis from its 2D perspective projections. Because Fourier descriptors were used for compact representation of the library templates and of the contours extracted from the prosthetic X-ray images, the entire silhouette contour of each prosthetic component was required. Consequently, our algorithm did not function when the silhouettes of the tibial and femoral components overlapped with each other. Here we propose a novel method to overcome this, which proceeds in two steps. First, the part of the silhouette contour missing due to overlap was interpolated using a free-form curve such as a Bezier curve, and a first-step position/orientation estimation was performed. In the next step, a clipping window was set in the projective coordinate system so as to separate the overlapped silhouettes drawn using the first-step estimates. A localized library, whose templates were clipped in shape accordingly, was then prepared, and the second-step estimation was performed. Computer model simulation demonstrated sufficient position/orientation estimation accuracy even for overlapped silhouettes, equivalent to that obtained without overlap.
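
    The Fourier-descriptor representation can be sketched by truncating the FFT of a contour treated as complex points. The circular test contour and the number of retained coefficients below are illustrative assumptions.

```python
# Minimal sketch of compact contour representation with Fourier
# descriptors, as used for the library templates.
import numpy as np

t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
contour = np.cos(t) + 1j * np.sin(t)            # contour as complex points

coeffs = np.fft.fft(contour)
coeffs[17:-16] = 0                              # keep 33 low-order descriptors
reconstructed = np.fft.ifft(coeffs)

err = np.abs(reconstructed - contour).max()
print("max reconstruction error with 33 coefficients:", err)
```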

  6. University Faculty Describe Their Use of Moving Images in Teaching and Learning and Their Perceptions of the Library's Role in That Use

    ERIC Educational Resources Information Center

    Otto, Jane Johnson

    2014-01-01

    The moving image plays a significant role in teaching and learning; faculty in a variety of disciplines consider it a crucial component of their coursework. Yet little has been written about how faculty identify, obtain, and use these resources and what role the library plays. This study, which engaged teaching faculty in a dialogue with library…

  7. Extending Digital Repository Architectures to Support Disk Image Preservation and Access

    DTIC Science & Technology

    2011-06-01

    Woods, Kam; Lee, Christopher A.; Garfinkel, Simson

  8. Multi-threaded integration of HTC-Vive and MeVisLab

    NASA Astrophysics Data System (ADS)

    Gunacker, Simon; Gall, Markus; Schmalstieg, Dieter; Egger, Jan

    2018-03-01

    This work presents how Virtual Reality (VR) can easily be integrated into medical applications via a plugin for a medical image processing framework called MeVisLab. A multi-threaded plugin has been developed using OpenVR, a VR library that can be used for developing vendor- and platform-independent VR applications. The plugin is tested using the HTC Vive, a head-mounted display developed by HTC and Valve Corporation.

  9. Preliminary investigation of submerged aquatic vegetation mapping using hyperspectral remote sensing.

    PubMed

    William, David J; Rybicki, Nancy B; Lombana, Alfonso V; O'Brien, Tim M; Gomez, Richard B

    2003-01-01

    The use of airborne hyperspectral remote sensing imagery for automated mapping of submerged aquatic vegetation (SAV) in the tidal Potomac River was investigated for near-real-time resource assessment and monitoring. Airborne hyperspectral imagery and field spectrometer measurements were obtained in October of 2000. A spectral library database containing selected ground-based and airborne sensor spectra was developed for use in image processing. The spectral library is used to automate the processing of hyperspectral imagery for potential real-time material identification and mapping. Field-based spectra were compared to the airborne imagery using the database to identify and map two species of SAV (Myriophyllum spicatum and Vallisneria americana). Overall accuracy of the vegetation maps derived from hyperspectral imagery was determined by comparison to a product that combined aerial photography and field-based sampling at the end of the SAV growing season. The algorithms and databases developed in this study will be useful with the current and forthcoming space-based hyperspectral remote sensing systems.

  10. A nanobuffer reporter library for fine-scale imaging and perturbation of endocytic organelles

    PubMed Central

    Wang, Chensu; Wang, Yiguang; Li, Yang; Bodemann, Brian; Zhao, Tian; Ma, Xinpeng; Huang, Gang; Hu, Zeping; DeBerardinis, Ralph J.; White, Michael A.; Gao, Jinming

    2015-01-01

    Endosomes, lysosomes and related catabolic organelles are a dynamic continuum of vacuolar structures that impact a number of cell physiological processes such as protein/lipid metabolism, nutrient sensing and cell survival. Here we develop a library of ultra-pH-sensitive fluorescent nanoparticles with chemical properties that allow fine-scale, multiplexed, spatio-temporal perturbation and quantification of catabolic organelle maturation at single organelle resolution to support quantitative investigation of these processes in living cells. Deployment in cells allows quantification of the proton accumulation rate in endosomes; illumination of previously unrecognized regulatory mechanisms coupling pH transitions to endosomal coat protein exchange; discovery of distinct pH thresholds required for mTORC1 activation by free amino acids versus proteins; broad-scale characterization of the consequence of endosomal pH transitions on cellular metabolomic profiles; and functionalization of a context-specific metabolic vulnerability in lung cancer cells. Together, these biological applications indicate the robustness and adaptability of this nanotechnology-enabled ‘detection and perturbation' strategy. PMID:26437053

  11. Web-based interactive 2D/3D medical image processing and visualization software.

    PubMed

    Mahmoudi, Seyyed Ehsan; Akhondi-Asl, Alireza; Rahmani, Roohollah; Faghih-Roohi, Shahrooz; Taimouri, Vahid; Sabouri, Ahmad; Soltanian-Zadeh, Hamid

    2010-05-01

    There are many medical image processing software tools available for research and diagnosis purposes. However, most of these tools are available only as local applications. This limits the accessibility of the software to a specific machine, and thus the data and processing power of that application are not available to other workstations. Further, there are operating system and processing power limitations which prevent such applications from running on every type of workstation. By developing web-based tools, it is possible for users to access the medical image processing functionalities wherever the internet is available. In this paper, we introduce a pure web-based, interactive, extendable, 2D and 3D medical image processing and visualization application that requires no client installation. Our software uses a four-layered design consisting of an algorithm layer, web-user-interface layer, server communication layer, and wrapper layer. To compete with extendibility of the current local medical image processing software, each layer is highly independent of other layers. A wide range of medical image preprocessing, registration, and segmentation methods are implemented using open source libraries. Desktop-like user interaction is provided by using AJAX technology in the web-user-interface. For the visualization functionality of the software, the VRML standard is used to provide 3D features over the web. Integration of these technologies has allowed implementation of our purely web-based software with high functionality without requiring powerful computational resources in the client side. The user-interface is designed such that the users can select appropriate parameters for practical research and clinical studies. Copyright (c) 2009 Elsevier Ireland Ltd. All rights reserved.

  12. Meet OLAF, a Good Friend of the IAPS! The Open Library of Affective Foods: A Tool to Investigate the Emotional Impact of Food in Adolescents

    PubMed Central

    Miccoli, Laura; Delgado, Rafael; Rodríguez-Ruiz, Sonia; Guerra, Pedro; García-Mármol, Eduardo; Fernández-Santaella, M. Carmen

    2014-01-01

    In the last decades, food pictures have been repeatedly employed to investigate the emotional impact of food on healthy participants as well as individuals who suffer from eating disorders and obesity. However, despite their widespread use, food pictures are typically selected according to each researcher's personal criteria, which make it difficult to reliably select food images and to compare results across different studies and laboratories. Therefore, to study affective reactions to food, it becomes pivotal to identify the emotional impact of specific food images based on wider samples of individuals. In the present paper we introduce the Open Library of Affective Foods (OLAF), which is a set of original food pictures created to reliably select food pictures based on the emotions they prompt, as indicated by affective ratings of valence, arousal, and dominance and by an additional food craving scale. OLAF images were designed to allow simultaneous use with affective images from the International Affective Picture System (IAPS), which is a well-known instrument to investigate emotional reactions in the laboratory. The ultimate goal of the OLAF is to contribute to understanding how food is emotionally processed in healthy individuals and in patients who suffer from eating and weight-related disorders. The present normative data, which was based on a large sample of an adolescent population, indicate that when viewing affective non-food IAPS images, valence, arousal, and dominance ratings were in line with expected patterns based on previous emotion research. Moreover, when viewing food pictures, affective and food craving ratings were consistent with research on food cue processing. As a whole, the data supported the methodological and theoretical reliability of the OLAF ratings, therefore providing researchers with a standardized tool to reliably investigate the emotional and motivational significance of food. The OLAF database is publicly available at zenodo.org. PMID:25490404

  13. Meet OLAF, a good friend of the IAPS! The Open Library of Affective Foods: a tool to investigate the emotional impact of food in adolescents.

    PubMed

    Miccoli, Laura; Delgado, Rafael; Rodríguez-Ruiz, Sonia; Guerra, Pedro; García-Mármol, Eduardo; Fernández-Santaella, M Carmen

    2014-01-01

    In the last decades, food pictures have been repeatedly employed to investigate the emotional impact of food on healthy participants as well as individuals who suffer from eating disorders and obesity. However, despite their widespread use, food pictures are typically selected according to each researcher's personal criteria, which make it difficult to reliably select food images and to compare results across different studies and laboratories. Therefore, to study affective reactions to food, it becomes pivotal to identify the emotional impact of specific food images based on wider samples of individuals. In the present paper we introduce the Open Library of Affective Foods (OLAF), which is a set of original food pictures created to reliably select food pictures based on the emotions they prompt, as indicated by affective ratings of valence, arousal, and dominance and by an additional food craving scale. OLAF images were designed to allow simultaneous use with affective images from the International Affective Picture System (IAPS), which is a well-known instrument to investigate emotional reactions in the laboratory. The ultimate goal of the OLAF is to contribute to understanding how food is emotionally processed in healthy individuals and in patients who suffer from eating and weight-related disorders. The present normative data, which was based on a large sample of an adolescent population, indicate that when viewing affective non-food IAPS images, valence, arousal, and dominance ratings were in line with expected patterns based on previous emotion research. Moreover, when viewing food pictures, affective and food craving ratings were consistent with research on food cue processing. As a whole, the data supported the methodological and theoretical reliability of the OLAF ratings, therefore providing researchers with a standardized tool to reliably investigate the emotional and motivational significance of food. The OLAF database is publicly available at zenodo.org.

  14. Checklist of Library Building Design Considerations. Fourth Edition.

    ERIC Educational Resources Information Center

    Sannwald, William W.

    This checklist serves as a guide during various stages of a library design process to help ensure that all needed spaces and functions are included, to help enable the evaluation of existing library spaces as part of a library's needs assessment process, and to help provide data and support to the library in presentations that might be made to…

  15. Multiple Case Studies of Public Library Systems in New York State: Service Decision-Making Processes

    ERIC Educational Resources Information Center

    Ren, Xiaoai

    2012-01-01

    This research examined the functions and roles of public library systems in New York State and the services they provide for individual libraries and the public. The dissertation further studied the service decision-making processes at three selected New York State cooperative public library systems. Public library systems have played an important…

  16. Software Quality Assurance and Verification for the MPACT Library Generation Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yuxuan; Williams, Mark L.; Wiarda, Dorothea

    This report fulfills the requirements for the Consortium for the Advanced Simulation of Light-Water Reactors (CASL) milestone L2:RTM.P14.02, “SQA and Verification for MPACT Library Generation,” by documenting the current status of the software quality, verification, and acceptance testing of nuclear data libraries for MPACT. It provides a brief overview of the library generation process, from general-purpose evaluated nuclear data files (ENDF/B) to a problem-dependent cross section library for modeling of light-water reactors (LWRs). The software quality assurance (SQA) programs associated with each piece of software used to generate the nuclear data libraries are discussed; specific tests within the SCALE/AMPX and VERA/XSTools repositories are described. The methods and associated tests to verify the quality of the library during the generation process are described in detail. The library generation process has been automated to a degree that (1) ensures it can be run without user intervention and (2) ensures the library can be reproduced. Finally, the acceptance testing process that will be performed by representatives from the Radiation Transport Methods (RTM) Focus Area prior to the production library’s release is described in detail.

  17. Interactive digital image manipulation system

    NASA Technical Reports Server (NTRS)

    Henze, J.; Dezur, R.

    1975-01-01

    The system is designed for manipulation, analysis, interpretation, and processing of a wide variety of image data. LANDSAT (ERTS) and other data in digital form can be input directly into the system. Photographic prints and transparencies are first converted to digital form with an on-line high-resolution microdensitometer. The system is implemented on a Hewlett-Packard 3000 computer with 128 K bytes of core memory and a 47.5 megabyte disk. It includes a true color display monitor, with processing memories, graphics overlays, and a movable cursor. Image data formats are flexible so that there is no restriction to a given set of remote sensors. Conversion between data types is available to provide a basis for comparison of the various data. Multispectral data is fully supported, and there is no restriction on the number of dimensions. In this way, multispectral data collected at more than one point in time may simply be treated as data collected with twice (or three times, etc.) the number of sensors. There are various libraries of functions available to the user: processing functions, display functions, system functions, and earth resources applications functions.

  18. Exploitation of commercial remote sensing images: reality ignored?

    NASA Astrophysics Data System (ADS)

    Allen, Paul C.

    1999-12-01

    The remote sensing market is on the verge of being awash in commercial high-resolution images. Market estimates are based on the growing number of planned commercial remote sensing electro-optical, radar, and hyperspectral satellites and aircraft. EarthWatch, Space Imaging, SPOT, and RDL, among others, are all working towards the launch and service of one- to five-meter panchromatic or radar-imaging satellites. Additionally, new advances in digital air surveillance and reconnaissance systems, both manned and unmanned, are also expected to expand the geospatial customer base. Regardless of platform, image type, or location, each system promises images with some combination of increased resolution, greater spectral coverage, reduced turn-around time (request-to-delivery), and/or reduced image cost. For the most part, however, market estimates for these new sources focus on the raw digital images (from collection to the ground station) while ignoring the requirements for a processing and exploitation infrastructure comprising exploitation tools, exploitation training, library systems, and image management systems. From this it would appear the commercial imaging community has failed to learn the hard lessons of national government experience, choosing instead to ignore reality and replicate the bias of collection over processing and exploitation. While this trend may not impact the small-quantity users that exist today, it will certainly adversely affect the mid- to large-sized users of the future.

  19. Missile signal processing common computer architecture for rapid technology upgrade

    NASA Astrophysics Data System (ADS)

    Rabinkin, Daniel V.; Rutledge, Edward; Monticciolo, Paul

    2004-10-01

    Interceptor missiles process IR images to locate an intended target and guide the interceptor towards it. Signal processing requirements have increased as sensor bandwidth increases and interceptors operate against more sophisticated targets. A typical interceptor signal processing chain comprises two parts. Front-end video processing operates on all pixels of the image and performs such operations as non-uniformity correction (NUC), image stabilization, frame integration and detection. Back-end target processing, which tracks and classifies targets detected in the image, performs such algorithms as Kalman tracking, spectral feature extraction and target discrimination. In the past, video processing was implemented using ASIC components or FPGAs because computation requirements exceeded the throughput of general-purpose processors. Target processing was performed using hybrid architectures that included ASICs, DSPs and general-purpose processors. The resulting systems tended to be function-specific and required custom software development. They were developed using non-integrated toolsets, and test equipment was developed along with the processor platform. The lifespan of a system utilizing the signal processing platform often spans decades, while the specialized nature of processor hardware and software makes it difficult and costly to upgrade. As a result, the signal processing systems often run on outdated technology, algorithms are difficult to update, and system effectiveness is impaired by the inability to rapidly respond to new threats. A new design approach is made possible by three developments: Moore's Law-driven improvement in computational throughput; a newly introduced vector computing capability in general-purpose processors; and a modern set of open interface software standards. Today's multiprocessor commercial-off-the-shelf (COTS) platforms have sufficient throughput to support interceptor signal processing requirements. This application may be programmed under existing real-time operating systems using parallel processing software libraries, resulting in highly portable code that can be rapidly migrated to new platforms as processor technology evolves. Standardized development tools and third-party software upgrades are enabled, as well as rapid upgrades of processing components as improved algorithms are developed. The resulting weapon system will have a superior processing capability over a custom approach at the time of deployment, as a result of shorter development cycles and the use of newer technology. The signal processing computer may be upgraded over the lifecycle of the weapon system and can migrate between weapon system variants thanks to the simplicity of modification. This paper presents a reference design using the new approach that utilizes an AltiVec PowerPC parallel COTS platform. It uses a VxWorks-based real-time operating system (RTOS), and application code developed using an efficient parallel vector library (PVL). A quantification of computing requirements and a demonstration of an interceptor algorithm operating on this real-time platform are provided.
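
    Of the front-end operations named above, non-uniformity correction (NUC) has a compact classical form: per-pixel gain and offset estimated from two flat-field frames. The sketch below assumes the standard two-point method with synthetic data; it is not the reference design's implementation.

```python
# Minimal sketch of two-point NUC: estimate per-pixel gain/offset from
# flat-field frames at two flux levels, then correct a scene frame.
import numpy as np

rng = np.random.default_rng(4)
g = 1.0 + 0.1 * rng.standard_normal((128, 128))    # fixed per-pixel gain
o = 5.0 * rng.standard_normal((128, 128))          # fixed per-pixel offset

cold = g * 100 + o                                  # flat field at low flux
hot = g * 200 + o                                   # flat field at high flux

gain = (hot.mean() - cold.mean()) / (hot - cold)    # correction coefficients
offset = cold.mean() - gain * cold

raw = g * 150 + o                                   # uncorrected scene frame
corrected = gain * raw + offset                     # should be ~uniform 150
print("raw std:", raw.std(), "-> corrected std:", corrected.std())
```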

  20. IMAGEP - A FORTRAN ALGORITHM FOR DIGITAL IMAGE PROCESSING

    NASA Technical Reports Server (NTRS)

    Roth, D. J.

    1994-01-01

    IMAGEP is a FORTRAN computer program containing various image processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines; within these are further routines, also selected via keyboard. Some of the functions performed by IMAGEP include digitization, storage and retrieval of images; image enhancement by contrast expansion, addition and subtraction, magnification, inversion, and bit shifting; display and movement of a cursor; display of the grey level histogram of an image; and display of the variation of grey level intensity as a function of image position. This program has possible scientific, industrial, and biomedical applications in material flaw studies, steel and ore analysis, and pathology, respectively. IMAGEP is written in VAX FORTRAN for DEC VAX series computers running VMS. The program requires the use of a Grinnell 274 image processor, which can be obtained from Mark McCloud Associates, Campbell, CA. An object library of the required GMR series software is included on the distribution media. IMAGEP requires 1Mb of RAM for execution. The standard distribution medium for this program is a 1600 BPI 9-track magnetic tape in VAX FILES-11 format. It is also available on a TK50 tape cartridge in VAX FILES-11 format. This program was developed in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation.
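
    The enhancement operations IMAGEP names are standard and easy to reproduce outside its VAX/Grinnell environment. As a hedged modern sketch (NumPy rather than IMAGEP's FORTRAN), contrast expansion is a linear stretch between two grey-level percentiles, and the histogram display reduces to a binned count:

    ```python
    import numpy as np

    def contrast_expand(img, lo_pct=1.0, hi_pct=99.0):
        """Linearly stretch grey levels between two percentiles onto [0, 1]."""
        lo, hi = np.percentile(img, [lo_pct, hi_pct])
        out = (img.astype(float) - lo) / (hi - lo)
        return np.clip(out, 0.0, 1.0)

    def grey_histogram(img, bins=256):
        """Grey-level histogram of the kind IMAGEP displays."""
        return np.histogram(img, bins=bins)
    ```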

  1. Potential medical applications of TAE

    NASA Technical Reports Server (NTRS)

    Fahy, J. Ben; Kaucic, Robert; Kim, Yongmin

    1986-01-01

    In cooperation with scientists in the University of Washington Medical School, a microcomputer-based image processing system for quantitative microscopy, called DMD1 (Digital Microdensitometer 1), was constructed. In order to make DMD1 transportable to different hosts and image processors, we have been investigating the possibility of rewriting the lower-level portions of the DMD1 software using Transportable Applications Executive (TAE) libraries and subsystems. If successful, we hope to produce a newer version of DMD1, called DMD2, running on an IBM PC/AT under the SCO XENIX System 5 operating system, using any of seven target image processors available in our laboratory. Following this implementation, copies of the system will be transferred to other laboratories with biomedical imaging applications. By integrating those applications into DMD2, we hope to eventually expand our system into a low-cost general-purpose biomedical imaging workstation. This workstation will be useful not only as a self-contained instrument for clinical or research applications, but also as part of a large-scale Digital Imaging Network and Picture Archiving and Communication System (DIN/PACS). Widespread application of these TAE-based image processing and analysis systems should facilitate software exchange and scientific cooperation not only within the medical community, but between the medical and remote sensing communities as well.

  2. Learning Space Attributes: Reflections on Academic Library Design and Its Use

    ERIC Educational Resources Information Center

    Cunningham, Heather V.; Tabur, Susanne

    2012-01-01

    Even though students are not using the print collection, they still choose to go to the library for academic pursuits. The continuing preferences of students for library space can be examined in the light of a hierarchy of needs made up of layers of access and linkages, of uses and activities, of sociability, and of comfort and image. A space…

  3. Mariner 9 television pictures: Microfiche library user's guide. MTC/MTVS real-time pictures

    NASA Technical Reports Server (NTRS)

    Becker, R. A.

    1973-01-01

    This document describes the content and organization of the Mariner 9 Mission Test Computer/Mission Test Video System microfiche library. This 775-card library is intended to supply the user with a complete record of the images received from Mars orbit during the Mariner 9 mission operations, from 15 Nov. 1971 to 1 Nov. 1972.

  4. Feed Me! Rethinking Traditional Modes of Library Access and Content Delivery

    ERIC Educational Resources Information Center

    Hutchens, Chad; Clark, Jason

    2008-01-01

    At their core, XML feeds are content-delivery vehicles. This fact has not always been highlighted in library conversations surrounding RSS and ATOM. The authors have looked to extend the conversation by offering a proof of concept application using RSS as a means to deliver all types of library data: PDFs, docs, images, video--to people where and…

  5. AccessAbility: Overcoming Information Barriers. Proceedings from the 1987 Spring Meeting of the Nebraska Library Association, College and University Section (Omaha, Nebraska, May 29, 1987).

    ERIC Educational Resources Information Center

    Kacena, Barbara J., Ed.

    Various aspects of the theme, "AccessAbility: Overcoming Information Barriers," are considered in the conference papers collected in this document. They include: (1) "The Library Image: A Barrier to Accessibility" (Janice S. Boyer); (2) "The Educationally Disadvantaged Student: How Can the Library Help?" (Michael Poma…

  6. Investigations of Antiangiogenic Mechanisms Using Novel Imaging Techniques

    DTIC Science & Technology

    2011-02-01

    ...cellular functions that exacerbate treatment resistance and tumor aggressiveness. Cycling... measurements, which further complicates data acquisition and interpretation. Blood flow on the microvessel level has traditionally been measured using laser... The goal of this study was to dynamically image changes in...

  7. IFLA General Conference, 1991. Division of Management and Technology: Section of Conservation; Section of Information Technology; Section of Library Buildings and Equipment; Section of Statistics; Management of Library Associations. Booklet 6.

    ERIC Educational Resources Information Center

    International Federation of Library Associations and Institutions, The Hague (Netherlands).

    The eight papers in this collection were presented at five sections of the Division of Management and Technology: (1) "The State Conservation Programme (Concept Approach)" (Tamara Burtseva and Zinaida Dvoriashina, USSR); (2) "La communication a distance de banques d'images pour le grand public (Public Access to Image Databases via…

  8. In-Process Items on LCS.

    ERIC Educational Resources Information Center

    Russell, Thyra K.

    Morris Library at Southern Illinois University computerized its technical processes using the Library Computer System (LCS), which was implemented in the library to streamline order processing by: (1) providing up-to-date online files to track in-process items; (2) encouraging quick, efficient accessing of information; (3) reducing manual files;…

  9. Optimization of RET flow using test layout

    NASA Astrophysics Data System (ADS)

    Zhang, Yunqiang; Sethi, Satyendra; Lucas, Kevin

    2008-11-01

    At advanced technology nodes with extremely low-k1 lithography, it is very hard to achieve image fidelity requirements and process window for some layout configurations. Quite often these layouts are within simple design rule constraints for a given technology node. It is important to have these layouts included during early RET flow development. Most RET development is based on layouts shrunk from the previous technology node, which is possibly not good enough. A better methodology for creating test layouts is required for optical proximity correction (OPC) recipe and assist feature development. In this paper we demonstrate the application of programmable test layouts in RET development. Layout pattern libraries are developed and embedded in a layout tool (ICWB). Assessment gauges are generated together with patterns for quick correction accuracy assessment. Several groups of test pattern libraries have been developed based on learning from product patterns and a layout DOE approach. The interaction between layout patterns and OPC recipes has been studied. Correction of a contact layer is quite challenging because of poor convergence and low process window. We developed a test pattern library with many different contact configurations. Different OPC schemes are studied on these test layouts. The worst process-window patterns are pinpointed for a given illumination condition. Assist features (AF) are frequently placed according to pre-determined rules to improve the lithography process window. These rules are usually derived from lithographic models and experiments. Direct validation of AF rules is required at the development phase. We use the test layout approach to determine rules in order to eliminate AF printability problems.

  10. Remote hardware-reconfigurable robotic camera

    NASA Astrophysics Data System (ADS)

    Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.

    2001-10-01

    In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA device (Field Programmable Gate Array) which contains an architecture for real-time, low-level computer vision processing. The architecture can be reprogrammed remotely for application-specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration allows the same ease of upgrade in hardware as a software upgrade process. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and the possibility to download a software/hardware object from the host computer into its internal context memory. System advantages are small size, low power consumption, and a library of hardware/software functionalities that can be exchanged during run time. The system has been validated with an edge detection and a motion processing architecture, which will be presented in the paper. Targeted applications are in robotics, mobile robotics, and vision-based quality control.

  11. Multiple Illuminant Colour Estimation via Statistical Inference on Factor Graphs.

    PubMed

    Mutimbu, Lawrence; Robles-Kelly, Antonio

    2016-08-31

    This paper presents a method to recover a spatially varying illuminant colour estimate from scenes lit by multiple light sources. Starting with the image formation process, we formulate the illuminant recovery problem in a statistically data-driven setting. To do this, we use a factor graph defined across the scale space of the input image. In the graph, we utilise a set of illuminant prototypes computed using a data-driven approach. As a result, our method delivers a pixelwise illuminant colour estimate without requiring libraries or user input. The use of a factor graph also allows the illuminant estimates to be recovered through a maximum a posteriori (MAP) inference process. Moreover, we compute the probability marginals by performing a Delaunay triangulation on our factor graph. We illustrate the utility of our method for pixelwise illuminant colour recovery on widely available datasets and compare against a number of alternatives. We also show sample colour correction results on real-world images.

  12. Quantitative digital image analysis of chromogenic assays for high throughput screening of alpha-amylase mutant libraries.

    PubMed

    Shankar, Manoharan; Priyadharshini, Ramachandran; Gunasekaran, Paramasamy

    2009-08-01

    An image analysis-based method for high-throughput screening of an alpha-amylase mutant library using chromogenic assays was developed. Assays were performed in microplates, and high-resolution images of the assay plates were read using the Virtual Microplate Reader (VMR) script to quantify the concentration of the chromogen. This method is fast and sensitive in quantifying 0.025-0.3 mg starch/ml as well as 0.05-0.75 mg glucose/ml. It was also an effective screening method for improved alpha-amylase activity, with a coefficient of variation of 18%.
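
    The core of such a method is mapping well intensities in the plate image to analyte concentrations through a standard curve. A minimal sketch of that idea (the VMR script itself is not reproduced here; the well geometry, file name, and calibration values below are all hypothetical):

    ```python
    import numpy as np

    def well_mean(plate, row, col, pitch=40, roi=10):
        """Mean grey level in a square ROI centred on well (row, col),
        assuming a regular well grid with the given pixel pitch."""
        y, x = row * pitch + pitch // 2, col * pitch + pitch // 2
        return plate[y - roi:y + roi, x - roi:x + roi].mean()

    # plate: 2-D grey-level array of the scanned microplate (hypothetical file)
    plate = np.load("assay_plate.npy")

    # Standard curve from wells with known glucose loadings (illustrative values)
    known_conc = np.array([0.05, 0.15, 0.30, 0.45, 0.60, 0.75])  # mg/ml
    known_int = np.array([well_mean(plate, 0, c) for c in range(6)])
    slope, intercept = np.polyfit(known_int, known_conc, 1)

    # Unknown well: measured intensity -> concentration via the fitted line
    conc = slope * well_mean(plate, 1, 0) + intercept
    ```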

  13. Short-term solar flare prediction using image-case-based reasoning

    NASA Astrophysics Data System (ADS)

    Liu, Jin-Fu; Li, Fei; Zhang, Huai-Peng; Yu, Da-Ren

    2017-10-01

    Solar flares strongly influence space weather and human activities, and their prediction is highly complex. Existing solutions, both data-based and model-based approaches, share a common shortcoming: the lack of human engagement in the forecasting process. An image-case-based reasoning method is introduced to bring human experience into that process. The image case library is composed of SOHO/MDI longitudinal magnetograms, from which the maximum horizontal gradient, the length of the neutral line and the number of singular points are extracted as features for retrieving similar image cases. Genetic optimization algorithms are employed to optimize the weights assigned to the image features and the number of similar image cases retrieved. Similar image cases, together with prediction results derived by majority voting over them, are output and shown to the forecaster so that his/her experience can be integrated into the final prediction. Experimental results demonstrate that the case-based reasoning approach performs slightly better than other methods, and is made more effective when forecasts are improved by humans.
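
    The retrieval step amounts to a weighted nearest-neighbour search over the three extracted features, followed by a vote. A minimal sketch under those assumptions (the genetic algorithm that tunes the weights and k is omitted; array names are hypothetical):

    ```python
    import numpy as np

    def retrieve(query, library, weights, k=10):
        """Indices of the k library cases closest to the query under a
        weighted Euclidean distance over the extracted features."""
        d2 = ((library - query) ** 2 * weights).sum(axis=1)
        return np.argsort(d2)[:k]

    def vote(labels):
        """Majority vote over the retrieved cases (flare = 1, no flare = 0)."""
        vals, counts = np.unique(labels, return_counts=True)
        return vals[np.argmax(counts)]

    # library: (n_cases, 3) feature matrix; labels: (n_cases,) flare outcomes;
    # weights: (3,) importances, e.g. as produced by the genetic optimizer.
    ```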

  14. New International School Library Guidelines

    ERIC Educational Resources Information Center

    Oberg, Dianne

    2018-01-01

    The publication in 2015 of new international school library guidelines was the culmination of a two-year process involving a wide network of contributors. The process was guided by the Joint Committee of the International Federation of Library Associations (IFLA) School Libraries Section and the International Association of School Librarianship…

  15. Satisfaction Formation Processes in Library Users: Understanding Multisource Effects

    ERIC Educational Resources Information Center

    Shi, Xi; Holahan, Patricia J.; Jurkat, M. Peter

    2004-01-01

    This study explores whether disconfirmation theory can explain satisfaction formation processes in library users. Both library users' needs and expectations are investigated as disconfirmation standards. Overall library user satisfaction is predicted to be a function of two independent sources--satisfaction with the information product received…

  16. Scaling up the 454 Titanium Library Construction and Pooling of Barcoded Libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phung, Wilson; Hack, Christopher; Shapiro, Harris

    2009-03-23

    We have been developing a high-throughput 454 library construction process at the Joint Genome Institute to meet the needs of de novo sequencing a large number of microbial and eukaryote genomes, EST, and metagenome projects. We have been focusing efforts in three areas: (1) modifying the current process to allow the construction of 454 standard libraries in a 96-well format; (2) developing a robotic platform to perform the 454 library construction; and (3) designing molecular barcodes to allow pooling and sorting of many different samples. In the development of a high-throughput process to scale up the number of libraries by adapting the process to a 96-well plate format, the key process change involves the replacement of gel electrophoresis for size selection with Solid Phase Reversible Immobilization (SPRI) beads. Although the standard deviation of the insert sizes increases, the overall sequence quality and distribution of the reads in the genome have not changed. The manual process of constructing 454 shotgun libraries on 96-well plates is time-consuming, labor-intensive, and ergonomically hazardous; we have been working to program a BioMek robot to perform the library construction. This will not only enable library construction to be completed in a single day, but will also minimize any ergonomic risk. In addition, we have implemented a set of molecular barcodes (also known as Multiplex Identifiers, or MIDs) and a pooling process that allows us to sequence many targets simultaneously. Here we present the testing of pooling a set of selected fosmids derived from the endomycorrhizal fungus Glomus intraradices. By combining the robotic library construction process and the use of molecular barcodes, it is now possible to sequence hundreds of fosmids that represent a minimal tiling path of this genome. We present the progress and the challenges of developing these scaled-up processes.

  17. Automated and Assistive Tools for Accelerated Code migration of Scientific Computing on to Heterogeneous MultiCore Systems

    DTIC Science & Technology

    2017-04-13

    ...OmpSs: a basic algorithm on image processing applications, a mini application representative of an ocean modelling code, a parallel benchmark, and a communication-avoiding version of the QR algorithm. Further, several improvements to the OmpSs model were... data movement; and a port of the dynamic load balancing library to OmpSs. Finally, several updates to the tools infrastructure were accomplished, including: an...

  18. Interactive Light Stimulus Generation with High Performance Real-Time Image Processing and Simple Scripting

    PubMed Central

    Szécsi, László; Kacsó, Ágota; Zeck, Günther; Hantz, Péter

    2017-01-01

    Light stimulation with precise and complex spatial and temporal modulation is demanded by a range of research fields, including visual neuroscience, optogenetics, ophthalmology, and visual psychophysics. We developed a user-friendly and flexible stimulus-generating framework (GEARS: GPU-based Eye And Retina Stimulation Software), which offers access to GPU computing power and allows interactive modification of stimulus parameters during experiments. Furthermore, it has built-in support for driving external equipment, as well as for synchronization tasks, via USB ports. The use of GEARS does not require elaborate programming skills. The necessary scripting is visually aided by an intuitive interface, while the details of the underlying software and hardware components remain hidden. Internally, the software is a C++/Python hybrid using OpenGL graphics. Computations are performed on the GPU and are defined in the GLSL shading language. However, all GPU settings, including the GPU shader programs, are automatically generated by GEARS. This is configured through a method encountered in game programming, which allows high flexibility: stimuli are straightforwardly composed using a broad library of basic components. Stimulus rendering is implemented solely in C++; therefore, intermediary libraries for interfacing could be omitted. This enables the program to perform computationally demanding tasks like en-masse random number generation or real-time image processing by local and global operations. PMID:29326579

  19. Recent health sciences library building projects.

    PubMed Central

    Ludwig, L

    1993-01-01

    The Medical Library Association's third annual survey of recent health sciences library building projects identified fourteen libraries planning, expanding, or constructing new library facilities. Three of five new library buildings are freestanding structures where the library occupies all or a major portion of the space. The two other new facilities are for separately administered units where the library is a major tenant. Nine projects involve additions to or renovations of existing space. Six projects are in projected, predesign, or design stages or are awaiting funding approval. This paper describes four projects that illustrate technology's growing effect on librarians and libraries. They are designed to accommodate change, a plethora of electronic gear, and easy use of technology. Outwardly, they do not look much different than many other modern buildings. But, inside, the changes have been dramatic although they have evolved slowly as the building structure has been adapted to new conditions. PMID:8251970

  20. The IRI/LDEO Climate Data Library: Helping People use Climate Data

    NASA Astrophysics Data System (ADS)

    Blumenthal, M. B.; Grover-Kopec, E.; Bell, M.; del Corral, J.

    2005-12-01

    The IRI Climate Data Library (http://iridl.ldeo.columbia.edu/) is a library of datasets. By library we mean a collection of things, collected from both near and far, designed to make them more accessible to the library's users. Our datasets come from many different sources, many different "data cultures", many different formats. By dataset we mean a collection of data organized as multidimensional dependent variables, independent variables, and sub-datasets, along with the metadata (particularly use-metadata) that makes it possible to interpret the data in a meaningful manner. Ingrid, which provides the infrastructure for the Data Library, is an environment that lets one work with datasets: read, write, request, serve, view, select, calculate, transform, and so on. It hides an extraordinary amount of technical detail from the user, letting the user think in terms of manipulations of datasets rather than manipulations of files of numbers. Among other things, this hidden technical detail could be accessing data on servers in other places, doing only the small needed portion of an enormous calculation, or translating to and from a variety of formats and between "data cultures". These operations are presented as a collection of virtual directories and documents on a web server, so that an ordinary web client can instantiate a calculation simply by requesting the resulting document or image. Building on this infrastructure, we (and others) have created collections of dynamically updated images to facilitate monitoring aspects of the climate system, as well as linking these images to the underlying data. We have also created specialized interfaces to address the particular needs of user groups that IRI needs to support.

  1. PANDA: a pipeline toolbox for analyzing brain diffusion images.

    PubMed

    Cui, Zaixu; Zhong, Suyu; Xu, Pengfei; He, Yong; Gong, Gaolang

    2013-01-01

    Diffusion magnetic resonance imaging (dMRI) is widely used in both scientific research and clinical practice in in-vivo studies of the human brain. While a number of post-processing packages have been developed, fully automated processing of dMRI datasets remains challenging. Here, we developed a MATLAB toolbox named "Pipeline for Analyzing braiN Diffusion imAges" (PANDA) for fully automated processing of brain diffusion images. The processing modules of a few established packages, including FMRIB Software Library (FSL), Pipeline System for Octave and Matlab (PSOM), Diffusion Toolkit and MRIcron, were employed in PANDA. Using any number of raw dMRI datasets from different subjects, in either DICOM or NIfTI format, PANDA can automatically perform a series of steps to process DICOM/NIfTI to diffusion metrics [e.g., fractional anisotropy (FA) and mean diffusivity (MD)] that are ready for statistical analysis at the voxel-level, the atlas-level and the Tract-Based Spatial Statistics (TBSS)-level and can finish the construction of anatomical brain networks for all subjects. In particular, PANDA can process different subjects in parallel, using multiple cores either in a single computer or in a distributed computing environment, thus greatly reducing the time cost when dealing with a large number of datasets. In addition, PANDA has a friendly graphical user interface (GUI), allowing the user to be interactive and to adjust the input/output settings, as well as the processing parameters. As an open-source package, PANDA is freely available at http://www.nitrc.org/projects/panda/. This novel toolbox is expected to substantially simplify the image processing of dMRI datasets and facilitate human structural connectome studies.
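
    PANDA's parallelism is at the subject level, which maps naturally onto a process pool. A minimal sketch of that pattern (process_subject is a hypothetical stand-in for PANDA's per-subject pipeline, which in practice calls out to FSL and the other packages named above):

    ```python
    from multiprocessing import Pool

    def process_subject(dmri_path):
        """Hypothetical per-subject pipeline: convert DICOM/NIfTI, correct
        for eddy currents, fit the tensor, write FA/MD maps."""
        ...

    if __name__ == "__main__":
        subjects = ["sub01.nii.gz", "sub02.nii.gz", "sub03.nii.gz"]
        with Pool(processes=4) as pool:   # one worker per available core
            pool.map(process_subject, subjects)
    ```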

  2. SPAM- SPECTRAL ANALYSIS MANAGER (DEC VAX/VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Solomon, J. E.

    1994-01-01

    The Spectral Analysis Manager (SPAM) was developed to allow easy qualitative analysis of multi-dimensional imaging spectrometer data. Imaging spectrometers provide sufficient spectral sampling to define unique spectral signatures on a per-pixel basis. Thus direct material identification becomes possible for geologic studies. SPAM provides a variety of capabilities for carrying out interactive analysis of the massive and complex datasets associated with multispectral remote sensing observations. In addition to normal image processing functions, SPAM provides multiple levels of on-line help, flexible command interpretation, graceful error recovery, and a program structure which can be implemented in a variety of environments. SPAM was designed to be visually oriented and user-friendly, with liberal employment of graphics for rapid and efficient exploratory analysis of imaging spectrometry data. SPAM provides functions to enable arithmetic manipulations of the data, such as normalization, linear mixing, band ratio discrimination, and low-pass filtering. SPAM can be used to examine the spectra of an individual pixel or the average spectra over a number of pixels. SPAM also supports image segmentation, fast spectral signature matching, spectral library usage, mixture analysis, and feature extraction. High-speed spectral signature matching is performed by using a binary spectral encoding algorithm to separate and identify mineral components present in the scene. The same binary encoding allows automatic spectral clustering. Spectral data may be entered from a digitizing tablet, stored in a user library, compared to the master library containing mineral standards, and then displayed as a time-sequence spectral movie. The output plots, histograms, and stretched histograms produced by SPAM can be sent to a line printer, stored as separate RGB disk files, or sent to a Quick Color Recorder. SPAM is written in C for interactive execution and is available for two different machine environments: a DEC VAX/VMS version with a central memory requirement of approximately 242K 8-bit bytes, and a machine-independent UNIX 4.2 version. The display device currently supported is the Raster Technologies display processor. Other 512 x 512 resolution color display devices, such as De Anza, may be added with minor code modifications. This program was developed in 1986.
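
    SPAM's fast matching rests on binary spectral encoding: each spectrum is reduced to one bit per band (above or below its own mean), so library matching collapses to a Hamming distance on bit vectors. A minimal sketch of the idea (not SPAM's actual C code):

    ```python
    import numpy as np

    def binary_encode(spectrum):
        """One bit per band: 1 where the band exceeds the spectrum's mean."""
        return (spectrum > spectrum.mean()).astype(np.uint8)

    def best_match(code, library_codes):
        """Index of the library spectrum with the smallest Hamming distance."""
        return int(np.argmin((library_codes != code).sum(axis=1)))

    # library_codes: (n_minerals, n_bands) array of pre-encoded standards
    ```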

  3. SPAM- SPECTRAL ANALYSIS MANAGER (UNIX VERSION)

    NASA Technical Reports Server (NTRS)

    Solomon, J. E.

    1994-01-01

    The Spectral Analysis Manager (SPAM) was developed to allow easy qualitative analysis of multi-dimensional imaging spectrometer data. Imaging spectrometers provide sufficient spectral sampling to define unique spectral signatures on a per-pixel basis. Thus direct material identification becomes possible for geologic studies. SPAM provides a variety of capabilities for carrying out interactive analysis of the massive and complex datasets associated with multispectral remote sensing observations. In addition to normal image processing functions, SPAM provides multiple levels of on-line help, flexible command interpretation, graceful error recovery, and a program structure which can be implemented in a variety of environments. SPAM was designed to be visually oriented and user-friendly, with liberal employment of graphics for rapid and efficient exploratory analysis of imaging spectrometry data. SPAM provides functions to enable arithmetic manipulations of the data, such as normalization, linear mixing, band ratio discrimination, and low-pass filtering. SPAM can be used to examine the spectra of an individual pixel or the average spectra over a number of pixels. SPAM also supports image segmentation, fast spectral signature matching, spectral library usage, mixture analysis, and feature extraction. High-speed spectral signature matching is performed by using a binary spectral encoding algorithm to separate and identify mineral components present in the scene. The same binary encoding allows automatic spectral clustering. Spectral data may be entered from a digitizing tablet, stored in a user library, compared to the master library containing mineral standards, and then displayed as a time-sequence spectral movie. The output plots, histograms, and stretched histograms produced by SPAM can be sent to a line printer, stored as separate RGB disk files, or sent to a Quick Color Recorder. SPAM is written in C for interactive execution and is available for two different machine environments: a DEC VAX/VMS version with a central memory requirement of approximately 242K 8-bit bytes, and a machine-independent UNIX 4.2 version. The display device currently supported is the Raster Technologies display processor. Other 512 x 512 resolution color display devices, such as De Anza, may be added with minor code modifications. This program was developed in 1986.

  4. Public Library Automation Report: 1984.

    ERIC Educational Resources Information Center

    Gotanda, Masae

    Data processing was introduced to public libraries in Hawaii in 1973 with a feasibility study which outlined the candidate areas for automation. Since then, the Office of Library Services has automated the order procedures for one of the largest book processing centers for public libraries in the country; created one of the first COM…

  5. Small but Pristine--Lessons for Small Library Automation.

    ERIC Educational Resources Information Center

    Clement, Russell; Robertson, Dane

    1990-01-01

    Compares the more positive library automation experiences of a small public library with those of a large research library. Topics addressed include collection size; computer size and the need for outside control of a data processing center; staff size; selection process for hardware and software; and accountability. (LRW)

  6. Semi-Supervised Data Summarization: Using Spectral Libraries to Improve Hyperspectral Clustering

    NASA Technical Reports Server (NTRS)

    Wagstaff, K. L.; Shu, H. P.; Mazzoni, D.; Castano, R.

    2005-01-01

    Hyperspectral imagers produce very large images, with each pixel recorded at hundreds or thousands of different wavelengths. The ability to automatically generate summaries of these datasets enables several important applications, such as quickly browsing through a large image repository or deciding how best to use a limited bandwidth link (e.g., determining which images are most critical for full transmission). Clustering algorithms can be used to generate these summaries, but traditional clustering methods make decisions based only on the information contained in the dataset. In contrast, we present a new method that additionally leverages existing spectral libraries to identify materials that are likely to be present in the image target area. We find that this approach simultaneously reduces runtime and produces summaries that are more relevant to science goals.
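
    One simple way to realize this kind of library guidance, offered here only as a hedged sketch (the paper's actual algorithm may differ), is to seed a clustering run with library spectra resampled to the sensor's bands:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical inputs: image pixels and library spectra on the same bands
    pixels = np.load("pixels.npy")            # (n_pixels, n_bands)
    library = np.load("library_spectra.npy")  # (n_materials, n_bands)

    # Library spectra as initial centroids inject domain knowledge into the
    # summary; n_init=1 keeps exactly these seeds for the single run.
    km = KMeans(n_clusters=library.shape[0], init=library, n_init=1).fit(pixels)
    summary = km.cluster_centers_             # one representative per material
    ```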

  7. On-board landmark navigation and attitude reference parallel processor system

    NASA Technical Reports Server (NTRS)

    Gilbert, L. E.; Mahajan, D. T.

    1978-01-01

    An approach to autonomous navigation and attitude reference for earth-observing spacecraft is described, along with a landmark identification technique based on a sequential similarity detection algorithm (SSDA). Laboratory experiments undertaken to determine whether better-than-one-pixel registration accuracy can be achieved, consistent with onboard processor timing and capacity constraints, are included. The SSDA is implemented using a multi-microprocessor system including synchronization logic and a chip library. The data is processed in parallel stages, effectively reducing the time to match the small known image within a larger image as seen by the onboard imaging system. Shared memory is incorporated in the system to help communicate intermediate results among microprocessors. The functions include finding mean values and summation of absolute differences over the image search area. The hardware is a low-power, compact unit suitable for onboard application, with the flexibility to provide for different parameters depending upon the environment.
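
    The SSDA is a sum-of-absolute-differences template match whose defining trick is early termination: accumulation at a candidate position is abandoned as soon as the running error exceeds a threshold, which is what makes the staged parallel hardware effective. A minimal serial sketch (illustrative only, not the flight implementation):

    ```python
    import numpy as np

    def ssda_match(image, template, threshold):
        """Best (row, col) alignment of template in image; per-candidate
        accumulation stops early once the SAD exceeds the threshold."""
        th, tw = template.shape
        best_err, best_pos = np.inf, None
        for r in range(image.shape[0] - th + 1):
            for c in range(image.shape[1] - tw + 1):
                err = 0.0
                for i in range(th):  # row-by-row accumulation
                    err += np.abs(image[r + i, c:c + tw] - template[i]).sum()
                    if err > threshold:  # SSDA early termination
                        break
                if err < best_err:
                    best_err, best_pos = err, (r, c)
        return best_pos
    ```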

  8. Low Cost Embedded Stereo System for Underwater Surveys

    NASA Astrophysics Data System (ADS)

    Nawaf, M. M.; Boï, J.-M.; Merad, D.; Royer, J.-P.; Drap, P.

    2017-11-01

    This paper details the hardware and software design and realization of a hand-held embedded stereo system for underwater imaging. The designed system can run most image processing techniques smoothly in real time. The developed functions provide direct visual feedback on the quality of the captured images, which helps the operator take appropriate actions in terms of movement speed and lighting conditions. The proposed functionalities can be easily customized or upgraded, and new functions can be easily added thanks to the supported libraries. Furthermore, by connecting the designed system to a more powerful computer, real-time visual odometry can run on the captured images to provide live navigation and a site coverage map. We use a visual odometry method adapted to systems with low computational resources and long autonomy. The system was tested in a real context and showed its robustness and promising perspectives for further development.

  9. The benefits of the Atlas of Human Cardiac Anatomy website for the design of cardiac devices.

    PubMed

    Spencer, Julianne H; Quill, Jason L; Bateman, Michael G; Eggen, Michael D; Howard, Stephen A; Goff, Ryan P; Howard, Brian T; Quallich, Stephen G; Iaizzo, Paul A

    2013-11-01

    This paper describes how the Atlas of Human Cardiac Anatomy website can be used to improve cardiac device design throughout the process of development. The Atlas is a free-access website featuring novel images of both functional and fixed human cardiac anatomy from over 250 human heart specimens. This website provides numerous educational tutorials on anatomy, physiology and various imaging modalities. For instance, the 'device tutorial' provides examples of devices that were either present at the time of in vitro reanimation or were subsequently delivered, including leads, catheters, valves, annuloplasty rings and stents. Another section of the website displays 3D models of the vasculature, blood volumes and/or tissue volumes reconstructed from computed tomography and magnetic resonance images of various heart specimens. The website shares library images, video clips and computed tomography and MRI DICOM files in honor of the generous gifts received from donors and their families.

  10. Orthogonal Luciferase-Luciferin Pairs for Bioluminescence Imaging.

    PubMed

    Jones, Krysten A; Porterfield, William B; Rathbun, Colin M; McCutcheon, David C; Paley, Miranda A; Prescher, Jennifer A

    2017-02-15

    Bioluminescence imaging with luciferase-luciferin pairs is widely used in biomedical research. Several luciferases have been identified in nature, and many have been adapted for tracking cells in whole animals. Unfortunately, the optimal luciferases for imaging in vivo utilize the same substrate and therefore cannot easily differentiate multiple cell types in a single subject. To develop a broader set of distinguishable probes, we crafted custom luciferins that can be selectively processed by engineered luciferases. Libraries of mutant enzymes were iteratively screened with sterically modified luciferins, and orthogonal enzyme-substrate "hits" were identified. These tools produced light when complementary enzyme-substrate partners interacted both in vitro and in cultured cell models. Based on their selectivity, these designer pairs will bolster multicomponent imaging and enable the direct interrogation of cell networks not currently possible with existing tools. Our screening platform is also general and will expedite the identification of more unique luciferases and luciferins, further expanding the bioluminescence toolkit.

  11. Towards better digital pathology workflows: programming libraries for high-speed sharpness assessment of Whole Slide Images.

    PubMed

    Ameisen, David; Deroulers, Christophe; Perrier, Valérie; Bouhidel, Fatiha; Battistella, Maxime; Legrès, Luc; Janin, Anne; Bertheau, Philippe; Yunès, Jean-Baptiste

    2014-01-01

    Since microscopic slides can now be automatically digitized and integrated in the clinical workflow, quality assessment of Whole Slide Images (WSI) has become a crucial issue. We present a no-reference quality assessment method that has been thoroughly tested since 2010 and is under implementation in multiple sites, both public university hospitals and private entities. It is part of the FlexMIm R&D project, which aims to improve the global workflow of digital pathology. For these uses, we have developed two programming libraries, in Java and Python, which can be integrated in various types of WSI acquisition systems, viewers and image analysis tools. Development and testing have been carried out on a MacBook Pro i7 and on a bi-Xeon 2.7GHz server. Libraries implementing the blur assessment method have been developed in Java, Python, PHP5 and MySQL5. For web applications, JavaScript, Ajax, JSON and Sockets were also used, as well as the Google Maps API. Aperio SVS files were converted into the Google Maps format using the VIPS and OpenSlide libraries. We designed the Java library as a Service Provider Interface (SPI), extendable by third parties. Analysis is computed in real time (3 billion pixels per minute). Tests were made on 5000 single images, 200 NDPI WSI, and 100 Aperio SVS WSI converted to the Google Maps format. Applications based on our method and libraries can be used upstream, as a calibration and quality-control tool for WSI acquisition systems, or as a tool to reacquire tiles while the WSI is being scanned. They can also be used downstream to reacquire complete slides that fall below the quality threshold for surgical pathology analysis. WSI may also be displayed in a smarter way by sending and displaying the regions of highest quality before other regions. Such quality assessment scores could be integrated as WSI metadata shared in clinical, research or teaching contexts, for a more efficient medical informatics workflow.
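
    The paper's own blur metric is not reproduced here, but a common no-reference focus measure, the variance of the Laplacian, shows the general shape of such tile-level scoring (the threshold and tile source below are hypothetical):

    ```python
    import numpy as np
    from scipy.ndimage import laplace

    def sharpness(tile):
        """Variance of the Laplacian: low values indicate a blurred tile."""
        return laplace(tile.astype(float)).var()

    def flag_blurred_tiles(tiles, threshold=50.0):
        """Indices of tiles to reacquire; the threshold is dataset-dependent."""
        return [i for i, t in enumerate(tiles) if sharpness(t) < threshold]
    ```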

  12. The Hippest History: The Detritus of Your Library's Past Can Help with Your Present-Day Marketing, Fundraising, and Professional Pride

    ERIC Educational Resources Information Center

    Lear, Bernadette A.

    2005-01-01

    The author of this article studies the history of libraries. Few libraries capitalize on their own organizational history, however, even though it can be, at minimum, a resource of images and factoids for everything from answering administrative questions to crafting fundraising and marketing pieces. It can also be a reservoir of professional…

  13. Lunar e-Library: A Research Tool Focused on the Lunar Environment

    NASA Technical Reports Server (NTRS)

    McMahan, Tracy A.; Shea, Charlotte A.; Finckenor, Miria; Ferguson, Dale

    2007-01-01

    As NASA plans and implements the Vision for Space Exploration, managers, engineers, and scientists need lunar environment information that is readily available and easily accessed. For this effort, lunar environment data was compiled from a variety of missions from Apollo to more recent remote sensing missions, such as Clementine. This valuable information comes not only in the form of measurements and images but also from the observations of astronauts who have visited the Moon and people who have designed spacecraft for lunar missions. To provide a research tool that makes the voluminous lunar data more accessible, the Space Environments and Effects (SEE) Program, managed at NASA's Marshall Space Flight Center (MSFC) in Huntsville, AL, organized the data into a DVD knowledgebase: the Lunar e-Library. This searchable collection of 1100 electronic (.PDF) documents and abstracts makes it easy to find critical technical data and lessons learned from past lunar missions and exploration studies. The SEE Program began distributing the Lunar e-Library DVD in 2006. This paper describes the Lunar e-Library development process (including a description of the databases and resources used to acquire the documents) and the contents of the DVD product, demonstrates its usefulness with focused searches, and provides information on how to obtain this free resource.

  14. Representation-based user interfaces for the audiovisual library of the year 2000

    NASA Astrophysics Data System (ADS)

    Aigrain, Philippe; Joly, Philippe; Lepain, Philippe; Longueville, Veronique

    1995-03-01

    The audiovisual library of the future will be based on computerized access to digitized documents. In this communication, we address the user interface issues which will arise from this new situation. One cannot simply transfer a user interface designed for the piece-by-piece production of an audiovisual presentation and make it a tool for accessing full-length movies in an electronic library. One cannot take a digital sound editing tool and propose it as a means to listen to a musical recording. In our opinion, when computers are used as mediations to existing contents, document representation-based user interfaces are needed. With such user interfaces, a structured visual representation of the document contents is presented to the user, who can then manipulate it to control perception and analysis of these contents. In order to build such manipulable visual representations of audiovisual documents, one needs to automatically extract structural information from the documents' contents. In this communication, we describe possible visual interfaces for various temporal media, and we propose methods for the economically feasible large-scale processing of documents. The work presented is sponsored by the Bibliothèque Nationale de France: it is part of the program aiming at developing, for image and sound documents, an experimental counterpart to the digitized text reading workstation of this library.

  15. SU-G-BRB-02: An Open-Source Software Analysis Library for Linear Accelerator Quality Assurance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerns, J; Yaldo, D

    Purpose: Routine linac quality assurance (QA) tests have become complex enough to require automation of most test analyses. A new data analysis software library was built that allows physicists to automate routine linear accelerator quality assurance tests. The package is open source, code tested, and benchmarked. Methods: Images and data were generated on a TrueBeam linac for the following routine QA tests: VMAT, starshot, CBCT, machine logs, Winston Lutz, and picket fence. The analysis library was built using the general programming language Python. Each test was analyzed with the library algorithms and compared to manual measurements taken at the time of acquisition. Results: VMAT QA results agreed within 0.1% between the library and manual measurements. Machine logs (dynalogs & trajectory logs) were successfully parsed; mechanical axis positions were verified for accuracy and MLC fluence agreed well with EPID measurements. CBCT QA measurements were within 10 HU and 0.2mm where applicable. Winston Lutz isocenter size measurements were within 0.2mm of TrueBeam’s Machine Performance Check. Starshot analysis was within 0.2mm of the Winston Lutz results for the same conditions. Picket fence images with and without a known error showed that the library was capable of detecting MLC offsets within 0.02mm. Conclusion: A new routine QA software library has been benchmarked and is available for use by the community. The library is open-source and extensible for use in larger systems.

  16. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
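
    The scalability analysis invoked here follows Amdahl's law, S(n) = 1 / ((1 - p) + p/n) for a parallel fraction p on n cores; a 12-fold gain on 12 cores implies a nearly fully parallel workload. A worked check:

    ```python
    def amdahl_speedup(p, n):
        """Ideal speedup for a workload with parallel fraction p on n cores."""
        return 1.0 / ((1.0 - p) + p / n)

    print(amdahl_speedup(0.95, 12))   # ~7.7: even 5% serial work caps the gain
    print(amdahl_speedup(0.99, 12))   # ~10.8
    print(amdahl_speedup(1.00, 12))   # 12.0, matching the reported 12-fold result
    ```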

  17. Incoherent digital holograms acquired by interferenceless coded aperture correlation holography system without refractive lenses.

    PubMed

    Kumar, Manoj; Vijayakumar, A; Rosen, Joseph

    2017-09-14

    We present a lensless, interferenceless incoherent digital holography technique based on the principle of coded aperture correlation holography. The digital hologram acquired by this technique contains a three-dimensional image of the observed scene. Light diffracted by a point object (pinhole) is modulated using a random-like coded phase mask (CPM), and the intensity pattern is recorded and composed as a point spread hologram (PSH). A library of PSHs is created using the same CPM by moving the pinhole to all possible axial locations. Intensity diffracted through the same CPM from an object placed within the axial limits of the PSH library is recorded by a digital camera; this recorded intensity is composed as the object hologram. The image of the object at any axial plane is reconstructed by cross-correlating the object hologram with the corresponding component of the PSH library. The reconstruction noise attached to the image is suppressed by various methods. The reconstruction results for multiplane and thick objects obtained by this technique are compared with regular lens-based imaging.
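
    The reconstruction step is an ordinary cross-correlation, computed most efficiently in the Fourier domain. A simplified sketch (the paper's noise-suppression refinements are omitted):

    ```python
    import numpy as np

    def reconstruct_plane(object_hologram, psh):
        """Image at one axial plane: cross-correlate the object hologram
        with the PSH library member recorded at that plane."""
        oh_f = np.fft.fft2(object_hologram)
        psh_f = np.fft.fft2(psh)
        corr = np.fft.ifft2(oh_f * np.conj(psh_f))  # correlation theorem
        return np.fft.fftshift(np.abs(corr))

    # Focusing through a volume means repeating this against each member of
    # the PSH library and stacking the resulting planes.
    ```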

  18. Real-time capture and reconstruction system with multiple GPUs for a 3D live scene by a generation from 4K IP images to 8K holograms.

    PubMed

    Ichihashi, Yasuyuki; Oi, Ryutaro; Senoh, Takanori; Yamamoto, Kenji; Kurita, Taiichiro

    2012-09-10

    We developed a real-time capture and reconstruction system for three-dimensional (3D) live scenes. In previous research, we used integral photography (IP) to capture 3D images and then generated holograms from the IP images to implement a real-time reconstruction system. In this paper, we use a 4K (3,840 × 2,160) camera to capture IP images and 8K (7,680 × 4,320) liquid crystal display (LCD) panels for the reconstruction of holograms. We investigate two methods for enlarging the 4K images that were captured by integral photography to 8K images. One of the methods increases the number of pixels of each elemental image. The other increases the number of elemental images. In addition, we developed a personal computer (PC) cluster system with graphics processing units (GPUs) for the enlargement of IP images and the generation of holograms from the IP images using fast Fourier transform (FFT). We used the Compute Unified Device Architecture (CUDA) as the development environment for the GPUs. The Fast Fourier transform is performed using the CUFFT (CUDA FFT) library. As a result, we developed an integrated system for performing all processing from the capture to the reconstruction of 3D images by using these components and successfully used this system to reconstruct a 3D live scene at 12 frames per second.

  19. California State Library: Processing Center Design and Specifications. Volume I, System Description and Input Processing.

    ERIC Educational Resources Information Center

    Sherman, Don; Shoffner, Ralph M.

    The scope of the California State Library-Processing Center (CSL-PC) project is to develop the design and specifications for a computerized technical processing center to provide services to a network of participating California libraries. Immediate objectives are: (1) retrospective conversion of card catalogs to a machine-form data base,…

  20. Panel Sessions.

    ERIC Educational Resources Information Center

    Proceedings of the ASIS Mid-Year Meeting, 1992

    1992-01-01

    Lists the speakers and summarizes the issues addressed for 12 panel sessions on topics related to networking, including libraries and national networks, federal national resources and energy programs, multimedia issues, telecommuting, remote image serving, accessing the Internet, library automation, scientific information, applications of Z39.50,…

  1. Designing Tracking Software for Image-Guided Surgery Applications: IGSTK Experience

    PubMed Central

    Enquobahrie, Andinet; Gobbi, David; Turek, Matt; Cheng, Patrick; Yaniv, Ziv; Lindseth, Frank; Cleary, Kevin

    2009-01-01

    Objective Many image-guided surgery applications require tracking devices as part of their core functionality. The Image-Guided Surgery Toolkit (IGSTK) was designed and developed to interface tracking devices with software applications incorporating medical images. Methods IGSTK was designed as an open source C++ library that provides the basic components needed for fast prototyping and development of image-guided surgery applications. This library follows a component-based architecture with several components designed for specific sets of image-guided surgery functions. At the core of the toolkit is the tracker component that handles communication between a control computer and navigation device to gather pose measurements of surgical instruments present in the surgical scene. The representations of the tracked instruments are superimposed on anatomical images to provide visual feedback to the clinician during surgical procedures. Results The initial version of the IGSTK toolkit has been released in the public domain and several trackers are supported. The toolkit and related information are available at www.igstk.org. Conclusion With the increased popularity of minimally invasive procedures in health care, several tracking devices have been developed for medical applications. Designing and implementing high-quality and safe software to handle these different types of trackers in a common framework is a challenging task. It requires establishing key software design principles that emphasize abstraction, extensibility, reusability, fault-tolerance, and portability. IGSTK is an open source library that satisfies these needs for the image-guided surgery community. PMID:20037671

  2. Library Subject Guides: A Case Study of Evidence-Informed Library Development

    ERIC Educational Resources Information Center

    Wakeham, Maurice; Roberts, Angharad; Shelley, Jane; Wells, Paul

    2012-01-01

    This paper describes the process whereby a university library investigated the value of its subject guides to its users. A literature review and surveys of library staff, library users and other libraries were carried out. Existing library subject guides and those of other higher education libraries were evaluated. The project team reported…

  3. libprofit: Image creation from luminosity profiles

    NASA Astrophysics Data System (ADS)

    Robotham, A. S. G.; Taranu, D.; Tobar, R.

    2016-12-01

    libprofit is a C++ library for image creation based on different luminosity profiles. It offers fast and accurate two-dimensional integration for a useful number of profiles, including Sersic, Core-Sersic, broken-exponential, Ferrer, Moffat, empirical King, point-source and sky, with a simple mechanism for adding new profiles. libprofit provides a utility to read the model and profile parameters from the command-line and generate the corresponding image. It can output the resulting image as text values, a binary stream, or as a simple FITS file. It also provides a shared library exposing an API that can be used by any third-party application. R and Python interfaces are available: ProFit (ascl:1612.004) and PyProfit (ascl:1612.005).
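
    As a worked example of the kind of profile libprofit integrates (illustrative NumPy only; libprofit's C++/R/Python APIs differ), the Sersic profile is I(r) = I_e exp(-b_n [(r/r_e)^(1/n) - 1]), with b_n fixed by requiring that r_e enclose half the total light:

    ```python
    import numpy as np
    from scipy.special import gammaincinv

    def sersic(r, i_e, r_e, n):
        """Sersic surface brightness; b_n solves P(2n, b_n) = 1/2 so that
        r_e is the half-light radius."""
        b_n = gammaincinv(2.0 * n, 0.5)
        return i_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

    # Render a de Vaucouleurs-like (n = 4) model on a 101 x 101 pixel grid
    y, x = np.mgrid[-50:51, -50:51]
    image = sersic(np.hypot(x, y), i_e=1.0, r_e=10.0, n=4.0)
    ```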

  4. Aerial Mapping of Forests Affected by Pathogens Using UAVs, Hyperspectral Sensors, and Artificial Intelligence.

    PubMed

    Sandino, Juan; Pegg, Geoff; Gonzalez, Felipe; Smith, Grant

    2018-03-22

    The environmental and economic impacts of exotic fungal species on natural and plantation forests have been historically catastrophic. Recorded surveillance and control actions are challenging because they are costly, time-consuming, and hazardous in remote areas. Prolonged periods of testing and observation of site-based tests have limitations in verifying the rapid proliferation of exotic pathogens and deterioration rates in hosts. Recent remote sensing approaches have offered fast, broad-scale, and affordable surveys as well as additional indicators that can complement on-ground tests. This paper proposes a framework that consolidates site-based insights and remote sensing capabilities to detect and segment deteriorations by fungal pathogens in natural and plantation forests. This approach is illustrated with an experimental case of myrtle rust (Austropuccinia psidii) on paperbark tea trees (Melaleuca quinquenervia) in New South Wales (NSW), Australia. The method integrates unmanned aerial vehicles (UAVs), hyperspectral image sensors, and data processing algorithms using machine learning. Imagery is acquired using a Headwall Nano-Hyperspec® camera, orthorectified in Headwall SpectralView®, and processed in the Python programming language using eXtreme Gradient Boosting (XGBoost), the Geospatial Data Abstraction Library (GDAL), and Scikit-learn third-party libraries. In total, 11,385 samples were extracted and labelled into five classes: two classes for deterioration status and three classes for background objects. Insights reveal individual detection rates of 95% for healthy trees, 97% for deteriorated trees, and a global multiclass detection rate of 97%. The methodology is versatile enough to be applied to additional datasets taken with different image sensors and to the processing of large datasets with freeware tools.
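
    The classification stage described here pairs XGBoost with Scikit-learn utilities; a hedged sketch of that pipeline (file names and hyperparameters are hypothetical, not the authors' settings):

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from xgboost import XGBClassifier

    # Hypothetical per-pixel hyperspectral samples and the five class labels
    X = np.load("hyperspectral_samples.npy")   # shape (11385, n_bands)
    y = np.load("labels.npy")                  # values 0..4

    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)
    clf = XGBClassifier(n_estimators=200, max_depth=6)
    clf.fit(X_tr, y_tr)
    print("multiclass accuracy:", clf.score(X_te, y_te))
    ```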

  5. Aerial Mapping of Forests Affected by Pathogens Using UAVs, Hyperspectral Sensors, and Artificial Intelligence

    PubMed Central

    2018-01-01

    The environmental and economic impacts of exotic fungal species on natural and plantation forests have been historically catastrophic. Recorded surveillance and control actions are challenging because they are costly, time-consuming, and hazardous in remote areas. Prolonged periods of testing and observation of site-based tests have limitations in verifying the rapid proliferation of exotic pathogens and deterioration rates in hosts. Recent remote sensing approaches have offered fast, broad-scale, and affordable surveys as well as additional indicators that can complement on-ground tests. This paper proposes a framework that consolidates site-based insights and remote sensing capabilities to detect and segment deteriorations by fungal pathogens in natural and plantation forests. This approach is illustrated with an experimental case of myrtle rust (Austropuccinia psidii) on paperbark tea trees (Melaleuca quinquenervia) in New South Wales (NSW), Australia. The method integrates unmanned aerial vehicles (UAVs), hyperspectral image sensors, and data processing algorithms using machine learning. Imagery is acquired using a Headwall Nano-Hyperspec® camera, orthorectified in Headwall SpectralView®, and processed in the Python programming language using eXtreme Gradient Boosting (XGBoost), the Geospatial Data Abstraction Library (GDAL), and Scikit-learn third-party libraries. In total, 11,385 samples were extracted and labelled into five classes: two classes for deterioration status and three classes for background objects. Insights reveal individual detection rates of 95% for healthy trees, 97% for deteriorated trees, and a global multiclass detection rate of 97%. The methodology is versatile enough to be applied to additional datasets taken with different image sensors and to the processing of large datasets with freeware tools. PMID:29565822

  6. Scalable, High-performance 3D Imaging Software Platform: System Architecture and Application to Virtual Colonoscopy

    PubMed Central

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2013-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is enabling the fast turn-around times often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that must be processed. In this work, we have developed a software platform designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, cluster, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10-fold performance improvement on an 8-core workstation over the original sequential implementation of the system. PMID:23366803
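
    The block-volume idea can be sketched in a few lines of Python; this is a toy stand-in under stated assumptions (slab-shaped blocks, a median filter as the per-block algorithm), not the platform's implementation. A production version would also exchange halo voxels at block boundaries so neighbourhood filters stay correct.

        # Split a 3D volume into blocks and filter them in parallel across cores.
        import numpy as np
        from multiprocessing import Pool
        from scipy.ndimage import median_filter

        def process_block(block):
            return median_filter(block, size=3)   # stand-in per-block 3D algorithm

        if __name__ == "__main__":
            volume = np.random.rand(256, 256, 256).astype(np.float32)  # synthetic CT
            blocks = [volume[z:z + 64] for z in range(0, 256, 64)]     # slab blocks
            with Pool() as pool:
                out = np.concatenate(pool.map(process_block, blocks), axis=0)
            print(out.shape)   # (256, 256, 256)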

  7. Intensity Changes in Typhoon Sinlaku and Typhoon Jangmi in Response to Varying Ocean and Atmospheric Conditions

    DTIC Science & Technology

    2011-03-01

    Excerpts: Figure 1. Radar image of the eye of Typhoon Cobra on 18 December 1944 from a ship located at the center of the area shown (from NOAA Library at...). Acronym list: ...System Research and Predictability Experiment; T-PARC: THORPEX-Pacific Asian Regional Campaign; TS: Tropical Storm; TUTT: Tropical Upper...

  8. The CUBLAS and CULA based GPU acceleration of adaptive finite element framework for bioluminescence tomography.

    PubMed

    Zhang, Bo; Yang, Xiang; Yang, Fei; Yang, Xin; Qin, Chenghu; Han, Dong; Ma, Xibo; Liu, Kai; Tian, Jie

    2010-09-13

    In molecular imaging (MI), and especially optical molecular imaging, bioluminescence tomography (BLT) has emerged as an effective imaging modality for small animal imaging. Finite element methods (FEMs), and especially the adaptive finite element (AFE) framework, play an important role in BLT. The processing speed of FEMs and the AFE framework still needs to be improved, even though multi-thread CPU and multi-CPU technologies have already been applied. In this paper, we introduce, for the first time, a new kind of acceleration technology for the AFE framework in BLT: the graphics processing unit (GPU). Beyond raw processing speed, GPU technology offers a good balance between cost and performance. CUBLAS and CULA are two important and powerful libraries for programming NVIDIA GPUs. With their help, it is easy to write code for an NVIDIA GPU without worrying about the hardware details of a specific device. Numerical experiments were designed to show the necessity, effect, and application of the proposed CUBLAS- and CULA-based GPU acceleration. The results show that the proposed method greatly improves the processing speed of the AFE framework while maintaining a balance between cost and performance.
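
    CUBLAS and CULA are C/C++ libraries; as a hedged Python-side analogue rather than the paper's code, CuPy dispatches the same dense linear algebra to cuBLAS/cuSOLVER on an NVIDIA GPU. The matrix below stands in for one AFE system solve, and its size is arbitrary.

        # Analogous GPU dense solve via CuPy (backed by cuBLAS/cuSOLVER).
        import numpy as np
        import cupy as cp

        n = 4096
        A = cp.asarray(np.random.rand(n, n).astype(np.float32))  # system matrix on GPU
        b = cp.asarray(np.random.rand(n).astype(np.float32))
        x = cp.linalg.solve(A, b)                 # dense solve executed on the GPU
        residual = cp.linalg.norm(A @ x - b)      # sanity-check the solution
        print(float(residual))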

  9. Low-frequency chimeric yeast artificial chromosome libraries from flow-sorted human chromosomes 16 and 21.

    PubMed Central

    McCormick, M K; Campbell, E; Deaven, L; Moyzis, R

    1993-01-01

    Construction of chromosome-specific yeast artificial chromosome (YAC) libraries from sorted chromosomes was undertaken (i) to eliminate drawbacks associated with first-generation total genomic YAC libraries, such as the high frequency of chimeric YACs, and (ii) to provide an alternative method for generating chromosome-specific YAC libraries in addition to isolating such collections from a total genomic library. Chromosome-specific YAC libraries highly enriched for human chromosomes 16 and 21 were constructed. By maximizing the percentage of fragments with two ligatable ends and performing yeast transformations with less than saturating amounts of DNA in the presence of carrier DNA, YAC libraries with a low percentage of chimeric clones were obtained. The smaller number of YAC clones in these chromosome-specific libraries reduces the effort involved in PCR-based screening and allows hybridization methods to be a manageable screening approach. PMID:8430075

  10. NOAA Photo Library

    Science.gov Websites

    Portal page for the NOAA Photo Library, with links to the About this Site, Contacts, Help, Credits, Collections, Search, and Links pages.

  11. Recombinant Peptides as Biomarkers for Metastatic Breast Cancer Response

    DTIC Science & Technology

    2007-10-01

    ...could be specific to breast cancer tumor models has just been concluded. In vivo biopanning was conducted with a T7 phage-based random peptide library... peptides selected from phage-displayed libraries. SUBJECT TERMS: breast cancer, phage display, molecular imaging, personalized medicine... recombinant peptides from phage-displayed peptide libraries can be selected that bind to receptors activated in response to therapy. These peptides in turn...

  12. Spectral unmixing of urban land cover using a generic library approach

    NASA Astrophysics Data System (ADS)

    Degerickx, Jeroen; Iordache, Marian-Daniel; Okujeni, Akpona; Hermy, Martin; van der Linden, Sebastian; Somers, Ben

    2016-10-01

    Remote sensing based land cover classification in urban areas generally requires the use of subpixel classification algorithms to take into account the high spatial heterogeneity. These spectral unmixing techniques often rely on spectral libraries, i.e. collections of pure material spectra (endmembers, EM), which ideally cover the large EM variability typically present in urban scenes. Despite the advent of several (semi-)automated EM detection algorithms, the collection of such image-specific libraries remains a tedious and time-consuming task. As an alternative, we suggest the use of a generic urban EM library, containing material spectra under varying conditions, acquired from different locations and sensors. This approach requires an efficient EM selection technique, capable of selecting only those spectra relevant for a specific image. In this paper, we evaluate and compare the potential of different existing library pruning algorithms (Iterative Endmember Selection and MUSIC) using simulated hyperspectral (APEX) data of the Brussels metropolitan area. In addition, we develop a new hybrid EM selection method which is shown to be highly efficient in dealing with both image-specific and generic libraries, subsequently yielding more robust land cover classification results compared to existing methods. Future research will include further optimization of the proposed algorithm and additional tests on both simulated and real hyperspectral data.
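
    The unmixing step that such libraries feed can be sketched as non-negative least squares over the (pruned) library; the sketch below is a synthetic illustration, not the paper's IES/MUSIC pruning or its exact solver.

        # Per-pixel library unmixing via non-negative least squares (synthetic data).
        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(0)
        library = rng.random((100, 12))            # 12 candidate endmembers, 100 bands
        abund_true = np.array([0.6, 0.4] + [0.0] * 10)
        pixel = library @ abund_true + 0.01 * rng.random(100)

        abund, resid = nnls(library, pixel)        # non-negativity enforced
        abund /= abund.sum()                       # post-hoc sum-to-one normalisation
        print(np.round(abund, 2))                  # recovers ~[0.6, 0.4, 0, ...]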

  13. Review of Fusion Systems and Contributing Technologies for SIHS-TD (Examen des Systemes de Fusion et des Technologies d’Appui pour la DT SIHS)

    DTIC Science & Technology

    2007-03-31

    Excerpts: ...Unlimited, Nivisys, Insight Technology, Elcan, FLIR Systems, Stanford Photonics. Hardware: sensor fusion processors, video processing boards, image and video... The SPIE Digital Library is a resource for optics and photonics information. It contains more than 70,000 full-text papers from SPIE... conditions. Top row: Stanford Photonics XR-Mega-10 Extreme, 1400 x 1024 pixel ICCD detector, 33 msec exposure, no binning. Middle row: Andor EEV iXon...

  14. Video Preservation and Digital Reformatting: Pain and Possibility

    ERIC Educational Resources Information Center

    McDonough, Jerome; Jimenez, Mona

    2006-01-01

    The digital library community is increasingly concerned with long-term preservation of digital materials. This concern presents an opportunity for strategic alliances between digital library units and preservation departments confronting the difficulties inherent in preservation reformatting of moving image materials. However, successful…

  15. The Roles of the Future Library.

    ERIC Educational Resources Information Center

    Murr, Lawrence E.; Williams, James B.

    1987-01-01

    Discusses emerging roles for the library and librarian, including services in the following areas: (1) special collection management and reference; (2) information systems; (3) expert systems; (4) electronic publishing; (5) telecommunications networking; and (6) computer support. The technologies of artificial intelligence, graphic imaging,…

  16. Excellence through Change: SLA in Boston.

    ERIC Educational Resources Information Center

    Mark, Linda

    1986-01-01

    Summary of the 1986 Special Libraries Association Conference covers a general session on managing organizational change and programs on entrepreneurship in corporate libraries, staff training, access to government information, ethics and new technology, networking inside corporations, and creating a positive image through marketing. (EM)

  17. Shifting Priorities: Print and Electronic Serials at the University of Montana

    ERIC Educational Resources Information Center

    Millet, Michelle S.; Mueller, Susan

    2005-01-01

    Following a library-wide brainstorming session and retreat, the Dean of the Maureen and Mike Mansfield Library tasked an ad-hoc committee to discuss implications for the library and its users if certain processes were implemented or eliminated in order to streamline the processing of serials. As the library's collection continues to shift from…

  18. Taking It to the Stacks: An Inventory Project at the University of Mississippi Libraries

    ERIC Educational Resources Information Center

    Greenwood, Judy T.

    2013-01-01

    This article examines multiple inventory methods and findings from the inventory processes at the University of Mississippi Libraries. In an attempt to reduce user frustration from not being able to locate materials, the University of Mississippi Libraries conducted an inventory process beginning with a pilot inventory of a branch library and a…

  19. Document image archive transfer from DOS to UNIX

    NASA Technical Reports Server (NTRS)

    Hauser, Susan E.; Gill, Michael J.; Thoma, George R.

    1994-01-01

    An R&D division of the National Library of Medicine has developed a prototype system for automated document image delivery as an adjunct to the labor-intensive manual interlibrary loan service of the library. The document image archive is implemented by a PC controlled bank of optical disk drives which use 12 inch WORM platters containing bitmapped images of over 200,000 pages of medical journals. Following three years of routine operation which resulted in serving patrons with articles both by mail and fax, an effort is underway to relocate the storage environment from the DOS-based system to a UNIX-based jukebox whose magneto-optical erasable 5 1/4 inch platters hold the images. This paper describes the deficiencies of the current storage system, the design issues of modifying several modules in the system, the alternatives proposed and the tradeoffs involved.

  20. Imaging in Chronic Traumatic Encephalopathy and Traumatic Brain Injury

    PubMed Central

    Shetty, Teena; Raince, Avtar; Manning, Erin; Tsiouris, Apostolos John

    2016-01-01

    Context: The diagnosis of chronic traumatic encephalopathy (CTE) can only be made pathologically, and there are no agreed clinical criteria for premorbid diagnosis. The absence of established criteria and the insufficiency of imaging findings for detecting this disease in a living athlete are of growing concern. Evidence Acquisition: This article is a review of the current literature on CTE. Databases searched include Medline, PubMed, JAMAevidence, Evidence-Based Medicine Guidelines, the Cochrane Library, and the Hospital for Special Surgery and Cornell Library databases. Study Design: Clinical review. Level of Evidence: Level 4. Results: Chronic traumatic encephalopathy cannot be diagnosed on imaging. Examples of imaging findings in common types of head trauma are discussed. Conclusion: Further study is necessary to correlate the clinical and imaging findings of repetitive head injuries with the pathologic diagnosis of CTE. PMID:26733590

  1. A mask quality control tool for the OSIRIS multi-object spectrograph

    NASA Astrophysics Data System (ADS)

    López-Ruiz, J. C.; Vaz Cedillo, Jacinto Javier; Ederoclite, Alessandro; Bongiovanni, Ángel; González Escalera, Víctor

    2012-09-01

    The OSIRIS multi-object spectrograph uses a set of user-customised masks, which are manufactured on demand. The manufacturing process consists of drilling the specified slits into the mask with the required accuracy. Ensuring that the slits are in the right place when observing is of vital importance. We present a tool for checking the quality of the mask manufacturing process, based on analyzing instrument images obtained with the manufactured masks in place. The tool extracts slit information from these images, relates the specifications to the extracted slit information, and finally tells the operator whether the manufactured mask fulfills the expectations of the mask designer. The proposed tool has been built using scripting languages and standard libraries such as OpenCV, PyRAF, and SciPy. The software architecture, advantages, and limits of this tool in the lifecycle of a multi-object acquisition are presented.
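
    A minimal sketch of the slit-extraction step with OpenCV, under the assumption that slits appear as bright regions in a through-mask exposure; the file name, thresholding choice, and pixel units are illustrative, not the production tool.

        # Threshold a through-mask exposure and measure each slit's bounding box.
        import cv2

        img = cv2.imread("mask_exposure.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
        _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

        for c in contours:
            x, y, w, h = cv2.boundingRect(c)   # measured slit, to compare with specs
            print(f"slit at ({x}, {y}), {w}x{h} px")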

  2. Hyperspectral Imaging of fecal contamination on chickens

    NASA Technical Reports Server (NTRS)

    2003-01-01

    ProVision Technologies, a NASA research partnership center at Stennis Space Center in Mississippi, has developed a new hyperspectral imaging (HSI) system that is much smaller than the original large units used aboard remote sensing aircraft and satellites. The new apparatus is about the size of a breadbox. Health-related applications of HSI include scanning chickens during processing to help prevent contaminated food from getting to the table. ProVision is working with Sanderson Farms of Mississippi and the U.S. Department of Agriculture. ProVision has a record in its spectral library of the unique spectral signature of fecal contamination, so chickens can be scanned and those with a positive reading can be separated. HSI sensors can also determine the quantity of surface contamination. Research in this application is quite advanced, and ProVision is working on a licensing agreement for the technology. The potential for future use of this equipment in food processing and food safety is enormous.
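
    Spectral-library matching of this kind is often implemented with the spectral angle between each pixel and the library signature; the sketch below is a generic illustration on synthetic data, not ProVision's proprietary pipeline, and the angle threshold is an assumption.

        # Flag pixels whose spectra lie within a small angle of a reference signature.
        import numpy as np

        def spectral_angle(s, ref):
            cosang = np.dot(s, ref) / (np.linalg.norm(s) * np.linalg.norm(ref))
            return np.arccos(np.clip(cosang, -1.0, 1.0))

        rng = np.random.default_rng(1)
        reference = rng.random(120)              # library signature (e.g., contaminant)
        cube = rng.random((64, 64, 120))         # synthetic HSI scene
        angles = np.apply_along_axis(spectral_angle, 2, cube, reference)
        flagged = angles < 0.1                   # threshold is an assumption
        print(flagged.sum(), "pixels flagged")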

  3. VSO For Dummies

    NASA Astrophysics Data System (ADS)

    Schwartz, Richard A.; Zarro, D.; Csillaghy, A.; Dennis, B.; Tolbert, A. K.; Etesi, L.

    2009-05-01

    We report on our activities to integrate VSO search and retrieval capabilities into standard data access, display, and analysis tools. In addition to its standard Web-based search form, the VSO provides an Interactive Data Language (IDL) client (vso_search) that is available through the Solar Software (SSW) package. We have incorporated this client into an IDL-widget interface program (show_synop) that allows for more simplified searching and downloading of VSO datasets directly into a user's IDL data analysis environment. In particular, we have provided the capability to read VSO datasets into a general purpose IDL package (plotman) that can display different datatypes (lightcurves, images, and spectra) and perform basic data operations such as zooming, image overlays, solar rotation, etc. Currently, the show_synop tool supports access to ground-based and space-based (SOHO, STEREO, and Hinode) observations, and has the capability to include new datasets as they become available. A user encounters two major hurdles when using the VSO: (1) Instrument-specific software (such as level-0 file readers and data-prepping procedures) may not be available in the user's local SSW distribution. (2) Recent calibration files (such as flat-fields) are not automatically distributed with the analysis software. To address these issues, we have developed a dedicated server (prepserver) that incorporates all the latest instrument-specific software libraries and calibration files. The prepserver uses an IDL-Java bridge to read and implement data processing requests from a client and return a processed data file that can be readily displayed with the show_synop/plotman package. The advantage of the prepserver is that the user is only required to install the general branch (gen) of the SSW tree, and is freed from the more onerous task of installing instrument-specific libraries and calibration files. We will demonstrate how the prepserver can be used to read, process, and overlay SOHO/EIT, TRACE, SECCHI/EUVI, and RHESSI images.
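
    The tools described are IDL/SSW programs; purely for orientation, the closest present-day Python analogue (not the software discussed above) is sunpy's Fido interface, which performs the same VSO search and retrieval. The time range and instrument below are arbitrary examples.

        # VSO search and download via sunpy's Fido client.
        from sunpy.net import Fido, attrs as a

        result = Fido.search(a.Time("2008-01-01", "2008-01-02"), a.Instrument("eit"))
        print(result)                  # summary table of matching VSO records
        files = Fido.fetch(result)     # downloads the matching data files locally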

  4. Suggestions for Library Network Design.

    ERIC Educational Resources Information Center

    Salton, Gerald

    1979-01-01

    Various approaches to the design of automatic library systems are described, suggestions for the design of rational and effective automated library processes are posed, and an attempt is made to assess the importance and effect of library network systems on library operations and library effectiveness. (Author/CWM)

  5. 'SON-GO-KU' : a dream of automated library

    NASA Astrophysics Data System (ADS)

    Sato, Mamoru; Kishimoto, Juji

    In the process of automating libraries, the retrieval of books through the browsing of shelves is being overlooked. The telematic library is a document-based DBMS that can deliver the content of books by simulating the browsing process. The retrieval simulates the process a person would use when selecting a book in a real library, with a visual presentation on a graphic display substituted for the physical shelves. The characteristics of "Son-Go-Ku," a prototype system for such retrieval implemented in 1988, are described.

  6. A Macintosh-Based Scientific Images Video Analysis System

    NASA Technical Reports Server (NTRS)

    Groleau, Nicolas; Friedland, Peter (Technical Monitor)

    1994-01-01

    A set of experiments was designed at MIT's Man-Vehicle Laboratory in order to evaluate the effects of zero gravity on the human orientation system. During many of these experiments, the movements of the eyes are recorded on high quality video cassettes. The images must be analyzed off-line to calculate the position of the eyes at every moment in time. To this aim, I have implemented a simple inexpensive computerized system which measures the angle of rotation of the eye from digitized video images. The system is implemented on a desktop Macintosh computer, processes one play-back frame per second and exhibits adequate levels of accuracy and precision. The system uses LabVIEW, a digital output board, and a video input board to control a VCR, digitize video images, analyze them, and provide a user friendly interface for the various phases of the process. The system uses the Concept Vi LabVIEW library (Graftek's Image, Meudon la Foret, France) for image grabbing and displaying as well as translation to and from LabVIEW arrays. Graftek's software layer drives an Image Grabber board from Neotech (Eastleigh, United Kingdom). A Colour Adapter box from Neotech provides adequate video signal synchronization. The system also requires a LabVIEW driven digital output board (MacADIOS II from GW Instruments, Cambridge, MA) controlling a slightly modified VCR remote control used mainly to advance the video tape frame by frame.

  7. Digital Collections, Digital Libraries and the Digitization of Cultural Heritage Information.

    ERIC Educational Resources Information Center

    Lynch, Clifford

    2002-01-01

    Discusses the development of digital collections and digital libraries. Topics include digitization of cultural heritage information; broadband issues; lack of compelling content; training issues; types of materials being digitized; sustainability; digital preservation; infrastructure; digital images; data mining; and future possibilities for…

  8. A Review of High-Performance Computational Strategies for Modeling and Imaging of Electromagnetic Induction Data

    NASA Astrophysics Data System (ADS)

    Newman, Gregory A.

    2014-01-01

    Many geoscientific applications exploit electrostatic and electromagnetic fields to interrogate and map subsurface electrical resistivity—an important geophysical attribute for characterizing mineral, energy, and water resources. In complex three-dimensional geologies, where many of these resources remain to be found, resistivity mapping requires large-scale modeling and imaging capabilities, as well as the ability to treat significant data volumes, which can easily overwhelm single-core and modest multicore computing hardware. Treating such problems requires large-scale parallel computational resources, necessary for reducing the time to solution to a time frame acceptable to the exploration process. The recognition that significant parallel computing processes must be brought to bear on these problems gives rise to choices that must be made in parallel computing hardware and software. In this review, some of these choices are presented, along with the resulting trade-offs. We also discuss future trends in high-performance computing and the anticipated impact on electromagnetic (EM) geophysics. Topics discussed in this review article include a survey of parallel computing platforms, from graphics processing units to multicore CPUs with a fast interconnect, along with effective parallel solvers and associated solver libraries for inductive EM modeling and imaging.

  9. Improving Usage Statistics Processing for a Library Consortium: The Virtual Library of Virginia's Experience

    ERIC Educational Resources Information Center

    Matthews, Tansy E.

    2009-01-01

    This article describes the development of the Virtual Library of Virginia (VIVA). The VIVA statistics-processing system remains a work in progress. Member libraries will benefit from the ability to obtain the actual data from the VIVA site, rather than just the summaries, so a project to make these data available is currently being planned. The…

  10. Commercial imagery archive product development

    NASA Astrophysics Data System (ADS)

    Sakkas, Alysa

    1999-12-01

    The Lockheed Martin (LM) team had garnered over a decade of operational experience in digital imagery management and analysis for the US Government at numerous worldwide sites. Recently, it set out to create a new commercial product to serve the needs of large-scale imagery archiving and analysis markets worldwide. LM decided to provide a turnkey commercial solution to receive, store, retrieve, process, analyze, and disseminate imagery in 'push' or 'pull' modes, combining existing components with its own adapted and newly developed algorithms to provide added functionality not commercially available elsewhere. The resultant product, the Intelligent Library System, satisfies requirements for (a) a potentially unbounded data archive; (b) automated workflow management for increased user productivity; (c) automatic tracking and management of files stored on shelves; (d) the ability to ingest, process, and disseminate data at bandwidths ranging up to multiple gigabits per second; (e) access through a thin client-to-server network environment; (f) multiple interactive users needing retrieval of files in seconds, whether from archived images or in real time; and (g) scalability that maintains information throughput performance as the size of the digital library grows.

  11. WHOI and SIO (I): Next Steps toward Multi-Institution Archiving of Shipboard and Deep Submergence Vehicle Data

    NASA Astrophysics Data System (ADS)

    Detrick, R. S.; Clark, D.; Gaylord, A.; Goldsmith, R.; Helly, J.; Lemmond, P.; Lerner, S.; Maffei, A.; Miller, S. P.; Norton, C.; Walden, B.

    2005-12-01

    The Scripps Institution of Oceanography (SIO) and the Woods Hole Oceanographic Institution (WHOI) have joined forces with the San Diego Supercomputer Center to build a testbed for multi-institutional archiving of shipboard and deep submergence vehicle data. Support has been provided by the Digital Archiving and Preservation program funded by NSF/CISE and the Library of Congress. In addition to the more than 92,000 objects stored in the SIOExplorer Digital Library, the testbed will provide access to data, photographs, video images and documents from WHOI ships, Alvin submersible and Jason ROV dives, and deep-towed vehicle surveys. An interactive digital library interface will allow combinations of distributed collections to be browsed, metadata inspected, and objects displayed or selected for download. The digital library architecture, and the search and display tools of the SIOExplorer project, are being combined with WHOI tools, such as the Alvin Framegrabber and the Jason Virtual Control Van, that have been designed using WHOI's GeoBrowser to handle the vast volumes of digital video and camera data generated by Alvin, Jason and other deep submergence vehicles. Notions of scalability will be tested, as data volumes range from 3 CDs per cruise to 200 DVDs per cruise. Much of the scalability of this proposal comes from an ability to attach digital library data and metadata acquisition processes to diverse sensor systems. We are able to run an entire digital library from a laptop computer as well as from supercomputer-center-size resources. It can be used, in the field, laboratory or classroom, covering data from acquisition-to-archive using a single coherent methodology. The design is an open architecture, supporting applications through well-defined external interfaces maintained as an open-source effort for community inclusion and enhancement.

  12. A Web-Based Library and Algorithm System for Satellite and Airborne Image Products

    DTIC Science & Technology

    2011-06-28

    Excerpts: ...Sequoia Scientific, Inc., and Dr. Paul Bissett at FERI, under other 6.1/6.2 program funding... of the spectrum matching approach to inverting hyperspectral imagery created by Drs. C. Mobley (Sequoia Scientific) and P. Bissett (FERI)... algorithms developed by Sequoia Scientific and FERI. Testing and Implementation of Library: This project will result in the delivery of a WeoGeo...

  13. The Story of a Volunteer Librarian in South Africa.

    ERIC Educational Resources Information Center

    Fryling, Margo J.

    2003-01-01

    Describes experiences of a librarian who participated in the World Library Partnership to volunteer in a South African elementary school library during a summer. Discusses classroom teaching techniques; library technical processes; training library staff and teachers; student workers; story times; library instruction; and library policy. (LRW)

  14. 1991 survey of recent health sciences library building projects.

    PubMed Central

    Ludwig, L T

    1992-01-01

    Twenty health sciences libraries reported building planning, expansion, or construction of new facilities in the association's second annual survey of recent building projects. Six projects are new, freestanding structures in which the library occupies all or a major portion of the space. Six other projects are part of new construction for separately administered units in which the library is a major tenant. The final eight projects involve additions to or renovations of existing space. Seven of these twenty libraries were still in projected, predesign, or design stages, or awaiting funding approval; of those seven, five were not prepared to release the requested information. Six projects are reported here as illustrative of current building projects. PMID:1600420

  15. Algorithms and programming tools for image processing on the MPP:3

    NASA Technical Reports Server (NTRS)

    Reeves, Anthony P.

    1987-01-01

    This is the third and final report on the work done for NASA Grant 5-403 on Algorithms and Programming Tools for Image Processing on the MPP:3. All the work done for this grant is summarized in the introduction. Work done since August 1986 is reported in detail. Research for this grant falls under the following headings: (1) fundamental algorithms for the MPP; (2) programming utilities for the MPP; (3) the Parallel Pascal Development System; and (4) performance analysis. In this report, the results of two efforts are reported: region growing, and performance analysis of important characteristic algorithms. In each case, timing results from MPP implementations are included. A paper is included in which parallel algorithms for region growing on the MPP are discussed. These algorithms permit different sized regions to be merged in parallel. Details on the implementation and performance of several important MPP algorithms are given. These include a number of standard permutations, the FFT, convolution, arbitrary data mappings, image warping, and pyramid operations, all of which have been implemented on the MPP. The permutation and image warping functions have been included in the standard development system library.

  16. Sub-pixel mineral mapping using EO-1 Hyperion hyperspectral data

    NASA Astrophysics Data System (ADS)

    Kumar, C.; Shetty, A.; Raval, S.; Champatiray, P. K.; Sharma, R.

    2014-11-01

    This study describes the utility of Earth Observation (EO)-1 Hyperion data for sub-pixel mineral investigation using the Mixture Tuned Target Constrained Interference Minimized Filter (MTTCIMF) algorithm in the hostile mountainous terrain of the Rajsamand district of Rajasthan, which hosts economic mineralization such as lead, zinc, and copper. The study encompasses pre-processing, data reduction, Pixel Purity Index (PPI) computation, and endmember extraction from the reflectance image for surface minerals such as illite, montmorillonite, phlogopite, dolomite, and chlorite. These endmembers were then assessed against the USGS mineral spectral library and lab spectra of rock samples collected from the field for spectral inspection. Subsequently, the MTTCIMF algorithm was applied to the processed image to obtain a mineral distribution map for each detected mineral. A virtual verification method, which uses image information directly to evaluate the result, was adopted to assess the classified image, confirming an overall accuracy of 68% and a kappa coefficient of 0.6. Sub-pixel mineral information with reasonable accuracy could be a valuable guide for the geological and exploration community before committing to expensive ground and/or lab experiments to discover economic deposits. The study thus demonstrates the feasibility of Hyperion data for sub-pixel mineral mapping using the MTTCIMF algorithm in a cost- and time-effective approach.
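
    MTTCIMF itself is involved, but the matched filter underlying such target detectors is compact: whiten by the background covariance, then project onto the target spectrum. The sketch below uses synthetic data and is a simplified stand-in, not the study's implementation.

        # Classic matched-filter score per pixel (synthetic Hyperion-like data).
        import numpy as np

        rng = np.random.default_rng(2)
        X = rng.random((5000, 150))              # pixels x bands
        target = rng.random(150)                 # library spectrum of a target mineral

        mu = X.mean(axis=0)
        cov = np.cov(X - mu, rowvar=False) + 1e-6 * np.eye(150)  # regularised covariance
        w = np.linalg.solve(cov, target - mu)                    # whitened target vector
        scores = (X - mu) @ w / ((target - mu) @ w)              # abundance-like score
        print(scores.min(), scores.max())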

  17. Cost savings through multimission code reuse for Mars image products

    NASA Technical Reports Server (NTRS)

    Deen, R. G.

    2003-01-01

    An overview of the library's design will be presented, along with mission adaptation experiences and lessons learned, and the kinds of additional functionality that have been added while still retaining its multimission character. The application programs using the library will also be briefly described.

  18. NOAA Photo Library - Sanctuaries

    Science.gov Websites

    [Banner image: whale tail.] The word sanctuary evokes images of a sacred place, a refuge from the dangers of the world... CATALOG: view all images contained in the collection; click on thumbnails to view larger images. ALBUMS: images are arranged by themes; click on thumbnails to view larger images. Note that not all images are contained in the albums.

  19. NOAA Photo Library - NOAA's Ark/Animals Album

    Science.gov Websites

    CATALOG: view all images contained in the collection; click on thumbnails to view larger images. ALBUMS: images are arranged by themes; click on thumbnails to view larger images. Note that not all images are contained in the albums; select the CATALOG option to view all current images.

  20. SU-E-J-238: First-Order Approximation of Time-Resolved 4DMRI From Cine 2DMRI and Respiratory-Correlated 4DMRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, G; Tyagi, N; Deasy, J

    2015-06-15

    Purpose: Cine 2DMRI is useful in MR-guided radiotherapy, but it lacks volumetric information. We explore the feasibility of estimating time-resolved (TR) 4DMRI based on cine 2DMRI and respiratory-correlated (RC) 4DMRI through simulation. Methods: We hypothesize that a volumetric image during free breathing can be approximated by interpolation among 3DMRI image sets generated from an RC-4DMRI. Two patients' RC-4DMRI with 4 or 5 phases were used to generate additional 3DMRI by interpolation. For each patient, six libraries were created containing a total of 5 to 35 3DMRI image sets, using 0-6 equi-spaced tri-linear interpolations between adjacent and full-inhalation/full-exhalation phases. Sagittal cine 2DMRI were generated from reference 3DMRIs created from separate, unique interpolations from the original RC-4DMRI. To test whether accurate 3DMRI could be generated through rigid registration of the cine 2DMRI to the 3DMRI libraries, each sagittal 2DMRI was registered to sagittal cuts at the same location in the 3DMRI within each library to identify the two best matches: one with greater lung volume and one with smaller. A final interpolation between the corresponding 3DMRI was then performed to produce the first-order-approximation (FOA) 3DMRI. The quality and performance of the FOA as a function of library size were assessed using both the difference in lung volume and the average voxel intensity difference between the FOA and the reference 3DMRI. Results: The discrepancy between the FOA and reference 3DMRI decreases as the library size increases. The 3D lung volume difference decreases from 5-15% to 1-2% as the library size increases from 5 to 35 image sets. The average difference in lung voxel intensity decreases from 7-8 to 5-6, on a lung intensity scale of 0-135. Conclusion: This study indicates that the quality of FOA 3DMRI improves with increasing 3DMRI library size. On-going investigations will test this approach using actual cine 2DMRI and introduce a higher-order approximation for further improvement. This study is in part supported by NIH (U54CA137788 and U54CA132378).
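
    The first-order approximation reduces to a linear interpolation between the two best-matching library volumes; a toy sketch with synthetic volumes and assumed lung-volume values:

        # Interpolate between the smaller- and larger-lung library volumes.
        import numpy as np

        vol_small = np.random.rand(64, 192, 192)   # library 3DMRI, smaller lung volume
        vol_large = np.random.rand(64, 192, 192)   # library 3DMRI, larger lung volume
        lung_small, lung_large, lung_target = 3.1, 3.6, 3.4   # litres (assumed values)

        t = (lung_target - lung_small) / (lung_large - lung_small)
        foa = (1 - t) * vol_small + t * vol_large  # first-order-approximation 3DMRI
        print(t, foa.shape)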

  1. Branching out with filmless radiology.

    PubMed

    Carbajal, R; Honea, R

    1999-05-01

    Texas Children's Hospital, a 456 bed pediatric hospital located in the Texas Medical Center, has been constructing a large-scale picture archiving and communications system (PACS), including ultrasound (US), computed tomography (CT), magnetic resonance (MR), and computed radiography (CR). Until recently, filmless radiology operations have been confined to the imaging department, the outpatient treatment center, and the emergency center. As filmless services expand to other clinical services, the PACS staff must engage each service in a dialog to determine the appropriate level of support required. The number and type of image examinations, the use of multiple modalities and comparison examinations, and the relationship between viewing and direct patient care activities have a bearing on the number and type of display stations provided. Some of the information about customer services is contained in documentation already maintained by the imaging department. For example, by a custom report from the radiology information system (RIS), we were able to determine the number and type of examinations ordered by each referring physician for the previous 6 months. By compiling these by clinical service, we were able to determine our biggest customers by examination type and volume. Another custom report was used to determine who was requesting old examinations from the film library. More information about imaging usage was gathered by means of a questionnaire. Some customers view images only where patients are also seen, while some services view images independently from the patient. Some services use their conference rooms for critical image viewing such as treatment planning. Additional information was gained from geographical surveys of where films are currently produced, delivered by the film library, and viewed. In some areas, available space dictates the type and configuration of display station that can be used. Active participation in the decision process by the clinical service is a key element to successful filmless operations.

  2. Advanced Transport Operating System (ATOPS) utility library software description

    NASA Technical Reports Server (NTRS)

    Clinedinst, Winston C.; Slominski, Christopher J.; Dickson, Richard W.; Wolverton, David A.

    1993-01-01

    The individual software processes used in the flight computers on-board the Advanced Transport Operating System (ATOPS) aircraft have many common functional elements. A library of commonly used software modules was created for general uses among the processes. The library includes modules for mathematical computations, data formatting, system database interfacing, and condition handling. The modules available in the library and their associated calling requirements are described.

  3. A Rough Approximation of the Relative Labor Effectiveness of the Book Acquisition and Cataloging Process at Three Public Libraries.

    ERIC Educational Resources Information Center

    Applegate, H. C.

    To gain some insight into the effectiveness of the Glendale Public Library Processing Section, it was decided to compare, with some very crude measures, the performance in the acquisition and cataloging areas of that library with that of the neighboring libraries of Pasadena and Burbank. A management consultant on the Glendale City Manager's staff…

  4. AGScan: a pluggable microarray image quantification software based on the ImageJ library.

    PubMed

    Cathelin, R; Lopez, F; Klopp, Ch

    2007-01-15

    Many different programs are available to analyze microarray images. Most are commercial packages; some are free. In the latter group, only a few offer automatic grid alignment and a batch mode, and more often than not a program implements only one quantification algorithm. AGScan is an open source program that works on all major platforms. It is based on the ImageJ library [Rasband (1997-2006)] and offers a plug-in extension system for adding new functions to manipulate images, align grids, and quantify spots. It is appropriate for daily laboratory use and also as a framework for new algorithms. The program is freely distributed under the X11 Licence. Install instructions can be found in the user manual. The software can be downloaded from http://mulcyber.toulouse.inra.fr/projects/agscan/. Questions and plug-ins can be sent to the contact listed below.
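
    AGScan's plug-in extension system is Java/ImageJ-based; purely to illustrate the registry pattern such systems use, here is a language-neutral sketch in Python with hypothetical names:

        # Toy plug-in registry: quantification algorithms self-register by name.
        QUANTIFIERS = {}

        def register(name):
            def deco(fn):
                QUANTIFIERS[name] = fn
                return fn
            return deco

        @register("mean-intensity")
        def mean_intensity(spot_pixels):
            return sum(spot_pixels) / len(spot_pixels)

        print(QUANTIFIERS["mean-intensity"]([10, 12, 14]))   # -> 12.0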

  5. The Daily Image Information Needs and Seeking Behavior of Chinese Undergraduate Students

    ERIC Educational Resources Information Center

    Huang, Kun; Kelly, Diane

    2013-01-01

    A survey was conducted at Beijing Normal University to explore subjects' motives for image seeking; the image types they need; how and where they seek images; and the difficulties they encounter. The survey also explored subjects' attitudes toward current image services and their perceptions of how university libraries might provide assistance.…

  6. Canadian Libraries and Mass Deacidification.

    ERIC Educational Resources Information Center

    Pacey, Antony

    1992-01-01

    Considers the advantages and disadvantages of six mass deacidification processes that libraries can use to salvage printed materials: the Wei T'o process, the Diethyl Zinc (DEZ) process, the FMC (Lithco) process, the Book Preservation Associates (BPA) process, the "Bookkeeper" process, and the "Lyophilization" process. The…

  7. PANDA: a pipeline toolbox for analyzing brain diffusion images

    PubMed Central

    Cui, Zaixu; Zhong, Suyu; Xu, Pengfei; He, Yong; Gong, Gaolang

    2013-01-01

    Diffusion magnetic resonance imaging (dMRI) is widely used in both scientific research and clinical practice in in-vivo studies of the human brain. While a number of post-processing packages have been developed, fully automated processing of dMRI datasets remains challenging. Here, we developed a MATLAB toolbox named “Pipeline for Analyzing braiN Diffusion imAges” (PANDA) for fully automated processing of brain diffusion images. The processing modules of a few established packages, including FMRIB Software Library (FSL), Pipeline System for Octave and Matlab (PSOM), Diffusion Toolkit and MRIcron, were employed in PANDA. Using any number of raw dMRI datasets from different subjects, in either DICOM or NIfTI format, PANDA can automatically perform a series of steps to process DICOM/NIfTI to diffusion metrics [e.g., fractional anisotropy (FA) and mean diffusivity (MD)] that are ready for statistical analysis at the voxel-level, the atlas-level and the Tract-Based Spatial Statistics (TBSS)-level and can finish the construction of anatomical brain networks for all subjects. In particular, PANDA can process different subjects in parallel, using multiple cores either in a single computer or in a distributed computing environment, thus greatly reducing the time cost when dealing with a large number of datasets. In addition, PANDA has a friendly graphical user interface (GUI), allowing the user to be interactive and to adjust the input/output settings, as well as the processing parameters. As an open-source package, PANDA is freely available at http://www.nitrc.org/projects/panda/. This novel toolbox is expected to substantially simplify the image processing of dMRI datasets and facilitate human structural connectome studies. PMID:23439846
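
    PANDA's per-subject parallelism follows a familiar pattern; the sketch below shows that pattern with Python's multiprocessing, with a placeholder standing in for PANDA's actual MATLAB processing chain:

        # Process subjects in parallel across cores (placeholder per-subject step).
        from multiprocessing import Pool

        def process_subject(subject_id):
            # stand-in for eddy correction, tensor fitting, FA/MD computation, etc.
            return subject_id, "FA/MD ready"

        if __name__ == "__main__":
            subjects = [f"sub-{i:03d}" for i in range(1, 9)]
            with Pool(processes=4) as pool:
                for sid, status in pool.map(process_subject, subjects):
                    print(sid, status)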

  8. Adaptive detection of missed text areas in OCR outputs: application to the automatic assessment of OCR quality in mass digitization projects

    NASA Astrophysics Data System (ADS)

    Ben Salah, Ahmed; Ragot, Nicolas; Paquet, Thierry

    2013-01-01

    The French National Library (BnF) has launched many mass digitization projects in order to give access to its collection. The indexation of digital documents on Gallica (the digital library of the BnF) is done through their textual content, obtained thanks to service providers that use Optical Character Recognition (OCR) software. OCR software has become increasingly complex, composed of several subsystems dedicated to the analysis and the recognition of the elements in a page. However, the reliability of these systems is always an issue at stake. Indeed, in some cases, we can find errors in OCR outputs that occur because of an accumulation of several errors at different levels in the OCR process. One of the frequent errors in OCR outputs is missed text components. The presence of such errors may lead to severe defects in digital libraries. In this paper, we investigate the detection of missed text components to control the OCR results from the collections of the French National Library. Our verification approach uses local information inside the pages, based on Radon transform descriptors and Local Binary Pattern (LBP) descriptors coupled with OCR results, to check their consistency. The experimental results show that our method detects 84.15% of the missed textual components, by comparing the OCR ALTO output files (produced by the service providers) to the images of the document.
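
    The descriptor step can be sketched with scikit-image: a uniform LBP histogram plus Radon projections for a page region. The parameters (P, R, angle count) and the synthetic region are assumptions, not the paper's tuned values.

        # LBP histogram + Radon projections as a region descriptor (synthetic region).
        import numpy as np
        from skimage.feature import local_binary_pattern
        from skimage.transform import radon

        region = np.random.rand(128, 128)                   # stand-in page region
        lbp = local_binary_pattern(region, P=8, R=1, method="uniform")
        lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        sinogram = radon(region, theta=np.linspace(0.0, 180.0, 18), circle=False)
        features = np.concatenate([lbp_hist, sinogram.mean(axis=0)])
        print(features.shape)                               # combined descriptor vector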

  9. Algorithms and programming tools for image processing on the MPP, part 2

    NASA Technical Reports Server (NTRS)

    Reeves, Anthony P.

    1986-01-01

    A number of algorithms were developed for image warping and pyramid image filtering. Techniques were investigated for the parallel processing of a large number of independent irregular shaped regions on the MPP. In addition some utilities for dealing with very long vectors and for sorting were developed. Documentation pages for the algorithms which are available for distribution are given. The performance of the MPP for a number of basic data manipulations was determined. From these results it is possible to predict the efficiency of the MPP for a number of algorithms and applications. The Parallel Pascal development system, which is a portable programming environment for the MPP, was improved and better documentation including a tutorial was written. This environment allows programs for the MPP to be developed on any conventional computer system; it consists of a set of system programs and a library of general purpose Parallel Pascal functions. The algorithms were tested on the MPP and a presentation on the development system was made to the MPP users group. The UNIX version of the Parallel Pascal System was distributed to a number of new sites.

  10. Polio Pictures

    MedlinePlus

    ... dimensional representation of poliovirus. A few examples from public health professionals Child in Nigeria with a leg partly ... for these sites, which offer more images/photos. Public Health Image Library (PHIL) Immunization Action Coalition Polio Eradication ...

  11. Towards better digital pathology workflows: programming libraries for high-speed sharpness assessment of Whole Slide Images

    PubMed Central

    2014-01-01

    Background Since microscopic slides can now be automatically digitized and integrated in the clinical workflow, quality assessment of Whole Slide Images (WSI) has become a crucial issue. We present a no-reference quality assessment method that has been thoroughly tested since 2010 and is under implementation in multiple sites, both public university-hospitals and private entities. It is part of the FlexMIm R&D project which aims to improve the global workflow of digital pathology. For these uses, we have developed two programming libraries, in Java and Python, which can be integrated in various types of WSI acquisition systems, viewers and image analysis tools. Methods Development and testing have been carried out on a MacBook Pro i7 and on a bi-Xeon 2.7GHz server. Libraries implementing the blur assessment method have been developed in Java, Python, PHP5 and MySQL5. For web applications, JavaScript, Ajax, JSON and Sockets were also used, as well as the Google Maps API. Aperio SVS files were converted into the Google Maps format using VIPS and Openslide libraries. Results We designed the Java library as a Service Provider Interface (SPI), extendable by third parties. Analysis is computed in real-time (3 billion pixels per minute). Tests were made on 5000 single images, 200 NDPI WSI, 100 Aperio SVS WSI converted to the Google Maps format. Conclusions Applications based on our method and libraries can be used upstream, as calibration and quality control tool for the WSI acquisition systems, or as tools to reacquire tiles while the WSI is being scanned. They can also be used downstream to reacquire the complete slides that are below the quality threshold for surgical pathology analysis. WSI may also be displayed in a smarter way by sending and displaying the regions of highest quality before other regions. Such quality assessment scores could be integrated as WSI's metadata shared in clinical, research or teaching contexts, for a more efficient medical informatics workflow. PMID:25565494
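
    The paper's exact no-reference measure is not reproduced here; as a common stand-in, variance of the Laplacian gives a one-line sharpness score that a WSI pipeline can apply tile by tile, with a threshold (assumed below) deciding reacquisition.

        # Tile-level sharpness score via variance of the Laplacian (stand-in metric).
        import numpy as np
        import cv2

        tile = (np.random.rand(512, 512) * 255).astype(np.uint8)   # stand-in WSI tile
        sharpness = cv2.Laplacian(tile, cv2.CV_64F).var()
        print("reacquire tile" if sharpness < 100.0 else "tile OK", sharpness)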

  12. galario: Gpu Accelerated Library for Analyzing Radio Interferometer Observations

    NASA Astrophysics Data System (ADS)

    Tazzari, Marco; Beaujean, Frederik; Testi, Leonardo

    2017-10-01

    The galario library exploits the computing power of modern graphic cards (GPUs) to accelerate the comparison of model predictions to radio interferometer observations. It speeds up the computation of the synthetic visibilities given a model image (or an axisymmetric brightness profile) and their comparison to the observations.
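
    A hedged usage sketch of galario's documented Python API: sample a model image at observed (u, v) points to obtain synthetic visibilities. The pixel size and baselines below are placeholder values.

        # Synthetic visibilities from a model image via galario.
        import numpy as np
        from galario.double import sampleImage

        image = np.random.rand(256, 256)            # model brightness image (float64)
        dxy = 1e-7                                  # pixel size in radians (assumed)
        u = np.array([1e5, 2e5])                    # baselines in wavelengths (assumed)
        v = np.array([5e4, 1e5])
        vis = sampleImage(image, dxy, u, v)         # complex synthetic visibilities
        print(vis)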

  13. Trends in Library and Information Science: 1989. ERIC Digest.

    ERIC Educational Resources Information Center

    Eisenberg, Michael B.

    Based on a content analysis of professional journals, conference proceedings, ERIC documents, annuals, and dissertations in library and information science, the following current trends in the field are discussed: (1) there are important emerging roles and responsibilities for information professionals; (2) the status and image of librarians…

  14. Building Digital Audio Preservation Infrastructure and Workflows

    ERIC Educational Resources Information Center

    Young, Anjanette; Olivieri, Blynne; Eckler, Karl; Gerontakos, Theodore

    2010-01-01

    In 2009 the University of Washington (UW) Libraries special collections received funding for the digital preservation of its audio indigenous language holdings. The university libraries, where the authors work in various capacities, had begun digitizing image and text collections in 1997. Because of this, at the onset of the project, workflows (a…

  15. Safeguarding Copyrighted Contents: Digital Libraries and Intellectual Property Management. CWRU's Rights Management System.

    ERIC Educational Resources Information Center

    Alrashid, Tareq M.; Barker, James A.; Christian, Brian S.; Cox, Steven C.; Rabne, Michael W.; Slotta, Elizabeth A.; Upthegrove, Luella R.

    1998-01-01

    Describes Case Western Reserve University's (CWRU's) digital library project that examines the networked delivery of full-text materials and high-quality images to provide students excellent supplemental instructional resources delivered directly to their dormitory rooms. Reviews intellectual property (IP) management requirements and describes…

  16. Maps & Multimedia | National Agricultural Library

    Science.gov Websites

    National Agricultural Library, United States Department of Agriculture. Includes agricultural products material (tables, graphs) and a how-to-build guide for Deadpool, a proximal sensing cart platform suitable for proximal sensing and imaging in a wide range of agricultural and environmental...

  17. PACE: A Browsable Graphical Interface.

    ERIC Educational Resources Information Center

    Beheshti, Jamshid; And Others

    1996-01-01

    Describes PACE (Public Access Catalogue Extension), an alternative interface designed to enhance online catalogs by simulating images of books and library shelves to help users browse through the catalog. Results of a test in a college library against a text-based online public access catalog, including student attitudes, are described.…

  18. Looking at the Male Librarian Stereotype.

    ERIC Educational Resources Information Center

    Dickinson, Thad E.

    2002-01-01

    Discussion of library profession stereotypes focuses on academic male librarians. Topics include the position of the early academic librarians and the environment in which they worked; the beginnings of reference service; women in academic libraries; men in a feminized profession; and current images of male librarians in motion pictures and…

  19. Library reuse in a rapid development environment

    NASA Technical Reports Server (NTRS)

    Uhde, JO; Weed, Daniel; Gottlieb, Robert; Neal, Douglas

    1995-01-01

    The Aeroscience and Flight Mechanics Division (AFMD) established a Rapid Development Laboratory (RDL) to investigate and improve new 'rapid development' software production processes and refine the use of commercial, off-the-shelf (COTS) tools. These tools and processes take an avionics design project from inception through high-fidelity, real-time, hardware-in-the-loop (HIL) testing. One central theme of a rapid development process is the use and integration of a variety of COTS tools. This paper discusses the RDL MATRIXx® libraries, as well as the techniques for managing and documenting these libraries. This paper also shows the methods used for building simulations with the Advanced Simulation Development System (ASDS) libraries, and provides metrics to illustrate the amount of reuse for five complete simulations. Combining ASDS libraries with MATRIXx® libraries is also discussed.

  20. Omega-3 chicken egg detection system using a mobile-based image processing segmentation method

    NASA Astrophysics Data System (ADS)

    Nurhayati, Oky Dwi; Kurniawan Teguh, M.; Cintya Amalia, P.

    2017-02-01

    An omega-3 chicken egg is an egg produced through food engineering technology: it comes from hens fed a diet high in omega-3 fatty acids, so its omega-3 nutrient content is about fifteen times that of a Leghorn egg. Visually, its shell has the same shape and colour as a Leghorn's. The eggs can be distinguished by breaking the shell and testing the yolk's nutrient content in a laboratory, but that method is neither effective nor efficient. Addressing this problem, the purpose of this research is to build an application that detects omega-3 chicken eggs using mobile-based computer vision. The application was built on the OpenCV computer vision library for the Android operating system. The experiment required chicken egg images taken using an egg candling box; we used 60 omega-3 and Leghorn eggs as samples. Egg images were acquired with an Android smartphone, after which we applied several image processing steps: GrabCut foreground extraction, conversion of the RGB image to 8-bit grayscale, median filtering, P-tile segmentation, and morphological operations. The next step was feature extraction, computing the mean, variance, skewness, and kurtosis of each image. Finally, using these measurements, the chicken egg images were classified. The results showed that omega-3 and Leghorn eggs had different feature values, and the system provides an accurate reading around 91% of the time.
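
    A hedged re-creation of the described pipeline on a candled-egg photo; the file name, GrabCut rectangle, and P-tile fraction are assumptions, and the feature step follows the mean/variance/skewness/kurtosis list in the abstract.

        # GrabCut -> grayscale -> median filter -> P-tile threshold -> morphology -> features.
        import cv2
        import numpy as np
        from scipy.stats import skew, kurtosis

        img = cv2.imread("egg_candling.jpg")                        # hypothetical sample
        mask = np.zeros(img.shape[:2], np.uint8)
        bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
        cv2.grabCut(img, mask, (10, 10, img.shape[1] - 20, img.shape[0] - 20),
                    bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
        egg = img * np.where((mask == 1) | (mask == 3), 1, 0)[:, :, None].astype(np.uint8)

        gray = cv2.cvtColor(egg, cv2.COLOR_BGR2GRAY)                # 8-bit grayscale
        gray = cv2.medianBlur(gray, 5)                              # median filter
        p = 0.3                                                     # assumed P-tile fraction
        thresh = np.quantile(gray[gray > 0], 1 - p)                 # P-tile threshold
        binary = (gray >= thresh).astype(np.uint8) * 255
        binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

        vals = gray[binary > 0].astype(np.float64)
        features = [vals.mean(), vals.var(), skew(vals), kurtosis(vals)]
        print(np.round(features, 2))          # mean, variance, skewness, kurtosis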

  1. Sharing Control, Embracing Collaboration: Cross-Campus Partnerships for Library Website Design and Management

    ERIC Educational Resources Information Center

    Stephenson, Kimberley

    2012-01-01

    Cross-campus collaboration for library website design and management can be challenging, but the process can produce stronger, more attractive, and more usable library websites. Collaborative library website design and management can also lead to new avenues for marketing library tools and services; expert consultation for library technology…

  2. Standards for Health Sciences Libraries.

    ERIC Educational Resources Information Center

    Stinson, E. Ray

    1982-01-01

    Discusses service standards (level of excellence or adequacy in performance of library service) and their incorporation in the accreditation process for hospital library service and academic health sciences libraries. The certification program developed for health sciences librarians by the Medical Library Association is reviewed. Fifty-nine…

  3. Strategic planning with multitype libraries in the community: a model with extra funding as the main goal.

    PubMed Central

    Gall, C F; Miller, E G

    1997-01-01

    Medical libraries are discovering that ongoing collaboration in fundraising with other types of community libraries is mutually beneficial. Such partnerships may lead to joint grants, increase library visibility and access to decision makers, allow participation in community information networks, and provide leverage in additional fundraising projects. These partnerships have the potential to raise the profile of libraries. The accompanying community recognition for the parent organization may create a positive image, draw patients to the health center, and position the library and institution for future success in fundraising. Within institutions, development officers may become allies, mentors, and beneficiaries of the medical librarian's efforts. For a planned approach to community outreach with extra funding as the major objective, busy medical library administrators need guidelines. Standard participative techniques were applied to strategic planning by Indianapolis libraries to help achieve successful community outreach and to write joint statements of mission, vision, goals, and objectives. PMID:9285125

  4. USGS Digital Spectral Library splib06a

    USGS Publications Warehouse

    Clark, Roger N.; Swayze, Gregg A.; Wise, Richard A.; Livo, K. Eric; Hoefen, Todd M.; Kokaly, Raymond F.; Sutley, Stephen J.

    2007-01-01

    Introduction: We have assembled a digital reflectance spectral library that covers the wavelength range from the ultraviolet to far infrared, along with sample documentation. The library includes samples of minerals, rocks, soils, physically constructed as well as mathematically computed mixtures, plants, vegetation communities, microorganisms, and man-made materials. The samples and spectra collected were assembled for the purpose of using spectral features for the remote detection of these and similar materials. Analysis of spectroscopic data from laboratory, aircraft, and spacecraft instrumentation requires a knowledge base. The spectral library discussed here forms a knowledge base for the spectroscopy of minerals and related materials of importance to a variety of research programs being conducted at the U.S. Geological Survey. Much of this library grew out of the need for spectra to support imaging spectroscopy studies of the Earth and planets. Imaging spectrometers, such as the National Aeronautics and Space Administration (NASA) Airborne Visible/Infra Red Imaging Spectrometer (AVIRIS) or the NASA Cassini Visual and Infrared Mapping Spectrometer (VIMS), which is currently orbiting Saturn, have narrow bandwidths in many contiguous spectral channels that permit accurate definition of absorption features in spectra from a variety of materials. Identification of materials from such data requires a comprehensive spectral library of minerals, vegetation, man-made materials, and other subjects in the scene. Our research involves the use of the spectral library to identify the components in the spectrum of an unknown material. Therefore, the quality of the library must be very good. However, the quality required in a spectral library to successfully perform an investigation depends on the scientific questions to be answered and the type of algorithms to be used. For example, to map a mineral using imaging spectroscopy and the mapping algorithm of Clark and others (1990a, 2003b), one simply needs a diagnostic absorption band. The mapping system uses continuum-removed reference spectral features fitted to features in observed spectra. Spectral features for such algorithms can be obtained from a spectrum of a sample containing large amounts of contaminants, including those that add other spectral features, as long as the shape of the diagnostic feature of interest is not modified. If, however, the data are needed for radiative transfer models to derive mineral abundances from reflectance spectra, then completely uncontaminated spectra are required. This library contains spectra that span a range of quality, with purity indicators to flag spectra for (or against) particular uses. Acquiring spectral measurements and performing sample characterizations for this library has taken about 15 person-years of effort. Software to manage the library and provide scientific analysis capability is provided (Clark, 1980, 1993). A personal computer (PC) reader for the library is also available (Livo and others, 1993). The program reads specpr binary files (Clark, 1980, 1993) and plots spectra. Another program that reads the specpr format is written in IDL (Kokaly, 2005). In our view, an ideal spectral library consists of samples covering a very wide range of materials, has a large wavelength range with very high precision, and has enough sample analyses and documentation to establish the quality of the spectra. Time and available resources limit what can be achieved. Ideally, for each mineral, the sample analysis would include X-ray diffraction (XRD), electron microprobe (EM) or X-ray fluorescence (XRF), and petrographic microscopic analyses. For some minerals, such as iron oxides, additional analyses such as Mossbauer would be helpful. We have found that to make the basic spectral measurements, provide XRD, EM or XRF analyses, and microscopic analyses, document the results, and complete an entry of one spectral library sample, all takes about
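
    The feature-fitting mapping algorithm mentioned above compares continuum-removed spectra. A minimal Python sketch of that normalization is shown below; it illustrates the general technique on a spectrum sampled at increasing wavelengths and is not the USGS specpr software.

        import numpy as np

        def continuum_removed(wavelengths, reflectance):
            # Divide a reflectance spectrum by its upper convex hull (the
            # continuum) so that only the absorption features remain.
            w = np.asarray(wavelengths, dtype=float)
            r = np.asarray(reflectance, dtype=float)
            hull = [0]  # indices of the upper-hull vertices
            for i in range(1, len(w)):
                while len(hull) >= 2:
                    x1, y1 = w[hull[-2]], r[hull[-2]]
                    x2, y2 = w[hull[-1]], r[hull[-1]]
                    # Pop the last vertex while the turn is counter-clockwise,
                    # keeping only hull segments that lie above the spectrum.
                    if (x2 - x1) * (r[i] - y2) - (y2 - y1) * (w[i] - x2) <= 0:
                        break
                    hull.pop()
                hull.append(i)
            continuum = np.interp(w, w[hull], r[hull])
            return r / continuum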

  5. Precise and Efficient Retrieval of Captioned Images: The MARIE Project.

    ERIC Educational Resources Information Center

    Rowe, Neil C.

    1999-01-01

    The MARIE project explores knowledge-based information retrieval of captioned images of the kind found in picture libraries and on the Internet. MARIE's five-part approach exploits the idea that images are easier to understand with context, especially descriptive text near them, but it also does image analysis. Experiments show MARIE prototypes…

  6. SU-F-T-47: MRI T2 Exclusive Based Planning Using the Endocavitary/interstitial Gynecological Benidorm Applicator: A Proposed TPS Library and Preplan Efficient Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richart, J; Otal, A; Rodriguez, S

    Purpose: ABS and GEC-ESTRO have recommended T2-weighted MRI for image-guided brachytherapy. Recently, a new applicator (Benidorm Template, TB) was developed in our department (Rodriguez et al 2015). The TB is fully MRI compatible because of its titanium needles, and it allows the use of an intrauterine tandem. TPS applicator libraries are not currently available for non-rigid applicators with an interstitial component, such as the TB. The purpose of this work is to present the development of a library for the TB, together with its use in a pre-planning technique. Together, these two goals allow a very efficient, exclusively T2 MRI based clinical implementation of the TB. Methods: The developed library has been implemented in the Oncentra Brachytherapy TPS, version 4.3.0 (Elekta), and is now being implemented in the SagiPlan v2.0 TPS (Eckert & Ziegler BEBIG). To model the TB, the free and open-source packages FreeCAD and MeshLab were used. The reconstruction process is based on three inserted vitamin A pellets together with the data provided by the free length. The implemented pre-planning procedure is as follows: 1) a T2 MRI acquisition is performed with the template in place with only the vaginal cylinder (no uterine tube or needles); 2) the CTV is drawn and the required needles are selected using a Java-based application developed in-house; and 3) a post-implant T2 MRI is performed. Results: The library procedure has so far been successfully applied in 25 patients; in this work its use is illustrated with clinical examples. The pre-planning procedure has so far been applied in 6 patients, with significant advantages: needle depth estimation, a priori optimization of needle positions and number, time saving, etc. Conclusion: The TB library and pre-planning techniques are feasible and very efficient, and their use will be illustrated in this work.

  8. Translational Imaging Spectroscopy for Proximal Sensing

    PubMed Central

    Rogass, Christian; Koerting, Friederike M.; Mielke, Christian; Brell, Maximilian; Boesche, Nina K.; Bade, Maria; Hohmann, Christian

    2017-01-01

    Proximal sensing, as the near-field counterpart of remote sensing, offers a broad variety of applications. Imaging spectroscopy in general, and translational laboratory imaging spectroscopy in particular, can be utilized for a variety of different research topics. Geoscientific applications require precise pre-processing of hyperspectral data cubes to retrieve at-surface reflectance in order to conduct spectral feature-based comparison of unknown sample spectra to known library spectra. A new pre-processing chain called GeoMAP-Trans for at-surface reflectance retrieval is proposed here as an analogue to other algorithms published by the team of authors. It consists of a radiometric, a geometric, and a spectral module, each comprising several processing steps that are described in detail. The processing chain was adapted to the widely used HySPEX VNIR/SWIR imaging spectrometer system and tested using geological mineral samples. The performance was subjectively and objectively evaluated using standard artificial image quality metrics and comparative measurements of mineral and Lambertian diffuser standards with standard field and laboratory spectrometers. The proposed algorithm provides high-quality results, offers broad applicability through its generic design, and might be the first of its kind to be published. A high radiometric accuracy is achieved by incorporating the Reduction of Miscalibration Effects (ROME) framework. The geometric accuracy is better than 1 μpixel. The critical spectral accuracy was estimated in relative terms by comparing spectra of standard field spectrometers to those from HySPEX for a Lambertian diffuser; the achieved spectral accuracy is better than 0.02% for the full spectrum and better than 98% for the absorption features. It was empirically shown that point and imaging spectrometers provide different results for non-Lambertian samples due to their different sensing principles, adjacency scattering impacts on the signal, and anisotropic surface reflection properties. PMID:28800111
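
    The radiometric core of an at-surface reflectance retrieval of this kind can be sketched as dark-signal subtraction followed by normalization against a Lambertian standard of known reflectance. The Python sketch below illustrates only that generic step; it is not the GeoMAP-Trans implementation, and the array names are assumptions.

        import numpy as np

        def at_surface_reflectance(raw, dark, white, panel_reflectance=0.99):
            # raw, dark, white: (lines, samples, bands) digital-number cubes
            # for the sample, the dark signal, and the Lambertian standard.
            signal = raw.astype(np.float64) - dark
            reference = white.astype(np.float64) - dark
            return panel_reflectance * signal / np.clip(reference, 1e-9, None)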

  9. Extraction and labeling high-resolution images from PDF documents

    NASA Astrophysics Data System (ADS)

    Chachra, Suchet K.; Xue, Zhiyun; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.

    2013-12-01

    Accuracy of content-based image retrieval is affected by image resolution, among other factors: higher-resolution images enable extraction of image features that more accurately represent the image content. In order to improve the relevance of search results for our biomedical image search engine, Open-I, we have developed techniques to extract and label high-resolution versions of figures from biomedical articles supplied in the PDF format. Open-I uses the open-access subset of biomedical articles from the PubMed Central repository hosted by the National Library of Medicine. Articles are available in XML and in publisher-supplied PDF formats. As these PDF documents contain little or no metadata to identify the embedded images, the task includes labeling images according to their figure number in the article after they have been successfully extracted. For this purpose we use the labeled small-size images provided with the XML web version of the article. This paper describes the image extraction process and two alternative approaches to image labeling: one measures the similarity between two images from their intensity projections onto the coordinate axes, and the other uses the normalized cross-correlation between the intensities of the two images. Using image identification based on intensity projection, we achieved a precision of 92.84% and a recall of 82.18% in labeling the extracted images.
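
    Both labeling approaches reduce to simple array operations. The sketch below, in Python with OpenCV and NumPy, illustrates them under an assumed common image size; it is not the Open-I code.

        import cv2
        import numpy as np

        def projection_similarity(img_a, img_b, size=(64, 64)):
            # Resize both grayscale images to a common grid, project the
            # intensities onto each coordinate axis, and average the
            # correlations of the two projection profiles.
            a = cv2.resize(img_a, size).astype(np.float64)
            b = cv2.resize(img_b, size).astype(np.float64)
            cx = np.corrcoef(a.sum(axis=0), b.sum(axis=0))[0, 1]
            cy = np.corrcoef(a.sum(axis=1), b.sum(axis=1))[0, 1]
            return (cx + cy) / 2.0

        def normalized_cross_correlation(img_a, img_b, size=(64, 64)):
            # The alternative: NCC between the full intensity arrays.
            a = cv2.resize(img_a, size).astype(np.float64).ravel()
            b = cv2.resize(img_b, size).astype(np.float64).ravel()
            a, b = a - a.mean(), b - b.mean()
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))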

  10. Relevance Judging, Evaluation, and Decision Making in Virtual Libraries: A Descriptive Study.

    ERIC Educational Resources Information Center

    Fitzgerald, Mary Ann; Galloway, Chad

    2001-01-01

    Describes a study that investigated the cognitive processes undergraduates used to select information while using a virtual library, GALILEO (Georgia Library Learning Online). Discusses higher order thinking processes, relevance judging, evaluation (critical thinking), decision making, reasoning involving documents, relevance-related reasoning,…

  11. The Cost of Library Services: Activity-Based Costing in an Australian Academic Library.

    ERIC Educational Resources Information Center

    Robinson, Peter; Ellis-Newman, Jennifer

    1998-01-01

    Explains activity-based costing (ABC), discusses the benefits of ABC to library managers, and describes the steps involved in implementing ABC in an Australian academic library. Discusses the budgeting process in universities, and considers benefits to the library. (Author/LRW)

  12. MIST: An Open Source Environmental Modelling Programming Language Incorporating Easy to Use Data Parallelism.

    NASA Astrophysics Data System (ADS)

    Bellerby, Tim

    2014-05-01

    Model Integration System (MIST) is an open-source environmental modelling programming language that directly incorporates data parallelism. The language is designed to enable straightforward programming structures, such as nested loops and conditional statements, to be directly translated into sequences of whole-array (or, more generally, whole data-structure) operations. MIST thus enables the programmer to use well-understood constructs, directly relating to the mathematical structure of the model, without having to explicitly vectorize code or worry about details of parallelization. A range of common modelling operations are supported by dedicated language structures operating on cell neighbourhoods rather than individual cells (e.g., the 3x3 local neighbourhood needed to implement an averaging image filter can be simply accessed from within a loop traversing all image pixels, as sketched below). This facility hides details of inter-process communication behind more mathematically relevant descriptions of model dynamics. The MIST automatic vectorization/parallelization process serves both to distribute work among available nodes and, separately, to control storage requirements for intermediate expressions, enabling operations on very large domains for which memory availability may be an issue. MIST is designed to facilitate efficient interpreter-based implementations. A prototype open-source interpreter is available, coded in standard FORTRAN 95, with tools to rapidly integrate existing FORTRAN 77 or 95 code libraries. The language is formally specified and thus not limited to a FORTRAN implementation or to an interpreter-based approach. A MIST-to-FORTRAN compiler is under development, and volunteers are sought to create an ANSI C implementation. Parallel processing is currently implemented using OpenMP; however, the parallelization code is fully modularised and could be replaced with implementations using other libraries. GPU implementation is potentially possible.
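
    The translation MIST performs can be mimicked in NumPy: the nested-loop 3x3 averaging filter becomes a sum of nine shifted whole-array views. The sketch below uses NumPy as a stand-in for MIST's generated whole-array operations; it illustrates the idea and is not MIST code.

        import numpy as np

        def mean3x3_loops(img):
            # The loop-over-neighbourhoods form a MIST programmer would write.
            a = img.astype(np.float64)
            h, w = a.shape
            out = np.empty((h - 2, w - 2))
            for i in range(1, h - 1):
                for j in range(1, w - 1):
                    out[i - 1, j - 1] = a[i - 1:i + 2, j - 1:j + 2].mean()
            return out

        def mean3x3_arrays(img):
            # The whole-array translation: nine shifted views summed, no loop.
            a = img.astype(np.float64)
            h, w = a.shape
            return sum(a[1 + di:h - 1 + di, 1 + dj:w - 1 + dj]
                       for di in (-1, 0, 1) for dj in (-1, 0, 1)) / 9.0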

  13. Systematic review and modelling of the cost-effectiveness of cardiac magnetic resonance imaging compared with current existing testing pathways in ischaemic cardiomyopathy.

    PubMed

    Campbell, Fiona; Thokala, Praveen; Uttley, Lesley C; Sutton, Anthea; Sutton, Alex J; Al-Mohammad, Abdallah; Thomas, Steven M

    2014-09-01

    Cardiac magnetic resonance imaging (CMR) is increasingly used to assess patients for myocardial viability prior to revascularisation. This is important to ensure that only those likely to benefit are subjected to the risk of revascularisation. The objectives were to assess current evidence on the accuracy and cost-effectiveness of CMR to test patients prior to revascularisation in ischaemic cardiomyopathy; to develop an economic model to assess cost-effectiveness for different imaging strategies; and to identify areas for further primary research. Databases searched were: MEDLINE including MEDLINE In-Process & Other Non-Indexed Citations. Initial searches were conducted in March 2011 in the following databases with dates: MEDLINE including MEDLINE In-Process & Other Non-Indexed Citations via Ovid (1946 to March 2011); Bioscience Information Service (BIOSIS) Previews via Web of Science (1969 to March 2011); EMBASE via Ovid (1974 to March 2011); Cochrane Database of Systematic Reviews via The Cochrane Library (1996 to March 2011); Cochrane Central Register of Controlled Trials via The Cochrane Library (1998 to March 2011); Database of Abstracts of Reviews of Effects via The Cochrane Library (1994 to March 2011); NHS Economic Evaluation Database via The Cochrane Library (1968 to March 2011); Health Technology Assessment Database via The Cochrane Library (1989 to March 2011); and the Science Citation Index via Web of Science (1900 to March 2011). Additional searches were conducted from October to November 2011 in the following databases with dates: MEDLINE including MEDLINE In-Process & Other Non-Indexed Citations via Ovid (1946 to November 2011); BIOSIS Previews via Web of Science (1969 to October 2011); EMBASE via Ovid (1974 to November 2011); Cochrane Database of Systematic Reviews via The Cochrane Library (1996 to November 2011); Cochrane Central Register of Controlled Trials via The Cochrane Library (1998 to November 2011); Database of Abstracts of Reviews of Effects via The Cochrane Library (1994 to November 2011); NHS Economic Evaluation Database via The Cochrane Library (1968 to November 2011); Health Technology Assessment Database via The Cochrane Library (1989 to November 2011); and the Science Citation Index via Web of Science (1900 to October 2011). Electronic databases were searched March-November 2011. The systematic review selected studies that assessed the clinical effectiveness and cost-effectiveness of CMR to establish the role of CMR in viability assessment, compared with other imaging techniques: stress echocardiography, single-photon emission computed tomography (SPECT) and positron emission tomography (PET). Studies had to have an appropriate reference standard and contain accuracy data or sufficient details so that accuracy data could be calculated. Data were extracted by two reviewers and discrepancies resolved by discussion. Quality of studies was assessed using the QUADAS II tool (University of Bristol, Bristol, UK). A rigorous diagnostic accuracy systematic review assessed clinical and cost-effectiveness of CMR in viability assessment. A health economic model estimated costs and quality-adjusted life-years (QALYs) accrued by diagnostic pathways for identifying patients with viable myocardium in ischaemic cardiomyopathy with a view to revascularisation. The pathways involved CMR, stress echocardiography, SPECT, and PET, alone or in combination. Strategies of no testing and of revascularising everyone were included to determine the most cost-effective strategy.
Twenty-four studies met the inclusion criteria. All were prospective. Participant numbers ranged from 8 to 52. The mean left ventricular ejection fraction in studies reporting this outcome was 24-62%. CMR approaches included stress CMR and late gadolinium-enhanced cardiovascular magnetic resonance imaging (CE CMR). Recovery following revascularisation was the reference standard. Twelve studies assessed diagnostic accuracy of stress CMR and 14 studies assessed CE CMR. A bivariate regression model was used to calculate the sensitivity and specificity of CMR. Summary sensitivity and specificity for stress CMR was 82.2% [95% confidence interval (CI) 73.2% to 88.7%] and 87.1% (95% CI 80.4% to 91.7%) and for CE CMR was 95.5% (95% CI 94.1% to 96.7%) and 53% (95% CI 40.4% to 65.2%) respectively. The sensitivity and specificity of PET, SPECT and stress echocardiography were calculated using data from 10 studies and systematic reviews. The sensitivity of PET was 94.7% (95% CI 90.3% to 97.2%), of SPECT was 85.1% (95% CI 78.1% to 90.2%) and of stress echocardiography was 77.6% (95% CI 70.7% to 83.3%). The specificity of PET was 68.8% (95% CI 50% to 82.9%), of SPECT was 62.1% (95% CI 52.7% to 70.7%) and of stress echocardiography was 69.6% (95% CI 62.4% to 75.9%). All currently used diagnostic strategies were cost-effective compared with no testing at current National Institute for Health and Care Excellence thresholds. If the annual mortality rates for non-viable patients were assumed to be higher for revascularised patients, then testing with CE CMR was most cost-effective at a threshold of £20,000/QALY. The proportion of model runs in which each strategy was most cost-effective, at a threshold of £20,000/QALY, was 40% for CE CMR, 42% for PET and 16.5% for revascularising everyone. The expected value of perfect information at £20,000/QALY was £620 per patient. If all patients (viable or not) gained benefit from revascularisation, then it was most cost-effective to revascularise all patients. Definitions and techniques assessing viability were highly variable, making data extraction and comparisons difficult. Lack of evidence meant assumptions were made in the model leading to uncertainty; differing scenarios were generated around key assumptions. All the diagnostic pathways are a cost-effective use of NHS resources. Given the uncertainty in the mortality rates, the cost-effectiveness analysis was performed using a set of scenarios. The cost-effectiveness analyses suggest that CE CMR and revascularising everyone were the optimal strategies. Future research should look at implementation costs for this type of imaging service, provide guidance on consistent reporting of diagnostic testing data for viability assessment, and focus on the impact of revascularisation or best medical therapy in this group of high-risk patients. The National Institute of Health Technology Assessment programme.
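
    The decision rule underlying these threshold comparisons is net monetary benefit: a strategy's QALYs are valued at the willingness-to-pay threshold, its cost is subtracted, and the strategy with the highest value is preferred. The numbers in the sketch below are purely illustrative, not outputs of the model.

        THRESHOLD = 20000.0  # willingness to pay, in pounds per QALY

        def net_monetary_benefit(qalys, cost, threshold=THRESHOLD):
            # NMB = QALYs * threshold - cost; higher is better.
            return qalys * threshold - cost

        # Hypothetical (QALYs, cost) pairs for two strategies.
        strategies = {"CE CMR": (6.10, 21500.0), "PET": (6.05, 20750.0)}
        best = max(strategies, key=lambda s: net_monetary_benefit(*strategies[s]))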

  14. Design of a web portal for interdisciplinary image retrieval from multiple online image resources.

    PubMed

    Kammerer, F J; Frankewitsch, T; Prokosch, H-U

    2009-01-01

    Images play an important role in medicine. Finding the desired images within the multitude of online image databases is a time-consuming and frustrating process, and existing websites do not meet all the requirements of an ideal learning environment for medical students. This work establishes a new web portal providing a centralized access point to a selected number of online image databases. A back-end system locates images on given websites and extracts relevant metadata. The images are indexed using the UMLS and the MetaMap system provided by the US National Library of Medicine. Specially developed functions allow the creation of individual navigation structures. The front-end system suits the specific needs of medical students. A navigation structure consisting of several medical fields, university curricula, and the ICD-10 was created. The images may be accessed via this navigation structure or using different search functions, with cross-references provided by the semantic relations of the UMLS. Over 25,000 images were identified and indexed. A pilot evaluation among medical students showed good initial results concerning the acceptance of the developed navigation structures and search features. The integration of images from different sources into the UMLS semantic network offers a quick and easy-to-use learning environment.

  15. The USF Libraries Virtual Library Project: A Blueprint for Development.

    ERIC Educational Resources Information Center

    Metz-Wiseman, Monica; Silver, Susan; Hanson, Ardis; Johnston, Judy; Grohs, Kim; Neville, Tina; Sanchez, Ed; Gray, Carolyn

    This report of the Virtual Library Planning Committee (VLPC) is intended to serve as a blueprint for the University of South Florida (USF) Libraries as it shifts from print to digital formats in its evolution into a "Virtual Library". A comprehensive planning process is essential for the USF Libraries to make optimum use of technology,…

  16. MINC 2.0: A Flexible Format for Multi-Modal Images.

    PubMed

    Vincent, Robert D; Neelin, Peter; Khalili-Mahani, Najmeh; Janke, Andrew L; Fonov, Vladimir S; Robbins, Steven M; Baghdadi, Leila; Lerch, Jason; Sled, John G; Adalat, Reza; MacDonald, David; Zijdenbos, Alex P; Collins, D Louis; Evans, Alan C

    2016-01-01

    It is often useful that an imaging data format can afford rich metadata, be flexible, scale to very large file sizes, support multi-modal data, and have strong inbuilt mechanisms for data provenance. Beginning in 1992, MINC was developed as a system for flexible, self-documenting representation of neuroscientific imaging data with arbitrary orientation and dimensionality. The MINC system incorporates three broad components: a file format specification, a programming library, and a growing set of tools. In the early 2000s, the MINC developers created MINC 2.0, which added support for 64-bit file sizes, internal compression, and a number of other modern features. Because of its extensible design, it has been easy to incorporate details of provenance in the header metadata, including an explicit processing history, unique identifiers, and vendor-specific scanner settings. This makes MINC ideal for use in large-scale imaging studies and databases. It also makes it easy to adapt to new scanning sequences and modalities.
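
    For readers who want to inspect such files programmatically, the third-party nibabel Python package can read both MINC 1 and MINC 2 volumes; the file name below is a placeholder. This is a reader sketch, not the MINC C library itself.

        import nibabel as nib

        img = nib.load("scan.mnc")    # handles MINC 1.0 and MINC 2.0 files
        data = img.get_fdata()        # voxel array, arbitrary dimensionality
        print(img.shape, img.affine)  # dimensions and world-space orientation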

  17. Searching for patterns in remote sensing image databases using neural networks

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1995-01-01

    We have investigated a method, based on a successful neural network multispectral image classification system, of searching for single patterns in remote sensing databases. While defining the pattern to search for and the feature to be used for that search (spectral, spatial, temporal, etc.) is challenging, a more difficult task is selecting competing patterns to train against the desired pattern. Schemes for competing pattern selection, including random selection and human interpreted selection, are discussed in the context of an example detection of dense urban areas in Landsat Thematic Mapper imagery. When applying the search to multiple images, a simple normalization method can alleviate the problem of inconsistent image calibration. Another potential problem, that of highly compressed data, was found to have a minimal effect on the ability to detect the desired pattern. The neural network algorithm has been implemented using the PVM (Parallel Virtual Machine) library and nearly-optimal speedups have been obtained that help alleviate the long process of searching through imagery.

  18. Mechanization of library procedures in a medium-sized medical library: XVI. Computer-assisted cataloging, the first decade.

    PubMed Central

    Bolef, D

    1975-01-01

    After ten years of experimentation in computer-assisted cataloging, the Washington University School of Medicine Library has decided to join the Ohio College Library Center network. The history of the library's work preceding this decision is reviewed. The data processing equipment and computers that have permitted librarians to explore different ways of presenting cataloging information are discussed. Certain cataloging processes are facilitated by computer manipulation and printouts, but the intellectual cataloging processes such as descriptive and subject cataloging are not. Networks and shared bibliographic data bases show promise of eliminating the intellectual cataloging for one book by more than one cataloger. It is in this area that future developments can be expected. PMID:1148442

  19. Implementing a Systematic Planning Process in Two Very Small Rural Public Libraries.

    ERIC Educational Resources Information Center

    Senkevitch, Judith J.

    1985-01-01

    Describes a pilot project by a regional library system in central New York State to implement systematic planning activities in rural public libraries serving populations under 5,000. Motivations for the adoption of innovation and key elements in the decision making process during implementation are examined. (CLB)

  20. Earth science photographs from the U.S. Geological Survey Library

    USGS Publications Warehouse

    McGregor, Joseph K.; Abston, Carl C.

    1995-01-01

    This CD-ROM set contains 1,500 scanned photographs from the U.S. Geological Survey Library for use as a photographic glossary of elementary geologic terms. Scholars are encouraged to copy these public domain images into their reports or databases to enhance their presentations. High-quality prints and (or) slides are available upon request from the library. This CD-ROM was produced in accordance with the ISO 9660 standard; however, it is intended for use on DOS-based computer systems only.

  1. The Royal Medical Society of Edinburgh: Sale of its Library at Sotheby's *

    PubMed Central

    Crawford, Helen

    1970-01-01

    The library of the Royal Medical Society of Edinburgh, which has been in existence for nearly 250 years, was sold by Sotheby & Co. of London at three auction sales during 1969. The author describes her attendance at the three sales, with emphasis on the most valuable items sold and the considerable acquisitions made for the Middleton Medical Library of the University of Wisconsin. Concluding observations concern some of the practical problems of acquiring antiquarian books at auction. PMID:5496237

  2. Development of an Integrated, Computer-Based Bibliographical Data System for a Large University Library. Annual Report to the National Science Foundation from the University of Chicago Library, 1966/67.

    ERIC Educational Resources Information Center

    Fussler, Herman; Payne, Charles T.

    Part I is a discussion of the following project tasks: A) development of an on-line, real-time bibliographic data processing system; B) implementation in library operations; C) character sets; D) Project MARC; E) circulation; and F) processing operation studies. Part II is a brief discussion of efforts to work out cooperative library systems…

  3. Successes & Failures of Digital Libraries. Papers Presented at the Annual Clinic on Library Applications of Data Processing (35th, Champaign, Illinois, March 22-24, 1998).

    ERIC Educational Resources Information Center

    Harum, Susan, Ed.; Twidale, Michael, Ed.

    This clinic's goal was to address questions arising during the process of transition from theory and research development to deployed useful and usable (and used) digital library systems. The idea was to use the Digital Libraries Initiative (DLI) based at the University of Illinois at Urbana-Champaign and entering its final year, as a detailed…

  4. Quickly Creating Interactive Astronomy Illustrations

    ERIC Educational Resources Information Center

    Slater, Timothy F.

    2015-01-01

    An innate advantage for astronomy teachers is having numerous breathtaking images of the cosmos available to capture students' curiosity, imagination, and wonder. Internet-based astronomy image libraries are numerous and easy to navigate. The Astronomy Picture of the Day, the Hubble Space Telescope image archive, and the NASA Planetary…

  5. Health sciences library building projects: 1995 survey.

    PubMed Central

    Ludwig, L

    1996-01-01

    The Medical Library Association's fifth annual survey of recent health sciences library building projects identified twenty-five libraries planning, expanding, or constructing new library facilities. None of the fifteen new library projects are free-standing structures; however, several occupy a major portion of the project space. Ten projects involve renovation of or addition to existing space. Information regarding size, cost of project, type of construction, completion date, and other factual data was provided for twelve projects. The remaining identified projects are in pre-design or early-design stages, or are awaiting funding approval. Library building projects for three hospital libraries, three academic medical libraries, and an association library are described. Each illustrates how considerations of economics and technology are changing the traditional library model from a centrally stored information depository housing a wide range of information under one roof, where users come to the information, into an electronic model gradually shifting from investment in the physical presence of resources to investment in creating work space for credible information specialists who help in-house and distant users obtain information electronically from any place and at any time. This new model includes a highly skilled library team to manage, filter, and package information for users trained by these resident experts. PMID:8883981

  6. The Application of Computers to Library Technical Processing

    ERIC Educational Resources Information Center

    Veaner, Allen B.

    1970-01-01

    Describes computer applications to acquisitions and technical processing and reports in detail on Stanford's development work in automated technical processing. Author is Assistant Director for Bibliographic Operation, Stanford University Libraries. (JB)

  7. Toolkits and Libraries for Deep Learning.

    PubMed

    Erickson, Bradley J; Korfiatis, Panagiotis; Akkus, Zeynettin; Kline, Timothy; Philbrick, Kenneth

    2017-08-01

    Deep learning is an important new area of machine learning which encompasses a wide range of neural network architectures designed to complete various tasks. In the medical imaging domain, example tasks include organ segmentation, lesion detection, and tumor classification. The most popular network architecture for deep learning for images is the convolutional neural network (CNN). Whereas traditional machine learning requires determination and calculation of features from which the algorithm learns, deep learning approaches learn the important features as well as the proper weighting of those features to make predictions for new data. In this paper, we will describe some of the libraries and tools that are available to aid in the construction and efficient execution of deep learning as applied to medical images.
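
    To give a flavour of what these libraries express, below is a minimal convolutional network for two-class image patches written with PyTorch, one of the toolkits surveyed. The layer sizes and the 64x64 grayscale patch format are illustrative assumptions, not taken from the paper.

        import torch
        import torch.nn as nn

        class TinyCNN(nn.Module):
            # Two conv/pool stages followed by a linear classifier.
            def __init__(self, num_classes=2):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),
                )
                self.classifier = nn.Linear(32 * 16 * 16, num_classes)

            def forward(self, x):  # x: (batch, 1, 64, 64) grayscale patches
                return self.classifier(self.features(x).flatten(1))

        model = TinyCNN()
        logits = model(torch.randn(4, 1, 64, 64))  # four dummy patches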

  8. Special Libraries and the Corporate Political Process.

    ERIC Educational Resources Information Center

    White, Herbert S.

    1984-01-01

    This examination of the position of the special library and its services in the corporate setting highlights reasons why libraries are often taken for granted, library's role in corporate financial calculations, generalizations concerning librarian characteristics, and situations that may indicate trouble for a library that is not serving its…

  9. New Technology and the Public Library. Final Report and Executive Summary.

    ERIC Educational Resources Information Center

    Griffiths, Jose-Marie; King, Donald W.

    This report presents current and potential library applications of new technologies, issues surrounding their introduction into public libraries, and activities suggested for use during the introduction procedure. A brief appraisal of the public library's role in the information transfer process precedes a review of library automation in…

  10. New Jersey State Library Technology Plan, 1999-2001.

    ERIC Educational Resources Information Center

    Breedlove, Elizabeth A., Ed.

    This document represents the New Jersey State Library Technology Plan for 1999-2001. Contents include: the mission statement; technology planning process of the Technology Committee (convened by the State Library); specific goals of the Technology Plan 1999-2001; technology assumptions for the operational library and statewide library services;…

  11. Creating a New Definition of Library Cooperation: Past, Present, and Future Models.

    ERIC Educational Resources Information Center

    Lenzini, Rebecca T.; Shaw, Ward

    1991-01-01

    Describes the creation and purpose of the Colorado Alliance of Research Libraries (CARL), the subsequent development of CARL Systems, and its current research projects. Topics discussed include online catalogs; UnCover, a journal article database; full text data; document delivery; visual images in computer systems; networks; and implications for…

  12. Local Places, Global Connections: Libraries in the Digital Age. What's Going On Series.

    ERIC Educational Resources Information Center

    Benton Foundation, Washington, DC.

    Libraries have long been pivotal community institutions--public spaces where people can come together to learn, reflect, and interact. Today, information is rapidly spreading beyond books and journals to digital government archives, business databases, electronic sound and image collections, and the flow of electronic impulses over computer…

  13. Virtual digital library

    NASA Astrophysics Data System (ADS)

    Thoma, George R.

    1996-03-01

    The virtual digital library, a concept that is quickly becoming a reality, offers rapid and geography-independent access to stores of text, images, graphics, motion video and other datatypes. Furthermore, a user may move from one information source to another through hypertext linkages. The projects described here further the notion of such an information paradigm from an end user viewpoint.

  14. SAIL: automating interlibrary loan.

    PubMed Central

    Lacroix, E M

    1994-01-01

    The National Library of Medicine (NLM) initiated the System for Automated Interlibrary Loan (SAIL) pilot project to study the feasibility of using imaging technology linked to the DOCLINE system to deliver copies of journal articles. During the project, NLM converted a small number of print journal issues to electronic form, linking the captured articles to the MEDLINE citation unique identifier. DOCLINE requests for these journals that could not be filled by network libraries were routed to SAIL. Nearly 23,000 articles from sixty-four journals recently selected for indexing in Index Medicus were scanned to convert them to electronic images. During fiscal year 1992, 4,586 scanned articles were used to fill 10,444 interlibrary loan (ILL) requests, and more than half of these were used only once. Eighty percent of all the articles were not requested at all. The total cost per article delivered was $10.76, substantially more than it costs to process a photocopy request. Because conversion costs were the major component of the total SAIL cost, and most of the articles captured for the project were not requested, this model was not cost-effective. Data on SAIL journal article use was compared with all ILL requests filled by NLM for the same period. Eighty-eight percent of all articles requested from NLM were requested only once. The results of the SAIL project demonstrated that converting journal articles to electronic images and storing them in anticipation of repeated requests would not meet NLM's objective to improve interlibrary loan. PMID:8004020

  15. Development of a vision-based pH reading system

    NASA Astrophysics Data System (ADS)

    Hur, Min Goo; Kong, Young Bae; Lee, Eun Je; Park, Jeong Hoon; Yang, Seung Dae; Moon, Ha Jung; Lee, Dong Hoon

    2015-10-01

    pH paper is generally used for pH interpretation in the QC (quality control) process of radiopharmaceuticals. It is easy to handle and useful for small samples such as radioisotopes and radioisotope (RI)-labeled compounds for positron emission tomography (PET). However, pH-paper-based reading by eye is limited by eyesight and prone to inaccurate readings. In this paper, we report a new pH reading device and its related software. The proposed pH reading system uses a vision algorithm based on an RGB library and is divided into two parts. The first is the reading device, which consists of a light source, a CCD camera, and a data acquisition (DAQ) board. To improve sensitivity, the reading device uses the three primary colors of an LED (light emitting diode); three separate colors discriminate wavelengths better than a single white LED. The second is a graphical user interface (GUI) program for the vision interface and report generation. The GUI program stores the color codes of the pH paper in a database; in reading mode, the CCD camera captures the pH paper and the software compares its color with the RGB database image. The software captures and reports information on the samples, such as pH results, captured images, and library images, and saves them as Excel files.
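
    The reading mode described above amounts to nearest-neighbour matching of the captured paper colour against the stored RGB library. The Python sketch below illustrates this with hypothetical reference entries; the system's actual database values and matching rule are not published in the abstract.

        import numpy as np

        # Hypothetical pH-to-mean-RGB reference entries; a real system would
        # load these from its calibrated database.
        PH_LIBRARY = {4.0: (205, 120, 60), 7.0: (150, 160, 70), 10.0: (60, 90, 140)}

        def read_ph(patch_bgr):
            # Average the captured patch (OpenCV images are BGR), then return
            # the library pH whose reference colour is nearest in RGB space.
            mean_rgb = patch_bgr[..., ::-1].reshape(-1, 3).mean(axis=0)
            return min(PH_LIBRARY, key=lambda ph:
                       np.linalg.norm(mean_rgb - np.array(PH_LIBRARY[ph])))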

  17. Process monitoring using automatic physical measurement based on electrical and physical variability analysis

    NASA Astrophysics Data System (ADS)

    Shauly, Eitan N.; Levi, Shimon; Schwarzband, Ishai; Adan, Ofer; Latinsky, Sergey

    2015-04-01

    A fully automated silicon-based methodology for systematic analysis of electrical features is shown. The system was developed for process monitoring and electrical variability reduction. A mapping step was created using dedicated structures, such as a static random-access memory (SRAM) array or a standard cell library, or by using a simple design-rule-checking run-set. The resulting database was used to choose locations for critical-dimension scanning electron microscope images and to extract specific layout parameters, which were then input to SPICE compact-model simulation. Based on the experimental data, we identified two items that must be checked and monitored using the method described here: the transistor's sensitivity to the distance between the poly end cap and the edge of the active area (AA) due to AA rounding, and SRAM leakage due to an N-well lying too close to a P-well. Building on this example, we used the method extensively for process monitoring and variability analysis of transistor gates with different shapes. In addition, a large area of a high-density standard cell library was analyzed, and another set of monitoring focused on a high-density SRAM array is also presented. These examples provided information on the poly and AA layers through transistor parameters such as leakage current and drive current. We successfully defined "robust" and "less-robust" transistor configurations in the library and identified asymmetric transistors in the SRAM bit-cells. These data were compared with data extracted from the same devices at the end of the line. A further set of analyses was performed on samples after Cu M1 etch: process monitoring information on M1-enclosed contacts was extracted using contact resistance as feedback, and guidelines for the optimal M1 space for different layout configurations were derived. All these data demonstrate the successful in-field implementation of our methodology as a useful process monitoring method.

  18. Library Information-Processing System

    NASA Technical Reports Server (NTRS)

    1985-01-01

    System works with Library of Congress MARC II format. System composed of subsystems that provide wide range of library information-processing capabilities. Format is American National Standards Institute (ANSI) format for machine-readable bibliographic data. Adaptable to any medium-to-large library.

  19. A new tool for supervised classification of satellite images available on web servers: Google Maps as a case study

    NASA Astrophysics Data System (ADS)

    García-Flores, Agustín; Paz-Gallardo, Abel; Plaza, Antonio; Li, Jun

    2016-10-01

    This paper describes a new web platform dedicated to the classification of satellite images, called Hypergim. The current implementation enables users to classify satellite images from any part of the world thanks to the worldwide maps provided by Google Maps. To perform this classification, Hypergim has used unsupervised algorithms such as Isodata and K-means. Here, we present an extension of the original platform in which Hypergim is adapted to use supervised algorithms to improve the classification results. This involves a significant modification of the user interface, providing the user with a way to mark samples of the classes present in the images for use in the training phase of the classification process. Another main goal of this development is to improve the runtime of the image classification process. To achieve this, we use a parallel implementation of the Random Forest classification algorithm, a modification of the well-known CURFIL software package. This type of algorithm is widely used for image classification thanks to its precision and ease of training. Our Random Forest implementation was developed on the CUDA platform, which lets us exploit several models of NVIDIA graphics processing units for general-purpose computing tasks such as image classification. Alongside CUDA, we use other parallel libraries, such as Intel Boost, to take advantage of the multithreading capabilities of modern CPUs. To ensure the best possible results, the platform is deployed on a cluster of commodity graphics processing units (GPUs), so that multiple users can use the tool concurrently. The experimental results indicate that the new algorithm widely outperforms the unsupervised algorithms previously implemented in Hypergim, both in runtime and in the precision of the resulting classification.
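
    Conceptually, the supervised stage trains a Random Forest on the user-marked pixels and then labels every pixel of the image. The sketch below uses scikit-learn as a stand-in for the CUDA-based CURFIL implementation the platform actually employs; the function and variable names are illustrative.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def classify_pixels(image_rgb, labeled_mask, n_trees=100):
            # labeled_mask holds an integer class per pixel, 0 where the user
            # provided no training sample.
            h, w, c = image_rgb.shape
            pixels = image_rgb.reshape(-1, c).astype(np.float64)
            labels = labeled_mask.ravel()
            train = labels > 0
            clf = RandomForestClassifier(n_estimators=n_trees, n_jobs=-1)
            clf.fit(pixels[train], labels[train])
            return clf.predict(pixels).reshape(h, w)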

  20. A Neuroimaging Web Services Interface as a Cyber Physical System for Medical Imaging and Data Management in Brain Research: Design Study

    PubMed Central

    2018-01-01

    Background: Structural and functional brain images are essential imaging modalities for medical experts to study brain anatomy. These images are typically visually inspected by experts. To analyze images without any bias, they must first be converted to numeric values. Many software packages are available to process the images, but they are complex and difficult to use. The software packages are also hardware intensive. The results obtained after processing vary depending on the native operating system used and its associated software libraries; data processed in one system cannot typically be combined with data on another system. Objective: The aim of this study was to fulfill the neuroimaging community's need for a common platform to store, process, explore, and visualize their neuroimaging data and results using Neuroimaging Web Services Interface: a series of processing pipelines designed as a cyber physical system for neuroimaging and clinical data in brain research. Methods: Neuroimaging Web Services Interface accepts magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and functional magnetic resonance imaging. These images are processed using existing and custom software packages. The output is then stored as image files, tabulated files, and MySQL tables. The system, made up of a series of interconnected servers, is password-protected and securely accessible through a Web interface, and allows (1) visualization of results and (2) downloading of tabulated data. Results: All results were obtained using our processing servers in order to maintain data validity and consistency. The design is responsive and scalable. The processing pipeline starts from a FreeSurfer reconstruction of structural magnetic resonance imaging images. The FreeSurfer and regional standardized uptake value ratio calculations were validated using Alzheimer's Disease Neuroimaging Initiative input images, and the results were posted at the Laboratory of Neuro Imaging data archive. Notable leading researchers in the fields of Alzheimer's Disease and epilepsy have used the interface to access and process the data and visualize the results. Tabulated results with unique visualization mechanisms help guide more informed diagnosis and expert rating, providing a truly unique multimodal imaging platform that combines magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and resting-state functional magnetic resonance imaging. A quality control component was reinforced through expert visual rating involving at least 2 experts. Conclusions: To our knowledge, there is no validated Web-based system offering all the services that Neuroimaging Web Services Interface offers. The intent of Neuroimaging Web Services Interface is to create a tool for clinicians and researchers with a keen interest in multimodal neuroimaging. More importantly, Neuroimaging Web Services Interface significantly augments the Alzheimer's Disease Neuroimaging Initiative data, especially since our data contain a large cohort of Hispanic normal controls and Alzheimer's Disease patients. The obtained results can be scrutinized visually or through the tabulated forms, informing researchers on subtle changes that characterize the different stages of the disease. PMID:29699962

  1. About | Galaxy of Images

    Science.gov Websites

    Smithsonian Libraries provides free and open access to its digital image collections: high-resolution images may be obtained directly from the web site for personal, research, or study purposes for free. Other uses, such as promotional material, carry a usage fee; the usage fee is not a copyright fee. The site explains how to obtain a copy of these images and how the images may be used.

  2. Near Real-Time Georeference of Unmanned Aerial Vehicle Images for Post-Earthquake Response

    NASA Astrophysics Data System (ADS)

    Wang, S.; Wang, X.; Dou, A.; Yuan, X.; Ding, L.; Ding, X.

    2018-04-01

    The rapid collection of Unmanned Aerial Vehicle (UAV) remote sensing images plays an important role in quickly submitting disaster information and monitoring seriously damaged objects after an earthquake. However, for the hundreds of UAV images collected in one flight sortie, the traditional processing methods are image stitching and three-dimensional reconstruction, which take one to several hours and slow the speed of disaster response. If images are instead searched manually, much more time is spent selecting them, and the selected images carry no spatial reference. Therefore, a near-real-time rapid georeference method for UAV remote sensing disaster data is proposed in this paper. The UAV images are georeferenced using the position and attitude data collected by the UAV flight control system, and the georeferenced data are organized by means of world files, a format developed by ESRI. The rapid georeference software was written in C# using the Geospatial Data Abstraction Library (GDAL). The results show that the method can georeference up to one thousand UAV images within one minute, meeting the demand of rapid disaster response, which is of great value in disaster emergency applications.
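
    A world file is simply six plain-text terms: the x pixel size, two rotation terms, the negative y pixel size, and the map coordinates of the centre of the upper-left pixel. The Python sketch below writes one for a north-up frame, assuming the frame-centre coordinates and ground sample distance have already been derived from the flight-control log; it illustrates the format and is not the authors' C# software.

        def write_world_file(path, center_x, center_y, gsd, width, height):
            # Map coordinates of the centre of the upper-left pixel.
            ul_x = center_x - gsd * (width / 2.0 - 0.5)
            ul_y = center_y + gsd * (height / 2.0 - 0.5)
            # The six world-file terms, one per line: A, D, B, E, C, F.
            terms = [gsd, 0.0, 0.0, -gsd, ul_x, ul_y]
            with open(path, "w") as f:
                f.write("\n".join(f"{t:.6f}" for t in terms) + "\n")

        # e.g. a 4000x3000 frame at 5 cm GSD centred on projected (x, y):
        write_world_file("frame_0001.jgw", 500000.0, 3800000.0, 0.05, 4000, 3000)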

  3. WebMedSA: a web-based framework for segmenting and annotating medical images using biomedical ontologies

    NASA Astrophysics Data System (ADS)

    Vega, Francisco; Pérez, Wilson; Tello, Andrés; Saquicela, Victor; Espinoza, Mauricio; Solano-Quinde, Lizandro; Vidal, Maria-Esther; La Cruz, Alexandra

    2015-12-01

    Advances in medical imaging have fostered medical diagnosis based on digital images, and the number of studies diagnosed from medical images is consequently increasing; collaborative work and tele-radiology systems are therefore required to scale up effectively to this trend. We tackle the problem of collaborative access to medical images and present WebMedSA, a framework to manage large datasets of medical images. WebMedSA relies on a PACS and supports ontological annotation, as well as segmentation and visualization of the images based on their semantic description. Ontological annotations can be performed directly on the volumetric image or on different image planes (e.g., axial, coronal, or sagittal); furthermore, annotations can be complemented after applying a segmentation technique. WebMedSA is based on three main steps: (1) an RDF-ization process for extracting, anonymizing, and serializing the metadata comprised in DICOM medical images into RDF/XML; (2) integration of different biomedical ontologies (using the L-MOM library), making the approach ontology independent; and (3) segmentation and visualization of the annotated data, which is further used to generate new annotations according to expert knowledge, and validation. Initial user evaluations suggest that WebMedSA facilitates the exchange of knowledge between radiologists and provides a basis for collaborative work among them.
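
    The RDF-ization step can be sketched with the pydicom and rdflib packages: read the DICOM header, skip identifying fields, and serialize selected tags as RDF/XML. The namespace and tag selection below are illustrative placeholders, not WebMedSA's actual schema.

        import pydicom
        from rdflib import Graph, Literal, Namespace, URIRef

        DCM = Namespace("http://example.org/dicom#")  # hypothetical vocabulary

        def dicom_to_rdf(dcm_path, out_path):
            ds = pydicom.dcmread(dcm_path)
            g = Graph()
            subject = URIRef(f"http://example.org/image/{ds.SOPInstanceUID}")
            # Anonymization: serialize only non-identifying tags.
            for tag in ("Modality", "StudyDate", "BodyPartExamined"):
                value = getattr(ds, tag, None)
                if value:
                    g.add((subject, DCM[tag], Literal(str(value))))
            g.serialize(destination=out_path, format="xml")  # RDF/XML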

  5. Use of focus groups in a library's strategic planning process

    PubMed Central

    Higa-Moore, Mori Lou; Bunnett, Brian; Mayo, Helen G.; Olney, Cynthia A.

    2002-01-01

    The use of focus groups to determine patron satisfaction with library resources and services is extensive and well established. This article demonstrates how focus groups can also be used to help shape the future direction of a library as part of the strategic planning process. By responding to questions about their long-term library and information needs, focus group participants at the University of Texas Southwestern Medical Center at Dallas Library contributed an abundance of qualitative patron data that was previously lacking from this process. The selection and recruitment of these patrons is discussed along with the line of questioning used in the various focus group sessions. Of special interest is the way the authors utilized these sessions to mobilize and involve the staff in creating the library's strategic plan. This was accomplished not only by having staff members participate in one of the sessions but also by sharing the project's major findings with them and instructing them in how these findings related to the library's future. The authors' experience demonstrates that focus groups are an effective strategic planning tool for libraries and emphasizes the need to share information broadly, if active involvement of the staff is desired in both the development and implementation of the library's strategic plan. PMID:11838465

  6. The Library Work Order Processing System: A New Approach to Motivate Employees and to Increase Production in the Technical Service Department of Mercer County Community College Library. Applied Educational Research and Evaluation.

    ERIC Educational Resources Information Center

    Sim, Yong Sup

    After reviewing the current movement toward job enrichment, a system was designed for the technical services department of the Mercer County Community College Library. The Library Work Order Processing System, as piloted between January and March 1974, was designed to give each worker a greater variety of jobs. The technical services department was…

  7. Improved GO/PO method and its application to wideband SAR image of conducting objects over rough surface

    NASA Astrophysics Data System (ADS)

    Jiang, Wang-Qiang; Zhang, Min; Nie, Ding; Jiao, Yong-Chang

    2018-04-01

    To simulate the multiple-scattering effect of a target in a synthetic aperture radar (SAR) image, the hybrid GO/PO method, which combines geometrical optics (GO) and physical optics (PO), is employed to compute the scattering field of the target. Because ray tracing is time-consuming, the Open Graphics Library (OpenGL) is employed to accelerate it. Furthermore, the GO/PO method is improved for simulation in low-pixel situations: the pixels are arranged in one-to-one correspondence with rectangular wave beams, and the GO/PO result is the sum of the contributions of all the rectangular wave beams. To obtain a high-resolution SAR image, a wideband echo signal is simulated that includes the contributions of electromagnetic (EM) waves at many different frequencies. Finally, the improved GO/PO method is used to simulate the SAR image of targets above a rough surface, and the effects of reflected rays and of the pixel matrix size on the SAR image are discussed.
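
    The wideband-echo idea can be illustrated apart from the GO/PO machinery itself: assemble the echo frequency by frequency, then inverse-FFT to a range profile. The numpy sketch below uses made-up scatterer ranges and amplitudes; in the actual method each per-frequency field would come from the improved GO/PO beam summation.

```python
# numpy sketch of wideband-echo assembly: per-frequency returns are summed,
# then an inverse FFT yields a range profile. Scatterer ranges and
# amplitudes are made up for the demo.
import numpy as np

c = 3e8                                           # speed of light (m/s)
freqs = np.linspace(9.0e9, 11.0e9, 256)           # wideband sweep (Hz)
ranges = np.array([10.0, 12.5, 13.0])             # hypothetical scatterers (m)
amps = np.array([1.0, 0.6, 0.3])                  # hypothetical amplitudes

# Frequency-domain echo: each scatterer contributes a two-way phase delay.
echo = np.array([np.sum(amps * np.exp(-1j * 4 * np.pi * f * ranges / c))
                 for f in freqs])

profile = np.abs(np.fft.ifft(echo))               # range profile
resolution = c / (2 * (freqs[-1] - freqs[0]))     # ~7.5 cm for 2 GHz bandwidth
print(f"range resolution ~ {resolution:.3f} m")
```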

  8. Library Standards: Evidence of Library Effectiveness and Accreditation.

    ERIC Educational Resources Information Center

    Ebbinghouse, Carol

    1999-01-01

    Discusses accreditation standards for libraries based on experiences in an academic law library. Highlights include the accreditation process; the impact of distance education and remote technologies on accreditation; and a list of Internet sources of standards and information. (LRW)

  9. The adjustments experienced by persons with an ostomy: an integrative review of the literature.

    PubMed

    Torquato Lopes, Ana Patrícia Araujo; Decesaro, Maria das Neves

    2014-10-01

    The person with an ostomy may focus on the negative aspects of the stoma rather than its function, to the detriment of self-image, acceptance of a new lifestyle, and ability to self-care. The purpose of this integrative literature review was to explore factors involved in the adaptation process of persons with a gastrointestinal stoma, with a focus on the role of nonspecialist professional nurses in the process. The authors searched the databases of the Virtual Health Library, the Latin American and Caribbean Health Sciences Information System, the Scientific Electronic Library Online, the Spanish Bibliographic Index of Health Sciences, International Literature on Health Sciences (MEDLINE), and the Cochrane Library using the keywords ostomy, adaptation, and nursing for full-text articles in all languages published between 2008 and 2013. Of the 612 articles identified, 21 were not duplicates and met the inclusion criteria of availability of full text, published in the past 5 years, indexed, and covering the topic of stoma adaptation; this literature was analyzed using Bardin's thematic analysis. Three categories emerged: experiences and adaptation strategies employed by the person with a stoma, the role of the care provider, and education as a tool in healthcare. Persons with a stoma need time and support from caregivers, family, and friends to adjust to the changes and adapt to having a stoma. This includes the ability to overcome the stigma of appearance and activities involving social interaction. Caregivers and health professionals need to serve as information resources while encouraging care autonomy. The more informed the patient, the smoother the adaptation process. The literature also suggests nursing education may affect caregiving. Further research to elucidate the adaptation experienced by each person with an ostomy is needed to help the multidisciplinary team plan care appropriately.

  10. Organizational Change in the Harvard College Library: A Continued Struggle for Redefinition and Renewal.

    ERIC Educational Resources Information Center

    Lee, Susan

    1993-01-01

    Chronicles the process of change begun at the Harvard College Library in 1990. Key factors are analyzed, including support from the University Library, Association of Research Libraries, and Council on Library Resources; strong leadership; organizational development; composition of task forces; time frame; concurrent changes; and development of a…

  11. The PREP pipeline: standardized preprocessing for large-scale EEG analysis.

    PubMed

    Bigdely-Shamlo, Nima; Mullen, Tim; Kothe, Christian; Su, Kyung-Min; Robbins, Kay A

    2015-01-01

    The technology to collect brain imaging and physiological measures has become portable and ubiquitous, opening the possibility of large-scale analysis of real-world human imaging. By its nature, such data is large and complex, making automated processing essential. This paper shows how lack of attention to the very early stages of an EEG preprocessing pipeline can reduce the signal-to-noise ratio and introduce unwanted artifacts into the data, particularly for computations done in single precision. We demonstrate that ordinary average referencing improves the signal-to-noise ratio, but that noisy channels can contaminate the results. We also show that identification of noisy channels depends on the reference and examine the complex interaction of filtering, noisy channel identification, and referencing. We introduce a multi-stage robust referencing scheme to deal with the noisy channel-reference interaction. We propose a standardized early-stage EEG processing pipeline (PREP) and discuss the application of the pipeline to more than 600 EEG datasets. The pipeline includes an automatically generated report for each dataset processed. Users can download the PREP pipeline as a freely available MATLAB library from http://eegstudy.org/prepcode.
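
    PREP itself is distributed as a MATLAB library, but the core of the robust-referencing idea can be sketched in a few lines of Python: iteratively estimate the average reference while excluding channels flagged as noisy, and repeat until the flagged set stabilizes. This is a simplified stand-in with a single robust-amplitude criterion, not PREP's full multi-stage scheme.

```python
# Simplified robust average referencing: iterate reference estimation,
# excluding channels flagged as noisy, until the flagged set stabilizes.
import numpy as np

def robust_average_reference(data, z_thresh=5.0, max_iter=10):
    """data: channels x samples array of EEG."""
    noisy = np.zeros(data.shape[0], dtype=bool)
    for _ in range(max_iter):
        ref = data[~noisy].mean(axis=0)                # reference from good channels
        rereferenced = data - ref
        amp = np.median(np.abs(rereferenced), axis=1)  # robust channel amplitude
        med = np.median(amp)
        mad = np.median(np.abs(amp - med))
        new_noisy = np.abs(amp - med) > z_thresh * 1.4826 * (mad + 1e-12)
        if np.array_equal(new_noisy, noisy):
            break                                      # flagged set has stabilized
        noisy = new_noisy
    return rereferenced, noisy

eeg = np.random.randn(32, 1000)
eeg[7] *= 20                                           # simulate one noisy channel
clean, flagged = robust_average_reference(eeg)
print("flagged channels:", np.where(flagged)[0])       # expect [7]
```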

  12. A new Python library to analyse skeleton images confirms malaria parasite remodelling of the red blood cell membrane skeleton.

    PubMed

    Nunez-Iglesias, Juan; Blanch, Adam J; Looker, Oliver; Dixon, Matthew W; Tilley, Leann

    2018-01-01

    We present Skan (Skeleton analysis), a Python library for the analysis of the skeleton structures of objects. It was inspired by the "analyse skeletons" plugin for the Fiji image analysis software, but its extensive Application Programming Interface (API) allows users to examine and manipulate any intermediate data structures produced during the analysis. Further, its use of common Python data structures such as SciPy sparse matrices and pandas data frames opens the results to analysis within the extensive ecosystem of scientific libraries available in Python. We demonstrate the validity of Skan's measurements by comparing its output to the established Analyze Skeletons Fiji plugin, and, with a new scanning electron microscopy (SEM)-based method, we confirm that the malaria parasite Plasmodium falciparum remodels the host red blood cell cytoskeleton, increasing the average distance between spectrin-actin junctions.
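
    A minimal usage sketch of the library follows, based on Skan's documented API around the time of publication (exact function and column names may differ between versions):

```python
# Usage sketch of Skan: skeletonize a binary image, then measure branches.
from skimage import data
from skimage.morphology import skeletonize
from skan import Skeleton, summarize

image = data.horse() == 0                  # boolean foreground of a test image
skeleton_img = skeletonize(image)          # 1-pixel-wide skeleton

# Skan converts the skeleton to a graph and measures every branch.
branch_data = summarize(Skeleton(skeleton_img))
print(branch_data[["branch-distance", "branch-type"]].head())
print("mean branch length (px):", branch_data["branch-distance"].mean())
```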

  14. 76 FR 26317 - Advisory Committee on Presidential Library-Foundation Partnerships

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-06

    ... NATIONAL ARCHIVES AND RECORDS ADMINISTRATION Advisory Committee on Presidential Library-Foundation... Library-Foundation Partnerships. The meeting will be held to discuss the reorganization of the National Archives as they relate to Presidential Libraries, Social Media Initiatives, Processing of Presidential...

  15. Large-scale image-based profiling of single-cell phenotypes in arrayed CRISPR-Cas9 gene perturbation screens.

    PubMed

    de Groot, Reinoud; Lüthi, Joel; Lindsay, Helen; Holtackers, René; Pelkmans, Lucas

    2018-01-23

    High-content imaging using automated microscopy and computer vision allows multivariate profiling of single-cell phenotypes. Here, we present methods for the application of the CRISPR-Cas9 system in large-scale, image-based, gene perturbation experiments. We show that CRISPR-Cas9-mediated gene perturbation can be achieved in human tissue culture cells in a timeframe that is compatible with image-based phenotyping. We developed a pipeline to construct a large-scale arrayed library of 2,281 sequence-verified CRISPR-Cas9 targeting plasmids and profiled this library for genes affecting cellular morphology and the subcellular localization of components of the nuclear pore complex (NPC). We conceived a machine-learning method that harnesses genetic heterogeneity to score gene perturbations and identify phenotypically perturbed cells for in-depth characterization of gene perturbation effects. This approach enables genome-scale image-based multivariate gene perturbation profiling using CRISPR-Cas9. © 2018 The Authors. Published under the terms of the CC BY 4.0 license.

  16. Vision requirements for Space Station applications

    NASA Technical Reports Server (NTRS)

    Crouse, K. R.

    1985-01-01

    Problems which will be encountered by computer vision systems in Space Station operations are discussed, along with solutions being examined at the Johnson Space Center. Lighting cannot be controlled in space, nor can the random presence of reflective surfaces. Task-oriented capabilities are to include docking to moving objects, identification of unexpected objects during autonomous flights to different orbits, and diagnosis of damage and repair requirements by autonomous Space Station inspection robots. The approaches being examined to provide these and other capabilities include television and IR sensors, advanced pattern recognition programs fed by data from laser probes, laser radar for robot eyesight, and arrays of SMART sensors for automated location and tracking of target objects. Attention is also being given to liquid crystal light valves for optical processing of images for comparison with on-board electronic libraries of images.

  17. Two fast approximate wavelet algorithms for image processing, classification, and recognition

    NASA Astrophysics Data System (ADS)

    Wickerhauser, Mladen V.

    1994-07-01

    We use large libraries of template waveforms with remarkable orthogonality properties to recast the relatively complex principal orthogonal decomposition (POD) into an optimization problem with a fast solution algorithm. It then becomes practical to use POD to solve two related problems: recognizing or classifying images, and inverting a complicated map from a low-dimensional configuration space to a high-dimensional measurement space. When the number N of pixels or measurements exceeds 1000 or so, the classical O(N^3) POD algorithm becomes very costly, but it can be replaced with an approximate best-basis method of complexity O(N^2 log N). A variation of POD can also be used to compute an approximate Jacobian for the complicated map.

  18. Efficient LIDAR Point Cloud Data Managing and Processing in a Hadoop-Based Distributed Framework

    NASA Astrophysics Data System (ADS)

    Wang, C.; Hu, F.; Sha, D.; Han, X.

    2017-10-01

    Light Detection and Ranging (LiDAR) is one of the most promising technologies in surveying and mapping, city management, forestry, object recognition, computer vision engineering, and other fields. However, it is challenging to efficiently store, query, and analyze high-resolution 3D LiDAR data because of its volume and complexity. To improve the productivity of LiDAR data processing, this study proposes a Hadoop-based framework to efficiently manage and process LiDAR data in a distributed and parallel manner, taking advantage of Hadoop's storage and computing capabilities. At the same time, the Point Cloud Library (PCL), an open-source project for 2D/3D image and point cloud processing, is integrated with HDFS and MapReduce so that the LiDAR analysis algorithms provided by PCL run in a parallel fashion. The experimental results show that the proposed framework can efficiently manage and process big LiDAR data.
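
    The partition-then-process pattern can be sketched on a single machine: bin points into spatial tiles (the "map" key) and process each tile in parallel. In the proposed framework the tiles would live in HDFS and the per-tile work would be carried out by PCL algorithms inside MapReduce tasks; the tile size and per-tile statistics below are illustrative.

```python
# Single-machine analogue of the partition/process pattern (illustrative).
import numpy as np
from collections import defaultdict
from multiprocessing import Pool

TILE = 50.0                                  # tile edge length in metres

def tile_key(pt):
    return (int(pt[0] // TILE), int(pt[1] // TILE))

def process_tile(item):
    key, pts = item
    pts = np.asarray(pts)
    # Stand-in for a PCL algorithm: per-tile point count and mean elevation.
    return key, len(pts), float(pts[:, 2].mean())

if __name__ == "__main__":
    points = np.random.rand(100_000, 3) * [1000, 1000, 50]   # synthetic cloud
    tiles = defaultdict(list)
    for p in points:                         # "map": assign points to tiles
        tiles[tile_key(p)].append(p)
    with Pool() as pool:                     # "reduce": process tiles in parallel
        for key, n, z in pool.map(process_tile, list(tiles.items())):
            if n > 300:
                print(f"tile {key}: {n} points, mean z = {z:.1f} m")
```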

  19. Describing Images: A Case Study of Visual Literacy among Library and Information Science Students

    ERIC Educational Resources Information Center

    Beaudoin, Joan E.

    2016-01-01

    This paper reports on a study that examined the development of pedagogical methods for increasing the visual literacy skills of a group of library and information science students. Through a series of three assignments, students were asked to provide descriptive information for a set of historical photographs and record reflections on their…

  20. NOAA Photo Library

    Science.gov Websites

    Storm that has reached the mature stage of formation. Image ID: nssl0065, NOAA's National Severe Storms Laboratory (NSSL). A publication of the U.S. Department of Commerce, National Oceanic & Atmospheric Administration, NOAA Photo Library.

  1. NOAA Photo Library

    Science.gov Websites

    Storm in its early stage of formation. Image ID: nssl0061, NOAA's National Severe Storms Laboratory (NSSL). A publication of the U.S. Department of Commerce, National Oceanic & Atmospheric Administration, NOAA Photo Library.

  2. NOAA Photo Library

    Science.gov Websites

    Storm in its early stage of formation. Image ID: nssl0062, NOAA's National Severe Storms Laboratory (NSSL). A publication of the U.S. Department of Commerce, National Oceanic & Atmospheric Administration, NOAA Photo Library.

  3. NOAA Photo Library

    Science.gov Websites

    Storm in its early stage of formation. Image ID: nssl0064, NOAA's National Severe Storms Laboratory (NSSL). A publication of the U.S. Department of Commerce, National Oceanic & Atmospheric Administration, NOAA Photo Library.

  4. Visual Literacy Standards in Higher Education: New Opportunities for Libraries and Student Learning

    ERIC Educational Resources Information Center

    Hattwig, Denise; Bussert, Kaila; Medaille, Ann; Burgess, Joanna

    2013-01-01

    Visual literacy is essential for 21st century learners. Across the higher education curriculum, students are being asked to use and produce images and visual media in their academic work, and they must be prepared to do so. The Association of College and Research Libraries has published the "Visual Literacy Competency Standards for Higher…

  5. Butte Digital Image Project: Shifting Focus from Collection to Community

    ERIC Educational Resources Information Center

    Pierson, Patricia

    2010-01-01

    The Butte Free Public Library was established in 1894. At that time, head librarian J. Davies published a catalog of the opening collection. Two fires and one flood later, many of the monographs from that original collection list have, remarkably, survived. Because of this, in part, the library, now known as the Butte-Silver Bow Public Library…

  6. Moving Digital Libraries into the Student Learning Space: The GetSmart Experience

    ERIC Educational Resources Information Center

    Marshall, Byron B.; Chen, Hsinchun; Shen, Rao; Fox, Edward A.

    2006-01-01

    The GetSmart system was built to support theoretically sound learning processes in a digital library environment by integrating course management, digital library, and concept mapping components to support a constructivist, six-step, information search process. In the fall of 2002 more than 100 students created 1400 concept maps as part of…

  7. Planning and Preparation for CD-ROM Implementation: The Citadel Library.

    ERIC Educational Resources Information Center

    Maynard, J. Edmund

    Management guidelines for library planning and a strategic planning program profile based on the literature were used in the planning process for implementing access to databases on CD-ROM at the Daniel Library of the Citadel, Military College of South Carolina. According to this model, the planning process would consist of five stages: (1)…

  8. Handbook of Data Processing for Libraries.

    ERIC Educational Resources Information Center

    Hayes, Robert M.; Becker, Joseph

    The purpose of this book is to assist libraries and librarians in resolving some of the problems faced in utilizing the new computer technology. The intent is to provide a concrete, factual guide to the principles and methods available for the application of modern data processing to library operations. For the librarian it is a handbook to guide…

  9. SET: a pupil detection method using sinusoidal approximation

    PubMed Central

    Javadi, Amir-Homayoun; Hakimi, Zahra; Barati, Morteza; Walsh, Vincent; Tcheang, Lili

    2015-01-01

    Mobile eye-tracking in external environments remains challenging, despite recent advances in eye-tracking software and hardware engineering. Many current methods fail to deal with the vast range of outdoor lighting conditions and the speed at which these can change. This confines experiments to artificial environments where conditions must be tightly controlled. Additionally, the emergence of low-cost eye tracking devices calls for the development of analysis tools that enable non-technical researchers to process the output of their images. We have developed a fast and accurate method (known as “SET”) that is suitable even for natural environments with uncontrolled, dynamic and even extreme lighting conditions. We compared the performance of SET with that of two open-source alternatives by processing two collections of eye images: images of natural outdoor scenes with extreme lighting variations (“Natural”); and images of less challenging indoor scenes (“CASIA-Iris-Thousand”). We show that SET excelled in outdoor conditions and was faster, without significant loss of accuracy, indoors. SET offers a low cost eye-tracking solution, delivering high performance even in challenging outdoor environments. It is offered through an open-source MATLAB toolkit as well as a dynamic-link library (“DLL”), which can be imported into many programming languages including C# and Visual Basic in Windows OS (www.eyegoeyetracker.co.uk). PMID:25914641
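
    For context, the sketch below shows the common threshold-and-fit baseline that methods such as SET improve upon; it is not the SET sinusoidal-approximation algorithm itself. It assumes a grayscale eye image named eye.png and requires opencv-python.

```python
# Generic threshold-and-fit pupil detection (baseline, not the SET method).
import cv2
import numpy as np

def detect_pupil(gray, dark_fraction=0.05):
    """Return (center, axes, angle) of an ellipse fit to the pupil blob."""
    blur = cv2.GaussianBlur(gray, (7, 7), 0)
    # The pupil is the darkest region: keep the darkest few percent of pixels.
    thresh_val = float(np.quantile(blur, dark_fraction))
    _, mask = cv2.threshold(blur, thresh_val, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)   # assume pupil = biggest blob
    if len(largest) < 5:                           # fitEllipse needs >= 5 points
        return None
    return cv2.fitEllipse(largest)

gray = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
if gray is not None:
    print(detect_pupil(gray))
```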

  10. NOAA Photo Library - Navigating the Collection

    Science.gov Websites

    Guidance for navigating the collection: users may have to change their display setting to 800x600 to view the full image without having to scroll; to view or download the highest-resolution image available, click on the "High Resolution" message when viewing individual images associated with albums; image ID numbers can be viewed for each thumbnail.

  11. Commercial imagery archive, management, exploitation, and distribution project development

    NASA Astrophysics Data System (ADS)

    Hollinger, Bruce; Sakkas, Alysa

    1999-10-01

    The Lockheed Martin (LM) team had garnered over a decade of operational experience on the U.S. Government's IDEX II (Imagery Dissemination and Exploitation) system. Recently, it set out to create a new commercial product to serve the needs of large-scale imagery archiving and analysis markets worldwide. LM decided to provide a turnkey commercial solution to receive, store, retrieve, process, analyze, and disseminate imagery, data, and data products in 'push' or 'pull' modes, using a variety of sources and formats. LM selected 'best of breed' hardware and software components and adapted and developed its own algorithms to provide added functionality not commercially available elsewhere. The resultant product, the Intelligent Library System (ILS)™, satisfies requirements for (1) a potentially unbounded data archive (5000 TB range); (2) automated workflow management for increased user productivity; (3) automatic tracking and management of files stored on shelves; (4) the ability to ingest, process, and disseminate data volumes at bandwidths ranging up to multiple gigabits per second; (5) access through a thin client-to-server network environment; (6) multiple interactive users needing retrieval of files in seconds, whether from archived images or in real time; and (7) scalability that maintains information throughput performance as the size of the digital library grows.

  12. Commercial imagery archive, management, exploitation, and distribution product development

    NASA Astrophysics Data System (ADS)

    Hollinger, Bruce; Sakkas, Alysa

    1999-12-01

    The Lockheed Martin (LM) team had garnered over a decade of operational experience on the U.S. Government's IDEX II (Imagery Dissemination and Exploitation) system. Recently, it set out to create a new commercial product to serve the needs of large-scale imagery archiving and analysis markets worldwide. LM decided to provide a turnkey commercial solution to receive, store, retrieve, process, analyze, and disseminate imagery, data, and data products in 'push' or 'pull' modes, using a variety of sources and formats. LM selected 'best of breed' hardware and software components and adapted and developed its own algorithms to provide added functionality not commercially available elsewhere. The resultant product, the Intelligent Library System (ILS)™, satisfies requirements for (a) a potentially unbounded data archive (5000 TB range); (b) automated workflow management for increased user productivity; (c) automatic tracking and management of files stored on shelves; (d) the ability to ingest, process, and disseminate data volumes at bandwidths ranging up to multiple gigabits per second; (e) access through a thin client-to-server network environment; (f) multiple interactive users needing retrieval of files in seconds, whether from archived images or in real time; and (g) scalability that maintains information throughput performance as the size of the digital library grows.

  13. Automation process for morphometric analysis of volumetric CT data from pulmonary vasculature in rats.

    PubMed

    Shingrani, Rahul; Krenz, Gary; Molthen, Robert

    2010-01-01

    With advances in medical imaging scanners, it has become commonplace to generate large multidimensional datasets. These datasets require tools for a rapid, thorough analysis. To address this need, we have developed an automated algorithm for morphometric analysis incorporating A Visualization Workshop computational and image processing libraries for three-dimensional segmentation, vascular tree generation and structural hierarchical ordering with a two-stage numeric optimization procedure for estimating vessel diameters. We combine this new technique with our mathematical models of pulmonary vascular morphology to quantify structural and functional attributes of lung arterial trees. Our physiological studies require repeated measurements of vascular structure to determine differences in vessel biomechanical properties between animal models of pulmonary disease. Automation provides many advantages including significantly improved speed and minimized operator interaction and biasing. The results are validated by comparison with previously published rat pulmonary arterial micro-CT data analysis techniques, in which vessels were manually mapped and measured using intense operator intervention. Published by Elsevier Ireland Ltd.

  14. ODIN-object-oriented development interface for NMR.

    PubMed

    Jochimsen, Thies H; von Mengershausen, Michael

    2004-09-01

    A cross-platform development environment for nuclear magnetic resonance (NMR) experiments is presented. It allows rapid prototyping of new pulse sequences and provides a common programming interface for different system types. With this object-oriented interface implemented in C++, the programmer is capable of writing applications to control an experiment that can be executed on different measurement devices, even from different manufacturers, without the need to modify the source code. Due to the clear design of the software, new pulse sequences can be created, tested, and executed within a short time. To post-process the acquired data, an interface to well-known numerical libraries is part of the framework. This allows a transparent integration of the data processing instructions into the measurement module. The software focuses mainly on NMR imaging, but can also be used with limitations for spectroscopic experiments. To demonstrate the capabilities of the framework, results of the same experiment, carried out on two NMR imaging systems from different manufacturers are shown and compared with the results of a simulation.

  15. MorphoGraphX: A platform for quantifying morphogenesis in 4D.

    PubMed

    Barbier de Reuille, Pierre; Routier-Kierzkowska, Anne-Lise; Kierzkowski, Daniel; Bassel, George W; Schüpbach, Thierry; Tauriello, Gerardo; Bajpai, Namrata; Strauss, Sören; Weber, Alain; Kiss, Annamaria; Burian, Agata; Hofhuis, Hugo; Sapala, Aleksandra; Lipowczan, Marcin; Heimlicher, Maria B; Robinson, Sarah; Bayer, Emmanuelle M; Basler, Konrad; Koumoutsakos, Petros; Roeder, Adrienne H K; Aegerter-Wilmsen, Tinri; Nakayama, Naomi; Tsiantis, Miltos; Hay, Angela; Kwiatkowska, Dorota; Xenarios, Ioannis; Kuhlemeier, Cris; Smith, Richard S

    2015-05-06

    Morphogenesis emerges from complex multiscale interactions between genetic and mechanical processes. To understand these processes, the evolution of cell shape, proliferation and gene expression must be quantified. This quantification is usually performed either in full 3D, which is computationally expensive and technically challenging, or on 2D planar projections, which introduces geometrical artifacts on highly curved organs. Here we present MorphoGraphX ( www.MorphoGraphX.org), a software that bridges this gap by working directly with curved surface images extracted from 3D data. In addition to traditional 3D image analysis, we have developed algorithms to operate on curved surfaces, such as cell segmentation, lineage tracking and fluorescence signal quantification. The software's modular design makes it easy to include existing libraries, or to implement new algorithms. Cell geometries extracted with MorphoGraphX can be exported and used as templates for simulation models, providing a powerful platform to investigate the interactions between shape, genes and growth.

  16. Real-time implementation of optimized maximum noise fraction transform for feature extraction of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Wu, Yuanfeng; Gao, Lianru; Zhang, Bing; Zhao, Haina; Li, Jun

    2014-01-01

    We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction of hyperspectral images on commodity graphics processing units (GPUs). The proposed approach explored the algorithm's data-level concurrency and optimized the computing flow. We first defined a three-dimensional grid in which each thread calculates a sub-block of data, to easily facilitate the spatial and spectral neighborhood data searches in noise estimation, one of the most important steps in OMNF. We then optimized the processing flow and computed the noise covariance matrix before the image covariance matrix, to reduce the transmission of the original hyperspectral image data. These optimization strategies can greatly improve computing efficiency and can be applied to other feature extraction algorithms. The proposed parallel feature extraction algorithm was implemented on an Nvidia Tesla GPU using the compute unified device architecture and the basic linear algebra subroutines library. Through experiments on several real hyperspectral images, our GPU parallel implementation provides a significant speedup of the algorithm compared with the CPU implementation, especially for highly data-parallelizable and arithmetically intensive parts of the algorithm, such as noise estimation. To further evaluate the effectiveness of G-OMNF, we used two different applications, spectral unmixing and classification, for evaluation. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation met the requirements for on-board real-time feature extraction.
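
    A CPU reference sketch of the MNF idea being accelerated may help: estimate noise from differences of adjacent pixels, then solve a generalized eigenproblem to order components by noise fraction. This didactic numpy/scipy version is not the paper's optimized G-OMNF kernel.

```python
# Didactic CPU version of the MNF transform (not the optimized GPU kernel).
import numpy as np
from scipy.linalg import eigh

def mnf(cube, n_components=10):
    """cube: (rows, cols, bands) hyperspectral image."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)
    X = X - X.mean(axis=0)
    # Noise estimated from differences of horizontally adjacent pixels.
    noise = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, bands)
    cov_noise = np.cov(noise, rowvar=False)
    cov_data = np.cov(X, rowvar=False)
    # Generalized eigenproblem: directions ordered by noise fraction.
    eigvals, eigvecs = eigh(cov_noise, cov_data)
    components = X @ eigvecs[:, :n_components]   # least-noisy components first
    return components.reshape(rows, cols, n_components)

cube = np.random.rand(64, 64, 32)                # synthetic cube for the demo
features = mnf(cube, n_components=5)
print(features.shape)                            # (64, 64, 5)
```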

  17. Library-Specific Microcomputer Software.

    ERIC Educational Resources Information Center

    Levert, Virginia M.

    1985-01-01

    Discusses number and type of microcomputer software programs useful to libraries and types of hardware on which they run, as identified by Nolan Information Management Services. Highlights include general application programs, applications designed to support library technical processes, producers of library software, and choosing among options.…

  18. EDI.

    ERIC Educational Resources Information Center

    Bluh, Pamela; And Others

    1996-01-01

    This special section on EDI (Electronic Data Interchange) in libraries includes eight articles that discuss experiences in libraries in the United Kingdom, EDI and the acquisitions process in Europe, a Canadian viewpoint, the ILS (Integrated Library Systems) vendor and EDI, public libraries, and EDI in the information services industry. (LRW)

  19. 78 FR 75375 - Advisory Committee on the Presidential Library-Foundation Partnerships

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-11

    ... Presidential Library-Foundation Partnerships AGENCY: National Archives and Records Administration (NARA... Advisory Committee on Presidential Library-Foundation Partnerships. The meeting will be held to discuss NARA's budget and its strategic planning process as it relates to Presidential Libraries. The meeting...

  20. 77 FR 29391 - Advisory Committee on the Presidential Library-Foundation Partnerships

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-17

    ... NATIONAL ARCHIVES AND RECORDS ADMINISTRATION Advisory Committee on the Presidential Library... Presidential Library-Foundation Partnerships. The meeting will be held to discuss the National Archives and Records Administration's budget and its strategic planning process as it relates to Presidential Libraries...

  1. Federal Library Programs for Acquisition of Foreign Materials.

    ERIC Educational Resources Information Center

    Cylke, Frank Kurt

    Sixteen libraries representing those agencies holding membership on the Federal Library Committee were surveyed to determine library foreign language or imprint holdings, acquisitions techniques, procedures and/or problems. Specific questions, relating to holdings, staff, budget and the acquisition, processing, reference and translation services…

  2. Microcomputers in the Anesthesia Library.

    ERIC Educational Resources Information Center

    Wright, A. J.

    The combination of computer technology and library operation is helping to alleviate such library problems as escalating costs, increasing collection size, deteriorating materials, unwieldy arrangement schemes, poor subject control, and the acquisition and processing of large numbers of rarely used documents. Small special libraries such as…

  3. The National and University Library in Zagreb: The Goal Is Known--How Can It Be Attained?

    ERIC Educational Resources Information Center

    Miletic-Vejzovic, Laila

    1994-01-01

    Provides an overview of the state of libraries and their resources in Croatia. Highlights include destruction of libraries resulting from the war; the need for centralization, uniformity, and standards; the role of the National and University Library; processing library materials; and the development of an automated system and network. (Contains…

  4. Library Spaces for 21st-Century Learners: A Planning Guide for Creating New School Library Concepts

    ERIC Educational Resources Information Center

    Sullivan, Margaret

    2013-01-01

    "Library Spaces for 21st-Century Learners: A Planning Guide for Creating New School Library Concepts" focuses on planning contemporary school library spaces with user-based design strategies. The book walks school librarians and administrators through the process of gathering information from students and other stakeholders involved in…

  5. Library Use and Library Skills of Research Assistants: Pilot Study.

    ERIC Educational Resources Information Center

    Jacob, Lisa Hall; And Others

    This paper presents the results of a pilot study of University of Illinois at Chicago faculty members, their assistants who use the library for them, and the role of the Library of the Health Sciences in that process. The Library of the Health Sciences public services staff members, College of Pharmacy faculty, and their assistants were…

  6. Computer Program User’s Manual for FIREFINDER Digital Topographic Data Verification Library Dubbing System,

    DTIC Science & Technology

    1981-11-30

    Computer Program User's Manual for FIREFINDER Digital Topographic Data Verification Library Dubbing System, 30 November 1981, by Marie Ceres, Leslie R... This manual describes the computer programs for the FIREFINDER Digital Topographic Data Verification-Library-Dubbing System (FFDTDVLDS); its front matter covers the library, dubbing, and a library process overview.

  7. Data-Acquisition Software for PSP/TSP Wind-Tunnel Cameras

    NASA Technical Reports Server (NTRS)

    Amer, Tahani R.; Goad, William K.

    2005-01-01

    Wing-Viewer is a computer program for acquisition and reduction of image data acquired by any of five different scientific-grade commercial electronic cameras used at Langley Research Center to observe wind-tunnel models coated with pressure- or temperature-sensitive paints (PSP/TSP). Wing-Viewer provides full automation of camera operation and acquisition of image data, and has limited data-preprocessing capability for quick viewing of the results of PSP/TSP test images. Wing-Viewer satisfies a requirement for a standard interface between all the cameras and a single personal computer: written in Microsoft Visual C++ with the Microsoft Foundation Class Library as a framework, Wing-Viewer has the ability to communicate with the C/C++ software libraries that run on the controller circuit cards of all five cameras.

  8. The Health Sciences and Human Services Library: "this is one sweet library".

    PubMed Central

    Weise, F O; Tooey, M J

    1999-01-01

    The opening of the Health Sciences and Human Services Library at the University of Maryland, Baltimore, in April, 1998, was a highly anticipated event. With its unique architecture and stunning interior features, it is a signature building for the university in downtown Baltimore. The building is equipped with state-of-the-art technology, but has a warm, inviting atmosphere making it a focal point for the campus community. Its highly functional, flexible design will serve the staff and users well into the twenty-first century. PMID:10219476

  9. Proceedings of a Conference on Telecommunication Technologies, Networkings and Libraries

    NASA Astrophysics Data System (ADS)

    Knight, N. K.

    1981-12-01

    Current and developing technologies for digital transmission of image data likely to have an impact on the operations of libraries and information centers, or to support information networking, are reviewed. The technologies reviewed include slow-scan television, teleconferencing, and videodisc technology; standards development for computer network interconnection through hardware and software, particularly packet-switched networks; computer network protocols for library and information service applications; the structure of a national bibliographic telecommunications network; and the major policy issues involved in the regulation or deregulation of the common communications carrier industry.

  10. Digital Archival Image Collections: Who Are the Users?

    ERIC Educational Resources Information Center

    Herold, Irene M. H.

    2010-01-01

    Archival digital image collections are a relatively new phenomenon in college library archives. Digitizing archival image collections may make them accessible to users worldwide. There has been no study to explore whether collections on the Internet lead to users who are beyond the institution or a comparison of users to a national or…

  11. Blanco Webcams | CTIO

    Science.gov Websites

    ... the slit, then the DECam image is being occluded. The small circle is the field of view of DECam.

  12. Computational scalability of large size image dissemination

    NASA Astrophysics Data System (ADS)

    Kooper, Rob; Bajcsy, Peter

    2011-01-01

    We have investigated the computational scalability of the image pyramid building needed for dissemination of very large image data. The sources of large images include high-resolution microscopes and telescopes, remote sensing and airborne imaging, and high-resolution scanners. The term 'large' is understood from a user perspective: larger than a display, or larger than the memory/disk available to hold the image data. The application drivers for our work are digitization projects such as the Lincoln Papers project (each image scan is about 100-150 MB, or about 5000x8000 pixels, with the total number around 200,000) and the UIUC library scanning project for historical maps from the 17th and 18th centuries (a smaller number of larger images). The goal of our work is to understand the computational scalability of web-based dissemination using image pyramids for these large image scans, as well as the preservation aspects of the data. We report our computational benchmarks for (a) building image pyramids to be disseminated using the Microsoft Seadragon library, (b) a computation execution approach using hyper-threading to generate image pyramids and to utilize the underlying hardware, and (c) an image pyramid preservation approach using various hard drive configurations of Redundant Array of Independent Disks (RAID) drives for input/output operations. The benchmarks are obtained with a map (334.61 MB, JPEG format, 17591x15014 pixels). The discussion combines the speed and preservation objectives.
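
    The pyramid-building step being benchmarked can be sketched as follows: repeatedly halve the full-resolution scan and cut each level into 256x256 tiles, the layout consumed by deep-zoom viewers such as Seadragon. The file naming below is illustrative and does not follow the exact Deep Zoom directory specification.

```python
# Sketch of pyramid building: halve the image per level, tile each level.
import os
from PIL import Image

TILE = 256

def build_pyramid(src_path, out_dir):
    Image.MAX_IMAGE_PIXELS = None               # allow very large scans
    img = Image.open(src_path).convert("RGB")
    level = 0
    while True:
        w, h = img.size
        level_dir = os.path.join(out_dir, str(level))
        os.makedirs(level_dir, exist_ok=True)
        for ty in range(0, h, TILE):            # cut this level into tiles
            for tx in range(0, w, TILE):
                box = (tx, ty, min(tx + TILE, w), min(ty + TILE, h))
                img.crop(box).save(
                    os.path.join(level_dir, f"{tx // TILE}_{ty // TILE}.jpg"))
        if w <= TILE and h <= TILE:             # coarsest level reached
            break
        img = img.resize((max(1, w // 2), max(1, h // 2)))  # next level: halve
        level += 1

build_pyramid("map_scan.jpg", "pyramid")        # hypothetical input file
```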

  13. PLUS: open-source toolkit for ultrasound-guided intervention systems.

    PubMed

    Lasso, Andras; Heffter, Tamas; Rankin, Adam; Pinter, Csaba; Ungi, Tamas; Fichtinger, Gabor

    2014-10-01

    A variety of advanced image analysis methods have been under development for ultrasound-guided interventions. Unfortunately, the transition from an image analysis algorithm to clinical feasibility trials as part of an intervention system requires integration of many components, such as imaging and tracking devices, data processing algorithms, and visualization software. The objective of our paper is to provide a freely available open-source software platform, PLUS (Public software Library for Ultrasound), to facilitate rapid prototyping of ultrasound-guided intervention systems for translational clinical research. PLUS provides a variety of methods for interventional tool pose and ultrasound image acquisition from a wide range of tracking and imaging devices, spatial and temporal calibration, volume reconstruction, simulated image generation, and recording and live streaming of the acquired data. This paper introduces PLUS, explains its functionality and architecture, and presents typical uses and performance in ultrasound-guided intervention systems. PLUS fulfills the essential requirements for the development of ultrasound-guided intervention systems and it aspires to become a widely used translational research prototyping platform. PLUS is freely available as open source software under BSD license and can be downloaded from http://www.plustoolkit.org.

  14. The Public Library Trustee and Changing Community Needs.

    ERIC Educational Resources Information Center

    The Bookmark, 1987

    1987-01-01

    This issue of "The Bookmark" contains 23 essays examining the challenges facing public library trustees and changing community needs. Topics considered include: the role of the trustee, fund raising, lobbying and the legislative process, library building programs, library automation, literacy, intellectual freedom, information access, preschool…

  15. Promoting Trade Books to Libraries.

    ERIC Educational Resources Information Center

    Teguis, Ellen

    1982-01-01

    Discusses the role and functions of the library promotion director within a commercial publishing house, outlining in the process the responsibilities and activities of the library services director for a specific firm (Dial Press/Delacorte Press). The nature of book promotion for bookstores and libraries is described. (JL)

  16. Library design practices for success in lead generation with small molecule libraries.

    PubMed

    Goodnow, R A; Guba, W; Haap, W

    2003-11-01

    The generation of novel structures amenable to rapid and efficient lead optimization comprises an emerging strategy for success in modern drug discovery. Small molecule libraries of sufficient size and diversity to increase the chances of discovering novel structures make the high-throughput synthesis approach the method of choice for lead generation. Despite an industry trend toward smaller, more focused libraries, the need to generate novel lead structures makes larger libraries a necessary strategy. For libraries of several thousand or more members, solid-phase synthesis approaches are the most suitable. While the technology and chemistry necessary for small molecule library synthesis continue to advance, success in lead generation requires rigorous consideration in the library design process to ensure the synthesis of molecules possessing the proper characteristics for subsequent lead optimization. Without proper selection of library templates and building blocks, solid-phase synthesis methods often generate molecules which are too heavy, too lipophilic, and too complex to be useful for lead optimization. The appropriate filtering of virtual library designs with multiple computational tools allows the generation of information-rich libraries within a drug-like molecular property space. An understanding of the hit-to-lead process provides a practical guide to molecular design characteristics. Examples of leads generated from library approaches also provide a benchmarking of successes as well as aspects for continued development of library design practices.

  17. Optimization of image processing algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Poudel, Pramod; Shirvaikar, Mukul

    2011-03-01

    This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, netbooks, and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of the asymmetric dual-core processor, which includes an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems. It has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND RAM and 256 MB SDRAM memory. The basic image correlation algorithm is chosen for benchmarking as it finds widespread application in various template matching tasks such as face recognition. The basic algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core, which runs a popular operating system such as Linux or Windows CE. However, the DSP is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images and performance results are presented measuring the speedup obtained due to the dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks, while the DSP addresses performance-hungry algorithms.
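
    The benefit of offloading correlation to a DFT-efficient core can be made concrete: direct spatial correlation of an N x N image with an M x M template costs O(N^2 M^2), while FFT-based correlation costs O(N^2 log N). The numpy stand-in below demonstrates the frequency-domain formulation; it is not the OMAP/DSP port itself.

```python
# FFT-based cross-correlation: the formulation a DFT-strong core favors.
import numpy as np

def fft_correlate(image, template):
    """Zero-mean cross-correlation of a template with an image via the DFT."""
    image = image - image.mean()
    template = template - template.mean()
    padded = np.zeros_like(image)
    padded[:template.shape[0], :template.shape[1]] = template
    # Correlation theorem: corr = IFFT( FFT(image) * conj(FFT(template)) ).
    score = np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(padded)))
    return np.real(score)

rng = np.random.default_rng(0)
image = rng.random((512, 512))
template = image[100:132, 200:232].copy()        # plant a known 32x32 patch
score = fft_correlate(image, template)
print("peak at", np.unravel_index(np.argmax(score), score.shape))  # (100, 200)
```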

  18. SU-G-201-04: Can the Dynamic Library of Flap Applicators Replace Treatment Planning in Surface Brachytherapy?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buzurovic, I; Devlin, P; Hansen, J

    Purpose: Contemporary brachytherapy treatment planning systems (TPS) include applicator model libraries to improve digitization; however, a library of surface flap applicators (SFA) is not incorporated into commercial TPS. We propose a dynamic library (DL) for SFA and investigate whether such a library can eliminate applicator reconstruction, source activation, and dose normalization. Methods: The DL was generated for the SFA using the C++ class libraries of the Visualization Toolkit (VTK) and the Qt application framework for complete abstraction of the graphical interface. The DL was designed such that the user can initially choose the size of the applicator that corresponds to the one clinically placed on the patient. The virtual applicator (VA) has an elastic property so that it can be registered to the clinical CT images showing the real applicator (RA). The VA and RA are matched by adjusting the position and curvature of the VA. The VA does not elongate or change its size, so each catheter is always at a distance of 5 mm from the skin and 10 mm apart from the closest catheter, maintaining the physical accuracy of the clinical setup. Upon applicator placement, the dwell positions are automatically activated, and the dose is normalized to the prescription depth. The accuracy of source positioning was evaluated using various applicator sizes. Results: The accuracy of applicator placement was in the sub-millimeter range. The time study reveals that up to 50% of the planning time can be saved, depending on the complexity of the clinical setup. Unlike in the classic approach, the planning time was not highly dependent on the applicator size. Conclusion: The practical benefits of the DL of the SFA were demonstrated. The time-demanding planning processes can be partially automated. Consequently, the planner can dedicate effort to fine tuning, which can result in improved quality of treatment plans in surface brachytherapy.

  19. How To Organize and Operate a Small Library. A Comprehensive Guide to the Organization and Operation of a Small Library for Your School, Church, Law Firm, Business, Hospital, Community, Court, Historical Museum or Association.

    ERIC Educational Resources Information Center

    Bernhard, Genore H.

    A guide is presented for those unfamiliar with library procedures who wish to organize and operate a small library. Following a discussion of the modern library's role, there is detailed information about library boards, librarians, finance, legal problems, policies, equipment, supplies, and book acquisition and processing. Shelving, filing, book…

  20. Proceedings of the Conference on Machine-Readable Catalog Copy (3rd, Library of Congress, February 25, 1966).

    ERIC Educational Resources Information Center

    Library of Congress, Washington, DC.

    A conference was held to permit a discussion between the libraries that will participate in the Library of Congress machine-readable cataloging (MARC) pilot project. The MARC pilot will provide an opportunity for the Library of Congress to assess the effect which data conversion places on the Library's normal processing procedures; the suitability…

  1. webpic: A flexible web application for collecting distance and count measurements from images

    PubMed Central

    2018-01-01

    Despite increasing ability to store and analyze large amounts of data for organismal and ecological studies, the process of collecting distance and count measurements from images has largely remained time consuming and error-prone, particularly for tasks for which automation is difficult or impossible. Improving the efficiency of these tasks, which allows for more high quality data to be collected in a shorter amount of time, is therefore a high priority. The open-source web application, webpic, implements common web languages and widely available libraries and productivity apps to streamline the process of collecting distance and count measurements from images. In this paper, I introduce the framework of webpic and demonstrate one readily available feature of this application, linear measurements, using fossil leaf specimens. This application fills the gap between workflows accomplishable by individuals through existing software and those accomplishable by large, unmoderated crowds. It demonstrates that flexible web languages can be used to streamline time-intensive research tasks without the use of specialized equipment or proprietary software and highlights the potential for web resources to facilitate data collection in research tasks and outreach activities with improved efficiency. PMID:29608592

  2. Next-generation sequencing library construction on a surface.

    PubMed

    Feng, Kuan; Costa, Justin; Edwards, Jeremy S

    2018-05-30

    Next-generation sequencing (NGS) has revolutionized almost all fields of biology, agriculture and medicine, and is widely utilized to analyse genetic variation. Over the past decade, the NGS pipeline has been steadily improved, and the entire process is currently relatively straightforward. However, NGS instrumentation still requires upfront library preparation, which can be a laborious process, requiring significant hands-on time. Herein, we present a simple but robust approach to streamline library preparation by utilizing surface bound transposases to construct DNA libraries directly on a flowcell surface. The surface bound transposases directly fragment genomic DNA while simultaneously attaching the library molecules to the flowcell. We sequenced and analysed a Drosophila genome library generated by this surface tagmentation approach, and we showed that our surface bound library quality was comparable to the quality of the library from a commercial kit. In addition to the time and cost savings, our approach does not require PCR amplification of the library, which eliminates potential problems associated with PCR duplicates. We describe the first study to construct libraries directly on a flowcell. We believe our technique could be incorporated into the existing Illumina sequencing pipeline to simplify the workflow, reduce costs, and improve data quality.

  3. Building a High-Tech Library in a Period of Austerity.

    ERIC Educational Resources Information Center

    Bazillion, Richard J.; Scott, Sue

    1991-01-01

    Describes the planning process for designing a new library for Algoma University College (Ontario). Topics discussed include the building committee, library policy, design considerations, an electric system that supports computer technology, library automation, the online public access catalog (OPAC), furnishings and interior environment, and…

  4. Interlibrary Lending with Computerized Union Catalogues.

    ERIC Educational Resources Information Center

    Lehmann, Klaus-Dieter

    Interlibrary loans in the Federal Republic of Germany are facilitated by applying techniques of data processing and computer output microfilm (COM) to the union catalogs of the national library system. The German library system consists of two national libraries, four central specialized libraries of technology, medicine, agriculture, and…

  5. Technostress in Libraries: Causes, Effects and Solutions.

    ERIC Educational Resources Information Center

    Bichteler, Julie

    1987-01-01

    Examines some of the fears, frustrations, and misconceptions of library staff and patrons that hamper the effective use of computers in libraries. Strategies that library administrators could use to alleviate stress are outlined, including staff participation in the automation process, well-designed workstations, and adequate training for staff…

  6. Material Identification and Quantification in Spectral X-ray Micro-CT

    NASA Astrophysics Data System (ADS)

    Holmes, Thomas Wesley

    The identification and quantification of all the voxels within a reconstructed microCT image was made possible by comparing the attenuation profile of an unknown voxel with precalculated signatures of known materials. This was accomplished through simulations with the MCNP6 general-purpose radiation-transport package, which modeled a CdTe detector array consisting of 200 elements able to differentiate between 100 separate energy bins over the entire range of the emitted 110 kVp tungsten x-ray spectrum. The information from each of the separate energy bins was then used to create a single reconstructed image, which was grouped back together to produce a final image where each voxel had a corresponding attenuation profile. A library of known attenuation profiles was created for each of the materials expected to be within an object with otherwise unknown parameters. A least-squares analysis was performed, and comparisons were made between each voxel's attenuation profile in the unknown object and each possible combination of library attenuation profiles. Combinations that failed predetermined thresholds were removed; of the remaining combinations, a voting system based on statistical evaluations of the fits selected the material combination most appropriate to the input voxel. This was performed over all of the voxels in the reconstructed image, and a final material map was produced. These material locations were then quantified by fitting an equation to the responses of several different densities of the same material and recording the response of the base library. This entire process, called the All Combinations Library Least Squares (ACLLS) analysis, was used to test several different models. These models investigated a range of densities of the x-ray contrast agents gold and gadolinium, which are used in many medical applications, as well as a range of bone densities to test the ability of ACLLS to support bone density estimation. A final test used a model with five different materials within the object, consisting of two separate features containing mixtures of three materials (gold, iodine, and water) and another feature with gadolinium, iodine, and water. The remaining four features were all mixtures of water with bone, gold, gadolinium, and iodine. All of the material mixtures were successfully identified and quantified using the ACLLS analysis package within an acceptable statistical range. The ACLLS method has proven itself a viable analysis tool for determining both the physical locations and the amounts of all the materials present within a given object. This tool could be implemented in the future to further assist a team of medical practitioners in diagnosing a subject by reducing ambiguities in an image and providing a quantifiable solution for every voxel.
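
    The least-squares core of an ACLLS-style analysis can be sketched as a non-negative fit of a voxel's energy-binned attenuation profile against known material signatures; the combination enumeration and voting stages described above are omitted, and the signatures below are synthetic.

```python
# Non-negative library least squares for one voxel (synthetic signatures).
import numpy as np
from scipy.optimize import nnls

n_bins = 100                                   # energy bins, as in the model
energies = np.linspace(20, 110, n_bins)        # keV axis (illustrative)

# Synthetic per-bin attenuation signatures for a small material library.
library = {
    "water":      1.0 * np.exp(-energies / 60.0),
    "bone":       3.0 * np.exp(-energies / 45.0),
    "gold":       9.0 * np.exp(-energies / 30.0),
    "gadolinium": 7.0 * np.exp(-energies / 35.0),
}
names = list(library)
A = np.column_stack([library[m] for m in names])   # bins x materials

# A voxel that is truly 70% water + 30% bone, plus measurement noise.
true_w = np.array([0.7, 0.3, 0.0, 0.0])
voxel = A @ true_w + np.random.normal(0, 0.01, n_bins)

weights, residual = nnls(A, voxel)             # non-negative least squares
for name, w in zip(names, weights):
    print(f"{name:12s} {w:.3f}")
print("residual norm:", residual)
```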

  7. MORS Workshop on Improving Defense Analysis through Better Data Practices, held in Alexandria, Virginia on March 25, 26 and 27, 2003

    DTIC Science & Technology

    2004-12-03

    other process improvements could also enhance DoD data practices. These include the incorporation of library science techniques as well as processes to...coalition communities as well as adapting the approaches and lessons of the library science community. Second, there is a need to generate a plan of...Best Practices (2 of 2) - Processes - Incorporate library science techniques in repository design - Improve visibility and accessibility of DoD data

  8. Specification of Fenix MPI Fault Tolerance library version 1.0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamble, Marc; Van Der Wijngaart, Rob; Teranishi, Keita

    This document provides a specification of Fenix, a software library compatible with the Message Passing Interface (MPI) that supports fault recovery without application shutdown. The library consists of two modules. The first, termed process recovery, restores an application to a consistent state after it has suffered the loss of one or more MPI processes (ranks). The second specifies functions the user can invoke to store application data in Fenix-managed redundant storage and to retrieve it from that storage after process recovery.

  9. Perl Extension to the Bproc Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grunau, Daryl W.

    2004-06-07

    The Beowulf Distributed Process Space (Bproc) software stack comprises UNIX/Linux kernel modifications and a support library by which a cluster of machines, each running its own private kernel, can present itself as a unified process space to the user. A Bproc cluster contains a single front-end machine and many back-end nodes, which receive and run processes given to them by the front-end. Any process migrated to a back-end node is also visible as a ghost process on the front-end and may be controlled there using traditional UNIX semantics (e.g., ps(1), kill(1)). This software is a Perl extension to the Bproc library which enables the Perl programmer to make direct calls to functions within the Bproc library. See http://www.clustermatic.org, http://bproc.sourceforge.net, and http://www.perl.org

  10. Detection of bacterial infection by a technetium-99m-labeled peptidoglycan aptamer.

    PubMed

    Ferreira, Iêda Mendes; de Sousa Lacerda, Camila Maria; Dos Santos, Sara Roberta; de Barros, André Luís Branco; Fernandes, Simone Odília; Cardoso, Valbert Nascimento; de Andrade, Antero Silva Ribeiro

    2017-09-01

    Nuclear medicine clinicians are still waiting for optimal scintigraphic imaging agents capable of distinguishing between infection and inflammation, and between fungal and bacterial infections. Aptamers have several properties that make them suitable for molecular imaging. In the present study, a peptidoglycan aptamer (Antibac1) was labeled with 99mTc and evaluated by biodistribution studies and scintigraphic imaging in infection-bearing mice. Labeling with 99mTc was performed by the direct method, and the complex stability was evaluated in saline, in plasma, and in a molar excess of cysteine. The biodistribution and scintigraphic imaging studies with 99mTc-Antibac1 were carried out in two different experimental infection models: bacterial-infected mice (S. aureus) and fungal-infected mice (C. albicans). A 99mTc-radiolabeled library, consisting of oligonucleotides with random sequences, was used as a control for both models. Radiolabeling yields were greater than 90%, and 99mTc-Antibac1 was highly stable in the presence of saline, plasma, and cysteine for up to 6 h. Scintigraphic images of S. aureus-infected mice at 1.5 and 3.0 h after 99mTc-Antibac1 injection showed target-to-non-target ratios of 4.7 ± 0.9 and 4.6 ± 0.1, respectively. These values were statistically higher than those achieved for the 99mTc-library at the same time frames (1.6 ± 0.4 and 1.7 ± 0.4, respectively). Notably, 99mTc-Antibac1 and the 99mTc-library showed similarly low target-to-non-target ratios in the fungal-infected model: 2.0 ± 0.3 and 2.0 ± 0.6 for 99mTc-Antibac1 and 2.1 ± 0.3 and 1.9 ± 0.6 for the 99mTc-library at the same time points. These findings suggest that 99mTc-Antibac1 is a feasible imaging probe for identifying a bacterial infection focus. In addition, this radiolabeled aptamer appears suitable for distinguishing between bacterial and fungal infection. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  11. Hypercluster parallel processing library user's manual

    NASA Technical Reports Server (NTRS)

    Quealy, Angela

    1990-01-01

    This User's Manual describes the Hypercluster Parallel Processing Library, composed of FORTRAN-callable subroutines which enable a FORTRAN programmer to manipulate and transfer information throughout the Hypercluster at NASA Lewis Research Center. Each subroutine and its parameters are described in detail. A simple heat flow application using Laplace's equation is included to demonstrate the use of some of the library's subroutines. The manual can be used initially as an introduction to the parallel features provided by the library. Thereafter it can be used as a reference when programming an application.
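
    The heat-flow demonstration mentioned above is a standard Jacobi relaxation of Laplace's equation. As a point of reference for what the Hypercluster subroutines parallelize, here is a serial NumPy sketch under assumed grid size and boundary conditions; the manual's FORTRAN version distributes the grid across nodes and exchanges boundary rows with the library's transfer subroutines.

        import numpy as np

        def jacobi_laplace(grid, tol=1e-5, max_iters=10_000):
            # Relax interior points of a 2D temperature grid toward the
            # solution of Laplace's equation; boundary values stay fixed.
            for _ in range(max_iters):
                new = grid.copy()
                new[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                                          grid[1:-1, :-2] + grid[1:-1, 2:])
                if np.max(np.abs(new - grid)) < tol:
                    return new
                grid = new
            return grid

        # Illustrative 64 x 64 plate with one edge held at 100 degrees.
        plate = np.zeros((64, 64))
        plate[0, :] = 100.0
        steady_state = jacobi_laplace(plate)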

  12. Texture Classification by Texton: Statistical versus Binary

    PubMed Central

    Guo, Zhenhua; Zhang, Zhongcheng; Li, Xiu; Li, Qin; You, Jane

    2014-01-01

    Using statistical textons for texture classification has shown great success recently. The maximal response 8 (Statistical_MR8), image patch (Statistical_Joint), and locally invariant fractal (Statistical_Fractal) are typical statistical texton algorithms and state-of-the-art texture classification methods. However, these methods have two limitations. First, they need a training stage to build a texton library, so recognition accuracy depends heavily on the training samples; second, during feature extraction, each local feature is assigned to a texton by searching for the nearest texton in the whole library, which is time-consuming when the library is large and the feature dimension is high. To address these two issues, this paper proposes three binary texton counterpart methods: Binary_MR8, Binary_Joint, and Binary_Fractal. These methods do not require any training step but encode local features directly into a binary representation. Experimental results on the CUReT, UIUC, and KTH-TIPS databases show that binary textons achieve sound results with fast feature extraction, especially when images are small and image quality is not poor. PMID:24520346
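
    The cost difference between the two assignment strategies is easy to see in a few lines of NumPy. The sketch below is schematic: the features and the texton library are random placeholders, and the sign-threshold encoding merely illustrates the binary idea rather than reproducing Binary_MR8 itself.

        import numpy as np

        rng = np.random.default_rng(0)
        features = rng.normal(size=(5_000, 8))    # one local feature per pixel
        textons = rng.normal(size=(200, 8))       # trained texton library

        # Statistical texton: assign each feature to its nearest texton.
        # Cost grows with both library size and feature dimension.
        d2 = ((features[:, None, :] - textons[None, :, :]) ** 2).sum(axis=2)
        hist_statistical = np.bincount(d2.argmin(axis=1), minlength=len(textons))

        # Binary texton: threshold each dimension directly; no training and
        # no nearest-neighbour search, since each sign pattern is its own bin.
        bits = (features > 0).astype(int)
        codes = bits @ (1 << np.arange(8))
        hist_binary = np.bincount(codes, minlength=2 ** 8)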

  13. COSMIC monthly progress report

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Activities of the Computer Software Management and Information Center (COSMIC) are summarized for the month of May 1994. Tables showing the current inventory of programs available from COSMIC are presented and program processing and evaluation activities are summarized. Nine articles were prepared for publication in the NASA Tech Brief Journal. These articles (included in this report) describe the following software items: (1) WFI - Windowing System for Test and Simulation; (2) HZETRN - A Free Space Radiation Transport and Shielding Program; (3) COMGEN-BEM - Composite Model Generation-Boundary Element Method; (4) IDDS - Interactive Data Display System; (5) CET93/PC - Chemical Equilibrium with Transport Properties, 1993; (6) SDVIC - Sub-pixel Digital Video Image Correlation; (7) TRASYS - Thermal Radiation Analyzer System (HP9000 Series 700/800 Version without NASADIG); (8) NASADIG - NASA Device Independent Graphics Library, Version 6.0 (VAX VMS Version); and (9) NASADIG - NASA Device Independent Graphics Library, Version 6.0 (UNIX Version). Activities in the areas of marketing, customer service, benefits identification, maintenance and support, and dissemination are also described along with a budget summary.

  14. "And They Let You Know You're Not Alone and That's What They're Here For": Persistence Narratives of Women Immigrants in Public Library Literacy Programs.

    ERIC Educational Resources Information Center

    Cuban, Sondra

    This study examines persistence narratives of female immigrants in three public library literacy programs. The narrative analysis method was used. Transcripts of student interviews and biographical portraits were read, and incidents, images, events, and statements concerning persistence supports and barriers, literacy, and language learning were…

  15. Selected Conference Proceedings from the 1985 Videodisc, Optical Disk, and CD-ROM Conference and Exposition (Philadelphia, PA, December 10-12, 1985).

    ERIC Educational Resources Information Center

    Cerva, John R.; And Others

    1986-01-01

    Eight papers cover: optical storage technology; cross-cultural videodisc design; optical disk technology use at the Library of Congress Research Service and National Library of Medicine; Internal Revenue Service image storage and retrieval system; solving business problems with CD-ROM; a laser disk operating system; and an optical disk for…

  16. PDT - PARTICLE DISPLACEMENT TRACKING SOFTWARE

    NASA Technical Reports Server (NTRS)

    Wernet, M. P.

    1994-01-01

    Particle Image Velocimetry (PIV) is a quantitative velocity measurement technique for measuring instantaneous planar cross sections of a flow field. The technique offers very high precision (1%) directionally resolved velocity vector estimates, but its use has been limited by high equipment costs and complexity of operation. Particle Displacement Tracking (PDT) is an all-electronic PIV data acquisition and reduction procedure which is simple, fast, and easily implemented. The procedure uses a low-power, continuous-wave laser and a Charge-Coupled Device (CCD) camera to electronically record the particle images. A frame-grabber board in a PC is used for data acquisition and reduction processing. PDT eliminates the need for photographic processing, system costs are moderately low, and reduced data are available within seconds of acquisition. The technique yields velocity estimate accuracies on the order of 5%. The software is fully menu-driven from the acquisition to the reduction and analysis of the data. Options are available to acquire a single image or a 5- or 25-field series of images separated in time by multiples of 1/60 second. The user may process each image, specifying its boundaries to remove unwanted glare from the periphery and adjusting its background level to clearly resolve the particle images. Data reduction routines determine the particle image centroids and create time history files. PDT then identifies the velocity vectors which describe the particle movement in the flow field. Graphical data analysis routines are included which allow the user to graph the time history files and display the velocity vector maps, interpolated velocity vector grids, iso-velocity vector contours, and flow streamlines. The PDT data processing software is written in FORTRAN 77 and the data acquisition routine is written in C for 80386-based IBM PC compatibles running MS-DOS v3.0 or higher. Machine requirements include 4 MB RAM (3 MB extended), a single or multiple frequency RGB monitor (EGA or better), a math co-processor, and a pointing device. The printers supported by the graphical analysis routines are the HP LaserJet+, Series II, and Series III with at least 1.5 MB memory. The data acquisition routines require the EPIX 4-MEG video board and optional 12.5 MHz oscillator, and associated EPIX software. Data can be acquired from any CCD or RS-170 compatible video camera with pixel resolution of 600h × 400v or better. PDT is distributed on one 5.25 inch 360K MS-DOS format diskette. Due to the use of required proprietary software, executable code is not provided on the distribution media. Compiling the source code requires the Microsoft C v5.1 compiler, Microsoft QuickC v2.0, the Microsoft Mouse Library, the EPIX Image Processing Libraries, the Microway NDP-Fortran-386 v2.1 compiler, and the Media Cybernetics HALO Professional Graphics Kernel System. Due to the complexity of the machine requirements, COSMIC strongly recommends the purchase and review of the documentation prior to the purchase of the program. The source code and sample input and output files are provided in PKZIP format; the PKUNZIP utility is included. PDT was developed in 1990. All trade names used are the property of their respective corporate owners.
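
    The centroid-and-track reduction described above is straightforward to prototype. The following NumPy/SciPy sketch is a loose, hypothetical analogue of the FORTRAN data-reduction routines: the intensity threshold and the nearest-neighbour pairing with a maximum displacement are illustrative simplifications.

        import numpy as np
        from scipy import ndimage

        def particle_centroids(frame, threshold):
            # Label bright blobs above the background threshold and return
            # their intensity-weighted centroids as (row, col) pairs.
            labels, n = ndimage.label(frame > threshold)
            return np.array(ndimage.center_of_mass(frame, labels, range(1, n + 1)))

        def velocity_vectors(frame_a, frame_b, dt, threshold, max_disp=10.0):
            # Pair each particle in frame_a with the nearest centroid in
            # frame_b and convert the displacement into a velocity estimate.
            ca = particle_centroids(frame_a, threshold)
            cb = particle_centroids(frame_b, threshold)
            vectors = []
            for p in ca:
                d = np.linalg.norm(cb - p, axis=1)
                j = d.argmin()
                if d[j] <= max_disp:                 # reject implausible matches
                    vectors.append((p, (cb[j] - p) / dt))
            return vectors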

  17. A Neuroimaging Web Services Interface as a Cyber Physical System for Medical Imaging and Data Management in Brain Research: Design Study.

    PubMed

    Lizarraga, Gabriel; Li, Chunfei; Cabrerizo, Mercedes; Barker, Warren; Loewenstein, David A; Duara, Ranjan; Adjouadi, Malek

    2018-04-26

    Structural and functional brain images are essential imaging modalities for medical experts to study brain anatomy. These images are typically inspected visually by experts. To analyze images without any bias, they must first be converted to numeric values. Many software packages are available to process the images, but they are complex and difficult to use. The software packages are also hardware-intensive. The results obtained after processing vary depending on the native operating system used and its associated software libraries; data processed in one system cannot typically be combined with data on another system. The aim of this study was to fulfill the neuroimaging community’s need for a common platform to store, process, explore, and visualize their neuroimaging data and results using the Neuroimaging Web Services Interface: a series of processing pipelines designed as a cyber-physical system for neuroimaging and clinical data in brain research. The Neuroimaging Web Services Interface accepts magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and functional magnetic resonance imaging. These images are processed using existing and custom software packages. The output is then stored as image files, tabulated files, and MySQL tables. The system, made up of a series of interconnected servers, is password-protected, securely accessible through a Web interface, and allows (1) visualization of results and (2) downloading of tabulated data. All results were obtained using our processing servers in order to maintain data validity and consistency. The design is responsive and scalable. The processing pipeline starts from a FreeSurfer reconstruction of structural magnetic resonance imaging images. The FreeSurfer and regional standardized uptake value ratio calculations were validated using Alzheimer’s Disease Neuroimaging Initiative input images, and the results were posted at the Laboratory of Neuro Imaging data archive. Leading researchers in the fields of Alzheimer’s disease and epilepsy have used the interface to access and process the data and visualize the results. Tabulated results with unique visualization mechanisms help guide more informed diagnosis and expert rating, providing a truly unique multimodal imaging platform that combines magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and resting-state functional magnetic resonance imaging. A quality control component was reinforced through expert visual rating involving at least 2 experts. To our knowledge, there is no validated Web-based system offering all the services that the Neuroimaging Web Services Interface offers. The intent of the Neuroimaging Web Services Interface is to create a tool for clinicians and researchers with a keen interest in multimodal neuroimaging. More importantly, the Neuroimaging Web Services Interface significantly augments the Alzheimer’s Disease Neuroimaging Initiative data, especially since our data contain a large cohort of Hispanic normal controls and Alzheimer’s disease patients. The obtained results can be scrutinized visually or through the tabulated forms, informing researchers of subtle changes that characterize the different stages of the disease. ©Gabriel Lizarraga, Chunfei Li, Mercedes Cabrerizo, Warren Barker, David A Loewenstein, Ranjan Duara, Malek Adjouadi. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 26.04.2018.

  18. Fringe pattern demodulation using the one-dimensional continuous wavelet transform: field-programmable gate array implementation.

    PubMed

    Abid, Abdulbasit

    2013-03-01

    This paper presents a thorough discussion of the proposed field-programmable gate array (FPGA) implementation for fringe pattern demodulation using the one-dimensional continuous wavelet transform (1D-CWT) algorithm, also known as wavelet transform profilometry. Initially, the 1D-CWT is programmed in the C language and compiled into VHDL using the ImpulseC tool. This VHDL code is implemented on the Altera Cyclone IV GX EP4CGX150DF31C7 FPGA. A fringe pattern image with a size of 512×512 pixels is presented to the FPGA, which processes the image using the 1D-CWT algorithm. The FPGA requires approximately 100 ms to process the image and produce a wrapped phase map. For performance comparison, the 1D-CWT algorithm is also programmed in C and compiled using the Intel compiler version 13.0; the compiled code is run on a state-of-the-art Dell Precision workstation. The time required to process the fringe pattern image is approximately 1 s. To further reduce the execution time, the 1D-CWT is reprogrammed using the Intel Integrated Performance Primitives (IPP) library version 7.1, which reduces the execution time to approximately 650 ms. This confirms that a speedup of at least sixfold was gained using the FPGA implementation over a state-of-the-art workstation executing a heavily optimized implementation of the 1D-CWT algorithm.
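
    At the algorithmic level, wavelet transform profilometry reduces to a per-row ridge extraction. The PyWavelets sketch below is a minimal software rendering under assumed parameters (complex Morlet wavelet, illustrative scale range); it is not the paper's FPGA or IPP code.

        import numpy as np
        import pywt

        def wrapped_phase_map(fringe_image, scales=np.arange(4, 64)):
            # 1D-CWT fringe demodulation: transform each row with a complex
            # Morlet wavelet, trace the ridge of maximum modulus, and read
            # the wrapped phase off the ridge.
            phase = np.empty(fringe_image.shape, dtype=float)
            for r, row in enumerate(fringe_image.astype(float)):
                coeffs, _ = pywt.cwt(row, scales, 'cmor1.5-1.0')   # (n_scales, n_cols)
                ridge = np.abs(coeffs).argmax(axis=0)              # best scale per pixel
                phase[r] = np.angle(coeffs[ridge, np.arange(len(row))])
            return phase   # wrapped to (-pi, pi]; unwrapping is a separate step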

  19. GALARIO: a GPU accelerated library for analysing radio interferometer observations

    NASA Astrophysics Data System (ADS)

    Tazzari, Marco; Beaujean, Frederik; Testi, Leonardo

    2018-06-01

    We present GALARIO, a computational library that exploits the power of modern graphics processing units (GPUs) to accelerate the analysis of observations from radio interferometers like the Atacama Large Millimeter/submillimeter Array or the Karl G. Jansky Very Large Array. GALARIO speeds up the computation of synthetic visibilities from a generic 2D model image or a radial brightness profile (for axisymmetric sources). On a GPU, GALARIO is 150 times faster than standard PYTHON and 10 times faster than serial C++ code on a CPU. Highly modular, easy to use, and straightforward to adopt in existing code, GALARIO comes as two compiled libraries, one for Nvidia GPUs and one for multicore CPUs, both offering the same functions with identical interfaces. GALARIO comes with PYTHON bindings but can also be used directly in C or C++. The versatility and speed of GALARIO open new analysis pathways that would otherwise be prohibitively time-consuming, e.g. fitting high-resolution observations of large numbers of objects, or entire spectral cubes of molecular gas emission. It is a general tool that can be applied to any field that uses radio interferometer observations. The source code is available online at http://github.com/mtazzari/galario under the open-source GNU Lesser General Public License v3.
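
    A typical call pattern looks like the following sketch. The sampleImage entry point is part of GALARIO's documented Python bindings, but every numeric value here (image size, pixel scale, u-v points) is a placeholder; consult the repository above for the authoritative interface.

        import numpy as np
        from galario.double import sampleImage   # CPU build; galario.double_cuda on a GPU

        # Placeholder model: a 1024 x 1024 brightness image with a central point source.
        nxy = 1024
        dxy = 1.0e-7                         # pixel size in radians (placeholder)
        image = np.zeros((nxy, nxy))
        image[nxy // 2, nxy // 2] = 1.0      # toy source, arbitrary flux units

        # Placeholder u-v sampling points, in units of the observing wavelength.
        u = np.linspace(1.0e4, 1.0e6, 500)
        v = np.linspace(1.0e4, 1.0e6, 500)

        # Synthetic complex visibilities of the model at the observed u-v points.
        vis = sampleImage(image, dxy, u, v)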

  20. Scanning electron microscope measurement of width and shape of 10 nm patterned lines using a JMONSEL-modeled library.

    PubMed

    Villarrubia, J S; Vladár, A E; Ming, B; Kline, R J; Sunday, D F; Chawla, J S; List, S

    2015-07-01

    The width and shape of 10 nm to 12 nm wide lithographically patterned SiO2 lines were measured in the scanning electron microscope by fitting the measured intensity vs. position to a physics-based model in which the lines' widths and shapes are parameters. The approximately 32 nm pitch sample was patterned at Intel using a state-of-the-art pitch-quartering process. The lines' narrow widths and asymmetrical shapes are representative of near-future-generation transistor gates. These pose a challenge: the narrowness because electrons landing near one edge may scatter out of the other, so that the intensity profile at each edge becomes width-dependent, and the asymmetry because the shape requires more parameters to describe and measure. Modeling was performed with JMONSEL (Java Monte Carlo Simulation of Secondary Electrons), which produces a predicted yield vs. position for a given sample shape and composition. The simulator produces a library of predicted profiles for varying sample geometry. Shape parameter values are adjusted until interpolation of the library at those values best matches the measured image. Profiles thereby determined agreed with those determined by transmission electron microscopy and critical-dimension small-angle x-ray scattering to better than 1 nm. Published by Elsevier B.V.
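
    The fit-by-library-interpolation loop lends itself to a compact SciPy sketch. Everything below is hypothetical scaffolding around the idea: the two shape parameters (width, sidewall angle), the grid ranges, and the jmonsel_library.npy file are invented stand-ins for the actual JMONSEL-generated library.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator
        from scipy.optimize import least_squares

        # Hypothetical library axes and precomputed yield-vs-position profiles,
        # one simulated profile per (width, angle) grid point.
        widths = np.linspace(8.0, 14.0, 13)        # linewidth in nm
        angles = np.linspace(80.0, 90.0, 11)       # sidewall angle in degrees
        library = np.load('jmonsel_library.npy')   # shape (13, 11, n_positions)

        # Interpolating between library entries gives a continuous forward model.
        interp = RegularGridInterpolator((widths, angles), library)

        def model_profile(params):
            # Predicted intensity-vs-position profile at a candidate shape.
            return interp(np.asarray(params)[None, :])[0]

        def fit_shape(measured):
            # Adjust (width, angle) until the interpolated library profile
            # best matches the measured SEM intensity trace.
            res = least_squares(lambda p: model_profile(p) - measured,
                                x0=[11.0, 85.0],
                                bounds=([8.0, 80.0], [14.0, 90.0]))
            return res.x   # best-fit (width, angle)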
