Sample records for common image processing

  1. A Software Platform for Post-Processing Waveform-Based NDE

    NASA Technical Reports Server (NTRS)

    Roth, Donald J.; Martin, Richard E.; Seebo, Jeff P.; Trinh, Long B.; Walker, James L.; Winfree, William P.

    2007-01-01

    Ultrasonic, microwave, and terahertz nondestructive evaluation imaging systems generally require the acquisition of waveforms at each scan point to form an image. For such systems, signal and image processing methods are commonly needed to extract information from the waves and improve resolution of, and highlight, defects in the image. Since some similarity exists for all waveform-based NDE methods, it would seem that a common software platform containing multiple signal and image processing techniques to process the waveforms and images makes sense where multiple techniques, scientists, engineers, and organizations are involved. This presentation describes NASA Glenn Research Center's approach in developing a common software platform for processing waveform-based NDE signals and images. This platform is currently in use at NASA Glenn and at Lockheed Martin Michoud Assembly Facility for processing of pulsed terahertz and ultrasonic data. Highlights of the software operation will be given. A case study will be shown for use with terahertz data. The authors also request that scientists and engineers who are interested in sharing customized signal and image processing algorithms contribute to this effort by letting the authors code up and include these algorithms in future releases.

  2. Image Processing Using a Parallel Architecture.

    DTIC Science & Technology

    1987-12-01

    ENG/87D-25 Abstract This study developed a set of low-level image processing tools on a parallel computer that allows concurrent processing of images...environment, the set of tools offers a significant reduction in the time required to perform some commonly used image processing operations. ...step toward developing these systems, a structured set of image processing tools was implemented using a parallel computer. More important than

  3. Resiliency of the Multiscale Retinex Image Enhancement Algorithm

    NASA Technical Reports Server (NTRS)

    Rahman, Zia-Ur; Jobson, Daniel J.; Woodell, Glenn A.

    1998-01-01

    The multiscale retinex with color restoration (MSRCR) continues to prove itself in extensive testing to be a very versatile automatic image enhancement algorithm that simultaneously provides dynamic range compression, color constancy, and color rendition. However, issues remain with regard to the resiliency of the MSRCR to different image sources and arbitrary image manipulations which may have been applied prior to retinex processing. In this paper we define these areas of concern, provide experimental results, and examine the effects of commonly occurring image manipulations on retinex performance. In virtually all cases the MSRCR is highly resilient to the effects of both image source variations and commonly encountered prior image processing. Significant artifacts are primarily observed for the case of selective color channel clipping in large dark zones in an image. These issues are of concern in the processing of digital image archives and other applications where there is neither control over the image acquisition process, nor knowledge about any processing done on the data beforehand.
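
    As a point of reference for the algorithm under test, a minimal single-channel multiscale retinex (without the color-restoration step) can be sketched as follows; the scale values are typical choices from the retinex literature, not necessarily the authors' exact parameters:

    ```python
    # Minimal single-channel multiscale retinex sketch (illustrative only,
    # not the authors' MSRCR implementation; scales are common literature values).
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def multiscale_retinex(img, sigmas=(15, 80, 250)):
        """Average of single-scale retinex outputs log(I) - log(G_sigma * I)."""
        img = img.astype(np.float64) + 1.0          # offset avoids log(0)
        out = np.zeros_like(img)
        for sigma in sigmas:
            out += np.log(img) - np.log(gaussian_filter(img, sigma))
        out /= len(sigmas)
        # Stretch the result to a displayable 8-bit range
        out = (out - out.min()) / (out.max() - out.min() + 1e-12)
        return (255 * out).astype(np.uint8)
    ```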

  4. An efficient approach to integrated MeV ion imaging.

    PubMed

    Nikbakht, T; Kakuee, O; Solé, V A; Vosuoghi, Y; Lamehi-Rachti, M

    2018-03-01

    An ionoluminescence (IL) spectral imaging system, alongside the common MeV ion imaging facilities such as µ-PIXE and µ-RBS, has been implemented at the Van de Graaff laboratory of Tehran. Versatile processing software is required to handle the large amount of data concurrently collected in µ-IL and common MeV ion imaging measurements through the respective methodologies. The open-source freeware PyMca, with image processing and multivariate analysis capabilities, is employed to simultaneously process common MeV ion imaging and µ-IL data. Herein, the program was adapted to support the OM_DAQ listmode data format. The appropriate performance of the µ-IL data acquisition system is confirmed through a case study. Moreover, the capabilities of the software for simultaneous analysis of µ-PIXE and µ-RBS experimental data are presented. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Eliminating "Hotspots" in Digital Image Processing

    NASA Technical Reports Server (NTRS)

    Salomon, P. M.

    1984-01-01

    Signals from defective picture elements rejected. Image processing program for use with charge-coupled device (CCD) or other mosaic imager augmented with algorithm that compensates for common type of electronic defect. Algorithm prevents false interpretation of "hotspots". Used for robotics, image enhancement, image analysis and digital television.
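
    A sketch of the general idea, assuming a simple median-based test for defective elements (the NASA program's actual defect model is not specified in this record, and the threshold is an assumption):

    ```python
    # Hypothetical hotspot compensation: pixels that exceed their 3x3
    # neighborhood median by a threshold are treated as defective and
    # replaced by that median.
    import numpy as np
    from scipy.ndimage import median_filter

    def suppress_hotspots(img, threshold=50):
        med = median_filter(img.astype(np.float64), size=3)
        hot = img.astype(np.float64) - med > threshold   # defective elements
        out = img.copy()
        out[hot] = med[hot].astype(img.dtype)
        return out
    ```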

  6. Platform for Post-Processing Waveform-Based NDE

    NASA Technical Reports Server (NTRS)

    Roth, Don J.

    2010-01-01

    Signal- and image-processing methods are commonly needed to extract information from the waves, improve resolution of, and highlight defects in an image. Since some similarity exists for all waveform-based nondestructive evaluation (NDE) methods, it would seem that a common software platform containing multiple signal- and image-processing techniques to process the waveforms and images makes sense where multiple techniques, scientists, engineers, and organizations are involved. NDE Wave & Image Processor Version 2.0 software provides a single, integrated signal- and image-processing and analysis environment for total NDE data processing and analysis. It brings some of the most useful algorithms developed for NDE over the past 20 years into a commercial-grade product. The software can import signal/spectroscopic data, image data, and image series data. This software offers the user hundreds of basic and advanced signal- and image-processing capabilities including esoteric 1D and 2D wavelet-based de-noising, de-trending, and filtering. Batch processing is included for signal- and image-processing capability so that an optimized sequence of processing operations can be applied to entire folders of signals, spectra, and images. Additionally, an extensive interactive model-based curve-fitting facility has been included to allow fitting of spectroscopy data such as from Raman spectroscopy. An extensive joint-time frequency module is included for analysis of non-stationary or transient data such as that from acoustic emission, vibration, or earthquake data.
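
    As an illustration of the 1D wavelet-based de-noising the record mentions, a common implementation soft-thresholds the detail coefficients; a sketch using PyWavelets, with the wavelet choice and universal threshold as assumptions rather than the product's actual method:

    ```python
    # Soft-threshold wavelet de-noising sketch (db4 and the universal
    # threshold are assumptions; signal length should exceed 2**level samples).
    import numpy as np
    import pywt

    def wavelet_denoise(signal, wavelet='db4', level=4):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate
        t = sigma * np.sqrt(2 * np.log(len(signal)))          # universal threshold
        coeffs[1:] = [pywt.threshold(c, t, mode='soft') for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[:len(signal)]
    ```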

  7. Pattern recognition and expert image analysis systems in biomedical image processing (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Oosterlinck, A.; Suetens, P.; Wu, Q.; Baird, M.; F. M., C.

    1987-09-01

    This paper gives an overview of pattern recognition (P.R.) techniques used in biomedical image processing and problems related to the different P.R. solutions. The use of knowledge-based systems to overcome P.R. difficulties is also described. This is illustrated by a common example of a biomedical image processing application.

  8. Multi-frame image processing with panning cameras and moving subjects

    NASA Astrophysics Data System (ADS)

    Paolini, Aaron; Humphrey, John; Curt, Petersen; Kelmelis, Eric

    2014-06-01

    Imaging scenarios commonly involve erratic, unpredictable camera behavior or subjects that are prone to movement, complicating multi-frame image processing techniques. To address these issues, we developed three techniques that can be applied to multi-frame image processing algorithms in order to mitigate the adverse effects observed when cameras are panning or subjects within the scene are moving. We provide a detailed overview of the techniques and discuss the applicability of each to various movement types. In addition to this, we evaluated algorithm efficacy with demonstrated benefits using field test video, which has been processed using our commercially available surveillance product. Our results show that algorithm efficacy is significantly improved in common scenarios, expanding our software's operational scope. Our methods introduce little computational burden, enabling their use in real-time and low-power solutions, and are appropriate for long observation periods. Our test cases focus on imaging through turbulence, a common use case for multi-frame techniques. We present results of a field study designed to test the efficacy of these techniques under expanded use cases.
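
    One standard way to mitigate global camera panning in multi-frame processing is to estimate the frame-to-frame shift by phase correlation and re-register frames before temporal averaging. The abstract does not specify the authors' three techniques, so the following is only an illustrative sketch of that general approach, assuming mostly global translation:

    ```python
    # Phase-correlation registration followed by temporal averaging (sketch).
    import numpy as np

    def phase_correlation_shift(ref, frame):
        """Estimate the integer (dy, dx) roll that aligns `frame` to `ref`."""
        F1, F2 = np.fft.fft2(ref), np.fft.fft2(frame)
        cross = F1 * np.conj(F2)
        corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Wrap offsets larger than half the image size to negative shifts
        if dy > ref.shape[0] // 2: dy -= ref.shape[0]
        if dx > ref.shape[1] // 2: dx -= ref.shape[1]
        return dy, dx

    def register_and_average(frames):
        ref = frames[0].astype(np.float64)
        acc, n = ref.copy(), 1
        for f in frames[1:]:
            dy, dx = phase_correlation_shift(ref, f)
            acc += np.roll(np.roll(f.astype(np.float64), dy, 0), dx, 1)
            n += 1
        return acc / n
    ```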

  9. Applicability of common measures in multifocus image fusion comparison

    NASA Astrophysics Data System (ADS)

    Vajgl, Marek

    2017-11-01

    Image fusion is an image processing area aimed at fusing multiple input images to achieve an output image that is somehow better than each of the input ones. In the case of "multifocus fusion", the input images capture the same scene but differ in focus distance. The aim is to obtain an image which is sharp in all its areas. There are several different approaches and methods used to solve this problem; however, it is a common question which one is the best. This work describes research covering the field of common measures, asking whether some of them can be used as a quality measure for evaluating the fusion result.

  10. Identification of uncommon objects in containers

    DOEpatents

    Bremer, Peer-Timo; Kim, Hyojin; Thiagarajan, Jayaraman J.

    2017-09-12

    A system for identifying in an image an object that is commonly found in a collection of images and for identifying a portion of an image that represents an object based on a consensus analysis of segmentations of the image. The system collects images of containers that contain objects for generating a collection of common objects within the containers. To process the images, the system generates a segmentation of each image. The image analysis system may also generate multiple segmentations for each image by introducing variations in the selection of voxels to be merged into a segment. The system then generates clusters of the segments based on similarity among the segments. Each cluster represents a common object found in the containers. Once the clustering is complete, the system may be used to identify common objects in images of new containers based on similarity between segments of images and the clusters.

  11. [Image processing system of visual prostheses based on digital signal processor DM642].

    PubMed

    Xie, Chengcheng; Lu, Yanyu; Gu, Yun; Wang, Jing; Chai, Xinyu

    2011-09-01

    This paper employed a DSP platform to create the real-time and portable image processing system, and introduced a series of commonly used algorithms for visual prostheses. The results of performance evaluation revealed that this platform could afford image processing algorithms to be executed in real time.

  12. Hyperspectral imaging for food processing automation

    NASA Astrophysics Data System (ADS)

    Park, Bosoon; Lawrence, Kurt C.; Windham, William R.; Smith, Doug P.; Feldner, Peggy W.

    2002-11-01

    This paper presents research results demonstrating that hyperspectral imaging can be used effectively for detecting feces (from duodenum, ceca, and colon) and ingesta on the surface of poultry carcasses, with potential application to real-time, on-line processing of poultry for automatic safety inspection. The hyperspectral imaging system included a line scan camera with a prism-grating-prism spectrograph, fiber optic line lighting, motorized lens control, and hyperspectral image processing software. Hyperspectral image processing algorithms, specifically band ratios of dual-wavelength (565/517) images and thresholding, were effective for the identification of fecal and ingesta contamination of poultry carcasses. A multispectral imaging system including a common aperture camera with three optical trim filters (515.4 nm with 8.6-nm FWHM, 566.4 nm with 8.8-nm FWHM, and 631 nm with 10.2-nm FWHM), which were selected and validated by the hyperspectral imaging system, was developed for a real-time, on-line application. The total image processing time required to process the multispectral images captured by the common aperture camera was approximately 251 msec, or 3.99 frames/sec. A preliminary test showed that the accuracy of the real-time multispectral imaging system in detecting feces and ingesta on corn/soybean-fed poultry carcasses was 96%. However, many false positive spots that cause system errors were also detected.
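
    The dual-wavelength band-ratio test described above is straightforward to express; a sketch (the threshold value is an assumption for illustration, not the paper's calibrated value):

    ```python
    # Band ratio (565 nm / 517 nm) plus thresholding for contamination detection.
    import numpy as np

    def fecal_mask(cube, wavelengths, t=1.05):
        """cube: (rows, cols, bands) hyperspectral image; wavelengths: 1-D array."""
        b565 = cube[:, :, np.argmin(np.abs(wavelengths - 565))]
        b517 = cube[:, :, np.argmin(np.abs(wavelengths - 517))]
        ratio = b565.astype(np.float64) / (b517.astype(np.float64) + 1e-12)
        return ratio > t            # boolean contamination mask
    ```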

  13. Real-time hyperspectral imaging for food safety applications

    USDA-ARS?s Scientific Manuscript database

    Multispectral imaging systems with selected bands can commonly be used for real-time applications in food processing. Recent research has demonstrated that several image processing methods, including binning, noise removal filtering, and appropriate morphological analysis, in real-time mode can remove most fa...
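
    Of the methods listed, pixel binning is the simplest; a minimal 2x2 binning sketch, which trades spatial resolution for signal-to-noise ratio and processing speed:

    ```python
    # 2x2 pixel binning by block averaging (illustrative sketch).
    import numpy as np

    def bin2x2(img):
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2   # trim odd edges
        v = img[:h, :w].astype(np.float64)
        return (v[0::2, 0::2] + v[1::2, 0::2] +
                v[0::2, 1::2] + v[1::2, 1::2]) / 4.0
    ```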

  14. Millimeter wave scattering characteristics and radar cross section measurements of common roadway objects

    NASA Astrophysics Data System (ADS)

    Zoratti, Paul K.; Gilbert, R. Kent; Majewski, Ronald; Ference, Jack

    1995-12-01

    Development of automotive collision warning systems has progressed rapidly over the past several years. A key enabling technology for these systems is millimeter-wave radar. This paper addresses a very critical millimeter-wave radar sensing issue for automotive radar, namely the scattering characteristics of common roadway objects such as vehicles, roadsigns, and bridge overpass structures. The data presented in this paper were collected on ERIM's Fine Resolution Radar Imaging Rotary Platform Facility and processed with ERIM's image processing tools. The value of this approach is that it provides system developers with a 2D radar image from which information about individual point scatterers `within a single target' can be extracted. This information on scattering characteristics will be utilized to refine threat assessment processing algorithms and automotive radar hardware configurations. (1) By evaluating the scattering characteristics identified in the radar image, radar signatures as a function of aspect angle for common roadway objects can be established. These signatures will aid in the refinement of threat assessment processing algorithms. (2) Utilizing ERIM's image manipulation tools, total RCS and RCS as a function of range and azimuth can be extracted from the radar image data. This RCS information will be essential in defining the operational envelope (e.g. dynamic range) within which any radar sensor hardware must be designed.

  15. Information theoretic analysis of edge detection in visual communication

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Rahman, Zia-ur

    2010-08-01

    Generally, the designs of digital image processing algorithms and image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the artifacts introduced into the process by the image gathering process. However, experiments show that the image gathering process profoundly impacts the performance of digital image processing and the quality of the resulting images. Huck et al. proposed one definitive theoretic analysis of visual communication channels, where the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. In this paper, we perform an end-to-end information theory based system analysis to assess edge detection methods. We evaluate the performance of the different algorithms as a function of the characteristics of the scene, and the parameters, such as sampling, additive noise, etc., that define the image gathering system. The edge detection algorithm is regarded as having high performance only if the information rate from the scene to the edge approaches the maximum possible. This goal can be achieved only by jointly optimizing all processes. People generally use subjective judgment to compare different edge detection methods; there is no common tool that can be used to evaluate the performance of the different algorithms and to guide the selection of the best algorithm for a given system or scene. Our information-theoretic assessment becomes this new tool, which allows us to compare the different edge detection operators in a common environment.
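
    For reference, the figure of merit described above is Shannon's mutual information between the scene S and the edge map E; schematically (our notation, an assumption rather than the paper's exact formulation):

    ```latex
    I(S;E) = H(E) - H(E \mid S)
    ```

    Joint optimization then amounts to maximizing I(S;E) over both the image-gathering parameters (sampling, aperture, noise) and the edge-detection operator.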

  16. Radiography for intensive care: participatory process analysis in a PACS-equipped and film/screen environment

    NASA Astrophysics Data System (ADS)

    Peer, Regina; Peer, Siegfried; Sander, Heike; Marsolek, Ingo; Koller, Wolfgang; Pappert, Dirk; Hierholzer, Johannes

    2002-05-01

    If new technology is introduced into medical practice, it must prove to make a difference. However, traditional approaches to outcome analysis have failed to show a direct benefit of PACS on patient care, and economic benefits are still in debate. A participatory process analysis was performed to compare workflow in a film-based hospital and a PACS environment. This included direct observation of work processes, interviews of involved staff, structural analysis, and discussion of observations with staff members. After definition of common structures, strong and weak workflow steps were evaluated. With a common workflow structure in both hospitals, benefits of PACS were revealed in workflow steps related to image reporting with simultaneous image access for ICU physicians and radiologists, archiving of images, and image and report distribution. However, PACS alone is not able to cover the complete process of 'radiography for intensive care' from ordering of an image until provision of the final product (image + report). Interference of electronic workflow with analogue process steps, such as paper-based ordering, reduces the potential benefits of PACS. In this regard, workflow modeling proved to be very helpful for the evaluation of complex work processes linking radiology and the ICU.

  17. Platform-independent software for medical image processing on the Internet

    NASA Astrophysics Data System (ADS)

    Mancuso, Michael E.; Pathak, Sayan D.; Kim, Yongmin

    1997-05-01

    We have developed a software tool for image processing over the Internet. The tool is a general purpose, easy to use, flexible, platform independent image processing software package with functions most commonly used in medical image processing. It provides for processing of medical images located either remotely on the Internet or locally. The software was written in Java - the new programming language developed by Sun Microsystems. It was compiled and tested using Microsoft's Visual Java 1.0 and Microsoft's Just in Time Compiler 1.00.6211. The software is simple and easy to use. In order to use the tool, the user needs to download the software from our site before running it using any Java interpreter, such as those supplied by Sun, Symantec, Borland or Microsoft. Future versions of the operating systems supplied by Sun, Microsoft, Apple, IBM, and others will include Java interpreters. The software is then able to access and process any image on the Internet or on the local computer. Using a 512 X 512 X 8-bit image, a 3 X 3 convolution took 0.88 seconds on an Intel Pentium Pro PC running at 200 MHz with 64 Mbytes of memory. A window/level operation took 0.38 seconds, while a 3 X 3 median filter took 0.71 seconds. These performance numbers demonstrate the feasibility of using this software interactively on desktop computers. Our software tool supports various image processing techniques commonly used in medical image processing and can run without the need of any specialized hardware. It can become an easily accessible resource over the Internet to promote the learning and understanding of image processing algorithms. Also, it could facilitate sharing of medical image databases and collaboration amongst researchers and clinicians, regardless of location.
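
    The window/level operation timed above is a linear gray-level remapping; a minimal sketch (parameter names are ours, not the tool's):

    ```python
    # Window/level: map the intensity window [level - w/2, level + w/2] to
    # the full display range, clipping values outside it.
    import numpy as np

    def window_level(img, window, level):
        lo, hi = level - window / 2.0, level + window / 2.0
        out = (img.astype(np.float64) - lo) / (hi - lo)   # window -> [0, 1]
        return (255 * np.clip(out, 0.0, 1.0)).astype(np.uint8)
    ```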

  18. ED breast cases and other breast emergencies.

    PubMed

    Khadem, Nasim; Reddy, Sravanthi; Lee, Sandy; Larsen, Linda; Walker, Daphne

    2016-02-01

    Patients with pathologic processes of the breast commonly present in the Emergency Department (ED). Familiarity with the imaging and management of the most common entities is essential for the radiologist. Additionally, it is important to understand the limitations of ED imaging and management in the acute setting and to recognize when referrals to a specialty breast center are necessary. The goal of this article is to review the clinical presentations, pathophysiology, imaging, and management of emergency breast cases and common breast pathology seen in the ED.

  19. Matrix decomposition graphics processing unit solver for Poisson image editing

    NASA Astrophysics Data System (ADS)

    Lei, Zhao; Wei, Li

    2012-10-01

    In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computationally and memory intensive task, which makes it unsuitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to settle the problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS takes full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining both direct and iterative techniques) and has a two-level architecture. These features enable MDGS to generate solutions identical to those of the common Poisson methods and to achieve a high convergence rate in most cases. This approach is advantageous in terms of parallelizability, low memory consumption, and breadth of application, enabling real-time image processing.
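
    For context, the sparse system in question arises from discretizing the Laplacian over the cloned region. A small CPU sketch using a direct sparse solver (the paper's GPU matrix decomposition is not reproduced here, and the mask is assumed not to touch the image border):

    ```python
    # Discrete Poisson blending for one channel: solve A x = b over the mask,
    # with the guidance-field divergence on the right-hand side and Dirichlet
    # boundary values taken from the target image.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve

    def poisson_blend_channel(src, dst, mask):
        """Clone src into dst where mask is True (mask must avoid the border)."""
        src = src.astype(np.float64)
        h, w = dst.shape
        idx = -np.ones((h, w), dtype=int)
        ys, xs = np.nonzero(mask)
        idx[ys, xs] = np.arange(len(ys))
        A = sp.lil_matrix((len(ys), len(ys)))
        b = np.zeros(len(ys))
        for k, (y, x) in enumerate(zip(ys, xs)):
            A[k, k] = 4.0
            # Divergence of the guidance field (gradient of the source patch)
            b[k] = 4.0 * src[y, x] - (src[y-1, x] + src[y+1, x] +
                                      src[y, x-1] + src[y, x+1])
            for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                if mask[ny, nx]:
                    A[k, idx[ny, nx]] = -1.0
                else:
                    b[k] += dst[ny, nx]      # Dirichlet value from the target
        out = dst.astype(np.float64).copy()
        out[ys, xs] = spsolve(A.tocsr(), b)
        return out
    ```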

  20. Robust watermark technique using masking and Hermite transform.

    PubMed

    Coronel, Sandra L Gomez; Ramírez, Boris Escalante; Mosqueda, Marco A Acevedo

    2016-01-01

    The following paper evaluates a watermark algorithm designed for digital images by using a perceptive mask and a normalization process, thus preventing detection by the human eye, as well as ensuring its robustness against common processing and geometric attacks. The Hermite transform is employed because it allows a perfect reconstruction of the image while incorporating human visual system properties; moreover, it is based on the derivatives of Gaussian functions. The applied watermark represents information about the digital image's proprietor. The extraction process is blind, because it does not require the original image. The following techniques were utilized in the evaluation of the algorithm: peak signal-to-noise ratio, the structural similarity index average, the normalized cross-correlation, and bit error rate. Several watermark extraction tests were performed against geometric and common processing attacks. This allowed us to identify how many bits in the watermark can be modified while still permitting its adequate extraction.

  1. Exploring the complementarity of THz pulse imaging and DCE-MRIs: Toward a unified multi-channel classification and a deep learning framework.

    PubMed

    Yin, X-X; Zhang, Y; Cao, J; Wu, J-L; Hadjiloucas, S

    2016-12-01

    We provide a comprehensive account of recent advances in biomedical image analysis and classification from two complementary imaging modalities: terahertz (THz) pulse imaging and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The work aims to highlight underlying commonalities in both data structures so that a common multi-channel data fusion framework can be developed. Signal pre-processing in both datasets is discussed briefly taking into consideration advances in multi-resolution analysis and model based fractional order calculus system identification. Developments in statistical signal processing using principal component and independent component analysis are also considered. These algorithms have been developed independently by the THz-pulse imaging and DCE-MRI communities, and there is scope to place them in a common multi-channel framework to provide better software standardization at the pre-processing de-noising stage. A comprehensive discussion of feature selection strategies is also provided and the importance of preserving textural information is highlighted. Feature extraction and classification methods taking into consideration recent advances in support vector machine (SVM) and extreme learning machine (ELM) classifiers and their complex extensions are presented. An outlook on Clifford algebra classifiers and deep learning techniques suitable to both types of datasets is also provided. The work points toward the direction of developing a new unified multi-channel signal processing framework for biomedical image analysis that will explore synergies from both sensing modalities for inferring disease proliferation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  2. Quantum Image Processing and Its Application to Edge Detection: Theory and Experiment

    NASA Astrophysics Data System (ADS)

    Yao, Xi-Wei; Wang, Hengyan; Liao, Zeyang; Chen, Ming-Cheng; Pan, Jian; Li, Jun; Zhang, Kechao; Lin, Xingcheng; Wang, Zhehui; Luo, Zhihuang; Zheng, Wenqiang; Li, Jianzhong; Zhao, Meisheng; Peng, Xinhua; Suter, Dieter

    2017-07-01

    Processing of digital images is continuously gaining in volume and relevance, with concomitant demands on data storage, transmission, and processing power. Encoding the image information in quantum-mechanical systems instead of classical ones and replacing classical with quantum information processing may alleviate some of these challenges. By encoding and processing the image information in quantum-mechanical systems, we here demonstrate the framework of quantum image processing, where a pure quantum state encodes the image information: we encode the pixel values in the probability amplitudes and the pixel positions in the computational basis states. Our quantum image representation reduces the required number of qubits compared to existing implementations, and we present image processing algorithms that provide exponential speed-up over their classical counterparts. For the commonly used task of detecting the edge of an image, we propose and implement a quantum algorithm that completes the task with only one single-qubit operation, independent of the size of the image. This demonstrates the potential of quantum image processing for highly efficient image and video processing in the big data era.
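
    The core of the single-operation edge detector can be simulated classically: with pixel values amplitude-encoded, a Hadamard gate on the least significant qubit converts odd-indexed amplitudes into differences of neighboring pixels. A toy sketch of that step (it captures only within-pair differences; the full algorithm also processes a shifted copy to cover the remaining neighbor pairs):

    ```python
    # Classical simulation of the Hadamard edge-detection step: amplitudes of
    # |2k> and |2k+1> become (c_2k + c_2k+1)/sqrt(2) and (c_2k - c_2k+1)/sqrt(2).
    import numpy as np

    def hadamard_edge_1d(pixels):
        c = np.asarray(pixels, dtype=np.float64)   # length must be even
        c = c / np.linalg.norm(c)                  # amplitude encoding
        even, odd = c[0::2], c[1::2]
        out = np.empty_like(c)
        out[0::2] = (even + odd) / np.sqrt(2)      # sums
        out[1::2] = (even - odd) / np.sqrt(2)      # neighbor differences = edges
        return np.abs(out[1::2])                   # edge signal per pixel pair

    # The 1->5 step inside pair (2,3) is detected; the 5->1 step at indices
    # 5-6 falls between pairs and would need the shifted second pass.
    print(hadamard_edge_1d([1, 1, 1, 5, 5, 5, 1, 1]))
    ```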

  3. Digital radiography: spatial and contrast resolution

    NASA Astrophysics Data System (ADS)

    Bjorkholm, Paul; Annis, M.; Frederick, E.; Stein, J.; Swift, R.

    1981-07-01

    The addition of digital image collection and storage to standard and newly developed x-ray imaging techniques has allowed spectacular improvements in some diagnostic procedures. There is no reason to expect that the developments in this area are yet complete. But no matter what further developments occur in this field, all the techniques will share a common element, digital image storage and processing. This common element alone determines some of the important imaging characteristics. These will be discussed using one system, the Medical MICRODOSE System as an example.

  4. Gaussian Process Interpolation for Uncertainty Estimation in Image Registration

    PubMed Central

    Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William

    2014-01-01

    Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
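
    A one-dimensional sketch of the idea, assuming a squared-exponential kernel and a small noise term (both our choices, not the paper's model): the Gaussian-process posterior yields an interpolated mean plus a variance that grows with distance from the base grid points.

    ```python
    # GP regression over base-grid samples: posterior mean interpolates,
    # posterior standard deviation quantifies interpolation uncertainty.
    import numpy as np

    def gp_interpolate(xg, yg, xq, ell=1.0, noise=1e-3):
        k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)
        K = k(xg, xg) + noise * np.eye(len(xg))
        Ks, Kss = k(xq, xg), k(xq, xq)
        mean = Ks @ np.linalg.solve(K, yg)
        cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
        return mean, np.sqrt(np.clip(np.diag(cov), 0, None))

    xg = np.arange(0.0, 10.0)            # base grid
    yg = np.sin(xg)
    xq = np.linspace(0.0, 9.0, 91)       # resampled points
    mu, sd = gp_interpolate(xg, yg, xq)  # sd peaks midway between grid points
    ```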

  5. A survey of camera error sources in machine vision systems

    NASA Astrophysics Data System (ADS)

    Jatko, W. B.

    In machine vision applications, such as an automated inspection line, television cameras are commonly used to record scene intensity in a computer memory or frame buffer. Scene data from the image sensor can then be analyzed with a wide variety of feature-detection techniques. Many algorithms found in textbooks on image processing make the implicit simplifying assumption of an ideal input image with clearly defined edges and uniform illumination. The ideal image model is helpful to aid the student in understanding the principles of operation, but when these algorithms are blindly applied to real-world images the results can be unsatisfactory. This paper examines some common measurement errors found in camera sensors and their underlying causes, and possible methods of error compensation. The role of the camera in a typical image-processing system is discussed, with emphasis on the origination of signal distortions. The effects of such things as lighting, optics, and sensor characteristics are considered.

  6. DICOM to print, 35-mm slides, web, and video projector: tutorial using Adobe Photoshop.

    PubMed

    Gurney, Jud W

    2002-10-01

    Preparing images for publication has dealt with film and the photographic process. With picture archiving and communications systems, many departments will no longer produce film. This will change how images are produced for publication. DICOM, the file format for radiographic images, has to be converted and then prepared for traditional publication, 35-mm slides, the newest techniques of video projection, and the World Wide Web. Tagged image file format is the common format for traditional print publication, whereas joint photographic expert group is the current file format for the World Wide Web. Each medium has specific requirements that can be met with a common image-editing program such as Adobe Photoshop (Adobe Systems, San Jose, CA). High-resolution images are required for print, a process that requires interpolation. However, the Internet requires images with a small file size for rapid transmission. The resolution of each output differs and the image resolution must be optimized to match the output of the publishing medium.

  7. Advanced Digital Forensic and Steganalysis Methods

    DTIC Science & Technology

    2009-02-01

    investigation is simultaneously cropped, scaled, and processed, extending the technology when the digital image is printed, developing technology capable ...or other common processing operations). TECHNOLOGY APPLICATIONS: 1. Determining the origin of digital images 2. Matching an image to a camera...

  8. WE-G-209-03: PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kemp, B.

    2016-06-15

    Digital radiography, CT, PET, and MR are complicated imaging modalities which are composed of many hardware and software components. These components work together in a highly coordinated chain of events with the intent to produce high quality images. Acquisition, processing and reconstruction of data must occur in a precise way for optimum image quality to be achieved. Any error or unexpected event in the entire process can produce unwanted pixel intensities in the final images which may contribute to visible image artifacts. The diagnostic imaging physicist is uniquely qualified to investigate and contribute to resolution of image artifacts. This course will teach the participant to identify common artifacts found clinically in digital radiography, CT, PET, and MR, to determine the causes of artifacts, and to make recommendations for how to resolve artifacts. Learning Objectives: Identify common artifacts found clinically in digital radiography, CT, PET and MR. Determine causes of various clinical artifacts from digital radiography, CT, PET and MR. Describe how to resolve various clinical artifacts from digital radiography, CT, PET and MR.

  9. WE-G-209-02: CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kofler, J.

    2016-06-15

    Digital radiography, CT, PET, and MR are complicated imaging modalities which are composed of many hardware and software components. These components work together in a highly coordinated chain of events with the intent to produce high quality images. Acquisition, processing and reconstruction of data must occur in a precise way for optimum image quality to be achieved. Any error or unexpected event in the entire process can produce unwanted pixel intensities in the final images which may contribute to visible image artifacts. The diagnostic imaging physicist is uniquely qualified to investigate and contribute to resolution of image artifacts. This course will teach the participant to identify common artifacts found clinically in digital radiography, CT, PET, and MR, to determine the causes of artifacts, and to make recommendations for how to resolve artifacts. Learning Objectives: Identify common artifacts found clinically in digital radiography, CT, PET and MR. Determine causes of various clinical artifacts from digital radiography, CT, PET and MR. Describe how to resolve various clinical artifacts from digital radiography, CT, PET and MR.

  10. WE-G-209-04: MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pooley, R.

    2016-06-15

    Digital radiography, CT, PET, and MR are complicated imaging modalities which are composed of many hardware and software components. These components work together in a highly coordinated chain of events with the intent to produce high quality images. Acquisition, processing and reconstruction of data must occur in a precise way for optimum image quality to be achieved. Any error or unexpected event in the entire process can produce unwanted pixel intensities in the final images which may contribute to visible image artifacts. The diagnostic imaging physicist is uniquely qualified to investigate and contribute to resolution of image artifacts. This course will teach the participant to identify common artifacts found clinically in digital radiography, CT, PET, and MR, to determine the causes of artifacts, and to make recommendations for how to resolve artifacts. Learning Objectives: Identify common artifacts found clinically in digital radiography, CT, PET and MR. Determine causes of various clinical artifacts from digital radiography, CT, PET and MR. Describe how to resolve various clinical artifacts from digital radiography, CT, PET and MR.

  11. Super-resolution for scanning light stimulation systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bitzer, L. A.; Neumann, K.; Benson, N., E-mail: niels.benson@uni-due.de

    Super-resolution (SR) is a technique used in digital image processing to overcome the resolution limitation of imaging systems. In this process, a single high resolution image is reconstructed from multiple low resolution images. SR is commonly used for CCD and CMOS (Complementary Metal-Oxide-Semiconductor) sensor images, as well as for medical applications, e.g., magnetic resonance imaging. Here, we demonstrate that super-resolution can be applied with scanning light stimulation (LS) systems, which are commonly used to obtain space-resolved electro-optical parameters of a sample. For our purposes, the Projection Onto Convex Sets (POCS) method was chosen and modified to suit the needs of LS systems. To demonstrate the SR adaptation, an Optical Beam Induced Current (OBIC) LS system was used. The POCS algorithm was optimized by means of OBIC short circuit current measurements on a multicrystalline solar cell, resulting in a mean square error reduction of up to 61% and improved image quality.

  12. Technical Review: Microscopy and Image Processing Tools to Analyze Plant Chromatin: Practical Considerations.

    PubMed

    Baroux, Célia; Schubert, Veit

    2018-01-01

    In situ nucleus and chromatin analyses rely on microscopy imaging that benefits from versatile, efficient fluorescent probes and proteins for static or live imaging. Yet the broad choice in imaging instruments offered to the user poses orientation problems. Which imaging instrument should be used for which purpose? What are the main caveats and what are the considerations to best exploit each instrument's ability to obtain informative and high-quality images? How to infer quantitative information on chromatin or nuclear organization from microscopy images? In this review, we present an overview of common, fluorescence-based microscopy systems and discuss recently developed super-resolution microscopy systems, which are able to bridge the resolution gap between common fluorescence microscopy and electron microscopy. We briefly present their basic principles and discuss their possible applications in the field, while providing experience-based recommendations to guide the user toward best-possible imaging. In addition to raw data acquisition methods, we discuss commercial and noncommercial processing tools required for optimal image presentation and signal evaluation in two and three dimensions.

  13. MO-DE-207-04: Imaging educational program on solutions to common pediatric imaging challenges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnamurthy, R.

    This imaging educational program will focus on solutions to common pediatric imaging challenges. The speakers will present collective knowledge on best practices in pediatric imaging from their experience at dedicated children’s hospitals. The educational program will begin with a detailed discussion of the optimal configuration of fluoroscopes for general pediatric procedures. Following this introduction will be a focused discussion on the utility of Dual Energy CT for imaging children. The third lecture will address the substantial challenge of obtaining consistent image post-processing in pediatric digital radiography. The fourth and final lecture will address best practices in pediatric MRI including a discussion of ancillary methods to reduce sedation and anesthesia rates. Learning Objectives: To learn techniques for optimizing radiation dose and image quality in pediatric fluoroscopy To become familiar with the unique challenges and applications of Dual Energy CT in pediatric imaging To learn solutions for consistent post-processing quality in pediatric digital radiography To understand the key components of an effective MRI safety and quality program for the pediatric practice.

  14. Medical image processing on the GPU - past, present and future.

    PubMed

    Eklund, Anders; Dufort, Paul; Forsberg, Daniel; LaConte, Stephen M

    2013-12-01

    Graphics processing units (GPUs) are used today in a wide range of applications, mainly because they can dramatically accelerate parallel computing, are affordable and energy efficient. In the field of medical imaging, GPUs are in some cases crucial for enabling practical use of computationally demanding algorithms. This review presents the past and present work on GPU accelerated medical image processing, and is meant to serve as an overview and introduction to existing GPU implementations. The review covers GPU acceleration of basic image processing operations (filtering, interpolation, histogram estimation and distance transforms), the most commonly used algorithms in medical imaging (image registration, image segmentation and image denoising) and algorithms that are specific to individual modalities (CT, PET, SPECT, MRI, fMRI, DTI, ultrasound, optical imaging and microscopy). The review ends by highlighting some future possibilities and challenges. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Computer vision applications for coronagraphic optical alignment and image processing.

    PubMed

    Savransky, Dmitry; Thomas, Sandrine J; Poyneer, Lisa A; Macintosh, Bruce A

    2013-05-10

    Modern coronagraphic systems require very precise alignment between optical components and can benefit greatly from automated image processing. We discuss three techniques commonly employed in the fields of computer vision and image analysis as applied to the Gemini Planet Imager, a new facility instrument for the Gemini South Observatory. We describe how feature extraction and clustering methods can be used to aid in automated system alignment tasks, and also present a search algorithm for finding regular features in science images used for calibration and data processing. Along with discussions of each technique, we present our specific implementation and show results of each one in operation.

  16. Embedded, real-time UAV control for improved, image-based 3D scene reconstruction

    Treesearch

    Jean Liénard; Andre Vogs; Demetrios Gatziolis; Nikolay Strigul

    2016-01-01

    Unmanned Aerial Vehicles (UAVs) are already broadly employed for 3D modeling of large objects such as trees and monuments via photogrammetry. The usual workflow includes two distinct steps: image acquisition with UAV and computationally demanding postflight image processing. Insufficient feature overlaps across images is a common shortcoming in post-flight image...

  17. a Comparative Case Study of Reflection Seismic Imaging Method

    NASA Astrophysics Data System (ADS)

    Alamooti, M.; Aydin, A.

    2017-12-01

    Seismic imaging is the most common means of gathering information about subsurface structural features. The accuracy of seismic images may be highly variable depending on the complexity of the subsurface and on how seismic data is processed. One of the crucial steps in this process, especially in layered sequences with complicated structure, is the time and/or depth migration of seismic data. The primary purpose of the migration is to increase the spatial resolution of seismic images by repositioning the recorded seismic signal back to its original point of reflection in time/space, which enhances information about complex structure. In this study, our objective is to process a seismic data set (courtesy of the University of South Carolina) to generate an image on which the Magruder fault near Allendale, SC, can be clearly distinguished and its attitude can be accurately depicted. The data were gathered by the common mid-point method with 60 geophones equally spaced along an approximately 550 m long traverse over nearly flat ground. The results obtained from the application of different migration algorithms (including finite-difference and Kirchhoff) are compared in time and depth domains to investigate the efficiency of each algorithm in reducing the processing time and improving the accuracy of seismic images in reflecting the correct position of the Magruder fault.

  18. Advances in interpretation of subsurface processes with time-lapse electrical imaging

    USGS Publications Warehouse

    Singha, Kamini; Day-Lewis, Frederick D.; Johnson, Tim B.; Slater, Lee D.

    2015-01-01

    Electrical geophysical methods, including electrical resistivity, time-domain induced polarization, and complex resistivity, have become commonly used to image the near subsurface. Here, we outline their utility for time-lapse imaging of hydrological, geochemical, and biogeochemical processes, focusing on new instrumentation, processing, and analysis techniques specific to monitoring. We review data collection procedures, parameters measured, and petrophysical relationships and then outline the state of the science with respect to inversion methodologies, including coupled inversion. We conclude by highlighting recent research focused on innovative applications of time-lapse imaging in hydrology, biology, ecology, and geochemistry, among other areas of interest.

  19. Advances in interpretation of subsurface processes with time-lapse electrical imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singha, Kamini; Day-Lewis, Frederick D.; Johnson, Timothy C.

    2015-03-15

    Electrical geophysical methods, including electrical resistivity, time-domain induced polarization, and complex resistivity, have become commonly used to image the near subsurface. Here, we outline their utility for time-lapse imaging of hydrological, geochemical, and biogeochemical processes, focusing on new instrumentation, processing, and analysis techniques specific to monitoring. We review data collection procedures, parameters measured, and petrophysical relationships and then outline the state of the science with respect to inversion methodologies, including coupled inversion. We conclude by highlighting recent research focused on innovative applications of time-lapse imaging in hydrology, biology, ecology, and geochemistry, among other areas of interest.

  20. WE-G-209-00: Identifying Image Artifacts, Their Causes, and How to Fix Them

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    Digital radiography, CT, PET, and MR are complicated imaging modalities which are composed of many hardware and software components. These components work together in a highly coordinated chain of events with the intent to produce high quality images. Acquisition, processing and reconstruction of data must occur in a precise way for optimum image quality to be achieved. Any error or unexpected event in the entire process can produce unwanted pixel intensities in the final images which may contribute to visible image artifacts. The diagnostic imaging physicist is uniquely qualified to investigate and contribute to resolution of image artifacts. This course will teach the participant to identify common artifacts found clinically in digital radiography, CT, PET, and MR, to determine the causes of artifacts, and to make recommendations for how to resolve artifacts. Learning Objectives: Identify common artifacts found clinically in digital radiography, CT, PET and MR. Determine causes of various clinical artifacts from digital radiography, CT, PET and MR. Describe how to resolve various clinical artifacts from digital radiography, CT, PET and MR.

  1. Fast perceptual image hash based on cascade algorithm

    NASA Astrophysics Data System (ADS)

    Ruchay, Alexey; Kober, Vitaly; Yavtushenko, Evgeniya

    2017-09-01

    In this paper, we propose a perceptual image hash algorithm based on a cascade algorithm, which can be applied in image authentication, retrieval, and indexing. A perceptual image hash is used for image retrieval in the sense of human perception and must be robust against distortions caused by compression, noise, common signal processing, and geometrical modifications. The main disadvantage of perceptual hashing is its high computational cost. The proposed cascade algorithm initializes image retrieval with short hashes and then applies a full hash to the candidate results. Computer simulation results show that the proposed hash algorithm yields good performance in terms of robustness, discriminability, and computation time.
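
    A sketch of the cascade idea under our own illustrative hash design (a coarse 64-bit average hash pruning the database before a finer hash is compared); this is not the authors' algorithm:

    ```python
    # Cascade perceptual hashing: cheap coarse hash first, finer hash on survivors.
    import numpy as np

    def ahash(img, size=8):
        """Average hash: block means thresholded at their overall mean."""
        h, w = img.shape
        blocks = img[:h - h % size, :w - w % size].reshape(
            size, h // size, size, w // size).mean(axis=(1, 3))
        return (blocks > blocks.mean()).ravel()      # size*size boolean bits

    def hamming(a, b):
        return np.count_nonzero(a != b)

    def cascade_search(query, database, coarse_t=10, fine_t=60):
        q8, q16 = ahash(query, 8), ahash(query, 16)
        survivors = [im for im in database
                     if hamming(ahash(im, 8), q8) <= coarse_t]   # cheap pass
        return [im for im in survivors
                if hamming(ahash(im, 16), q16) <= fine_t]        # full hash
    ```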

  2. Using quantum filters to process images of diffuse axonal injury

    NASA Astrophysics Data System (ADS)

    Pineda Osorio, Mateo

    2014-06-01

    Some images corresponding to diffuse axonal injury (DAI) are processed using several quantum filters such as Hermite, Weibull, and Morse. Diffuse axonal injury is a particular, common, and severe case of traumatic brain injury (TBI). DAI involves global damage to brain tissue on a microscopic scale and causes serious neurologic abnormalities. New imaging techniques provide excellent images showing cellular damage related to DAI. Such images can be processed with quantum filters, which achieve high resolution of dendritic and axonal structures in both normal and pathological states. Using the Laplacian operators from the new quantum filters, excellent edge detectors for neurofiber resolution are obtained. Quantum processing of DAI images is performed using computer algebra, specifically Maple. Construction of quantum filter plugins is proposed as a future research line; these could be incorporated into the ImageJ software package, making their use simpler for medical personnel.

  3. Age effect in generating mental images of buildings but not common objects.

    PubMed

    Piccardi, L; Nori, R; Palermo, L; Guariglia, C; Giusberti, F

    2015-08-18

    Imagining a familiar environment is different from imagining an environmental map, and clinical evidence has demonstrated the existence of double dissociations in brain-damaged patients due to the contents of mental images. Here, we assessed a large sample of young and old participants by considering their ability to generate different kinds of mental images, namely, buildings or common objects. As buildings are environmental stimuli that have an important role in human navigation, we expected that elderly participants would have greater difficulty in generating images of buildings than common objects. We found that young and older participants differed in generating both buildings and common objects. For young participants there were no differences between buildings and common objects, but older participants found it easier to generate common objects than buildings. Buildings are a special type of visual stimuli because in urban environments they are commonly used as landmarks for navigational purposes. Considering that topographical orientation is one of the abilities most affected in normal and pathological aging, the present data throw some light on the impaired processes underlying human navigation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  4. Quantitative Imaging in Laboratory: Fast Kinetics and Fluorescence Quenching

    ERIC Educational Resources Information Center

    Cumberbatch, Tanya; Hanley, Quentin S.

    2007-01-01

    The process of quantitative imaging, which is very commonly used in the laboratory, is shown to be very useful for studying fast kinetics and fluorescence quenching in many experiments. The imaging technique is extremely cheap and hence can be used in many absorption and luminescence experiments.

  5. Information theoretic analysis of linear shift-invariant edge-detection operators

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Rahman, Zia-ur

    2012-06-01

    Generally, the designs of digital image processing algorithms and image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the influences by the image gathering process. However, experiments show that the image gathering process has a profound impact on the performance of digital image processing and the quality of the resulting images. Huck et al. proposed one definitive theoretic analysis of visual communication channels, where the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. We perform an end-to-end information theory based system analysis to assess linear shift-invariant edge-detection algorithms. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and the parameters, such as sampling, additive noise etc., that define the image gathering system. The edge-detection algorithm is regarded as having high performance only if the information rate from the scene to the edge image approaches its maximum possible. This goal can be achieved only by jointly optimizing all processes. Our information-theoretic assessment provides a new tool that allows us to compare different linear shift-invariant edge detectors in a common environment.

  6. Improved high-throughput quantification of luminescent microplate assays using a common Western-blot imaging system.

    PubMed

    Hawkins, Liam J; Storey, Kenneth B

    2017-01-01

    Common Western-blot imaging systems have previously been adapted to measure signals from luminescent microplate assays. This can be a cost saving measure as Western-blot imaging systems are common laboratory equipment and could substitute for a dedicated luminometer if one is not otherwise available. One previously unrecognized limitation is that the signals captured by the cameras in these systems are not equal for all wells. Signals are dependent on the angle of incidence to the camera, and thus the location of the well on the microplate. Here we show that: •The position of a well on a microplate significantly affects the signal captured by a common Western-blot imaging system from a luminescent assay. •The effect of well position can easily be corrected for. •This method can be applied to commercially available luminescent assays, allowing for high-throughput quantification of a wide range of biological processes and biochemical reactions.
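
    A minimal sketch of such a correction, assuming a calibration plate in which every well holds the same standard (the 8x12 array shape for a 96-well plate and the division-based correction are our assumptions, not the authors' published procedure):

    ```python
    # Per-well correction factors from a uniform calibration plate.
    import numpy as np

    def well_correction_factors(uniform_plate):
        """uniform_plate: 8x12 signals from identical standards in every well."""
        return uniform_plate / uniform_plate.mean()   # relative capture efficiency

    def correct_plate(raw_plate, factors):
        return raw_plate / factors                    # position-corrected signals
    ```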

  7. WE-G-209-01: Digital Radiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schueler, B.

    Digital radiography, CT, PET, and MR are complicated imaging modalities which are composed of many hardware and software components. These components work together in a highly coordinated chain of events with the intent to produce high quality images. Acquisition, processing and reconstruction of data must occur in a precise way for optimum image quality to be achieved. Any error or unexpected event in the entire process can produce unwanted pixel intensities in the final images which may contribute to visible image artifacts. The diagnostic imaging physicist is uniquely qualified to investigate and contribute to resolution of image artifacts. This course will teach the participant to identify common artifacts found clinically in digital radiography, CT, PET, and MR, to determine the causes of artifacts, and to make recommendations for how to resolve artifacts. Learning Objectives: Identify common artifacts found clinically in digital radiography, CT, PET and MR. Determine causes of various clinical artifacts from digital radiography, CT, PET and MR. Describe how to resolve various clinical artifacts from digital radiography, CT, PET and MR.

  8. Single image interpolation via adaptive nonlocal sparsity-based modeling.

    PubMed

    Romano, Yaniv; Protter, Matan; Elad, Michael

    2014-07-01

    Single image interpolation is a central and extensively studied problem in image processing. A common approach toward the treatment of this problem in recent years is to divide the given image into overlapping patches and process each of them based on a model for natural image patches. Adaptive sparse representation modeling is one such promising image prior, which has been shown to be powerful in filling-in missing pixels in an image. Another force that such algorithms may use is the self-similarity that exists within natural images. Processing groups of related patches together exploits their correspondence, often leading to improved results. In this paper, we propose a novel image interpolation method which combines these two forces: nonlocal self-similarities and sparse representation modeling. The proposed method is contrasted with competitive and related algorithms, and demonstrated to achieve state-of-the-art results.

  9. Identifying regions of interest in medical images using self-organizing maps.

    PubMed

    Teng, Wei-Guang; Chang, Ping-Lin

    2012-10-01

    Advances in data acquisition, processing and visualization techniques have had a tremendous impact on medical imaging in recent years. However, the interpretation of medical images is still almost always performed by radiologists. Developments in artificial intelligence and image processing have shown the increasingly great potential of computer-aided diagnosis (CAD). Nevertheless, it has remained challenging to develop a general approach to process various commonly used types of medical images (e.g., X-ray, MRI, and ultrasound images). To facilitate diagnosis, we recommend the use of image segmentation to discover regions of interest (ROI) using self-organizing maps (SOM). We devise a two-stage SOM approach that can be used to precisely identify the dominant colors of a medical image and then segment it into several small regions. In addition, by appropriately conducting the recursive merging steps to merge smaller regions into larger ones, radiologists can usually identify one or more ROIs within a medical image.
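
    A compact sketch of the two-stage idea: a small one-dimensional self-organizing map learns the dominant colors, then each pixel is labeled by its nearest map node to form regions. Map size, subsampling, and the learning schedule are our assumptions, not the authors' configuration:

    ```python
    # Tiny 1-D SOM for dominant colors, then nearest-node pixel labeling.
    import numpy as np

    def train_som(pixels, nodes=8, epochs=5, lr0=0.5):
        """pixels: (N, C) array, e.g. img.reshape(-1, 3) for an RGB image."""
        rng = np.random.default_rng(0)
        w = pixels[rng.choice(len(pixels), nodes, replace=False)].astype(np.float64)
        for e in range(epochs):
            lr = lr0 * (1 - e / epochs)
            radius = max(nodes // 2 * (1 - e / epochs), 1)
            for p in pixels[rng.permutation(len(pixels))[:2000]]:  # subsample
                bmu = np.argmin(np.linalg.norm(w - p, axis=1))     # best match
                for j in range(nodes):                             # neighborhood
                    d = abs(j - bmu)
                    if d <= radius:
                        w[j] += lr * np.exp(-d**2 / (2 * radius**2)) * (p - w[j])
        return w

    def segment(img, w):
        flat = img.reshape(-1, img.shape[-1]).astype(np.float64)
        labels = np.argmin(((flat[:, None, :] - w[None]) ** 2).sum(-1), axis=1)
        return labels.reshape(img.shape[:2])
    ```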

  10. Reducing Speckle In One-Look SAR Images

    NASA Technical Reports Server (NTRS)

    Nathan, K. S.; Curlander, J. C.

    1990-01-01

    Local-adaptive-filter algorithm incorporated into digital processing of synthetic-aperture-radar (SAR) echo data to reduce speckle in resulting imagery. Involves use of image statistics in vicinity of each picture element, in conjunction with original intensity of element, to estimate brightness more nearly proportional to true radar reflectance of corresponding target. Increases ratio of signal to speckle noise without substantial degradation of resolution common to multilook SAR images. Adapts to local variations of statistics within scene, preserving subtle details. Computationally simple. Lends itself to parallel processing of different segments of image, making possible increased throughput.
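
    The brief does not name the exact estimator, but the description matches local-statistics speckle filters of the Lee type; a minimal SciPy sketch under that assumption, with an illustrative window size and a crude global noise estimate:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def lee_filter(img, size=7):
            """Local-adaptive speckle filter: blend each pixel with its local mean
            according to the ratio of local signal variance to noise variance."""
            mean = uniform_filter(img, size)
            sq_mean = uniform_filter(img ** 2, size)
            var = sq_mean - mean ** 2                 # local variance around each pixel
            noise_var = np.mean(var)                  # crude global noise estimate
            gain = np.clip((var - noise_var) / np.maximum(var, 1e-12), 0, 1)
            return mean + gain * (img - mean)         # flat areas -> mean, edges -> original

        sar = np.abs(np.random.randn(256, 256)) ** 2  # stand-in for one-look SAR intensity
        despeckled = lee_filter(sar)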

  11. A new method of cardiographic image segmentation based on grammar

    NASA Astrophysics Data System (ADS)

    Hamdi, Salah; Ben Abdallah, Asma; Bedoui, Mohamed H.; Alimi, Adel M.

    2011-10-01

    The measurement of the most common ultrasound parameters, such as aortic area, mitral area and left ventricle (LV) volume, requires the delineation of the organ in order to estimate the area. In terms of medical image processing this translates into the need to segment the image and define the contours as accurately as possible. The aim of this work is to segment an image and make an automated area estimation based on grammar. The entity "language" will be projected to the entity "image" to perform structural analysis and parsing of the image. We will show how the idea of segmentation and grammar-based area estimation is applied to real problems of cardiographic image processing.

  12. AstroImageJ: Image Processing and Photometric Extraction for Ultra-precise Astronomical Light Curves

    NASA Astrophysics Data System (ADS)

    Collins, Karen A.; Kielkopf, John F.; Stassun, Keivan G.; Hessman, Frederic V.

    2017-02-01

    ImageJ is a graphical user interface (GUI) driven, public domain, Java-based, software package for general image processing traditionally used mainly in life sciences fields. The image processing capabilities of ImageJ are useful and extendable to other scientific fields. Here we present AstroImageJ (AIJ), which provides an astronomy specific image display environment and tools for astronomy specific image calibration and data reduction. Although AIJ maintains the general purpose image processing capabilities of ImageJ, AIJ is streamlined for time-series differential photometry, light curve detrending and fitting, and light curve plotting, especially for applications requiring ultra-precise light curves (e.g., exoplanet transits). AIJ reads and writes standard Flexible Image Transport System (FITS) files, as well as other common image formats, provides FITS header viewing and editing, and is World Coordinate System aware, including an automated interface to the astrometry.net web portal for plate solving images. AIJ provides research grade image calibration and analysis tools with a GUI driven approach, and easily installed cross-platform compatibility. It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data with one tightly integrated software package.

  13. Optical Processing of Speckle Images with Bacteriorhodopsin for Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Tucker, Deanne (Technical Monitor)

    1994-01-01

    Logarithmic processing of images with multiplicative noise characteristics can be utilized to transform the image into one with an additive noise distribution. This simplifies subsequent image processing steps for applications such as image restoration or correlation for pattern recognition. One particularly common form of multiplicative noise is speckle, for which the logarithmic operation not only produces additive noise, but also makes it of constant variance (signal-independent). We examine the optical transmission properties of some bacteriorhodopsin films here and find them well suited to implement such a pointwise logarithmic transformation optically in a parallel fashion. We present experimental results of the optical conversion of speckle images into transformed images with additive, signal-independent noise statistics using the real-time photochromic properties of bacteriorhodopsin. We provide an example of improved correlation performance in terms of correlation peak signal-to-noise for such a transformed speckle image.
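
    The pointwise principle is easy to verify numerically (digitally here, rather than optically with a bacteriorhodopsin film); the scene and one-look speckle statistics below are simulated:

        import numpy as np

        rng = np.random.default_rng(1)
        scene = rng.uniform(0.1, 1.0, 100000)            # underlying reflectance
        speckle = rng.exponential(1.0, scene.shape)      # multiplicative one-look speckle
        observed = scene * speckle

        log_obs = np.log(observed)                       # the log turns the product into a sum:
        # log_obs = log(scene) + log(speckle), i.e. additive noise whose variance
        # no longer depends on the local signal level (signal-independent).
        print(np.var(np.log(speckle)))                   # constant, regardless of the scene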

  14. Image reconstruction: an overview for clinicians.

    PubMed

    Hansen, Michael S; Kellman, Peter

    2015-03-01

    Image reconstruction plays a critical role in the clinical use of magnetic resonance imaging (MRI). The MRI raw data is not acquired in image space and the role of the image reconstruction process is to transform the acquired raw data into images that can be interpreted clinically. This process involves multiple signal processing steps that each have an impact on the image quality. This review explains the basic terminology used for describing and quantifying image quality in terms of signal-to-noise ratio and point spread function. In this context, several commonly used image reconstruction components are discussed. The image reconstruction components covered include noise prewhitening for phased array data acquisition, interpolation needed to reconstruct square pixels, raw data filtering for reducing Gibbs ringing artifacts, Fourier transforms connecting the raw data with image space, and phased array coil combination. The treatment of phased array coils includes a general explanation of parallel imaging as a coil combination technique. The review is aimed at readers with no signal processing experience and should enable them to understand what role basic image reconstruction steps play in the formation of clinical images and how the resulting image quality is described. © 2014 Wiley Periodicals, Inc.
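
    Two of the components named above, raw data filtering and the Fourier transform connecting k-space with image space, can be sketched in a few lines of NumPy; the Hamming apodization window is an illustrative choice, and noise prewhitening and coil combination are omitted:

        import numpy as np

        def reconstruct(kspace):
            """Apodize k-space to reduce Gibbs ringing, then FFT to image space."""
            ny, nx = kspace.shape
            win = np.outer(np.hamming(ny), np.hamming(nx))   # raw data filter
            img = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace * win)))
            return np.abs(img)

        kspace = np.fft.fftshift(np.fft.fft2(np.random.rand(128, 128)))  # stand-in raw data
        image = reconstruct(kspace)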

  15. Evidential Reasoning in Expert Systems for Image Analysis.

    DTIC Science & Technology

    1985-02-01

    techniques to image analysis (IA). There is growing evidence that these techniques offer significant improvements in image analysis, particularly in the...(2) to provide a common framework for analysis, (3) to structure the ER process for major expert-system tasks in image analysis, and (4) to identify...approaches to three important tasks for expert systems in the domain of image analysis. This segment concluded with an assessment of the strengths

  16. Computer-aided light sheet flow visualization using photogrammetry

    NASA Technical Reports Server (NTRS)

    Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.

    1994-01-01

    A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and a visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) results, was chosen to interactively display the reconstructed light sheet images with the numerical surface geometry for the model or aircraft under study. The photogrammetric reconstruction technique and the image processing and computer graphics techniques and equipment are described. Results of the computer-aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images with CFD solutions in the same graphics environment is also demonstrated.
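
    The geometric core of the reconstruction, back-projecting an image point and intersecting the resulting ray with the known light sheet plane, can be sketched with a simple pinhole model; the calibration values below are illustrative stand-ins, not the paper's, and lens distortion is ignored.

        import numpy as np

        K = np.array([[1000., 0., 320.],          # intrinsics: focal length, principal point
                      [0., 1000., 240.],
                      [0., 0., 1.]])
        R, t = np.eye(3), np.zeros(3)             # camera pose (world -> camera)
        plane_n, plane_d = np.array([0., 1., 0.]), -0.5   # light sheet plane: n.x + d = 0

        def backproject(u, v):
            """Intersect the viewing ray through pixel (u, v) with the light sheet plane."""
            ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
            ray_world = R.T @ ray_cam                     # rotate ray into world frame
            origin = -R.T @ t                             # camera center in world frame
            s = -(plane_n @ origin + plane_d) / (plane_n @ ray_world)
            return origin + s * ray_world                 # 3-D point on the sheet

        print(backproject(400, 300))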

  17. Computer-Aided Light Sheet Flow Visualization

    NASA Technical Reports Server (NTRS)

    Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.

    1993-01-01

    A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) data sets, was chosen to interactively display the reconstructed light sheet images, along with the numerical surface geometry for the model or aircraft under study. A description is provided of the photogrammetric reconstruction technique, and the image processing and computer graphics techniques and equipment. Results of the computer aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images and CFD solutions in the same graphics environment is also demonstrated.

  19. Hardware architecture design of image restoration based on time-frequency domain computation

    NASA Astrophysics Data System (ADS)

    Wen, Bo; Zhang, Jing; Jiao, Zipeng

    2013-10-01

    Image restoration algorithms based on time-frequency domain computation (TFDC) are mature and widely applied in engineering. To support high-speed implementation of these algorithms, a TFDC hardware architecture is proposed. First, the main module is designed by analyzing the processing steps and numerical calculations the algorithms share. Then, to improve generality, an iteration control module is planned for iterative algorithms. In addition, to reduce computational cost and memory requirements, optimizations are suggested for the time-consuming modules, namely the two-dimensional FFT/IFFT and the complex-number arithmetic. Finally, the TFDC hardware architecture is adopted for the hardware design of a real-time image restoration system. The results show that the TFDC hardware architecture and its optimizations can be applied to TFDC-based image restoration algorithms with good algorithm generality, hardware realizability, and high efficiency.

  20. FAST: framework for heterogeneous medical image computing and visualization.

    PubMed

    Smistad, Erik; Bozorgi, Mohammadmehdi; Lindseth, Frank

    2015-11-01

    Computer systems are becoming increasingly heterogeneous in the sense that they consist of different processors, such as multi-core CPUs and graphics processing units (GPUs). As the amount of medical image data increases, it is crucial to exploit the computational power of these processors. However, this is currently difficult due to several factors, such as driver errors, processor differences, and the need for low-level memory handling. This paper presents a novel FrAmework for heterogeneouS medical image compuTing and visualization (FAST). The framework aims to make it easier to simultaneously process and visualize medical images efficiently on heterogeneous systems. FAST uses common image processing programming paradigms and hides the details of memory handling from the user, while enabling the use of all processors and cores on a system. The framework is open-source, cross-platform and available online. Code examples and performance measurements are presented to show the simplicity and efficiency of FAST. The results are compared to the Insight Toolkit (ITK) and the Visualization Toolkit (VTK) and show that the presented framework is faster, with up to a 20-fold speedup on several common medical imaging algorithms. FAST enables efficient medical image computing and visualization on heterogeneous systems. Code examples and performance evaluations have demonstrated that the toolkit is both easy to use and performs better than existing frameworks, such as ITK and VTK.

  1. Hybrid Discrete Wavelet Transform and Gabor Filter Banks Processing for Features Extraction from Biomedical Images

    PubMed Central

    Lahmiri, Salim; Boukadoum, Mounir

    2013-01-01

    A new methodology for automatic feature extraction from biomedical images and subsequent classification is presented. The approach exploits the spatial orientation of high-frequency textural features of the processed image as determined by a two-step process. First, the two-dimensional discrete wavelet transform (DWT) is applied to obtain the HH high-frequency subband image. Then, a Gabor filter bank is applied to the latter at different frequencies and spatial orientations to obtain new Gabor-filtered images whose entropy and uniformity are computed. Finally, the obtained statistics are fed to a support vector machine (SVM) binary classifier. The approach was validated on mammograms, retina images, and brain magnetic resonance (MR) images. The obtained classification accuracies show better performance in comparison to common approaches that use only the DWT or Gabor filter banks for feature extraction. PMID:27006906
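
    A minimal sketch of the two-step feature pipeline with PyWavelets, scikit-image, and scikit-learn; the wavelet choice, Gabor frequencies and orientations, and histogram binning are illustrative assumptions, not the paper's exact settings, and the training data here is random.

        import numpy as np
        import pywt
        from skimage.filters import gabor
        from sklearn.svm import SVC

        def features(img):
            """HH subband of a 2-D DWT, then entropy/uniformity of Gabor responses."""
            _, (_, _, hh) = pywt.dwt2(img, 'db4')          # HH = diagonal high-frequency subband
            feats = []
            for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
                real, _ = gabor(hh, frequency=0.2, theta=theta)
                hist, _ = np.histogram(real, bins=32, density=True)
                p = hist[hist > 0] / hist[hist > 0].sum()
                feats += [-(p * np.log2(p)).sum(), (p ** 2).sum()]   # entropy, uniformity
            return np.array(feats)

        # Train a binary SVM on the extracted texture features (random stand-in data).
        X = np.array([features(np.random.rand(128, 128)) for _ in range(20)])
        y = np.repeat([0, 1], 10)
        clf = SVC(kernel='rbf').fit(X, y)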

  2. A new machine classification method applied to human peripheral blood leukocytes

    NASA Technical Reports Server (NTRS)

    Rorvig, Mark E.; Fitzpatrick, Steven J.; Vitthal, Sanjay; Ladoulis, Charles T.

    1994-01-01

    Human beings judge images by complex mental processes, whereas computing machines extract features. By reducing scaled human judgments and machine extracted features to a common metric space and fitting them by regression, the judgments of human experts rendered on a sample of images may be imposed on an image population to provide automatic classification.
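
    A minimal sketch of the idea with scikit-learn, using synthetic data in place of leukocyte images: fit a regression from machine-extracted features to scaled expert judgments, then impose the fitted judgment scale on an unlabeled population. The feature count, coefficients, bin edges, and the linear model are all assumptions of this sketch.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        features = rng.random((40, 5))                 # machine-extracted features per sample image
        judgments = features @ np.array([2., -1., 0.5, 0., 3.]) + rng.normal(0, 0.1, 40)

        model = LinearRegression().fit(features, judgments)    # map features -> judgment scale
        population_features = rng.random((1000, 5))
        predicted_scores = model.predict(population_features)  # impose expert scale on population
        classes = np.digitize(predicted_scores, bins=[1.0, 2.5, 4.0])  # automatic classification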

  3. Image Navigation and Registration Performance Assessment Evaluation Tools for GOES-R ABI and GLM

    NASA Technical Reports Server (NTRS)

    Houchin, Scott; Porter, Brian; Graybill, Justin; Slingerland, Philip

    2017-01-01

    The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24 hour evaluation period. This paper describes the software design and implementation of IPATS and provides preliminary test results.

  4. The AAPM/RSNA physics tutorial for residents: digital fluoroscopy.

    PubMed

    Pooley, R A; McKinney, J M; Miller, D A

    2001-01-01

    A digital fluoroscopy system is most commonly configured as a conventional fluoroscopy system (tube, table, image intensifier, video system) in which the analog video signal is converted to and stored as digital data. Other methods of acquiring the digital data (eg, digital or charge-coupled device video and flat-panel detectors) will become more prevalent in the future. Fundamental concepts related to digital imaging in general include binary numbers, pixels, and gray levels. Digital image data allow the convenient use of several image processing techniques including last image hold, gray-scale processing, temporal frame averaging, and edge enhancement. Real-time subtraction of digital fluoroscopic images after injection of contrast material has led to widespread use of digital subtraction angiography (DSA). Additional image processing techniques used with DSA include road mapping, image fade, mask pixel shift, frame summation, and vessel size measurement. Peripheral angiography performed with an automatic moving table allows imaging of the peripheral vasculature with a single contrast material injection.
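
    Two of these digital techniques, recursive temporal frame averaging and DSA mask subtraction, can be sketched in NumPy; the weighting factor and the logarithmic scaling are illustrative assumptions:

        import numpy as np

        def temporal_average(frames, alpha=0.25):
            """Recursive frame averaging: out = alpha*new + (1-alpha)*previous.
            Reduces quantum noise at the cost of motion lag."""
            avg = frames[0].astype(float)
            for f in frames[1:]:
                avg = alpha * f + (1 - alpha) * avg
            return avg

        def dsa(mask, contrast_frame, eps=1e-6):
            """Digital subtraction angiography: log-subtract the pre-contrast mask
            so that overlying anatomy cancels and only the vessels remain."""
            return np.log(contrast_frame + eps) - np.log(mask + eps)

        frames = [np.random.poisson(100, (256, 256)) for _ in range(8)]  # stand-in fluoro frames
        smooth = temporal_average(frames)
        vessels = dsa(frames[0], frames[-1])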

  5. Research on fast Fourier transforms algorithm of huge remote sensing image technology with GPU and partitioning technology.

    PubMed

    Yang, Xue; Li, Xue-You; Li, Jia-Guo; Ma, Jun; Zhang, Li; Yang, Jan; Du, Quan-Ye

    2014-02-01

    The fast Fourier transform (FFT) is a basic approach to remote sensing image processing. With the improvement of remote sensing image capture, featuring hyperspectral data, high spatial resolution and high temporal resolution, how to use FFT technology to efficiently process huge remote sensing images has become a critical step and research hot spot in current image processing technology. The FFT algorithm, one of the basic algorithms of image processing, can be used for stripe noise removal, image compression, image registration, etc. in remote sensing image processing. CUFFT is an FFT function library based on the GPU, while FFTW is an FFT algorithm library developed for the CPU on the PC platform and is currently the fastest CPU-based FFT function library. However, both methods share a common problem: once the available video memory or main memory is smaller than the image, out-of-memory errors or memory overflow occur when using them to compute the image FFT. To address this problem, a Huge Remote Fast Fourier Transform (HRFFT) algorithm based on the GPU and partitioning technology is proposed in this paper. By improving the FFT algorithm in the CUFFT function library, the problem of out-of-memory errors and memory overflow is solved. Moreover, the method is shown to be sound by experiments with CCD images from the HJ-1A satellite. When applied to practical image processing, it improves the quality of the results and speeds up the processing, which saves computation time and achieves sound results.
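
    The row-column decomposition that makes such partitioning possible can be sketched in a few lines: a 2-D FFT is a 1-D FFT over rows followed by a 1-D FFT over columns, so each pass can stream through the image in blocks that fit in memory. NumPy on disk-backed arrays stands in here for the paper's GPU/CUFFT implementation, and the file names are hypothetical.

        import numpy as np

        def chunked_fft2(img_path, shape, chunk=256):
            """Two-pass 2-D FFT of an image too large to transform in one call."""
            img = np.memmap(img_path, dtype=np.float32, mode='r', shape=shape)
            out = np.lib.format.open_memmap('fft_tmp.npy', mode='w+',
                                            dtype=np.complex64, shape=shape)
            for r in range(0, shape[0], chunk):               # pass 1: FFT along rows
                out[r:r + chunk] = np.fft.fft(img[r:r + chunk], axis=1)
            for c in range(0, shape[1], chunk):               # pass 2: FFT along columns
                out[:, c:c + chunk] = np.fft.fft(out[:, c:c + chunk], axis=0)
            return out

        # Sanity check against the in-memory result (for sizes that do fit):
        # np.allclose(chunked_fft2('img.dat', (4096, 4096)), np.fft.fft2(full_image), atol=1e-2)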

  6. Angle-domain common-image gathers from anisotropic Gaussian beam migration and its application to anisotropy-induced imaging errors analysis

    NASA Astrophysics Data System (ADS)

    Han, Jianguang; Wang, Yun; Yu, Changqing; Chen, Peng

    2017-02-01

    An approach for extracting angle-domain common-image gathers (ADCIGs) from anisotropic Gaussian beam prestack depth migration (GB-PSDM) is presented in this paper. The propagation angle is calculated in the process of migration using the real-valued traveltime information of the Gaussian beam. Based on the above, we further investigate the effects of anisotropy on GB-PSDM, where the corresponding ADCIGs are extracted to assess the quality of migration images. The test results of the VTI syncline model and the TTI thrust sheet model show that the anisotropic parameters ε, δ, and tilt angle θ have a great influence on the accuracy of the migrated image in anisotropic media, and ignoring any one of them will cause obvious imaging errors. The anisotropic GB-PSDM with the true anisotropic parameters can obtain more accurate seismic images of subsurface structures in anisotropic media.

  7. Methods and apparatuses for detection of radiation with semiconductor image sensors

    DOEpatents

    Cogliati, Joshua Joseph

    2018-04-10

    A semiconductor image sensor is repeatedly exposed to high-energy photons while a visible light obstructer is in place to block visible light from impinging on the sensor to generate a set of images from the exposures. A composite image is generated from the set of images with common noise substantially removed so the composite image includes image information corresponding to radiated pixels that absorbed at least some energy from the high-energy photons. The composite image is processed to determine a set of bright points in the composite image, each bright point being above a first threshold. The set of bright points is processed to identify lines with two or more bright points that include pixels therebetween that are above a second threshold and identify a presence of the high-energy particles responsive to a number of lines.
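
    One plausible reading of the processing chain, sketched in NumPy: the per-pixel median over the exposure set estimates the noise common to all exposures, and what survives its removal is thresholded as candidate radiation hits. The threshold rule is an assumption of this sketch, and the patent's second-threshold collinearity test for line identification is omitted.

        import numpy as np

        def detect_hits(exposures, k_sigma=5.0):
            """Composite the shuttered exposures, remove common noise, threshold bright points."""
            stack = np.stack(exposures).astype(float)
            common = np.median(stack, axis=0)            # noise shared by all exposures
            residual = stack - common                    # per-exposure radiation signal
            composite = residual.max(axis=0)             # strongest deviation per pixel
            thresh = composite.mean() + k_sigma * composite.std()
            return np.argwhere(composite > thresh)       # (row, col) of candidate hits

        frames = [np.random.normal(10, 2, (512, 512)) for _ in range(16)]  # light-blocked sensor
        frames[3][100, 200:210] += 80                    # simulated particle track (a line)
        print(detect_hits(frames))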

  8. Effects of Task Requirements on Rapid Natural Scene Processing: From Common Sensory Encoding to Distinct Decisional Mechanisms

    ERIC Educational Resources Information Center

    Bacon-Mace, Nadege; Kirchner, Holle; Fabre-Thorpe, Michele; Thorpe, Simon J.

    2007-01-01

    Using manual responses, human participants are remarkably fast and accurate at deciding if a natural scene contains an animal, but recent data show that they are even faster to indicate with saccadic eye movements which of 2 scenes contains an animal. How could it be that 2 images can apparently be processed faster than a single image? To better…

  9. A Tentative Application Of Morphological Filters To Time-Varying Images

    NASA Astrophysics Data System (ADS)

    Billard, D.; Poquillon, B.

    1989-03-01

    In this paper, morphological filters, which are commonly used to process either 2D or multidimensional static images, are generalized to the analysis of time-varying image sequences. The introduction of the time dimension then induces interesting properties when designing such spatio-temporal morphological filters. In particular, the specification of spatio-temporal structuring elements (equivalent to time-varying spatial structuring elements) can be adjusted according to the temporal variations of the image sequences to be processed: this allows deriving specific morphological transforms to perform noise filtering or moving-object discrimination on dynamic images viewed by a non-stationary sensor. First, a brief introduction to the basic principles underlying morphological filters is given. Then, a straightforward generalization of these principles to time-varying images is proposed. This leads us to define spatio-temporal opening and closing and to introduce some of their possible applications to processing dynamic images. At last, preliminary results obtained using a natural forward-looking infrared (FLIR) image sequence are presented.
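
    A minimal sketch of the generalization with SciPy: treating the sequence as a 3-D (time, row, col) array, a spatio-temporal filter is simply grayscale morphology with a structuring element that extends along the time axis. The structuring-element sizes are illustrative assumptions.

        import numpy as np
        from scipy.ndimage import grey_opening

        seq = np.random.rand(16, 128, 128)                # (time, row, col) image stack
        seq[5:9, 60:64, 60:64] += 2.0                     # transient bright object

        denoised = grey_opening(seq, size=(1, 3, 3))      # purely spatial opening, frame by frame
        movers = seq - grey_opening(seq, size=(7, 1, 1))  # temporal opening keeps only structures
        # persisting 7+ frames, so the residual isolates the transient/moving component.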

  10. Current advances in molecular imaging: noninvasive in vivo bioluminescent and fluorescent optical imaging in cancer research.

    PubMed

    Choy, Garry; Choyke, Peter; Libutti, Steven K

    2003-10-01

    Recently, there has been tremendous interest in developing techniques such as MRI, micro-CT, micro-PET, and SPECT to image function and processes in small animals. These technologies offer deep tissue penetration and high spatial resolution, but compared with noninvasive small animal optical imaging, these techniques are very costly and time consuming to implement. Optical imaging is cost-effective, rapid, easy to use, and can be readily applied to studying disease processes and biology in vivo. In vivo optical imaging is the result of a coalescence of technologies from chemistry, physics, and biology. The development of highly sensitive light detection systems has allowed biologists to use imaging in studying physiological processes. Over the last few decades, biochemists have also worked to isolate and further develop optical reporters such as GFP, luciferase, and cyanine dyes. This article reviews the common types of fluorescent and bioluminescent optical imaging, the typical system platforms and configurations, and the applications in the investigation of cancer biology.

  11. Condenser for illuminating a ringfield camera with synchrotron emission light

    DOEpatents

    Sweatt, W.C.

    1996-04-30

    The present invention relates generally to the field of condensers for collecting light from a synchrotron radiation source and directing the light into a ringfield of a lithography camera. The present invention discloses a condenser comprising collecting, processing, and imaging optics. The collecting optics are comprised of concave and convex spherical mirrors that collect the light beams. The processing optics, which receive the light beams, are comprised of flat mirrors that converge and direct the light beams into a real entrance pupil of the camera in a symmetrical pattern. In the real entrance pupil are located flat mirrors, common to the beams emitted from the preceding mirrors, for generating substantially parallel light beams and for directing the beams toward the ringfield of a camera. Finally, the imaging optics are comprised of a spherical mirror, also common to the beams emitted from the preceding mirrors, which images the real entrance pupil through the resistive mask and into the virtual entrance pupil of the camera. Thus, the condenser is comprised of a plurality of beams with four mirrors corresponding to a single beam plus two common mirrors. 9 figs.

  12. Condenser for illuminating a ringfield camera with synchrotron emission light

    DOEpatents

    Sweatt, William C.

    1996-01-01

    The present invention relates generally to the field of condensers for collecting light from a synchrotron radiation source and directing the light into a ringfield of a lithography camera. The present invention discloses a condenser comprising collecting, processing, and imaging optics. The collecting optics are comprised of concave and convex spherical mirrors that collect the light beams. The processing optics, which receive the light beams, are comprised of flat mirrors that converge and direct the light beams into a real entrance pupil of the camera in a symmetrical pattern. In the real entrance pupil are located flat mirrors, common to the beams emitted from the preceding mirrors, for generating substantially parallel light beams and for directing the beams toward the ringfield of a camera. Finally, the imaging optics are comprised of a spherical mirror, also common to the beams emitted from the preceding mirrors, which images the real entrance pupil through the resistive mask and into the virtual entrance pupil of the camera. Thus, the condenser is comprised of a plurality of beams with four mirrors corresponding to a single beam plus two common mirrors.

  13. Application on-line imagery for photogrammetry comparison of natural hazards events

    NASA Astrophysics Data System (ADS)

    Voumard, Jérémie; Jaboyedoff, Michel; Derron, Marc-Henri

    2015-04-01

    Airborne (ALS) and terrestrial (TLS) laser scanning are well-known technologies and currently among the most common techniques for obtaining 3D terrain models. However, these technologies are expensive and logistically demanding. Another way to obtain DEMs without these inconveniences is photogrammetry, in particular the structure-from-motion (SfM) technique, which allows high-quality 3D model extraction from ordinary digital camera images without the need for expensive equipment. While the usual way to get images for SfM 3D modelling is to take pictures on-site, on-line imagery offers the possibility to obtain images from many roads and other places. The most common on-line street view resource is Google Street View. Since April 2014, this service has offered a back-in-time function for a certain number of locations. Google Street View images are composed from many pictures taken with a set of panoramic cameras mounted on a platform such as a car roof. These images are affected by strong deformations, which are not recommended for photogrammetry. At first sight, using street view images for photogrammetry may therefore raise processing problems. The aim of this project is to study the possibility of building SfM 3D models from Google Street View images with open-source processing software (Visual SFM) and low-cost software (Agisoft). The main interest of this method is to evaluate changes at low cost, without visiting the terrain. Study areas are landslides (such as that of Séchilienne in France) and cliffs near to or far away from roads. Human-made terrain changes, such as a stone wall collapse caused by heavy rainfall near Monaco, are also studied. For each case, 50 to 200 pictures were used. The main conditions for obtaining a 3D model are the following. First, a street view image of the area of interest must exist: some countries, like the USA or France, are well covered, while others are covered only partially (e.g., Switzerland) or not at all (e.g., Germany). The second constraint is to have two or more sets of images from different times. The third condition is to have images of sufficient quality: over- or underexposed images, bad meteorological conditions (fog, rain, etc.) or poor image resolution compromise the SfM process. In our case studies, distances from the road to the object of interest range from 1 to 500 m. First results show that SfM processing with on-line images is not straightforward. Poor-resolution and deformed images with unknown camera parameters often make the process difficult and unpredictable. The Agisoft software gave poor results because of the abovementioned image characteristics, while Visual SFM gave interesting results in about two thirds of the cases. It is thus demonstrated that 3D photogrammetry with on-line images is possible under certain restrictive conditions. Under these image-quality conditions, the technique can be used to estimate volume changes.

  14. Assessment of bacterial biofilm on stainless steel by hyperspectral fluorescence imaging

    USDA-ARS?s Scientific Manuscript database

    Hyperspectral fluorescence imaging techniques were investigated for detection of two genera of microbial biofilms on stainless steel material which is commonly used to manufacture food processing equipment. Stainless steel coupons were deposited in nonpathogenic E. coli O157:H7 and Salmonella cultu...

  15. Dietary Assessment on a Mobile Phone Using Image Processing and Pattern Recognition Techniques: Algorithm Design and System Prototyping.

    PubMed

    Probst, Yasmine; Nguyen, Duc Thanh; Tran, Minh Khoi; Li, Wanqing

    2015-07-27

    Dietary assessment, while traditionally based on pen-and-paper, is rapidly moving towards automatic approaches. This study describes an Australian automatic food record method and its prototype for dietary assessment via the use of a mobile phone and techniques of image processing and pattern recognition. Common visual features, including the scale-invariant feature transform (SIFT), local binary patterns (LBP), and colour, are used for describing food images. The popular bag-of-words (BoW) model is employed for recognizing the images taken by a mobile phone for dietary assessment. Technical details are provided together with discussions on the issues and future work.
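
    A minimal sketch of a BoW pipeline with OpenCV and scikit-learn; the descriptor type, vocabulary size, and classifier are illustrative assumptions rather than the study's settings, and the training images here are random stand-ins (SIFT may require the opencv-contrib build in older OpenCV versions).

        import cv2
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.svm import SVC

        sift = cv2.SIFT_create()

        def descriptors(img):
            _, des = sift.detectAndCompute(img, None)
            return des if des is not None else np.zeros((1, 128), np.float32)

        def bow_histogram(img, vocab):
            """Quantize local descriptors against the vocabulary -> word histogram."""
            words = vocab.predict(descriptors(img).astype(np.float64))
            return np.bincount(words, minlength=vocab.n_clusters).astype(float)

        train_imgs = [np.random.randint(0, 255, (200, 200), np.uint8) for _ in range(10)]
        labels = np.repeat([0, 1], 5)                      # e.g., two food categories
        vocab = KMeans(n_clusters=50, n_init=10).fit(
            np.vstack([descriptors(i) for i in train_imgs]).astype(np.float64))
        X = np.array([bow_histogram(i, vocab) for i in train_imgs])
        clf = SVC().fit(X, labels)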

  16. Watermarking scheme for authentication of compressed image

    NASA Astrophysics Data System (ADS)

    Hsieh, Tsung-Han; Li, Chang-Tsun; Wang, Shuo

    2003-11-01

    As images are commonly transmitted or stored in compressed form such as JPEG, to extend the applicability of our previous work, a new scheme for embedding a watermark in the compressed domain without resorting to cryptography is proposed. In this work, a target image is first DCT transformed and quantised. Then, all the coefficients are implicitly watermarked in order to minimize the risk of attacks on the unwatermarked coefficients. The watermarking is done by registering/blending the zero-valued coefficients with a binary sequence to create the watermark, and by involving the unembedded coefficients during the process of embedding the selected coefficients. The second-order neighbors and the block itself are considered in the process of watermark embedding in order to thwart different attacks such as cover-up, vector quantisation, and transplantation. The experiments demonstrate the capability of the proposed scheme to thwart local tampering, geometric transformations such as cropping, and common signal operations such as lowpass filtering.
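
    The paper's registering/blending scheme is more involved, but the compressed-domain setting it works in can be illustrated with a generic sketch: embed bits by adjusting the parity of quantized 8x8 DCT coefficients. The block size, coefficient position, quantization step, and parity rule are all assumptions of this sketch, not the proposed scheme.

        import numpy as np
        from scipy.fftpack import dct, idct

        def dct2(b):  return dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')
        def idct2(b): return idct(idct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

        def embed(img, bits, q=16, coef=(3, 2)):
            """Hide one bit per 8x8 block in the parity of one quantized DCT coefficient."""
            out, k = img.astype(float).copy(), 0
            for r in range(0, img.shape[0] - 7, 8):
                for c in range(0, img.shape[1] - 7, 8):
                    if k >= len(bits):
                        return out
                    block = dct2(out[r:r + 8, c:c + 8])
                    level = int(np.round(block[coef] / q))
                    if (level & 1) != bits[k]:
                        level += 1                      # flip parity to encode the bit
                    block[coef] = level * q
                    out[r:r + 8, c:c + 8] = idct2(block)
                    k += 1
            return out

        marked = embed(np.random.randint(0, 256, (64, 64)), bits=[1, 0, 1, 1])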

  17. A Review of the Quantification and Classification of Pigmented Skin Lesions: From Dedicated to Hand-Held Devices.

    PubMed

    Filho, Mercedes; Ma, Zhen; Tavares, João Manuel R S

    2015-11-01

    In recent years, the incidence of skin cancer has risen worldwide, mainly due to prolonged exposure to harmful ultraviolet radiation. Concurrently, the computer-assisted medical diagnosis of skin cancer has undergone major advances, through improvements in instrumentation and detection technology and the development of algorithms to process the information. Moreover, with the increased need to store medical data for monitoring, comparative and assisted-learning purposes, algorithms for data processing and storage have also become more efficient in handling the growth of data. In addition, the potential use of common mobile devices to register high-resolution images of skin lesions has also fueled the need to create real-time processing algorithms that may provide a likelihood estimate for the development of malignancy. This last possibility allows even non-specialists to monitor and follow up suspected skin cancer cases. In this review, we present the major steps in the pre-processing, processing and post-processing of skin lesion images, with a particular emphasis on the quantification and classification of pigmented skin lesions. We further review and outline the future challenges for the creation of minimum-feature, automated and real-time algorithms for the detection of skin cancer from images acquired via common mobile devices.

  18. [Applicability of non-invasive imaging methods in forensic medicine and forensic anthropology in particular].

    PubMed

    Marcinková, Mária; Straka, Ľubomír; Novomeský, František; Janík, Martin; Štuller, František; Krajčovič, Jozef

    2018-01-01

    Massive progress in the development of ever more precise imaging modalities has influenced all medical branches, including forensic medicine. In forensic anthropology, an integral part of forensic medicine itself, the use of all imaging modalities is becoming even more important. Besides providing more accurate information about the deceased, all of them can be used in the process of identification and/or age estimation. X-ray imaging is most commonly used for detecting foreign bodies or various pathological changes in the deceased. Computed tomography, on the other hand, can be very helpful in the identification process, as the outcomes of this examination can be used for virtual reconstruction of living subjects. Magnetic resonance imaging offers new opportunities for detecting cardiovascular pathological processes or developmental anomalies. Ultrasonography provides promising results in the age estimation of living subjects without excessive doses of radiation. Drawing on the latest available information sources, the authors introduce application examples of X-ray imaging, computed tomography, magnetic resonance imaging and ultrasonography in the everyday forensic medicine routine, with a particular focus on forensic anthropology.

  19. SIproc: an open-source biomedical data processing platform for large hyperspectral images.

    PubMed

    Berisha, Sebastian; Chang, Shengyuan; Saki, Sam; Daeinejad, Davar; He, Ziqi; Mankar, Rupali; Mayerich, David

    2017-04-10

    There has recently been significant interest within the vibrational spectroscopy community to apply quantitative spectroscopic imaging techniques to histology and clinical diagnosis. However, many of the proposed methods require collecting spectroscopic images that have a similar region size and resolution to the corresponding histological images. Since spectroscopic images contain significantly more spectral samples than traditional histology, the resulting data sets can approach hundreds of gigabytes to terabytes in size. This makes them difficult to store and process, and the tools available to researchers for handling large spectroscopic data sets are limited. Fundamental mathematical tools, such as MATLAB, Octave, and SciPy, are extremely powerful but require that the data be stored in fast memory. This memory limitation becomes impractical for even modestly sized histological images, which can be hundreds of gigabytes in size. In this paper, we propose an open-source toolkit designed to perform out-of-core processing of hyperspectral images. By taking advantage of graphical processing unit (GPU) computing combined with adaptive data streaming, our software alleviates common workstation memory limitations while achieving better performance than existing applications.
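
    The out-of-core idea itself is simple to sketch: stream the hyperspectral cube through memory in slabs and process each slab independently, so memory use stays bounded regardless of file size. NumPy memmaps stand in here for the toolkit's GPU streaming, and the file names, cube layout, and band-ratio example are illustrative assumptions.

        import numpy as np

        def stream_apply(path, shape, func, slab=64):
            """Apply func to a (bands, rows, cols) cube one slab of rows at a time."""
            cube = np.memmap(path, dtype=np.float32, mode='r', shape=shape)
            out = np.lib.format.open_memmap('result.npy', mode='w+',
                                            dtype=np.float32, shape=shape[1:])
            for r in range(0, shape[1], slab):
                out[r:r + slab] = func(cube[:, r:r + slab, :])
            return out

        # e.g., a band-ratio image computed without ever loading the whole cube:
        # result = stream_apply('cube.dat', (256, 10000, 10000),
        #                       lambda s: s[100] / (s[50] + 1e-6))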

  20. Digital image comparison by subtracting contextual transformations—percentile rank order differentiation

    USGS Publications Warehouse

    Wehde, M. E.

    1995-01-01

    The common method of digital image comparison by subtraction imposes various constraints on the image contents. Precise registration of images is required to assure proper evaluation of surface locations. The attribute being measured and the calibration and scaling of the sensor are also important to the validity and interpretability of the subtraction result. Influences of sensor gains and offsets complicate the subtraction process. The presence of any uniform systematic transformation component in one of two images to be compared distorts the subtraction results and requires analyst intervention to interpret or remove it. A new technique has been developed to overcome these constraints. Images to be compared are first transformed using the cumulative relative frequency as a transfer function. The transformed images represent the contextual relationship of each surface location with respect to all others within the image. The process of differentiating between the transformed images results in a percentile rank ordered difference. This process produces consistent terrain-change information even when the above requirements necessary for subtraction are relaxed. This technique may be valuable to an appropriately designed hierarchical terrain-monitoring methodology because it does not require human participation in the process.
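
    A minimal sketch of the transform with NumPy/SciPy: each image is replaced by its pixels' percentile ranks (the cumulative relative frequency used as a transfer function), and the two rank images are then differenced. The final print illustrates the gain/offset insensitivity the abstract describes.

        import numpy as np
        from scipy.stats import rankdata

        def percentile_rank(img):
            """Cumulative-relative-frequency transfer: pixel -> its percentile in the image."""
            return rankdata(img.ravel()).reshape(img.shape) / img.size

        def rank_order_difference(img_a, img_b):
            """Change image that is insensitive to sensor gain/offset, since any
            monotonic transformation leaves percentile ranks unchanged."""
            return percentile_rank(img_b) - percentile_rank(img_a)

        a = np.random.rand(100, 100)
        b = 2.5 * a + 10                      # gain/offset change only: no real terrain change
        print(np.abs(rank_order_difference(a, b)).max())   # ~0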

  1. Study on parallel and distributed management of RS data based on spatial database

    NASA Astrophysics Data System (ADS)

    Chen, Yingbiao; Qian, Qinglan; Wu, Hongqiao; Liu, Shijin

    2009-10-01

    With the rapid development of current earth-observing technology, the storage, management and publication of RS image data have become a bottleneck for its application and popularization. There are two prominent problems in RS image data storage and management systems. First, the background server can hardly handle the heavy processing of the great volumes of RS data stored at different nodes in a distributed environment; a heavy burden is thus put on the background server. Second, there is no unique, standard and rational organization of multi-sensor RS data for storage and management, and much information is lost or not captured at storage time. Facing these two problems, this paper puts forward a framework for parallel and distributed management and storage of RS image data. The system aims at an RS data information system based on a parallel background server and a distributed data management system. Toward these two goals, this paper studies the following key techniques and draws some revelatory conclusions. The paper puts forward a solid index of "Pyramid, Block, Layer, Epoch" according to the properties of RS image data. With this solid index mechanism, a rational organization of multi-sensor RS image data of different resolutions, areas, bands and periods is achieved. For data storage, RS data is not divided into binary large objects stored in a current relational database system; instead, it is reconstructed through the above solid index mechanism, and a logical image database for the RS image data files is constructed. For the system architecture, this paper sets up a framework based on a parallel server of several commodity computers. Under this framework, the background process is divided into two parts, the common Web process and the parallel process.
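
    A minimal sketch of what such a "Pyramid, Block, Layer, Epoch" tile key might look like in code; the field names, types, and string encoding are assumptions of this sketch, as the abstract does not specify a concrete layout.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class TileKey:
            pyramid: int    # resolution level (0 = full resolution)
            block_row: int  # tile row within the level
            block_col: int  # tile column within the level
            layer: int      # spectral band
            epoch: int      # acquisition period

            def storage_key(self) -> str:
                """Flat key usable as a table primary key or object-store path."""
                return (f"p{self.pyramid}/e{self.epoch}/l{self.layer}/"
                        f"{self.block_row}_{self.block_col}")

        print(TileKey(3, 12, 40, 2, 2008).storage_key())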

  2. Study on parallel and distributed management of RS data based on spatial data base

    NASA Astrophysics Data System (ADS)

    Chen, Yingbiao; Qian, Qinglan; Liu, Shijin

    2006-12-01

    With the rapid development of current earth-observing technology, the storage, management and publication of RS image data have become a bottleneck for its application and popularization. There are two prominent problems in RS image data storage and management systems. First, the background server can hardly handle the heavy processing of the great volumes of RS data stored at different nodes in a distributed environment; a heavy burden is thus put on the background server. Second, there is no unique, standard and rational organization of multi-sensor RS data for storage and management, and much information is lost or not captured at storage time. Facing these two problems, this paper puts forward a framework for parallel and distributed management and storage of RS image data. The system aims at an RS data information system based on a parallel background server and a distributed data management system. Toward these two goals, this paper studies the following key techniques and draws some revelatory conclusions. The paper puts forward a solid index of "Pyramid, Block, Layer, Epoch" according to the properties of RS image data. With this solid index mechanism, a rational organization of multi-sensor RS image data of different resolutions, areas, bands and periods is achieved. For data storage, RS data is not divided into binary large objects stored in a current relational database system; instead, it is reconstructed through the above solid index mechanism, and a logical image database for the RS image data files is constructed. For the system architecture, this paper sets up a framework based on a parallel server of several commodity computers. Under this framework, the background process is divided into two parts, the common Web process and the parallel process.

  3. Informatics in radiology (infoRAD): free DICOM image viewing and processing software for the Macintosh computer: what's available and what it can do for you.

    PubMed

    Escott, Edward J; Rubinstein, David

    2004-01-01

    It is often necessary for radiologists to use digital images in presentations and conferences. Most imaging modalities produce images in the Digital Imaging and Communications in Medicine (DICOM) format. The image files tend to be large and thus cannot be directly imported into most presentation software, such as Microsoft PowerPoint; the large files also consume storage space. There are many free programs that allow viewing and processing of these files on a personal computer, including conversion to more common file formats such as the Joint Photographic Experts Group (JPEG) format. Free DICOM image viewing and processing software for computers running on the Microsoft Windows operating system has already been evaluated. However, many people use the Macintosh (Apple Computer) platform, and a number of programs are available for these users. The World Wide Web was searched for free DICOM image viewing or processing software that was designed for the Macintosh platform or is written in Java and is therefore platform independent. The features of these programs and their usability were evaluated. There are many free programs for the Macintosh platform that enable viewing and processing of DICOM images. (c) RSNA, 2004.
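
    The conversion task described here now takes only a few lines with open-source Python libraries; pydicom and Pillow are assumed installed, the file names are hypothetical, and the windowing is a simple min-max stretch rather than a proper VOI LUT.

        import numpy as np
        import pydicom
        from PIL import Image

        def dicom_to_jpeg(dcm_path, jpg_path):
            """Read a DICOM file, scale its pixel data to 8 bits, save as JPEG."""
            px = pydicom.dcmread(dcm_path).pixel_array.astype(float)
            px = (px - px.min()) / max(px.max() - px.min(), 1e-9) * 255.0
            Image.fromarray(px.astype(np.uint8)).save(jpg_path, quality=90)

        dicom_to_jpeg('slice001.dcm', 'slice001.jpg')   # hypothetical file names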

  4. The vectorization of a ray tracing program for image generation

    NASA Technical Reports Server (NTRS)

    Plunkett, D. J.; Cychosz, J. M.; Bailey, M. J.

    1984-01-01

    Ray tracing is a widely used method for producing realistic computer generated images. Ray tracing involves firing an imaginary ray from a view point, through a point on an image plane, into a three dimensional scene. The intersections of the ray with the objects in the scene determine what is visible at the point on the image plane. This process must be repeated many times, once for each point (commonly called a pixel) in the image plane. A typical image contains more than a million pixels, making this process computationally expensive. A traditional ray tracing program processes one ray at a time. In such a serial approach, as much as ninety percent of the execution time is spent computing the intersection of a ray with the surfaces in the scene. With the CYBER 205, many rays can be intersected with all the bodies in the scene with a single series of vector operations. Vectorization of this intersection process results in large decreases in computation time. The CADLAB's interest in ray tracing stems from the need to produce realistic images of mechanical parts. A high quality image of a part during the design process can increase the productivity of the designer by helping him visualize the results of his work. To be useful in the design process, these images must be produced in a reasonable amount of time. This discussion will explain how the ray tracing process was vectorized and gives examples of the images obtained.
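
    The vectorization idea carries over directly to modern array languages; a minimal NumPy sketch intersecting every pixel's ray with one sphere in a single batch of array operations (the camera and scene here are illustrative, not the CADLAB's):

        import numpy as np

        W, H = 320, 240
        center, radius = np.array([0., 0., 5.]), 1.0

        # One ray per pixel, all built at once instead of one at a time.
        u, v = np.meshgrid(np.linspace(-1, 1, W), np.linspace(-1, 1, H))
        d = np.stack([u, v, np.ones_like(u)], axis=-1)
        d /= np.linalg.norm(d, axis=-1, keepdims=True)     # ray directions, origin at 0

        # Solve |t*d - center|^2 = r^2 for every pixel simultaneously.
        b = -2.0 * (d @ center)
        c = center @ center - radius ** 2
        disc = b ** 2 - 4.0 * c
        t = np.where(disc >= 0, (-b - np.sqrt(np.maximum(disc, 0))) / 2.0, np.inf)
        hit = np.isfinite(t) & (t > 0)                     # visibility mask for shading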

  5. Statistical Signal Models and Algorithms for Image Analysis

    DTIC Science & Technology

    1984-10-25

    In this report, two-dimensional stochastic linear models are used in developing algorithms for image analysis such as classification, segmentation, and object detection in images characterized by textured backgrounds. These models generate two-dimensional random processes as outputs to which statistical inference procedures can naturally be applied. A common thread throughout our algorithms is the interpretation of the inference procedures in terms of linear prediction

  6. Parallel programming of gradient-based iterative image reconstruction schemes for optical tomography.

    PubMed

    Hielscher, Andreas H; Bartel, Sebastian

    2004-02-01

    Optical tomography (OT) is a fast developing novel imaging modality that uses near-infrared (NIR) light to obtain cross-sectional views of optical properties inside the human body. A major challenge remains the time-consuming, computational-intensive image reconstruction problem that converts NIR transmission measurements into cross-sectional images. To increase the speed of iterative image reconstruction schemes that are commonly applied for OT, we have developed and implemented several parallel algorithms on a cluster of workstations. Static process distribution as well as dynamic load balancing schemes suitable for heterogeneous clusters and varying machine performances are introduced and tested. The resulting algorithms are shown to accelerate the reconstruction process to various degrees, substantially reducing the computation times for clinically relevant problems.
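
    The contrast between static distribution and dynamic load balancing can be illustrated on a single machine with Python's multiprocessing (the authors used a cluster of workstations; the task function below is a stand-in for one forward-model evaluation): a work queue such as imap_unordered hands tasks to whichever worker is free, so a slower processor naturally takes fewer tasks.

        import multiprocessing as mp
        import numpy as np

        def reconstruct_view(seed):
            """Stand-in for one gradient evaluation of the forward model."""
            rng = np.random.default_rng(seed)
            return rng.random((64, 64)).sum()

        if __name__ == '__main__':
            tasks = range(128)                   # e.g., one task per source-detector view
            with mp.Pool() as pool:
                # imap_unordered dispatches tasks as workers free up: dynamic balancing,
                # unlike a static split of the task list computed up front.
                total = sum(pool.imap_unordered(reconstruct_view, tasks, chunksize=4))
            print(total)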

  7. Estimation of breast percent density in raw and processed full field digital mammography images via adaptive fuzzy c-means clustering and support vector machine segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keller, Brad M.; Nathan, Diane L.; Wang Yan

    Purpose: The amount of fibroglandular tissue content in the breast as estimated mammographically, commonly referred to as breast percent density (PD%), is one of the most significant risk factors for developing breast cancer. Approaches to quantify breast density commonly focus on either semiautomated methods or visual assessment, both of which are highly subjective. Furthermore, most studies published to date investigating computer-aided assessment of breast PD% have been performed using digitized screen-film mammograms, while digital mammography is increasingly replacing screen-film mammography in breast cancer screening protocols. Digital mammography imaging generates two types of images for analysis, raw (i.e., 'FOR PROCESSING') and vendor postprocessed (i.e., 'FOR PRESENTATION'), of which postprocessed images are commonly used in clinical practice. Development of an algorithm which effectively estimates breast PD% in both raw and postprocessed digital mammography images would be beneficial in terms of direct clinical application and retrospective analysis. Methods: This work proposes a new algorithm for fully automated quantification of breast PD% based on adaptive multiclass fuzzy c-means (FCM) clustering and support vector machine (SVM) classification, optimized for the imaging characteristics of both raw and processed digital mammography images as well as for individual patient and image characteristics. Our algorithm first delineates the breast region within the mammogram via an automated thresholding scheme to identify background air followed by a straight line Hough transform to extract the pectoral muscle region. The algorithm then applies adaptive FCM clustering based on an optimal number of clusters derived from image properties of the specific mammogram to subdivide the breast into regions of similar gray-level intensity. Finally, a SVM classifier is trained to identify which clusters within the breast tissue are likely fibroglandular, which are then aggregated into a final dense tissue segmentation that is used to compute breast PD%. Our method is validated on a group of 81 women for whom bilateral, mediolateral oblique, raw and processed screening digital mammograms were available, and agreement is assessed with both continuous and categorical density estimates made by a trained breast-imaging radiologist. Results: Strong association between algorithm-estimated and radiologist-provided breast PD% was detected for both raw (r = 0.82, p < 0.001) and processed (r = 0.85, p < 0.001) digital mammograms on a per-breast basis. Stronger agreement was found when overall breast density was assessed on a per-woman basis for both raw (r = 0.85, p < 0.001) and processed (0.89, p < 0.001) mammograms. Strong agreement between categorical density estimates was also seen (weighted Cohen's κ ≥ 0.79). Repeated measures analysis of variance demonstrated no statistically significant differences between the PD% estimates (p > 0.1) due to either presentation of the image (raw vs processed) or method of PD% assessment (radiologist vs algorithm). Conclusions: The proposed fully automated algorithm was successful in estimating breast percent density from both raw and processed digital mammographic images. Accurate assessment of a woman's breast density is critical in order for the estimate to be incorporated into risk assessment models. These results show promise for the clinical application of the algorithm in quantifying breast density in a repeatable manner, both at time of imaging as well as in retrospective studies.

  8. Estimation of breast percent density in raw and processed full field digital mammography images via adaptive fuzzy c-means clustering and support vector machine segmentation.

    PubMed

    Keller, Brad M; Nathan, Diane L; Wang, Yan; Zheng, Yuanjie; Gee, James C; Conant, Emily F; Kontos, Despina

    2012-08-01

    The amount of fibroglandular tissue content in the breast as estimated mammographically, commonly referred to as breast percent density (PD%), is one of the most significant risk factors for developing breast cancer. Approaches to quantify breast density commonly focus on either semiautomated methods or visual assessment, both of which are highly subjective. Furthermore, most studies published to date investigating computer-aided assessment of breast PD% have been performed using digitized screen-film mammograms, while digital mammography is increasingly replacing screen-film mammography in breast cancer screening protocols. Digital mammography imaging generates two types of images for analysis, raw (i.e., "FOR PROCESSING") and vendor postprocessed (i.e., "FOR PRESENTATION"), of which postprocessed images are commonly used in clinical practice. Development of an algorithm which effectively estimates breast PD% in both raw and postprocessed digital mammography images would be beneficial in terms of direct clinical application and retrospective analysis. This work proposes a new algorithm for fully automated quantification of breast PD% based on adaptive multiclass fuzzy c-means (FCM) clustering and support vector machine (SVM) classification, optimized for the imaging characteristics of both raw and processed digital mammography images as well as for individual patient and image characteristics. Our algorithm first delineates the breast region within the mammogram via an automated thresholding scheme to identify background air followed by a straight line Hough transform to extract the pectoral muscle region. The algorithm then applies adaptive FCM clustering based on an optimal number of clusters derived from image properties of the specific mammogram to subdivide the breast into regions of similar gray-level intensity. Finally, a SVM classifier is trained to identify which clusters within the breast tissue are likely fibroglandular, which are then aggregated into a final dense tissue segmentation that is used to compute breast PD%. Our method is validated on a group of 81 women for whom bilateral, mediolateral oblique, raw and processed screening digital mammograms were available, and agreement is assessed with both continuous and categorical density estimates made by a trained breast-imaging radiologist. Strong association between algorithm-estimated and radiologist-provided breast PD% was detected for both raw (r = 0.82, p < 0.001) and processed (r = 0.85, p < 0.001) digital mammograms on a per-breast basis. Stronger agreement was found when overall breast density was assessed on a per-woman basis for both raw (r = 0.85, p < 0.001) and processed (0.89, p < 0.001) mammograms. Strong agreement between categorical density estimates was also seen (weighted Cohen's κ ≥ 0.79). Repeated measures analysis of variance demonstrated no statistically significant differences between the PD% estimates (p > 0.1) due to either presentation of the image (raw vs processed) or method of PD% assessment (radiologist vs algorithm). The proposed fully automated algorithm was successful in estimating breast percent density from both raw and processed digital mammographic images. Accurate assessment of a woman's breast density is critical in order for the estimate to be incorporated into risk assessment models. These results show promise for the clinical application of the algorithm in quantifying breast density in a repeatable manner, both at time of imaging as well as in retrospective studies.
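
    The membership and centroid updates that fuzzy c-means builds on can be sketched in NumPy; this is a generic FCM iteration on gray levels, not the paper's adaptive, SVM-coupled version, and the fuzzifier m and cluster count are illustrative assumptions.

        import numpy as np

        def fcm(x, n_clusters=4, m=2.0, iters=100, seed=0):
            """Fuzzy c-means on a 1-D array of gray levels x (shape: n_pixels,)."""
            rng = np.random.default_rng(seed)
            u = rng.dirichlet(np.ones(n_clusters), size=len(x))      # soft memberships
            for _ in range(iters):
                um = u ** m
                centers = um.T @ x / um.sum(axis=0)                  # weighted centroids
                dist = np.abs(x[:, None] - centers[None, :]) + 1e-12
                u = 1.0 / (dist ** (2 / (m - 1)))                    # inverse-distance update
                u /= u.sum(axis=1, keepdims=True)
            return u, centers

        gray = np.random.rand(10000)                # stand-in for breast-region pixel values
        u, centers = fcm(gray)
        labels = u.argmax(axis=1)                   # hard assignment for the segmentation map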

  9. ISLE (Image and Signal LISP Environment): A functional language interface for signal and image processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azevedo, S.G.; Fitch, J.P.

    1987-10-21

    Conventional software interfaces that use imperative computer commands or menu interactions are often restrictive environments when used for researching new algorithms or analyzing processed experimental data. We found this to be true with current signal-processing software (SIG). As an alternative, ''functional language'' interfaces provide features such as command nesting for a more natural interaction with the data. The Image and Signal LISP Environment (ISLE) is an example of an interpreted functional language interface based on Common LISP. Advantages of ISLE include multidimensional and multiple data-type independence through dispatching functions, dynamic loading of new functions, and connections to artificial intelligence (AI) software. 10 refs.

  10. ISLE (Image and Signal Lisp Environment): A functional language interface for signal and image processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azevedo, S.G.; Fitch, J.P.

    1987-05-01

    Conventional software interfaces which utilize imperative computer commands or menu interactions are often restrictive environments when used for researching new algorithms or analyzing processed experimental data. We found this to be true with current signal processing software (SIG). Existing "functional language" interfaces provide features such as command nesting for a more natural interaction with the data. The Image and Signal Lisp Environment (ISLE) will be discussed as an example of an interpreted functional language interface based on Common LISP. Additional benefits include multidimensional and multiple data-type independence through dispatching functions, dynamic loading of new functions, and connections to artificial intelligence software.

  11. Generation of synthetic image sequences for the verification of matching and tracking algorithms for deformation analysis

    NASA Astrophysics Data System (ADS)

    Bethmann, F.; Jepping, C.; Luhmann, T.

    2013-04-01

    This paper reports on a method for the generation of synthetic image data for almost arbitrary static or dynamic 3D scenarios. Image data generation is based on pre-defined 3D objects, object textures, camera orientation data and their imaging properties. The procedure does not aim at photo-realistic images rendered under the complex imaging and reflection models used by common computer graphics programs. In contrast, the method is designed with the main emphasis on geometrically correct synthetic images without radiometric effects. The calculation process includes photogrammetric distortion models, hence cameras with arbitrary geometric imaging characteristics can be applied. Consequently, image sets can be created that are consistent with mathematical photogrammetric models and can serve as sub-pixel-accurate data for the assessment of high-precision photogrammetric processing methods. The paper first describes the process of image simulation, taking into account colour value interpolation, MTF/PSF and related effects. Subsequently the geometric quality of the synthetic images is evaluated with ellipse operators. Finally, simulated image sets are used to investigate matching and tracking algorithms as they have been developed at IAPG for deformation measurement in car safety testing.
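
    A hedged sketch of the geometric core described above: projecting known 3D points through a pinhole camera plus a Brown-style radial distortion model to obtain synthetic image coordinates. All numeric values (pose, focal length, distortion coefficients, principal point) are invented for illustration, not the paper's.

      import numpy as np

      def project(points_3d, R, t, f, c, k1=0.0, k2=0.0):
          # Map Nx3 world points to Nx2 distorted pixel coordinates.
          cam = (R @ points_3d.T).T + t          # world -> camera frame
          xy = cam[:, :2] / cam[:, 2:3]          # perspective division
          r2 = (xy ** 2).sum(axis=1, keepdims=True)
          xy_d = xy * (1 + k1 * r2 + k2 * r2**2) # Brown radial distortion
          return f * xy_d + c                    # focal scaling + principal point

      R, t = np.eye(3), np.array([0.0, 0.0, 5.0])
      pts = np.array([[0.1, 0.2, 0.0], [-0.3, 0.1, 0.5]])
      uv = project(pts, R, t, f=1200.0, c=np.array([512.0, 384.0]), k1=-0.15)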

  12. An image-processing methodology for extracting bloodstain pattern features.

    PubMed

    Arthur, Ravishka M; Humburg, Philomena J; Hoogenboom, Jerry; Baiker, Martin; Taylor, Michael C; de Bruin, Karla G

    2017-08-01

    There is a growing trend in forensic science to develop methods to make forensic pattern comparison tasks more objective. This has generally involved the application of suitable image-processing methods to provide numerical data for identification or comparison. This paper outlines a unique image-processing methodology that can be utilised by analysts to generate reliable pattern data that will assist them in forming objective conclusions about a pattern. A range of features were defined and extracted from a laboratory-generated impact spatter pattern. These features were based in part on bloodstain properties commonly used in the analysis of spatter bloodstain patterns. The values of these features were consistent with properties reported qualitatively for such patterns. The image-processing method developed shows considerable promise as a way to establish measurable discriminating pattern criteria that are lacking in current bloodstain pattern taxonomies. Copyright © 2017 Elsevier B.V. All rights reserved.
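
    A small Python/OpenCV sketch in the same spirit: segment stains and compute a few per-stain shape features (area, circularity, ellipse orientation). The filename, threshold, and feature set are placeholders, not the paper's validated methodology.

      import cv2
      import numpy as np

      img = cv2.imread("pattern.png", cv2.IMREAD_GRAYSCALE)
      _, mask = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY_INV)  # dark stains on a light surface
      contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

      features = []
      for cnt in contours:
          area = cv2.contourArea(cnt)
          if area < 5 or len(cnt) < 5:          # skip noise; fitEllipse needs >= 5 points
              continue
          perim = cv2.arcLength(cnt, True)
          circularity = 4 * np.pi * area / perim**2           # 1.0 for a perfect circle
          (_, _), (_, _), angle = cv2.fitEllipse(cnt)          # stain orientation
          features.append({"area": area, "circularity": circularity, "orientation": angle})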

  13. Photogrammetry and Its Potential Application in Medical Science on the Basis of Selected Literature.

    PubMed

    Ey-Chmielewska, Halina; Chruściel-Nogalska, Małgorzata; Frączak, Bogumiła

    2015-01-01

    Photogrammetry is a science and technology which allows quantitative traits to be determined, i.e. the reproduction of object shapes, sizes and positions on the basis of their photographs. Images can be recorded in a wide range of wavelengths of electromagnetic radiation. The most common is the visible range, but near- and medium-infrared, thermal infrared, microwaves and X-rays are also used. The importance of photogrammetry has increased with the development of computer software. Digital image processing and real-time measurement have allowed the automation of many complex manufacturing processes. Photogrammetry has been widely used in many areas, especially in geodesy and cartography. In medicine, the method is used for measurement of the human body in the broad sense, for the planning and monitoring of therapeutic treatment and its results. Digital images obtained from optical-electronic sensors combined with computer technology offer objective measurement thanks to the remote, contact-free nature of the data acquisition and its high accuracy. Photogrammetry also allows the adoption of common standards for archiving and processing patient data.

  14. TheHiveDB image data management and analysis framework.

    PubMed

    Muehlboeck, J-Sebastian; Westman, Eric; Simmons, Andrew

    2014-01-06

    The hive database system (theHiveDB) is a web-based brain imaging database, collaboration, and activity system which has been designed as an imaging workflow management system capable of handling cross-sectional and longitudinal multi-center studies. It can be used to organize and integrate existing data from heterogeneous projects as well as data from ongoing studies. It has been conceived to guide and assist the researcher throughout the entire research process, integrating all relevant types of data across modalities (e.g., brain imaging, clinical, and genetic data). TheHiveDB is a modern activity and resource management system capable of scheduling image processing on both private compute resources and the cloud. The activity component supports common image archival and management tasks as well as established pipeline processing (e.g., Freesurfer for extraction of scalar measures from magnetic resonance images). Furthermore, via theHiveDB activity system algorithm developers may grant access to virtual machines hosting versioned releases of their tools to collaborators and the imaging community. The application of theHiveDB is illustrated with a brief use case based on organizing, processing, and analyzing data from the publicly available Alzheimer Disease Neuroimaging Initiative.

  15. TheHiveDB image data management and analysis framework

    PubMed Central

    Muehlboeck, J-Sebastian; Westman, Eric; Simmons, Andrew

    2014-01-01

    The hive database system (theHiveDB) is a web-based brain imaging database, collaboration, and activity system which has been designed as an imaging workflow management system capable of handling cross-sectional and longitudinal multi-center studies. It can be used to organize and integrate existing data from heterogeneous projects as well as data from ongoing studies. It has been conceived to guide and assist the researcher throughout the entire research process, integrating all relevant types of data across modalities (e.g., brain imaging, clinical, and genetic data). TheHiveDB is a modern activity and resource management system capable of scheduling image processing on both private compute resources and the cloud. The activity component supports common image archival and management tasks as well as established pipeline processing (e.g., Freesurfer for extraction of scalar measures from magnetic resonance images). Furthermore, via theHiveDB activity system algorithm developers may grant access to virtual machines hosting versioned releases of their tools to collaborators and the imaging community. The application of theHiveDB is illustrated with a brief use case based on organizing, processing, and analyzing data from the publicly available Alzheimer Disease Neuroimaging Initiative. PMID:24432000

  16. Visual Imagery and False Memory for Pictures: A Functional Magnetic Resonance Imaging Study in Healthy Participants.

    PubMed

    Stephan-Otto, Christian; Siddi, Sara; Senior, Carl; Muñoz-Samons, Daniel; Ochoa, Susana; Sánchez-Laforga, Ana María; Brébion, Gildas

    2017-01-01

    Visual mental imagery might be critical in the ability to discriminate imagined from perceived pictures. Our aim was to investigate the neural bases of this specific type of reality-monitoring process in individuals with high visual imagery abilities. A reality-monitoring task was administered to twenty-six healthy participants using functional magnetic resonance imaging. During the encoding phase, 45 words designating common items, and 45 pictures of other common items, were presented in random order. During the recall phase, participants were required to remember whether a picture of the item had been presented, or only a word. Two subgroups of participants with a propensity for high vs. low visual imagery were contrasted. Activation of the amygdala, left inferior occipital gyrus, insula, and precuneus were observed when high visual imagers encoded words later remembered as pictures. At the recall phase, these same participants activated the middle frontal gyrus and inferior and superior parietal lobes when erroneously remembering pictures. The formation of visual mental images might activate visual brain areas as well as structures involved in emotional processing. High visual imagers demonstrate increased activation of a fronto-parietal source-monitoring network that enables distinction between imagined and perceived pictures.

  17. Dietary Assessment on a Mobile Phone Using Image Processing and Pattern Recognition Techniques: Algorithm Design and System Prototyping

    PubMed Central

    Probst, Yasmine; Nguyen, Duc Thanh; Tran, Minh Khoi; Li, Wanqing

    2015-01-01

    Dietary assessment, while traditionally based on pen-and-paper, is rapidly moving towards automatic approaches. This study describes an Australian automatic food record method and its prototype for dietary assessment via the use of a mobile phone and techniques of image processing and pattern recognition. Common visual features including scale invariant feature transformation (SIFT), local binary patterns (LBP), and colour are used for describing food images. The popular bag-of-words (BoW) model is employed for recognizing the images taken by a mobile phone for dietary assessment. Technical details are provided together with discussions on the issues and future work. PMID:26225994
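
    A minimal sketch of the recognition pipeline named in the abstract: SIFT descriptors quantized against a K-means vocabulary to yield one bag-of-words histogram per food image. The vocabulary size, training-image list, and file names are assumptions for illustration.

      import cv2
      import numpy as np
      from sklearn.cluster import KMeans

      sift = cv2.SIFT_create()

      def descriptors(path):
          gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
          _, desc = sift.detectAndCompute(gray, None)
          return desc if desc is not None else np.empty((0, 128), np.float32)

      train_paths = ["meal1.jpg", "meal2.jpg"]              # placeholder training images
      vocab = KMeans(n_clusters=50, n_init=10, random_state=0).fit(
          np.vstack([descriptors(p) for p in train_paths]))

      def bow_histogram(path, k=50):
          # Assign each descriptor to its nearest visual word, then histogram.
          words = vocab.predict(descriptors(path))
          return np.bincount(words, minlength=k) / max(len(words), 1)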

  18. 3CCD image segmentation and edge detection based on MATLAB

    NASA Astrophysics Data System (ADS)

    He, Yong; Pan, Jiazhi; Zhang, Yun

    2006-09-01

    This research aimed to identify weeds among crops at an early stage of field operation using image-processing technology. 3CCD images offer a greater grey-value difference between weed and crop regions than ordinary digital images taken by common cameras: the camera has three channels (green, red, infrared) that capture a snapshot of the same area, and the three images can be composed into one image, which facilitates the segmentation of different regions. By applying the image-processing toolkit in MATLAB, the different areas in the image can be segmented clearly. As edge detection is the first and a very important step in image processing, the results of different processing methods were compared. In particular, using the wavelet packet transform toolkit in MATLAB, an image was preprocessed before the edge was extracted, yielding a more clearly cut edge image. The segmentation methods include operations such as erosion, dilation and other algorithms to preprocess the images. Segmenting different areas in digital images in the field in real time is of great importance for precision farming, saving energy, herbicide and many other materials. At present, large-scale software such as MATLAB on a PC is used, but the computation can be reduced and integrated into a small embedded system, which means that the application of this technique in agricultural engineering is feasible and of great economic value.
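
    The study worked in MATLAB; the sketch below redoes the basic idea in Python/OpenCV under stated assumptions: a vegetation index from the red and infrared channels, erosion/dilation cleanup, then edge extraction. The file names and thresholds are placeholders, not the study's values.

      import numpy as np
      import cv2

      green, red, nir = [cv2.imread(f, cv2.IMREAD_GRAYSCALE).astype(np.float32)
                         for f in ("g.png", "r.png", "nir.png")]   # the three CCD channels

      ndvi = (nir - red) / (nir + red + 1e-6)            # vegetation stands out against soil
      veg = (ndvi > 0.2).astype(np.uint8) * 255          # placeholder threshold

      kernel = np.ones((3, 3), np.uint8)
      veg = cv2.morphologyEx(veg, cv2.MORPH_OPEN, kernel)   # erosion + dilation removes specks
      edges = cv2.Canny(veg, 50, 150)                       # edge map of the segmented regions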

  19. Content-aware dark image enhancement through channel division.

    PubMed

    Rivera, Adin Ramirez; Ryu, Byungyong; Chae, Oksam

    2012-09-01

    The current contrast enhancement algorithms occasionally result in artifacts, overenhancement, and unnatural effects in the processed images. These drawbacks increase for images taken under poor illumination conditions. In this paper, we propose a content-aware algorithm that enhances dark images, sharpens edges, reveals details in textured regions, and preserves the smoothness of flat regions. The algorithm produces an ad hoc transformation for each image, adapting the mapping functions to each image's characteristics to produce the maximum enhancement. We analyze the contrast of the image in the boundary and textured regions, and group the information with common characteristics. These groups model the relations within the image, from which we extract the transformation functions. The results are then adaptively mixed, by considering the human vision system characteristics, to boost the details in the image. Results show that the algorithm can automatically process a wide range of images (e.g., mixed shadow and bright areas, outdoor and indoor lighting, and face images) without introducing artifacts, which is an improvement over many existing methods.
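
    Not the authors' channel-division method: the snippet below is a common baseline for the same dark-image problem (CLAHE applied to the lightness channel), included only to make the problem setting concrete. The file name and CLAHE parameters are illustrative.

      import cv2

      bgr = cv2.imread("dark.jpg")
      lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
      l, a, b = cv2.split(lab)
      clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
      enhanced = cv2.merge((clahe.apply(l), a, b))       # enhance lightness only, keep colour
      out = cv2.cvtColor(enhanced, cv2.COLOR_LAB2BGR)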

  20. A Genome-Wide Association Study of Amygdala Activation in Youths with and without Bipolar Disorder

    ERIC Educational Resources Information Center

    Liu, Xinmin; Akula, Nirmala; Skup, Martha; Brotman, Melissa A.; Leibenluft, Ellen; McMahon, Francis J.

    2010-01-01

    Objective: Functional magnetic resonance imaging is commonly used to characterize brain activity underlying a variety of psychiatric disorders. A previous functional magnetic resonance imaging study found that amygdala activation during a face-processing task differed between pediatric patients with bipolar disorder (BD) and healthy controls. We…

  1. Ice Age Art, Autism, and Vision: How We See/How We Draw.

    ERIC Educational Resources Information Center

    Kellman, Julia

    1998-01-01

    Explores the nature of images created by Paleolithic artists and autistic artists in regard to drawing techniques and image function. Explains the commonalities based on a discussion of the role of the early vision process and the construction of meaning. Notes the importance of this research for understanding autistic artists. (DSK)

  2. Common hyperspectral image database design

    NASA Astrophysics Data System (ADS)

    Tian, Lixun; Liao, Ningfang; Chai, Ali

    2009-11-01

    This paper introduces a common hyperspectral image database designed with a demand-oriented database design method (CHIDB), which comprehensively integrates ground-based spectra, standardized hyperspectral cubes and spectral analysis to serve a range of applications. The paper presents an integrated approach to retrieving spectral and spatial patterns from remotely sensed imagery using state-of-the-art data mining and advanced database technologies; data mining concepts and functions were incorporated into CHIDB to make it better suited to agricultural, geological and environmental applications. A broad range of data from multiple regions of the electromagnetic spectrum is supported, including ultraviolet, visible, near-infrared, thermal infrared, and fluorescence. CHIDB is based on the .NET framework and designed in an MVC architecture comprising five main functional modules: data importer/exporter, image/spectrum viewer, data processor, parameter extractor, and on-line analyzer. The original data are stored in SQL Server 2008 for efficient search, query and update, and advanced spectral-image processing techniques such as parallel processing in C# are used. Finally, an application case in agricultural disease detection is presented.

  3. Image gathering and processing - Information and fidelity

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Halyo, N.; Samms, R. W.; Stacy, K.

    1985-01-01

    In this paper we formulate and use information and fidelity criteria to assess image gathering and processing, combining optical design with image-forming and edge-detection algorithms. The optical design of the image-gathering system revolves around the relationship among sampling passband, spatial response, and signal-to-noise ratio (SNR). Our formulations of information, fidelity, and optimal (Wiener) restoration account for the insufficient sampling (i.e., aliasing) common in image gathering as well as for the blurring and noise that conventional formulations account for. Performance analyses and simulations for ordinary optical-design constraints and random scenes indicate that (1) different image-forming algorithms prefer different optical designs; (2) informationally optimized designs maximize the robustness of optimal image restorations and lead to the highest-spatial-frequency channel (relative to the sampling passband) for which edge detection is reliable (if the SNR is sufficiently high); and (3) combining the informationally optimized design with a 3 by 3 lateral-inhibitory image-plane-processing algorithm leads to a spatial-response shape that approximates the optimal edge-detection response of (Marr's model of) human vision and thus reduces the data preprocessing and transmission required for machine vision.
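
    A sketch of the optimal (Wiener) restoration the abstract refers to, under common simplifying assumptions: a known Gaussian blur stands in for the imaging response and a scalar noise-to-signal ratio replaces the full spectral model.

      import numpy as np

      def gaussian_psf(shape, sigma):
          # Centered Gaussian PSF the same size as the image, normalized to sum 1.
          y, x = np.indices(shape)
          cy, cx = (shape[0] - 1) / 2, (shape[1] - 1) / 2
          psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
          return psf / psf.sum()

      def wiener_restore(blurred, psf, nsr=1e-2):
          H = np.fft.fft2(np.fft.ifftshift(psf))        # OTF of the centered PSF
          W = np.conj(H) / (np.abs(H) ** 2 + nsr)       # Wiener transfer function
          return np.real(np.fft.ifft2(W * np.fft.fft2(blurred)))

      # Usage: restored = wiener_restore(blurred, gaussian_psf(blurred.shape, sigma=2.0))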

  4. Edge detection - Image-plane versus digital processing

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.; Park, Stephen K.; Triplett, Judith A.

    1987-01-01

    To optimize edge detection with the familiar Laplacian-of-Gaussian operator, it has become common to implement this operator with a large digital convolution mask followed by some interpolation of the processed data to determine the zero crossings that locate edges. It is generally recognized that this large mask causes substantial blurring of fine detail. It is shown that the spatial detail can be improved by a factor of about four with either the Wiener-Laplacian-of-Gaussian filter or an image-plane processor. The Wiener-Laplacian-of-Gaussian filter minimizes the image-gathering degradations if the scene statistics are at least approximately known and also serves as an interpolator to determine the desired zero crossings directly. The image-plane processor forms the Laplacian-of-Gaussian response by properly combining the optical design of the image-gathering system with a minimal three-by-three lateral-inhibitory processing mask. This approach, which is suggested by Marr's model of early processing in human vision, also reduces data processing by about two orders of magnitude and data transmission by up to an order of magnitude.
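
    For concreteness, a textbook version of the pipeline this paper improves upon: Laplacian-of-Gaussian filtering followed by zero-crossing detection. The sigma and the crossing test are standard choices, not the paper's image-plane processor.

      import numpy as np
      from scipy import ndimage

      def log_edges(image, sigma=2.0):
          log = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
          edges = np.zeros_like(log, dtype=bool)
          # A zero crossing: sign change against the right or lower neighbour.
          edges[:, :-1] |= np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])
          edges[:-1, :] |= np.signbit(log[:-1, :]) != np.signbit(log[1:, :])
          return edges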

  5. Effect of Experience of Use on The Process of Formation of Stereotype Images on Shapes of Products

    NASA Astrophysics Data System (ADS)

    Kwak, Yong-Min; Yamanaka, Toshimasa

    It is necessary to explain the terms used in this research to help readers better understand its contents. Originally, stereotype meant the lead plate cast from a mold in letterpress printing, but it is now used as a term indicating a simplified and fixed notion toward a certain group ("knowledge in fixed form"), or an image simplified and generalized over the members of a certain group.[1] Stereotypes are generally invoked in negative cases, but they have both positive and negative sides.[2, 3] Research on the factors that form stereotype[4] images commonly felt by a large number of people may suggest a new research methodology for areas that require a high level of creative thinking, such as design and research on emotions. Stereotype images arise between persons and groups and in the images of countries, enterprises and other organizations. For example, as we often hear remarks such as 'He may be oo because he is from oo', we hold strong images of characteristics commonly attributed to persons who belong to certain categories, tying regions and persons to dividing categories.[5, 6] In this research, I define such images as stereotype images. The same phenomenon appears for the articles used in daily life. In this research, I established the hypothesis that stereotype images exist for products and verified it through experiments.

  6. Adjacent slice prostate cancer prediction to inform MALDI imaging biomarker analysis

    NASA Astrophysics Data System (ADS)

    Chuang, Shao-Hui; Sun, Xiaoyan; Cazares, Lisa; Nyalwidhe, Julius; Troyer, Dean; Semmes, O. John; Li, Jiang; McKenzie, Frederic D.

    2010-03-01

    Prostate cancer is the second most common type of cancer among men in the US [1]. Traditionally, prostate cancer diagnosis is made by the analysis of prostate-specific antigen (PSA) levels and histopathological images of biopsy samples under microscopes. Proteomic biomarkers can improve upon these methods. MALDI molecular spectra imaging is used to visualize protein/peptide concentrations across biopsy samples to search for biomarker candidates. Unfortunately, traditional processing methods require histopathological examination on one slice of a biopsy sample while the adjacent slice is subjected to the tissue-destroying desorption and ionization processes of MALDI. The highest-confidence tumor regions gained from the histopathological analysis are then mapped to the MALDI spectra data to estimate the regions for biomarker identification from the MALDI imaging. This paper describes a process that provides a significantly better estimate of the cancer tumor to be mapped onto the MALDI imaging spectra coordinates, using the high-confidence region to predict the true area of the tumor on the adjacent MALDI-imaged slice.

  7. Single-random-phase holographic encryption of images

    NASA Astrophysics Data System (ADS)

    Tsang, P. W. M.

    2017-02-01

    In this paper, a method is proposed for encrypting an optical image onto a phase-only hologram, utilizing a single random phase mask as the private encryption key. The encryption process can be divided into three stages. First, the source image to be encrypted is scaled in size and pasted onto an arbitrary position in a larger global image. The remaining areas of the global image that are not occupied by the source image can be filled with randomly generated content. As such, the global image as a whole is very different from the source image, but at the same time the visual quality of the source image is preserved. Second, a digital Fresnel hologram is generated from the new image and converted into a phase-only hologram based on bi-directional error diffusion. In the final stage, a fixed random phase mask is added to the phase-only hologram as the private encryption key. In the decryption process, the global image, together with the source image it contains, can be reconstructed from the phase-only hologram if it is overlaid with the correct decryption key. The proposed method is highly resistant to different forms of plain-text attacks, which are commonly used to deduce the encryption key in existing holographic encryption processes. In addition, both the encryption and the decryption processes are simple and easy to implement.
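
    A loose numerical sketch of the final stage only, under strong simplifications (a Fourier rather than Fresnel hologram, naive phase-only quantization instead of bi-directional error diffusion): it shows how a random phase mask acts as the private key, and is not the paper's optical model.

      import numpy as np

      rng = np.random.default_rng(0)
      img = rng.random((256, 256))                       # stand-in for the global image

      # Random-phase diffuser spreads the spectrum so a phase-only hologram retains content.
      field = np.fft.fft2(img * np.exp(1j * 2 * np.pi * rng.random(img.shape)))
      hologram_phase = np.angle(field)                   # phase-only hologram
      key = 2 * np.pi * rng.random(img.shape)            # private random-phase key
      encrypted = np.mod(hologram_phase + key, 2 * np.pi)

      # Decryption: remove the key, then reconstruct by inverse propagation.
      recovered = np.abs(np.fft.ifft2(np.exp(1j * (encrypted - key))))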

  8. The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments.

    PubMed

    Gorgolewski, Krzysztof J; Auer, Tibor; Calhoun, Vince D; Craddock, R Cameron; Das, Samir; Duff, Eugene P; Flandin, Guillaume; Ghosh, Satrajit S; Glatard, Tristan; Halchenko, Yaroslav O; Handwerker, Daniel A; Hanke, Michael; Keator, David; Li, Xiangrui; Michael, Zachary; Maumet, Camille; Nichols, B Nolan; Nichols, Thomas E; Pellman, John; Poline, Jean-Baptiste; Rokem, Ariel; Schaefer, Gunnar; Sochat, Vanessa; Triplett, William; Turner, Jessica A; Varoquaux, Gaël; Poldrack, Russell A

    2016-06-21

    The development of magnetic resonance imaging (MRI) techniques has defined modern neuroimaging. Since its inception, tens of thousands of studies using techniques such as functional MRI and diffusion weighted imaging have allowed for the non-invasive study of the brain. Despite the fact that MRI is routinely used to obtain data for neuroscience research, there has been no widely adopted standard for organizing and describing the data collected in an imaging experiment. This renders sharing and reusing data (within or between labs) difficult if not impossible and unnecessarily complicates the application of automatic pipelines and quality assurance protocols. To solve this problem, we have developed the Brain Imaging Data Structure (BIDS), a standard for organizing and describing MRI datasets. The BIDS standard uses file formats compatible with existing software, unifies the majority of practices already common in the field, and captures the metadata necessary for most common data processing operations.
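
    The naming scheme BIDS standardizes can be illustrated in a few lines. The sketch below builds a minimal skeleton with pathlib; the study, subject, and task names are invented, and a real dataset would also populate the JSON sidecars.

      from pathlib import Path

      root = Path("my_study")
      (root / "sub-01" / "anat").mkdir(parents=True, exist_ok=True)
      (root / "sub-01" / "func").mkdir(parents=True, exist_ok=True)

      # Filenames encode entities as underscore-separated key-value pairs.
      (root / "dataset_description.json").touch()
      (root / "sub-01" / "anat" / "sub-01_T1w.nii.gz").touch()
      (root / "sub-01" / "func" / "sub-01_task-rest_bold.nii.gz").touch()
      (root / "sub-01" / "func" / "sub-01_task-rest_bold.json").touch()  # sidecar metadata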

  9. The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments

    PubMed Central

    Gorgolewski, Krzysztof J.; Auer, Tibor; Calhoun, Vince D.; Craddock, R. Cameron; Das, Samir; Duff, Eugene P.; Flandin, Guillaume; Ghosh, Satrajit S.; Glatard, Tristan; Halchenko, Yaroslav O.; Handwerker, Daniel A.; Hanke, Michael; Keator, David; Li, Xiangrui; Michael, Zachary; Maumet, Camille; Nichols, B. Nolan; Nichols, Thomas E.; Pellman, John; Poline, Jean-Baptiste; Rokem, Ariel; Schaefer, Gunnar; Sochat, Vanessa; Triplett, William; Turner, Jessica A.; Varoquaux, Gaël; Poldrack, Russell A.

    2016-01-01

    The development of magnetic resonance imaging (MRI) techniques has defined modern neuroimaging. Since its inception, tens of thousands of studies using techniques such as functional MRI and diffusion weighted imaging have allowed for the non-invasive study of the brain. Despite the fact that MRI is routinely used to obtain data for neuroscience research, there has been no widely adopted standard for organizing and describing the data collected in an imaging experiment. This renders sharing and reusing data (within or between labs) difficult if not impossible and unnecessarily complicates the application of automatic pipelines and quality assurance protocols. To solve this problem, we have developed the Brain Imaging Data Structure (BIDS), a standard for organizing and describing MRI datasets. The BIDS standard uses file formats compatible with existing software, unifies the majority of practices already common in the field, and captures the metadata necessary for most common data processing operations. PMID:27326542

  10. Some new classification methods for hyperspectral remote sensing

    NASA Astrophysics Data System (ADS)

    Du, Pei-jun; Chen, Yun-hao; Jones, Simon; Ferwerda, Jelle G.; Chen, Zhi-jun; Zhang, Hua-peng; Tan, Kun; Yin, Zuo-xia

    2006-10-01

    Hyperspectral Remote Sensing (HRS) is one of the most significant recent achievements of Earth Observation technology, and classification is the most commonly employed processing methodology. In this paper three new hyperspectral RS image classification methods are analyzed: object-oriented HRS image classification, HRS image classification based on information fusion, and HRS image classification by Back Propagation Neural Network (BPNN). An OMIS HRS image is used as the example data. Object-oriented techniques have gained popularity for RS image classification in recent years. In this method, image segmentation is first used to extract regions from the pixel information based on homogeneity criteria; spectral parameters such as mean vector, texture and NDVI, and spatial/shape parameters such as aspect ratio, convexity, solidity, roundness and orientation, are then calculated for each region; finally the image is classified using the region feature vectors and suitable classifiers such as an artificial neural network (ANN). This shows that object-oriented methods can improve classification accuracy, since they utilize information and features from both the point and its neighborhood, and the processing unit is a polygon (in which all pixels are homogeneous and belong to the same class). HRS image classification based on information fusion initially divides all bands of the image into different groups and extracts features from every group according to the properties of each group. Three levels of information fusion (data-level, feature-level and decision-level fusion) are applied to HRS image classification. An Artificial Neural Network (ANN) can perform well in RS image classification. To advance the use of ANNs for HRS image classification, the Back Propagation Neural Network (BPNN), the most commonly used neural network, is applied to HRS image classification.
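
    As a concrete stand-in for the third method, a back-propagation-trained network (scikit-learn's MLPClassifier) classifying per-pixel spectra. The band count, class count, and data are invented placeholders; the paper's network and training set are not reproduced here.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)
      X = rng.random((500, 64))          # 500 labelled pixels x 64 spectral bands (placeholder)
      y = rng.integers(0, 5, 500)        # 5 land-cover classes (placeholder)

      bpnn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
      bpnn.fit(X, y)                     # trained by back-propagation

      cube = rng.random((100, 100, 64))  # a small hyperspectral cube
      labels = bpnn.predict(cube.reshape(-1, 64)).reshape(100, 100)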

  11. Free Surface Downgoing VSP Multiple Imaging

    NASA Astrophysics Data System (ADS)

    Maula, Fahdi; Dac, Nguyen

    2018-03-01

    The common usage of a vertical seismic profile (VSP) is to capture the reflection (upgoing) wavefield for well ties and other interpretations. Borehole seismic (VSP) receivers capture reflections from below the well trajectory; traditionally, no seismic image information is obtained above the trajectory. A non-traditional way of processing VSP multiples can be used to expand the imaging above the well trajectory. This paper presents a case study of using VSP downgoing multiples for further non-traditional imaging applications. In general VSP processing, upgoing and downgoing arrivals are separated during processing. The upgoing wavefield is used for subsurface illumination, whereas the downgoing wavefield and multiples are normally excluded from the processing. In a situation where the downgoing wavefield passes the reflectors several times (as a multiple), the downgoing wavefield carries reflection information. Its benefit is that it can be used for a seismic tie up to the seabed, with the possibility of shallow-hazard identification. One concept of downgoing imaging is widely known as the mirror-imaging technique. This paper presents a case study from deep water offshore Vietnam, demonstrating the robustness of the technique and the limitations encountered during its processing.

  12. Estimation of breast percent density in raw and processed full field digital mammography images via adaptive fuzzy c-means clustering and support vector machine segmentation

    PubMed Central

    Keller, Brad M.; Nathan, Diane L.; Wang, Yan; Zheng, Yuanjie; Gee, James C.; Conant, Emily F.; Kontos, Despina

    2012-01-01

    Purpose: The amount of fibroglandular tissue content in the breast as estimated mammographically, commonly referred to as breast percent density (PD%), is one of the most significant risk factors for developing breast cancer. Approaches to quantify breast density commonly focus on either semiautomated methods or visual assessment, both of which are highly subjective. Furthermore, most studies published to date investigating computer-aided assessment of breast PD% have been performed using digitized screen-film mammograms, while digital mammography is increasingly replacing screen-film mammography in breast cancer screening protocols. Digital mammography imaging generates two types of images for analysis, raw (i.e., “FOR PROCESSING”) and vendor postprocessed (i.e., “FOR PRESENTATION”), of which postprocessed images are commonly used in clinical practice. Development of an algorithm which effectively estimates breast PD% in both raw and postprocessed digital mammography images would be beneficial in terms of direct clinical application and retrospective analysis. Methods: This work proposes a new algorithm for fully automated quantification of breast PD% based on adaptive multiclass fuzzy c-means (FCM) clustering and support vector machine (SVM) classification, optimized for the imaging characteristics of both raw and processed digital mammography images as well as for individual patient and image characteristics. Our algorithm first delineates the breast region within the mammogram via an automated thresholding scheme to identify background air followed by a straight line Hough transform to extract the pectoral muscle region. The algorithm then applies adaptive FCM clustering based on an optimal number of clusters derived from image properties of the specific mammogram to subdivide the breast into regions of similar gray-level intensity. Finally, a SVM classifier is trained to identify which clusters within the breast tissue are likely fibroglandular, which are then aggregated into a final dense tissue segmentation that is used to compute breast PD%. Our method is validated on a group of 81 women for whom bilateral, mediolateral oblique, raw and processed screening digital mammograms were available, and agreement is assessed with both continuous and categorical density estimates made by a trained breast-imaging radiologist. Results: Strong association between algorithm-estimated and radiologist-provided breast PD% was detected for both raw (r = 0.82, p < 0.001) and processed (r = 0.85, p < 0.001) digital mammograms on a per-breast basis. Stronger agreement was found when overall breast density was assessed on a per-woman basis for both raw (r = 0.85, p < 0.001) and processed (0.89, p < 0.001) mammograms. Strong agreement between categorical density estimates was also seen (weighted Cohen's κ ≥ 0.79). Repeated measures analysis of variance demonstrated no statistically significant differences between the PD% estimates (p > 0.1) due to either presentation of the image (raw vs processed) or method of PD% assessment (radiologist vs algorithm). Conclusions: The proposed fully automated algorithm was successful in estimating breast percent density from both raw and processed digital mammographic images. Accurate assessment of a woman's breast density is critical in order for the estimate to be incorporated into risk assessment models. These results show promise for the clinical application of the algorithm in quantifying breast density in a repeatable manner, both at time of imaging as well as in retrospective studies. PMID:22894417

  13. CBEFF Common Biometric Exchange File Format

    DTIC Science & Technology

    2001-01-03

    [Fragmentary record; only partial text is recoverable.] The excerpt references points of contact for CBEFF and liaisons to other organizations (Appendix F); a statement of the purpose of CBEFF; a table of biometric type codes (0x40 Signature Dynamics, 0x80 Keystroke Dynamics, 0x100 Lip Movement, 0x200 Thermal Face Image, 0x400 Thermal Hand Image, 0x800 Gait, 0x1000 Body ...); and a note that the overhead of this process is negligible from the Biometric Objects point of view, unless the process creating the livescan sample to compare against the ...

  14. Within-subject template estimation for unbiased longitudinal image analysis.

    PubMed

    Reuter, Martin; Schmansky, Nicholas J; Rosas, H Diana; Fischl, Bruce

    2012-07-16

    Longitudinal image analysis has become increasingly important in clinical studies of normal aging and neurodegenerative disorders. Furthermore, there is a growing appreciation of the potential utility of longitudinally acquired structural images and reliable image processing to evaluate disease modifying therapies. Challenges have been related to the variability that is inherent in the available cross-sectional processing tools, to the introduction of bias in longitudinal processing and to potential over-regularization. In this paper we introduce a novel longitudinal image processing framework, based on unbiased, robust, within-subject template creation, for automatic surface reconstruction and segmentation of brain MRI of arbitrarily many time points. We demonstrate that it is essential to treat all input images exactly the same as removing only interpolation asymmetries is not sufficient to remove processing bias. We successfully reduce variability and avoid over-regularization by initializing the processing in each time point with common information from the subject template. The presented results show a significant increase in precision and discrimination power while preserving the ability to detect large anatomical deviations; as such they hold great potential in clinical applications, e.g. allowing for smaller sample sizes or shorter trials to establish disease specific biomarkers or to quantify drug effects. Copyright © 2012 Elsevier Inc. All rights reserved.

  15. An Electro-Optical Image Algebra Processing System for Automatic Target Recognition

    NASA Astrophysics Data System (ADS)

    Coffield, Patrick Cyrus

    The proposed electro-optical image algebra processing system is designed specifically for image processing and other related computations. The design is a hybridization of an optical correlator and a massively parallel, single-instruction multiple-data (SIMD) processor. The architecture of the design consists of three tightly coupled components: a spatial configuration processor (the optical analog portion), a weighting processor (digital), and an accumulation processor (digital). The systolic flow of data and image-processing operations is directed by a control buffer and pipelined to each of the three processing components. The image-processing operations are defined in terms of the basic operations of an image algebra developed by the University of Florida. The algebra is capable of describing all common image-to-image transformations. The merit of this architectural design is how it implements the natural decomposition of algebraic functions into spatially distributed, point-wise operations. The effect of this particular decomposition allows convolution-type operations to be computed strictly as a function of the number of elements in the template (mask, filter, etc.) instead of the number of picture elements in the image. Thus, a substantial increase in throughput is realized. The implementation of the proposed design may be accomplished in many ways. While a hybrid electro-optical implementation is of primary interest, the benefits and design issues of an all-digital implementation are also discussed. The potential utility of this architectural design lies in its ability to control a large variety of the arithmetic and logic operations of the image algebra's generalized matrix product. The generalized matrix product is the most powerful fundamental operation in the algebra, thus allowing a wide range of applications. No other known device or design has made this claim of processing speed and general implementation of a heterogeneous image algebra.

  16. MToS: A Tree of Shapes for Multivariate Images.

    PubMed

    Carlinet, Edwin; Géraud, Thierry

    2015-12-01

    The topographic map of a gray-level image, also called the tree of shapes, provides a high-level hierarchical representation of the image contents. This representation, invariant to contrast changes and to contrast inversion, has proved very useful for many image processing and pattern recognition tasks. Its definition relies on the total ordering of pixel values, so this representation does not exist for color images or, more generally, multivariate images. Common workarounds, such as marginal processing or imposing a total order on the data, are not satisfactory and yield many problems. This paper presents a method to build a tree-based representation of multivariate images which features, marginally, the same properties as the gray-level tree of shapes. Briefly put, we do not impose an arbitrary ordering on values, but rely only on the inclusion relationship between shapes in the image definition domain. The interest of having a contrast-invariant and self-dual representation of multivariate images is illustrated through several applications (filtering, segmentation, and object recognition) on different types of data: color natural images, document images, satellite hyperspectral imaging, multimodal medical imaging, and videos.

  17. PAT-tools for process control in pharmaceutical film coating applications.

    PubMed

    Knop, Klaus; Kleinebudde, Peter

    2013-12-05

    Recent development of analytical techniques to monitor the coating process of pharmaceutical solid dosage forms such as pellets and tablets are described. The progress from off- or at-line measurements to on- or in-line applications is shown for the spectroscopic methods near infrared (NIR) and Raman spectroscopy as well as for terahertz pulsed imaging (TPI) and image analysis. The common goal of all these methods is to control or at least to monitor the coating process and/or to estimate the coating end point through timely measurements. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. CLASSIFYING MEDICAL IMAGES USING MORPHOLOGICAL APPEARANCE MANIFOLDS.

    PubMed

    Varol, Erdem; Gaonkar, Bilwaj; Davatzikos, Christos

    2013-12-31

    Input features for medical image classification algorithms are extracted from raw images using a series of preprocessing steps. One common preprocessing step in computational neuroanatomy and functional brain mapping is the nonlinear registration of raw images to a common template space. Typically, the registration methods used are parametric and their output varies greatly with changes in parameters. Most previously reported results perform registration using a fixed parameter setting and use the results as input to the subsequent classification step. The variation in registration results due to the choice of parameters thus translates into variation in the performance of the classifiers that depend on the registration step for input. Analogous issues have been investigated in the computer vision literature, where image appearance varies with pose and illumination, thereby making classification vulnerable to these confounding parameters. The proposed methodology addresses this issue by sampling image appearances as registration parameters vary, and shows that better classification accuracies can be obtained this way, compared to the conventional approach.

  19. Computer image processing in marine resource exploration

    NASA Technical Reports Server (NTRS)

    Paluzzi, P. R.; Normark, W. R.; Hess, G. R.; Hess, H. D.; Cruickshank, M. J.

    1976-01-01

    Pictographic data or imagery is commonly used in marine exploration. Pre-existing image processing techniques (software) similar to those used on imagery obtained from unmanned planetary exploration were used to improve marine photography and side-scan sonar imagery. Features and details not visible by conventional photo processing methods were enhanced by filtering and noise removal on selected deep-sea photographs. Information gained near the periphery of photographs allows improved interpretation and facilitates construction of bottom mosaics where overlapping frames are available. Similar processing techniques were applied to side-scan sonar imagery, including corrections for slant range distortion, and along-track scale changes. The use of digital data processing and storage techniques greatly extends the quantity of information that can be handled, stored, and processed.
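
    One of the side-scan sonar corrections named above, sketched under the flat-seafloor assumption g = sqrt(s^2 - h^2): echoes recorded against slant range are resampled onto a uniform ground-range axis. The altitude and sample spacing are assumptions for illustration.

      import numpy as np

      def slant_to_ground(ping, h, dr):
          # ping: per-sample echo amplitudes; h: sensor altitude; dr: range per sample.
          n = ping.size
          slant = np.arange(n) * dr
          valid = slant > h                                  # samples beyond the nadir point
          ground = np.sqrt(slant[valid] ** 2 - h ** 2)       # flat-seafloor geometry
          uniform = np.arange(n) * dr                        # target ground-range axis
          return np.interp(uniform, ground, ping[valid], left=0.0)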

  20. 78 FR 67076 - Practices and Procedures

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-08

    ... as an attachment in any common electronic format, including word processing applications, HTML and PDF. If possible, commenters are asked to use a text format and not an image format for attachments...

  1. ImgLib2: generic image processing in Java.

    PubMed

    Pietzsch, Tobias; Preibisch, Stephan; Tomancák, Pavel; Saalfeld, Stephan

    2012-11-15

    ImgLib2 is an open-source Java library for n-dimensional data representation and manipulation with a focus on image processing. It aims at minimizing code duplication by cleanly separating pixel-algebra, data access and data representation in memory. Algorithms can be implemented for classes of pixel types and generic access patterns, by which they become independent of the specific dimensionality, pixel type and data representation. ImgLib2 illustrates that an elegant high-level programming interface can be achieved without sacrificing performance. It provides efficient implementations of common data types, storage layouts and algorithms. It is the data model underlying ImageJ2, the KNIME Image Processing toolbox and an increasing number of Fiji plugins. ImgLib2 is licensed under BSD. Documentation and source code are available at http://imglib2.net and in a public repository at https://github.com/imagej/imglib. Supplementary data are available at Bioinformatics online. Contact: saalfeld@mpi-cbg.de

  2. Solar physics applications of computer graphics and image processing

    NASA Technical Reports Server (NTRS)

    Altschuler, M. D.

    1985-01-01

    Computer graphics devices coupled with computers and carefully developed software provide new opportunities to achieve insight into the geometry and time evolution of scalar, vector, and tensor fields and to extract more information quickly and cheaply from the same image data. Two or more different fields which overlay in space can be calculated from the data (and the physics), then displayed from any perspective, and compared visually. The maximum regions of one field can be compared with the gradients of another. Time changing fields can also be compared. Images can be added, subtracted, transformed, noise filtered, frequency filtered, contrast enhanced, color coded, enlarged, compressed, parameterized, and histogrammed, in whole or section by section. Today it is possible to process multiple digital images to reveal spatial and temporal correlations and cross correlations. Data from different observatories taken at different times can be processed, interpolated, and transformed to a common coordinate system.

  3. Acquisition and Post-Processing of Immunohistochemical Images.

    PubMed

    Sedgewick, Jerry

    2017-01-01

    Augmentation of digital images is almost always a necessity in order to obtain a reproduction that matches the appearance of the original. However, that augmentation can mislead if it is done incorrectly and not within reasonable limits. When procedures are in place for ensuring that originals are archived, and image manipulation steps reported, scientists not only follow good laboratory practices, but avoid ethical issues associated with post-processing, and protect their labs from any future allegations of scientific misconduct. Also, when procedures are in place for correct acquisition of images, the extent of post-processing is minimized or eliminated. These procedures include white balancing (for brightfield images), keeping tonal values within the dynamic range of the detector, frame averaging to eliminate noise (typically in fluorescence imaging), use of the highest bit depth when a choice is available, flatfield correction, and archiving of the image in a non-lossy format (not JPEG). When post-processing is necessary, the commonly used applications for correction include Photoshop and ImageJ, but a free program (GIMP) can also be used. Corrections to images include scaling the bit depth to higher and lower ranges, removing color casts from brightfield images, setting brightness and contrast, reducing color noise, reducing "grainy" noise, conversion of pure colors to grayscale, conversion of grayscale to colors typically used in fluorescence imaging, correction of uneven illumination (flatfield correction), merging color images (fluorescence), and extending the depth of focus. These corrections are explained in step-by-step procedures in the chapter that follows.
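
    The flatfield correction step mentioned above, in its common dark-subtract-and-normalize form; the frame names are placeholders, and the clipping simply keeps the result displayable.

      import numpy as np

      def flatfield_correct(raw, flat, dark):
          # corrected = (raw - dark) * mean(flat - dark) / (flat - dark)
          num = raw.astype(float) - dark
          den = flat.astype(float) - dark
          gain = den.mean() / np.clip(den, 1e-6, None)   # per-pixel gain map
          return np.clip(num * gain, 0, None)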

  4. Ranging Apparatus and Method Implementing Stereo Vision System

    NASA Technical Reports Server (NTRS)

    Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
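
    A hedged sketch of the two processing steps the patent describes: isolating the laser spot by differencing frames taken with the laser off and on, then converting horizontal disparity to range via the rectified-stereo relation Z = f*B/d. Rectified cameras and a single bright spot are assumed; the threshold is illustrative.

      import numpy as np

      def spot_centroid(off, on, thresh=30):
          # Pixels that changed between laser-off and laser-on frames: the spot.
          diff = np.abs(on.astype(int) - off.astype(int)) > thresh
          ys, xs = np.nonzero(diff)
          return xs.mean(), ys.mean()

      def laser_range(left_off, left_on, right_off, right_on, f_px, baseline_m):
          xl, _ = spot_centroid(left_off, left_on)
          xr, _ = spot_centroid(right_off, right_on)
          disparity = xl - xr                     # pixels; positive for a spot in front
          return f_px * baseline_m / disparity    # range in metres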

  5. Visual Imagery and False Memory for Pictures: A Functional Magnetic Resonance Imaging Study in Healthy Participants

    PubMed Central

    Stephan-Otto, Christian; Siddi, Sara; Senior, Carl; Muñoz-Samons, Daniel; Ochoa, Susana; Sánchez-Laforga, Ana María; Brébion, Gildas

    2017-01-01

    Background Visual mental imagery might be critical in the ability to discriminate imagined from perceived pictures. Our aim was to investigate the neural bases of this specific type of reality-monitoring process in individuals with high visual imagery abilities. Methods A reality-monitoring task was administered to twenty-six healthy participants using functional magnetic resonance imaging. During the encoding phase, 45 words designating common items, and 45 pictures of other common items, were presented in random order. During the recall phase, participants were required to remember whether a picture of the item had been presented, or only a word. Two subgroups of participants with a propensity for high vs. low visual imagery were contrasted. Results Activation of the amygdala, left inferior occipital gyrus, insula, and precuneus were observed when high visual imagers encoded words later remembered as pictures. At the recall phase, these same participants activated the middle frontal gyrus and inferior and superior parietal lobes when erroneously remembering pictures. Conclusions The formation of visual mental images might activate visual brain areas as well as structures involved in emotional processing. High visual imagers demonstrate increased activation of a fronto-parietal source-monitoring network that enables distinction between imagined and perceived pictures. PMID:28046076

  6. PROPAGATING DISTURBANCES IN THE SOLAR CORONA AND SPICULAR CONNECTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samanta, Tanmoy; Pant, Vaibhav; Banerjee, Dipankar, E-mail: tsamanta@iiap.res.in

    Spicules are small, hairy-like structures seen at the solar limb, mainly at chromospheric and transition region lines. They generally live for 3–10 minutes. We study these spicules in a south polar region of the Sun with coordinated observations using the Interface Region Imaging Spectrograph (IRIS) and the Atmospheric Imaging Assembly (AIA) instruments on board the Solar Dynamics Observatory. Propagating disturbances (PDs) are observed everywhere in the polar off-limb regions of the Sun at coronal heights. From these simultaneous observations, we show that the spicules and the PDs may have originated through a common process. From spacetime maps, we find that the start of the trajectory of PDs is almost cotemporal with the time of the rise of the spicular envelope as seen by IRIS slit-jaw images at 2796 and 1400 Å. During the return of spicular material, brightenings are seen in AIA 171 and 193 Å images. The quasi-periodic nature of the spicular activity, as revealed by the IRIS spectral image sequences, and its relation to coronal PDs, as recorded by the coronal AIA channels, suggest that they share a common origin. We propose that reconnection-like processes generate the spicules and waves simultaneously. The waves escape while the cool spicular material falls back.

  7. Propagating Disturbances in the Solar Corona and Spicular Connection

    NASA Astrophysics Data System (ADS)

    Samanta, Tanmoy; Pant, Vaibhav; Banerjee, Dipankar

    2015-12-01

    Spicules are small, hairy-like structures seen at the solar limb, mainly at chromospheric and transition region lines. They generally live for 3-10 minutes. We study these spicules in a south polar region of the Sun with coordinated observations using the Interface Region Imaging Spectrograph (IRIS) and the Atmospheric Imaging Assembly (AIA) instruments on board the Solar Dynamics Observatory. Propagating disturbances (PDs) are observed everywhere in the polar off-limb regions of the Sun at coronal heights. From these simultaneous observations, we show that the spicules and the PDs may have originated through a common process. From spacetime maps, we find that the start of the trajectory of PDs is almost cotemporal with the time of the rise of the spicular envelope as seen by IRIS slit-jaw images at 2796 and 1400 Å. During the return of spicular material, brightenings are seen in AIA 171 and 193 Å images. The quasi-periodic nature of the spicular activity, as revealed by the IRIS spectral image sequences, and its relation to coronal PDs, as recorded by the coronal AIA channels, suggest that they share a common origin. We propose that reconnection-like processes generate the spicules and waves simultaneously. The waves escape while the cool spicular material falls back.

  8. Mirror-Image Equivalence and Interhemispheric Mirror-Image Reversal

    PubMed Central

    Corballis, Michael C.

    2018-01-01

    Mirror-image confusions are common, especially in children and in some cases of neurological impairment. They can be a special impediment in activities such as reading and writing directional scripts, where mirror-image patterns (such as b and d) must be distinguished. Treating mirror images as equivalent, though, can also be adaptive in the natural world, which carries no systematic left-right bias and where the same object or event can appear in opposite viewpoints. Mirror-image equivalence and confusion are natural consequences of a bilaterally symmetrical brain. In the course of learning, mirror-image equivalence may be established through a process of symmetrization, achieved through homotopic interhemispheric exchange in the formation of memory circuits. Such circuits would not distinguish between mirror images. Learning to make mirror-image discriminations may depend either on existing brain asymmetries or on extensive learning overriding the symmetrization process. The balance between mirror-image equivalence and mirror-image discrimination may nevertheless be precarious, with spontaneous confusions or reversals, such as mirror writing, sometimes appearing naturally or as a manifestation of conditions like dyslexia. PMID:29706878

  9. Mirror-Image Equivalence and Interhemispheric Mirror-Image Reversal.

    PubMed

    Corballis, Michael C

    2018-01-01

    Mirror-image confusions are common, especially in children and in some cases of neurological impairment. They can be a special impediment in activities such as reading and writing directional scripts, where mirror-image patterns (such as b and d) must be distinguished. Treating mirror images as equivalent, though, can also be adaptive in the natural world, which carries no systematic left-right bias and where the same object or event can appear in opposite viewpoints. Mirror-image equivalence and confusion are natural consequences of a bilaterally symmetrical brain. In the course of learning, mirror-image equivalence may be established through a process of symmetrization, achieved through homotopic interhemispheric exchange in the formation of memory circuits. Such circuits would not distinguish between mirror images. Learning to make mirror-image discriminations may depend either on existing brain asymmetries or on extensive learning overriding the symmetrization process. The balance between mirror-image equivalence and mirror-image discrimination may nevertheless be precarious, with spontaneous confusions or reversals, such as mirror writing, sometimes appearing naturally or as a manifestation of conditions like dyslexia.

  10. Applying local binary patterns in image clustering problems

    NASA Astrophysics Data System (ADS)

    Skorokhod, Nikolai N.; Elizarov, Alexey I.

    2017-11-01

    Because cloudiness plays a critical role in the Earth's radiative balance, the study of the distribution of different cloud types and their movements is relevant. The main sources of such information are artificial satellites, which provide data in the form of images. The most commonly used approach to processing and classifying cloud images is based on the description of texture features. We propose the use of a set of local binary patterns to describe image texture.
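
    As a concrete illustration of the descriptor named above, the sketch below computes a local binary pattern (LBP) histogram for a single-channel image with scikit-image; the helper name and parameter choices are illustrative assumptions, not the authors' configuration.

    ```python
    # A minimal sketch of texture description with local binary patterns,
    # assuming a single-channel image tile (a stand-in for a cloud image).
    import numpy as np
    from skimage.feature import local_binary_pattern

    def lbp_histogram(image, points=8, radius=1):
        """Compute a normalized LBP histogram as a texture descriptor."""
        # 'uniform' LBP yields rotation-invariant codes in [0, points + 1].
        codes = local_binary_pattern(image, points, radius, method="uniform")
        hist, _ = np.histogram(codes, bins=points + 2,
                               range=(0, points + 2), density=True)
        return hist

    patch = (np.random.rand(64, 64) * 255).astype(np.uint8)
    print(lbp_histogram(patch))
    ```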

  11. NASA IMAGESEER: NASA IMAGEs for Science, Education, Experimentation and Research

    NASA Technical Reports Server (NTRS)

    Le Moigne, Jacqueline; Grubb, Thomas G.; Milner, Barbara C.

    2012-01-01

    A number of web-accessible databases, including medical, military or other image data, offer universities and other users the ability to teach or research new image processing techniques on relevant and well-documented data. However, NASA images have traditionally been difficult for researchers to find, are often only available in hard-to-use formats, and do not always provide sufficient context and background for a non-NASA scientist user to understand their content. The new IMAGESEER (IMAGEs for Science, Education, Experimentation and Research) database seeks to address these issues. Through a graphically rich web site for browsing and downloading all of the selected datasets, benchmarks, and tutorials, IMAGESEER provides a widely accessible database of NASA-centric, easy-to-read image data for teaching or validating new image processing algorithms. As such, IMAGESEER fosters collaboration between NASA and research organizations while simultaneously encouraging development of new and enhanced image processing algorithms. The first prototype includes a representative sampling of NASA multispectral and hyperspectral images from several Earth Science instruments, along with a few small tutorials. Image processing techniques are currently represented with cloud detection, image registration, and map cover/classification. For each technique, corresponding data are selected from four different geographic regions, i.e., mountain, urban, coastal water, and agricultural areas. Satellite images have been collected from several instruments - Landsat-5 and -7 Thematic Mappers, Earth Observing-1 (EO-1) Advanced Land Imager (ALI) and Hyperion, and the Moderate Resolution Imaging Spectroradiometer (MODIS). After geo-registration, these images are available in simple common formats such as GeoTIFF and raw formats, along with associated benchmark data.

  12. A New Color Image Encryption Scheme Using CML and a Fractional-Order Chaotic System

    PubMed Central

    Wu, Xiangjun; Li, Yang; Kurths, Jürgen

    2015-01-01

    Chaos-based image cryptosystems have been widely investigated in recent years to provide real-time encryption and transmission. In this paper, a novel color image encryption algorithm using coupled-map lattices (CML) and a fractional-order chaotic system is proposed to enhance the security and robustness of encryption algorithms with a permutation-diffusion structure. To make the encryption procedure more confusing and complex, an image division-shuffling process is put forward, where the plain image is first divided into four sub-images and the positions of the pixels in the whole image are then shuffled. A 280-bit external secret key is employed to generate the initial conditions and parameters of the two chaotic systems. Key space analysis, various statistical analyses, information entropy analysis, differential analysis and key sensitivity analysis are used to test the security of the new image encryption algorithm. The speed of the cryptosystem is analyzed and tested as well. Experimental results confirm that, in comparison to other image encryption schemes, the new algorithm has higher security and is fast enough for practical image encryption. Moreover, an extensive tolerance analysis of some common image processing operations, such as noise addition, cropping, JPEG compression, rotation, brightening and darkening, has been performed on the proposed image encryption technique. The corresponding results reveal that the proposed method has good robustness against these image processing operations and geometric attacks. PMID:25826602
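
    For readers unfamiliar with the permutation-diffusion structure, the toy sketch below illustrates the two stages with a logistic map as the chaotic source. It is an architectural illustration only, not the paper's CML/fractional-order scheme, and it is not cryptographically secure.

    ```python
    # A toy permutation-diffusion cipher: shuffle pixel positions, then XOR
    # with a chaotic keystream. For illustration only; not secure.
    import numpy as np

    def logistic_sequence(x0, n, r=3.99):
        """Generate n chaotic values from the logistic map x -> r*x*(1-x)."""
        seq = np.empty(n)
        x = x0
        for i in range(n):
            x = r * x * (1.0 - x)
            seq[i] = x
        return seq

    def encrypt(image, key=0.3456):
        flat = image.flatten().astype(np.uint8)
        chaos = logistic_sequence(key, flat.size)
        # Permutation stage: reorder pixels by sorting the chaotic values.
        perm = np.argsort(chaos)
        shuffled = flat[perm]
        # Diffusion stage: XOR with a chaotic keystream.
        keystream = (chaos * 256).astype(np.uint8)
        return (shuffled ^ keystream).reshape(image.shape), perm

    img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
    cipher, perm = encrypt(img)
    print(cipher)
    ```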

  13. High-performance image processing architecture

    NASA Astrophysics Data System (ADS)

    Coffield, Patrick C.

    1992-04-01

    The proposed architecture is a logical design specifically for image processing and other related computations. The design is a hybrid electro-optical concept consisting of three tightly coupled components: a spatial configuration processor (the optical analog portion), a weighting processor (digital), and an accumulation processor (digital). The systolic flow of data and image processing operations are directed by a control buffer and pipelined to each of the three processing components. The image processing operations are defined by an image algebra developed by the University of Florida. The algebra is capable of describing all common image-to-image transformations. The merit of this architectural design is how elegantly it handles the natural decomposition of algebraic functions into spatially distributed, point-wise operations. The effect of this particular decomposition allows convolution type operations to be computed strictly as a function of the number of elements in the template (mask, filter, etc.) instead of the number of picture elements in the image. Thus, a substantial increase in throughput is realized. The logical architecture may take any number of physical forms. While a hybrid electro-optical implementation is of primary interest, the benefits and design issues of an all digital implementation are also discussed. The potential utility of this architectural design lies in its ability to control all the arithmetic and logic operations of the image algebra's generalized matrix product. This is the most powerful fundamental formulation in the algebra, thus allowing a wide range of applications.
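
    The cost claim above can be made concrete: for a template of k elements, a convolution-type operation needs roughly k multiply-adds per output pixel regardless of the image's overall size. A minimal NumPy/SciPy sketch of the operation and its cost:

    ```python
    # Work per output pixel scales with the template size, not the image size.
    import numpy as np
    from scipy import ndimage

    image = np.random.rand(512, 512)
    template = np.ones((3, 3)) / 9.0   # 9 elements -> ~9 multiply-adds/pixel

    smoothed = ndimage.convolve(image, template, mode="nearest")

    # Total multiply-adds ~ image.size * template.size; the per-pixel cost
    # depends only on the template.
    print(image.size * template.size, "multiply-adds (approx.)")
    ```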

  14. Hyperspectral imaging for simultaneous measurements of two FRET biosensors in pancreatic β-cells.

    PubMed

    Elliott, Amicia D; Bedard, Noah; Ustione, Alessandro; Baird, Michelle A; Davidson, Michael W; Tkaczyk, Tomasz; Piston, David W

    2017-01-01

    Fluorescent protein (FP) biosensors based on Förster resonance energy transfer (FRET) are commonly used to study molecular processes in living cells. There are FP-FRET biosensors for many cellular molecules, but it remains difficult to perform simultaneous measurements of multiple biosensors. The overlapping emission spectra of the commonly used FPs, including CFP/YFP and GFP/RFP, make dual FRET measurements challenging. In addition, a snapshot imaging modality is required for simultaneous imaging. The Image Mapping Spectrometer (IMS) is a snapshot hyperspectral imaging system that collects high-resolution spectral data and can be used to overcome these challenges. We have previously demonstrated the IMS's capabilities for simultaneously imaging GFP and CFP/YFP-based biosensors in pancreatic β-cells. Here, we demonstrate a further capability of the IMS: imaging two FRET biosensors simultaneously with a single excitation band, one for cAMP and the other for Caspase-3. We use these measurements to monitor cAMP signaling and Caspase-3 activation simultaneously in pancreatic β-cells during oxidative stress and hyperglycemia, which are essential components in the pathology of diabetes.

  15. Plant phenomics: an overview of image acquisition technologies and image data analysis algorithms

    PubMed Central

    Perez-Sanz, Fernando; Navarro, Pedro J

    2017-01-01

    The study of phenomes, or phenomics, has been a central part of biology. The field of automatic phenotype acquisition technologies based on images has seen important advances in recent years. As with other high-throughput technologies, it addresses a common set of problems, including data acquisition and analysis. In this review, we give an overview of the main systems developed to acquire images. We give an in-depth analysis of image processing with its major issues and the algorithms that are being used, or emerging as useful, to obtain data out of images in an automatic fashion. PMID:29048559

  16. Functional Heterogeneity and Convergence in the Right Temporoparietal Junction

    PubMed Central

    Lee, Su Mei; McCarthy, Gregory

    2016-01-01

    The right temporoparietal junction (rTPJ) is engaged by tasks that manipulate biological motion processing, Theory of Mind attributions, and attention reorienting. The proximity of activations elicited by these tasks raises the question of whether these tasks share common cognitive component processes that are subserved by common neural substrates. Here, we used high-resolution whole-brain functional magnetic resonance imaging in a within-subjects design to determine whether these tasks activate common regions of the rTPJ. Each participant was presented with the 3 tasks in the same imaging session. In a whole-brain analysis, we found that only the right and left TPJs were activated by all 3 tasks. Multivoxel pattern analysis revealed that the regions of overlap could still discriminate the 3 tasks. Notably, we found significant cross-task classification in the right TPJ, which suggests a shared neural process between the 3 tasks. Taken together, these results support prior studies that have indicated functional heterogeneity within the rTPJ but also suggest a convergence of function within a region of overlap. These results also call for further investigation into the nature of the function subserved in this overlap region. PMID:25477367

  17. Improving the scalability of hyperspectral imaging applications on heterogeneous platforms using adaptive run-time data compression

    NASA Astrophysics Data System (ADS)

    Plaza, Antonio; Plaza, Javier; Paz, Abel

    2010-10-01

    Latest generation remote sensing instruments (called hyperspectral imagers) are now able to generate hundreds of images, corresponding to different wavelength channels, for the same area on the surface of the Earth. In previous work, we have reported that the scalability of parallel processing algorithms dealing with these high-dimensional data volumes is affected by the amount of data to be exchanged through the communication network of the system. However, large messages are common in hyperspectral imaging applications since processing algorithms are pixel-based, and each pixel vector to be exchanged through the communication network is made up of hundreds of spectral values. Thus, decreasing the amount of data to be exchanged could improve the scalability and parallel performance. In this paper, we propose a new framework based on intelligent utilization of wavelet-based data compression techniques for improving the scalability of a standard hyperspectral image processing chain on heterogeneous networks of workstations. This type of parallel platform is quickly becoming a standard in hyperspectral image processing due to the distributed nature of collected hyperspectral data as well as its flexibility and low cost. Our experimental results indicate that adaptive lossy compression can lead to improvements in the scalability of the hyperspectral processing chain without sacrificing analysis accuracy, even at sub-pixel precision levels.
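
    As a small illustration of the compression step central to the framework above, the sketch below applies lossy wavelet compression to a single band, assuming PyWavelets; the wavelet choice and threshold are illustrative, not the paper's adaptive settings.

    ```python
    # Lossy wavelet compression of one band: decompose, zero small detail
    # coefficients, reconstruct. Thresholding shrinks the data to exchange.
    import numpy as np
    import pywt

    band = np.random.rand(256, 256)

    # Two-level 2D wavelet decomposition.
    coeffs = pywt.wavedec2(band, "db2", level=2)

    # Zero out detail coefficients below a threshold (the lossy step).
    threshold = 0.05
    compressed = [coeffs[0]] + [
        tuple(pywt.threshold(d, threshold, mode="hard") for d in level)
        for level in coeffs[1:]
    ]

    reconstructed = pywt.waverec2(compressed, "db2")
    print("max abs error:", np.abs(band - reconstructed).max())
    ```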

  18. Optical sectioning and 3D reconstructions as an alternative to scanning electron microscopy for analysis of cell shape.

    PubMed

    Landis, Jacob B; Ventura, Kayla L; Soltis, Douglas E; Soltis, Pamela S; Oppenheimer, David G

    2015-04-01

    Visualizing flower epidermal cells is often desirable for investigating the interaction between flowers and their pollinators, in addition to the broader range of ecological interactions in which flowers are involved. We developed a protocol for visualizing petal epidermal cells without the limitations of the commonly used method of scanning electron microscopy (SEM). Flower material was collected and fixed in glutaraldehyde, followed by dehydration in an ethanol series. Flowers were dissected to collect petals, and subjected to a Histo-Clear series to remove the cuticle. Material was then stained with aniline blue, mounted on microscope slides, and imaged using a compound fluorescence microscope to obtain optical sections that were reconstructed into a 3D image. This optical sectioning method yielded high-quality images of the petal epidermal cells with virtually no damage to cells. Flowers were processed in larger batches than are possible using common SEM methods. Also, flower size was not a limiting factor as often observed in SEM studies. Flowers up to 5 cm in length were processed and mounted for visualization. This method requires no special equipment for sample preparation prior to imaging and should be seen as an alternative method to SEM.

  19. Gamma activity modulated by naming of ambiguous and unambiguous images: intracranial recording

    PubMed Central

    Cho-Hisamoto, Yoshimi; Kojima, Katsuaki; Brown, Erik C; Matsuzaki, Naoyuki; Asano, Eishi

    2014-01-01

    OBJECTIVE: Humans sometimes need to recognize objects based on vague and ambiguous silhouettes. Recognition of such images may require an intuitive guess. We determined the spatial-temporal characteristics of intracranially recorded gamma activity (at 50–120 Hz) augmented differentially by naming of ambiguous and unambiguous images. METHODS: We studied ten patients who underwent epilepsy surgery. Ambiguous and unambiguous images were presented during extraoperative electrocorticography recording, and patients were instructed to overtly name each object as it was first perceived. RESULTS: Both naming tasks were commonly associated with gamma-augmentation sequentially involving the occipital and occipital-temporal regions, bilaterally, within 200 ms after the onset of image presentation. Naming of ambiguous images elicited gamma-augmentation specifically involving portions of the inferior-frontal, orbitofrontal, and inferior-parietal regions at 400 ms and after. Unambiguous images were associated with more intense gamma-augmentation in portions of the occipital and occipital-temporal regions. CONCLUSIONS: Frontal-parietal gamma-augmentation specific to ambiguous images may reflect the additional cortical processing involved in making an intuitive guess. Occipital gamma-augmentation enhanced during naming of unambiguous images can be explained by visual processing of stimuli with richer detail. SIGNIFICANCE: Our results support the theoretical model that guessing processes in the visual domain follow the accumulation of sensory evidence resulting from bottom-up processing in the occipital-temporal visual pathways. PMID:24815577

  20. Processing of Intentional and Automatic Number Magnitudes in Children Born Prematurely: Evidence From fMRI

    PubMed Central

    Klein, Elise; Moeller, Korbinian; Kiechl-Kohlendorfer, Ursula; Kremser, Christian; Starke, Marc; Cohen Kadosh, Roi; Pupp-Peglow, Ulrike; Schocke, Michael; Kaufmann, Liane

    2014-01-01

    This study examined the neural correlates of intentional and automatic number processing (indexed by number comparison and physical Stroop task, respectively) in 6- and 7-year-old children born prematurely. Behavioral results revealed significant numerical distance and size congruity effects. Imaging results disclosed (1) largely overlapping fronto-parietal activation for intentional and automatic number processing, (2) a frontal to parietal shift of activation upon considering the risk factors gestational age and birth weight, and (3) a task-specific link between math proficiency and functional magnetic resonance imaging (fMRI) signal within distinct regions of the parietal lobes—indicating commonalities but also specificities of intentional and automatic number processing. PMID:25090014

  1. Electrophoresis gel image processing and analysis using the KODAK 1D software.

    PubMed

    Pizzonia, J

    2001-06-01

    The present article reports on the performance of the KODAK 1D Image Analysis Software for the acquisition of information from electrophoresis experiments and highlights the utility of several mathematical functions for subsequent image processing, analysis, and presentation. Digital images of Coomassie-stained polyacrylamide protein gels containing molecular weight standards and ethidium bromide stained agarose gels containing DNA mass standards are acquired using the KODAK Electrophoresis Documentation and Analysis System 290 (EDAS 290). The KODAK 1D software is used to optimize lane and band identification using features such as isomolecular weight lines. Mathematical functions for mass standard representation are presented, and two methods for estimation of unknown band mass are compared. Given the progressive transition of electrophoresis data acquisition and daily reporting in peer-reviewed journals to digital formats ranging from 8-bit systems such as EDAS 290 to more expensive 16-bit systems, the utility of algorithms such as Gaussian modeling, which can correct geometric aberrations such as clipping due to signal saturation common at lower bit depth levels, is discussed. Finally, image-processing tools that can facilitate image preparation for presentation are demonstrated.

  2. Despeckle filtering software toolbox for ultrasound imaging of the common carotid artery.

    PubMed

    Loizou, Christos P; Theofanous, Charoula; Pantziaris, Marios; Kasparis, Takis

    2014-04-01

    Ultrasound imaging of the common carotid artery (CCA) is a non-invasive tool used in medicine to assess the severity of atherosclerosis and monitor its progression through time. It is also used in border detection and texture characterization of the atherosclerotic carotid plaque in the CCA, and in the identification and measurement of the intima-media thickness (IMT) and the lumen diameter, all of which are very important in the assessment of cardiovascular disease (CVD). Visual perception, however, is hindered by speckle, a multiplicative noise that degrades the quality of ultrasound B-mode imaging. Noise reduction is therefore essential for improving the visual observation quality, or as a pre-processing step for further automated analysis such as image segmentation of the IMT and the atherosclerotic carotid plaque in ultrasound images. In order to facilitate this preprocessing step, we have developed in MATLAB(®) a unified toolbox that integrates image despeckle filtering (IDF), texture analysis and image quality evaluation techniques to automate the pre-processing and complement the disease evaluation in ultrasound CCA images. The proposed software is based on a graphical user interface (GUI) and incorporates image intensity normalization, 10 different despeckle filtering techniques (DsFlsmv, DsFwiener, DsFlsminsc, DsFkuwahara, DsFgf, DsFmedian, DsFhmedian, DsFad, DsFnldif, DsFsrad), 65 texture features, 15 quantitative image quality metrics and objective image quality evaluation. The software is publicly available in executable form and can be downloaded from http://www.cs.ucy.ac.cy/medinfo/. It was validated on 100 ultrasound images of the CCA by comparing its results with quantitative visual analysis performed by a medical expert. It was observed that the despeckle filters DsFlsmv and DsFhmedian improved image quality perception (based on the expert's assessment and the image texture and quality metrics). It is anticipated that the system could help the physician in the assessment of cardiovascular image analysis. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
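
    As an illustration of local-statistics despeckling of the kind the DsFlsmv filter performs, here is a minimal Lee-type filter in NumPy/SciPy; the window size and noise variance are illustrative assumptions, not the toolbox's defaults.

    ```python
    # A Lee-type despeckle filter: smooth in flat regions, preserve edges,
    # assuming a grayscale ultrasound image normalized to [0, 1].
    import numpy as np
    from scipy.ndimage import uniform_filter

    def lee_filter(image, window=5, noise_var=0.01):
        """Smooth multiplicative speckle while preserving edges."""
        mean = uniform_filter(image, window)
        mean_sq = uniform_filter(image * image, window)
        var = mean_sq - mean * mean
        # Weight -> 1 in high-variance (edge) regions, -> 0 in flat regions.
        weight = var / (var + noise_var)
        return mean + weight * (image - mean)

    noisy = np.clip(np.random.rand(128, 128), 0, 1)
    despeckled = lee_filter(noisy)
    ```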

  3. Fast segmentation and high-quality three-dimensional volume mesh creation from medical images for diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Jermyn, Michael; Ghadyani, Hamid; Mastanduno, Michael A.; Turner, Wes; Davis, Scott C.; Dehghani, Hamid; Pogue, Brian W.

    2013-08-01

    Multimodal approaches that combine near-infrared (NIR) and conventional imaging modalities have been shown to improve optical parameter estimation dramatically and thus represent a prevailing trend in NIR imaging. These approaches typically involve applying anatomical templates from magnetic resonance imaging/computed tomography/ultrasound images to guide the recovery of optical parameters. However, merging these data sets using current technology requires multiple software packages, substantial expertise, significant time-commitment, and often results in unacceptably poor mesh quality for optical image reconstruction, a reality that represents a significant roadblock for translational research of multimodal NIR imaging. This work addresses these challenges directly by introducing automated Digital Imaging and Communications in Medicine (DICOM) image stack segmentation and a new one-click three-dimensional mesh generator optimized for multimodal NIR imaging, and combining these capabilities into a single software package (available for free download) with a streamlined workflow. Image processing time and mesh quality benchmarks were examined for four common multimodal NIR use-cases (breast, brain, pancreas, and small animal) and were compared to a commercial image processing package. Applying these tools resulted in a fivefold decrease in image processing time and 62% improvement in minimum mesh quality, in the absence of extra mesh postprocessing. These capabilities represent a significant step toward enabling translational multimodal NIR research for both expert and nonexpert users in an open-source platform.

  4. Quantitative comparison of clustered microcalcifications in for-presentation and for-processing mammograms in full-field digital mammography.

    PubMed

    Wang, Juan; Nishikawa, Robert M; Yang, Yongyi

    2017-07-01

    Mammograms acquired with full-field digital mammography (FFDM) systems are provided in both "for-processing" and "for-presentation" image formats. For-presentation images are traditionally intended for visual assessment by radiologists. In this study, we investigate the feasibility of using for-presentation images in computerized analysis and diagnosis of microcalcification (MC) lesions. We make use of a set of 188 matched mammogram image pairs of MC lesions from 95 cases (biopsy proven), in which both for-presentation and for-processing images are provided for each lesion. We then analyze and characterize the MC lesions from for-presentation images and compare them with their counterparts in for-processing images. Specifically, we consider three important aspects in computer-aided diagnosis (CAD) of MC lesions. First, we quantify each MC lesion with a set of 10 image features of clustered MCs and 12 textural features of the lesion area. Second, we assess the detectability of individual MCs in each lesion from the for-presentation images by a commonly used difference-of-Gaussians (DoG) detector. Finally, we study the diagnostic accuracy in discriminating between benign and malignant MC lesions from the for-presentation images by a pretrained support vector machine (SVM) classifier. To accommodate the underlying background suppression and image enhancement in for-presentation images, a normalization procedure is applied. The quantitative image features of MC lesions from for-presentation images are highly consistent with those from for-processing images. The values of Pearson's correlation coefficient between features from the two formats range from 0.824 to 0.961 for the 10 MC image features, and from 0.871 to 0.963 for the 12 textural features. In detection of individual MCs, the FROC curve from for-presentation images is similar to that from for-processing images. In particular, at a sensitivity level of 80%, the average number of false-positives (FPs) per image region is 9.55 for both for-presentation and for-processing images. Finally, for classifying MC lesions as malignant or benign, the area under the ROC curve is 0.769 for for-presentation images, compared to 0.761 for for-processing images (P = 0.436). The quantitative results demonstrate that MC lesions in for-presentation images are highly consistent with those in for-processing images in terms of image features, detectability of individual MCs, and classification accuracy between malignant and benign lesions. These results indicate that for-presentation images can be compatible with for-processing images for use in CAD algorithms for MC lesions. © 2017 American Association of Physicists in Medicine.
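
    A difference-of-Gaussians detector of the kind mentioned above can be sketched in a few lines; the scales and threshold below are illustrative choices, not the study's settings.

    ```python
    # DoG blob detection: subtract a coarse Gaussian blur from a fine one and
    # threshold, flagging small bright structures such as microcalcifications.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def dog_detect(image, sigma_fine=1.0, sigma_coarse=2.0, threshold=0.05):
        """Return a boolean map of candidate bright blobs."""
        dog = gaussian_filter(image, sigma_fine) - gaussian_filter(image, sigma_coarse)
        return dog > threshold

    patch = np.random.rand(256, 256)
    candidates = dog_detect(patch)
    print(candidates.sum(), "candidate pixels")
    ```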

  5. 3D correlative light and electron microscopy of cultured cells using serial blockface scanning electron microscopy

    PubMed Central

    Lerner, Thomas R.; Burden, Jemima J.; Nkwe, David O.; Pelchen-Matthews, Annegret; Domart, Marie-Charlotte; Durgan, Joanne; Weston, Anne; Jones, Martin L.; Peddie, Christopher J.; Carzaniga, Raffaella; Florey, Oliver; Marsh, Mark; Gutierrez, Maximiliano G.

    2017-01-01

    The processes of life take place in multiple dimensions, but imaging these processes in even three dimensions is challenging. Here, we describe a workflow for 3D correlative light and electron microscopy (CLEM) of cell monolayers using fluorescence microscopy to identify and follow biological events, combined with serial blockface scanning electron microscopy to analyse the underlying ultrastructure. The workflow encompasses all steps from cell culture to sample processing, imaging strategy, and 3D image processing and analysis. We demonstrate successful application of the workflow to three studies, each aiming to better understand complex and dynamic biological processes, including bacterial and viral infections of cultured cells and formation of entotic cell-in-cell structures commonly observed in tumours. Our workflow revealed new insight into the replicative niche of Mycobacterium tuberculosis in primary human lymphatic endothelial cells, HIV-1 in human monocyte-derived macrophages, and the composition of the entotic vacuole. The broad application of this 3D CLEM technique will make it a useful addition to the correlative imaging toolbox for biomedical research. PMID:27445312

  6. Slide Set: Reproducible image analysis and batch processing with ImageJ.

    PubMed

    Nanes, Benjamin A

    2015-11-01

    Most imaging studies in the biological sciences rely on analyses that are relatively simple. However, manual repetition of analysis tasks across multiple regions in many images can complicate even the simplest analysis, making record keeping difficult, increasing the potential for error, and limiting reproducibility. While fully automated solutions are necessary for very large data sets, they are sometimes impractical for the small- and medium-sized data sets common in biology. Here we present the Slide Set plugin for ImageJ, which provides a framework for reproducible image analysis and batch processing. Slide Set organizes data into tables, associating image files with regions of interest and other relevant information. Analysis commands are automatically repeated over each image in the data set, and multiple commands can be chained together for more complex analysis tasks. All analysis parameters are saved, ensuring transparency and reproducibility. Slide Set includes a variety of built-in analysis commands and can be easily extended to automate other ImageJ plugins, reducing the manual repetition of image analysis without the set-up effort or programming expertise required for a fully automated solution.

  7. Dynamic-Receive Focusing with High-Frequency Annular Arrays

    NASA Astrophysics Data System (ADS)

    Ketterling, J. A.; Mamou, J.; Silverman, R. H.

    High-frequency ultrasound is commonly employed for ophthalmic and small-animal imaging because of the fine-resolution images it affords. Annular arrays allow improved depth of field and lateral resolution versus commonly used single-element, focused transducers. The best image quality from an annular array is achieved by using synthetic transmit-to-receive focusing while utilizing data from all transmit-to-receive element combinations. However, annular arrays must be laterally scanned to form an image and this requires one pass for each of the array elements when implementing full synthetic transmit-to-receive focusing. A dynamic-receive focusing approach permits a single pass, although at a sacrifice of depth of field and lateral resolution. A five-element, 20-MHz annular array is examined to determine the acoustic beam properties for synthetic and dynamic-receive focusing. A spatial impulse response model is used to simulate the acoustic beam properties for each focusing case and then data acquired from a human eye-bank eye are processed to demonstrate the effect of each approach on image quality.

  8. Three-dimensional image orientation through only one rotation applied to image processing in engineering.

    PubMed

    Rodríguez, Jaime; Martín, María T; Herráez, José; Arias, Pedro

    2008-12-10

    Photogrammetry is a science with many fields of application in civil engineering, where image processing is used for different purposes. In most cases, multiple images are used simultaneously for the reconstruction of 3D scenes. However, the use of isolated images is becoming more and more frequent, which requires calculating the orientation of the image with respect to the object space (exterior orientation); this is usually done through three rotations determined from known points in the object space (Euler angles). We describe the resolution of this problem by means of a single rotation derived from the vanishing line of the image space, a procedure completely external to the object, that is, requiring no contact with it. The results obtained appear to be optimal, and the procedure is simple and of great utility, since no points on the object are required, which is very useful in situations where access is difficult.
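
    The single-rotation idea can be illustrated once a rotation axis and angle are known (in the paper they are recovered from the vanishing line); Rodrigues' formula then builds the 3x3 rotation matrix. A small sketch, with a hypothetical axis and angle:

    ```python
    # Rodrigues' rotation formula: R = I + sin(a) K + (1 - cos(a)) K^2,
    # where K is the skew-symmetric matrix of the unit rotation axis.
    import numpy as np

    def rotation_matrix(axis, angle):
        """Rotation matrix for a unit axis and an angle in radians."""
        axis = np.asarray(axis, dtype=float)
        axis /= np.linalg.norm(axis)
        kx, ky, kz = axis
        K = np.array([[0, -kz, ky],
                      [kz, 0, -kx],
                      [-ky, kx, 0]])
        return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

    # Example: rotate a direction vector by 30 degrees about the y axis.
    R = rotation_matrix([0.0, 1.0, 0.0], np.deg2rad(30))
    print(R @ np.array([1.0, 0.0, 0.0]))
    ```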

  9. Training site statistics from Landsat and Seasat satellite imagery registered to a common map base

    NASA Technical Reports Server (NTRS)

    Clark, J.

    1981-01-01

    Landsat and Seasat satellite imagery and training site boundary coordinates were registered to a common Universal Transverse Mercator map base in the Newport Beach area of Orange County, California. The purpose was to establish a spatially-registered, multi-sensor data base which would test the use of Seasat synthetic aperture radar imagery to improve spectral separability of channels used for land use classification of an urban area. Digital image processing techniques originally developed for the digital mosaics of the California Desert and the State of Arizona were adapted to spatially register multispectral and radar data. Techniques included control point selection from imagery and USGS topographic quadrangle maps, control point cataloguing with the Image Based Information System, and spatial and spectral rectifications of the imagery. The radar imagery was pre-processed to reduce its tendency toward uniform data distributions, so that training site statistics for selected Landsat and pre-processed Seasat imagery indicated good spectral separation between channels.

  10. SIP: A Web-Based Astronomical Image Processing Program

    NASA Astrophysics Data System (ADS)

    Simonetti, J. H.

    1999-12-01

    I have written an astronomical image processing and analysis program designed to run over the internet in a Java-compatible web browser. The program, Sky Image Processor (SIP), is accessible at the SIP webpage (http://www.phys.vt.edu/SIP). Since nothing is installed on the user's machine, there is no need to download upgrades; the latest version of the program is always instantly available. Furthermore, the Java programming language is designed to work on any computer platform (any machine and operating system). The program could be used with students in web-based instruction or in a computer laboratory setting; it may also be of use in some research or outreach applications. While SIP is similar to other image processing programs, it is unique in some important respects. For example, SIP can load images from the user's machine or from the Web. An instructor can put images on a web server for students to load and analyze on their own personal computer. Or, the instructor can inform the students of images to load from any other web server. Furthermore, since SIP was written with students in mind, the philosophy is to present the user with the most basic tools necessary to process and analyze astronomical images. Images can be combined (by addition, subtraction, multiplication, or division), multiplied by a constant, smoothed, cropped, flipped, rotated, and so on. Statistics can be gathered for pixels within a box drawn by the user. Basic tools are available for gathering data from an image which can be used for performing simple differential photometry, or astrometry. Therefore, students can learn how astronomical image processing works. Since SIP is not part of a commercial CCD camera package, the program is written to handle the lowest-common-denominator image file format, FITS.

  11. Automated detection using natural language processing of radiologists recommendations for additional imaging of incidental findings.

    PubMed

    Dutta, Sayon; Long, William J; Brown, David F M; Reisner, Andrew T

    2013-08-01

    As use of radiology studies increases, there is a concurrent increase in incidental findings (eg, lung nodules) for which the radiologist issues recommendations for additional imaging for follow-up. Busy emergency physicians may be challenged to carefully communicate recommendations for additional imaging not relevant to the patient's primary evaluation. The emergence of electronic health records and natural language processing algorithms may help address this quality gap. We seek to describe recommendations for additional imaging from our institution and develop and validate an automated natural language processing algorithm to reliably identify recommendations for additional imaging. We developed a natural language processing algorithm to detect recommendations for additional imaging, using 3 iterative cycles of training and validation. The third cycle used 3,235 radiology reports (1,600 for algorithm training and 1,635 for validation) of discharged emergency department (ED) patients from which we determined the incidence of discharge-relevant recommendations for additional imaging and the frequency of appropriate discharge documentation. The test characteristics of the 3 natural language processing algorithm iterations were compared, using blinded chart review as the criterion standard. Discharge-relevant recommendations for additional imaging were found in 4.5% (95% confidence interval [CI] 3.5% to 5.5%) of ED radiology reports, but 51% (95% CI 43% to 59%) of discharge instructions failed to note those findings. The final natural language processing algorithm had 89% (95% CI 82% to 94%) sensitivity and 98% (95% CI 97% to 98%) specificity for detecting recommendations for additional imaging. For discharge-relevant recommendations for additional imaging, sensitivity improved to 97% (95% CI 89% to 100%). Recommendations for additional imaging are common, and failure to document relevant recommendations for additional imaging in ED discharge instructions occurs frequently. The natural language processing algorithm's performance improved with each iteration and offers a promising error-prevention tool. Copyright © 2013 American College of Emergency Physicians. Published by Mosby, Inc. All rights reserved.
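
    As a toy illustration of the detection task (not the authors' validated algorithm), a rule-based matcher might flag recommendation language as follows; the pattern and phrase list are hypothetical.

    ```python
    # Flag report sentences that recommend follow-up or additional imaging.
    import re

    PATTERN = re.compile(
        r"\b(recommend(?:ed|s)?|suggest(?:ed|s)?|advise(?:d)?)\b"
        r".{0,80}?\b(follow[- ]up|repeat|additional)\b"
        r".{0,80}?\b(CT|MRI|ultrasound|imaging|radiograph)\b",
        re.IGNORECASE | re.DOTALL,
    )

    def has_recommendation(report_text):
        return bool(PATTERN.search(report_text))

    print(has_recommendation(
        "Incidental 6 mm lung nodule. Recommend follow-up chest CT in 6 months."
    ))
    ```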

  12. Motion estimation of subcellular structures from fluorescence microscopy images.

    PubMed

    Vallmitjana, A; Civera-Tregon, A; Hoenicka, J; Palau, F; Benitez, R

    2017-07-01

    We present an automatic image processing framework to study moving intracellular structures from live cell fluorescence microscopy. The system includes the identification of static and dynamic structures from time-lapse images using data clustering as well as the identification of the trajectory of moving objects with a probabilistic tracking algorithm. The method has been successfully applied to study mitochondrial movement in neurons. The approach provides excellent performance under different experimental conditions and is robust to common sources of noise including experimental, molecular and biological fluctuations.

  13. PCA based clustering for brain tumor segmentation of T1w MRI images.

    PubMed

    Kaya, Irem Ersöz; Pehlivanlı, Ayça Çakmak; Sekizkardeş, Emine Gezmez; Ibrikci, Turgay

    2017-03-01

    Medical images are huge collections of information that are difficult to store and process, consuming extensive computing time. Therefore, reduction techniques are commonly used as a data pre-processing step to make the image data less complex, so that high-dimensional data can be identified by an appropriate low-dimensional representation. PCA is one of the most popular multivariate methods for data reduction. This paper is focused on clustering T1-weighted MRI images for brain tumor segmentation, with dimension reduction by different common Principal Component Analysis (PCA) algorithms. Our primary aim is to present a comparison between different variations of PCA algorithms on MRIs for two clustering methods. The five most common PCA algorithms, namely conventional PCA, Probabilistic Principal Component Analysis (PPCA), Expectation Maximization Based Principal Component Analysis (EM-PCA), the Generalized Hebbian Algorithm (GHA), and Adaptive Principal Component Extraction (APEX), were applied to reduce dimensionality in advance of two clustering algorithms, K-Means and Fuzzy C-Means. In the study, T1-weighted MRI images of the human brain with brain tumor were used for clustering. In addition to the original size of 512 lines and 512 pixels per line, three more sizes, 256 × 256, 128 × 128 and 64 × 64, were included in the study to examine their effect on the methods. The obtained results were compared in terms of both the reconstruction errors and the Euclidean distance errors among the clustered images containing the same number of principal components. According to the findings, the PPCA obtained the best results among all others. Furthermore, the EM-PCA and the PPCA assisted the K-Means algorithm to accomplish the best clustering performance in the majority of cases, as well as achieving significant results with both clustering algorithms for all sizes of T1w MRI images. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
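
    The basic pipeline compared in the study, PCA-based reduction followed by K-Means, can be sketched with scikit-learn; the toy data and component count below are illustrative stand-ins for the MRI setup.

    ```python
    # PCA dimension reduction followed by K-Means clustering (conventional
    # PCA only; the paper also compares PPCA, EM-PCA, GHA, and APEX).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    slice_ = np.random.rand(256, 256)      # stand-in for a T1w slice
    n_components = 20

    # Treat each image row as a sample; reduce 256 features to 20.
    reduced = PCA(n_components=n_components).fit_transform(slice_)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(reduced)
    print(labels.shape)                    # one cluster label per row here
    ```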

  14. Model-based estimation of breast percent density in raw and processed full-field digital mammography images from image-acquisition physics and patient-image characteristics

    NASA Astrophysics Data System (ADS)

    Keller, Brad M.; Nathan, Diane L.; Conant, Emily F.; Kontos, Despina

    2012-03-01

    Breast percent density (PD%), as measured mammographically, is one of the strongest known risk factors for breast cancer. While the majority of studies to date have focused on PD% assessment from digitized film mammograms, digital mammography (DM) is becoming increasingly common, and allows for direct PD% assessment at the time of imaging. This work investigates the accuracy of a generalized linear model-based (GLM) estimation of PD% from raw and postprocessed digital mammograms, utilizing image acquisition physics, patient characteristics and gray-level intensity features of the specific image. The model is trained in a leave-one-woman-out fashion on a series of 81 cases for which bilateral, mediolateral-oblique DM images were available in both raw and post-processed format. Baseline continuous and categorical density estimates were provided by a trained breast-imaging radiologist. Regression analysis is performed and Pearson's correlation, r, and Cohen's kappa, κ, are computed. The GLM PD% estimation model performed well on both processed (r=0.89, p<0.001) and raw (r=0.75, p<0.001) images. Model agreement with radiologist assigned density categories was also high for processed (κ=0.79, p<0.001) and raw (κ=0.76, p<0.001) images. Model-based prediction of breast PD% could allow for a reproducible estimation of breast density, providing a rapid risk assessment tool for clinical practice.

  15. Motion-compensated hand-held common-path Fourier-domain optical coherence tomography probe for image-guided intervention

    NASA Astrophysics Data System (ADS)

    Huang, Yong; Song, Cheol; Liu, Xuan; Kang, Jin U.

    2013-03-01

    A motion-compensated hand-held common-path Fourier-domain optical coherence tomography imaging probe has been developed for image-guided intervention during microsurgery. A hand-held prototype instrument was designed and fabricated by integrating an imaging fiber probe inside a stainless steel needle, which is attached to the ceramic shaft of a piezoelectric motor housed in an aluminum handle. The fiber probe obtains A-scan images. Distance information was extracted from the A-scans to track the sample surface, and a fixed distance was maintained by feedback motor control, which effectively compensated for hand tremor and target movement in the axial direction. A graphical user interface, real-time data processing, and visualization based on a CPU-GPU hybrid programming architecture were developed and used in the implementation of this system. To validate the system, free-hand optical coherence tomography images of various samples were obtained. The system can be easily integrated into microsurgical tools and robotics for a wide range of clinical applications. Such tools could offer physicians the freedom to easily image sites of interest with reduced risk and higher image quality.

  16. The Parallel Implementation of Algorithms for Finding the Reflection Symmetry of the Binary Images

    NASA Astrophysics Data System (ADS)

    Fedotova, S.; Seredin, O.; Kushnir, O.

    2017-05-01

    In this paper, we investigate an exact method of finding the axis of symmetry of a binary image, based on a brute-force search over all potential symmetry axes. As a measure of symmetry, we use the set-theoretic Jaccard similarity applied to the two subsets of image pixels produced by dividing the image along a candidate axis. The brute-force search reliably finds the axis of approximate symmetry, which can be considered ground truth, but it requires considerable time to process each image. As the first part of our contribution, we develop a parallel version of the brute-force algorithm. It allows us to process large image databases and obtain the desired axis of approximate symmetry for each shape in a database. Experimental studies on the "Butterflies" and "Flavia" datasets have shown that the proposed algorithm takes several minutes per image to find a symmetry axis. However, real-world applications demand a level of computational efficiency that allows the symmetry axis to be found in real or quasi-real time. So, for the task of fast shape-symmetry calculation on a common multicore PC, we elaborated another parallel program, based on the procedure suggested in (Fedotova, 2016). That method takes as its initial axis the axis obtained by a very fast comparison of two skeleton primitive sub-chains. This process takes about 0.5 s on a common PC, considerably faster than any of the optimized brute-force methods, including those implemented on a supercomputer. In our experiments, in 70 percent of cases the found axis coincides exactly with the ground-truth one, and in the remaining cases it is very close to it.
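
    The symmetry measure itself is simple to state in code: reflect the binary mask about a candidate axis and compute the Jaccard similarity. The sketch below scores only the vertical image midline; the brute-force method evaluates many candidate axes this way and keeps the best-scoring one.

    ```python
    # Jaccard similarity between a binary shape and its left-right mirror:
    # |A intersect B| / |A union B|, equal to 1 for a perfectly symmetric shape.
    import numpy as np

    def jaccard_symmetry(mask):
        """Jaccard similarity of a binary mask with its mirror image."""
        mirrored = mask[:, ::-1]
        intersection = np.logical_and(mask, mirrored).sum()
        union = np.logical_or(mask, mirrored).sum()
        return intersection / union if union else 1.0

    shape = np.zeros((64, 64), dtype=bool)
    shape[16:48, 20:44] = True        # a roughly centered rectangle
    print(jaccard_symmetry(shape))    # close to 1 for a symmetric shape
    ```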

  17. A theoretical Gaussian framework for anomalous change detection in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Acito, Nicola; Diani, Marco; Corsini, Giovanni

    2017-10-01

    Exploitation of temporal series of hyperspectral images is a relatively new discipline that has a wide variety of possible applications in fields like remote sensing, area surveillance, defense and security, and search and rescue. In this work, we discuss how images taken at two different times can be processed to detect changes caused by insertion, deletion or displacement of small objects in the monitored scene. This problem is known in the literature as anomalous change detection (ACD), and it can be viewed as the extension, to the multitemporal case, of the well-known anomaly detection problem in a single image. In fact, in both cases, the hyperspectral images are processed blindly in an unsupervised manner and without a-priori knowledge of the target spectrum. We introduce the ACD problem using an approach based on statistical decision theory, and we derive a common framework including different ACD approaches. In particular, we clearly define the observation space, the statistical distribution of the data conditioned on the two competing hypotheses, and the procedure followed to arrive at the solution. The proposed overview places emphasis on techniques based on the multivariate Gaussian model, which allows a formal presentation of the ACD problem and the rigorous derivation of the possible solutions in a way that is both mathematically more tractable and easier to interpret. We also discuss practical problems related to the application of the detectors in the real world and present affordable solutions. Namely, we describe the ACD processing chain, including the strategies that are commonly adopted to compensate for pervasive radiometric changes, caused by differing illumination/atmospheric conditions, and to mitigate residual geometric image co-registration errors. Results obtained on real, freely available data are discussed in order to test and compare the methods within the proposed general framework.
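
    Under the multivariate Gaussian model, a bare-bones anomalous change score can be sketched as the Mahalanobis distance of the stacked two-date pixel vector; this toy stand-in omits the radiometric compensation, co-registration handling, and hypothesis-test refinements the framework above covers.

    ```python
    # Toy Gaussian ACD: stack the two-date pixel vectors, fit a joint
    # covariance (which captures the x-y correlation), and flag pixels whose
    # squared Mahalanobis distance is unusually large.
    import numpy as np

    def acd_scores(x, y):
        """x, y: (n_pixels, n_bands) arrays from times 1 and 2."""
        z = np.hstack([x, y])
        z = z - z.mean(axis=0)
        cov = np.cov(z, rowvar=False)
        inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
        return np.einsum("ij,jk,ik->i", z, inv, z)  # squared Mahalanobis

    x = np.random.rand(1000, 10)
    y = x + 0.01 * np.random.randn(1000, 10)
    y[0] += 1.0                    # implant an anomalous change at pixel 0
    print(acd_scores(x, y).argmax())   # likely prints 0
    ```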

  18. Image interpolation and denoising for division of focal plane sensors using Gaussian processes.

    PubMed

    Gilboa, Elad; Cunningham, John P; Nehorai, Arye; Gruev, Viktor

    2014-06-16

    Image interpolation and denoising are important techniques in image processing. These methods are inherent to digital image acquisition, as most digital cameras are composed of a 2D grid of heterogeneous imaging sensors. Current polarization imagers employ four different pixelated polarization filters, commonly referred to as division-of-focal-plane polarization sensors. The sensors capture only partial information about the true scene, leading to a loss of spatial resolution as well as inaccuracy in the captured polarization information. Interpolation is a standard technique to recover the missing information and increase the accuracy of the captured polarization information. Here we focus specifically on Gaussian process regression as a way to perform statistical image interpolation, where estimates of sensor noise are used to improve the accuracy of the estimated pixel information. We further exploit the inherent grid structure of this data to create a fast exact algorithm that operates in O(N^(3/2)) (vs. the naive O(N³)), thus making the Gaussian process method computationally tractable for image data. This modeling advance and the enabling computational advance combine to produce significant improvements over previously published interpolation methods for polarimeters, most pronounced in cases of low signal-to-noise ratio (SNR). We provide the comprehensive mathematical model as well as experimental results of the GP interpolation performance for a division-of-focal-plane polarimeter.
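
    A small sketch of GP-based image interpolation with scikit-learn, where a white-noise kernel models sensor noise as described above; this is the naive O(N³) route, not the paper's fast exact grid algorithm, and the kernel settings are illustrative.

    ```python
    # GP regression for pixel interpolation: known pixel coordinates are
    # training inputs, missing pixel values are predicted.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    img = np.fromfunction(lambda i, j: np.sin(i / 4.0) + np.cos(j / 5.0),
                          (24, 24))

    coords = np.argwhere(np.ones_like(img, dtype=bool)).astype(float)
    values = img.ravel()
    known = rng.random(values.size) < 0.5      # half the pixels observed

    # WhiteKernel models sensor noise, improving interpolation accuracy.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=3.0) + WhiteKernel(1e-3))
    gp.fit(coords[known], values[known])
    predicted = gp.predict(coords[~known])
    print(np.abs(predicted - values[~known]).mean())
    ```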

  19. Intensity-hue-saturation-based image fusion using iterative linear regression

    NASA Astrophysics Data System (ADS)

    Cetin, Mufit; Tepecik, Abdulkadir

    2016-10-01

    The image fusion process produces a high-resolution image by combining the superior features of a low-spatial-resolution multispectral image and a high-resolution panchromatic image. Despite its common usage, owing to its fast computation and high sharpening ability, the intensity-hue-saturation (IHS) fusion method may cause color distortions, especially when large gray-value differences exist between the images to be combined. This paper proposes a spatially adaptive IHS (SA-IHS) technique to avoid these distortions by automatically adjusting the exact spatial information to be injected into the multispectral image during the fusion process. The SA-IHS method essentially suppresses the effects of those pixels that cause spectral distortions by assigning them weaker weights, avoiding a large number of redundancies in the fused image. The experimental database consists of IKONOS images, and the experimental results, both visual and statistical, demonstrate the improvement of the proposed algorithm when compared with several other IHS-like methods such as IHS, generalized IHS, fast IHS, and generalized adaptive IHS.
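
    The plain IHS baseline the paper builds on can be sketched with scikit-image, using the HSV transform as a stand-in for the IHS transform: convert the upsampled multispectral image, substitute the panchromatic band for the intensity channel, and convert back. The adaptive weighting of SA-IHS is not shown.

    ```python
    # Classical intensity-substitution pansharpening (HSV as an IHS stand-in).
    import numpy as np
    from skimage.color import rgb2hsv, hsv2rgb

    def ihs_fuse(rgb_up, pan):
        """rgb_up: (H, W, 3) multispectral upsampled to pan size; pan: (H, W)."""
        hsv = rgb2hsv(rgb_up)
        hsv[..., 2] = pan          # replace the value/intensity channel
        return hsv2rgb(hsv)

    rgb = np.random.rand(128, 128, 3)   # stand-in upsampled multispectral
    pan = np.random.rand(128, 128)      # stand-in panchromatic band
    fused = ihs_fuse(rgb, pan)
    ```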

  20. Animal Detection in Natural Images: Effects of Color and Image Database

    PubMed Central

    Zhu, Weina; Drewes, Jan; Gegenfurtner, Karl R.

    2013-01-01

    The visual system has a remarkable ability to extract categorical information from complex natural scenes. In order to elucidate the role of low-level image features in the recognition of objects in natural scenes, we recorded saccadic eye movements and event-related potentials (ERPs) in two experiments in which human subjects had to detect animals in previously unseen natural images. We used a new natural image database (ANID) that is free of some of the potential artifacts that have plagued the widely used COREL images. Color and grayscale images picked from the ANID and COREL databases were used. In all experiments, color images induced a greater N1 EEG component at earlier time points than grayscale images. We suggest that this influence of color in animal detection may be masked by later processes when measuring reaction times. The ERP results of the go/nogo and forced-choice tasks were similar to those reported earlier. The non-animal stimuli induced a larger N1 than animal stimuli in both the COREL and ANID databases. This result indicates that ultra-fast processing of animal images is possible irrespective of the particular database. With the ANID images, the difference between color and grayscale images is more pronounced than with the COREL images. The earlier use of the COREL images might have led to an underestimation of the contribution of color. Therefore, we conclude that the ANID image database is better suited for the investigation of the processing of natural scenes than other commonly used databases. PMID:24130744

  1. Using parallel evolutionary development for a biologically-inspired computer vision system for mobile robots.

    PubMed

    Wright, Cameron H G; Barrett, Steven F; Pack, Daniel J

    2005-01-01

    We describe a new approach to attacking the problem of robust computer vision for mobile robots. The overall strategy is to mimic the biological evolution of animal vision systems. Our basic imaging sensor is based upon the eye of the common house fly, Musca domestica. The computational algorithms are a mix of traditional image processing, subspace techniques, and multilayer neural networks.

  2. Highly sensitive image-derived indices of water-stressed plants using hyperspectral imaging in SWIR and histogram analysis

    PubMed Central

    Kim, David M.; Zhang, Hairong; Zhou, Haiying; Du, Tommy; Wu, Qian; Mockler, Todd C.; Berezin, Mikhail Y.

    2015-01-01

    The optical signature of leaves is an important monitoring and predictive parameter for a variety of biotic and abiotic stresses, including drought. Such signatures derived from spectroscopic measurements provide vegetation indices – a quantitative method for assessing plant health. However, the commonly used metrics suffer from low sensitivity. Relatively small changes in water content in moderately stressed plants demand high-contrast imaging to distinguish affected plants. We present a new approach in deriving sensitive indices using hyperspectral imaging in a short-wave infrared range from 800 nm to 1600 nm. Our method, based on high spectral resolution (1.56 nm) instrumentation and image processing algorithms (quantitative histogram analysis), enables us to distinguish a moderate water stress equivalent of 20% relative water content (RWC). The identified image-derived indices 15XX nm/14XX nm (i.e. 1529 nm/1416 nm) were superior to common vegetation indices, such as WBI, MSI, and NDWI, with significantly better sensitivity, enabling early diagnostics of plant health. PMID:26531782
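
    An image-derived ratio index such as 1529 nm/1416 nm reduces, in code, to locating the two bands on the wavelength axis and dividing the corresponding image planes; the sketch below uses hypothetical data and a histogram summary in the spirit of the analysis described above.

    ```python
    # Band-ratio index from a hyperspectral cube, summarized with a histogram.
    import numpy as np

    wavelengths = np.linspace(800, 1600, 512)   # nm, hypothetical axis
    cube = np.random.rand(64, 64, 512)          # stand-in reflectance cube

    b_num = np.argmin(np.abs(wavelengths - 1529))
    b_den = np.argmin(np.abs(wavelengths - 1416))

    index = cube[..., b_num] / (cube[..., b_den] + 1e-9)

    # Histogram analysis: compare index distributions between plant groups.
    hist, edges = np.histogram(index, bins=50)
    print(edges[hist.argmax()])                 # mode of the index
    ```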

  3. ISLE (Image and Signal Processing LISP Environment) reference manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sherwood, R.J.; Searfus, R.M.

    1990-01-01

    ISLE is a rapid prototyping system for performing image and signal processing. It is designed to meet the needs of a person developing image and signal processing algorithms in a research environment. The image and signal processing modules in ISLE form a very capable package in themselves. They also provide a rich environment for quickly and easily integrating user-written software modules into the package. ISLE is well suited to applications in which there is a need to develop a processing algorithm in an interactive manner. It is straightforward to develop an algorithm, load it into ISLE, apply the algorithm to an image or signal, display the results, then modify the algorithm and repeat the develop-load-apply-display cycle. ISLE consists of a collection of image and signal processing modules integrated into a cohesive package through a standard command interpreter. The ISLE developers elected to concentrate their effort on developing image and signal processing software rather than developing a command interpreter. A COMMON LISP interpreter was selected for the command interpreter because it already has the features desired of a command interpreter, it supports dynamic loading of modules for customization purposes, it supports run-time parameter and argument type checking, it is very well documented, and it is a commercially supported product. This manual is intended to be a reference manual for the ISLE functions. The functions are grouped into a number of categories and briefly discussed in the Function Summary chapter. Full descriptions of the functions and all their arguments are given in the Function Descriptions chapter. 6 refs.

  4. Shortcomings of low-cost imaging systems for viewing computed radiographs.

    PubMed

    Ricke, J; Hänninen, E L; Zielinski, C; Amthauer, H; Stroszczynski, C; Liebig, T; Wolf, M; Hosten, N

    2000-01-01

    To assess the potential advantages of a new PC-based viewing tool featuring image post-processing for viewing computed radiographs on low-cost hardware (a PC with a common display card and color monitor), and to evaluate the effect of using color versus monochrome monitors. Computed radiographs of a statistical phantom were viewed on a PC, with and without post-processing (spatial frequency and contrast processing), employing a monochrome or a color monitor. Findings were compared with viewing on a radiological workstation and evaluated with ROC analysis. Image post-processing significantly improved the perception of low-contrast details irrespective of the monitor used. No significant difference in perception was observed between monochrome and color monitors. Review at the radiological workstation was superior to review on the PC with image processing. Lower-quality hardware (graphics card and monitor) used in low-cost PCs negatively affects the perception of low-contrast details in computed radiographs. In this situation, it is highly recommended to use spatial frequency and contrast processing. No significant quality gain was observed for the high-end monochrome monitor compared with the color display. However, the color monitor was more strongly affected by high ambient illumination.

  5. EVALUATION OF LOW-SUN ILLUMINATED LANDSAT-4 THEMATIC MAPPER DATA FOR MAPPING HYDROTHERMALLY ALTERED ROCKS IN SOUTHERN NEVADA.

    USGS Publications Warehouse

    Podwysocki, Melvin H.; Power, Marty S.; Salisbury, Jack; Jones, O.D.

    1984-01-01

    Landsat-4 Thematic Mapper (TM) data of southern Nevada collected under conditions of low-angle solar illumination were digitally processed to identify hydroxyl-bearing minerals commonly associated with hydrothermal alteration in volcanic terrains. Digital masking procedures were used to exclude shadowed areas and vegetation and thus to produce a color-ratio-composite (CRC) image suitable for testing the new TM bands as a means to map hydrothermally altered rocks. Field examination of a masked CRC image revealed that several different types of altered rocks displayed hues associated with spectral characteristics common to hydroxyl-bearing minerals. Several types of unaltered rocks also displayed similar hues.

  6. Survey: interpolation methods for whole slide image processing.

    PubMed

    Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T

    2017-02-01

    Evaluating whole slide images of histological and cytological samples is used in pathology for diagnostics, grading and prognosis. It is often necessary to rescale whole slide images of very large size. Image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods, and as a result of our analysis, we try to select one interpolation method as the preferred solution. To compare the performance of the interpolation methods, test images were scaled and then rescaled to the original size using the same algorithm. The modified image was compared to the original image in various aspects. The time needed for calculations and the results of quantification performance on the modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which method of interpolation is best for resizing whole slide images so that they can be further processed using quantification methods. As a result, the interpolation method has to be selected depending on the task involving whole slide images. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
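
    The rescale-and-compare protocol described above can be sketched as follows; the scale factor, the particular OpenCV interpolation modes, and PSNR as the comparison metric are illustrative assumptions, not the survey's exact setup:

      # Sketch of the evaluation protocol: downscale, rescale back with the
      # same interpolation method, and compare the result to the original.
      import cv2
      import numpy as np

      METHODS = {"nearest": cv2.INTER_NEAREST, "bilinear": cv2.INTER_LINEAR,
                 "bicubic": cv2.INTER_CUBIC, "lanczos": cv2.INTER_LANCZOS4}

      def psnr(a, b):
          mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
          return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

      def evaluate(img, scale=0.25):
          h, w = img.shape[:2]
          scores = {}
          for name, flag in METHODS.items():
              small = cv2.resize(img, None, fx=scale, fy=scale, interpolation=flag)
              back = cv2.resize(small, (w, h), interpolation=flag)
              scores[name] = psnr(img, back)  # higher = less loss from resizing
          return scores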

  7. Remembering verbally-presented items as pictures: Brain activity underlying visual mental images in schizophrenia patients with visual hallucinations.

    PubMed

    Stephan-Otto, Christian; Siddi, Sara; Senior, Carl; Cuevas-Esteban, Jorge; Cambra-Martí, Maria Rosa; Ochoa, Susana; Brébion, Gildas

    2017-09-01

    Previous research suggests that visual hallucinations in schizophrenia consist of mental images mistaken for percepts due to failure of the reality-monitoring processes. However, the neural substrates that underpin such dysfunction are currently unknown. We conducted a brain imaging study to investigate the role of visual mental imagery in visual hallucinations. Twenty-three patients with schizophrenia and 26 healthy participants were administered a reality-monitoring task whilst undergoing an fMRI protocol. At the encoding phase, a mixture of pictures of common items and labels designating common items were presented. On the memory test, participants were requested to remember whether a picture of the item had been presented or merely its label. Visual hallucination scores were associated with a liberal response bias reflecting propensity to erroneously remember pictures of the items that had in fact been presented as words. At encoding, patients with visual hallucinations differentially activated the right fusiform gyrus when processing the words they later remembered as pictures, which suggests the formation of visual mental images. On the memory test, the whole patient group activated the anterior cingulate and medial superior frontal gyrus when falsely remembering pictures. However, no differential activation was observed in patients with visual hallucinations, whereas in the healthy sample, the production of visual mental images at encoding led to greater activation of a fronto-parietal decisional network on the memory test. Visual hallucinations are associated with enhanced visual imagery and possibly with a failure of the reality-monitoring processes that enable discrimination between imagined and perceived events. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Guided wave imaging of oblique reflecting interfaces in pipes using common-source synthetic focusing

    NASA Astrophysics Data System (ADS)

    Sun, Zeqing; Sun, Anyu; Ju, Bing-Feng

    2018-04-01

    Cross-mode-family mode conversion and secondary reflection of guided waves in pipes complicate the processing of guided wave signals and can cause false detections. In this paper, filters operating in the spectral domain of wavenumber, circumferential order and frequency are designed to suppress the signal components of unwanted mode families and unwanted traveling directions. Common-source synthetic focusing is used to reconstruct defect images from the guided wave signals. Simulations of the reflections from linear oblique defects and a semicircular defect are separately implemented. Defect images reconstructed from the simulation results under different excitation conditions are comparatively studied in terms of axial resolution, reflection amplitude, detectable oblique angle and so on. Further, the proposed method is experimentally validated by detecting linear cracks with various oblique angles (10-40°). The proposed method relies on the guided wave signals captured during 2-D scanning of a cylindrical area on the pipe. The redundancy of the signals is analyzed to reduce the time consumption of the scanning process and to enhance the practicability of the proposed method.
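
    The filters described above operate jointly on wavenumber, circumferential order and frequency; the minimal 2-D sketch below shows only the direction-selective part, assuming a (time x axial position) record and that the FFT sign convention maps the two traveling directions onto opposite frequency-wavenumber quadrant pairs:

      # Sketch of an f-k filter suppressing one traveling direction in a
      # (time x position) guided-wave record; the sign convention depends on
      # the FFT definition and the physical setup.
      import numpy as np

      def fk_direction_filter(data, keep_positive=True):
          """data: 2-D array, axis 0 = time samples, axis 1 = axial positions."""
          spec = np.fft.fft2(data)
          f = np.fft.fftfreq(data.shape[0])[:, None]  # temporal frequencies
          k = np.fft.fftfreq(data.shape[1])[None, :]  # spatial wavenumbers
          # Opposite travel directions occupy quadrants with opposite f*k sign;
          # the mask is symmetric under (f, k) -> (-f, -k), so output stays real.
          mask = (f * k >= 0) if keep_positive else (f * k <= 0)
          return np.real(np.fft.ifft2(spec * mask))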

  9. Localization Using Visual Odometry and a Single Downward-Pointing Camera

    NASA Technical Reports Server (NTRS)

    Swank, Aaron J.

    2012-01-01

    Stereo imaging is a technique commonly employed for vision-based navigation. For such applications, two images are acquired from different vantage points and then compared using transformations to extract depth information. The technique is commonly used in robotics for obstacle avoidance or for Simultaneous Localization And Mapping (SLAM). Yet the process requires a number of image processing steps and therefore tends to be CPU-intensive, which limits the real-time data rate and its use in power-limited applications. Evaluated here is a technique in which a monocular camera is used for vision-based odometry. In this work, an optical flow technique with feature recognition is performed to generate odometry measurements. The visual odometry sensor measurements are intended to be used as control inputs or measurements in a sensor fusion algorithm using low-cost MEMS-based inertial sensors to provide improved localization information. Presented here are visual odometry results which demonstrate the challenges associated with using ground-pointing cameras for visual odometry. The focus is on rover-based robotic applications for localization within GPS-denied environments.
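
    A minimal sketch of one feature-tracking step of the kind described, assuming a downward-pointing camera undergoing pure translation over a flat floor with a known ground sample distance; pose integration and the inertial sensor fusion are omitted:

      # Sketch of a visual-odometry step: track features between consecutive
      # 8-bit grayscale frames and take the median displacement as the
      # ground-plane motion.
      import cv2
      import numpy as np

      def odometry_step(prev_gray, curr_gray, metres_per_pixel):
          pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                        qualityLevel=0.01, minDistance=7)
          if pts is None:
              return np.zeros(2)
          nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
          good = status.ravel() == 1
          flow = (nxt[good] - pts[good]).reshape(-1, 2)
          # The median is robust to outlier tracks (e.g. moving shadows).
          return np.median(flow, axis=0) * metres_per_pixel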

  10. Rapid Target Detection in High Resolution Remote Sensing Images Using Yolo Model

    NASA Astrophysics Data System (ADS)

    Wu, Z.; Chen, X.; Gao, Y.; Li, Y.

    2018-04-01

    Object detection in high-resolution remote sensing images is a fundamental and challenging problem in the field of remote sensing imagery analysis for civil and military applications, due to complex neighboring environments which can cause recognition algorithms to mistake irrelevant ground objects for target objects. The Deep Convolutional Neural Network (DCNN) is the hotspot in object detection for its powerful ability of feature extraction and has achieved state-of-the-art results in computer vision. The common pipeline of object detection based on DCNNs consists of region proposal, CNN feature extraction, region classification and post-processing. The YOLO model frames object detection as a regression problem, using a single CNN to predict bounding boxes and class probabilities in an end-to-end way, which makes prediction faster. In this paper, a YOLO-based model is used for object detection in high-resolution remote sensing images. Experiments on the NWPU VHR-10 dataset and our airport/airplane dataset obtained from Google Earth show that, compared with the common pipeline, the proposed model speeds up the detection process while maintaining good accuracy.

  11. Plant phenomics: an overview of image acquisition technologies and image data analysis algorithms.

    PubMed

    Perez-Sanz, Fernando; Navarro, Pedro J; Egea-Cortines, Marcos

    2017-11-01

    The study of phenomes, or phenomics, has been a central part of biology. The field of automatic phenotype acquisition technologies based on images has seen important advances in recent years. As with other high-throughput technologies, it addresses a common set of problems, including data acquisition and analysis. In this review, we give an overview of the main systems developed to acquire images. We give an in-depth analysis of image processing with its major issues and the algorithms that are being used or emerging as useful to obtain data out of images in an automatic fashion. © The Author 2017. Published by Oxford University Press.

  12. Investigation of radio astronomy image processing techniques for use in the passive millimetre-wave security screening environment

    NASA Astrophysics Data System (ADS)

    Taylor, Christopher T.; Hutchinson, Simon; Salmon, Neil A.; Wilkinson, Peter N.; Cameron, Colin D.

    2014-06-01

    Image processing techniques can be used to improve the cost-effectiveness of future interferometric Passive MilliMetre Wave (PMMW) imagers. The implementation of such techniques will allow a reduction in the number of collecting elements whilst ensuring adequate image fidelity is maintained. Various techniques have been developed by the radio astronomy community to enhance the imaging capability of sparse interferometric arrays. The most prominent are Multi-Frequency Synthesis (MFS) and non-linear deconvolution algorithms, such as the Maximum Entropy Method (MEM) and variations of the CLEAN algorithm. This investigation focuses on the implementation of these methods in the de facto standard for radio astronomy image processing, the Common Astronomy Software Applications (CASA) package, building upon the discussion presented in Taylor et al., SPIE 8362-0F. We describe the image conversion process into a CASA-suitable format, followed by a series of simulations that exploit the highlighted deconvolution and MFS algorithms assuming far-field imagery. The primary target application used for this investigation is an outdoor security scanner for soft-sided Heavy Goods Vehicles. A quantitative analysis of the effectiveness of the aforementioned image processing techniques is presented, with thoughts on the potential cost savings such an approach could yield. Consideration is also given to how the implementation of these techniques in CASA might be adapted to operate in a near-field target environment. This may enable much wider usability by the imaging community outside of radio astronomy and thus would be directly relevant to portal screening security systems in the microwave and millimetre wave bands.

  13. Methodology for the Elimination of Reflection and System Vibration Effects in Particle Image Velocimetry Data Processing

    NASA Technical Reports Server (NTRS)

    Bremmer, David M.; Hutcheson, Florence V.; Stead, Daniel J.

    2005-01-01

    A methodology to eliminate model reflection and system vibration effects from post-processed particle image velocimetry data is presented. Reflection and vibration lead to loss of data and biased velocity calculations in PIV processing. A series of algorithms were developed to alleviate these problems. Reflections emanating from the model surface caused by the laser light sheet are removed from the PIV images by subtracting an image in which only the reflections are visible from all of the images within a data acquisition set. The result is a set of PIV images where only the seeded particles are apparent. Fiduciary marks painted on the surface of the test model were used as reference points in the images. By locating the centroids of these marks it was possible to shift all of the images to a common reference frame. This image alignment procedure, as well as the subtraction of model reflections, is performed in a first algorithm. Once the images have been shifted, they are compared with a background image that was recorded under no-flow conditions. The second and third algorithms find the coordinates of fiduciary marks in the acquisition set images and the background image and calculate the displacement between these images. The final algorithm shifts all of the images so that fiduciary mark centroids lie in the same location as the background image centroids. This methodology effectively eliminated the effects of vibration so that unbiased data could be used for PIV processing. The PIV data used for this work were generated at the NASA Langley Research Center Quiet Flow Facility. The experiment entailed flow visualization near the flap side edge region of an airfoil model. Commercial PIV software was used for data acquisition and processing. In this paper, the experiment and the PIV acquisition of the data are described. The methodology used to develop the algorithms for reflection and system vibration removal is stated, and the implementation, testing and validation of these algorithms are presented.
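
    A minimal sketch of the two operations described, reflection subtraction and fiducial-based alignment, simplified to a single aggregate fiducial centroid rather than the per-mark centroids used by the authors; the threshold value is an assumption:

      # Sketch of reflection subtraction and fiducial-centroid image alignment.
      import numpy as np
      from scipy.ndimage import center_of_mass, shift

      def remove_reflection(frame, reflection_only):
          """Subtract an image containing only the laser-sheet reflections."""
          return np.clip(frame.astype(np.float64) - reflection_only, 0, None)

      def align_to_background(frame, background, fiducial_thresh=200):
          """Shift a frame so its fiducial centroid matches the background's."""
          c_frame = center_of_mass(frame > fiducial_thresh)
          c_bg = center_of_mass(background > fiducial_thresh)
          dy, dx = np.subtract(c_bg, c_frame)
          return shift(frame, (dy, dx), order=1, mode="nearest")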

  14. Vivaldi: A Domain-Specific Language for Volume Processing and Visualization on Distributed Heterogeneous Systems.

    PubMed

    Choi, Hyungsuk; Choi, Woohyuk; Quan, Tran Minh; Hildebrand, David G C; Pfister, Hanspeter; Jeong, Won-Ki

    2014-12-01

    As the size of image data from microscopes and telescopes increases, the need for high-throughput processing and visualization of large volumetric data has become more pressing. At the same time, many-core processors and GPU accelerators are commonplace, making high-performance distributed heterogeneous computing systems affordable. However, effectively utilizing GPU clusters is difficult for novice programmers, and even experienced programmers often fail to fully leverage the computing power of new parallel architectures due to their steep learning curve and programming complexity. In this paper, we propose Vivaldi, a new domain-specific language for volume processing and visualization on distributed heterogeneous computing systems. Vivaldi's Python-like grammar and parallel processing abstractions provide flexible programming tools for non-experts to easily write high-performance parallel computing code. Vivaldi provides commonly used functions and numerical operators for customized visualization and high-throughput image processing applications. We demonstrate the performance and usability of Vivaldi on several examples ranging from volume rendering to image segmentation.

  15. Processing and analysis of cardiac optical mapping data obtained with potentiometric dyes

    PubMed Central

    Laughner, Jacob I.; Ng, Fu Siong; Sulkin, Matthew S.; Arthur, R. Martin

    2012-01-01

    Optical mapping has become an increasingly important tool to study cardiac electrophysiology in the past 20 years. Multiple methods are used to process and analyze cardiac optical mapping data, and no consensus currently exists regarding the optimum methods. The specific methods chosen to process optical mapping data are important because inappropriate data processing can affect the content of the data and thus alter the conclusions of the studies. Details of the different steps in processing optical imaging data, including image segmentation, spatial filtering, temporal filtering, and baseline drift removal, are provided in this review. We also provide descriptions of the common analyses performed on data obtained from cardiac optical imaging, including activation mapping, action potential duration mapping, repolarization mapping, conduction velocity measurements, and optical action potential upstroke analysis. Optical mapping is often used to study complex arrhythmias, and we also discuss dominant frequency analysis and phase mapping techniques used for the analysis of cardiac fibrillation. PMID:22821993

  16. Study on Underwater Image Denoising Algorithm Based on Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Jian, Sun; Wen, Wang

    2017-02-01

    This paper analyzes the application of MATLAB to underwater image processing. The transmission characteristics of underwater laser light signals and the kinds of underwater noise are described, and common noise suppression algorithms (Wiener filtering, median filtering, and average filtering) are presented. The advantages and disadvantages of each algorithm with respect to image sharpness and edge preservation are then compared. A hybrid filter algorithm based on the wavelet transform is proposed, which can be used for color image denoising. Finally, the PSNR and NMSE of each algorithm are reported to compare their denoising performance.
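
    A minimal sketch of wavelet-threshold denoising of the kind the paper builds on, assuming the PyWavelets package; the wavelet family, decomposition level and universal-threshold rule are illustrative choices, not the authors' hybrid filter:

      # Sketch of wavelet denoising with soft thresholding (PyWavelets),
      # plus the PSNR metric used to compare algorithms.
      import numpy as np
      import pywt

      def wavelet_denoise(img, wavelet="db4", level=3):
          coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=level)
          # Noise estimate from the finest diagonal subband; universal threshold.
          sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
          thresh = sigma * np.sqrt(2 * np.log(img.size))
          new_coeffs = [coeffs[0]] + [
              tuple(pywt.threshold(c, thresh, mode="soft") for c in detail)
              for detail in coeffs[1:]
          ]
          return pywt.waverec2(new_coeffs, wavelet)

      def psnr(reference, estimate):
          mse = np.mean((np.asarray(reference, float) - estimate) ** 2)
          return 10 * np.log10(255.0 ** 2 / mse)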

  17. Fetal MRI: A Technical Update with Educational Aspirations

    PubMed Central

    Gholipour, Ali; Estroff, Judith A.; Barnewolt, Carol E.; Robertson, Richard L.; Grant, P. Ellen; Gagoski, Borjan; Warfield, Simon K.; Afacan, Onur; Connolly, Susan A.; Neil, Jeffrey J.; Wolfberg, Adam; Mulkern, Robert V.

    2015-01-01

    Fetal magnetic resonance imaging (MRI) examinations have become well-established procedures at many institutions and can serve as useful adjuncts to ultrasound (US) exams when diagnostic doubts remain after US. Due to fetal motion, however, fetal MRI exams are challenging and require the MR scanner to be used in a somewhat different mode than that employed for more routine clinical studies. Herein we review the techniques most commonly used, and those that are available, for fetal MRI with an emphasis on the physics of the techniques and how to deploy them to improve success rates for fetal MRI exams. By far the most common technique employed is single-shot T2-weighted imaging due to its excellent tissue contrast and relative immunity to fetal motion. Despite the significant challenges involved, however, many of the other techniques commonly employed in conventional neuro- and body MRI such as T1 and T2*-weighted imaging, diffusion and perfusion weighted imaging, as well as spectroscopic methods remain of interest for fetal MR applications. An effort to understand the strengths and limitations of these basic methods within the context of fetal MRI is made in order to optimize their use and facilitate implementation of technical improvements for the further development of fetal MR imaging, both in acquisition and post-processing strategies. PMID:26225129

  18. Deep Learning for Low-Textured Image Matching

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.; Fedorenko, V. V.; Fomin, N. A.

    2018-05-01

    Low-textured objects pose challenges for automatic 3D model reconstruction. Such objects are common in archeological applications of photogrammetry. Most common feature point descriptors fail to match local patches in featureless regions of an object. Hence, automatic documentation of the archeological process using Structure from Motion (SfM) methods is challenging. Nevertheless, such documentation is possible with the aid of a human operator. Deep learning-based descriptors have recently outperformed most common feature point descriptors. This paper is focused on the development of a new Wide Image Zone Adaptive Robust feature Descriptor (WIZARD) based on deep learning. We use a convolutional auto-encoder to compress discriminative features of a local patch into a descriptor code. We build a codebook to perform point matching on multiple images. The matching is performed using a nearest neighbor search and a modified voting algorithm. We present a new "Multi-view Amphora" (Amphora) dataset for the evaluation of point matching algorithms. The dataset includes images of an Ancient Greek vase found on the Taman Peninsula in Southern Russia. The dataset provides color images, a ground truth 3D model, and a ground truth optical flow. We evaluated the WIZARD descriptor on the "Amphora" dataset to show that it outperforms the SIFT and SURF descriptors on complex patch pairs.

  19. 78 FR 17233 - Notice of Opportunity To File Amicus Briefs

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-20

    .... Any commonly-used word processing format or PDF format is acceptable; text formats are preferable to image formats. Briefs may also be filed with the Office of the Clerk of the Board, Merit Systems...

  20. Multisensor data fusion across time and space

    NASA Astrophysics Data System (ADS)

    Villeneuve, Pierre V.; Beaven, Scott G.; Reed, Robert A.

    2014-06-01

    Field measurement campaigns typically deploy numerous sensors having different sampling characteristics in the spatial, temporal, and spectral domains. Data analysis and exploitation are made more difficult and time consuming when the sample data grids of different sensors do not align. This report summarizes our recent effort to demonstrate the feasibility of a processing chain capable of "fusing" image data from multiple independent and asynchronous sensors into a form amenable to analysis and exploitation using commercially available tools. Two important technical issues were addressed in this work: 1) image spatial registration onto a common pixel grid, and 2) image temporal interpolation onto a common time base. The first step leverages existing image matching and registration algorithms. The second step relies upon a new and innovative use of optical flow algorithms to perform accurate temporal upsampling of slower frame rate imagery. Optical flow field vectors were first derived from high-frame-rate, high-resolution imagery, and then used as a basis for temporal upsampling of the slower frame rate sensor's imagery. Optical flow field values are computed using a multi-scale image pyramid, thus allowing for more extreme object motion. This involves preprocessing imagery to varying resolution scales and initializing new vector flow estimates using those from the previous coarser-resolution image. Overall performance of this processing chain is demonstrated using sample data involving complex motion observed by multiple sensors mounted to the same base, ranging from a high-speed visible camera to a coarser-resolution LWIR camera.
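
    A minimal sketch of the flow-based temporal upsampling step, using OpenCV's Farneback estimator (which, like the approach described, computes flow over an image pyramid); the backward warp is a common first-order approximation, and all parameter values are assumptions:

      # Sketch of temporal upsampling: estimate dense optical flow between two
      # slow-sensor grayscale frames and warp one of them to intermediate time t.
      import cv2
      import numpy as np

      def interpolate_frame(frame0, frame1, t=0.5):
          flow = cv2.calcOpticalFlowFarneback(frame0, frame1, None,
                                              pyr_scale=0.5, levels=4,
                                              winsize=21, iterations=3,
                                              poly_n=5, poly_sigma=1.2, flags=0)
          h, w = frame0.shape
          grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
          # First-order approximation: pull pixels from frame0 along t * flow.
          map_x = (grid_x - t * flow[..., 0]).astype(np.float32)
          map_y = (grid_y - t * flow[..., 1]).astype(np.float32)
          return cv2.remap(frame0, map_x, map_y, interpolation=cv2.INTER_LINEAR)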

  1. Multipathing Via Three Parameter Common Image Gathers (CIGs) From Reverse Time Migration

    NASA Astrophysics Data System (ADS)

    Ostadhassan, M.; Zhang, X.

    2015-12-01

    A noteworthy problem for seismic exploration is the effect of multipathing (both wanted and unwanted) caused by complex subsurface structures. We show that reverse time migration (RTM) combined with a unified, systematic three-parameter framework that flexibly handles multipathing can be accomplished by adding one more dimension (image time) to the angle domain common image gather (ADCIG) data. RTM is widely used to generate prestack depth migration images. When using the cross-correlation image condition in 2D prestack RTM, the usual practice is to sum over all the migration time steps. Thus all possible wave types and paths automatically contribute to the resulting image, including destructive wave interferences, phase shifts, and other distortions. One reason is that multipath (prismatic wave) contributions are not properly sorted and mapped in the ADCIGs. Also, multipath arrivals usually have different instantaneous attributes (amplitude, phase and frequency), and if not separated, the amplitudes and phases in the final prestack image will not stack coherently across sources. A prismatic path satisfies an image time for its unique path; Cavalca and Lailly (2005) show that RTM images with multipaths can provide more complete target information in complex geology, as multipaths usually have different incident angles and amplitudes compared to primary reflections. If the image time slices within a cross-correlation common-source migration are saved for each image time, this three-parameter (incident angle, depth, image time) volume can be post-processed to generate separate, or composite, images of any desired subset of the migrated data. Images can be displayed for primary contributions, any combination of primary and multipath contributions (with or without artifacts), or various projections, including the conventional ADCIG (angle vs depth) plane. Examples show that signal from the true structure can be separated from artifacts caused by multiple arrivals when they have different image times. This improves the quality of images and benefits migration velocity analysis (MVA) and amplitude variation with angle (AVA) inversion.
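
    In symbols (a sketch in generic wavefield notation, not the authors' exact formulation): the conventional cross-correlation imaging condition sums the zero-lag correlation of source and receiver wavefields over all migration time steps,

      I(\mathbf{x}) = \sum_{s} \sum_{t} S_s(\mathbf{x}, t) \, R_s(\mathbf{x}, t),

    whereas the three-parameter gather retains each image-time slice, binned by incidence angle \theta,

      I(\mathbf{x}, \theta, \tau) = \sum_{s} S_s(\mathbf{x}, \tau) \, R_s(\mathbf{x}, \tau) \big|_{\theta},

    so that summing over \tau and stacking over \theta recovers the conventional image, while keeping \tau separate allows multipath contributions with distinct image times to be isolated or excluded.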

  2. Luminescent sensing and imaging of oxygen: Fierce competition to the Clark electrode

    PubMed Central

    2015-01-01

    Luminescence‐based sensing schemes for oxygen have experienced a fast growth and are in the process of replacing the Clark electrode in many fields. Unlike electrodes, sensing is not limited to point measurements via fiber optic microsensors, but includes additional features such as planar sensing, imaging, and intracellular assays using nanosized sensor particles. In this essay, I review and discuss the essentials of (i) common solid‐state sensor approaches based on the use of luminescent indicator dyes and host polymers; (ii) fiber optic and planar sensing schemes; (iii) nanoparticle‐based intracellular sensing; and (iv) common spectroscopies. Optical sensors are also capable of multiple simultaneous sensing (such as O2 and temperature). Sensors for O2 are produced nowadays in large quantities in industry. Fields of application include sensing of O2 in plant and animal physiology, in clinical chemistry, in marine sciences, in the chemical industry and in process biotechnology. PMID:26113255

  3. Use of a web-based image reporting and tracking system for assessing abdominal imaging examination quality issues in a single practice.

    PubMed

    Rosenkrantz, Andrew B; Johnson, Evan; Sanger, Joseph J

    2015-10-01

    This article presents our local experience in the implementation of a real-time web-based system for reporting and tracking quality issues relating to abdominal imaging examinations. This system allows radiologists to electronically submit examination quality issues during clinical readouts. The submitted information is e-mailed to a designate for the given modality for further follow-up; the designate may subsequently enter text describing their response or action taken, which is e-mailed back to the radiologist. Review of 558 entries over a 6-year period demonstrated documentation of a broad range of examination quality issues, including specific issues relating to protocol deviation, post-processing errors, positioning errors, artifacts, and IT concerns. The most common issues varied among US, CT, MRI, radiography, and fluoroscopy. In addition, the most common issues resulting in a patient recall for repeat imaging (generally related to protocol deviation in MRI and US) were identified. In addition to submitting quality problems, radiologists also commonly used the tool to provide recognition of a well-performed examination. An electronic log of actions taken in response to radiologists' submissions indicated that both positive and negative feedback were commonly communicated to the performing technologist. Information generated using the tool can be used to guide subsequent quality improvement initiatives within a practice, including continued protocol standardization as well as education of technologists in the optimization of abdominal imaging examinations.

  4. Referential processing: reciprocity and correlates of naming and imaging.

    PubMed

    Paivio, A; Clark, J M; Digdon, N; Bons, T

    1989-03-01

    To shed light on the referential processes that underlie mental translation between representations of objects and words, we studied the reciprocity and determinants of naming and imaging reaction times (RT). Ninety-six subjects pressed a key when they had covertly named 248 pictures or imaged to their names. Mean naming and imagery RTs for each item were correlated with one another, and with properties of names, images, and their interconnections suggested by prior research and dual coding theory. Imagery RTs correlated .56 (df = 246) with manual naming RTs and .58 with voicekey naming RTs from prior studies. A factor analysis of the RTs and of 31 item characteristics revealed 7 dimensions. Imagery and naming RTs loaded on a common referential factor that included variables related to both directions of processing (e.g., missing names and missing images). Naming RTs also loaded on a nonverbal-to-verbal factor that included such variables as number of different names, whereas imagery RTs loaded on a verbal-to-nonverbal factor that included such variables as rated consistency of imagery. The other factors were verbal familiarity, verbal complexity, nonverbal familiarity, and nonverbal complexity. The findings confirm the reciprocity of imaging and naming, and their relation to constructs associated with distinct phases of referential processing.

  5. Cryo-EM Structure Determination Using Segmented Helical Image Reconstruction.

    PubMed

    Fromm, S A; Sachse, C

    2016-01-01

    Treating helices as single-particle-like segments followed by helical image reconstruction has become the method of choice for high-resolution structure determination of well-ordered helical viruses as well as flexible filaments. In this review, we will illustrate how the combination of latest hardware developments with optimized image processing routines have led to a series of near-atomic resolution structures of helical assemblies. Originally, the treatment of helices as a sequence of segments followed by Fourier-Bessel reconstruction revealed the potential to determine near-atomic resolution structures from helical specimens. In the meantime, real-space image processing of helices in a stack of single particles was developed and enabled the structure determination of specimens that resisted classical Fourier helical reconstruction and also facilitated high-resolution structure determination. Despite the progress in real-space analysis, the combination of Fourier and real-space processing is still commonly used to better estimate the symmetry parameters as the imposition of the correct helical symmetry is essential for high-resolution structure determination. Recent hardware advancement by the introduction of direct electron detectors has significantly enhanced the image quality and together with improved image processing procedures has made segmented helical reconstruction a very productive cryo-EM structure determination method. © 2016 Elsevier Inc. All rights reserved.

  6. Adaptive windowing in contrast-enhanced intravascular ultrasound imaging

    PubMed Central

    Lindsey, Brooks D.; Martin, K. Heath; Jiang, Xiaoning; Dayton, Paul A.

    2016-01-01

    Intravascular ultrasound (IVUS) is one of the most commonly-used interventional imaging techniques and has seen recent innovations which attempt to characterize the risk posed by atherosclerotic plaques. One such development is the use of microbubble contrast agents to image vasa vasorum, fine vessels which supply oxygen and nutrients to the walls of coronary arteries and typically have diameters less than 200 µm. The degree of vasa vasorum neovascularization within plaques is positively correlated with plaque vulnerability. Having recently presented a prototype dual-frequency transducer for contrast agent-specific intravascular imaging, here we describe signal processing approaches based on minimum variance (MV) beamforming and the phase coherence factor (PCF) for improving the spatial resolution and contrast-to-tissue ratio (CTR) in IVUS imaging. These approaches are examined through simulations, phantom studies, ex vivo studies in porcine arteries, and in vivo studies in chicken embryos. In phantom studies, PCF processing improved CTR by a mean of 4.2 dB, while combined MV and PCF processing improved spatial resolution by 41.7%. Improvements of 2.2 dB in CTR and 37.2% in resolution were observed in vivo. Applying these processing strategies can enhance image quality in conventional B-mode IVUS or in contrast-enhanced IVUS, where signal-to-noise ratio is relatively low and resolution is at a premium. PMID:27161022
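
    A minimal sketch of phase-coherence-factor weighting applied to delayed per-channel data before summation, following the standard-deviation formulation of Camacho et al.; the data layout and gamma value are assumptions, and the phase-wrap handling of the full method is omitted:

      # Sketch of phase-coherence-factor (PCF) weighted beamforming.
      # channels: 2-D array (n_channels x n_samples) of delayed RF signals.
      import numpy as np
      from scipy.signal import hilbert

      def pcf_beamform(channels, gamma=1.0):
          phases = np.angle(hilbert(channels, axis=1))
          sigma0 = np.pi / np.sqrt(3.0)  # std. dev. of a uniform phase spread
          # Incoherent (noisy) samples get weights near 0, coherent ones near 1;
          # the full method also corrects for phase wrap-around at +/- pi.
          pcf = np.clip(1.0 - gamma * np.std(phases, axis=0) / sigma0, 0.0, 1.0)
          return pcf * channels.sum(axis=0)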

  7. Facial fluid synthesis for assessment of acne vulgaris using luminescent visualization system through optical imaging and integration of fluorescent imaging system

    NASA Astrophysics Data System (ADS)

    Balbin, Jessie R.; Dela Cruz, Jennifer C.; Camba, Clarisse O.; Gozo, Angelo D.; Jimenez, Sheena Mariz B.; Tribiana, Aivje C.

    2017-06-01

    Acne vulgaris, commonly called acne, is a skin problem that occurs when oil and dead skin cells clog a person's pores, because hormonal changes make the skin oilier. The problem is that people have no real assessment of the sensitivity of their skin in terms of the fluid development on their faces that tends to produce acne vulgaris, and thus suffer more complications. This research aims to assess acne vulgaris using a luminescent visualization system through optical imaging and the integration of image processing algorithms. Specifically, this research aims to design a prototype for facial fluid analysis using a luminescent visualization system through optical imaging and the integration of a fluorescent imaging system, and to classify the different facial fluids present in each person. Throughout the process, some structures and layers of the face are excluded, leaving only a mapped facial structure with acne regions. Facial fluid regions are distinguished from the acne regions as they are characterized differently.

  8. Advanced Imaging Methods for Long-Baseline Optical Interferometry

    NASA Astrophysics Data System (ADS)

    Le Besnerais, G.; Lacour, S.; Mugnier, L. M.; Thiebaut, E.; Perrin, G.; Meimon, S.

    2008-11-01

    We address the data processing methods needed for imaging with a long baseline optical interferometer. We first describe parametric reconstruction approaches and adopt a general formulation of nonparametric image reconstruction as the solution of a constrained optimization problem. Within this framework, we present two recent reconstruction methods, Mira and Wisard, representative of the two generic approaches for dealing with the missing phase information. Mira is based on an implicit approach and a direct optimization of a Bayesian criterion while Wisard adopts a self-calibration approach and an alternate minimization scheme inspired from radio-astronomy. Both methods can handle various regularization criteria. We review commonly used regularization terms and introduce an original quadratic regularization called "soft support constraint" that favors the object compactness. It yields images of quality comparable to nonquadratic regularizations on the synthetic data we have processed. We then perform image reconstructions, both parametric and nonparametric, on astronomical data from the IOTA interferometer, and discuss the respective roles of parametric and nonparametric approaches for optical interferometric imaging.

  9. Geometrically robust image watermarking by sector-shaped partitioning of geometric-invariant regions.

    PubMed

    Tian, Huawei; Zhao, Yao; Ni, Rongrong; Cao, Gang

    2009-11-23

    In a feature-based geometrically robust watermarking system, it is a challenging task to detect geometric-invariant regions (GIRs) which can survive a broad range of image processing operations. Instead of commonly used Harris detector or Mexican hat wavelet method, a more robust corner detector named multi-scale curvature product (MSCP) is adopted to extract salient features in this paper. Based on such features, disk-like GIRs are found, which consists of three steps. First, robust edge contours are extracted. Then, MSCP is utilized to detect the centers for GIRs. Third, the characteristic scale selection is performed to calculate the radius of each GIR. A novel sector-shaped partitioning method for the GIRs is designed, which can divide a GIR into several sector discs with the help of the most important corner (MIC). The watermark message is then embedded bit by bit in each sector by using Quantization Index Modulation (QIM). The GIRs and the divided sector discs are invariant to geometric transforms, so the watermarking method inherently has high robustness against geometric attacks. Experimental results show that the scheme has a better robustness against various image processing operations including common processing attacks, affine transforms, cropping, and random bending attack (RBA) than the previous approaches.
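
    The bit-by-bit QIM embedding mentioned above can be sketched for a scalar feature value as follows; the quantization step delta is an assumption:

      # Sketch of scalar Quantization Index Modulation (QIM): embed a bit by
      # snapping the value onto one of two interleaved quantizer lattices, and
      # decode by nearest-lattice distance.
      import numpy as np

      def qim_embed(value, bit, delta=4.0):
          offset = 0.0 if bit == 0 else delta / 2.0
          return delta * np.round((value - offset) / delta) + offset

      def qim_decode(value, delta=4.0):
          d0 = abs(value - qim_embed(value, 0, delta))
          d1 = abs(value - qim_embed(value, 1, delta))
          return 0 if d0 <= d1 else 1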

  10. Imaging of non-neoplastic duodenal diseases. A pictorial review with emphasis on MDCT.

    PubMed

    Juanpere, Sergi; Valls, Laia; Serra, Isabel; Osorio, Margarita; Gelabert, Arantxa; Maroto, Albert; Pedraza, Salvador

    2018-04-01

    A wide spectrum of abnormalities can affect the duodenum, ranging from congenital anomalies to traumatic and inflammatory entities. The location of the duodenum and its close relationship with other organs make it easy to miss or misinterpret duodenal abnormalities on cross-sectional imaging. Endoscopy has largely supplanted fluoroscopy for the assessment of the duodenal lumen. Cross-sectional imaging modalities, especially multidetector computed tomography (MDCT) and magnetic resonance imaging (MRI), enable comprehensive assessment of the duodenum and surrounding viscera. Although overlapping imaging findings can make it difficult to differentiate between some lesions, characteristic features may suggest a specific diagnosis in some cases. Familiarity with pathologic conditions that can affect the duodenum and with the optimal MDCT and MRI techniques for studying them can help ensure diagnostic accuracy in duodenal diseases. The goal of this pictorial review is to illustrate the most common non-malignant duodenal processes. Special emphasis is placed on MDCT features and their endoscopic correlation as well as on avoiding the most common pitfalls in the evaluation of the duodenum. • Cross-sectional imaging modalities enable comprehensive assessment of duodenum diseases. • Causes of duodenal obstruction include intraluminal masses, inflammation and hematomas. • Distinguishing between tumour and groove pancreatitis can be challenging by cross-sectional imaging. • Infectious diseases of the duodenum are difficult to diagnose, as the findings are not specific. • The most common cause of nonvariceal upper gastrointestinal bleeding is peptic ulcer disease.

  11. An embedded processor for real-time atmospheric compensation

    NASA Astrophysics Data System (ADS)

    Bodnar, Michael R.; Curt, Petersen F.; Ortiz, Fernando E.; Carrano, Carmen J.; Kelmelis, Eric J.

    2009-05-01

    Imaging over long distances is crucial to a number of defense and security applications, such as homeland security and launch tracking. However, the image quality obtained from current long-range optical systems can be severely degraded by the turbulent atmosphere in the path between the region under observation and the imager. While this obscured image information can be recovered using post-processing techniques, the computational complexity of such approaches has prohibited deployment in real-time scenarios. To overcome this limitation, we have coupled a state-of-the-art atmospheric compensation algorithm, the average-bispectrum speckle method, with a powerful FPGA-based embedded processing board. The end result is a lightweight, low-power image processing system that improves the quality of long-range imagery in real time, and uses modular video I/O to provide a flexible interface to the most common digital and analog video transport methods. By leveraging the custom, reconfigurable nature of the FPGA, a 20x speed increase over a modern desktop PC was achieved in a form factor that is compact, low-power, and field-deployable.

  12. sTools - a data reduction pipeline for the GREGOR Fabry-Pérot Interferometer and the High-resolution Fast Imager at the GREGOR solar telescope

    NASA Astrophysics Data System (ADS)

    Kuckein, C.; Denker, C.; Verma, M.; Balthasar, H.; González Manrique, S. J.; Louis, R. E.; Diercke, A.

    2017-10-01

    A huge amount of data has been acquired with the GREGOR Fabry-Pérot Interferometer (GFPI), large-format facility cameras, and, since 2016, the High-resolution Fast Imager (HiFI). These data are processed in standardized procedures with the aim of providing science-ready data for the solar physics community. For this purpose, we have developed a user-friendly data reduction pipeline called "sTools", based on the Interactive Data Language (IDL) and licensed under a Creative Commons license. The pipeline delivers reduced and image-reconstructed data with a minimum of user interaction. Furthermore, quick-look data are generated as well as a webpage with an overview of the observations and their statistics. All the processed data are stored online at the GREGOR GFPI and HiFI data archive of the Leibniz Institute for Astrophysics Potsdam (AIP). The principles of the pipeline are presented together with selected high-resolution spectral scans and images processed with sTools.

  13. Inselect: Automating the Digitization of Natural History Collections

    PubMed Central

    Hudson, Lawrence N.; Blagoderov, Vladimir; Heaton, Alice; Holtzhausen, Pieter; Livermore, Laurence; Price, Benjamin W.; van der Walt, Stéfan; Smith, Vincent S.

    2015-01-01

    The world’s natural history collections constitute an enormous evidence base for scientific research on the natural world. To facilitate these studies and improve access to collections, many organisations are embarking on major programmes of digitization. This requires automated approaches to mass-digitization that support rapid imaging of specimens and associated data capture, in order to process the tens of millions of specimens common to most natural history collections. In this paper we present Inselect—a modular, easy-to-use, cross-platform suite of open-source software tools that supports the semi-automated processing of specimen images generated by natural history digitization programmes. The software is made up of a Windows, Mac OS X, and Linux desktop application, together with command-line tools that are designed for unattended operation on batches of images. Blending image visualisation algorithms that automatically recognise specimens together with workflows to support post-processing tasks such as barcode reading, label transcription and metadata capture, Inselect fills a critical gap to increase the rate of specimen digitization. PMID:26599208

  14. Inselect: Automating the Digitization of Natural History Collections.

    PubMed

    Hudson, Lawrence N; Blagoderov, Vladimir; Heaton, Alice; Holtzhausen, Pieter; Livermore, Laurence; Price, Benjamin W; van der Walt, Stéfan; Smith, Vincent S

    2015-01-01

    The world's natural history collections constitute an enormous evidence base for scientific research on the natural world. To facilitate these studies and improve access to collections, many organisations are embarking on major programmes of digitization. This requires automated approaches to mass-digitization that support rapid imaging of specimens and associated data capture, in order to process the tens of millions of specimens common to most natural history collections. In this paper we present Inselect, a modular, easy-to-use, cross-platform suite of open-source software tools that supports the semi-automated processing of specimen images generated by natural history digitization programmes. The software is made up of a Windows, Mac OS X, and Linux desktop application, together with command-line tools that are designed for unattended operation on batches of images. Blending image visualisation algorithms that automatically recognise specimens together with workflows to support post-processing tasks such as barcode reading, label transcription and metadata capture, Inselect fills a critical gap to increase the rate of specimen digitization.

  15. GPU-Based High-performance Imaging for Mingantu Spectral RadioHeliograph

    NASA Astrophysics Data System (ADS)

    Mei, Ying; Wang, Feng; Wang, Wei; Chen, Linjie; Liu, Yingbo; Deng, Hui; Dai, Wei; Liu, Cuiyin; Yan, Yihua

    2018-01-01

    As a dedicated solar radio interferometer, the MingantU SpEctral RadioHeliograph (MUSER) generates massive observational data in the frequency range of 400 MHz-15 GHz. High-performance imaging forms an important aspect of MUSER's massive data processing requirements. In this study, we implement a practical high-performance imaging pipeline for MUSER data processing. First, the specifications of the MUSER are introduced and its imaging requirements are analyzed. Referring to the most commonly used radio astronomy software, such as CASA and MIRIAD, we then implement a high-performance imaging pipeline based on Graphics Processing Unit technology with respect to the current operational status of the MUSER. A series of critical algorithms and their pseudo codes, i.e., detection of the solar disk and sky brightness, automatic centering of the solar disk, and estimation of the number of iterations for CLEAN algorithms, are proposed in detail. The preliminary experimental results indicate that the proposed imaging approach significantly increases the processing performance of MUSER and generates high-quality images that can meet the requirements of MUSER data processing. Supported by the National Key Research and Development Program of China (2016YFE0100300), the Joint Research Fund in Astronomy (No. U1531132, U1631129, U1231205) under cooperative agreement between the National Natural Science Foundation of China (NSFC) and the Chinese Academy of Sciences (CAS), and the National Natural Science Foundation of China (Nos. 11403009 and 11463003).
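
    A minimal NumPy sketch of the Hogbom CLEAN loop whose iteration count the pipeline estimates; the actual implementation is GPU-based, and the gain, iteration limit, and wrap-around PSF shift here are simplifications:

      # Sketch of a Hogbom CLEAN iteration: repeatedly find the residual peak
      # and subtract a scaled, shifted point spread function (PSF).
      import numpy as np

      def hogbom_clean(dirty, psf, gain=0.1, niter=500, threshold=0.0):
          residual = dirty.astype(np.float64).copy()
          model = np.zeros_like(residual)
          cy, cx = np.unravel_index(np.argmax(psf), psf.shape)  # PSF peak
          for _ in range(niter):
              y, x = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
              peak = residual[y, x]
              if abs(peak) <= threshold:
                  break
              model[y, x] += gain * peak
              # np.roll wraps at the edges; production code would crop instead.
              shifted = np.roll(np.roll(psf, y - cy, axis=0), x - cx, axis=1)
              residual -= gain * peak * shifted
          return model, residual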

  16. GUIs in the MIDAS environment

    NASA Technical Reports Server (NTRS)

    Ballester, P.

    1992-01-01

    MIDAS (Munich Image Data Analysis System) is the image processing system developed at ESO for astronomical data reduction. MIDAS is used for off-line data reduction at ESO and at many astronomical institutes all over Europe. In addition to a set of general commands for processing and analyzing images, catalogs, graphics and tables, MIDAS includes specialized packages dedicated to astronomical applications or to specific ESO instruments. Several graphical interfaces are available in the MIDAS environment: XHelp provides an interactive help facility, and XLong and XEchelle enable data reduction of long-slit and echelle spectra. GUI builders facilitate the development of interfaces. All ESO interfaces comply with the ESO User Interfaces Common Conventions, which secures an identical look and feel for telescope operations, data analysis, and archives.

  17. A Multi-Sensor Aerogeophysical Study of Afghanistan

    DTIC Science & Technology

    2007-01-01

    magnetometer coupled with an Applied Physics 539 3-axis fluxgate magnetometer for compensation of the aircraft field; • an Applanix DSS 301 digital...survey. DATA COLLECTION AND PROCESSING. Photogrammetry: more than 65,000 high-resolution photogrammetric images were collected using an Applanix Digital...Sensor suite: HSI, L-band polarimetric imaging radar, KGPS, dual gravity meters, common sensor bomb-bay pallet, Applanix DSS camera, magnetometer, gravity.

  18. Automated System for Early Breast Cancer Detection in Mammograms

    NASA Technical Reports Server (NTRS)

    Bankman, Isaac N.; Kim, Dong W.; Christens-Barry, William A.; Weinberg, Irving N.; Gatewood, Olga B.; Brody, William R.

    1993-01-01

    The increasing demand on mammographic screening for early breast cancer detection, and the subtlety of early breast cancer signs on mammograms, suggest an automated image processing system that can serve as a diagnostic aid in radiology clinics. We present a fully automated algorithm for detecting clusters of microcalcifications that are the most common signs of early, potentially curable breast cancer. By using the contour map of the mammogram, the algorithm circumvents some of the difficulties encountered with standard image processing methods. The clinical implementation of an automated instrument based on this algorithm is also discussed.

  19. SPIDER image processing for single-particle reconstruction of biological macromolecules from electron micrographs

    PubMed Central

    Shaikh, Tanvir R; Gao, Haixiao; Baxter, William T; Asturias, Francisco J; Boisset, Nicolas; Leith, Ardean; Frank, Joachim

    2009-01-01

    This protocol describes the reconstruction of biological molecules from the electron micrographs of single particles. Computation here is performed using the image-processing software SPIDER and can be managed using a graphical user interface, termed the SPIDER Reconstruction Engine. Two approaches are described to obtain an initial reconstruction: random-conical tilt and common lines. Once an existing model is available, reference-based alignment can be used, a procedure that can be iterated. Also described is supervised classification, a method to look for homogeneous subsets when multiple known conformations of the molecule may coexist. PMID:19180078

  20. Constraint processing in our extensible language for cooperative imaging system

    NASA Astrophysics Data System (ADS)

    Aoki, Minoru; Murao, Yo; Enomoto, Hajime

    1996-02-01

    The extensible WELL (Window-based Elaboration Language) has been developed using the concept of a common platform, where both client and server can communicate with each other with support from a communication manager. This extensible language is based on an object-oriented design that introduces constraint processing. Any kind of service in the extensible language, including imaging, is controlled by constraints. Interactive functions between client and server are extended by introducing agent functions, including a request-respond relation. Necessary service integrations are satisfied with cooperative processes using constraints. Constraints are treated similarly to data, because the system should have flexibility in the execution of many kinds of services. A similar control process is defined using intensional logic. There are two kinds of constraints, temporal and modal constraints. Rendered as constraints, the predicate format, as the relation between attribute values, can warrant the validity of entities as data. As an imaging example, a processing procedure of interaction between multiple objects is shown as an image application for the extensible system. This paper describes how the procedure proceeds in the system, and how the constraints work for generating moving pictures.

  1. Spectral imaging toolbox: segmentation, hyperstack reconstruction, and batch processing of spectral images for the determination of cell and model membrane lipid order.

    PubMed

    Aron, Miles; Browning, Richard; Carugo, Dario; Sezgin, Erdinc; Bernardino de la Serna, Jorge; Eggeling, Christian; Stride, Eleanor

    2017-05-12

    Spectral imaging with polarity-sensitive fluorescent probes enables the quantification of cell and model membrane physical properties, including local hydration, fluidity, and lateral lipid packing, usually characterized by the generalized polarization (GP) parameter. With the development of commercial microscopes equipped with spectral detectors, spectral imaging has become a convenient and powerful technique for measuring GP and other membrane properties. The existing tools for spectral image processing, however, are insufficient for processing the large data sets afforded by this technological advancement, and are unsuitable for processing images acquired with rapidly internalized fluorescent probes. Here we present a MATLAB spectral imaging toolbox with the aim of overcoming these limitations. In addition to common operations, such as the calculation of distributions of GP values, generation of pseudo-colored GP maps, and spectral analysis, a key highlight of this tool is reliable membrane segmentation for probes that are rapidly internalized. Furthermore, handling for hyperstacks, 3D reconstruction and batch processing facilitates analysis of data sets generated by time series, z-stack, and area scan microscope operations. Finally, the object size distribution is determined, which can provide insight into the mechanisms underlying changes in membrane properties and is desirable for, e.g., studies involving model membranes and surfactant-coated particles. Analysis is demonstrated for cell membranes, cell-derived vesicles, model membranes, and microbubbles with the environmentally-sensitive probes Laurdan, carboxyl-modified Laurdan (C-Laurdan), Di-4-ANEPPDHQ, and Di-4-AN(F)EPPTEA (FE), for quantification of the local lateral density of lipids or lipid packing. The Spectral Imaging Toolbox is a powerful tool for the segmentation and processing of large spectral imaging datasets, with a reliable method for membrane segmentation and no programming ability required. The Spectral Imaging Toolbox can be downloaded from https://uk.mathworks.com/matlabcentral/fileexchange/62617-spectral-imaging-toolbox .
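
    A minimal sketch of the per-pixel generalized polarization computation such a toolbox performs, assuming two spectral channels integrated over ordered (~440 nm) and disordered (~490 nm) emission bands of a Laurdan-type probe:

      # Sketch of a per-pixel generalized polarization (GP) map from two
      # spectral channels; GP ranges from -1 (disordered) to +1 (ordered).
      import numpy as np

      def gp_map(i_ordered, i_disordered, eps=1e-9):
          i_o = i_ordered.astype(np.float64)
          i_d = i_disordered.astype(np.float64)
          return (i_o - i_d) / (i_o + i_d + eps)

      # Distribution of GP values over an image:
      # hist, edges = np.histogram(gp_map(ch440, ch490), bins=100, range=(-1, 1))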

  2. Automated Processing of Imaging Data through Multi-tiered Classification of Biological Structures Illustrated Using Caenorhabditis elegans.

    PubMed

    Zhan, Mei; Crane, Matthew M; Entchev, Eugeni V; Caballero, Antonio; Fernandes de Abreu, Diana Andrea; Ch'ng, QueeLim; Lu, Hang

    2015-04-01

    Quantitative imaging has become a vital technique in biological discovery and clinical diagnostics; a plethora of tools have recently been developed to enable new and accelerated forms of biological investigation. Increasingly, the capacity for high-throughput experimentation provided by new imaging modalities, contrast techniques, microscopy tools, microfluidics and computer controlled systems shifts the experimental bottleneck from the level of physical manipulation and raw data collection to automated recognition and data processing. Yet, despite their broad importance, image analysis solutions to address these needs have been narrowly tailored. Here, we present a generalizable formulation for autonomous identification of specific biological structures that is applicable for many problems. The process flow architecture we present here utilizes standard image processing techniques and the multi-tiered application of classification models such as support vector machines (SVM). These low-level functions are readily available in a large array of image processing software packages and programming languages. Our framework is thus both easy to implement at the modular level and provides specific high-level architecture to guide the solution of more complicated image-processing problems. We demonstrate the utility of the classification routine by developing two specific classifiers as a toolset for automation and cell identification in the model organism Caenorhabditis elegans. To serve a common need for automated high-resolution imaging and behavior applications in the C. elegans research community, we contribute a ready-to-use classifier for the identification of the head of the animal under bright field imaging. Furthermore, we extend our framework to address the pervasive problem of cell-specific identification under fluorescent imaging, which is critical for biological investigation in multicellular organisms or tissues. Using these examples as a guide, we envision the broad utility of the framework for diverse problems across different length scales and imaging methods.
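
    A minimal scikit-learn sketch of the multi-tiered idea: a coarse SVM screens candidate regions and a second SVM refines the positives. The feature extractor, training data, and two-tier split are placeholders, not the authors' classifiers:

      # Sketch of two-tier SVM classification of candidate image regions.
      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      def make_tier():
          """One tier = feature scaling + an RBF support vector classifier."""
          return make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))

      def classify(regions, features, tier1, tier2):
          """tier1/tier2 are tiers previously fitted on labelled examples."""
          X = np.asarray([features(r) for r in regions])
          kept = [r for r, p in zip(regions, tier1.predict(X)) if p == 1]
          if not kept:
              return []
          X2 = np.asarray([features(r) for r in kept])
          return [r for r, p in zip(kept, tier2.predict(X2)) if p == 1]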

  3. Imaging of Intracranial Pressure Disorders.

    PubMed

    Holbrook, John; Saindane, Amit M

    2017-03-01

    Intracranial pressure (ICP) is the pressure inside the bony calvarium and can be affected by a variety of processes, such as intracranial masses and edema, obstruction or leakage of cerebrospinal fluid, and obstruction of venous outflow. This review focuses on the imaging of 2 important but less well understood ICP disorders: idiopathic intracranial hypertension and spontaneous intracranial hypotension. Both of these ICP disorders have salient imaging findings that are important to recognize to help prevent their misdiagnosis from other common neurological disorders. Copyright © 2017 by the Congress of Neurological Surgeons.

  4. Multispectral image enhancement processing for microsat-borne imager

    NASA Astrophysics Data System (ADS)

    Sun, Jianying; Tan, Zheng; Lv, Qunbo; Pei, Linlin

    2017-10-01

    With the rapid development of remote sensing imaging technology, the microsatellite, a kind of tiny spacecraft, has appeared during the past few years. A good many studies have contributed to miniaturizing satellites for imaging purposes. Generally speaking, microsatellites weigh less than 100 kilograms, even less than 50 kilograms, making them slightly larger or smaller than common miniature refrigerators. However, the optical system design is hard to perfect because of satellite room and weight limitations. In most cases, the unprocessed data captured by the imager on a microsatellite cannot meet the application need. Spatial resolution is the key problem. As for remote sensing applications, the higher the spatial resolution of the images we gain, the wider the fields in which we can apply them. Consequently, how to utilize super resolution (SR) and image fusion to enhance the quality of imagery deserves study. Our team, the Key Laboratory of Computational Optical Imaging Technology, Academy of Opto-Electronics, is devoted to designing high-performance microsat-borne imagers and high-efficiency image processing algorithms. This paper presents a multispectral image enhancement framework for space-borne imagery, jointly applying pan-sharpening and super resolution techniques to deal with the spatial resolution shortcoming of microsatellites. We test the remote sensing images acquired by the CX6-02 satellite and report the SR performance. The experiments illustrate that the proposed approach provides high-quality images.

  5. The integration processing of the visual and auditory information in videos of real-world events: an ERP study.

    PubMed

    Liu, Baolin; Wang, Zhongning; Jin, Zhixing

    2009-09-11

    In real life, the human brain usually receives information through visual and auditory channels and processes the multisensory information, but studies on the integration processing of dynamic visual and auditory information are relatively few. In this paper, we designed an experiment in which common-scenario, real-world videos with matched and mismatched actions (images) and sounds were presented as stimuli, aiming to study the integration processing of synchronized visual and auditory information from videos of real-world events in the human brain using event-related potential (ERP) methods. Experimental results showed that videos with mismatched actions (images) and sounds elicited a larger P400 than videos with matched actions (images) and sounds. We believe that the P400 waveform might be related to the cognitive integration processing of mismatched multisensory information in the human brain. The results also indicated that synchronized multisensory information can interfere with each other, which influences the outcome of the cognitive integration processing.

  6. Comparison of remote sensing image processing techniques to identify tornado damage areas from Landsat TM data

    USGS Publications Warehouse

    Myint, S.W.; Yuan, M.; Cerveny, R.S.; Giri, C.P.

    2008-01-01

    Remote sensing techniques have been shown to be effective for large-scale damage surveys after a hazardous event, in both near-real-time and post-event analyses. This paper aims to compare the accuracy of common image processing techniques for detecting tornado damage tracks from Landsat TM data. We employed the direct change detection approach, using two sets of images acquired before and after the tornado event to produce a principal component composite image and a set of image difference bands. Techniques in the comparison include supervised classification, unsupervised classification, and an object-oriented classification approach with a nearest neighbor classifier. Accuracy assessment is based on the Kappa coefficient calculated from error matrices, which cross-tabulate correctly identified cells on the TM image against commission and omission errors in the result. Overall, the object-oriented approach exhibits the highest degree of accuracy in tornado damage detection. PCA and image differencing methods show comparable outcomes. While selected PCs can improve detection accuracy by 5 to 10%, the object-oriented approach performs significantly better, with 15-20% higher accuracy than the other two techniques. © 2008 by MDPI.
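
    Since the accuracy assessment rests on the Kappa coefficient computed from an error matrix, here is a minimal sketch of that computation; the 2x2 matrix values are hypothetical.

        import numpy as np

        def kappa(confusion):
            # Cohen's kappa from an error matrix (rows: mapped, cols: reference).
            n = confusion.sum()
            po = np.trace(confusion) / n                             # observed agreement
            pe = (confusion.sum(0) * confusion.sum(1)).sum() / n**2  # chance agreement
            return (po - pe) / (1 - pe)

        m = np.array([[86, 14],    # damaged cells classified as damaged / undamaged
                      [9, 91]])    # undamaged cells classified as damaged / undamaged
        print(round(kappa(m), 3))  # 0.77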

  7. Comparison of Remote Sensing Image Processing Techniques to Identify Tornado Damage Areas from Landsat TM Data

    PubMed Central

    Myint, Soe W.; Yuan, May; Cerveny, Randall S.; Giri, Chandra P.

    2008-01-01

    Remote sensing techniques have been shown to be effective for large-scale damage surveys after a hazardous event, in both near-real-time and post-event analyses. This paper aims to compare the accuracy of common image processing techniques for detecting tornado damage tracks from Landsat TM data. We employed the direct change detection approach, using two sets of images acquired before and after the tornado event to produce a principal component composite image and a set of image difference bands. Techniques in the comparison include supervised classification, unsupervised classification, and an object-oriented classification approach with a nearest neighbor classifier. Accuracy assessment is based on the Kappa coefficient calculated from error matrices, which cross-tabulate correctly identified cells on the TM image against commission and omission errors in the result. Overall, the object-oriented approach exhibits the highest degree of accuracy in tornado damage detection. PCA and image differencing methods show comparable outcomes. While selected PCs can improve detection accuracy by 5 to 10%, the object-oriented approach performs significantly better, with 15-20% higher accuracy than the other two techniques. PMID:27879757

  8. Integrating image processing and classification technology into automated polarizing film defect inspection

    NASA Astrophysics Data System (ADS)

    Kuo, Chung-Feng Jeffrey; Lai, Chun-Yu; Kao, Chih-Hsiang; Chiu, Chin-Hsun

    2018-05-01

    In order to improve the current manual inspection and classification process for polarizing film on production lines, this study proposes a high precision automated inspection and classification system for polarizing film, which is used for recognition and classification of four common defects: dent, foreign material, bright spot, and scratch. First, the median filter is used to remove the impulse noise in the defect image of polarizing film. The random noise in the background is smoothed by the improved anisotropic diffusion, while the edge detail of the defect region is sharpened. Next, the defect image is transformed by Fourier transform to the frequency domain, combined with a Butterworth high pass filter to sharpen the edge detail of the defect region, and brought back by inverse Fourier transform to the spatial domain to complete the image enhancement process. For image segmentation, the edge of the defect region is found by Canny edge detector, and then the complete defect region is obtained by two-stage morphology processing. For defect classification, the feature values, including maximum gray level, eccentricity, the contrast, and homogeneity of gray level co-occurrence matrix (GLCM) extracted from the images, are used as the input of the radial basis function neural network (RBFNN) and back-propagation neural network (BPNN) classifier, 96 defect images are then used as training samples, and 84 defect images are used as testing samples to validate the classification effect. The result shows that the classification accuracy by using RBFNN is 98.9%. Thus, our proposed system can be used by manufacturing companies for a higher yield rate and lower cost. The processing time of one single image is 2.57 seconds, thus meeting the practical application requirement of an industrial production line.
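
    As a sketch of one stage of the pipeline above, the snippet below applies a frequency-domain Butterworth high-pass filter to sharpen edge detail. The cutoff and order are assumed values, and the surrounding steps (median filtering, anisotropic diffusion, Canny segmentation, GLCM features, and the neural classifiers) are omitted.

        import numpy as np

        def butterworth_highpass(img, cutoff=30, order=2):
            # Build the high-pass transfer function over centered frequencies.
            h, w = img.shape
            u = np.arange(h) - h / 2
            v = np.arange(w) - w / 2
            D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
            H = 1.0 / (1.0 + (cutoff / (D + 1e-9)) ** (2 * order))
            # Filter in the frequency domain and return to the spatial domain.
            F = np.fft.fftshift(np.fft.fft2(img))
            return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))

        rng = np.random.default_rng(2)
        img = rng.random((256, 256))             # stand-in for a defect image
        print(butterworth_highpass(img).shape)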

  9. Imaging of Posttraumatic Arthritis, Avascular Necrosis, Septic Arthritis, Complex Regional Pain Syndrome, and Cancer Mimicking Arthritis.

    PubMed

    Rupasov, Andrey; Cain, Usa; Montoya, Simone; Blickman, Johan G

    2017-09-01

    This article focuses on the imaging of 5 discrete entities with a common end result of disability: posttraumatic arthritis, a common form of secondary osteoarthritis that results from a prior insult to the joint; avascular necrosis, a disease of impaired osseous blood flow, leading to cellular death and subsequent osseous collapse; septic arthritis, an infectious process leading to destructive changes within the joint; complex regional pain syndrome, a chronic limb-confined painful condition arising after injury; and cases of cancer mimicking arthritis, in which the initial findings seem to represent arthritis, despite a more insidious cause. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Study on real-time images compounded using spatial light modulator

    NASA Astrophysics Data System (ADS)

    Xu, Jin; Chen, Zhebo; Ni, Xuxiang; Lu, Zukang

    2007-01-01

    Image compounding technology is often used in film and film production. Commonly, image compounding uses image-processing algorithms: useful objects, details, background or other content are first extracted from the source images, and all of this information is then compounded into one image. With this method the film system needs a powerful processor, because the processing functions are very complex, and the compounded image is obtained only after some delay. In this paper, we introduce a new method of real-time image compounding with which compounding can be performed at the same time the movie is shot. The whole system is made up of two camera lenses, a spatial light modulator array and an image sensor. The spatial light modulator can be a liquid crystal display (LCD), liquid crystal on silicon (LCoS), thin film transistor liquid crystal display (TFT-LCD), deformable micro-mirror device (DMD), and so on. First, one camera lens, which we call the first imaging lens, images the object onto the spatial light modulator's panel. Second, we output an image to the panel of the spatial light modulator, so that the image of the object and the image output by the spatial light modulator are spatially compounded on the panel. Third, the other camera lens, which we call the second imaging lens, images the compounded image onto the image sensor. After these three steps, the compounded image is captured by the image sensor. Because the spatial light modulator can output images continuously, the compounding is continuous as well, and the procedure is completed in real time. With this method, to place a real object in a virtual background, we output the virtual background scene on the spatial light modulator while the real object is imaged by the first imaging lens; the compounded images are then obtained by the image sensor in real time. In the same way, to place a virtual object in a real background, we output the virtual object on the spatial light modulator while the real background is imaged by the first imaging lens. Commonly, most spatial light modulators can only modulate light intensity, so only black-and-white images can be compounded if a single panel without a color filter is used. To obtain color compounded images, a system similar to a three-panel spatial light modulator projector is needed. The paper presents the framework of the system's optical design. In all experiments, the spatial light modulator used was liquid crystal on silicon (LCoS). At the end of the paper, some original pictures and compounded pictures are given. Although the system has a few shortcomings, we can conclude that compounding images with this system incurs no delay from mathematical compounding processing; it is a truly real-time image compounding system.

  11. Applying Enhancement Filters in the Pre-processing of Images of Lymphoma

    NASA Astrophysics Data System (ADS)

    Henrique Silva, Sérgio; Zanchetta do Nascimento, Marcelo; Alves Neves, Leandro; Ramos Batista, Valério

    2015-01-01

    Lymphoma is a type of cancer that affects the immune system and is classified as Hodgkin or non-Hodgkin. It is among the ten most common cancers on earth: lymphoma accounts for three to four percent of all malignant neoplasms diagnosed in the world. Our work presents a study of filters devoted to enhancing images of lymphoma at the pre-processing step. Here the enhancement is useful for removing noise from the digital images. We analysed noise caused by different sources, such as room vibration, scraps and defocusing, in the following classes of lymphoma: follicular, mantle cell and B-cell chronic lymphocytic leukemia. The Gaussian, Median and Mean-Shift filters were applied in different colour models (RGB, Lab and HSV). Afterwards, we performed a quantitative analysis of the images by means of the Structural Similarity Index, in order to evaluate the similarity between the images. In all cases we obtained a certainty of at least 75%, which rises to 99% if one considers only HSV. Namely, we conclude that HSV is an important choice of colour model for pre-processing histological images of lymphoma, because in this case the resulting image receives the best enhancement.
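
    A minimal sketch of the paper's quantitative step: compare filtered images to a reference with the Structural Similarity Index. Toy grayscale data stand in for the histological colour channels, and the Mean-Shift filter is omitted for brevity.

        import numpy as np
        from scipy.ndimage import gaussian_filter, median_filter
        from skimage.metrics import structural_similarity as ssim

        rng = np.random.default_rng(3)
        clean = rng.random((128, 128))
        noisy = np.clip(clean + rng.normal(0, 0.1, clean.shape), 0, 1)

        # Higher SSIM means the filtered image is more similar to the reference.
        for name, filtered in [("gaussian", gaussian_filter(noisy, sigma=1)),
                               ("median", median_filter(noisy, size=3))]:
            print(name, round(ssim(clean, filtered, data_range=1.0), 3))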

  12. Reflectance from images: a model-based approach for human faces.

    PubMed

    Fuchs, Martin; Blanz, Volker; Lensch, Hendrik; Seidel, Hans-Peter

    2005-01-01

    In this paper, we present an image-based framework that acquires the reflectance properties of a human face. A range scan of the face is not required. Based on a morphable face model, the system estimates the 3D shape and establishes point-to-point correspondence across images taken from different viewpoints and across different individuals' faces. This provides a common parameterization of all reconstructed surfaces that can be used to compare and transfer BRDF data between different faces. Shape estimation from images compensates deformations of the face during the measurement process, such as facial expressions. In the common parameterization, regions of homogeneous materials on the face surface can be defined a priori. We apply analytical BRDF models to express the reflectance properties of each region and we estimate their parameters in a least-squares fit from the image data. For each of the surface points, the diffuse component of the BRDF is locally refined, which provides high detail. We present results for multiple analytical BRDF models, rendered at novel orientations and lighting conditions.

  13. Assessment of Restoration Methods of X-Ray Images with Emphasis on Medical Photogrammetric Usage

    NASA Astrophysics Data System (ADS)

    Hosseinian, S.; Arefi, H.

    2016-06-01

    Nowadays, various medical X-ray imaging methods, such as digital radiography, computed tomography and fluoroscopy, are used as important tools in diagnostic and operative processes, especially in computer- and robot-assisted surgeries. Extracting information from these images requires appropriate deblurring and denoising of the pre- and intra-operative images in order to obtain more accurate information. This issue becomes more significant when the X-ray images are to be employed in photogrammetric processes for 3D reconstruction from multi-view X-ray images, since accurate data must be extracted from the images for 3D modelling and the quality of the X-ray images directly affects the results of the algorithms. For restoration of X-ray images, it is essential to consider the nature and characteristics of this kind of image. X-ray images exhibit severe quantum noise because of the limited number of X-ray photons involved. Gaussian modelling assumptions are not appropriate for photon-limited images such as X-ray images, because of the nature of the signal-dependent quantum noise. These images are generally modelled by a Poisson distribution, which is the most common model for low-intensity imaging. In this paper, existing methods are evaluated. For this purpose, after demonstrating the properties of medical X-ray images, the more efficient and recommended methods for restoration of X-ray images are described and assessed. These approaches are then implemented on samples from different kinds of X-ray images. The results lead to the conclusion that PURE-LET provides more effective and efficient denoising than the other methods examined in this research.
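
    PURE-LET itself is involved; as a simpler illustration of handling the Poisson noise model the abstract describes, the sketch below stabilizes variance with the Anscombe transform, denoises with a plain Gaussian filter, and inverts. This is not PURE-LET, and the sigma is an assumed value.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def anscombe(x):
            # Variance-stabilizing transform: Poisson counts -> ~unit-variance Gaussian.
            return 2.0 * np.sqrt(x + 3.0 / 8.0)

        def inverse_anscombe(y):
            # Simple algebraic inverse (an unbiased inverse adds correction terms).
            return (y / 2.0) ** 2 - 3.0 / 8.0

        rng = np.random.default_rng(4)
        x = np.linspace(0, 6, 256)
        truth = 20.0 * (1.0 + np.sin(x)[:, None] * np.cos(x)[None, :])
        counts = rng.poisson(truth).astype(float)    # photon-limited image
        smooth = gaussian_filter(anscombe(counts), sigma=1.5)
        denoised = inverse_anscombe(smooth)
        print(float(np.abs(denoised - truth).mean()))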

  14. Noncongenital central nervous system infections in children: radiology review.

    PubMed

    Acosta, Jorge Humberto Davila; Rantes, Claudia Isabel Lazarte; Arbelaez, Andres; Restrepo, Feliza; Castillo, Mauricio

    2014-06-01

    Infections of the central nervous system (CNS) are a very common worldwide health problem in childhood, with significant morbidity and mortality. In children, viruses are the most common cause of CNS infections, followed by bacteria, with fungal and other etiologies less frequent. Uncomplicated meningitis is easier to recognize clinically; however, complications of meningitis such as abscesses, infarcts, venous thrombosis, or extra-axial empyemas are difficult to recognize clinically, and imaging plays a very important role in this setting. In addition, it is important to keep in mind that an infectious process adjacent to the CNS, such as mastoiditis, can extend by contiguity into an infectious process within the CNS. We present the most common causes of meningitis and their complications.

  15. Combined neutron and x-ray imaging at the National Ignition Facility (invited)

    DOE PAGES

    Danly, C. R.; Christensen, K.; Fatherley, Valerie E.; ...

    2016-10-11

    X-rays and neutrons are commonly used to image inertial confinement fusion implosions, providing key diagnostic information on the fuel assembly of burning DT fuel. The x-ray and neutron data are complementary, as the production of neutrons and x-rays occurs through different physical processes, but typically these two images are collected from different views with no opportunity for co-registration of the two images. Neutrons are produced where the DT fusion fuel is burning; x-rays are produced in regions corresponding to high temperatures. Processes such as mix of ablator material into the hotspot can result in increased x-ray production and decreased neutron production, but can only be confidently observed if the two images are collected along the same line of sight and co-registered. To allow direct comparison of x-ray and neutron data, a Combined Neutron X-ray Imaging system has been tested at Omega and installed at the National Ignition Facility to collect an x-ray image along the currently installed neutron imaging line of sight. Here, this system is described, and initial results are presented along with prospects for definitive co-registration of the images.

  16. Nondestructive, fast, and cost-effective image processing method for roughness measurement of randomly rough metallic surfaces.

    PubMed

    Ghodrati, Sajjad; Kandi, Saeideh Gorji; Mohseni, Mohsen

    2018-06-01

    In recent years, various surface roughness measurement methods have been proposed as alternatives to the commonly used stylus profilometry, which is a low-speed, destructive, expensive but precise method. In this study, a novel method, called "image profilometry," has been introduced for nondestructive, fast, and low-cost surface roughness measurement of randomly rough metallic samples based on image processing and machine vision. The impacts of influential parameters such as image resolution and filtering approach for elimination of the long wavelength surface undulations on the accuracy of the image profilometry results have been comprehensively investigated. Ten surface roughness parameters were measured for the samples using both the stylus and image profilometry. Based on the results, the best image resolution was 800 dpi, and the most practical filtering method was Gaussian convolution+cutoff. In these conditions, the best and worst correlation coefficients (R 2 ) between the stylus and image profilometry results were 0.9892 and 0.9313, respectively. Our results indicated that the image profilometry predicted the stylus profilometry results with high accuracy. Consequently, it could be a viable alternative to the stylus profilometry, particularly in online applications.
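
    As a sketch of the filtering-then-parameters step described above, the snippet below separates waviness from roughness with a Gaussian filter and computes two standard roughness parameters on a synthetic profile. The sigma stands in for the paper's Gaussian convolution + cutoff; all values and units are illustrative.

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def roughness_parameters(profile, sigma=25):
            waviness = gaussian_filter1d(profile, sigma)  # long-wavelength undulations
            r = profile - waviness                        # roughness component
            Ra = np.mean(np.abs(r))                       # arithmetic mean deviation
            Rq = np.sqrt(np.mean(r ** 2))                 # root-mean-square deviation
            return Ra, Rq

        rng = np.random.default_rng(5)
        x = np.linspace(0, 10, 2000)
        profile = 0.5 * np.sin(x) + 0.05 * rng.normal(size=x.size)
        print(roughness_parameters(profile))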

  17. Image processing based automatic diagnosis of glaucoma using wavelet features of segmented optic disc from fundus image.

    PubMed

    Singh, Anushikha; Dutta, Malay Kishore; ParthaSarathi, M; Uher, Vaclav; Burget, Radim

    2016-02-01

    Glaucoma is a disease of the retina which is one of the most common causes of permanent blindness worldwide. This paper presents an automatic image processing based method for glaucoma diagnosis from the digital fundus image. In this paper wavelet feature extraction has been followed by optimized genetic feature selection combined with several learning algorithms and various parameter settings. Unlike the existing research works where the features are considered from the complete fundus or a sub image of the fundus, this work is based on feature extraction from the segmented and blood vessel removed optic disc to improve the accuracy of identification. The experimental results presented in this paper indicate that the wavelet features of the segmented optic disc image are clinically more significant in comparison to features of the whole or sub fundus image in the detection of glaucoma from fundus image. Accuracy of glaucoma identification achieved in this work is 94.7% and a comparison with existing methods of glaucoma detection from fundus image indicates that the proposed approach has improved accuracy of classification. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
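
    A hedged sketch of extracting wavelet features from a segmented optic-disc image with PyWavelets; the wavelet, decomposition level, and subband statistics are assumptions, not the paper's exact feature set.

        import numpy as np
        import pywt

        def wavelet_features(img, wavelet="db3", level=2):
            coeffs = pywt.wavedec2(img, wavelet, level=level)
            feats = []
            for detail in coeffs[1:]:          # (cH, cV, cD) per decomposition level
                for band in detail:
                    feats += [np.mean(np.abs(band)), np.std(band)]
            return np.array(feats)

        rng = np.random.default_rng(6)
        disc = rng.random((64, 64))            # stand-in for a segmented optic disc
        print(wavelet_features(disc))          # 12 features: 2 stats x 3 bands x 2 levels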

  18. Combined neutron and x-ray imaging at the National Ignition Facility (invited).

    PubMed

    Danly, C R; Christensen, K; Fatherley, V E; Fittinghoff, D N; Grim, G P; Hibbard, R; Izumi, N; Jedlovec, D; Merrill, F E; Schmidt, D W; Simpson, R A; Skulina, K; Volegov, P L; Wilde, C H

    2016-11-01

    X-rays and neutrons are commonly used to image inertial confinement fusion implosions, providing key diagnostic information on the fuel assembly of burning deuterium-tritium (DT) fuel. The x-ray and neutron data are complementary, as the production of neutrons and x-rays occurs through different physical processes, but typically these two images are collected from different views with no opportunity for co-registration of the two images. Neutrons are produced where the DT fusion fuel is burning; x-rays are produced in regions corresponding to high temperatures. Processes such as mix of ablator material into the hotspot can result in increased x-ray production and decreased neutron production, but can only be confidently observed if the two images are collected along the same line of sight and co-registered. To allow direct comparison of x-ray and neutron data, a combined neutron x-ray imaging system has been tested at Omega and installed at the National Ignition Facility to collect an x-ray image along the currently installed neutron imaging line of sight. This system is described, and initial results are presented along with prospects for definitive co-registration of the images.

  19. Decomposed multidimensional control grid interpolation for common consumer electronic image processing applications

    NASA Astrophysics Data System (ADS)

    Zwart, Christine M.; Venkatesan, Ragav; Frakes, David H.

    2012-10-01

    Interpolation is an essential and broadly employed function of signal processing. Accordingly, considerable development has focused on advancing interpolation algorithms toward optimal accuracy. Such development has motivated a clear shift in the state of the art from classical interpolation to more intelligent and resourceful approaches, for example, registration-based interpolation. As a natural result, many of the most accurate current algorithms are highly complex, specific, and computationally demanding. However, the diverse hardware destinations for interpolation algorithms present unique constraints that often preclude use of the most accurate available options. For example, while computationally demanding interpolators may be suitable for highly equipped image processing platforms (e.g., computer workstations and clusters), only more efficient interpolators may be practical for less well equipped platforms (e.g., smartphones and tablet computers). The latter examples of consumer electronics present a design tradeoff in this regard: high-accuracy interpolation benefits the consumer experience, but computing capabilities are limited. It follows that interpolators with favorable combinations of accuracy and efficiency are of great practical value to the consumer electronics industry. We address multidimensional interpolation-based image processing problems that are common to consumer electronic devices through a decomposition approach. The multidimensional problems are first broken down into multiple, independent, one-dimensional (1-D) interpolation steps that are then executed with a newly modified registration-based 1-D control grid interpolator. The proposed approach, decomposed multidimensional control grid interpolation (DMCGI), combines the accuracy of registration-based interpolation with the simplicity, flexibility, and computational efficiency of a 1-D interpolation framework. Results demonstrate that DMCGI provides improved interpolation accuracy (and other benefits) in image resizing, color sample demosaicing, and video deinterlacing applications, at a computational cost that is manageable or reduced in comparison to popular alternatives.
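
    The decomposition idea can be sketched with plain linear interpolation: resize an image by two independent passes of 1-D interpolation, first over rows and then over columns. DMCGI would replace the np.interp calls with its registration-based 1-D control grid interpolator; everything else here is illustrative.

        import numpy as np

        def resize_separable(img, new_h, new_w):
            h, w = img.shape
            xs = np.linspace(0, w - 1, new_w)
            # Pass 1: interpolate every row to the new width.
            rows = np.array([np.interp(xs, np.arange(w), row) for row in img])
            ys = np.linspace(0, h - 1, new_h)
            # Pass 2: interpolate every column to the new height.
            cols = np.array([np.interp(ys, np.arange(h), col) for col in rows.T])
            return cols.T

        img = np.arange(64, dtype=float).reshape(8, 8)
        print(resize_separable(img, 20, 12).shape)   # (20, 12)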

  20. Novel Applications of Radionuclide Imaging in Peripheral Vascular Disease

    PubMed Central

    Stacy, Mitchel R.; Sinusas, Albert J.

    2015-01-01

    Peripheral vascular disease (PVD) is a progressive atherosclerotic disease that leads to stenosis or occlusion of blood vessels supplying the lower extremities. Current diagnostic imaging techniques commonly focus on evaluation of anatomy or blood flow at the macrovascular level and do not permit assessment of the underlying pathophysiology associated with disease progression or treatment response. Molecular imaging with radionuclide-based approaches, such as PET and SPECT, can offer novel insight into PVD by providing non-invasive assessment of biological processes such as angiogenesis and atherosclerosis. This review discusses emerging radionuclide-based imaging approaches that have potential clinical applications in the evaluation of PVD progression and treatment. PMID:26590787

  1. Magnetic resonance imaging of the pediatric neck: an overview.

    PubMed

    Shekdar, Karuna V; Mirsky, David M; Kazahaya, Ken; Bilaniuk, Larissa T

    2012-08-01

    Evaluation of neck lesions in the pediatric population can be a diagnostic challenge, for which magnetic resonance (MR) imaging is extremely valuable. This article provides an overview of the value and utility of MR imaging in the evaluation of pediatric neck lesions, addressing what the referring clinician requires from the radiologist. Concise descriptions and illustrations of MR imaging findings of commonly encountered pathologic entities in the pediatric neck, including abnormalities of the branchial apparatus, thyroglossal duct anomalies, and neoplastic processes, are given. An approach to establishing a differential diagnosis is provided, and critical points of information are summarized. Copyright © 2012 Elsevier Inc. All rights reserved.

  2. Processing the Viking lander camera data

    NASA Technical Reports Server (NTRS)

    Levinthal, E. C.; Tucker, R.; Green, W.; Jones, K. L.

    1977-01-01

    Over 1000 camera events were returned from the two Viking landers during the Primary Mission. A system was devised for processing camera data as they were received, in real time, from the Deep Space Network. This system provided a flexible choice of parameters for three computer-enhanced versions of the data for display or hard-copy generation. Software systems allowed all but 0.3% of the imagery scan lines received on earth to be placed correctly in the camera data record. A second-order processing system was developed which allowed extensive interactive image processing including computer-assisted photogrammetry, a variety of geometric and photometric transformations, mosaicking, and color balancing using six different filtered images of a common scene. These results have been completely cataloged and documented to produce an Experiment Data Record.

  3. OSM-Classic : An optical imaging technique for accurately determining strain

    NASA Astrophysics Data System (ADS)

    Aldrich, Daniel R.; Ayranci, Cagri; Nobes, David S.

    OSM-Classic is a program designed in MATLAB® to provide a method of accurately determining strain in a test sample using an optical imaging technique. Measuring strain for the mechanical characterization of materials is most commonly performed with extensometers, LVDTs (linear variable differential transformers), and strain gauges; however, these strain measurement methods suffer from their fragile nature, and it is not particularly easy to attach these devices to the material under test. To alleviate these potential problems, an optical approach that does not require contact with the specimen can be implemented to measure the strain. OSM-Classic is software that interrogates a series of images to determine elongation in a test sample and, hence, strain of the specimen. It was designed to provide a graphical user interface that includes image processing with a dynamic region of interest. Additionally, the strain is calculated directly while providing active feedback during the processing.
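
    A toy sketch of the optical-strain idea (not OSM-Classic itself): locate the shift of an intensity marker between a reference and a deformed image profile by cross-correlation, then take strain as elongation over gauge length. All values are hypothetical.

        import numpy as np

        def strain_from_profiles(ref, deformed, gauge_px):
            # Peak of the full cross-correlation gives the marker displacement.
            shift = np.argmax(np.correlate(deformed, ref, mode="full")) - (ref.size - 1)
            return shift / gauge_px                       # strain = delta_L / L0

        ref = np.zeros(64)
        ref[20:24] = 1.0                                  # bright marker
        deformed = np.roll(ref, 4)                        # marker moved 4 px
        print(strain_from_profiles(ref, deformed, gauge_px=200))  # 0.02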

  4. Study on field weed recognition in real time

    NASA Astrophysics Data System (ADS)

    He, Yong; Pan, Jiazhi; Zhang, Yun

    2006-02-01

    This research aimed to identify weeds among crops at an early stage in the field by using image-processing technology. 3CCD images offer a greater pixel-value difference between weed and crop regions than ordinary digital images taken by common cameras. A 3CCD camera has three channels (green, red, and near-infrared) that take a snapshot of the same area, and the three images can be composed into one image, which facilitates the segmentation of different areas. In this research, an MS3100 3CCD camera was used to acquire images of six kinds of weeds and crops. Some of these images contained more than two kinds of plants. The leaves' shapes, sizes and colors may be very similar or may differ from each other greatly: some are sword-shaped and some round; some are as large as a palm and some as small as a peanut; some are slightly brown while others are blue or green. Different combinations were taken into consideration. By applying the image-processing toolkit in MATLAB, the different areas in the image can be segmented clearly. The texture of the images was also analyzed. The processing methods include operations such as edge detection, erosion, dilation and other algorithms to process the edge vectors and textures. It is of great importance to segment, in real time, the different areas in digital images taken in the field. When this technique is applied in precision farming, much energy, herbicide and other material can be saved. At present, large-scale software such as MATLAB on a PC is used, but the computation can be reduced and integrated into a small embedded system. The research results show that the application of this technique in agricultural engineering is feasible and of great economic value.

  5. ClearedLeavesDB: an online database of cleared plant leaf images

    PubMed Central

    2014-01-01

    Background: Leaf vein networks are critical to both the structure and function of leaves. A growing body of recent work has linked leaf vein network structure to the physiology, ecology and evolution of land plants. In the process, multiple institutions and individual researchers have assembled collections of cleared leaf specimens in which vascular bundles (veins) are rendered visible. In an effort to facilitate analysis and digitally preserve these specimens, high-resolution images are usually created, either of entire leaves or of magnified leaf subsections. In a few cases, collections of digital images of cleared leaves are available for use online. However, these collections do not share a common platform nor is there a means to digitally archive cleared leaf images held by individual researchers (in addition to those held by institutions). Hence, there is a growing need for a digital archive that enables online viewing, sharing and disseminating of cleared leaf image collections held by both institutions and individual researchers. Description: The Cleared Leaf Image Database (ClearedLeavesDB) is an online web-based resource for a community of researchers to contribute, access and share cleared leaf images. ClearedLeavesDB leverages resources of large-scale, curated collections while enabling the aggregation of small-scale collections within the same online platform. ClearedLeavesDB is built on Drupal, an open source content management platform. It allows plant biologists to store leaf images online with corresponding meta-data, share image collections with a user community and discuss images and collections via a common forum. We provide tools to upload processed images and results to the database via a web services client application that can be downloaded from the database. Conclusions: We developed ClearedLeavesDB, a database focusing on cleared leaf images that combines interactions between users and data via an intuitive web interface. The web interface allows storage of large collections and integrates with leaf image analysis applications via an open application programming interface (API). The open API allows uploading of processed images and other trait data to the database, further enabling distribution and documentation of analyzed data within the community. The initial database is seeded with nearly 19,000 cleared leaf images representing over 40 GB of image data. Extensible storage and growth of the database is ensured by using the data storage resources of the iPlant Discovery Environment. ClearedLeavesDB can be accessed at http://clearedleavesdb.org. PMID:24678985

  6. ClearedLeavesDB: an online database of cleared plant leaf images.

    PubMed

    Das, Abhiram; Bucksch, Alexander; Price, Charles A; Weitz, Joshua S

    2014-03-28

    Leaf vein networks are critical to both the structure and function of leaves. A growing body of recent work has linked leaf vein network structure to the physiology, ecology and evolution of land plants. In the process, multiple institutions and individual researchers have assembled collections of cleared leaf specimens in which vascular bundles (veins) are rendered visible. In an effort to facilitate analysis and digitally preserve these specimens, high-resolution images are usually created, either of entire leaves or of magnified leaf subsections. In a few cases, collections of digital images of cleared leaves are available for use online. However, these collections do not share a common platform nor is there a means to digitally archive cleared leaf images held by individual researchers (in addition to those held by institutions). Hence, there is a growing need for a digital archive that enables online viewing, sharing and disseminating of cleared leaf image collections held by both institutions and individual researchers. The Cleared Leaf Image Database (ClearedLeavesDB), is an online web-based resource for a community of researchers to contribute, access and share cleared leaf images. ClearedLeavesDB leverages resources of large-scale, curated collections while enabling the aggregation of small-scale collections within the same online platform. ClearedLeavesDB is built on Drupal, an open source content management platform. It allows plant biologists to store leaf images online with corresponding meta-data, share image collections with a user community and discuss images and collections via a common forum. We provide tools to upload processed images and results to the database via a web services client application that can be downloaded from the database. We developed ClearedLeavesDB, a database focusing on cleared leaf images that combines interactions between users and data via an intuitive web interface. The web interface allows storage of large collections and integrates with leaf image analysis applications via an open application programming interface (API). The open API allows uploading of processed images and other trait data to the database, further enabling distribution and documentation of analyzed data within the community. The initial database is seeded with nearly 19,000 cleared leaf images representing over 40 GB of image data. Extensible storage and growth of the database is ensured by using the data storage resources of the iPlant Discovery Environment. ClearedLeavesDB can be accessed at http://clearedleavesdb.org.

  7. Application of the Digital Image Technology in the Visual Monitoring and Prediction of Shuttering Construction Safety

    NASA Astrophysics Data System (ADS)

    Ummin, Okumura; Tian, Han; Zhu, Haiyu; Liu, Fuqiang

    2018-03-01

    Construction safety has always been the first priority in the construction process. A common safety problem is the instability of the template (formwork) support. To solve this problem, digital image measurement technology has been devised to support a real-time monitoring system that is triggered if the deformation value exceeds the specified range. The economic loss can thus be reduced to the lowest possible level.

  8. Comparison of ring artifact removal methods using flat panel detector based CT images

    PubMed Central

    2011-01-01

    Background: Ring artifacts are the concentric rings superimposed on tomographic images, often caused by defective and insufficiently calibrated detector elements as well as by damaged scintillator crystals of the flat panel detector. They may also be generated by objects that attenuate X-rays very differently in different projection directions. Ring artifact reduction techniques reported in the literature so far can be broadly classified into two groups. One category of approaches is based on sinogram processing, also known as pre-processing techniques; the other category performs processing on the 2-D reconstructed images, recognized as post-processing techniques in the literature. The strengths and weaknesses of these categories of approaches are yet to be explored from a common platform. Method: In this paper, a comparative study of the two categories of ring artifact reduction techniques, basically designed for multi-slice CT instruments, is presented from a common platform. For comparison, two representative algorithms from each of the two categories are selected from the published literature. A very recently reported state-of-the-art sinogram-domain ring artifact correction method, which classifies the ring artifacts according to their strength and then corrects the artifacts using class-adaptive correction schemes, is also included in this comparative study. The first sinogram-domain correction method uses a wavelet-based technique to detect the corrupted pixels and then estimates the responses of the bad pixels using a simple linear interpolation technique. The second sinogram-based correction method performs all the filtering operations in the transform domain, i.e., in the wavelet and Fourier domains. On the other hand, the two post-processing-based correction techniques operate on the polar transform domain of the reconstructed CT images. The first method extracts the ring artifact template vector using a homogeneity test and then corrects the CT images by subtracting the artifact template vector from the uncorrected images. The second post-processing-based correction technique performs median and mean filtering on the reconstructed images to produce the corrected images. Results: The performances of the compared algorithms have been tested using both quantitative and perceptual measures. For quantitative analysis, two different numerical performance indices are chosen. In addition, different types of artifact patterns, e.g., single/band rings, artifacts from defective and mis-calibrated detector elements, rings in highly structured and in hard objects, and rings from different flat-panel detectors, are analyzed to perceptually investigate the strengths and weaknesses of the five methods. An investigation has also been carried out to compare the efficacy of these algorithms in correcting the volume images from a cone beam CT with the parameters determined from one particular slice. Finally, the capability of each correction technique to accurately retain the image information (e.g., a small object at the iso-center) in the corrected CT image has also been tested. Conclusions: The results show that the performances of the algorithms are limited and none is fully suitable for correcting different types of ring artifacts without introducing processing distortion to the image structure. To achieve diagnostic quality of the corrected slices, a combination of the two approaches (sinogram- and post-processing) can be used. The compared methods are also not suitable for correcting the volume images from a cone beam flat-panel-detector-based CT. PMID:21846411
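
    As a sketch of the post-processing family described above, the snippet below maps a reconstructed slice to polar coordinates, where rings become straight lines, and subtracts a per-radius median template. This is a toy version of the template-subtraction idea; mapping the corrected result back to Cartesian coordinates is omitted.

        import numpy as np
        from skimage.transform import warp_polar

        rng = np.random.default_rng(7)
        img = rng.random((256, 256))              # stand-in reconstructed slice
        polar = warp_polar(img, radius=128)       # rows: angles, cols: radii
        template = np.median(polar, axis=0)       # per-radius ring profile
        corrected = polar - (template - template.mean())[None, :]
        print(corrected.shape)                    # (360, 128)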

  9. Image fusion via nonlocal sparse K-SVD dictionary learning.

    PubMed

    Li, Ying; Li, Fangyi; Bai, Bendu; Shen, Qiang

    2016-03-01

    Image fusion aims to merge two or more images of the same scene, captured by various sensors, to construct a more informative image by integrating their details. Generally, such integration is achieved through the manipulation of the representations of the images concerned. Sparse representation plays an important role in the effective description of images, offering great potential in a variety of image processing tasks, including image fusion. Supported by sparse representation, an approach for image fusion using a novel dictionary learning scheme is proposed in this paper. The nonlocal self-similarity property of the images is exploited, not only at the stage of learning the underlying description dictionary but also during the process of image fusion. In particular, the property of nonlocal self-similarity is combined with the traditional sparse dictionary. This results in an improved learned dictionary, hereafter referred to as the nonlocal sparse K-SVD dictionary (where K-SVD stands for the K times singular value decomposition that is commonly used in the literature), abbreviated to NL_SK_SVD. The NL_SK_SVD dictionary is then applied to image fusion using simultaneous orthogonal matching pursuit. The proposed approach is evaluated with different types of images and compared with a number of alternative image fusion techniques. The superior fused images produced by the present approach demonstrate the efficacy of the NL_SK_SVD dictionary in sparse image representation.
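
    The sparse-coding step can be illustrated with ordinary orthogonal matching pursuit on a random dictionary. The paper uses simultaneous OMP with its learned NL_SK_SVD dictionary; the dictionary, signal, and sparsity level here are toy assumptions.

        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        rng = np.random.default_rng(8)
        D = rng.normal(size=(64, 256))            # stand-in dictionary of 256 atoms
        D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
        x = D[:, [3, 41, 200]] @ np.array([1.5, -0.7, 0.9])   # 3-sparse signal

        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3)
        omp.fit(D, x)
        print(np.nonzero(omp.coef_)[0])           # recovered atom indices: [3 41 200]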

  10. Angle-domain common imaging gather extraction via Kirchhoff prestack depth migration based on a traveltime table in transversely isotropic media

    NASA Astrophysics Data System (ADS)

    Liu, Shaoyong; Gu, Hanming; Tang, Yongjie; Bingkai, Han; Wang, Huazhong; Liu, Dingjin

    2018-04-01

    Angle-domain common image-point gathers (ADCIGs) can alleviate the limitations of common image-point gathers in the offset domain, and have been widely used for velocity inversion and amplitude variation with angle (AVA) analysis. We propose an effective algorithm for generating ADCIGs in transversely isotropic (TI) media via Kirchhoff prestack depth migration (KPSDM), based on the gradient of traveltime, as the dynamic programming method for computing traveltime in TI media does not suffer from the limitations of shadow zones and traveltime interpolation. We also present a specific implementation strategy for ADCIG extraction via KPSDM, comprising three major steps: (1) traveltime computation using a dynamic programming approach in TI media; (2) slowness vector calculation from the gradient of the previously computed traveltime table; (3) construction of illumination vectors and subsurface angles in the migration process. Numerical examples are included to demonstrate the effectiveness of our approach, which hence shows its potential for subsequent tomographic velocity inversion and AVA analysis.
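
    Steps (2) and (3) above reduce, at each image point, to forming slowness vectors from the traveltime gradients and measuring the angle between them; a minimal sketch with assumed 2-D gradient values:

        import numpy as np

        def opening_half_angle(grad_t_src, grad_t_rec):
            # Traveltime gradients act as source/receiver slowness vectors.
            ps = grad_t_src / np.linalg.norm(grad_t_src)
            pr = grad_t_rec / np.linalg.norm(grad_t_rec)
            return 0.5 * np.degrees(np.arccos(np.clip(ps @ pr, -1.0, 1.0)))

        # Hypothetical slowness directions at one subsurface point.
        print(opening_half_angle(np.array([0.2, 0.4]), np.array([-0.2, 0.4])))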

  11. MRI features of cervical articular process degenerative joint disease in Great Dane dogs with cervical spondylomyelopathy.

    PubMed

    Gutierrez-Quintana, Rodrigo; Penderis, Jacques

    2012-01-01

    Cervical spondylomyelopathy or Wobbler syndrome commonly affects the cervical vertebral column of Great Dane dogs. Degenerative changes affecting the articular process joints are a frequent finding in these patients; however, the correlation between these changes and other features of cervical spondylomyelopathy are uncertain. We described and graded the degenerative changes evident in the cervical articular process joints from 13 Great Danes dogs with cervical spondylomyelopathy using MR imaging, and evaluated the relationship between individual features of cervical articular process joint degeneration and the presence of spinal cord compression, vertebral foraminal stenosis, intramedullary spinal cord changes, and intervertebral disc degenerative changes. Degenerative changes affecting the articular process joints were common, with only 13 of 94 (14%) having no degenerative changes. The most severe changes were evident between C4-C5 and C7-T1 intervertebral spaces. Reduction or loss of the hyperintense synovial fluid signal on T2-weighted MR images was the most frequent feature associated with articular process joint degenerative changes. Degenerative changes of the articular process joints affecting the synovial fluid or articular surface, or causing lateral hypertrophic tissue, were positively correlated with lateral spinal cord compression and vertebral foraminal stenosis. Dorsal hypertrophic tissue was positively correlated with dorsal spinal cord compression. Disc-associated spinal cord compression was recognized less frequently. © 2011 Veterinary Radiology & Ultrasound.

  12. OCT monitoring of pathophysiological processes

    NASA Astrophysics Data System (ADS)

    Gladkova, Natalia D.; Shakhova, Natalia M.; Shakhov, Andrei; Petrova, Galina P.; Zagainova, Elena; Snopova, Ludmila; Kuznetzova, Irina N.; Chumakov, Yuri; Feldchtein, Felix I.; Gelikonov, Valentin M.; Gelikonov, Grigory V.; Kamensky, Vladislav A.; Kuranov, Roman V.; Sergeev, Alexander M.

    1999-04-01

    Based on the results of clinical examinations of about 200 patients, we discuss the capabilities of optical coherence tomography (OCT) in monitoring and diagnosing various pathophysiological processes. Performed in several clinical areas, including dermatology, urology, laryngology, gynecology, and dentistry, our study shows the existence of common optical features in the manifestation of a pathophysiological process in different organs. In this paper we focus on such universal tomographic optical signs for the processes of inflammation, necrosis and tumor growth. We also present data on dynamic OCT monitoring of the evolution of pathophysiological processes, both at the stage of disease development and in following up the results of different treatments such as drug application, radiation therapy, cryodestruction, and laser vaporization. The discovered peculiarities of OCT images for structural and functional imaging of biological tissues can serve as a basis for applying this method to the diagnosis of pathology, guidance of treatment, estimation of its adequacy, and assessment of the healing process.

  13. Reconstruction of biofilm images: combining local and global structural parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Resat, Haluk; Renslow, Ryan S.; Beyenal, Haluk

    2014-10-20

    Digitized images can be used for quantitative comparison of biofilms grown under different conditions. Using biofilm image reconstruction, it was previously found that biofilms with a completely different look can have nearly identical structural parameters and that the most commonly utilized global structural parameters were not sufficient to uniquely define these biofilms. Here, additional local and global parameters are introduced to show that these parameters considerably increase the reliability of the image reconstruction process. Assessment using human evaluators indicated that the correct identification rate of the reconstructed images increased from 50% to 72% with the introduction of the new parameters into the reconstruction procedure. An expanded set of parameters especially improved the identification of biofilm structures with internal orientational features and of structures in which colony sizes and spatial locations varied. Hence, the newly introduced structural parameter sets helped to better classify the biofilms by incorporating finer local structural details into the reconstruction process.

  14. Virtual and super - virtual refraction method: Application to synthetic data and 2012 of Karangsambung survey data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nugraha, Andri Dian; Adisatrio, Philipus Ronnie

    2013-09-09

    The seismic refraction survey is a geophysical method useful for imaging the earth's interior, particularly the near surface. One of the common problems in a seismic refraction survey is weak amplitude due to attenuation at far offsets. This phenomenon makes it difficult to pick the first refraction arrival and hence challenging to produce the near-surface image. Seismic interferometry is a new technique for manipulating seismic traces to obtain the Green's function between a pair of receivers. One of its uses is improving the quality of first refraction arrivals at far offsets. This research shows that physical properties such as seismic velocity and thickness can be estimated from virtual refraction processing. Moreover, virtual refraction can enhance the far-offset signal amplitude, since a stacking procedure is involved. Our results show that super-virtual refraction processing produces a seismic image with a higher signal-to-noise ratio than the raw seismic image. In the end, the number of reliable first-arrival picks is also increased.
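
    A toy sketch of the virtual refraction idea: cross-correlating the records at two receivers for every common source and stacking recovers the refractor traveltime difference between the receivers, even when individual traces are noisy. All geometry and noise values here are invented.

        import numpy as np

        def virtual_refraction_lag(traces_a, traces_b, dt):
            n = traces_a.shape[1]
            stack = np.zeros(2 * n - 1)
            for a, b in zip(traces_a, traces_b):
                stack += np.correlate(b, a, mode="full")   # correlate, then stack
            return (np.argmax(stack) - (n - 1)) * dt

        rng = np.random.default_rng(11)
        dt, n, shift = 0.002, 500, 25            # 2 ms sampling, 50 ms moveout
        ta, tb = [], []
        for _ in range(40):                      # 40 common sources
            a = rng.normal(0, 0.1, n); a[100] += 1.0          # arrival at receiver A
            b = rng.normal(0, 0.1, n); b[100 + shift] += 1.0  # later arrival at B
            ta.append(a); tb.append(b)
        print(virtual_refraction_lag(np.array(ta), np.array(tb), dt))  # ~0.05 s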

  15. Morphological rational operator for contrast enhancement.

    PubMed

    Peregrina-Barreto, Hayde; Herrera-Navarro, Ana M; Morales-Hernández, Luis A; Terol-Villalobos, Iván R

    2011-03-01

    Contrast enhancement is an important task in image processing that is commonly used as a preprocessing step to improve the images for other tasks such as segmentation. However, some methods for contrast improvement that work well in low-contrast regions affect good contrast regions as well. This occurs due to the fact that some elements may vanish. A method focused on images with different luminance conditions is introduced in the present work. The proposed method is based on morphological transformations by reconstruction and rational operations, which, altogether, allow a more accurate contrast enhancement resulting in regions that are in harmony with their environment. Furthermore, due to the properties of these morphological transformations, the creation of new elements on the image is avoided. The processing is carried out on luminance values in the u'v'Y color space, which avoids the creation of new colors. As a result of the previous considerations, the proposed method keeps the natural color appearance of the image.
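
    As a rough illustration of enhancement built on morphological reconstruction (not the authors' rational operator, and on a toy grayscale image rather than the u'v'Y space), the sketch below recovers bright detail with reconstruction by dilation and boosts it, reinforcing only maxima that already exist in the image:

        import numpy as np
        from skimage.morphology import reconstruction

        rng = np.random.default_rng(9)
        img = rng.random((128, 128))
        seed = np.clip(img - 0.3, 0, None)           # marker kept below the image
        opened = reconstruction(seed, img, method="dilation")
        detail = img - opened                        # bright structures ("h-domes")
        enhanced = np.clip(img + detail, 0, 1)       # boost without creating new maxima
        print(float(enhanced.min()), float(enhanced.max()))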

  16. MR imaging evaluation of abdominal pain during pregnancy: appendicitis and other nonobstetric causes.

    PubMed

    Spalluto, Lucy B; Woodfield, Courtney A; DeBenedectis, Carolynn M; Lazarus, Elizabeth

    2012-01-01

    Clinical diagnosis of the cause of abdominal pain in a pregnant patient is particularly difficult because of multiple confounding factors related to normal pregnancy. Magnetic resonance (MR) imaging is useful in evaluation of abdominal pain during pregnancy, as it offers the benefit of cross-sectional imaging without ionizing radiation or evidence of harmful effects to the fetus. MR imaging is often performed specifically for diagnosis of possible appendicitis, which is the most common illness necessitating emergency surgery in pregnant patients. However, it is important to look for pathologic processes outside the appendix that may be an alternative source of abdominal pain. Numerous entities other than appendicitis can cause abdominal pain during pregnancy, including processes of gastrointestinal, hepatobiliary, genitourinary, vascular, and gynecologic origin. MR imaging is useful in diagnosing the cause of abdominal pain in a pregnant patient because of its ability to safely demonstrate a wide range of pathologic conditions in the abdomen and pelvis beyond appendicitis. © RSNA, 2012.

  17. Enhancement of image contrast by fluorescence in microtechnology

    NASA Astrophysics Data System (ADS)

    Berndt, Michael; Tutsch, Rainer

    2005-06-01

    New developments in production technology increasingly focus on hybrid microsystems. Especially for systems with movable components, the process step of assembly is mandatory. In general, the accuracy of positioning of the parts has to be better than 1 μm. This makes specialized and automated production equipment necessary, which can lead to a conflict with the aim of flexibility of the range of products. Design for manufacturing is a well known remedy. Assembly aids are common practice today. These features of the workpieces bear no functionality for the end product but considerably ease certain process steps. By standardization of assembly aids generalized production equipment free from product-specific features could be developed. In our contribution, we demonstrate the photogrammetric determination of the positions of workpieces without reference to their exterior shape, using circular fiducial marks of 150 μm in diameter. The surface properties of the workpieces, however, still have an influence on image formation. As an example, the marks may be hidden by local specular reflections. A solution to this problem is to add an exclusive optical property to the fiducial marks to get an image with high contrast against the surface of the workpiece. In biology and medicine samples are stained with fluorescing dyes to enhance the contrast in optical microscopy. In fluorochromes, light of a characteristic wavelength is emitted after the absorption of light with a shorter wavelength. In our experiments we added a fluorochrome to a common photoresist and coated the surface of the workpiece with a thin layer thereof. Using photolithography as a patterning technique we generated fiducial marks with structures down to 25 μm. These marks can be identified by their characteristic emission wavelength under short-wavelength illumination. Only the fiducial marks remain visible in the images and processing these images is straightforward. The generation of fluorescing patterns by photolithography opens new possibilities for testing and process control in many fields of microtechnology.

  18. Digital-image processing and image analysis of glacier ice

    USGS Publications Warehouse

    Fitzpatrick, Joan J.

    2013-01-01

    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document compose a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended but it can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.

  19. Visualization and recommendation of large image collections toward effective sensemaking

    NASA Astrophysics Data System (ADS)

    Gu, Yi; Wang, Chaoli; Nemiroff, Robert; Kao, David; Parra, Denis

    2016-03-01

    In our daily lives, images are among the most commonly found data which we need to handle. We present iGraph, a graph-based approach for visual analytics of large image collections and their associated text information. Given such a collection, we compute the similarity between images, the distance between texts, and the connection between image and text to construct iGraph, a compound graph representation which encodes the underlying relationships among these images and texts. To enable effective visual navigation and comprehension of iGraph with tens of thousands of nodes and hundreds of millions of edges, we present a progressive solution that offers collection overview, node comparison, and visual recommendation. Our solution not only allows users to explore the entire collection with representative images and keywords but also supports detailed comparison for understanding and intuitive guidance for navigation. The visual exploration of iGraph is further enhanced with the implementation of bubble sets to highlight group memberships of nodes, suggestion of abnormal keywords or time periods based on text outlier detection, and comparison of four different recommendation solutions. For performance speedup, multiple graphics processing units and central processing units are utilized for processing and visualization in parallel. We experiment with two image collections and leverage a cluster driving a display wall of nearly 50 million pixels. We show the effectiveness of our approach by demonstrating experimental results and conducting a user study.

  20. Adaptive windowing in contrast-enhanced intravascular ultrasound imaging.

    PubMed

    Lindsey, Brooks D; Martin, K Heath; Jiang, Xiaoning; Dayton, Paul A

    2016-08-01

    Intravascular ultrasound (IVUS) is one of the most commonly used interventional imaging techniques and has seen recent innovations which attempt to characterize the risk posed by atherosclerotic plaques. One such development is the use of microbubble contrast agents to image vasa vasorum, fine vessels which supply oxygen and nutrients to the walls of coronary arteries and typically have diameters less than 200 μm. The degree of vasa vasorum neovascularization within plaques is positively correlated with plaque vulnerability. Having recently presented a prototype dual-frequency transducer for contrast agent-specific intravascular imaging, here we describe signal processing approaches based on minimum variance (MV) beamforming and the phase coherence factor (PCF) for improving the spatial resolution and contrast-to-tissue ratio (CTR) in IVUS imaging. These approaches are examined through simulations, phantom studies, ex vivo studies in porcine arteries, and in vivo studies in chicken embryos. In phantom studies, PCF processing improved CTR by a mean of 4.2 dB, while combined MV and PCF processing improved spatial resolution by 41.7%. Improvements of 2.2 dB in CTR and 37.2% in resolution were observed in vivo. Applying these processing strategies can enhance image quality in conventional B-mode IVUS or in contrast-enhanced IVUS, where signal-to-noise ratio is relatively low and resolution is at a premium. Copyright © 2016 Elsevier B.V. All rights reserved.
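
    A simplified sketch of phase-coherence-factor weighting on delay-aligned channel data, following the commonly cited definition in which the PCF penalizes the spread of instantaneous phase across channels (the paper's exact scaling and dual-frequency specifics are not reproduced; this uses a non-circular phase standard deviation for brevity):

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def pcf_weighted_sum(aligned, gamma=1.0):
        """Phase coherence factor weighting of delay-aligned RF channel data.

        aligned: (n_channels, n_samples) array after beamforming delays.
        """
        phases = np.angle(hilbert(aligned, axis=1))      # instantaneous phase
        sigma0 = np.pi / np.sqrt(3.0)                    # std of uniform phase
        pcf = np.clip(1.0 - gamma * phases.std(axis=0) / sigma0, 0.0, 1.0)
        return pcf * aligned.sum(axis=0)                 # weighted coherent sum
    ```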

  1. Imaging immune response of skin mast cells in vivo with two-photon microscopy

    NASA Astrophysics Data System (ADS)

    Li, Chunqiang; Pastila, Riikka K.; Lin, Charles P.

    2012-02-01

    Intravital multiphoton microscopy has provided insightful information about the dynamic processes of immune cells in vivo. However, the use of exogenous labeling agents limits its applications. There has been no method to perform functional imaging of mast cells, a population of innate tissue-resident immune cells. Mast cells are widely recognized as the effector cells in allergy. Recently, their roles as immunoregulatory cells in certain innate and adaptive immune responses have been actively investigated. Here we report in vivo imaging of mouse skin mast cells with two-photon microscopy using endogenous tryptophan as the fluorophore. We studied the following processes. 1) Mast cell degranulation, the first step in the mast cell activation process, in which the granules are released into peripheral tissue to trigger downstream reactions. 2) Mast cell reconstitution, a procedure commonly used to study mast cell function by comparing data from wild-type mice, mast cell-deficient mice, and mast cell-deficient mice reconstituted with bone marrow-derived mast cells (BMMCs). Imaging BMMC engraftment in tissue reveals mast cell development and the efficiency of BMMC reconstitution. We observed the reconstitution process for 6 weeks in the ear skin of mast cell-deficient Kit W-sh/W-sh mice by two-photon imaging. Our finding is the first instance of imaging mast cells in vivo with endogenous contrast.

  2. Neuroimaging standards for research into small vessel disease and its contribution to ageing and neurodegeneration

    PubMed Central

    Wardlaw, Joanna M; Smith, Eric E; Biessels, Geert J; Cordonnier, Charlotte; Fazekas, Franz; Frayne, Richard; Lindley, Richard I; O'Brien, John T; Barkhof, Frederik; Benavente, Oscar R; Black, Sandra E; Brayne, Carol; Breteler, Monique; Chabriat, Hugues; DeCarli, Charles; de Leeuw, Frank-Erik; Doubal, Fergus; Duering, Marco; Fox, Nick C; Greenberg, Steven; Hachinski, Vladimir; Kilimann, Ingo; Mok, Vincent; Oostenbrugge, Robert van; Pantoni, Leonardo; Speck, Oliver; Stephan, Blossom C M; Teipel, Stefan; Viswanathan, Anand; Werring, David; Chen, Christopher; Smith, Colin; van Buchem, Mark; Norrving, Bo; Gorelick, Philip B; Dichgans, Martin

    2013-01-01

    Summary Cerebral small vessel disease (SVD) is a common accompaniment of ageing. Features seen on neuroimaging include recent small subcortical infarcts, lacunes, white matter hyperintensities, perivascular spaces, microbleeds, and brain atrophy. SVD can present as a stroke or cognitive decline, or can have few or no symptoms. SVD frequently coexists with neurodegenerative disease, and can exacerbate cognitive deficits, physical disabilities, and other symptoms of neurodegeneration. Terminology and definitions for imaging the features of SVD vary widely, which is also true for protocols for image acquisition and image analysis. This lack of consistency hampers progress in identifying the contribution of SVD to the pathophysiology and clinical features of common neurodegenerative diseases. We are an international working group from the Centres of Excellence in Neurodegeneration. We completed a structured process to develop definitions and imaging standards for markers and consequences of SVD. We aimed to achieve the following: first, to provide a common advisory about terms and definitions for features visible on MRI; second, to suggest minimum standards for image acquisition and analysis; third, to agree on standards for scientific reporting of changes related to SVD on neuroimaging; and fourth, to review emerging imaging methods for detection and quantification of preclinical manifestations of SVD. Our findings and recommendations apply to research studies, and can be used in the clinical setting to standardise image interpretation, acquisition, and reporting. This Position Paper summarises the main outcomes of this international effort to provide the STandards for ReportIng Vascular changes on nEuroimaging (STRIVE). PMID:23867200

  3. Applying a visual language for image processing as a graphical teaching tool in medical imaging

    NASA Astrophysics Data System (ADS)

    Birchman, James J.; Tanimoto, Steven L.; Rowberg, Alan H.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Typical user interaction in image processing is with command line entries, pull-down menus, or text menu selections from a list, and as such is not generally graphical in nature. Although applying these interactive methods to construct more sophisticated algorithms from a series of simple image processing steps may be clear to engineers and programmers, it may not be clear to clinicians. A solution to this problem is to implement a visual programming language using visual representations to express image processing algorithms. Visual representations promote a more natural and rapid understanding of image processing algorithms by providing more visual insight into what the algorithms do than the interactive methods mentioned above can provide. Individuals accustomed to dealing with images will be more likely to understand an algorithm that is represented visually. This is especially true of referring physicians, such as surgeons in an intensive care unit. With the increasing acceptance of picture archiving and communications system (PACS) workstations and the trend toward increasing clinical use of image processing, referring physicians will need to learn more sophisticated concepts than simply image access and display. If the procedures that they perform commonly, such as window width and window level adjustment and image enhancement using unsharp masking, are depicted visually in an interactive environment, it will be easier for them to learn and apply these concepts. The software described in this paper is a visual programming language for image processing which has been implemented on the NeXT computer using NeXTstep user interface development tools and other tools in an object-oriented environment. The concept is based upon the description of a visual language titled 'Visualization of Vision Algorithms' (VIVA). Iconic representations of simple image processing steps are placed into a workbench screen and connected together into a dataflow path by the user. As the user creates and edits a dataflow path, more complex algorithms can be built on the screen. Once the algorithm is built, it can be executed, its results can be reviewed, and operator parameters can be interactively adjusted until an optimized output is produced. The optimized algorithm can then be saved and added to the system as a new operator. This system has been evaluated as a graphical teaching tool for window width and window level adjustment, image enhancement using unsharp masking, and other techniques.
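
    The two clinician-facing operations named above are simple enough to express as composable steps; a sketch of a textual analogue of such a dataflow path (array in, array out; parameter values are illustrative, and the chaining mimics the icons-wired-on-screen idea):

    ```python
    import numpy as np
    from scipy import ndimage

    def window_level(img, width=400.0, level=40.0):
        """Map [level - width/2, level + width/2] to the displayable [0, 1] range."""
        lo = level - width / 2.0
        return np.clip((img - lo) / width, 0.0, 1.0)

    def unsharp_mask(img, sigma=2.0, amount=1.0):
        """Sharpen by adding back the difference from a Gaussian-blurred copy."""
        blurred = ndimage.gaussian_filter(img, sigma)
        return img + amount * (img - blurred)

    # A 'dataflow path': operators applied in sequence, like icons wired together.
    pipeline = [unsharp_mask, window_level]

    def run(img, steps=pipeline):
        for step in steps:
            img = step(img)
        return img
    ```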

  4. Analyses of requirements for computer control and data processing experiment subsystems: Image data processing system (IDAPS) software description (7094 version), volume 2

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A description of each of the software modules of the Image Data Processing System (IDAPS) is presented. The changes in the software modules are the result of additions to the application software of the system and an upgrade of the IBM 7094 Mod(1) computer to a 1301 disk storage configuration. Necessary information about IDAPS software is supplied to the computer programmer who desires to make changes in the software system or who desires to use portions of the software outside of the IDAPS system. Each software module is documented with: module name, purpose, usage, common block(s) description, method (algorithm of subroutine), flow diagram (if needed), subroutines called, and storage requirements.

  5. Implementation Analysis of Cutting Tool Carbide with Cast Iron Material S45 C on Universal Lathe

    NASA Astrophysics Data System (ADS)

    Junaidi; hestukoro, Soni; yanie, Ahmad; Jumadi; Eddy

    2017-12-01

    A cutting tool is the working tool of a lathe. This study analyses the cutting process of a carbide tool on S45C cast iron material on a universal lathe with respect to several aspects, namely cutting force, cutting speed, cutting power, indicated cutting power, and the temperatures in zone 1 and zone 2. The purpose of this study was to determine the magnitude of the cutting speed, cutting power, electromotor power, and zone 1 and zone 2 temperatures involved in driving the carbide cutting tool when turning cast iron material. The cutting force was obtained from image analysis of the relationship between the recommended cutting force component and the plane of the cut, and the cutting speed was obtained from image analysis of the relationship between the recommended cutting speed and feed rate.
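
    For orientation, the quantities above are linked by the standard machining relation that cutting power is, to first order, the product of cutting force and cutting speed; a trivial sketch of that relation (the numbers are placeholders, not the study's data):

    ```python
    def cutting_power_kw(cutting_force_n, cutting_speed_m_per_min):
        """Cutting power in kW from cutting force (N) and cutting speed (m/min)."""
        return cutting_force_n * cutting_speed_m_per_min / 60_000.0

    print(cutting_power_kw(1200.0, 90.0))  # -> 1.8 (kW)
    ```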

  6. New false color mapping for image fusion

    NASA Astrophysics Data System (ADS)

    Toet, Alexander; Walraven, Jan

    1996-03-01

    A pixel-based color-mapping algorithm is presented that produces a fused false color rendering of two gray-level images representing different sensor modalities. The resulting images have a higher information content than each of the original images and retain sensor-specific image information. The unique component of each image modality is enhanced in the resulting fused color image representation. First, the common component of the two original input images is determined. Second, the common component is subtracted from the original images to obtain the unique component of each image. Third, the unique component of each image modality is subtracted from the image of the other modality. This step serves to enhance the representation of sensor-specific details in the final fused result. Finally, a fused color image is produced by displaying the images resulting from the last step through, respectively, the red and green channels of a color display. The method is applied to fuse thermal and visual images. The results show that the color mapping enhances the visibility of certain details and preserves the specificity of the sensor information. The fused images also have a fairly natural appearance. The fusion scheme involves only operations on corresponding pixels. The resolution of a fused image is therefore directly related to the resolution of the input images. Before fusing, the contrast of the images can be enhanced and their noise can be reduced by standard image-processing techniques. The color mapping algorithm is computationally simple. This implies that the investigated approaches can eventually be applied in real time and that the hardware needed is not too complicated or too voluminous (an important consideration when it has to fit in an airplane, for instance).
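
    The mapping lends itself to a direct pixel-wise sketch; here the common component is taken as the pixel-wise minimum of the two co-registered gray-level inputs, which is one reasonable reading of the first step rather than necessarily the paper's exact operator:

    ```python
    import numpy as np

    def false_color_fuse(visual, thermal):
        """Fuse two co-registered gray-level images into an R/G false-color image."""
        common = np.minimum(visual, thermal)       # component present in both
        unique_v = visual - common                 # sensor-specific detail
        unique_t = thermal - common
        red = np.clip(visual - unique_t, 0, None)  # suppress the other modality
        green = np.clip(thermal - unique_v, 0, None)
        blue = np.zeros_like(common)
        return np.stack([red, green, blue], axis=-1)
    ```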

  7. Active thermography and post-processing image enhancement for recovering of abraded and paint-covered alphanumeric identification marks

    NASA Astrophysics Data System (ADS)

    Montanini, R.; Quattrocchi, A.; Piccolo, S. A.

    2016-09-01

    Alphanumeric marking is a common technique employed in industrial applications for identification of products. However, the realised mark can undergo deterioration, either through extensive use or deliberate deletion (e.g. removal of identification numbers of weapons or vehicles). Many destructive and non-destructive techniques have been attempted for recovery of the lost data, but all present several restrictions. In this paper, active infrared thermography has been exploited for the first time in order to assess its effectiveness in restoring paint-covered and abraded labels made by means of different manufacturing processes (laser, dot peen, impact, cold press and scribe). Optical excitation of the target surface has been achieved using pulse (PT), lock-in (LT) and step heating (SHT) thermography. Raw infrared images were analysed with dedicated image processing software originally developed in Matlab™, exploiting several methods, which include thermographic signal reconstruction (TSR), guided filtering (GF), block guided filtering (BGF) and logarithmic transformation (LN). Proper processing of the raw infrared images resulted in superior contrast and enhanced readability. In particular, for deeply abraded marks, good outcomes were obtained by applying the logarithmic transformation to raw PT images and block guided filtering to raw phase LT images. With PT and LT it was relatively easy to recover labels covered by paint, with the latter providing better thermal contrast for all the examined targets. Step heating thermography, by contrast, never led to adequate label identification.
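
    Of the post-processing methods listed, the logarithmic transformation is the simplest to sketch; a minimal version for a raw thermal frame (the display normalization is an arbitrary choice, not the paper's):

    ```python
    import numpy as np

    def log_enhance(frame):
        """Compress the dynamic range of a raw thermal image to reveal faint marks."""
        frame = frame - frame.min()                  # shift to non-negative values
        out = np.log1p(frame)                        # log(1 + x), stable at zero
        return out / out.max()                       # normalize to [0, 1] for display
    ```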

  8. Common-mask guided image reconstruction (c-MGIR) for enhanced 4D cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Park, Justin C.; Zhang, Hao; Chen, Yunmei; Fan, Qiyong; Li, Jonathan G.; Liu, Chihray; Lu, Bo

    2015-12-01

    Compared to 3D cone beam computed tomography (3D CBCT), the image quality of commercially available four-dimensional (4D) CBCT is severely impaired due to the insufficient amount of projection data available for each phase. Since the traditional Feldkamp-Davis-Kress (FDK)-based algorithm is infeasible for reconstructing high quality 4D CBCT images with limited projections, investigators have developed several compressed-sensing (CS) based algorithms to improve image quality. The aim of this study is to develop a novel algorithm which can provide better image quality than the FDK and other CS based algorithms with limited projections. We named this algorithm ‘the common mask guided image reconstruction’ (c-MGIR). In c-MGIR, the unknown CBCT volume is mathematically modeled as a combination of phase-specific motion vectors and phase-independent static vectors. The common-mask matrix, which is the key concept behind the c-MGIR algorithm, separates the common static part across all phase images from the possible moving part in each phase image. The moving part and the static part of the volumes are then alternately updated by solving two sub-minimization problems iteratively. As the novel mathematical transformation allows the static volume and moving volumes to be updated (during each iteration) with global projections and the ‘well’ solved static volume respectively, the algorithm is able to reduce noise and under-sampling artifacts (an issue faced by other algorithms) to the maximum extent. To evaluate the performance of the proposed c-MGIR, we utilized imaging data from both numerical phantoms and a lung cancer patient. The quality of the images reconstructed with c-MGIR was compared with that of (1) the standard FDK algorithm, (2) a conventional total variation (CTV) based algorithm, (3) the prior image constrained compressed sensing (PICCS) algorithm, and (4) the motion-map constrained image reconstruction (MCIR) algorithm. To improve the efficiency of the algorithm, the code was implemented with a graphics processing unit for parallel processing purposes. Root mean square error (RMSE) between the ground truth and the reconstructed volumes of the numerical phantom was in descending order for FDK, CTV, PICCS, MCIR, and c-MGIR for all phases. Specifically, the means and standard deviations of the RMSE of FDK, CTV, PICCS, MCIR and c-MGIR over all phases were 42.64 ± 6.5%, 3.63 ± 0.83%, 1.31 ± 0.09%, 0.86 ± 0.11% and 0.52 ± 0.02%, respectively. The image quality of the patient case also indicated the superiority of c-MGIR compared to the other algorithms. The results indicated that clinically viable 4D CBCT images can be reconstructed while requiring no more projection data than a typical clinical 3D CBCT scan. This makes c-MGIR a potential online reconstruction algorithm for 4D CBCT, which can provide much better image quality than other available algorithms, while requiring less dose and potentially less scanning time.
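
    The static/moving decomposition and its alternating update can be caricatured on a toy model where each phase measurement is a static volume plus a sparse phase-specific part; this sketch conveys only the alternation, not c-MGIR's actual projection-domain optimization (the soft threshold stands in for the sparsity/regularization penalty):

    ```python
    import numpy as np

    def soft(x, t):
        """Soft-thresholding, a stand-in for the penalty on the moving part."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def alternating_static_moving(phases, n_iter=50, t=0.1):
        """phases: (n_phases, n_voxels) noisy per-phase 'reconstructions'."""
        moving = np.zeros_like(phases)
        for _ in range(n_iter):
            # Update the static part using data from ALL phases (well sampled)
            static = (phases - moving).mean(axis=0)
            # Update each phase's moving part given the current static estimate
            moving = soft(phases - static, t)
        return static, moving
    ```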

  9. Common-mask guided image reconstruction (c-MGIR) for enhanced 4D cone-beam computed tomography.

    PubMed

    Park, Justin C; Zhang, Hao; Chen, Yunmei; Fan, Qiyong; Li, Jonathan G; Liu, Chihray; Lu, Bo

    2015-12-07

    Compared to 3D cone beam computed tomography (3D CBCT), the image quality of commercially available four-dimensional (4D) CBCT is severely impaired due to the insufficient amount of projection data available for each phase. Since the traditional Feldkamp-Davis-Kress (FDK)-based algorithm is infeasible for reconstructing high quality 4D CBCT images with limited projections, investigators have developed several compressed-sensing (CS) based algorithms to improve image quality. The aim of this study is to develop a novel algorithm which can provide better image quality than the FDK and other CS based algorithms with limited projections. We named this algorithm 'the common mask guided image reconstruction' (c-MGIR). In c-MGIR, the unknown CBCT volume is mathematically modeled as a combination of phase-specific motion vectors and phase-independent static vectors. The common-mask matrix, which is the key concept behind the c-MGIR algorithm, separates the common static part across all phase images from the possible moving part in each phase image. The moving part and the static part of the volumes are then alternately updated by solving two sub-minimization problems iteratively. As the novel mathematical transformation allows the static volume and moving volumes to be updated (during each iteration) with global projections and the 'well' solved static volume respectively, the algorithm is able to reduce noise and under-sampling artifacts (an issue faced by other algorithms) to the maximum extent. To evaluate the performance of the proposed c-MGIR, we utilized imaging data from both numerical phantoms and a lung cancer patient. The quality of the images reconstructed with c-MGIR was compared with that of (1) the standard FDK algorithm, (2) a conventional total variation (CTV) based algorithm, (3) the prior image constrained compressed sensing (PICCS) algorithm, and (4) the motion-map constrained image reconstruction (MCIR) algorithm. To improve the efficiency of the algorithm, the code was implemented with a graphics processing unit for parallel processing purposes. Root mean square error (RMSE) between the ground truth and the reconstructed volumes of the numerical phantom was in descending order for FDK, CTV, PICCS, MCIR, and c-MGIR for all phases. Specifically, the means and standard deviations of the RMSE of FDK, CTV, PICCS, MCIR and c-MGIR over all phases were 42.64 ± 6.5%, 3.63 ± 0.83%, 1.31 ± 0.09%, 0.86 ± 0.11% and 0.52 ± 0.02%, respectively. The image quality of the patient case also indicated the superiority of c-MGIR compared to the other algorithms. The results indicated that clinically viable 4D CBCT images can be reconstructed while requiring no more projection data than a typical clinical 3D CBCT scan. This makes c-MGIR a potential online reconstruction algorithm for 4D CBCT, which can provide much better image quality than other available algorithms, while requiring less dose and potentially less scanning time.

  10. Post-encoding control of working memory enhances processing of relevant information in rhesus monkeys (Macaca mulatta).

    PubMed

    Brady, Ryan J; Hampton, Robert R

    2018-06-01

    Working memory is a system by which a limited amount of information can be kept available for processing after the cessation of sensory input. Because working memory resources are limited, it is adaptive to focus processing on the most relevant information. We used a retro-cue paradigm to determine the extent to which monkey working memory possesses control mechanisms that focus processing on the most relevant representations. Monkeys saw a sample array of images, and shortly after the array disappeared, they were visually cued to a location that had been occupied by one of the sample images. The cue indicated which image should be remembered for the upcoming recognition test. By determining whether the monkeys were more accurate and quicker to respond to cued images compared to un-cued images, we tested the hypothesis that monkey working memory focuses processing on relevant information. We found a memory benefit for the cued image in terms of accuracy and retrieval speed with a memory load of two images. With a memory load of three images, we found a benefit in retrieval speed but only after shortening the onset latency of the retro-cue. Our results demonstrate previously unknown flexibility in the cognitive control of memory in monkeys, suggesting that control mechanisms in working memory likely evolved in a common ancestor of humans and monkeys more than 32 million years ago. Future work should be aimed at understanding the interaction between memory load and the ability to control memory resources, and the role of working memory control in generating differences in cognitive capacity among primates. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. Geocoding uncertainty analysis for the automated processing of Sentinel-1 data using Sentinel-1 Toolbox software

    NASA Astrophysics Data System (ADS)

    Dostálová, Alena; Naeimi, Vahid; Wagner, Wolfgang; Elefante, Stefano; Cao, Senmao; Persson, Henrik

    2016-10-01

    One of the major advantages of the Sentinel-1 data is its capability to provide very high spatio-temporal coverage, allowing the mapping of large areas as well as the creation of dense time-series of Sentinel-1 acquisitions. The SGRT software developed at TU Wien aims at automated processing of Sentinel-1 data for global and regional products. The first step of the processing consists of the geocoding of the Sentinel-1 data with the help of the S1TBX software and their resampling to a common grid. These resampled images serve as input for the product derivation. Thus, it is very important to select the most reliable processing settings and assess the geocoding uncertainty for both backscatter and projected local incidence angle images. Within this study, a selection of Sentinel-1 acquisitions over three test areas in Europe was processed manually in the S1TBX software, testing multiple software versions, processing settings, and digital elevation models (DEM), and the accuracy of the resulting geocoded images was assessed. Secondly, all available Sentinel-1 data over the areas were processed using the selected settings, and a detailed quality check was performed. Overall, a strong influence of the DEM used on the geocoding quality was confirmed, with differences up to 80 meters in areas with higher terrain variations. In flat areas, the geocoding accuracy of backscatter images was overall good, with observed shifts between 0 and 30 m. Larger systematic shifts were identified in the case of projected local incidence angle images. These results encourage the automated processing of large volumes of Sentinel-1 data.
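
    The shift assessment itself is straightforward once a reference image exists; a sketch using sub-pixel cross-correlation to estimate the residual offset of a geocoded backscatter image (the scikit-image call is real, while the inputs and pixel spacing are assumptions for illustration):

    ```python
    import numpy as np
    from skimage.registration import phase_cross_correlation

    def geocoding_shift_m(reference, geocoded, pixel_spacing_m=10.0):
        """Estimate the (row, col) geocoding shift in meters via cross-correlation."""
        shift, error, _ = phase_cross_correlation(reference, geocoded,
                                                  upsample_factor=10)
        return np.asarray(shift) * pixel_spacing_m, error
    ```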

  12. Comparison of two SVD-based color image compression schemes.

    PubMed

    Li, Ying; Wei, Musheng; Zhang, Fengxia; Zhao, Jianli

    2017-01-01

    Color image compression is a commonly used process for representing image data with as few bits as possible, removing redundancy in the data while maintaining an appropriate level of quality for the user. Color image compression algorithms based on quaternions have become common in recent years. In this paper, we propose a color image compression scheme based on the real SVD, named the real compression scheme. First, we form a new real rectangular matrix C according to the red, green and blue components of the original color image and perform the real SVD for C. Then we select the several largest singular values and the corresponding vectors in the left and right unitary matrices to compress the color image. We compare the real compression scheme with the quaternion compression scheme, performing the quaternion SVD using the real structure-preserving algorithm. We compare the two schemes in terms of operation count, assignment number, operation speed, PSNR and CR. The experimental results show that with the same number of selected singular values, the real compression scheme offers a higher CR and much less operation time, but a slightly lower PSNR than the quaternion compression scheme. When the two schemes have the same CR, the real compression scheme shows more prominent advantages in both operation time and PSNR.
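
    A sketch of the real scheme's core step, stacking the three channels into one real matrix and truncating its SVD; the vertical stacking orientation is one plausible choice, as the paper's exact construction of C may differ:

    ```python
    import numpy as np

    def svd_compress_color(img, k):
        """img: (h, w, 3) float array. Keep the k largest singular values."""
        C = np.vstack([img[..., c] for c in range(3)])    # (3h, w) real matrix
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        C_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]       # rank-k approximation
        return np.stack(np.split(C_k, 3, axis=0), axis=-1)
    ```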

  13. Comparison of two SVD-based color image compression schemes

    PubMed Central

    Li, Ying; Wei, Musheng; Zhang, Fengxia; Zhao, Jianli

    2017-01-01

    Color image compression is a commonly used process for representing image data with as few bits as possible, removing redundancy in the data while maintaining an appropriate level of quality for the user. Color image compression algorithms based on quaternions have become common in recent years. In this paper, we propose a color image compression scheme based on the real SVD, named the real compression scheme. First, we form a new real rectangular matrix C according to the red, green and blue components of the original color image and perform the real SVD for C. Then we select the several largest singular values and the corresponding vectors in the left and right unitary matrices to compress the color image. We compare the real compression scheme with the quaternion compression scheme, performing the quaternion SVD using the real structure-preserving algorithm. We compare the two schemes in terms of operation count, assignment number, operation speed, PSNR and CR. The experimental results show that with the same number of selected singular values, the real compression scheme offers a higher CR and much less operation time, but a slightly lower PSNR than the quaternion compression scheme. When the two schemes have the same CR, the real compression scheme shows more prominent advantages in both operation time and PSNR. PMID:28257451

  14. A Synthesis of Star Calibration Techniques for Ground-Based Narrowband Electron-Multiplying Charge-Coupled Device Imagers Used in Auroral Photometry

    NASA Technical Reports Server (NTRS)

    Grubbs, Guy II; Michell, Robert; Samara, Marilia; Hampton, Don; Jahn, Jorg-Micha

    2016-01-01

    A technique is presented for the periodic and systematic calibration of ground-based optical imagers. It is important to have a common system of units (Rayleighs or photon flux) for cross comparison as well as self-comparison over time. With the advancement in technology, the sensitivity of these imagers has improved so that stars can be used for more precise calibration. Background subtraction, flat fielding, star mapping, and other common techniques are combined in deriving a calibration technique appropriate for a variety of ground-based imager installations. Spectral (4278, 5577, and 8446 Å) ground-based imager data with multiple fields of view (19, 47, and 180 deg) are processed and calibrated using the techniques developed. The calibration techniques applied result in intensity measurements in agreement between different imagers using identical spectral filtering, and the intensity at each wavelength observed is within the expected range of auroral measurements. The application of these star calibration techniques, which convert raw imager counts into units of photon flux, makes it possible to do quantitative photometry. The computed photon fluxes, in units of Rayleighs, can be used for absolute photometry between instruments or as input parameters for auroral electron transport models.
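
    The pre-calibration steps named above (background subtraction, flat fielding) combine with a star-derived scale factor in an essentially multiplicative chain; a hedged sketch, in which the Rayleigh conversion factor is assumed already known from the star photometry step:

    ```python
    import numpy as np

    def calibrate_frame(raw, dark, flat, counts_per_rayleigh):
        """Convert raw imager counts to Rayleighs.

        raw, dark, flat: 2-D arrays; counts_per_rayleigh: scale factor derived
        from photometry of catalog stars in the field of view (assumed known).
        """
        corrected = (raw - dark) / np.where(flat > 0, flat, 1.0)  # flat-fielded
        return corrected / counts_per_rayleigh                    # now in Rayleighs
    ```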

  15. My Body Looks Like That Girl’s: Body Mass Index Modulates Brain Activity during Body Image Self-Reflection among Young Women

    PubMed Central

    Wen, Xin; She, Ying; Vinke, Petra Corianne; Chen, Hong

    2016-01-01

    Body image distress or body dissatisfaction is one of the most common consequences of obesity and overweight. We investigated the neural bases of body image processing in overweight and average weight young women to understand whether brain regions that were previously found to be involved in processing self-reflective, perspective and affective components of body image would show different activation between two groups. Thirteen overweight (O-W group, age = 20.31±1.70 years) and thirteen average weight (A-W group, age = 20.15±1.62 years) young women underwent functional magnetic resonance imaging while performing a body image self-reflection task. Among both groups, whole-brain analysis revealed activations of a brain network related to perceptive and affective components of body image processing. ROI analysis showed a main effect of group in ACC as well as a group by condition interaction within bilateral EBA, bilateral FBA, right IPL, bilateral DLPFC, left amygdala and left MPFC. For the A-W group, simple effect analysis revealed stronger activations in Thin-Control compared to Fat-Control condition within regions related to perceptive (including bilateral EBA, bilateral FBA, right IPL) and affective components of body image processing (including bilateral DLPFC, left amygdala), as well as self-reference (left MPFC). The O-W group only showed stronger activations in Fat-Control than in Thin-Control condition within regions related to the perceptive component of body image processing (including left EBA and left FBA). Path analysis showed that in the Fat-Thin contrast, body dissatisfaction completely mediated the group difference in brain response in left amygdala across the whole sample. Our data are the first to demonstrate differences in brain response to body pictures between average weight and overweight young females involved in a body image self-reflection task. These results provide insights for understanding the vulnerability to body image distress among overweight or obese young females. PMID:27764116

  16. My Body Looks Like That Girl's: Body Mass Index Modulates Brain Activity during Body Image Self-Reflection among Young Women.

    PubMed

    Gao, Xiao; Deng, Xiao; Wen, Xin; She, Ying; Vinke, Petra Corianne; Chen, Hong

    2016-01-01

    Body image distress or body dissatisfaction is one of the most common consequences of obesity and overweight. We investigated the neural bases of body image processing in overweight and average weight young women to understand whether brain regions that were previously found to be involved in processing self-reflective, perspective and affective components of body image would show different activation between two groups. Thirteen overweight (O-W group, age = 20.31±1.70 years) and thirteen average weight (A-W group, age = 20.15±1.62 years) young women underwent functional magnetic resonance imaging while performing a body image self-reflection task. Among both groups, whole-brain analysis revealed activations of a brain network related to perceptive and affective components of body image processing. ROI analysis showed a main effect of group in ACC as well as a group by condition interaction within bilateral EBA, bilateral FBA, right IPL, bilateral DLPFC, left amygdala and left MPFC. For the A-W group, simple effect analysis revealed stronger activations in Thin-Control compared to Fat-Control condition within regions related to perceptive (including bilateral EBA, bilateral FBA, right IPL) and affective components of body image processing (including bilateral DLPFC, left amygdala), as well as self-reference (left MPFC). The O-W group only showed stronger activations in Fat-Control than in Thin-Control condition within regions related to the perceptive component of body image processing (including left EBA and left FBA). Path analysis showed that in the Fat-Thin contrast, body dissatisfaction completely mediated the group difference in brain response in left amygdala across the whole sample. Our data are the first to demonstrate differences in brain response to body pictures between average weight and overweight young females involved in a body image self-reflection task. These results provide insights for understanding the vulnerability to body image distress among overweight or obese young females.

  17. An enhanced approach for biomedical image restoration using image fusion techniques

    NASA Astrophysics Data System (ADS)

    Karam, Ghada Sabah; Abbas, Fatma Ismail; Abood, Ziad M.; Kadhim, Kadhim K.; Karam, Nada S.

    2018-05-01

    Biomedical images are generally noisy and slightly blurred due to the physical mechanisms of the acquisition process, so common degradations in biomedical images are noise and poor contrast. The idea of biomedical image enhancement is to improve the quality of the image for early diagnosis. In this paper we use wavelet transformation to remove Gaussian noise from biomedical images, a Positron Emission Tomography (PET) image and a radiography (Radio) image, in different color spaces (RGB, HSV, YCbCr), and we perform fusion of the denoised images resulting from the above denoising techniques using an image-addition method. Then quantitative performance metrics such as signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), and mean square error (MSE) are computed, since these statistical measurements help in the assessment of fidelity and image quality. The results showed that our approach can be applied across image types and color spaces for biomedical images.
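
    A compact sketch of the pipeline's core: soft-threshold wavelet denoising of each input, additive (averaging) fusion, and a PSNR check; the wavelet, decomposition level, and threshold are illustrative choices, not the paper's settings:

    ```python
    import numpy as np
    import pywt

    def wavelet_denoise(img, wavelet="db4", level=2, thresh=10.0):
        """Soft-threshold the detail coefficients, keep the approximation."""
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        den = [coeffs[0]] + [tuple(pywt.threshold(d, thresh, mode="soft")
                                   for d in detail) for detail in coeffs[1:]]
        return pywt.waverec2(den, wavelet)

    def fuse_add(img_a, img_b):
        """Simple additive (averaging) fusion of two denoised images."""
        return 0.5 * (img_a + img_b)

    def psnr(reference, test, peak=255.0):
        mse = np.mean((reference - test) ** 2)
        return 10.0 * np.log10(peak ** 2 / mse)
    ```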

  18. The HERSCHEL/PACS early Data Products

    NASA Astrophysics Data System (ADS)

    Wieprecht, E.; Wetzstein, M.; Huygen, R.; Vandenbussche, B.; De Meester, W.

    2006-07-01

    ESA's Herschel Space Observatory, to be launched in 2007, is the first space observatory covering the full far-infrared and submillimeter wavelength range (60-670 microns). The Photodetector Array Camera & Spectrometer (PACS) is one of the three science instruments. It contains two Ge:Ga photoconductor arrays and two bolometer arrays to perform imaging line spectroscopy and imaging photometry in the 60-210 micron wavelength band. The HERSCHEL ground segment (Herschel Common Science System - HCSS) is implemented using JAVA technology and written in a common effort by the HERSCHEL Science Center and the three instrument teams. The PACS Common Software System (PCSS) is based on the HCSS and used for the online and offline analysis of PACS data. For telemetry bandwidth reasons, PACS science data are partially processed on board, compressed, cut into telemetry packets and transmitted to the ground. These steps are instrument-mode dependent. We will present the software model that allows the discrete on-board processing steps to be reversed and the data to be evaluated. After decompression and reconstruction, the detector data and instrument status information are organized in two main PACS Products. The design of these JAVA classes considers the individual sampling rates, data formats, memory and performance optimization aspects, and convenient user interfaces.

  19. Real-time embedded atmospheric compensation for long-range imaging using the average bispectrum speckle method

    NASA Astrophysics Data System (ADS)

    Curt, Petersen F.; Bodnar, Michael R.; Ortiz, Fernando E.; Carrano, Carmen J.; Kelmelis, Eric J.

    2009-02-01

    While imaging over long distances is critical to a number of security and defense applications, such as homeland security and launch tracking, current optical systems are limited in resolving power. This is largely a result of the turbulent atmosphere in the path between the region under observation and the imaging system, which can severely degrade captured imagery. There are a variety of post-processing techniques capable of recovering this obscured image information; however, the computational complexity of such approaches has prohibited real-time deployment and hampers the usability of these technologies in many scenarios. To overcome this limitation, we have designed and manufactured an embedded image processing system based on commodity hardware which can compensate for these atmospheric disturbances in real-time. Our system consists of a reformulation of the average bispectrum speckle method coupled with a high-end FPGA processing board, and employs modular I/O capable of interfacing with most common digital and analog video transport methods (composite, component, VGA, DVI, SDI, HD-SDI, etc.). By leveraging the custom, reconfigurable nature of the FPGA, we have achieved performance twenty times faster than a modern desktop PC, in a form-factor that is compact, low-power, and field-deployable.

  20. 3D Image Analysis of Geomaterials using Confocal Microscopy

    NASA Astrophysics Data System (ADS)

    Mulukutla, G.; Proussevitch, A.; Sahagian, D.

    2009-05-01

    Confocal microscopy is one of the most significant advances in optical microscopy of the last century. It is widely used in the biological sciences, but its application to geomaterials has lagged due to a number of technical problems. The technique can potentially perform non-invasive testing on a laser-illuminated sample that fluoresces, using a unique optical sectioning capability that rejects out-of-focus light reaching the confocal aperture. Fluorescence in geomaterials is commonly induced using epoxy doped with a fluorochrome that is impregnated into the sample to enable discrimination of various features such as void space or material boundaries. However, for many geomaterials this method cannot be used, because they do not naturally fluoresce and because epoxy cannot be impregnated into inaccessible parts of the sample due to lack of permeability. As a result, confocal images of most geomaterials that have not undergone extensive sample preparation are of poor quality and lack the image and edge contrast necessary to apply commonly used segmentation techniques for quantitative study of features such as vesicularity and internal structure. In our present work, we are developing a methodology for quantitative 3D analysis of images of geomaterials collected using a confocal microscope with minimal prior sample preparation and no added fluorescence. Two sample geomaterials, a volcanic melt sample and a crystal chip containing fluid inclusions, are used to assess the feasibility of the method. The step-by-step image analysis includes image filtration to enhance edges or material interfaces and is based on two segmentation techniques: geodesic active contours and region competition. Both techniques have been applied extensively to the analysis of medical MRI images to segment anatomical structures. Preliminary analysis suggests that there is distortion in the shapes of the segmented vesicles, vapor bubbles, and void spaces due to the optical measurements, so corrective actions are being explored. This will establish a practical and reliable framework for an adaptive 3D image processing technique for the analysis of geomaterials using confocal microscopy.
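
    Of the two segmentation techniques named, geodesic active contours are directly available in scikit-image; a sketch on a single pre-filtered confocal slice (the box-shaped initial level set and iteration count are arbitrary, and real use would tune the edge-stopping parameters):

    ```python
    import numpy as np
    from skimage.segmentation import (inverse_gaussian_gradient,
                                      morphological_geodesic_active_contour)

    def segment_slice(slice_2d, n_iter=100):
        """Segment low-contrast features in one confocal slice via geodesic snakes."""
        gimage = inverse_gaussian_gradient(slice_2d)    # edge-stopping function
        init = np.zeros(slice_2d.shape, dtype=np.int8)  # initial level set: a box
        init[10:-10, 10:-10] = 1
        return morphological_geodesic_active_contour(gimage, n_iter,
                                                     init_level_set=init,
                                                     smoothing=1, balloon=-1)
    ```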

  1. Real-time thermal imaging of solid oxide fuel cell cathode activity in working condition.

    PubMed

    Montanini, Roberto; Quattrocchi, Antonino; Piccolo, Sebastiano A; Amato, Alessandra; Trocino, Stefano; Zignani, Sabrina C; Faro, Massimiliano Lo; Squadrito, Gaetano

    2016-09-01

    Electrochemical methods such as voltammetry and electrochemical impedance spectroscopy are effective for quantifying solid oxide fuel cell (SOFC) operational performance, but not for identifying and monitoring the chemical processes that occur on the electrodes' surface, which are thought to be strictly related to the SOFCs' efficiency. Because of their high operating temperature, mechanical failure or cathode delamination is a common shortcoming of SOFCs that severely affects their reliability. Infrared thermography may provide a powerful tool for probing in situ SOFC electrode processes and the materials' structural integrity, but, due to the typical design of pellet-type cells, complete optical access to the electrode surface is usually prevented. In this paper, a specially designed SOFC is introduced, which allows temperature distribution to be measured over the whole cathode area while still preserving the electrochemical performance of the device. Infrared images recorded under different working conditions are then processed by means of a dedicated image processing algorithm for quantitative data analysis. Results reported in the paper highlight the effectiveness of infrared thermal imaging in detecting the onset of cell failure during normal operation and in monitoring cathode activity when the cell is fed with different types of fuels.

  2. Non-stationary noise estimation using dictionary learning and Gaussian mixture models

    NASA Astrophysics Data System (ADS)

    Hughes, James M.; Rockmore, Daniel N.; Wang, Yang

    2014-02-01

    Stationarity of the noise distribution is a common assumption in image processing. This assumption greatly simplifies denoising estimators and other model parameters, and consequently assuming stationarity is often a matter of convenience rather than an accurate model of noise characteristics. The problematic nature of this assumption is exacerbated in real-world contexts, where noise is often highly non-stationary and can possess time- and space-varying characteristics. Regardless of model complexity, estimating the parameters of noise distributions in digital images is a difficult task, and estimates are often based on heuristic assumptions. Recently, sparse Bayesian dictionary learning methods were shown to produce accurate estimates of the level of additive white Gaussian noise in images with minimal assumptions. We show that a similar model is capable of accurately modeling certain kinds of non-stationary noise processes, allowing for space-varying noise in images to be estimated, detected, and removed. We apply this modeling concept to several types of non-stationary noise and demonstrate the model's effectiveness on real-world problems, including denoising and segmentation of images according to noise characteristics, which has applications in image forensics.
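
    A crude baseline for space-varying noise estimation, of the kind such learned models are typically compared against: the local standard deviation of the high-frequency residual in a sliding window (the window size is arbitrary, and this is not the paper's dictionary-learning model):

    ```python
    import numpy as np
    from scipy import ndimage

    def local_noise_map(img, window=16):
        """Rough per-pixel noise sigma from the residual after local smoothing."""
        residual = img - ndimage.uniform_filter(img, size=window)
        # Local variance of the residual approximates the local noise variance
        var = ndimage.uniform_filter(residual ** 2, size=window)
        return np.sqrt(np.maximum(var, 0.0))
    ```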

  3. Cloud Engineering Principles and Technology Enablers for Medical Image Processing-as-a-Service.

    PubMed

    Bao, Shunxing; Plassard, Andrew J; Landman, Bennett A; Gokhale, Aniruddha

    2017-04-01

    Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches since they can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. To that end, a cloud-based "medical image processing-as-a-service" offers promise in utilizing the ecosystem of Apache Hadoop, which is a flexible framework providing distributed, scalable, fault-tolerant storage and parallel computational modules, and HBase, which is a NoSQL database built atop Hadoop's distributed file system. Despite this promise, HBase's load distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). This paper makes two contributions to address these concerns by describing key cloud engineering principles and technology enhancements we made to the Apache Hadoop ecosystem for medical imaging applications. First, we propose a row-key design for HBase, which is a necessary step that is driven by the hierarchical organization of imaging data. Second, we propose a novel data allocation policy within HBase to strongly enforce collocation of hierarchically related imaging data. The proposed enhancements accelerate data processing by minimizing network usage and localizing processing to machines where the data already exist. Moreover, our approach is amenable to the traditional scan, subject, and project-level analysis procedures, and is compatible with standard command line/scriptable image processing software. Experimental results for an illustrative sample of imaging data reveal that our new HBase policy results in a three-fold time improvement in conversion of classic DICOM to NiFTI file formats when compared with the default HBase region split policy, and nearly a six-fold improvement over a commonly available network file system (NFS) approach even for relatively small file sets. Moreover, file access latency is lower than network attached storage.
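
    The essence of such a row-key design is that the lexicographic order of keys must follow the imaging hierarchy so that related rows land in the same region; a hypothetical sketch of such a key (the field names, separator, and padding widths are assumptions, not the paper's exact format):

    ```python
    def imaging_row_key(project, subject, session, scan, slice_idx):
        """Row key whose lexicographic order mirrors the imaging hierarchy.

        Zero-padding the numeric field keeps ordering consistent (slice 2
        before slice 10), so all slices of a scan, all scans of a session,
        and so on stay adjacent in HBase's sorted key space and collocate.
        """
        return f"{project}:{subject}:{session}:{scan}:{slice_idx:05d}"

    # e.g. 'projA:subj000042:sess01:scan2:00017'
    print(imaging_row_key("projA", "subj000042", "sess01", "scan2", 17))
    ```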

  4. Multispectral imaging approach for simplified non-invasive in-vivo evaluation of gingival erythema

    NASA Astrophysics Data System (ADS)

    Eckhard, Timo; Valero, Eva M.; Nieves, Juan L.; Gallegos-Rueda, José M.; Mesa, Francisco

    2012-03-01

    Erythema is a common visual sign of gingivitis. In this work, a new and simple low-cost image capture and analysis method for erythema assessment is proposed. The method is based on digital still images of gingivae and applied on a pixel-by-pixel basis. Multispectral images are acquired with a conventional digital camera and multiplexed LED illumination panels at 460 nm and 630 nm peak wavelength. An automatic work-flow segments teeth from gingiva regions in the images and creates a map of local blood oxygenation levels, which relates to the presence of erythema. The map is computed from the ratio of the two spectral images. An advantage of the proposed approach is that the whole process is easy to manage by dental health care professionals in a clinical environment.
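
    The map itself reduces to a guarded pixel-wise ratio of the two spectral bands over the segmented gingiva region; a minimal sketch (which band sits in the numerator is an assumption here, not stated by the abstract):

    ```python
    import numpy as np

    def erythema_map(img_630, img_460, gingiva_mask):
        """Pixel-wise ratio of the 630 nm to 460 nm bands over the gingiva only."""
        ratio = img_630 / np.maximum(img_460, 1e-6)   # avoid division by zero
        return np.where(gingiva_mask, ratio, np.nan)  # NaN outside the gingiva
    ```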

  5. Quantitative imaging of fibrotic and morphological changes in liver of non-alcoholic steatohepatitis (NASH) model mice by second harmonic generation (SHG) and auto-fluorescence (AF) imaging using two-photon excitation microscopy (TPEM).

    PubMed

    Yamamoto, Shin; Oshima, Yusuke; Saitou, Takashi; Watanabe, Takao; Miyake, Teruki; Yoshida, Osamu; Tokumoto, Yoshio; Abe, Masanori; Matsuura, Bunzo; Hiasa, Yoichi; Imamura, Takeshi

    2016-12-01

    Non-alcoholic steatohepatitis (NASH) is a common liver disorder caused by fatty liver. Because NASH is associated with fibrotic and morphological changes in liver tissue, a direct imaging technique is required for accurate staging of liver tissue. For this purpose, in this study we took advantage of two label-free optical imaging techniques, second harmonic generation (SHG) and auto-fluorescence (AF), using two-photon excitation microscopy (TPEM). Three-dimensional ex vivo imaging of tissues from NASH model mice, followed by image processing, revealed that SHG and AF are sufficient to quantitatively characterize the hepatic capsule at an early stage and parenchymal morphologies associated with liver disease progression, respectively.

  6. Wave intensity analysis in mice: an ultrasound-based study in the abdominal aorta and common carotid artery

    NASA Astrophysics Data System (ADS)

    Di Lascio, N.; Kusmic, C.; Stea, F.; Faita, F.

    2017-03-01

    Wave Intensity Analysis (WIA) can provide parameters representative of the interaction between the vascular network and the heart. It has already been demonstrated that WIA-derived biomarkers have a quantitative physiological meaning. The aim of this study was to develop an image processing algorithm for performing non-invasive WIA in mice and to correlate commonly used cardiac function parameters with WIA-derived indexes. Sixteen wild-type male mice (8 weeks old) were imaged with high-resolution ultrasound (Vevo 2100). Abdominal aorta and common carotid pulse wave velocities (PWVabd, PWVcar) were obtained by processing B-Mode and PW-Doppler images and employed to assess WIA. Amplitudes of the first (W1abd, W1car) and second (W2abd, W2car) local maxima and of the minimum (Wbabd, Wbcar) were evaluated; areas under the negative part of the curve were also calculated (NAabd, NAcar). Cardiac output (CO), ejection fraction (EF), fractional shortening (FS) and stroke volume (SV) were estimated; strain analysis provided strain and strain rate values for the longitudinal, radial and circumferential directions (LS, LSR, RS, RSR, CS, CSR). Isovolumetric relaxation time (IVRT) was calculated from mitral inflow PW-Doppler images; IVRT values were normalized to cardiac cycle length. W1abd was correlated with LS (R=0.65) and LSR (R=0.59), while W1car was correlated with CO (R=0.58), EF (R=0.72), LS (R=0.65), LSR (R=0.89), CS (R=0.71), CSR (R=0.70). Both W2abd and W2car were not correlated with IVRT. Carotid artery WIA-derived parameters are more representative of cardiac function than those obtained from the abdominal aorta. The described US-based method can provide information about cardiac function and cardio-vascular interaction simply by studying a single vascular site.
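
    Net wave intensity is conventionally the product of the pressure and velocity time-derivatives, from which peak and negative-area indices like those above follow; a sketch on sampled waveforms (the derivative-product definition is standard WIA, but the peak bookkeeping here is simplified: W1 and W2 are properly the first and second local maxima, not the global one):

    ```python
    import numpy as np

    def wave_intensity(pressure, velocity, dt):
        """dI = (dP/dt)(dU/dt) per sample, plus simple WIA summary indices."""
        di = np.gradient(pressure, dt) * np.gradient(velocity, dt)
        w1 = di.max()                       # dominant forward-wave peak (simplified)
        na = -di[di < 0].sum() * dt         # area under the negative part
        return di, w1, na
    ```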

  7. Platform for Postprocessing Waveform-Based NDE

    NASA Technical Reports Server (NTRS)

    Roth, Don

    2008-01-01

    Taking advantage of the similarities that exist among all waveform-based non-destructive evaluation (NDE) methods, a common software platform has been developed containing multiple signal- and image-processing techniques for waveforms and images. The NASA NDE Signal and Image Processing software has been developed using the latest versions of LabVIEW and its associated Advanced Signal Processing and Vision Toolkits. The software is usable on a PC with Windows XP and Windows Vista. The software has been designed with a commercial-grade interface in which two main windows, the Waveform Window and the Image Window, are displayed if the user chooses a waveform file to display. Within these two main windows, most actions are chosen through logically conceived run-time menus. The Waveform Window has plots for both the raw time-domain waves and their frequency-domain transformations (fast Fourier transform and power spectral density). The Image Window shows the C-scan image formed from information of the time-domain waveform (such as peak amplitude) or its frequency-domain transformation at each scan location. The user also has the ability to open an image, or series of images, or a simple set of X-Y paired data in text format. Each of the Waveform and Image Windows contains menus from which to perform many user actions. An option exists to use raw waves obtained directly from the scan, or waves after deconvolution if the system wave response is provided. Two types of deconvolution, time-based subtraction or inverse filtering, can be performed to arrive at a deconvolved wave set. Additionally, the menu on the Waveform Window allows preprocessing of waveforms prior to image formation, scaling and display of waveforms, formation of different types of images (including non-standard types such as velocity), gating of portions of waves prior to image formation, and several other miscellaneous and specialized operations. The menu available on the Image Window allows many further image processing and analysis operations, some of which are found in commercially available image-processing software programs (such as Adobe Photoshop), and some that are not (removing outliers, B-scan information, region-of-interest analysis, line profiles, and precision feature measurements).
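
    The most basic image-formation path described, taking the peak amplitude of the time-domain wave at each scan location, is essentially one line of array code; a sketch assuming the scan has already been assembled into a 3-D array, with a second function for the gating option the menu offers:

    ```python
    import numpy as np

    def cscan_peak_amplitude(waveforms):
        """C-scan image from a (rows, cols, time) block of A-scan waveforms."""
        return np.abs(waveforms).max(axis=-1)   # peak amplitude per scan point

    def cscan_gated(waveforms, t0, t1):
        """Same image, but formed only from a gated portion of each waveform."""
        return np.abs(waveforms[..., t0:t1]).max(axis=-1)
    ```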

  8. Acceleration of Image Segmentation Algorithm for (Breast) Mammogram Images Using High-Performance Reconfigurable Dataflow Computers

    PubMed Central

    Filipovic, Nenad D.

    2017-01-01

    Image segmentation is one of the most common procedures in medical imaging applications. It is also a very important task in breast cancer detection. The breast cancer detection procedure based on mammography can be divided into several stages. The first stage is the extraction of the region of interest from a breast image, followed by the identification of suspicious mass regions, their classification, and comparison with the existing image database. It is often the case that existing image databases hold large sets of data whose processing requires a lot of time, and thus the acceleration of each of the processing stages in breast cancer detection is a very important issue. In this paper, the implementation of an already existing algorithm for region-of-interest based image segmentation of mammogram images on High-Performance Reconfigurable Dataflow Computers (HPRDCs) is proposed. As the dataflow engine (DFE) of such an HPRDC, Maxeler's acceleration card is used. The experiments examining the acceleration of that algorithm on Reconfigurable Dataflow Computers (RDCs) were performed with two types of mammogram images of different resolutions. There were also several DFE configurations, and each of them gave a different acceleration of algorithm execution. Those acceleration values are presented, and the experimental results showed good acceleration. PMID:28611851
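
    The first stage, extracting the breast region of interest, is commonly approximated by thresholding plus largest-component selection; a sketch of that software baseline (not the paper's exact algorithm, which targets dataflow hardware):

    ```python
    import numpy as np
    from skimage import filters, measure

    def breast_roi(mammogram):
        """Binary mask of the breast region: Otsu threshold, keep largest blob."""
        mask = mammogram > filters.threshold_otsu(mammogram)
        labels = measure.label(mask)
        if labels.max() == 0:
            return mask                                  # nothing segmented
        largest = np.argmax(np.bincount(labels.ravel())[1:]) + 1
        return labels == largest
    ```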

  9. Acceleration of Image Segmentation Algorithm for (Breast) Mammogram Images Using High-Performance Reconfigurable Dataflow Computers.

    PubMed

    Milankovic, Ivan L; Mijailovic, Nikola V; Filipovic, Nenad D; Peulic, Aleksandar S

    2017-01-01

    Image segmentation is one of the most common procedures in medical imaging applications. It is also a very important task in breast cancer detection. The breast cancer detection procedure based on mammography can be divided into several stages. The first stage is the extraction of the region of interest from a breast image, followed by the identification of suspicious mass regions, their classification, and comparison with the existing image database. It is often the case that existing image databases hold large sets of data whose processing requires a lot of time, and thus the acceleration of each of the processing stages in breast cancer detection is a very important issue. In this paper, the implementation of an already existing algorithm for region-of-interest based image segmentation of mammogram images on High-Performance Reconfigurable Dataflow Computers (HPRDCs) is proposed. As the dataflow engine (DFE) of such an HPRDC, Maxeler's acceleration card is used. The experiments examining the acceleration of that algorithm on Reconfigurable Dataflow Computers (RDCs) were performed with two types of mammogram images of different resolutions. There were also several DFE configurations, and each of them gave a different acceleration of algorithm execution. Those acceleration values are presented, and the experimental results showed good acceleration.

  10. A Spaceborne Synthetic Aperture Radar Partial Fixed-Point Imaging System Using a Field-Programmable Gate Array—Application-Specific Integrated Circuit Hybrid Heterogeneous Parallel Acceleration Technique

    PubMed Central

    Li, Bingyi; Chen, Liang; Wei, Chunpeng; Xie, Yizhuang; Chen, He; Yu, Wenyue

    2017-01-01

    With the development of satellite load technology and very large scale integrated (VLSI) circuit technology, onboard real-time synthetic aperture radar (SAR) imaging systems have become a solution for allowing rapid response to disasters. A key goal of the onboard SAR imaging system design is to achieve high real-time processing performance with severe size, weight, and power consumption constraints. In this paper, we analyse the computational burden of the commonly used chirp scaling (CS) SAR imaging algorithm. To reduce the system hardware cost, we propose a partial fixed-point processing scheme. The fast Fourier transform (FFT), which is the most computation-sensitive operation in the CS algorithm, is processed with fixed-point, while other operations are processed with single precision floating-point. With the proposed fixed-point processing error propagation model, the fixed-point processing word length is determined. The fidelity and accuracy relative to conventional ground-based software processors is verified by evaluating both the point target imaging quality and the actual scene imaging quality. As a proof of concept, a field-programmable gate array—application-specific integrated circuit (FPGA-ASIC) hybrid heterogeneous parallel accelerating architecture is designed and realized. The customized fixed-point FFT is implemented using the 130 nm complementary metal oxide semiconductor (CMOS) technology as a co-processor of the Xilinx xc6vlx760t FPGA. A single processing board requires 12 s and consumes 21 W to focus a 50-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384. PMID:28672813
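
    The word-length trade the paper analyses can be illustrated by quantizing the FFT input to a fixed-point grid and measuring the resulting error against the floating-point reference; a toy sketch with Q15-style quantization (the paper's actual error propagation model is considerably more involved):

    ```python
    import numpy as np

    def quantize_q(x, frac_bits=15):
        """Round to a signed fixed-point grid with the given fractional bits."""
        scale = 2.0 ** frac_bits
        return np.round(x * scale) / scale

    rng = np.random.default_rng(0)
    signal = rng.standard_normal(16384) * 0.1      # keep samples within [-1, 1)
    ref = np.fft.fft(signal)                       # floating-point reference
    fxp = np.fft.fft(quantize_q(signal))           # fixed-point-quantized input
    snr_db = 10 * np.log10((np.abs(ref) ** 2).sum()
                           / (np.abs(ref - fxp) ** 2).sum())
    print(f"quantization SNR ~ {snr_db:.1f} dB")
    ```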

  11. A Spaceborne Synthetic Aperture Radar Partial Fixed-Point Imaging System Using a Field-Programmable Gate Array-Application-Specific Integrated Circuit Hybrid Heterogeneous Parallel Acceleration Technique.

    PubMed

    Yang, Chen; Li, Bingyi; Chen, Liang; Wei, Chunpeng; Xie, Yizhuang; Chen, He; Yu, Wenyue

    2017-06-24

    With the development of satellite load technology and very large scale integrated (VLSI) circuit technology, onboard real-time synthetic aperture radar (SAR) imaging systems have become a solution for allowing rapid response to disasters. A key goal of the onboard SAR imaging system design is to achieve high real-time processing performance with severe size, weight, and power consumption constraints. In this paper, we analyse the computational burden of the commonly used chirp scaling (CS) SAR imaging algorithm. To reduce the system hardware cost, we propose a partial fixed-point processing scheme. The fast Fourier transform (FFT), which is the most computation-sensitive operation in the CS algorithm, is processed with fixed-point, while other operations are processed with single precision floating-point. With the proposed fixed-point processing error propagation model, the fixed-point processing word length is determined. The fidelity and accuracy relative to conventional ground-based software processors is verified by evaluating both the point target imaging quality and the actual scene imaging quality. As a proof of concept, a field-programmable gate array-application-specific integrated circuit (FPGA-ASIC) hybrid heterogeneous parallel accelerating architecture is designed and realized. The customized fixed-point FFT is implemented using the 130 nm complementary metal oxide semiconductor (CMOS) technology as a co-processor of the Xilinx xc6vlx760t FPGA. A single processing board requires 12 s and consumes 21 W to focus a 50-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384.

  12. Detecting tympanostomy tubes from otoscopic images via offline and online training.

    PubMed

    Wang, Xin; Valdez, Tulio A; Bi, Jinbo

    2015-06-01

    Tympanostomy tube placement is now commonly used as a surgical treatment for otitis media. Following the placement, regularly scheduled follow-ups to check the status of the tympanostomy tubes are important during the treatment. The complexity of the follow-up care lies mainly in identifying the presence and patency of the tympanostomy tube. An automated tube detection program would largely reduce care costs and enhance the clinical efficiency of ear, nose, and throat specialists and general practitioners. In this paper, we develop a computer vision system that is able to automatically detect a tympanostomy tube in an otoscopic image of the ear drum. The system comprises an offline classifier training process followed by a real-time refinement stage performed at the point of care. The offline training process constructs a three-layer cascaded classifier, with each layer reflecting specific characteristics of the tube. The real-time refinement process enables the end users to interact with and adjust the system over time based on their otoscopic images and patient care. The support vector machine (SVM) algorithm has been applied to train all of the classifiers. Empirical evaluation of the proposed system on both high-quality hospital images and low-quality internet images demonstrates the effectiveness of the system. The offline classifier trained using 215 images achieved 90% accuracy in classifying otoscopic images with and without a tympanostomy tube, and the real-time refinement process then improved the classification accuracy by 3-5% based on an additional 20 images. Copyright © 2015 Elsevier Ltd. All rights reserved.
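
    A minimal scikit-learn sketch of the offline-train/point-of-care-refine pattern described above; the feature vectors, their dimension, and the pooled retraining step are illustrative assumptions, not the paper's actual image descriptors or three-layer cascade:

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Hypothetical setup: X holds one feature vector per otoscopic image
        # (e.g., color histograms or shape descriptors); y marks tube presence.
        rng = np.random.default_rng(0)
        X, y = rng.standard_normal((215, 64)), rng.integers(0, 2, 215)

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
        clf.fit(X, y)                                # offline training stage

        # Point-of-care refinement, sketched here as retraining on the pooled
        # data; the paper's actual interactive update scheme may differ.
        X_new, y_new = rng.standard_normal((20, 64)), rng.integers(0, 2, 20)
        clf.fit(np.vstack([X, X_new]), np.concatenate([y, y_new]))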

  13. Extracting paleo-climate signals from sediment laminae: A new, automated image processing method

    NASA Astrophysics Data System (ADS)

    Gan, S. Q.; Scholz, C. A.

    2010-12-01

    Lake sediment laminations commonly represent depositional seasonality in lacustrine environments. Their occurrence and quantitative attributes contain various signals of their depositional environment, limnological conditions and climate. However, the identification and measurement of laminae remains a mainly manual process that is not only tedious and labor intensive, but also subjective and error prone. We present a batch method to identify laminae and extract lamina properties automatically and accurately from sediment core images. Our algorithm is focused on image enhancement that improves the signal-to-noise ratio and maximizes and normalizes image contrast. The unique feature of these algorithms is that they are all direction-sensitive, i.e., the algorithms treat images in the horizontal direction and vertical direction differently and independently. The core process of lamina identification is to use a one-dimensional (1-D) lamina identification algorithm to produce a lamina map, and to use image blob analyses and lamina connectivity analyses to aggregate and smash two-dimensional (2-D) lamina data for the best representation of fine-scale stratigraphy in the sediment profile. The primary output datasets of the system are definitions of laminae and primary color values for each pixel and each lamina in the depth direction; other derived datasets can be retrieved at users’ discretion. Sediment core images from Lake Hitchcock, USA and Lake Bosumtwi, Ghana, were used for algorithm development and testing. As a demonstration of the utility of the software, we processed sediment core images from the top 50 meters of drill core (representing the past ~100 ky) from Lake Bosumtwi, Ghana.
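
    For orientation, a minimal Python sketch of the 1-D lamina identification idea: average across the core width to get a depth profile, then mark light laminae as peaks. The normalization and the prominence parameter are illustrative choices, not the authors' calibrated pipeline:

        import numpy as np
        from scipy.signal import find_peaks

        def lamina_boundaries(core_img, prominence=0.05):
            """Locate light laminae along the depth axis of a core image.

            core_img: 2-D array with rows as depth. Averaging across the
            core width suppresses noise; peaks in the normalized mean
            profile mark light laminae.
            """
            profile = core_img.mean(axis=1)                       # 1-D depth profile
            profile = (profile - profile.min()) / max(np.ptp(profile), 1e-12)
            peaks, _ = find_peaks(profile, prominence=prominence)
            return peaks                                          # row indices of laminae

        # lamina thicknesses (in pixels) follow from consecutive spacings:
        # np.diff(lamina_boundaries(img))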

  14. Fundamentals of nuclear medicine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alazraki, N.P.; Mishkin, F.S.

    1988-01-01

    The book begins with basic science and statistics relevant to nuclear medicine, and specific organ systems are addressed in separate chapters. A section of the text also covers imaging of groups of disease processes (e.g., trauma, cancer). The authors present a comparison between nuclear medicine techniques and other diagnostic imaging studies. A table is given which comments on sensitivities and specificities of common nuclear medicine studies. The sensitivities and specificities are categorized as very high, high, moderate, and so forth.

  15. FBI Fingerprint Image Capture System High-Speed-Front-End throughput modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rathke, P.M.

    1993-09-01

    The Federal Bureau of Investigation (FBI) has undertaken a major modernization effort called the Integrated Automated Fingerprint Identification System (IAFIS). This system will provide centralized identification services using automated fingerprint, subject descriptor, mugshot, and document processing. A high-speed Fingerprint Image Capture System (FICS) is under development as part of the IAFIS program. The FICS will capture digital and microfilm images of FBI fingerprint cards for input into a central database. One FICS design supports two front-end scanning subsystems, known as the High-Speed-Front-End (HSFE) and Low-Speed-Front-End, to supply image data to a common data processing subsystem. The production rate of the HSFE is critical to meeting the FBI's fingerprint card processing schedule. A model of the HSFE has been developed to help identify the issues driving the production rate, assist in the development of component specifications, and guide the evolution of an operations plan. A description of the model development is given, the assumptions are presented, and some HSFE throughput analysis is performed.

  16. Practical Implementation of Prestack Kirchhoff Time Migration on a General Purpose Graphics Processing Unit

    NASA Astrophysics Data System (ADS)

    Liu, Guofeng; Li, Chun

    2016-08-01

    In this study, we present a practical implementation of prestack Kirchhoff time migration (PSTM) on a general purpose graphics processing unit. First, we consider the three main optimizations of the PSTM GPU code, i.e., designing a configuration based on a reasonable execution, using the texture memory for velocity interpolation, and the application of an intrinsic function in device code. This approach can achieve a speedup of nearly 45 times on an NVIDIA GTX 680 GPU compared with CPU code when a larger imaging space is used, where the PSTM output is a common reflection point gather stored as I[nx][ny][nh][nt] in matrix format. However, this method requires more memory space, so the limited imaging space cannot fully exploit the GPU resources. To overcome this problem, we designed a PSTM scheme with multiple GPUs that images different seismic data on different GPUs according to an offset value. This achieves the peak speedup of the GPU PSTM code and greatly increases the efficiency of the calculations without changing the imaging result.
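
    For readers unfamiliar with the kernel being accelerated, here is a toy zero-offset Kirchhoff summation in Python showing the diffraction-hyperbola travel-time computation that the GPU code parallelizes; constant velocity, no anti-aliasing filter or amplitude weighting, and all names are illustrative:

        import numpy as np

        def kirchhoff_migrate_point(traces, xs, dt, xi, ti, v):
            """Toy zero-offset Kirchhoff time migration for one image point.

            traces: (ntrace, nt) array; xs: trace surface positions (m);
            xi, ti: image-point position (m) and vertical two-way time (s);
            v: constant migration velocity (m/s). A teaching sketch only --
            real PSTM adds offset terms, anti-aliasing, and amplitude weights.
            """
            z = v * ti / 2.0                          # depth of the image point
            image = 0.0
            for trace, x in zip(traces, xs):
                r = np.hypot(x - xi, z)               # distance to the image point
                t = 2.0 * r / v                       # diffraction travel time
                it = int(round(t / dt))               # nearest time sample
                if it < trace.size:
                    image += trace[it]                # sum along the hyperbola
            return image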

  17. Matrix phased array (MPA) imaging technology for resistance spot welds

    NASA Astrophysics Data System (ADS)

    Na, Jeong K.; Gleeson, Sean T.

    2014-02-01

    A three-dimensional MPA probe has been incorporated with a high-speed phased array electronic board to visualize nugget images of resistance spot welds. The primary application area of this battery-operated portable MPA ultrasonic imaging system is the automotive industry, in which a conventional destructive testing process is commonly adopted to check the quality of resistance spot welds in auto bodies. Considering an average of five thousand spot welds in a medium-size passenger vehicle, the amount of time and effort given to popping the welds and measuring nugget size is immeasurable, in addition to the millions of dollars' worth of scrap metal recycled per plant per year. This wasteful, labor-intensive destructive testing process has become less reliable as auto body sheet metal has transitioned from thick and heavy mild steels to thin and light high-strength steels. Consequently, developing a non-destructive inspection methodology has become inevitable. In this paper, the fundamental aspects of the current 3-D probe design, data acquisition algorithms, and weld nugget imaging process are discussed.

  18. Matrix phased array (MPA) imaging technology for resistance spot welds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Na, Jeong K.; Gleeson, Sean T.

    2014-02-18

    A three-dimensional MPA probe has been incorporated with a high-speed phased array electronic board to visualize nugget images of resistance spot welds. The primary application area of this battery-operated portable MPA ultrasonic imaging system is the automotive industry, in which a conventional destructive testing process is commonly adopted to check the quality of resistance spot welds in auto bodies. Considering an average of five thousand spot welds in a medium-size passenger vehicle, the amount of time and effort given to popping the welds and measuring nugget size is immeasurable, in addition to the millions of dollars' worth of scrap metal recycled per plant per year. This wasteful, labor-intensive destructive testing process has become less reliable as auto body sheet metal has transitioned from thick and heavy mild steels to thin and light high-strength steels. Consequently, developing a non-destructive inspection methodology has become inevitable. In this paper, the fundamental aspects of the current 3-D probe design, data acquisition algorithms, and weld nugget imaging process are discussed.

  19. Toward a functional neuroanatomy of dysthymia: a functional magnetic resonance imaging study.

    PubMed

    Ravindran, Arun V; Smith, Andra; Cameron, Colin; Bhatla, Raj; Cameron, Ian; Georgescu, Tania M; Hogan, Matthew J

    2009-12-01

    Dysthymia is a common mood disorder. Recent studies have confirmed the neurobiological and treatment response overlap of dysthymia with major depression. There are no previous published studies of functional magnetic resonance imaging (fMRI) in dysthymia. fMRI was used to compare the neural processing of 17 unmedicated dysthymic patients with that of 17 age-, sex-, and education-matched control subjects in a mood induction paradigm using the International Affective Picture System (IAPS). Using a random effects analysis to compare the groups, the results revealed that the dysthymic patients had significantly reduced activation in the dorsolateral prefrontal cortex compared to controls. The dysthymic patients exhibited increased activation in the amygdala, anterior cingulate, and insula compared to controls, and these differences were more evident when processing negative than positive images. This study included both early and late subtypes of dysthymia, and participants were imaged at only one time point, which may limit the generalizability of the results. The findings suggest the involvement of the prefrontal cortex, anterior cingulate, amygdala, and insula in the neural circuitry underlying dysthymia. It is suggested that altered activation in some of these neural regions may be a common substrate for depressive disorders in general, while others may relate specifically to symptom characteristics and the chronic course of dysthymia. These findings are particularly striking given the history of this deceptively mild disorder, which is still confused by some with character pathology.

  20. Imaging windows for long-term intravital imaging

    PubMed Central

    Alieva, Maria; Ritsma, Laila; Giedt, Randy J; Weissleder, Ralph; van Rheenen, Jacco

    2014-01-01

    Intravital microscopy is increasingly used to visualize and quantitate dynamic biological processes at the (sub)cellular level in live animals. By visualizing tissues through imaging windows, individual cells (e.g., cancer, host, or stem cells) can be tracked and studied over a time-span of days to months. Several imaging windows have been developed to access tissues including the brain, superficial fascia, mammary glands, liver, kidney, pancreas, and small intestine among others. Here, we review the development of imaging windows and compare the most commonly used long-term imaging windows for cancer biology: the cranial imaging window, the dorsal skin fold chamber, the mammary imaging window, and the abdominal imaging window. Moreover, we provide technical details, considerations, and trouble-shooting tips on the surgical procedures and microscopy setups for each imaging window and explain different strategies to assure imaging of the same area over multiple imaging sessions. This review aims to be a useful resource for establishing the long-term intravital imaging procedure. PMID:28243510

  1. Imaging windows for long-term intravital imaging: General overview and technical insights.

    PubMed

    Alieva, Maria; Ritsma, Laila; Giedt, Randy J; Weissleder, Ralph; van Rheenen, Jacco

    2014-01-01

    Intravital microscopy is increasingly used to visualize and quantitate dynamic biological processes at the (sub)cellular level in live animals. By visualizing tissues through imaging windows, individual cells (e.g., cancer, host, or stem cells) can be tracked and studied over a time-span of days to months. Several imaging windows have been developed to access tissues including the brain, superficial fascia, mammary glands, liver, kidney, pancreas, and small intestine among others. Here, we review the development of imaging windows and compare the most commonly used long-term imaging windows for cancer biology: the cranial imaging window, the dorsal skin fold chamber, the mammary imaging window, and the abdominal imaging window. Moreover, we provide technical details, considerations, and trouble-shooting tips on the surgical procedures and microscopy setups for each imaging window and explain different strategies to assure imaging of the same area over multiple imaging sessions. This review aims to be a useful resource for establishing the long-term intravital imaging procedure.

   2. Evaluating some computer enhancement algorithms that improve the visibility of cometary morphology

    NASA Technical Reports Server (NTRS)

    Larson, Stephen M.; Slaughter, Charles D.

    1992-01-01

    Digital enhancement of cometary images is a necessary tool in studying cometary morphology. Many image processing algorithms, some developed specifically for comets, have been used to enhance the subtle, low contrast coma and tail features. We compare some of the most commonly used algorithms on two different images to evaluate their strong and weak points, and conclude that there currently exists no single 'ideal' algorithm, although the radial gradient spatial filter gives the best overall result. This comparison should aid users in selecting the best algorithm to enhance particular features of interest.
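
    As a rough illustration of this family of enhancements, the Python sketch below subtracts the median radial brightness profile about the comet center, suppressing the symmetric coma so that azimuthal structure (jets, tail streamers) stands out. This is an approximation in the spirit of radial-gradient enhancement, not the exact filter evaluated in the paper:

        import numpy as np

        def remove_radial_profile(img, center):
            """Suppress the smooth, radially symmetric coma.

            Subtracts the median brightness at each integer radius from the
            comet center; the residual emphasizes low-contrast azimuthal
            features. center: (row, col) of the nucleus.
            """
            yy, xx = np.indices(img.shape)
            r = np.hypot(yy - center[0], xx - center[1]).astype(int)
            profile = np.array([np.median(img[r == k]) if np.any(r == k) else 0.0
                                for k in range(r.max() + 1)])
            return img - profile[r]               # residual shows jets and tails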

  3. Luminescent sensing and imaging of oxygen: fierce competition to the Clark electrode.

    PubMed

    Wolfbeis, Otto S

    2015-08-01

    Luminescence-based sensing schemes for oxygen have experienced a fast growth and are in the process of replacing the Clark electrode in many fields. Unlike electrodes, sensing is not limited to point measurements via fiber optic microsensors, but includes additional features such as planar sensing, imaging, and intracellular assays using nanosized sensor particles. In this essay, I review and discuss the essentials of (i) common solid-state sensor approaches based on the use of luminescent indicator dyes and host polymers; (ii) fiber optic and planar sensing schemes; (iii) nanoparticle-based intracellular sensing; and (iv) common spectroscopies. Optical sensors are also capable of multiple simultaneous sensing (such as O2 and temperature). Sensors for O2 are produced nowadays in large quantities in industry. Fields of application include sensing of O2 in plant and animal physiology, in clinical chemistry, in marine sciences, in the chemical industry and in process biotechnology. © 2015 The Author. Bioessays published by WILEY Periodicals, Inc.

  4. In flight image processing on multi-rotor aircraft for autonomous landing

    NASA Astrophysics Data System (ADS)

    Henry, Richard, Jr.

    An estimated $6.4 billion was spent during 2013 on developing drone technology around the world, and that figure is expected to double in the next decade. However, drone applications typically require strong pilot skills, safety awareness, responsibility, and adherence to regulations during flight. If the flight control process could be made safer and more reliable in terms of landing, it would be possible to develop a wider range of applications. The objective of this research effort is to describe the design and evaluation of a fully autonomous Unmanned Aerial System (UAS), specifically a four-rotor aircraft commonly known as a quadcopter, for precise landing applications. Full landing autonomy is achieved through in-flight image processing for target recognition, employing the open source library OpenCV. In addition, all imaging data are processed by a single embedded computer that estimates a relative position with respect to the target landing pad. Results show a reduction in the average offset error of 67.88% in comparison to the current return-to-launch (RTL) method, which relies only on GPS positioning. The present work validates the need for image processing in precise landing applications, rather than relying on the inexact positioning of a low-cost commercial GPS.
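
    A minimal OpenCV sketch of the kind of target detection step such a system might use: find a high-contrast landing marker and return its pixel offset from the image center, which would drive the lateral correction. The marker design and thresholds are assumptions for illustration, not the pipeline from this work:

        import cv2

        def find_landing_pad(frame_bgr):
            """Locate a dark, high-contrast landing marker and return its
            pixel offset from the image center (assumed marker design)."""
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            _, binary = cv2.threshold(gray, 0, 255,
                                      cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
            contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            if not contours:
                return None
            c = max(contours, key=cv2.contourArea)   # assume pad = biggest dark blob
            M = cv2.moments(c)
            if M["m00"] == 0:
                return None
            cx, cy = M["m10"] / M["m00"], M["m01"] / M["m00"]
            h, w = gray.shape
            return cx - w / 2.0, cy - h / 2.0        # offset for lateral correction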

  5. Detection of motile micro-organisms in biological samples by means of a fully automated image processing system

    NASA Astrophysics Data System (ADS)

    Alanis, Elvio; Romero, Graciela; Alvarez, Liliana; Martinez, Carlos C.; Hoyos, Daniel; Basombrio, Miguel A.

    2001-08-01

    A fully automated image processing system for the detection of motile microorganisms in biological samples is presented. The system is specifically calibrated for determining the concentration of Trypanosoma cruzi parasites in blood samples of mice infected with Chagas disease. The method can be adapted for use in other biological samples. A thin layer of blood infected by T. cruzi parasites is examined in a common microscope, in which images of the field of view are taken by a CCD camera and temporarily stored in the computer memory. In a typical field, a few motile parasites are observable, surrounded by red blood cells. The parasites have low contrast and are thus difficult to detect visually, but their great motility betrays their presence through the movement of the nearest-neighbor red cells. Several consecutive images of the same field are taken, decorrelated with each other where parasites are present, and digitally processed in order to measure the number of parasites present in the field. Several fields are sequentially processed in the same fashion, displacing the sample by means of step motors driven by the computer. A direct advantage of this system is that its results are more reliable and the process is less time consuming than the current subjective visual evaluations made by technicians.
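
    A minimal Python sketch of the underlying idea -- pixels that vary strongly across consecutive frames of one field betray nearby motile parasites, while the static background stays quiet. The threshold convention is an illustrative choice, not the system's calibrated setting:

        import numpy as np

        def motion_map(frames, thresh=3.0):
            """Flag pixels whose brightness varies strongly over time.

            frames: list of 2-D arrays of the same microscope field.
            The threshold is expressed in units of the global noise level.
            """
            stack = np.stack(frames).astype(float)   # (nframes, H, W)
            variation = stack.std(axis=0)            # temporal standard deviation
            noise = np.median(variation)             # robust background noise level
            return variation > thresh * noise        # boolean motion mask

        # counting: label the mask (e.g., scipy.ndimage.label) and count
        # blobs per field as a parasite-activity estimate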

  6. Simultaneously Discovering and Localizing Common Objects in Wild Images.

    PubMed

    Wang, Zhenzhen; Yuan, Junsong

    2018-09-01

    Motivated by the recent success of supervised and weakly supervised common object discovery, in this paper we move one step further to tackle common object discovery in a fully unsupervised way. Generally, object co-localization aims at simultaneously localizing objects of the same class across a group of images. Traditional object localization/detection usually trains specific object detectors which require bounding box annotations of object instances, or at least image-level labels to indicate the presence/absence of objects in an image. Given a collection of images without any annotations, our proposed fully unsupervised method simultaneously discovers images that contain common objects and localizes the common objects in the corresponding images. Without requiring the total number of common objects to be known, we formulate this unsupervised object discovery as a sub-graph mining problem on a weighted graph of object proposals, where nodes correspond to object proposals and edges represent the similarities between neighbouring proposals. The positive images and common objects are jointly discovered by finding sub-graphs of strongly connected nodes, with each sub-graph capturing one object pattern. The optimization problem can be efficiently solved by our proposed maximal-flow-based algorithm. Instead of assuming that each image contains only one common object, our proposed solution can better address wild images, where each image may contain multiple common objects or even no common object. Moreover, our proposed method can be easily tailored to the task of image retrieval, in which the nodes correspond to the similarity between query and reference images. Extensive experiments on the PASCAL VOC 2007 and Object Discovery data sets demonstrate that, even without any supervision, our approach can discover/localize common objects of various classes in the presence of scale, viewpoint, and appearance variation and partial occlusions. We also conduct broad experiments on the image retrieval benchmarks Holidays and Oxford5k to show that our proposed method, which considers both the similarity between query and reference images and the similarities among reference images, can significantly improve the retrieval results.
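
    A highly simplified Python sketch of the graph-based grouping idea: build a similarity graph over proposal descriptors, keep strong edges, and read off connected components as object patterns. Thresholded connected components stand in for the paper's maximal-flow-based sub-graph mining, and all parameters are illustrative:

        import numpy as np
        from scipy.sparse import csr_matrix
        from scipy.sparse.csgraph import connected_components

        def discover_groups(features, sim_thresh=0.8):
            """Group object proposals by appearance similarity.

            features: (n_proposals, d) L2-normalized descriptors, so the
            dot product is cosine similarity. Returns one group id per
            proposal; each group approximates one common-object pattern.
            """
            sim = features @ features.T              # cosine similarity matrix
            np.fill_diagonal(sim, 0.0)               # no self-edges
            adj = csr_matrix(sim > sim_thresh)       # keep strong edges only
            n_groups, labels = connected_components(adj, directed=False)
            return labels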

  7. The Montage Image Mosaic Toolkit As A Visualization Engine.

    NASA Astrophysics Data System (ADS)

    Berriman, G. Bruce; Lerias, Angela; Good, John; Mandel, Eric; Pepper, Joshua

    2018-01-01

    The Montage toolkit has been used since 2003 to aggregate FITS images into mosaics for science analysis. It is now finding application as an engine for image visualization. One important reason is that the functionality developed for creating mosaics is also valuable in image visualization. An equally important (though perhaps less obvious) reason is that Montage is portable and is built on standard astrophysics toolkits, making it very easy to integrate into new environments. Montage models and rectifies the sky background to a common level and thus reveals faint, diffuse features; it offers an adaptive image-stretching method that preserves the dynamic range of a FITS image when represented in PNG format; it provides utilities for creating cutouts of large images and downsampled versions of large images that can then be visualized on desktops or in browsers; it contains a fast reprojection algorithm intended for visualization; and it resamples and reprojects images to a common grid for subsequent multi-color visualization. This poster highlights these visualization capabilities with the following examples:

    1. Creation of down-sampled multi-color images of a 16-wavelength Infrared Atlas of the Galactic Plane, sampled at 1 arcsec when created.

    2. Integration into a web-based image processing environment: JS9 is an interactive image display service for web browsers, desktops, and mobile devices. It exploits the flux-preserving reprojection algorithms in Montage to transform diverse images to common image parameters for display. Select Montage programs have been compiled to Javascript/WebAssembly using the Emscripten compiler, which allows the reprojection algorithms to run in browsers at close to native speed.

    3. Creation of complex sky coverage maps: a multicolor all-sky map that shows the sky coverage of the Kepler and K2, KELT, and TESS projects, overlaid on an all-sky 2MASS image.

    Montage is funded by the National Science Foundation under Grant Number ACI-1642453. JS9 is funded by the Chandra X-ray Center (NAS8-03060) and NASA's Universe of Learning (STScI-509913).

  8. Electronic photography at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Holm, Jack M.

    1994-01-01

    The field of photography began a metamorphosis several years ago which promises to fundamentally change how images are captured, transmitted, and output. At this time the metamorphosis is still in the early stages, but already new processes, hardware, and software are allowing many individuals and organizations to explore the entry of imaging into the information revolution. Exploration at this time is prerequisite to leading expertise in the future, and a number of branches at LaRC have ventured into electronic and digital imaging. Their progress until recently has been limited by two factors: the lack of an integrated approach and the lack of an electronic photographic capability. The purpose of the research conducted was to address these two items. In some respects, the lack of electronic photographs has prevented application of an integrated imaging approach. Since everything could not be electronic, the tendency was to work with hard copy. Over the summer, the Photographics Section has set up an Electronic Photography Laboratory. This laboratory now has the capability to scan film images, process the images, and output the images in a variety of forms. Future plans also include electronic capture capability. The current forms of image processing available include sharpening, noise reduction, dust removal, tone correction, color balancing, image editing, cropping, electronic separations, and halftoning. Output choices include customer specified electronic file formats which can be output on magnetic or optical disks or over the network, 4400 line photographic quality prints and transparencies to 8.5 by 11 inches, and 8000 line film negatives and transparencies to 4 by 5 inches. The problem of integrated imaging involves a number of branches at LaRC including Visual Imaging, Research Printing and Publishing, Data Visualization and Animation, Advanced Computing, and various research groups. These units must work together to develop common approaches to image processing and archiving. The ultimate goal is to be able to search for images using an on-line database and image catalog. These images could then be retrieved over the network as needed, along with information on the acquisition and processing prior to storage. For this goal to be realized, a number of standard processing protocols must be developed to allow the classification of images into categories. Standard series of processing algorithms can then be applied to each category (although many of these may be adaptive between images). Since the archived image files would be standardized, it should also be possible to develop standard output processing protocols for a number of output devices. If LaRC continues the research effort begun this summer, it may be one of the first organizations to develop an integrated approach to imaging. As such, it could serve as a model for other organizations in government and the private sector.

  9. Image reconstruction for PET/CT scanners: past achievements and future challenges

    PubMed Central

    Tong, Shan; Alessio, Adam M; Kinahan, Paul E

    2011-01-01

    PET is a medical imaging modality with proven clinical value for disease diagnosis and treatment monitoring. The integration of PET and CT on modern scanners provides a synergy of the two imaging modalities. Through different mathematical algorithms, PET data can be reconstructed into the spatial distribution of the injected radiotracer. With dynamic imaging, kinetic parameters of specific biological processes can also be determined. Numerous efforts have been devoted to the development of PET image reconstruction methods over the last four decades, encompassing analytic and iterative reconstruction methods. This article provides an overview of the commonly used methods. Current challenges in PET image reconstruction include more accurate quantitation, TOF imaging, system modeling, motion correction and dynamic reconstruction. Advances in these aspects could enhance the use of PET/CT imaging in patient care and in clinical research studies of pathophysiology and therapeutic interventions. PMID:21339831
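
    As a concrete example of the iterative class of methods surveyed here, a toy dense-matrix MLEM (maximum-likelihood expectation maximization) reconstruction in Python; clinical implementations use sparse system models, physical corrections, and ordered subsets (e.g., OSEM):

        import numpy as np

        def mlem(A, y, n_iter=20):
            """Maximum-likelihood EM reconstruction for emission tomography.

            A: (n_bins, n_voxels) system matrix mapping image to sinogram;
            y: measured sinogram counts. Toy dense version for teaching.
            """
            x = np.ones(A.shape[1])                  # uniform initial image
            sens = A.sum(axis=0)                     # sensitivity (back-projected ones)
            for _ in range(n_iter):
                proj = A @ x                         # forward projection
                ratio = y / np.maximum(proj, 1e-12)  # measured / estimated
                x *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative update
            return x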

  10. MilxXplore: a web-based system to explore large imaging datasets.

    PubMed

    Bourgeat, P; Dore, V; Villemagne, V L; Rowe, C C; Salvado, O; Fripp, J

    2013-01-01

    As large-scale medical imaging studies become more common, there is an increasing reliance on automated software to extract quantitative information from these images. As cohort sizes keep increasing in large studies, there is also a need for tools that allow results from automated image processing and analysis to be presented in a way that enables fast and efficient quality checking, tagging, and reporting on cases in which automatic processing failed or was problematic. MilxXplore is an open source visualization platform which provides an interface to navigate and explore imaging data in a web browser, giving the end user the opportunity to perform quality control and reporting in a user-friendly, collaborative, and efficient way. Compared to existing software solutions that often provide an overview of the results at the subject level, MilxXplore pools the results of individual subjects and time points together, allowing easy and efficient navigation and browsing through the different acquisitions of a subject over time and comparison of the results against the rest of the population. MilxXplore is fast and flexible, allows remote quality checks of processed imaging data, facilitates data sharing and collaboration across multiple locations, and can be easily integrated into a cloud computing pipeline. With the growing trend of open data and open science, such a tool will become increasingly important for sharing and publishing results of imaging analysis.

  11. EVALUATION OF REGISTRATION, COMPRESSION AND CLASSIFICATION ALGORITHMS

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R.

    1994-01-01

    Several types of algorithms are generally used to process digital imagery such as Landsat data. The most commonly used algorithms perform the task of registration, compression, and classification. Because there are different techniques available for performing registration, compression, and classification, imagery data users need a rationale for selecting a particular approach to meet their particular needs. This collection of registration, compression, and classification algorithms was developed so that different approaches could be evaluated and the best approach for a particular application determined. Routines are included for six registration algorithms, six compression algorithms, and two classification algorithms. The package also includes routines for evaluating the effects of processing on the image data. This collection of routines should be useful to anyone using or developing image processing software. Registration of image data involves the geometrical alteration of the imagery. Registration routines available in the evaluation package include image magnification, mapping functions, partitioning, map overlay, and data interpolation. The compression of image data involves reducing the volume of data needed for a given image. Compression routines available in the package include adaptive differential pulse code modulation, two-dimensional transforms, clustering, vector reduction, and picture segmentation. Classification of image data involves analyzing the uncompressed or compressed image data to produce inventories and maps of areas of similar spectral properties within a scene. The classification routines available include a sequential linear technique and a maximum likelihood technique. The choice of the appropriate evaluation criteria is quite important in evaluating the image processing functions. The user is therefore given a choice of evaluation criteria with which to investigate the available image processing functions. All of the available evaluation criteria basically compare the observed results with the expected results. For the image reconstruction processes of registration and compression, the expected results are usually the original data or some selected characteristics of the original data. For classification processes the expected result is the ground truth of the scene. Thus, the comparison process consists of determining what changes occur in processing, where the changes occur, how much change occurs, and the amplitude of the change. The package includes evaluation routines for performing such comparisons as average uncertainty, average information transfer, chi-square statistics, multidimensional histograms, and computation of contingency matrices. This collection of routines is written in FORTRAN IV for batch execution and has been implemented on an IBM 360 computer with a central memory requirement of approximately 662K of 8 bit bytes. This collection of image processing and evaluation routines was developed in 1979.
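
    For instance, the maximum likelihood classification step can be stated compactly in Python as Gaussian log-likelihood scoring per pixel; this is a modern restatement of the technique named above, not the package's original FORTRAN IV routine:

        import numpy as np

        def ml_classify(pixels, means, covs):
            """Gaussian maximum-likelihood classification of multispectral pixels.

            pixels: (n, bands); means: (k, bands) class means;
            covs: (k, bands, bands) class covariance matrices.
            """
            n, k = pixels.shape[0], means.shape[0]
            scores = np.empty((n, k))
            for c in range(k):
                d = pixels - means[c]
                inv = np.linalg.inv(covs[c])
                _, logdet = np.linalg.slogdet(covs[c])
                # log-likelihood up to a constant: -0.5*(log|S| + d' S^-1 d)
                scores[:, c] = -0.5 * (logdet + np.einsum("ij,jk,ik->i", d, inv, d))
            return scores.argmax(axis=1)             # most likely class per pixel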

  12. Feature tracking for automated volume of interest stabilization on 4D-OCT images

    NASA Astrophysics Data System (ADS)

    Laves, Max-Heinrich; Schoob, Andreas; Kahrs, Lüder A.; Pfeiffer, Tom; Huber, Robert; Ortmaier, Tobias

    2017-03-01

    A common representation of volumetric medical image data is the triplanar view (TV), in which the surgeon manually selects slices showing the anatomical structure of interest. In addition to common medical imaging such as MRI or computed tomography, recent advances in the field of optical coherence tomography (OCT) have enabled live processing and volumetric rendering of four-dimensional images of the human body. Because the region of interest undergoes motion, it is challenging for the surgeon to keep track of an object while continuously adjusting the TV to the desired slices. To select these slices in subsequent frames automatically, it is necessary to track movements of the volume of interest (VOI). This has not yet been addressed for 4D-OCT images. Therefore, this paper evaluates motion tracking by applying state-of-the-art tracking schemes to maximum intensity projections (MIP) of 4D-OCT images. The estimated VOI location is used to conveniently show corresponding slices and to improve the MIPs by calculating thin-slab MIPs. Tracking performance is evaluated on an in-vivo sequence of human skin, captured at 26 volumes per second. Among the investigated tracking schemes, our recently presented tracking scheme for soft tissue motion provides the highest accuracy, with an error of under 2.2 voxels for the first 80 volumes. Object tracking on 4D-OCT images enables its use for sub-epithelial tracking of microvessels for image guidance.
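
    A minimal OpenCV sketch of one generic scheme in this spirit -- re-locating the VOI between consecutive MIPs by normalized cross-correlation template matching. It is not the paper's soft-tissue tracker, and the box convention is an assumption:

        import cv2

        def track_voi(mip_prev, mip_curr, box):
            """Re-locate a VOI between consecutive maximum intensity projections.

            mip_prev, mip_curr: 2-D images (uint8 or float32, as required by
            cv2.matchTemplate); box: (x, y, w, h) of the VOI in mip_prev.
            Returns the new top-left (x, y) in mip_curr.
            """
            x, y, w, h = box
            template = mip_prev[y:y + h, x:x + w]
            result = cv2.matchTemplate(mip_curr, template, cv2.TM_CCOEFF_NORMED)
            _, _, _, max_loc = cv2.minMaxLoc(result)  # best correlation peak
            return max_loc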

  13. Projection of Stabilized Aerial Imagery Onto Digital Elevation Maps for Geo-Rectified and Jitter-Free Viewing

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.

    2012-01-01

    As imagery is collected from an airborne platform, an individual viewing the images wants to know from where on the Earth the images were collected. To do this, some information about the camera needs to be known, such as its position and orientation relative to the Earth. This can be provided by common inertial navigation systems (INS). Once the location of the camera is known, it is useful to project an image onto some representation of the Earth. Due to the non-smooth terrain of the Earth (mountains, valleys, etc.), this projection is highly non-linear. Thus, to ensure accurate projection, one needs to project onto a digital elevation map (DEM). This allows one to view the images overlaid onto a representation of the Earth. A code has been developed that takes an image, a model of the camera used to acquire that image, the pose of the camera during acquisition (as provided by an INS), and a DEM, and outputs an image that has been geo-rectified. The world coordinates of the bounds of the image are provided for viewing purposes. The code finds a mapping from points on the ground (DEM) to pixels in the image. By performing this process for all points on the ground, one can "paint" the ground with the image, effectively performing a projection of the image onto the ground. To make this process efficient, a method was developed for finding a region of interest (ROI) on the ground to which the image will project. This code is useful in any scenario involving an aerial imaging platform that moves and rotates over time. Many other applications are possible in processing aerial and satellite imagery.
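
    A minimal Python sketch of the forward mapping at the heart of such a code: projecting ground points into the image with a pinhole camera model and the INS-supplied pose. Lens distortion and occlusion tests are omitted, and the pose convention is an assumption:

        import numpy as np

        def project_to_image(points_world, R, t, fx, fy, cx, cy):
            """Map DEM ground points (world frame, meters) to pixel coordinates.

            R: (3, 3) world-to-camera rotation; t: camera position in the
            world frame (both from the INS). fx, fy, cx, cy: pinhole
            intrinsics. Returns (u, v) pixels and a front-of-camera mask.
            """
            cam = (points_world - t) @ R.T            # world -> camera frame
            z = cam[:, 2]
            valid = z > 0                             # points in front of the camera
            u = fx * cam[:, 0] / z + cx               # perspective division
            v = fy * cam[:, 1] / z + cy
            return np.stack([u, v], axis=1), valid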

  14. Image Algebra Matlab language version 2.3 for image processing and compression research

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Hayden, Eric

    2010-08-01

    Image algebra is a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Image algebra was developed under DARPA and US Air Force sponsorship at University of Florida for over 15 years beginning in 1984. Image algebra has been implemented in a variety of programming languages designed specifically to support the development of image processing and computer vision algorithms and software. The University of Florida has been associated with development of the languages FORTRAN, Ada, Lisp, and C++. The latter implementation involved a class library, iac++, that supported image algebra programming in C++. Since image processing and computer vision are generally performed with operands that are array-based, the Matlab™ programming language is ideal for implementing the common subset of image algebra. Objects include sets and set operations, images and operations on images, as well as templates and image-template convolution operations. This implementation, called Image Algebra Matlab (IAM), has been found to be useful for research in data, image, and video compression, as described herein. Due to the widespread acceptance of the Matlab programming language in the computing community, IAM offers exciting possibilities for supporting a large group of users. The control over an object's computational resources provided to the algorithm designer by Matlab means that IAM programs can employ versatile representations for the operands and operations of the algebra, which are supported by the underlying libraries written in Matlab. In a previous publication, we showed how the functionality of IAC++ could be carried forth into a Matlab implementation, and provided practical details of a prototype implementation called IAM Version 1. In this paper, we further elaborate the purpose and structure of image algebra, then present a maturing implementation of Image Algebra Matlab called IAM Version 2.3, which extends the previous implementation of IAM to include polymorphic operations over different point sets, as well as recursive convolution operations and functional composition. We also show how image algebra and IAM can be employed in image processing and compression research, as well as algorithm development and analysis.

  15. Quality Improvement of Liver Ultrasound Images Using Fuzzy Techniques.

    PubMed

    Bayani, Azadeh; Langarizadeh, Mostafa; Radmard, Amir Reza; Nejad, Ahmadreza Farzaneh

    2016-12-01

    Liver ultrasound images are very common and are often applied to diagnose diffuse liver diseases such as fatty liver. However, the low quality of such images makes it difficult to analyze them and diagnose diseases. The purpose of this study, therefore, is to improve the contrast and quality of liver ultrasound images. In this study, a number of fuzzy-logic-based image contrast enhancement algorithms were applied, using Matlab2013b, to liver ultrasound images in which the kidney is observable, since image contrast and quality have an inherently fuzzy definition: contrast improvement using a fuzzy intensification operator, contrast improvement applying fuzzy image histogram hyperbolization, and contrast improvement by fuzzy IF-THEN rules. Measured by the Mean Squared Error and Peak Signal to Noise Ratio obtained from different images, the fuzzy methods provided better results, and their implementation - compared with the histogram equalization method - led both to improved contrast and visual quality of the images and to improved results from liver segmentation algorithms on the images. Comparison of the four algorithms revealed the power of fuzzy logic in improving image contrast compared with traditional image processing algorithms. Moreover, the contrast improvement algorithm based on a fuzzy intensification operator was the strongest algorithm according to the measured indicators. This method can also be used in future studies on other ultrasound images for quality improvement and other image processing and analysis applications.
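
    For reference, the classic fuzzy intensification (INT) operator, which the study found strongest, can be sketched in a few lines of Python; the min-max fuzzification used here is a generic choice, not necessarily the study's membership function:

        import numpy as np

        def fuzzy_intensify(img, n_pass=1):
            """Contrast enhancement with the fuzzy intensification operator.

            Gray levels are mapped to membership values in [0, 1], then
            pushed away from the crossover point 0.5, raising contrast.
            """
            mu = (img - img.min()) / max(np.ptp(img), 1e-12)   # fuzzify to [0, 1]
            for _ in range(n_pass):
                low = mu <= 0.5
                mu = np.where(low, 2 * mu**2, 1 - 2 * (1 - mu)**2)  # INT operator
            return mu                                # rescale to gray levels as needed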

  16. Quality Improvement of Liver Ultrasound Images Using Fuzzy Techniques

    PubMed Central

    Bayani, Azadeh; Langarizadeh, Mostafa; Radmard, Amir Reza; Nejad, Ahmadreza Farzaneh

    2016-01-01

    Background: Liver ultrasound images are very common and are often applied to diagnose diffuse liver diseases such as fatty liver. However, the low quality of such images makes it difficult to analyze them and diagnose diseases. The purpose of this study, therefore, is to improve the contrast and quality of liver ultrasound images. Methods: In this study, a number of fuzzy-logic-based image contrast enhancement algorithms were applied, using Matlab2013b, to liver ultrasound images in which the kidney is observable, since image contrast and quality have an inherently fuzzy definition: contrast improvement using a fuzzy intensification operator, contrast improvement applying fuzzy image histogram hyperbolization, and contrast improvement by fuzzy IF-THEN rules. Results: Measured by the Mean Squared Error and Peak Signal to Noise Ratio obtained from different images, the fuzzy methods provided better results, and their implementation - compared with the histogram equalization method - led both to improved contrast and visual quality of the images and to improved results from liver segmentation algorithms on the images. Conclusion: Comparison of the four algorithms revealed the power of fuzzy logic in improving image contrast compared with traditional image processing algorithms. Moreover, the contrast improvement algorithm based on a fuzzy intensification operator was the strongest algorithm according to the measured indicators. This method can also be used in future studies on other ultrasound images for quality improvement and other image processing and analysis applications. PMID:28077898

  17. Symmetric Phase Only Filtering for Improved DPIV Data Processing

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.

    2006-01-01

    The standard approach in Digital Particle Image Velocimetry (DPIV) data processing is to use Fast Fourier Transforms to obtain the cross-correlation of two single exposure subregions, where the location of the cross-correlation peak is representative of the most probable particle displacement across the subregion. This standard DPIV processing technique is analogous to Matched Spatial Filtering, a technique commonly used in optical correlators to perform the cross-correlation operation. Phase only filtering is a well known variation of Matched Spatial Filtering, which when used to process DPIV image data yields correlation peaks which are narrower and up to an order of magnitude larger than those obtained using traditional DPIV processing. In addition to possessing desirable correlation plane features, phase only filters also provide superior performance in the presence of DC noise in the correlation subregion. When DPIV image subregions contaminated with surface flare light or high background noise levels are processed using phase only filters, the correlation peak pertaining only to the particle displacement is readily detected above any signal stemming from the DC objects. Tedious image masking or background image subtraction are not required. Both theoretical and experimental analyses of the signal-to-noise ratio performance of the filter functions are presented. In addition, a new Symmetric Phase Only Filtering (SPOF) technique, which is a variation on the traditional phase only filtering technique, is described and demonstrated. The SPOF technique exceeds the performance of the traditionally accepted phase only filtering techniques and is easily implemented in standard DPIV FFT based correlation processing with no significant computational performance penalty. An "Automatic" SPOF algorithm is presented which determines when the SPOF is able to provide better signal to noise results than traditional PIV processing. The SPOF based optical correlation processing approach is presented as a new paradigm for more robust cross-correlation processing of low signal-to-noise ratio DPIV image data.
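
    A compact Python sketch of the symmetric phase-only idea as described: normalize the cross-power spectrum by the square root of both magnitude spectra, splitting the phase-only weighting evenly between the two interrogation windows. Windowing and sub-pixel peak fitting are omitted:

        import numpy as np

        def spof_correlate(win_a, win_b, eps=1e-9):
            """Cross-correlate two interrogation windows with symmetric
            phase-only filtering (a sketch after the paper's description)."""
            Fa, Fb = np.fft.fft2(win_a), np.fft.fft2(win_b)
            cross = Fa * np.conj(Fb)                 # cross-power spectrum
            norm = np.sqrt(np.abs(Fa) * np.abs(Fb)) + eps  # symmetric normalization
            corr = np.real(np.fft.ifft2(cross / norm))
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # the (wrapped) peak location gives the most probable displacement
            return corr, peak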

  18. Wound size measurement of lower extremity ulcers using segmentation algorithms

    NASA Astrophysics Data System (ADS)

    Dadkhah, Arash; Pang, Xing; Solis, Elizabeth; Fang, Ruogu; Godavarty, Anuradha

    2016-03-01

    Lower extremity ulcers are one of the most common complications that not only affect many people around the world but also have a huge impact on the economy, since a large amount of resources is spent on treatment and prevention of these diseases. Clinical studies have shown that a reduction in wound size of 40% within 4 weeks is acceptable progress in the healing process. Quantification of the wound size plays a crucial role in assessing the extent of healing and determining the treatment process. To date, wound healing is visually inspected and the wound size is measured from surface images. The extent of wound healing internally may vary from that at the surface. A near-infrared (NIR) optical imaging approach has been developed for non-contact imaging of wounds internally and for differentiating healing from non-healing wounds. Herein, quantitative wound size measurements from NIR and white light images are estimated using graph cuts and region growing image segmentation algorithms. The extent of wound healing from NIR imaging of lower extremity ulcers in diabetic subjects is quantified and compared across NIR and white light images. NIR imaging and wound size measurements can play a significant role in potentially predicting the extent of internal healing, thus allowing better treatment plans when implemented for periodic imaging in the future.

  19. Fabricating optical phantoms to simulate skin tissue properties and microvasculatures

    NASA Astrophysics Data System (ADS)

    Sheng, Shuwei; Wu, Qiang; Han, Yilin; Dong, Erbao; Xu, Ronald

    2015-03-01

    This paper introduces novel methods to fabricate optical phantoms that simulate the morphologic, optical, and microvascular characteristics of skin tissue. The multi-layer skin-simulating phantom was fabricated by a light-cured 3D printer that mixed and printed the colorless light-curable ink with the absorption and the scattering ingredients for the designated optical properties. The simulated microvascular network was fabricated by a soft lithography process to embed microchannels in polydimethylsiloxane (PDMS) phantoms. The phantoms also simulated vascular anomalies and hypoxia commonly observed in cancer. A dual-modal multispectral and laser speckle imaging system was used for oxygen and perfusion imaging of the tissue-simulating phantoms. The light-cured 3D printing technique and the soft lithography process may enable freeform fabrication of skin-simulating phantoms that embed microvessels for image and drug delivery applications.

  20. Standardization efforts of digital pathology in Europe.

    PubMed

    Rojo, Marcial García; Daniel, Christel; Schrader, Thomas

    2012-01-01

    EURO-TELEPATH is a European COST Action IC0604. It started in 2007 and will end in November 2011. Its main objectives are evaluating and validating the common technological framework and communication standards required to access, transmit, and manage digital medical records by pathologists and other medical specialties in a networked environment. Working Group 1, "Business Modelling in Pathology," has designed main pathology processes - Frozen Study, Formalin Fixed Specimen Study, Telepathology, Cytology, and Autopsy - using Business Process Modelling Notation (BPMN). Working Group 2 has been dedicated to promoting the application of informatics standards in pathology, collaborating with Integrating Healthcare Enterprise (IHE), Digital Imaging and Communications in Medicine (DICOM), Health Level Seven (HL7), and other standardization bodies. Health terminology standardization research has become a topic of great interest. Future research work should focus on standardizing automatic image analysis and tissue microarrays imaging.

  1. Real-Time Spaceborne Synthetic Aperture Radar Float-Point Imaging System Using Optimized Mapping Methodology and a Multi-Node Parallel Accelerating Technique

    PubMed Central

    Li, Bingyi; Chen, Liang; Yu, Wenyue; Xie, Yizhuang; Bian, Mingming; Zhang, Qingjun; Pang, Long

    2018-01-01

    With the development of satellite load technology and very large-scale integrated (VLSI) circuit technology, on-board real-time synthetic aperture radar (SAR) imaging systems have facilitated rapid response to disasters. A key goal of the on-board SAR imaging system design is to achieve high real-time processing performance under severe size, weight, and power consumption constraints. This paper presents a multi-node prototype system for real-time SAR imaging processing. We decompose the commonly used chirp scaling (CS) SAR imaging algorithm into two parts according to their computing features. The linearization and logic-memory optimum allocation methods are adopted to realize the nonlinear part in a reconfigurable structure, and the two-part bandwidth balance method is used to realize the linear part. Thus, float-point SAR imaging processing can be integrated into a single Field Programmable Gate Array (FPGA) chip instead of relying on distributed technologies. A single processing node requires 10.6 s and consumes 17 W to focus 25-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384. The design methodology of the multi-FPGA parallel accelerating system under the real-time principle is introduced. As a proof of concept, a prototype with four processing nodes and one master node is implemented using a Xilinx xc6vlx315t FPGA. The weight and volume of a single machine are 10 kg and 32 cm × 24 cm × 20 cm, respectively, and the power consumption is under 100 W. The real-time performance of the proposed design is demonstrated on Chinese Gaofen-3 stripmap continuous imaging. PMID:29495637

  2. Studies on image compression and image reconstruction

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Nori, Sekhar; Araj, A.

    1994-01-01

    During this six-month period our work concentrated on three somewhat different areas. We looked at and developed a number of error concealment schemes for use in a variety of video coding environments. This work is described in an accompanying (draft) Masters thesis. In the thesis we describe the application of these techniques to the MPEG video coding scheme. We felt that the unique frame ordering approach used in the MPEG scheme would be a challenge to any error concealment/error recovery technique. We continued with our work in the vector quantization area. We have also developed a new type of vector quantizer, which we call a scan predictive vector quantizer. The scan predictive VQ was tested on data processed at Goddard to approximate Landsat 7 HRMSI resolution and compared favorably with existing VQ techniques. A paper describing this work is included. The third area is concerned more with reconstruction than compression. While there is a variety of efficient lossless image compression schemes, they all share a common property: they use past data to encode future data. This is done either by taking differences, by context modeling, or by building dictionaries. When encoding large images, this common property becomes a common flaw. When the user wishes to decode just a portion of the image, the requirement that the past history be available forces the decoding of a significantly larger portion of the image than desired by the user. Even with intelligent partitioning of the image dataset, the number of pixels decoded may be four times the number of pixels requested. We have developed an adaptive scanning strategy which can be used with any lossless compression scheme and which lowers the additional number of pixels to be decoded to about 7 percent of the number of pixels requested! A paper describing these results is included.

  3. Atlas-guided volumetric diffuse optical tomography enhanced by generalized linear model analysis to image risk decision-making responses in young adults.

    PubMed

    Lin, Zi-Jing; Li, Lin; Cazzell, Mary; Liu, Hanli

    2014-08-01

    Diffuse optical tomography (DOT) is a variant of functional near infrared spectroscopy and has the capability of mapping or reconstructing three-dimensional (3D) hemodynamic changes due to brain activity. Common methods used in DOT image analysis to define brain activation have limitations, because the selection of the activation period is relatively subjective. General linear model (GLM)-based analysis can overcome this limitation. In this study, we combine atlas-guided 3D DOT image reconstruction with GLM-based analysis (i.e., voxel-wise GLM analysis) to investigate the brain activity that is associated with risk decision-making processes. Risk decision-making is an important cognitive process and thus an essential topic in the field of neuroscience. The Balloon Analog Risk Task (BART) is a valid experimental model and has been commonly used to assess human risk-taking actions and tendencies while facing risks. We used the BART paradigm with a blocked design to investigate brain activations in the prefrontal and frontal cortical areas during decision-making in 37 human participants (22 males and 15 females). Voxel-wise GLM analysis was performed after a human brain atlas template and a depth compensation algorithm were combined to form atlas-guided DOT images. In this work, we wish to demonstrate the merit of using voxel-wise GLM analysis with DOT to image and study cognitive functions in response to risk decision-making. Results have shown significant hemodynamic changes in the dorsolateral prefrontal cortex (DLPFC) during the active-choice mode and a different activation pattern between genders; these findings correlate well with published literature from functional magnetic resonance imaging (fMRI) and fNIRS studies. Copyright © 2014 The Authors. Human Brain Mapping Published by Wiley Periodicals, Inc.
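
    A generic voxel-wise GLM fit can be sketched in Python as ordinary least squares at every voxel with a t-statistic for a contrast of interest; the design matrix here is a placeholder, whereas the study's design encodes the BART block structure convolved with a hemodynamic response:

        import numpy as np

        def voxelwise_glm(Y, X):
            """Fit Y = X @ beta + error independently at every voxel.

            Y: (n_timepoints, n_voxels) signals; X: (n_timepoints, n_regressors)
            design matrix. Returns beta and t-values for the first regressor.
            """
            beta, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)  # OLS fit, all voxels
            dof = X.shape[0] - np.linalg.matrix_rank(X)        # residual degrees of freedom
            resid = Y - X @ beta
            sigma2 = (resid ** 2).sum(axis=0) / dof            # per-voxel noise variance
            c = np.zeros(X.shape[1]); c[0] = 1.0               # contrast on first regressor
            var_c = c @ np.linalg.pinv(X.T @ X) @ c
            t = (c @ beta) / np.sqrt(sigma2 * var_c + 1e-30)   # per-voxel t-statistic
            return beta, t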

  4. The application of 3D image processing to studies of the musculoskeletal system

    NASA Astrophysics Data System (ADS)

    Hirsch, Bruce Elliot; Udupa, Jayaram K.; Siegler, Sorin; Winkelstein, Beth A.

    2009-10-01

    Three dimensional renditions of anatomical structures are commonly used to improve visualization, surgical planning, and patient education. However, such 3D images also contain information which is not readily apparent, and which can be mined to elucidate, for example, parameters such as joint kinematics, spatial relationships, and distortions of those relationships with movement. Here we describe two series of experiments which demonstrate the functional application of 3D imaging. The first concerns the joints of the ankle complex, where the usual description of motions in the talocrural joint is shown to be incomplete, and where the roles of the anterior talofibular and calcaneofibular ligaments in ankle sprains are clarified. Also, the biomechanical effects of two common surgical procedures for repairing torn ligaments were examined. The second series of experiments explores changes in the anatomical relationships between nerve elements and the cervical vertebrae with changes in neck position. They provide preliminary evidence that morphological differences may exist between asymptomatic subjects and patients with radiculopathy in certain positions, even when conventional imaging shows no difference.

  5. Toward development of mobile application for hand arthritis screening.

    PubMed

    Akhbardeh, Farhad; Vasefi, Fartash; Tavakolian, Kouhyar; Bradley, David; Fazel-Rezai, Reza

    2015-01-01

    Arthritis is one of the most common health problems affecting people throughout the world. The goal of the work presented in this paper is to provide individuals who may be developing, or have developed, arthritis with a mobile application to assess and monitor the progress of their disease using their smartphone. The image processing pipeline includes a finger border detection algorithm to monitor joint thickness and angular deviation abnormalities, which are common symptoms of arthritis. In this work, we have analyzed and compared gradient, thresholding, and Canny algorithms for border detection. The effect of image spatial resolution (down-sampling) is also investigated. Results calculated from 36 joint measurements show that the mean errors for the gradient, thresholding, and Canny methods are 0.20, 2.13, and 2.03 mm, respectively. In addition, the average error for different image resolutions is analyzed and the minimum required resolution is determined for each method. The results confirm that recent smartphone imaging capabilities provide enough accuracy for hand border detection and finger joint analysis based on the gradient method.
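
    A rough Python sketch of the three border-detection candidates compared in the paper, using scikit-image stand-ins (the authors' exact parameters and post-processing are not given here, so the thresholds below are illustrative assumptions):

        import numpy as np
        from skimage import feature, filters
        from skimage.morphology import binary_erosion

        def finger_borders(gray):
            # Candidate 1: Sobel gradient magnitude, thresholded statistically.
            grad = filters.sobel(gray)
            grad_edges = grad > grad.mean() + 2 * grad.std()
            # Candidate 2: Otsu threshold; border = hand mask minus its erosion.
            mask = gray > filters.threshold_otsu(gray)
            thresh_edges = mask ^ binary_erosion(mask)
            # Candidate 3: Canny detector with Gaussian pre-smoothing.
            canny_edges = feature.canny(gray, sigma=2.0)
            return grad_edges, thresh_edges, canny_edges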

  6. Evaluation of Biofilms and the Effects of Biocides Thereon

    NASA Technical Reports Server (NTRS)

    Pierson, Duane L. (Inventor); Koenig, David W. (Inventor); Mishra, Saroj K. (Inventor)

    2002-01-01

    Biofilm formation is monitored by real-time continuous measurement. Images are formed of sessile cells on a surface and planktonic cells adjacent to the surface. The attachment of cells to the surface is measured and quantitated, and sessile and planktonic cells are distinguished using image processing techniques. Single cells as well as colonies are monitored on or adjacent to a variety of substrates. Flowing streams may be monitored. The effects of biocides on biofilms commonly isolated from recyclable water systems are measured.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choong, W. -S.; Abu-Nimeh, F.; Moses, W. W.

    Here, we present a 16-channel front-end readout board for the OpenPET electronics system. A major task in developing a nuclear medical imaging system, such as a positron emission computed tomograph (PET) or a single-photon emission computed tomograph (SPECT), is the electronics system. While there is a wide variety of detector and camera design concepts, the relatively simple nature of the acquired data allows for a common set of electronics requirements that can be met by a flexible, scalable, and high-performance OpenPET electronics system. The analog signals from the different types of detectors used in medical imaging share similar characteristics, which allows for common analog signal processing. The OpenPET electronics processes the analog signals with Detector Boards. Here we report on the development of a 16-channel Detector Board. Each signal is digitized by a continuously sampled analog-to-digital converter (ADC), which is processed by a field programmable gate array (FPGA) to extract pulse height information. A leading edge discriminator creates a timing edge that is "time stamped" by a time-to-digital converter (TDC) implemented inside the FPGA. This digital information from each channel is sent to an FPGA that services 16 analog channels, and then information from multiple channels is processed by this FPGA to perform logic for crystal lookup, DOI calculation, calibration, etc.
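
    A software model of the per-channel processing chain described above can be sketched in Python as follows (a sketch only: the baseline window, threshold handling, and interpolation are illustrative assumptions, not the board's FPGA implementation):

        import numpy as np

        def process_channel(samples, fs_hz, threshold):
            # Pulse height from the baseline-subtracted peak, and a
            # leading-edge time stamp interpolated between the two samples
            # bracketing the threshold crossing (software analogue of the TDC).
            baseline = samples[:16].mean()          # pre-trigger baseline
            pulse = samples - baseline
            height = pulse.max()
            above = np.nonzero(pulse >= threshold)[0]
            if above.size == 0:
                return height, None                 # no trigger on this channel
            i = above[0]
            frac = 0.0
            if i > 0:
                frac = (threshold - pulse[i - 1]) / (pulse[i] - pulse[i - 1])
            return height, ((i - 1) + frac) / fs_hz  # time stamp in seconds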

  8. Improved blood velocity measurements with a hybrid image filtering and iterative Radon transform algorithm

    PubMed Central

    Chhatbar, Pratik Y.; Kara, Prakash

    2013-01-01

    Neural activity leads to hemodynamic changes which can be detected by functional magnetic resonance imaging (fMRI). The determination of blood flow changes in individual vessels is an important aspect of understanding these hemodynamic signals. Blood flow can be calculated from measurements of vessel diameter and blood velocity. When using line-scan imaging, the movement of blood in the vessel leads to streaks in space-time images, where the streak angle is a function of the blood velocity. A variety of methods have been proposed to determine blood velocity from such space-time image sequences. Of these, the Radon transform is relatively easy to implement and has fast data processing. However, the precision of the velocity measurements depends on the number of Radon transforms performed, which creates a trade-off between processing speed and measurement precision. In addition, factors like image contrast, imaging depth, image acquisition speed, and movement artifacts, especially in large mammals, can potentially lead to data acquisition that results in erroneous velocity measurements. Here we show that pre-processing the data with a Sobel filter and iteratively applying the Radon transform address these issues and provide more accurate blood velocity measurements. The improved signal quality of the image resulting from Sobel filtering increases the accuracy, and the iterative Radon transform offers both increased precision and an order-of-magnitude faster implementation of velocity measurements. This algorithm does not use a priori knowledge of the angle and is therefore sensitive to sudden changes in blood flow. It can be applied to any set of space-time images with red blood cell (RBC) streaks, commonly acquired through line-scan imaging or reconstructed from full-frame, time-lapse images of the vasculature. PMID:23807877
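
    The following Python sketch illustrates the idea under stated assumptions (scikit-image's radon transform; variance of the projection as the streak-alignment measure): Sobel prefiltering followed by a coarse-to-fine angle search in place of a single dense sweep of Radon transforms. It is a simplified stand-in for the authors' algorithm, not a reproduction of it.

        import numpy as np
        from skimage.filters import sobel
        from skimage.transform import radon

        def streak_angle(spacetime, n_iter=4, coarse_step=10.0):
            # Sobel-filter the space-time image, then iteratively refine the
            # angle that maximises the variance of the Radon projection.
            img = sobel(spacetime.astype(float))
            img -= img.mean()
            centre, step = 90.0, coarse_step
            for _ in range(n_iter):
                angles = np.arange(centre - 5 * step, centre + 5 * step, step)
                sino = radon(img, theta=angles, circle=False)
                centre = angles[np.argmax(sino.var(axis=0))]
                step /= 5.0                  # zoom in around the best angle
            return centre  # degrees; convert via the x/t scan calibration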

  9. Detection and characterization of exercise induced muscle damage (EIMD) via thermography and image processing

    NASA Astrophysics Data System (ADS)

    Avdelidis, N. P.; Kappatos, V.; Georgoulas, G.; Karvelis, P.; Deli, C. K.; Theodorakeas, P.; Giakas, G.; Tsiokanos, A.; Koui, M.; Jamurtas, A. Z.

    2017-04-01

    Exercise induced muscle damage (EIMD) is usually experienced by i) people who have been physically inactive for prolonged periods of time and then begin sudden training trials, and ii) athletes who train beyond their normal limits. EIMD is not easy to detect and quantify with common measurement tools and methods. Thermography has been used successfully as a research detection tool in medicine for the last six decades, but very limited work has been reported in the EIMD area. The main purpose of this research is to assess and characterize EIMD using thermography and image processing techniques. The first step towards that goal is to develop a reliable segmentation technique to isolate the region of interest (ROI). Semi-automatic image processing software was designed in which regions of the left and right leg are segmented based on superpixels: the image is divided into a number of regions, and the user is able to intervene by indicating which regions belong to each of the two legs. In order to validate the image processing software, an extensive experimental investigation was carried out, acquiring thermographic images of the rectus femoris muscle before, immediately after, and 24, 48 and 72 hours after an acute bout of eccentric exercise (5 sets of 15 maximum repetitions) in males and females (20-30 years old). Results indicate that the semi-automated approach provides an excellent benchmark that can be used as a reliable clinical tool.

  10. A wavelet-based adaptive fusion algorithm of infrared polarization imaging

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Gu, Guohua; Chen, Qian; Zeng, Haifang

    2011-08-01

    The purpose of infrared polarization imaging is to highlight man-made targets against a complex natural background. Because infrared polarization images can significantly distinguish targets from background through their differing features, this paper presents a wavelet-based infrared polarization image fusion algorithm. The method mainly concerns processing of the high-frequency portion of the signal; for the low-frequency portion, the usual weighted-average method is applied. The high-frequency part is processed as follows: first, the high-frequency information of each source image is extracted by wavelet transform; then the signal strength over a 3×3 window is calculated, and the ratio of regional signal intensities between the source images is used as a matching measure. The extraction method and decision mode for the details are determined by a decision-making module, and the fusion quality is closely related to the threshold set in this module. Instead of the commonly used empirical approach, a quadratic interpolation optimization algorithm is proposed in this paper to obtain the threshold: the endpoints and midpoint of the threshold search interval are set as initial interpolation nodes, the minimum of the quadratic interpolant is computed, and the best threshold is obtained by comparing successive minima. A series of image quality evaluation results shows that this method improves the fusion effect; moreover, it is effective not only for individual images but also for large numbers of images.
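
    A minimal Python sketch of this style of fusion rule using PyWavelets (the wavelet choice, the match measure, and the blending weights are illustrative assumptions; match_thresh plays the role of the threshold that the paper's quadratic interpolation search would optimize):

        import numpy as np
        import pywt
        from scipy.ndimage import uniform_filter

        def fuse(a, b, wavelet="db2", level=2, match_thresh=0.75):
            # Low-frequency band: weighted average. High-frequency bands:
            # select by 3x3 regional energy, blend where regions match.
            ca = pywt.wavedec2(a, wavelet, level=level)
            cb = pywt.wavedec2(b, wavelet, level=level)
            fused = [0.5 * (ca[0] + cb[0])]
            for da, db in zip(ca[1:], cb[1:]):
                bands = []
                for ha, hb in zip(da, db):
                    ea = uniform_filter(ha * ha, size=3)   # regional energy
                    eb = uniform_filter(hb * hb, size=3)
                    match = 2 * uniform_filter(ha * hb, size=3) / (ea + eb + 1e-12)
                    select = np.where(ea > eb, ha, hb)     # selection mode
                    wa = 0.5 + 0.5 * (ea - eb) / (ea + eb + 1e-12)
                    blend = wa * ha + (1 - wa) * hb        # averaging mode
                    bands.append(np.where(match < match_thresh, select, blend))
                fused.append(tuple(bands))
            return pywt.waverec2(fused, wavelet)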

  11. Blind image deblurring based on trained dictionary and curvelet using sparse representation

    NASA Astrophysics Data System (ADS)

    Feng, Liang; Huang, Qian; Xu, Tingfa; Li, Shao

    2015-04-01

    Motion blur is one of the most significant and common artifacts causing poor image quality in digital photography, and it results from many factors. In the imaging process, if objects move quickly in the scene or the camera moves during the exposure interval, the image of the scene blurs along the direction of relative motion between the camera and the scene (e.g., camera shake, atmospheric turbulence). Recently, the sparse representation model has been widely used in signal and image processing as an effective way to describe natural images. In this article, a new deblurring approach based on sparse representation is proposed. An overcomplete dictionary, learned from training image samples via the KSVD algorithm, is designed to represent the latent image. The motion-blur kernel can be treated as a piecewise smooth function in the image domain whose support is approximately a thin smooth curve, so we employ curvelets to represent the blur kernel. Both the overcomplete dictionary and the curvelet system yield highly sparse representations, which improves robustness to noise and better satisfies the observer's visual demands. With these two priors, we construct a restoration model for blurred images and solve the optimization problem with the help of an alternating minimization technique. The experimental results show that the method can preserve the texture of the original images and suppress ringing artifacts effectively.

  12. A Neuroimaging Web Services Interface as a Cyber Physical System for Medical Imaging and Data Management in Brain Research: Design Study

    PubMed Central

    2018-01-01

    Background Structural and functional brain images are essential imaging modalities for medical experts to study brain anatomy. These images are typically visually inspected by experts. To analyze images without any bias, they must be first converted to numeric values. Many software packages are available to process the images, but they are complex and difficult to use. The software packages are also hardware intensive. The results obtained after processing vary depending on the native operating system used and its associated software libraries; data processed in one system cannot typically be combined with data on another system. Objective The aim of this study was to fulfill the neuroimaging community’s need for a common platform to store, process, explore, and visualize their neuroimaging data and results using Neuroimaging Web Services Interface: a series of processing pipelines designed as a cyber physical system for neuroimaging and clinical data in brain research. Methods Neuroimaging Web Services Interface accepts magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and functional magnetic resonance imaging. These images are processed using existing and custom software packages. The output is then stored as image files, tabulated files, and MySQL tables. The system, made up of a series of interconnected servers, is password-protected and is securely accessible through a Web interface and allows (1) visualization of results and (2) downloading of tabulated data. Results All results were obtained using our processing servers in order to maintain data validity and consistency. The design is responsive and scalable. The processing pipeline started from a FreeSurfer reconstruction of structural magnetic resonance images. The FreeSurfer and regional standardized uptake value ratio calculations were validated using Alzheimer’s Disease Neuroimaging Initiative input images, and the results were posted at the Laboratory of Neuro Imaging data archive. Notable leading researchers in the field of Alzheimer’s Disease and epilepsy have used the interface to access and process the data and visualize the results. Tabulated results with unique visualization mechanisms help guide more informed diagnosis and expert rating, providing a truly unique multimodal imaging platform that combines magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and resting state functional magnetic resonance imaging. A quality control component was reinforced through expert visual rating involving at least 2 experts. Conclusions To our knowledge, there is no validated Web-based system offering all the services that Neuroimaging Web Services Interface offers. The intent of Neuroimaging Web Services Interface is to create a tool for clinicians and researchers with keen interest in multimodal neuroimaging. More importantly, Neuroimaging Web Services Interface significantly augments the Alzheimer’s Disease Neuroimaging Initiative data, especially since our data contain a large cohort of Hispanic normal controls and Alzheimer’s Disease patients. The obtained results could be scrutinized visually or through the tabulated forms, informing researchers on subtle changes that characterize the different stages of the disease. PMID:29699962

  13. Merged GLORIA sidescan and hydrosweep pseudo-sidescan: Processing and creation of digital mosaics

    USGS Publications Warehouse

    Bird, R.T.; Searle, R.C.; Paskevich, V.; Twichell, D.C.

    1996-01-01

    We have replaced the usual band of poor-quality data in the near-nadir region of our GLORIA long-range sidescan-sonar imagery with a shaded-relief image constructed from swath bathymetry data (collected simultaneously with GLORIA) which completely cover the nadir area. We have developed a technique to enhance these "pseudo-sidescan" images in order to mimic the neighbouring GLORIA backscatter intensities. As a result, the enhanced images greatly facilitate the geologic interpretation of the adjacent GLORIA data, and geologic features evident in the GLORIA data may be correlated with greater confidence across track. Features interpreted from the pseudo-sidescan may be extrapolated from the near-nadir region out into the GLORIA range where they may not have been recognized otherwise, and therefore the pseudo-sidescan can be used to ground-truth GLORIA interpretations. Creation of digital sidescan mosaics utilized an approach not previously used for GLORIA data. Pixels were correctly placed in cartographic space and the time required to complete a final mosaic was significantly reduced. Computer software for digital mapping and mosaic creation is incorporated into the newly-developed Woods Hole Image Processing System (WHIPS) which can process both low- and high-frequency sidescan, and can interchange data with the Mini Image Processing System (MIPS) most commonly used for GLORIA processing. These techniques are tested by creating digital mosaics of merged GLORIA sidescan and Hydrosweep pseudo-sidescan data from the vicinity of the Juan Fernandez microplate along the East Pacific Rise (EPR). 

  14. Paperless protocoling of CT and MRI requests at an outpatient imaging center.

    PubMed

    Bassignani, Matthew J; Dierolf, David A; Roberts, David L; Lee, Steven

    2010-04-01

    We created our imaging center (IC) to move outpatient imaging from our busy inpatient imaging suite to an off-site location that is more inviting to ambulatory patients. Nevertheless, patients scanned at our IC still represent the depth and breadth of illness complexity seen in our tertiary care population. Thus, we protocol exams on an individualized basis to ensure that the referring clinician's question is fully answered by the exam performed. Previously, paper-based protocoling was a laborious process for all involved: the IC business office would fax the requests to various reading rooms for protocoling by the subspecialist radiologists, who are 3 miles away at the main hospital. Once protocoled, reading room coordinators would fax the protocoled request back to the IC technical area in preparation for the next day's scheduled exams. At any breakdown in this process (e.g., lost paperwork), patient exams were delayed and clinicians and patients became upset. To improve this process, we developed a paper-free process whereby protocoling is accomplished through scanning of exam requests into our PACS. Using the common worklist functionality found in most PACS, we created "protocoling worklists" that contain these scanned documents. Radiologists protocol these studies in the PACS worklist (with the added benefit of having all imaging and report data available), and subsequently the technologists can see and act on the protocols they find in PACS. This process has significantly decreased interruptions in our busy reading rooms and decreased rework by IC staff.

  15. WE-G-204-01: BEST IN PHYSICS (IMAGING): Effect of Image Processing Parameters On Nodule Detectability in Chest Radiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Little, K; Lu, Z; MacMahon, H

    Purpose: To investigate the effect of varying system image processing parameters on lung nodule detectability in digital radiography. Methods: An anthropomorphic chest phantom was imaged in the posterior-anterior position using a GE Discovery XR656 digital radiography system. To simulate lung nodules, a polystyrene board with 6.35mm diameter PMMA spheres was placed adjacent to the phantom (into the x-ray path). Due to magnification, the projected simulated nodules had a diameter in the radiographs of approximately 7.5 mm. The images were processed using one of GE’s default chest settings (Factory3) and reprocessed by varying the “Edge” and “Tissue Contrast” processing parameters, which were the two user-configurable parameters for a single edge and contrast enhancement algorithm. For each parameter setting, the nodule signals were calculated by subtracting the chest-only image from the image with simulated nodules. Twenty nodule signals were averaged, Gaussian filtered, and radially averaged in order to generate an approximately noiseless signal. For each processing parameter setting, this noise-free signal and 180 background samples from across the lung were used to estimate ideal observer performance in a signal-known-exactly detection task. Performance was estimated using a channelized Hotelling observer with 10 Laguerre-Gauss channel functions. Results: The “Edge” and “Tissue Contrast” parameters each had an effect on the detectability as calculated by the model observer. The CHO-estimated signal detectability ranged from 2.36 to 2.93 and was highest for “Edge” = 4 and “Tissue Contrast” = −0.15. In general, detectability tended to decrease as “Edge” was increased and as “Tissue Contrast” was increased. A human observer study should be performed to validate the relation to human detection performance. Conclusion: Image processing parameters can affect lung nodule detection performance in radiography. While validation with a human observer study is needed, model observer detectability for common tasks could provide a means for optimizing image processing parameters.
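
    For readers unfamiliar with the observer model, here is a compact Python sketch of a channelized Hotelling observer with Laguerre-Gauss channels (the channel width and other settings are illustrative, not the study's values):

        import numpy as np
        from scipy.special import eval_laguerre

        def lg_channels(n_ch, size, a):
            # Laguerre-Gauss channel templates on a size x size grid, width a.
            y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
            g = 2 * np.pi * (x**2 + y**2) / a**2
            chans = [np.sqrt(2) / a * np.exp(-g / 2) * eval_laguerre(j, g)
                     for j in range(n_ch)]
            return np.stack([c.ravel() for c in chans], axis=1)  # (npix, n_ch)

        def cho_detectability(signal, backgrounds, n_ch=10, a=15.0):
            # SKE detectability of a known (square) signal image against
            # sampled backgrounds: d' = sqrt(dv^T K^-1 dv) in channel space.
            U = lg_channels(n_ch, signal.shape[0], a)
            dv = U.T @ signal.ravel()                  # channelized signal
            v = backgrounds.reshape(len(backgrounds), -1) @ U
            K = np.cov(v, rowvar=False)                # channel covariance
            return float(np.sqrt(dv @ np.linalg.solve(K, dv)))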

  16. Open-loop measurement of data sampling point for SPM

    NASA Astrophysics Data System (ADS)

    Wang, Yueyu; Zhao, Xuezeng

    2006-03-01

    SPM (Scanning Probe Microscope) systems provide "three-dimensional images" with nanometer-level resolution, and some of them can be used as metrology tools. However, SPM images are commonly distorted by non-ideal properties of the SPM's piezoelectric scanner, which reduce metrological accuracy and data repeatability. In order to eliminate this limitation, an "open-loop sampling" method is presented. In this method, the positional values of sampling points in all three directions on the surface of the sample are measured by the position sensor and recorded in the SPM's image file, which replaces the image file from a conventional SPM. Because the positions in the X and Y directions are measured at the same time as the height information in the Z direction, the image distortion caused by scanner locating error can be reduced by a proper image processing algorithm.

  17. Translational MR Neuroimaging of Stroke and Recovery

    PubMed Central

    Mandeville, Emiri T.; Ayata, Cenk; Zheng, Yi; Mandeville, Joseph B.

    2016-01-01

    Multiparametric magnetic resonance imaging (MRI) has become a critical clinical tool for diagnosing focal ischemic stroke severity, staging treatment, and predicting outcome. Imaging during the acute phase focuses on tissue viability in the stroke vicinity, while imaging during recovery requires the evaluation of distributed structural and functional connectivity. Preclinical MRI of experimental stroke models provides validation of non-invasive biomarkers in terms of cellular and molecular mechanisms, while also providing a translational platform for evaluation of prospective therapies. This brief review of translational stroke imaging discusses the acute to chronic imaging transition, the principles underlying common MRI methods employed in stroke research, and experimental results obtained by clinical and preclinical imaging to determine tissue viability, vascular remodeling, structural connectivity of major white matter tracts, and functional connectivity using task-based and resting-state fMRI during the stroke recovery process. PMID:27578048

  18. Quantitative assessment of the impact of biomedical image acquisition on the results obtained from image analysis and processing.

    PubMed

    Koprowski, Robert

    2014-07-04

    Dedicated, automatic algorithms for image analysis and processing are becoming more and more common in medical diagnosis. When creating dedicated algorithms, many factors must be taken into consideration. They are associated with selecting the appropriate algorithm parameters and taking into account the impact of data acquisition on the results obtained. An important feature of algorithms is the possibility of their use in other medical units by other operators. This problem, namely the operator's (acquisition) impact on the results obtained from image analysis and processing, has been shown on a few examples. The analysed images were obtained from a variety of medical devices such as thermal imaging, tomography devices and those working in visible light. The objects of imaging were cellular elements, the anterior segment and fundus of the eye, postural defects and others. In total, almost 200,000 images coming from 8 different medical units were analysed. All image analysis algorithms were implemented in C and Matlab. For various algorithms and methods of medical imaging, the impact of image acquisition on the results obtained is different. There are different levels of algorithm sensitivity to changes in the parameters, for example: (1) for microscope settings and the brightness assessment of cellular elements there is a difference of 8%; (2) for the thyroid ultrasound images there is a difference in marking the thyroid lobe area which results in a brightness assessment difference of 2%. The method of image acquisition in image analysis and processing also affects: (3) the accuracy of determining the temperature in the characteristic areas on the patient's back for the thermal method - error of 31%; (4) the accuracy of finding characteristic points in photogrammetric images when evaluating postural defects - error of 11%; (5) the accuracy of performing ablative and non-ablative treatments in cosmetology - error of 18% for the nose, 10% for the cheeks, and 7% for the forehead. Similarly, when: (6) measuring the anterior eye chamber - there is an error of 20%; (7) measuring the tooth enamel thickness - error of 15%; (8) evaluating the mechanical properties of the cornea during pressure measurement - error of 47%. The paper presents vital, selected issues occurring when assessing the accuracy of designed automatic algorithms for image analysis and processing in bioengineering. The impact of acquisition of images on the problems arising in their analysis has been shown on selected examples. It has also been indicated to which elements of image analysis and processing special attention should be paid in their design.

  19. Single Channel Quantum Color Image Encryption Algorithm Based on HSI Model and Quantum Fourier Transform

    NASA Astrophysics Data System (ADS)

    Gong, Li-Hua; He, Xiang-Tao; Tan, Ru-Chao; Zhou, Zhi-Hong

    2018-01-01

    In order to obtain high-quality color images, it is important to keep the hue component unchanged while emphasizing the intensity or saturation component. As a popular color model, the Hue-Saturation-Intensity (HSI) model is commonly used in image processing. A new single-channel quantum color image encryption algorithm based on the HSI model and the quantum Fourier transform (QFT) is investigated, in which the color components of the original color image are converted to HSI and the logistic map is employed to diffuse the relationships among pixels in the color components. Subsequently, the quantum Fourier transform is exploited to fulfill the encryption. The cipher-text is a combination of a gray image and a phase matrix. Simulations and theoretical analyses demonstrate that the proposed single-channel quantum color image encryption scheme based on the HSI model and quantum Fourier transform is secure and effective.
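
    The classical RGB-to-HSI conversion that the HSI model relies on can be sketched in Python as follows (vectorized over an image with values in [0, 1]; the paper's quantum representation is not reproduced here):

        import numpy as np

        def rgb_to_hsi(rgb):
            # Classical RGB -> HSI: I is the channel mean, S measures the
            # departure from gray, H is an angle measured from the red axis.
            r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
            i = (r + g + b) / 3.0
            s = 1.0 - np.minimum.reduce([r, g, b]) / np.maximum(i, 1e-12)
            num = 0.5 * ((r - g) + (r - b))
            den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
            theta = np.arccos(np.clip(num / den, -1.0, 1.0))
            h = np.where(b <= g, theta, 2 * np.pi - theta) / (2 * np.pi)
            return np.stack([h, s, i], axis=-1)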

  20. [Improvement of magnetic resonance phase unwrapping method based on Goldstein Branch-cut algorithm].

    PubMed

    Guo, Lin; Kang, Lili; Wang, Dandan

    2013-02-01

    The phase information of magnetic resonance (MR) phase images can be used in many MR imaging techniques, but phase wrapping often results in inaccurate phase information, so phase unwrapping is essential for these techniques. In this paper we analyze the causes of errors in phase unwrapping with the commonly used Goldstein branch-cut algorithm and propose an improved algorithm. During the unwrapping process, masking, filtering, a dipole-remover preprocessor, and the Prim minimum-spanning-tree algorithm were introduced to optimize the residues on which the Goldstein branch-cut algorithm depends. Experimental results showed that the residues and branch cuts were efficiently reduced, a continuous unwrapped phase surface was obtained, and the quality of MR phase images was obviously improved with the proposed method.
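
    The residues that the branch-cut algorithm (and the proposed preprocessing) operate on are found by integrating wrapped phase differences around every 2x2 pixel loop; a short Python sketch of this standard computation:

        import numpy as np

        def wrap(p):
            # Wrap phase differences into (-pi, pi].
            return (p + np.pi) % (2 * np.pi) - np.pi

        def residues(phase):
            # Sum wrapped differences around each 2x2 loop; a nonzero result
            # (+1 or -1 cycle) marks a residue that branch cuts must connect.
            d1 = wrap(phase[:-1, 1:] - phase[:-1, :-1])   # top edge
            d2 = wrap(phase[1:, 1:] - phase[:-1, 1:])     # right edge
            d3 = wrap(phase[1:, :-1] - phase[1:, 1:])     # bottom edge
            d4 = wrap(phase[:-1, :-1] - phase[1:, :-1])   # left edge
            return np.rint((d1 + d2 + d3 + d4) / (2 * np.pi)).astype(int)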

  1. Feature Matching of Historical Images Based on Geometry of Quadrilaterals

    NASA Astrophysics Data System (ADS)

    Maiwald, F.; Schneider, D.; Henze, F.; Münster, S.; Niebling, F.

    2018-05-01

    This contribution shows an approach to matching historical images from the photo library of the Saxon State and University Library Dresden (SLUB) in the context of a historical three-dimensional city model of Dresden. In comparison to recent images, historical photography presents diverse factors which make automatic image analysis (feature detection, feature matching and relative orientation of images) difficult. Due to film grain, dust particles or the digitization process, for example, historical images are often covered by noise interfering with the image signal needed for robust feature matching. The presented approach uses quadrilaterals in image space, as these are commonly available in man-made structures and façade images (windows, stones, claddings). It is explained how quadrilaterals are detected in images in general. The properties of the quadrilaterals, as well as their relationships to neighbouring quadrilaterals, are then used for the description and matching of feature points. The results show that most of the matches are robust and correct but still few in number.

  2. Local area networks in an imaging environment.

    PubMed

    Noz, M E; Maguire, G Q; Erdman, W A

    1986-01-01

    There is great interest at present in incorporating image-management systems, popularly referred to as picture archiving and communication systems (PACS), into imaging departments. This paper will describe various aspects of local area networks (LANs) for medical images and will give a definition of terms and a classification of devices by describing a possible system which links various digital image sources through a high-speed data link and a common image format, allows for viewing and processing of all images produced within the complex, and eliminates the transport of films. The status of standards governing LANs, and particularly PACS, along with a proposed image exchange format will be given. Prototype systems, particularly a system for nuclear medicine images, will be presented, as well as the prospects for the immediate future in terms of installations started and commercial products available. A survey of the many questions that arise in the development of a PACS for medical images, and of the presently suggested/adopted answers, will be given.

  3. Developing an Intelligent Automatic Appendix Extraction Method from Ultrasonography Based on Fuzzy ART and Image Processing.

    PubMed

    Kim, Kwang Baek; Park, Hyun Jun; Song, Doo Heon; Han, Sang-suk

    2015-01-01

    Ultrasound examination (US) plays a key role in the diagnosis and management of patients with clinically suspected appendicitis, which is the most common abdominal surgical emergency. Among the various sonographic findings of appendicitis, the outer diameter of the appendix is the most important. Therefore, clear delineation of the appendix on US images is essential. In this paper, we propose a new intelligent method to extract the appendix automatically from abdominal sonographic images as a basic building block of an intelligent tool for medical practitioners. Knowing that the appendix is located in the lower organ area below the bottom fascia line, we apply a series of image processing techniques to find the fascia line correctly. We then apply the fuzzy ART learning algorithm to the organ area in order to extract the appendix accurately. The experiment verifies that the proposed method is highly accurate in extracting the appendix (successful in 38 out of 40 cases).

  4. A Fast Smoothing Algorithm for Post-Processing of Surface Reflectance Spectra Retrieved from Airborne Imaging Spectrometer Data

    PubMed Central

    Gao, Bo-Cai; Liu, Ming

    2013-01-01

    Surface reflectance spectra retrieved from remotely sensed hyperspectral imaging data using radiative transfer models often contain residual atmospheric absorption and scattering effects. The reflectance spectra may also contain minor artifacts due to errors in radiometric and spectral calibrations. We have developed a fast smoothing technique for post-processing of retrieved surface reflectance spectra. In the present spectral smoothing technique, model-derived reflectance spectra are first fit using moving filters derived with a cubic spline smoothing algorithm. A common gain curve, which contains the minor artifacts in the model-derived reflectance spectra, is then derived. This gain curve is finally applied to all of the reflectance spectra in a scene to obtain spectrally smoothed surface reflectance spectra. Results from analysis of hyperspectral imaging data collected with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) are given. Comparisons between the smoothed spectra and those derived with the empirical line method are also presented. PMID:24129022
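
    A simplified Python sketch of the gain-curve idea (the spline parameters and the ratio-averaging rule here are assumptions; the paper's moving-filter construction is more elaborate):

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        def common_gain(wavelengths, spectra, s=1e-4):
            # Smooth each sample reflectance spectrum with a cubic smoothing
            # spline, average the smooth/raw ratios over the samples, and
            # return the scene-common gain curve to apply to every spectrum.
            ratios = []
            for spec in spectra:               # a subset of scene spectra
                smooth = UnivariateSpline(wavelengths, spec, k=3, s=s)(wavelengths)
                ratios.append(smooth / np.maximum(spec, 1e-6))
            return np.mean(ratios, axis=0)

        # gain = common_gain(wl, sample_spectra); smoothed_cube = cube * gain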

  5. Evaluation of Lip Prints on Different Supports Using a Batch Image Processing Algorithm and Image Superimposition.

    PubMed

    Herrera, Lara Maria; Fernandes, Clemente Maia da Silva; Serra, Mônica da Costa

    2018-01-01

    This study aimed to develop and assess an algorithm to facilitate lip print visualization, and to digitally analyze lip prints on different supports by superimposition. It also aimed to classify lip prints according to sex. A batch image processing algorithm was developed which facilitated the identification and extraction of information about lip grooves; however, it performed better for lip print images with a uniform background. Paper and glass slab allowed more correct identifications than glass and both sides of compact discs. There was no significant difference between the type of support and the number of matching structures located in the middle area of the lower lip. There was no evidence of an association between types of lip grooves and sex. Lip groove patterns of type III and type I were the most common for both sexes. The development of systems for lip print analysis is necessary, mainly concerning digital methods. © 2017 American Academy of Forensic Sciences.

  6. Single-snapshot 2D color measurement by plenoptic imaging system

    NASA Astrophysics Data System (ADS)

    Masuda, Kensuke; Yamanaka, Yuji; Maruyama, Go; Nagai, Sho; Hirai, Hideaki; Meng, Lingfei; Tosic, Ivana

    2014-03-01

    Plenoptic cameras enable capture of directional light ray information, thus allowing applications such as digital refocusing, depth estimation, or multiband imaging. One of the most common plenoptic camera architectures contains a microlens array at the conventional image plane and a sensor at the back focal plane of the microlens array. We leverage the multiband imaging (MBI) function of this camera and develop a single-snapshot, single-sensor, high-color-fidelity camera. Our camera is based on a plenoptic system with XYZ filters inserted in the pupil plane of the main lens. To achieve high color measurement precision with this system, we perform an end-to-end optimization of the system model that includes light source information, object information, optical system information, plenoptic image processing, and color estimation processing. The optimized system characteristics are exploited to build an XYZ plenoptic colorimetric camera prototype that achieves high color measurement precision. We describe an application of our colorimetric camera to color shading evaluation of displays and show that it achieves a color accuracy of ΔE<0.01.

  7. Extraction of line properties based on direction fields.

    PubMed

    Kutka, R; Stier, S

    1996-01-01

    The authors present a new set of algorithms for segmenting lines, mainly blood vessels in X-ray images, and extracting properties such as their intensities, diameters, and center lines. The authors developed a tracking algorithm that checks rules taking the properties of vessels into account. The tools even detect veins, arteries, or catheters of two pixels in diameter and with poor contrast. Compared with other algorithms, such as the Canny line detector or anisotropic diffusion, the authors extract a smoother and connected vessel tree without artifacts in the image background. As the tools depend on common intermediate results, they are very fast when used together. The authors' results will support the 3-D reconstruction of the vessel tree from stereoscopic projections. Moreover, the authors make use of their line intensity measure for enhancing and improving the visibility of vessels in 3-D X-ray images. The processed images are intended to support radiologists in diagnosis, radiation therapy planning, and surgical planning. Radiologists verified the improved quality of the processed images and the enhanced visibility of relevant details, particularly fine blood vessels.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zoberi, J.

    Brachytherapy has proven to be an effective treatment option for prostate cancer. Initially, prostate brachytherapy was delivered through permanently implanted low dose rate (LDR) radioactive sources; however, high dose rate (HDR) temporary brachytherapy for prostate cancer is gaining popularity. Needle insertion during prostate brachytherapy is most commonly performed under ultrasound (U/S) guidance; however, treatment planning may be performed utilizing several imaging modalities in either an intra- or post-operative setting. During intra-operative prostate HDR, the needles are imaged during implantation, and planning may be performed in real time. At present, the most common imaging modality utilized for intra-operative prostate HDR is U/S. Alternatively, in the post-operative setting, following needle implantation, patients may be simulated with computed tomography (CT) or magnetic resonance imaging (MRI). Each imaging modality and workflow provides its share of benefits and limitations. Prostate HDR has been adopted in a number of cancer centers across the nation. In this educational session, we will explore the role of U/S, CT, and MRI in HDR prostate brachytherapy. Example workflows and operational details will be shared, and we will discuss how to establish a prostate HDR program in a clinical setting. Learning Objectives: (1) Review prostate HDR techniques based on the imaging modality; (2) Discuss the challenges and pitfalls introduced by the three image-based options for prostate HDR brachytherapy; (3) Review the QA process and learn about the development of clinical workflows for these imaging options at different institutions.

  9. Analysis of Spatial Point Patterns in Nuclear Biology

    PubMed Central

    Weston, David J.; Adams, Niall M.; Russell, Richard A.; Stephens, David A.; Freemont, Paul S.

    2012-01-01

    There is considerable interest in cell biology in determining whether, and to what extent, the spatial arrangement of nuclear objects affects nuclear function. A common approach to address this issue involves analyzing a collection of images produced using some form of fluorescence microscopy. We assume that these images have been successfully pre-processed and that a spatial point pattern representation of the objects of interest within the nuclear boundary is available. Typically in these scenarios, the number of objects per nucleus is low, which has consequences for the ability of standard analysis procedures to demonstrate the existence of spatial preference in the pattern. Broadly, there are two common approaches to look for structure in these spatial point patterns: either the spatial point pattern for each image is analyzed individually, or a simple normalization is performed and the patterns are aggregated. In this paper we demonstrate, using synthetic spatial point patterns drawn from predefined point processes, how difficult it is to distinguish a pattern from complete spatial randomness using these techniques, and hence how easy it is to miss interesting spatial preferences in the arrangement of nuclear objects. The impact of this problem is also illustrated on data related to the configuration of PML nuclear bodies in mammalian fibroblast cells. PMID:22615822
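
    To see the low-count problem concretely, here is a Python sketch of a standard Monte Carlo test of complete spatial randomness on a single pattern (a unit-square window and the mean nearest-neighbour distance as the test statistic are illustrative assumptions):

        import numpy as np
        from scipy.spatial import cKDTree

        def mean_nn(points):
            # k=2: the nearest neighbour other than the point itself.
            d, _ = cKDTree(points).query(points, k=2)
            return d[:, 1].mean()

        def csr_test(points, n_sim=999, rng=None):
            # Compare the observed statistic with its distribution under
            # complete spatial randomness (uniform points in the unit square).
            if rng is None:
                rng = np.random.default_rng(0)
            n = len(points)
            obs = mean_nn(points)
            sims = np.array([mean_nn(rng.random((n, 2))) for _ in range(n_sim)])
            # Two-sided rank p-value; with few points per nucleus this test
            # often fails to reject even for genuinely clustered patterns.
            p_lo = (np.sum(sims <= obs) + 1) / (n_sim + 1)
            p_hi = (np.sum(sims >= obs) + 1) / (n_sim + 1)
            return obs, min(1.0, 2 * min(p_lo, p_hi))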

  10. Image navigation and registration performance assessment tool set for the GOES-R Advanced Baseline Imager and Geostationary Lightning Mapper

    NASA Astrophysics Data System (ADS)

    De Luccia, Frank J.; Houchin, Scott; Porter, Brian C.; Graybill, Justin; Haas, Evan; Johnson, Patrick D.; Isaacson, Peter J.; Reth, Alan D.

    2016-05-01

    The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. For ABI, these metrics are the 3-sigma errors in navigation (NAV), channel-to-channel registration (CCR), frame-to-frame registration (FFR), swath-to-swath registration (SSR), and within frame registration (WIFR) for the Level 1B image products. For GLM, the single metric of interest is the 3-sigma error in the navigation of background images (GLM NAV) used by the system to navigate lightning strikes. 3-sigma errors are estimates of the 99.73rd percentile of the errors accumulated over a 24 hour data collection period. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24 hour evaluation period. Another aspect of the IPATS design that vastly reduces execution time is the off-line propagation of Landsat based truth images to the fixed grid coordinates system for each of the three GOES-R satellite locations, operational East and West and initial checkout locations. This paper describes the algorithmic design and implementation of IPATS and provides preliminary test results.

  11. Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for the GOES-R Advanced Baseline Imager and Geostationary Lightning Mapper

    NASA Technical Reports Server (NTRS)

    DeLuccia, Frank J.; Houchin, Scott; Porter, Brian C.; Graybill, Justin; Haas, Evan; Johnson, Patrick D.; Isaacson, Peter J.; Reth, Alan D.

    2016-01-01

    The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. For ABI, these metrics are the 3-sigma errors in navigation (NAV), channel-to-channel registration (CCR), frame-to-frame registration (FFR), swath-to-swath registration (SSR), and within frame registration (WIFR) for the Level 1B image products. For GLM, the single metric of interest is the 3-sigma error in the navigation of background images (GLM NAV) used by the system to navigate lightning strikes. 3-sigma errors are estimates of the 99.73rd percentile of the errors accumulated over a 24 hour data collection period. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24 hour evaluation period. Another aspect of the IPATS design that vastly reduces execution time is the off-line propagation of Landsat based truth images to the fixed grid coordinates system for each of the three GOES-R satellite locations, operational East and West and initial checkout locations. This paper describes the algorithmic design and implementation of IPATS and provides preliminary test results.

  12. Image Navigation and Registration Performance Assessment Tool Set for the GOES-R Advanced Baseline Imager and Geostationary Lightning Mapper

    NASA Technical Reports Server (NTRS)

    De Luccia, Frank J.; Houchin, Scott; Porter, Brian C.; Graybill, Justin; Haas, Evan; Johnson, Patrick D.; Isaacson, Peter J.; Reth, Alan D.

    2016-01-01

    The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. For ABI, these metrics are the 3-sigma errors in navigation (NAV), channel-to-channel registration (CCR), frame-to-frame registration (FFR), swath-to-swath registration (SSR), and within frame registration (WIFR) for the Level 1B image products. For GLM, the single metric of interest is the 3-sigma error in the navigation of background images (GLM NAV) used by the system to navigate lightning strikes. 3-sigma errors are estimates of the 99.73rd percentile of the errors accumulated over a 24-hour data collection period. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24-hour evaluation period. Another aspect of the IPATS design that vastly reduces execution time is the off-line propagation of Landsat based truth images to the fixed grid coordinates system for each of the three GOES-R satellite locations, operational East and West and initial checkout locations. This paper describes the algorithmic design and implementation of IPATS and provides preliminary test results.

  13. Easier to swallow: pictorial review of structural findings of the pharynx at barium pharyngography.

    PubMed

    Tao, Ting Y; Menias, Christine O; Herman, Thomas E; McAlister, William H; Balfe, Dennis M

    2013-01-01

    Barium pharyngography remains an important diagnostic tool in the evaluation of patients with dysphagia. Pharyngography can not only help detect functional abnormalities but also help identify a wide spectrum of structural abnormalities in children and adults. These structural abnormalities may reflect malignant or nonmalignant oropharyngeal, hypopharyngeal, or laryngeal processes that deform or alter normal coated mucosal surfaces. Therefore, an understanding of the normal appearance of the pharynx at contrast material-enhanced imaging is necessary for accurate detection and interpretation of abnormal findings. Congenital malformations are more typically identified in the younger population; inflammatory and infiltrative diseases, trauma, foreign bodies, and laryngeal cysts can be seen in all age groups; and Zenker and Killian-Jamieson diverticula tend to occur in the older population. Squamous cell carcinoma is by far the most common malignant process, with contrast-enhanced imaging findings that depend on tumor location and morphology. Treatments of head and neck cancers include total laryngectomy and radiation therapy, both of which alter normal anatomy. Patients are usually evaluated immediately after laryngectomy to detect complications such as fistulas; later, pharyngography is useful for identifying and characterizing strictures. Deviation from the expected posttreatment appearance, such as irregular narrowing or mucosal nodularity, should prompt direct visualization to evaluate for recurrence. Contrast-enhanced imaging of the pharynx is commonly used in patients who present with dysphagia, and radiologists should be familiar with the barium pharyngographic appearance of the normal pharyngeal anatomy and of some of the processes that alter normal anatomy. © RSNA, 2013.

  14. A survey of GPU-based acceleration techniques in MRI reconstructions

    PubMed Central

    Wang, Haifeng; Peng, Hanchuan; Chang, Yuchou

    2018-01-01

    Image reconstruction in magnetic resonance imaging (MRI) clinical applications has become increasingly complicated. However, diagnosis and treatment require very fast computational procedures. Modern graphics processing unit (GPU) platforms have been used to make high-performance parallel computation available, and attractive to common consumers, for computing massively parallel reconstruction problems at commodity prices. GPUs have also become more and more important for reconstruction computations, especially as deep learning begins to be applied to MRI reconstruction. The motivation of this survey is to review the image reconstruction schemes of GPU computing for MRI applications and provide a summary reference for researchers in the MRI community. PMID:29675361

  15. A survey of GPU-based acceleration techniques in MRI reconstructions.

    PubMed

    Wang, Haifeng; Peng, Hanchuan; Chang, Yuchou; Liang, Dong

    2018-03-01

    Image reconstruction in magnetic resonance imaging (MRI) clinical applications has become increasingly complicated. However, diagnosis and treatment require very fast computational procedures. Modern graphics processing unit (GPU) platforms have been used to make high-performance parallel computation available, and attractive to common consumers, for computing massively parallel reconstruction problems at commodity prices. GPUs have also become more and more important for reconstruction computations, especially as deep learning begins to be applied to MRI reconstruction. The motivation of this survey is to review the image reconstruction schemes of GPU computing for MRI applications and provide a summary reference for researchers in the MRI community.

  16. Abnormal brain magnetic resonance imaging in two patients with Smith-Magenis syndrome.

    PubMed

    Maya, Idit; Vinkler, Chana; Konen, Osnat; Kornreich, Liora; Steinberg, Tamar; Yeshaya, Josepha; Latarowski, Victoria; Shohat, Mordechai; Lev, Dorit; Baris, Hagit N

    2014-08-01

    Smith-Magenis syndrome (SMS) is a clinically recognizable contiguous gene syndrome ascribed to an interstitial deletion in chromosome 17p11.2. Seventy percent of SMS patients have a common deletion interval spanning 3.5 megabases (Mb). Clinical features of SMS include characteristic mild dysmorphic features, ocular anomalies, short stature, brachydactyly, and hypotonia. SMS patients have a unique neurobehavioral phenotype that includes intellectual disability, self-injurious behavior and severe sleep disturbance. Little has been reported in the medical literature about anatomical brain anomalies in patients with SMS. Here we describe two patients with SMS caused by the common deletion in 17p11.2, diagnosed using chromosomal microarray (CMA). Both patients had a typical clinical presentation and abnormal brain magnetic resonance imaging (MRI) findings. One patient had subependymal periventricular gray matter heterotopia, and the second had a thin corpus callosum, a thin brain stem and hypoplasia of the cerebellar vermis. This report discusses possible abnormal MRI findings in SMS and reviews the literature on brain malformations in SMS. Finally, although structural brain malformations are not a common feature of SMS, we suggest baseline routine brain imaging in patients with SMS in particular, and in patients with chromosomal microdeletion/microduplication syndromes in general. Structural brain malformations in these patients may affect the decision-making process regarding their management. © 2014 Wiley Periodicals, Inc.

  17. Research on interpolation methods in medical image processing.

    PubMed

    Pan, Mei-Sen; Yang, Xiao-Li; Tang, Jing-Tian

    2012-04-01

    Image interpolation is widely used in the field of medical image processing. In this paper, interpolation methods are divided into three groups: filter interpolation, ordinary interpolation and general partial volume interpolation. Some commonly used filter methods for image interpolation are presented, but their interpolation effects need to be further improved. In analyzing and discussing ordinary interpolation, many asymmetrical kernel interpolation methods are proposed; compared with symmetrical kernel ones, the former have some advantages. After analyzing the partial volume and generalized partial volume estimation interpolations, the new concept and constraint conditions of general partial volume interpolation are defined, and several new partial volume interpolation functions are derived. By performing experiments on image scaling, rotation and self-registration, the interpolation methods mentioned in this paper are compared in terms of entropy, peak signal-to-noise ratio, cross entropy, normalized cross-correlation coefficient and running time. Among the filter interpolation methods, the median and B-spline filter interpolations have relatively better interpolating performance. Among the ordinary interpolation methods, on the whole, the symmetrical cubic kernel interpolations demonstrate a strong advantage, especially the symmetrical cubic B-spline interpolation; however, they are very time-consuming and have lower time efficiency. As for the general partial volume interpolation methods, in terms of the total error of image self-registration the symmetrical interpolations provide certain superiority, but considering processing efficiency the asymmetrical interpolations are better.
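
    A quick Python sketch in the spirit of the paper's self-registration experiments: rotate an image forward and back with spline kernels of increasing order and record both accuracy and running time (spline orders 0-5 serve as stand-ins for the kernel families compared; the paper's exact protocol and metrics are not reproduced):

        import time
        import numpy as np
        from scipy.ndimage import rotate

        def psnr(a, b, peak=255.0):
            mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
            return 10 * np.log10(peak ** 2 / mse)

        def compare_kernels(image, angle=10.0):
            # order 0 = nearest, 1 = linear, ..., 5 = quintic B-spline.
            for order in range(6):
                t0 = time.perf_counter()
                out = rotate(rotate(image, angle, order=order, reshape=False),
                             -angle, order=order, reshape=False)
                dt = time.perf_counter() - t0
                print(f"order {order}: PSNR {psnr(image, out):5.1f} dB, {dt:.3f} s")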

  18. A robust color image watermarking algorithm against rotation attacks

    NASA Astrophysics Data System (ADS)

    Han, Shao-cheng; Yang, Jin-feng; Wang, Rui; Jia, Gui-min

    2018-01-01

    A robust digital watermarking algorithm is proposed based on the quaternion wavelet transform (QWT) and discrete cosine transform (DCT) for copyright protection of color images. The luminance component Y of a host color image in YIQ space is decomposed by QWT, and then the coefficients of four low-frequency subbands are transformed by DCT. An original binary watermark, scrambled by an Arnold map and an iterated sine chaotic system, is embedded into the mid-frequency DCT coefficients of the subbands. In order to improve the performance of the proposed algorithm against rotation attacks, a rotation detection scheme is implemented before watermark extraction. The experimental results demonstrate that the proposed watermarking scheme shows strong robustness not only against common image processing attacks but also against arbitrary rotation attacks.
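
    Two ingredients of such schemes can be sketched generically in Python: Arnold-map scrambling of a square binary watermark, and parity quantization of a mid-frequency DCT coefficient (the QWT stage, the YIQ conversion, and the particular coefficient used are omitted or assumed; this is not the paper's exact embedding rule):

        import numpy as np
        from scipy.fft import dctn, idctn

        def arnold(w, iters):
            # Arnold cat map scrambling: (x, y) -> (x + y, x + 2y) mod n.
            n = w.shape[0]
            out = w.copy()
            x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
            for _ in range(iters):
                scr = np.empty_like(out)
                scr[(x + y) % n, (x + 2 * y) % n] = out[x, y]
                out = scr
            return out

        def embed_bit(block, bit, q=24.0):
            # Quantize one mid-frequency DCT coefficient of an 8x8 block so
            # its quantization index parity carries the watermark bit.
            c = dctn(block, norm="ortho")
            k = np.round(c[3, 4] / q)
            if int(k) % 2 != bit:
                k += 1                              # force parity to the bit
            c[3, 4] = k * q
            return idctn(c, norm="ortho")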

  19. Wigner analysis of three dimensional pupil with finite lateral aperture

    PubMed Central

    Chen, Hsi-Hsun; Oh, Se Baek; Zhai, Xiaomin; Tsai, Jui-Chang; Cao, Liang-Cai; Barbastathis, George; Luo, Yuan

    2015-01-01

    A three dimensional (3D) pupil is an optical element, most commonly implemented on a volume hologram, that processes the incident optical field in a 3D fashion. Here we analyze the diffraction properties of a 3D pupil with finite lateral aperture in the 4-f imaging system configuration, using the Wigner Distribution Function (WDF) formulation. Since the 3D imaging pupil is finite in both the lateral and longitudinal directions, the WDF of the volume holographic 4-f imager theoretically predicts distinct Bragg diffraction patterns in phase space. These result in asymmetric profiles of the diffracted coherent point spread function between degenerate diffraction and Bragg diffraction, elucidating the fundamental performance of volume holographic imaging. Experimental measurements are also presented, confirming the theoretical predictions. PMID:25836443

  20. Second harmonic generation imaging of the collagen in myocardium for atrial fibrillation diagnosis

    NASA Astrophysics Data System (ADS)

    Tsai, Ming-Rung; Chiou, Yu-We; Sun, Chi-Kuang

    2009-02-01

    Myocardial fibrosis, a common sequela of cardiac hypertrophy, has been shown to be associated with arrhythmias in experimental models. Some research has indicated that myocardial fibrosis plays an important role in predisposing patients to atrial fibrillation. Second harmonic generation (SHG) is an optically nonlinear coherent process that can image the collagen network. In this presentation, we observe SHG images of the collagen matrix in atrial myocardium and analyze the arrangement of collagen fibers using Fourier-transform analysis. Moreover, by comparing the SHG images of collagen fibers in atrial myocardium between normal sinus rhythm (NSR) and atrial fibrillation (AF), our results indicate that it is possible to establish the relation between myocardial fibrosis and AF.
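
    The Fourier-transform analysis of fiber arrangement can be sketched generically: the angular distribution of spectral energy acts as a proxy for collagen orientation (the dominant spectral direction lies perpendicular to the fibers in image space). This is an illustrative implementation, not the authors' code:

    ```python
    import numpy as np

    def orientation_histogram(image, nbins=36):
        """Angular distribution of 2-D spectral energy (orientation is pi-periodic)."""
        spectrum = np.fft.fftshift(np.fft.fft2(image - image.mean()))
        power = np.abs(spectrum) ** 2
        fy = np.fft.fftshift(np.fft.fftfreq(image.shape[0]))
        fx = np.fft.fftshift(np.fft.fftfreq(image.shape[1]))
        y, x = np.meshgrid(fy, fx, indexing='ij')
        angles = np.arctan2(y, x) % np.pi
        bins = np.clip((angles / np.pi * nbins).astype(int), 0, nbins - 1)
        hist = np.bincount(bins.ravel(), weights=power.ravel(), minlength=nbins)
        return hist / hist.sum()
    ```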

  1. Color preservation for tone reproduction and image enhancement

    NASA Astrophysics Data System (ADS)

    Hsin, Chengho; Lee, Zong Wei; Lee, Zheng Zhan; Shin, Shaw-Jyh

    2014-01-01

    Applications based on luminance processing often face the problem of recovering the original chrominance in the output color image. A common approach to reconstruct a color image from the luminance output is by preserving the original hue and saturation. However, this approach often produces a highly colorful image, which is undesirable. We develop a color preservation method that not only retains the ratios of the input tri-chromatic values but also adjusts the output chroma in an appropriate way. Linearizing the output luminance is the key idea to realize this method. In addition, a lightness difference metric together with a colorfulness difference metric are proposed to evaluate the performance of the color preservation methods. The evaluation shows that the proposed method performs consistently better than the existing approaches.
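
    The chromatic-ratio idea admits a compact sketch. The formula below is the widely used ratio-preserving recoloring with a saturation exponent s to temper the over-colorful look the paper describes; it is a stand-in for, not a reproduction of, the authors' luminance-linearization method:

    ```python
    import numpy as np

    def recolor(rgb_in, y_in, y_out, s=0.7):
        """Rebuild color from a processed luminance channel.

        s = 1 preserves the input tri-chromatic ratios exactly;
        s < 1 reduces output chroma to avoid excessive colorfulness.
        """
        ratio = rgb_in / np.maximum(y_in[..., None], 1e-6)
        return np.clip(ratio ** s * y_out[..., None], 0.0, 1.0)
    ```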

  2. Multimodality Imaging Approach towards Primary Aortic Sarcomas Arising after Endovascular Abdominal Aortic Aneurysm Repair: Case Series Report.

    PubMed

    Kamran, Mudassar; Fowler, Kathryn J; Mellnick, Vincent M; Sicard, Gregorio A; Narra, Vamsi R

    2016-06-01

    Primary aortic neoplasms are rare. Aortic sarcoma arising after endovascular aneurysm repair (EVAR) is a scarce subset of primary aortic malignancies, reports of which are infrequent in the published literature. The diagnosis of aortic sarcoma is challenging due to its non-specific clinical presentation, and the prognosis is poor due to delayed diagnosis, rapid proliferation, and propensity for metastasis. Post-EVAR, aortic sarcomas may mimic other more common aortic processes on surveillance imaging. Radiologists are often unfamiliar with this rare entity, for which multimodality imaging and awareness are invaluable in early diagnosis. A series of three pathologically confirmed cases is presented to illustrate the multimodality imaging features and clinical presentations of aortic sarcoma arising after EVAR.

  3. Automatic segmentation of mammogram and tomosynthesis images

    NASA Astrophysics Data System (ADS)

    Sargent, Dusty; Park, Sun Young

    2016-03-01

    Breast cancer is one of the most common forms of cancer in terms of new cases and deaths, both in the United States and worldwide. However, the survival rate with breast cancer is high if it is detected and treated before it spreads to other parts of the body. The most common screening methods for breast cancer are mammography and digital tomosynthesis, which involve acquiring X-ray images of the breasts that are interpreted by radiologists. The work described in this paper is aimed at optimizing the presentation of mammography and tomosynthesis images to the radiologist, thereby improving the early detection rate of breast cancer and the resulting patient outcomes. Breast cancer tissue has greater density than normal breast tissue, and appears as dense white image regions that are asymmetrical between the breasts. These irregularities are easily seen if the breast images are aligned and viewed side-by-side. However, since the breasts are imaged separately during mammography, the images may be poorly centered and aligned relative to each other, and may not properly focus on the tissue area. Similarly, although a full three dimensional reconstruction can be created from digital tomosynthesis images, the same centering and alignment issues can occur for digital tomosynthesis. Thus, a preprocessing algorithm that aligns the breasts for easy side-by-side comparison has the potential to greatly increase the speed and accuracy of mammogram reading. Likewise, the same preprocessing can improve the results of automatic tissue classification algorithms for mammography. In this paper, we present an automated segmentation algorithm for mammogram and tomosynthesis images that aims to improve the speed and accuracy of breast cancer screening by mitigating the above mentioned problems. Our algorithm uses information in the DICOM header to facilitate preprocessing, and incorporates anatomical region segmentation and contour analysis, along with a hidden Markov model (HMM) for processing the multi-frame tomosynthesis images. The output of the algorithm is a new set of images that have been processed to show only the diagnostically relevant region and align the breasts so that they can be easily compared side-by-side. Our method has been tested on approximately 750 images, including various examples of mammogram, tomosynthesis, and scanned images, and has correctly segmented the diagnostically relevant image region in 97% of cases.

  4. Multi-scale Morphological Image Enhancement of Chest Radiographs by a Hybrid Scheme.

    PubMed

    Alavijeh, Fatemeh Shahsavari; Mahdavi-Nasab, Homayoun

    2015-01-01

    Chest radiography is a common diagnostic imaging test, which contains an enormous amount of information about a patient. However, its interpretation is highly challenging. The accuracy of the diagnostic process is greatly influenced by image processing algorithms; hence enhancement of the images is indispensable in order to improve visibility of the details. This paper aims at improving radiograph parameters such as contrast, sharpness, noise level, and brightness to enhance chest radiographs, making use of a triangulation method. Here, the contrast limited adaptive histogram equalization technique and noise suppression are simultaneously performed in the wavelet domain in a new scheme, followed by morphological top-hat and bottom-hat filtering. A unique implementation of morphological filters allows for adjustment of the image brightness and significant enhancement of the contrast. The proposed method is tested on chest radiographs from the Japanese Society of Radiological Technology database. The results are compared with conventional enhancement techniques such as histogram equalization, contrast limited adaptive histogram equalization, Retinex, and some recently proposed methods to show its strengths. The experimental results reveal that the proposed method can remarkably improve the image contrast while keeping the sensitive chest tissue information, so that radiologists might have a more precise interpretation.
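
    The morphological step has a classic single-scale form: bright small-scale detail (top-hat) is added back while dark small-scale detail (bottom-hat) is subtracted, stretching local contrast. A minimal sketch with an assumed structuring-element size, not the paper's multi-scale hybrid scheme:

    ```python
    import numpy as np
    from scipy import ndimage

    def morphological_enhance(img, size=15):
        """Top-hat/bottom-hat contrast enhancement on an 8-bit radiograph."""
        img = img.astype(float)
        tophat = ndimage.white_tophat(img, size=size)      # bright details
        bottomhat = ndimage.black_tophat(img, size=size)   # dark details
        return np.clip(img + tophat - bottomhat, 0, 255)
    ```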

  6. Localized water reverberation phases and its impact on back-projection images

    NASA Astrophysics Data System (ADS)

    Yue, H.; Castillo, J.; Yu, C.; Meng, L.; Zhan, Z.

    2017-12-01

    Coherent radiators imaged by back-projection (BP) are commonly interpreted as part of the rupture process. Nevertheless, artifacts introduced by structure-related phases are rarely discriminated from the rupture process. In this study, we adopt the logic of empirical Green's function (EGF) analysis to discriminate between rupture and structure effects. We re-examine the waveforms and BP images of the 2012 Mw 7.2 Indian Ocean earthquake and an EGF event (Mw 6.2). The P wave codas of both events present a similar shape with a characteristic period of approximately 10 s, and are back-projected as coherent radiators near the trench. S wave BP does not image energy radiation near the trench. We interpret these coda waves as localized water reverberation phases excited near the trench. We perform 2D waveform modeling using a realistic bathymetry model, and find that the sharp near-trench bathymetry traps the acoustic water waves, forming localized reverberation phases. These waves can be imaged as coherent near-trench radiators with features similar to those in the observations. We present a methodology to discriminate between rupture and propagation effects in BP images, which can serve as a criterion for subevent identification.

  7. Pitfalls and Limitations in the Interpretation of Geophysical Images for Hydrologic Properties and Processes

    NASA Astrophysics Data System (ADS)

    Day-Lewis, F. D.

    2014-12-01

    Geophysical imaging (e.g., electrical, radar, seismic) can provide valuable information for the characterization of hydrologic properties and monitoring of hydrologic processes, as evidenced in the rapid growth of literature on the subject. Geophysical imaging has been used for monitoring tracer migration and infiltration, mapping zones of focused groundwater/surface-water exchange, and verifying emplacement of amendments for bioremediation. Despite the enormous potential for extraction of hydrologic information from geophysical images, there also is potential for misinterpretation and over-interpretation. These concerns are particularly relevant when geophysical results are used within quantitative frameworks, e.g., conversion to hydrologic properties through petrophysical relations, geostatistical estimation and simulation conditioned to geophysical inversions, and joint inversion. We review pitfalls to interpretation associated with limited image resolution, spatially variable image resolution, incorrect data weighting, errors in the timing of measurements, temporal smearing resulting from changes during data acquisition, support-volume/scale effects, and incorrect assumptions or approximations involved in modeling geophysical or other jointly inverted data. A series of numerical and field-based examples illustrate these potential problems. Our goal in this talk is to raise awareness of common pitfalls and present strategies for recognizing and avoiding them.

  8. Advanced magnetic resonance imaging of the physical processes in human glioblastoma.

    PubMed

    Kalpathy-Cramer, Jayashree; Gerstner, Elizabeth R; Emblem, Kyrre E; Andronesi, Ovidiu; Rosen, Bruce

    2014-09-01

    The most common malignant primary brain tumor, glioblastoma multiforme (GBM), is a devastating disease with a grim prognosis. Patient survival is typically less than two years and fewer than 10% of patients survive more than five years. Magnetic resonance imaging (MRI) can have great utility in the diagnosis, grading, and management of patients with GBM as many of the physical manifestations of the pathologic processes in GBM can be visualized and quantified using MRI. Newer MRI techniques such as dynamic contrast enhanced and dynamic susceptibility contrast MRI provide functional information about the tumor hemodynamic status. Diffusion MRI can shed light on tumor cellularity and the disruption of white matter tracts in the proximity of tumors. MR spectroscopy can be used to study new tumor tissue markers such as IDH mutations. MRI is helping to noninvasively explore the link between the molecular basis of gliomas and the imaging characteristics of their physical processes. Here, we review several approaches to MR-based imaging and discuss the potential for these techniques to quantify the physical processes in glioblastoma, including tumor cellularity and vascularity, metabolite expression, and patterns of tumor growth and recurrence. We conclude with challenges and opportunities for further research in applying physical principles to better understand the biologic process in this deadly disease. See all articles in this Cancer Research section, "Physics in Cancer Research." ©2014 American Association for Cancer Research.

  9. Client/server approach to image capturing

    NASA Astrophysics Data System (ADS)

    Tuijn, Chris; Stokes, Earle

    1998-01-01

    The diversity of the digital image capturing devices on the market today is quite astonishing and ranges from low-cost CCD scanners to digital cameras (for both action and stand-still scenes), mid-end CCD scanners for desktop publishing and pre-press applications and high-end CCD flatbed scanners and drum-scanners with photo multiplier technology. Each device and market segment has its own specific needs which explains the diversity of the associated scanner applications. What all those applications have in common is the need to communicate with a particular device to import the digital images; after the import, additional image processing might be needed as well as color management operations. Although the specific requirements for all of these applications might differ considerably, a number of image capturing and color management facilities as well as other services are needed which can be shared. In this paper, we propose a client/server architecture for scanning and image editing applications which can be used as a common component for all these applications. One of the principal components of the scan server is the input capturing module. The specification of the input jobs is based on a generic input device model. Through this model, we abstract away the specific scanner parameters and define the scan job definitions by a number of absolute parameters. As a result, scan job definitions will be less dependent on a particular scanner and have a more universal meaning. In this context, we also elaborate on the interaction of the generic parameters and the color characterization (i.e., the ICC profile). Other topics that are covered are the scheduling and parallel processing capabilities of the server, the image processing facilities, the interaction with the ICC engine, the communication facilities (both in-memory and over the network) and the different client architectures (stand-alone applications, TWAIN servers, plug-ins, OLE or Apple-event driven applications). This paper is structured as follows. In the introduction, we further motivate the need for a scan server-based architecture. In the second section, we give a brief architectural overview of the scan server and the other components it is connected to. The third section presents the generic model for input devices as well as the image processing model; the fourth section describes the different forms the scanning applications (or modules) can take. In the last section, we briefly summarize the presented material and point out trends for future development.

  10. Optical computing and image processing using photorefractive gallium arsenide

    NASA Technical Reports Server (NTRS)

    Cheng, Li-Jen; Liu, Duncan T. H.

    1990-01-01

    Recent experimental results on matrix-vector multiplication and multiple four-wave mixing using GaAs are presented. Attention is given to a simple concept of using two overlapping holograms in GaAs to do two matrix-vector multiplication processes operating in parallel with a common input vector. This concept can be used to construct high-speed, high-capacity, reconfigurable interconnection and multiplexing modules, important for optical computing and neural-network applications.

  11. Automatic detection and severity measurement of eczema using image processing.

    PubMed

    Alam, Md Nafiul; Munia, Tamanna Tabassum Khan; Tavakolian, Kouhyar; Vasefi, Fartash; MacKinnon, Nick; Fazel-Rezai, Reza

    2016-08-01

    Chronic skin diseases like eczema may lead to severe health and financial consequences for patients if not detected and controlled early. Early measurement of disease severity, combined with a recommendation for skin protection and use of appropriate medication, can prevent the disease from worsening. Current diagnosis can be costly and time-consuming. In this paper, an automatic eczema detection and severity measurement model is presented using modern image processing and computer algorithms. The system can successfully detect regions of eczema and classify the identified region as mild or severe based on image color and texture features. The model then automatically measures the skin parameters used in the most common assessment tool, the Eczema Area and Severity Index (EASI), by computing the eczema affected area score, eczema intensity score, and body region score, allowing both patients and physicians to accurately assess the affected skin.
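
    The EASI computation itself is a weighted sum over four body regions. A sketch under the standard adult weights (the image-processing pipeline that produces the per-region scores is the paper's contribution and is not shown):

    ```python
    # Adult region multipliers from the standard EASI tool.
    REGION_WEIGHTS = {
        "head_neck": 0.1, "trunk": 0.3, "upper_limbs": 0.2, "lower_limbs": 0.4,
    }

    def easi(region_scores):
        """region_scores maps region -> (area_score 0-6, intensity_score 0-12).

        Intensity is the sum of four signs (erythema, induration,
        excoriation, lichenification), each graded 0-3.
        """
        total = 0.0
        for region, (area, intensity) in region_scores.items():
            total += REGION_WEIGHTS[region] * area * intensity
        return total  # 0 (clear) to 72 (maximal severity)
    ```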

  12. Automatic analysis of microscopic images of red blood cell aggregates

    NASA Astrophysics Data System (ADS)

    Menichini, Pablo A.; Larese, Mónica G.; Riquelme, Bibiana D.

    2015-06-01

    Red blood cell aggregation is one of the most important factors in blood viscosity at stasis or at very low rates of flow. The basic structure of aggregates is a linear array of cells, commonly termed rouleaux. Enhanced or abnormal aggregation is seen in clinical conditions such as diabetes and hypertension, producing alterations in the microcirculation, some of which can be analyzed through the characterization of aggregated cells. Image processing and analysis for the characterization of RBC aggregation have frequently been done manually or semi-automatically using interactive tools. We propose a system that processes images of RBC aggregation and automatically obtains the characterization and quantification of the different types of RBC aggregates. The present technique could be adapted as a routine in hemorheological and clinical biochemistry laboratories because this automatic method is rapid, efficient and economical, and at the same time independent of the user performing the analysis, ensuring repeatability.

  13. Nonpuerperal mastitis and subareolar abscess of the breast.

    PubMed

    Kasales, Claudia J; Han, Bing; Smith, J Stanley; Chetlen, Alison L; Kaneda, Heather J; Shereef, Serene

    2014-02-01

    The purpose of this article is to show radiologists how to readily recognize nonpuerperal subareolar abscess and its complications in order to help reduce the time to definitive therapy and improve patient care. To achieve this purpose, the various theories of pathogenesis and the associated histopathologic features are reviewed; the typical clinical characteristics are detailed in contrast to those seen in lactational abscess and inflammatory breast cancer; the common imaging findings are described with emphasis on the sonographic features; correlative pathologic findings are presented to reinforce the imaging findings as they pertain to disease origins; and the various treatment options are reviewed. Nonpuerperal subareolar mastitis and abscess is a benign breast entity often associated with prolonged morbidity. Through better understanding of the underlying disease process, the imaging, physical, and clinical findings of this rare process can be more readily recognized and treatment options expedited, improving patient care.

  14. The use of vision-based image quality metrics to predict low-light performance of camera phones

    NASA Astrophysics Data System (ADS)

    Hultgren, B.; Hertel, D.

    2010-01-01

    Small digital camera modules such as those in mobile phones have become ubiquitous. Their low-light performance is of utmost importance since a high percentage of images are made under low lighting conditions where image quality failure may occur due to blur, noise, and/or underexposure. These modes of image degradation are not mutually exclusive: they share common roots in the physics of the imager, the constraints of image processing, and the general trade-off situations in camera design. A comprehensive analysis of failure modes is needed in order to understand how their interactions affect overall image quality. Low-light performance is reported for DSLR, point-and-shoot, and mobile phone cameras. The measurements target blur, noise, and exposure error. Image sharpness is evaluated from three different physical measurements: static spatial frequency response, handheld motion blur, and statistical information loss due to image processing. Visual metrics for sharpness, graininess, and brightness are calculated from the physical measurements, and displayed as orthogonal image quality metrics to illustrate the relative magnitude of image quality degradation as a function of subject illumination. The impact of each of the three sharpness measurements on overall sharpness quality is displayed for different light levels. The power spectrum of the statistical information target is a good representation of natural scenes, thus providing a defined input signal for the measurement of power-spectrum based signal-to-noise ratio to characterize overall imaging performance.

  15. MilxXplore: a web-based system to explore large imaging datasets

    PubMed Central

    Bourgeat, P; Dore, V; Villemagne, V L; Rowe, C C; Salvado, O; Fripp, J

    2013-01-01

    Objective As large-scale medical imaging studies are becoming more common, there is an increasing reliance on automated software to extract quantitative information from these images. As the size of the cohorts keeps increasing with large studies, there is also a need for tools that allow results from automated image processing and analysis to be presented in a way that enables fast and efficient quality checking, tagging and reporting on cases in which automatic processing failed or was problematic. Materials and methods MilxXplore is an open source visualization platform, which provides an interface to navigate and explore imaging data in a web browser, giving the end user the opportunity to perform quality control and reporting in a user-friendly, collaborative and efficient way. Discussion Compared to existing software solutions that often provide an overview of the results at the subject's level, MilxXplore pools the results of individual subjects and time points together, allowing easy and efficient navigation and browsing through the different acquisitions of a subject over time, and comparing the results against the rest of the population. Conclusions MilxXplore is fast, flexible and allows remote quality checks of processed imaging data, facilitating data sharing and collaboration across multiple locations, and can be easily integrated into a cloud computing pipeline. With the growing trend of open data and open science, such a tool will become increasingly important to share and publish results of imaging analysis. PMID:23775173

  16. Technical note: DIRART--A software suite for deformable image registration and adaptive radiotherapy research.

    PubMed

    Yang, Deshan; Brame, Scott; El Naqa, Issam; Aditya, Apte; Wu, Yu; Goddu, S Murty; Mutic, Sasa; Deasy, Joseph O; Low, Daniel A

    2011-01-01

    Recent years have witnessed tremendous progress in image-guided radiotherapy technology and a growing interest in the possibilities for adapting treatment planning and delivery over the course of treatment. One obstacle faced by the research community has been the lack of a comprehensive open-source software toolkit dedicated to adaptive radiotherapy (ART). To address this need, the authors have developed a software suite called the Deformable Image Registration and Adaptive Radiotherapy Toolkit (DIRART). DIRART is an open-source toolkit developed in MATLAB. It is designed in an object-oriented style with a focus on user-friendliness, features, and flexibility. It contains four classes of DIR algorithms, including the newer inverse consistency algorithms that provide consistent displacement vector fields in both directions. It also contains common ART functions, an integrated graphical user interface, a variety of visualization and image-processing features, dose metric analysis functions, and interface routines. These interface routines make DIRART a powerful complement to the Computational Environment for Radiotherapy Research (CERR) and popular image-processing toolkits such as ITK. DIRART provides a set of image processing/registration algorithms and postprocessing functions to facilitate the development and testing of DIR algorithms. It also offers a good amount of options for DIR results visualization, evaluation, and validation. By exchanging data with treatment planning systems via DICOM-RT files and CERR, and by bringing image registration algorithms closer to radiotherapy applications, DIRART is potentially a convenient and flexible platform that may facilitate ART and DIR research.

  17. Content Based Image Retrieval by Using Color Descriptor and Discrete Wavelet Transform.

    PubMed

    Ashraf, Rehan; Ahmed, Mudassar; Jabbar, Sohail; Khalid, Shehzad; Ahmad, Awais; Din, Sadia; Jeon, Gwangil

    2018-01-25

    Due to recent developments in technology, the complexity of multimedia has significantly increased, and the retrieval of similar multimedia content is an open research problem. Content-Based Image Retrieval (CBIR) is a process that provides a framework for image search, and low-level visual features are commonly used to retrieve images from the image database. The basic requirement in any image retrieval process is to sort the images by close similarity in terms of visual appearance. Color, shape and texture are examples of low-level image features. Features play a significant role in image processing: the powerful representation of an image is known as the feature vector, and feature extraction techniques are applied to obtain features that are useful in classifying and recognizing images. As features define the behavior of an image, they determine storage requirements, classification efficiency and time consumption. In this paper, we discuss various types of features and feature extraction techniques, and explain in which scenario each feature extraction technique performs better. The effectiveness of the CBIR approach is fundamentally based on feature extraction, and in image processing tasks such as object recognition and image retrieval the feature descriptor is among the most essential steps. The main idea of CBIR is to search a dataset for images related to a query image by using distance metrics. The proposed method performs image retrieval based on YCbCr color with a Canny edge histogram and the discrete wavelet transform; the combination of the edge histogram and the discrete wavelet transform increases the performance of the image retrieval framework for content-based search. The performance of different wavelets is additionally compared to determine the suitability of a specific wavelet function for image retrieval. The proposed algorithm is implemented and tested on the Wang image database. For image retrieval, Artificial Neural Networks (ANN) are used and applied to a standard dataset in the CBIR domain. The performance of the proposed descriptors is assessed by computing both precision and recall values and compared with other proposed methods to demonstrate the superiority of our method. The efficiency and effectiveness of the proposed approach outperform existing research in terms of average precision and recall values.
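
    A rough sketch of the kind of descriptor described here, combining a Canny edge measure on the luma channel with first-level DWT subband statistics (the exact histogram binning, wavelet choice, and ANN retrieval stage of the paper are not reproduced; the bin counts and thresholds below are assumptions):

    ```python
    import cv2
    import numpy as np
    import pywt

    def feature_vector(path, wavelet='db1'):
        """Edge-plus-wavelet feature vector for one image."""
        bgr = cv2.imread(path)
        y = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)[:, :, 0]  # luma channel
        edges = cv2.Canny(y, 100, 200)
        edge_hist = cv2.calcHist([edges], [0], None, [2], [0, 256]).ravel()
        cA, (cH, cV, cD) = pywt.dwt2(y.astype(float), wavelet)
        subband_stats = [np.mean(np.abs(s)) for s in (cA, cH, cV, cD)]
        return np.concatenate([edge_hist / edge_hist.sum(), subband_stats])
    ```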

  18. Survey of image-guided radiotherapy use in Australia.

    PubMed

    Batumalai, Vikneswary; Holloway, Lois Charlotte; Kumar, Shivani; Dundas, Kylie; Jameson, Michael Geoffrey; Vinod, Shalini Kavita; Delaney, Geoff P

    2017-06-01

    This study aimed to evaluate the current use of imaging technologies for the planning and delivery of radiotherapy (RT) in Australia. An online survey was emailed to all Australian RT centres in August 2015. The survey inquired about imaging practices during planning and treatment delivery processes. Participants were asked about the types of image-guided RT (IGRT) technologies and the disease sites they were used for, reasons for implementation, frequency of imaging and future plans for IGRT use in their department. The survey was completed by 71% of Australian RT centres. All respondents had access to computed tomography (CT) simulators and regularly co-registered the following scans to the RT planning CT to aid in tumour delineation: diagnostic CT (50%), diagnostic magnetic resonance imaging (MRI) (95%), planning MRI (34%), planning positron emission tomography (PET) (26%) and diagnostic PET (97%). The main reason for in-room IGRT implementation was the use of highly conformal techniques, while the most common reason for under-utilisation was lack of equipment capability. The most commonly used IGRT modalities were kilovoltage (kV) cone-beam CT (CBCT) (97%), kV electronic portal image (EPI) (89%) and megavoltage (MV) EPI (75%). Overall, participants planned to increase IGRT use in planning (33%) and treatment delivery (36%). IGRT is widely used among Australian RT centres. On the basis of future plans of respondents, the installation of new imaging modalities is expected to increase for both planning and treatment. © 2016 The Royal Australian and New Zealand College of Radiologists.

  19. MO-B-BRC-00: Prostate HDR Treatment Planning - Considering Different Imaging Modalities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2016-06-15

    Brachytherapy has proven to be an effective treatment option for prostate cancer. Initially, prostate brachytherapy was delivered through permanently implanted low dose rate (LDR) radioactive sources; however, high dose rate (HDR) temporary brachytherapy for prostate cancer is gaining popularity. Needle insertion during prostate brachytherapy is most commonly performed under ultrasound (U/S) guidance; however, treatment planning may be performed utilizing several imaging modalities in either an intra- or post-operative setting. During intra-operative prostate HDR, the needles are imaged during implantation, and planning may be performed in real time. At present, the most common imaging modality utilized for intra-operative prostate HDR is U/S. Alternatively, in the post-operative setting, following needle implantation, patients may be simulated with computed tomography (CT) or magnetic resonance imaging (MRI). Each imaging modality and workflow provides its share of benefits and limitations. Prostate HDR has been adopted in a number of cancer centers across the nation. In this educational session, we will explore the role of U/S, CT, and MRI in HDR prostate brachytherapy. Example workflows and operational details will be shared, and we will discuss how to establish a prostate HDR program in a clinical setting. Learning Objectives: Review prostate HDR techniques based on the imaging modality; discuss the challenges and pitfalls introduced by the three image-based options for prostate HDR brachytherapy; review the QA process and learn about the development of clinical workflows for these imaging options at different institutions.

  20. Data processing from lobster eye type optics

    NASA Astrophysics Data System (ADS)

    Nentvich, Ondrej; Stehlikova, Veronika; Urban, Martin; Hudec, Rene; Sieger, Ladislav

    2017-05-01

    Wolter I optics are commonly used for imaging in the X-ray spectrum. This system uses two reflections; at higher energies it is not very efficient, but it has very good optical resolution. Lobster eye optics are another type that also uses two reflections to focus rays, in Schmidt's or Angel's arrangement. It is also possible to use lobster eye optics as two independent one-dimensional optics. This paper describes the advantages of one-dimensional and two-dimensional lobster eye optics in Schmidt's arrangement and the associated data processing, namely determining the number of sources in a wide field of view. Two-dimensional (2D) optics are suitable for detecting the number of point X-ray sources and their magnitudes, but long exposures are necessary because a 2D system has much lower transmissivity, due to the double reflection, compared to one-dimensional (1D) optics. Not only for this reason, two 1D optics are better suited to sources of fainter magnitudes. In this case, additional image processing is necessary to achieve a 2D image. This article describes an approach to image reconstruction and the advantages of two 1D optics without significant losses of transmissivity.

  1. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; Russell, Samuel S.

    2012-01-01

    Objective Develop a software application utilizing high performance computing techniques, including general purpose graphics processing units (GPGPUs), for the analysis and visualization of large thermographic data sets. Over the past several years, an increasing effort among scientists and engineers to utilize graphics processing units (GPUs) in a more general purpose fashion is allowing for previously unobtainable levels of computation by individual workstations. As data sets grow, the methods to work with them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU, which yield significant increases in performance. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Image processing is one area where GPUs are being used to greatly increase the performance of certain analysis and visualization techniques.

  2. Cloud Engineering Principles and Technology Enablers for Medical Image Processing-as-a-Service

    PubMed Central

    Bao, Shunxing; Plassard, Andrew J.; Landman, Bennett A.; Gokhale, Aniruddha

    2017-01-01

    Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches since they can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. To that end, a cloud-based “medical image processing-as-a-service” offers promise in utilizing the ecosystem of Apache Hadoop, which is a flexible framework providing distributed, scalable, fault tolerant storage and parallel computational modules, and HBase, which is a NoSQL database built atop Hadoop’s distributed file system. Despite this promise, HBase’s load distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). This paper makes two contributions to address these concerns by describing key cloud engineering principles and technology enhancements we made to the Apache Hadoop ecosystem for medical imaging applications. First, we propose a row-key design for HBase, which is a necessary step that is driven by the hierarchical organization of imaging data. Second, we propose a novel data allocation policy within HBase to strongly enforce collocation of hierarchically related imaging data. The proposed enhancements accelerate data processing by minimizing network usage and localizing processing to machines where the data already exist. Moreover, our approach is amenable to the traditional scan, subject, and project-level analysis procedures, and is compatible with standard command line/scriptable image processing software. Experimental results for an illustrative sample of imaging data reveal that our new HBase policy results in a three-fold time improvement in conversion of classic DICOM to NiFTI file formats when compared with the default HBase region split policy, and nearly a six-fold improvement over a commonly available network file system (NFS) approach even for relatively small file sets. Moreover, file access latency is lower than network attached storage. PMID:28884169
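
    The row-key idea can be sketched in a few lines: concatenating the hierarchy levels into one lexicographically ordered key keeps all slices of a scan (and all scans of a session, and so on) contiguous, so HBase co-locates them. The delimiter and zero-padding below are illustrative assumptions, not the paper's exact key layout:

    ```python
    def make_row_key(project, subject, session, scan, slice_idx):
        """Hierarchical HBase row key; lexicographic order follows the hierarchy."""
        return "{}|{}|{}|{}|{:06d}".format(project, subject, session, scan, slice_idx)

    # e.g. make_row_key("ProjA", "S0042", "ses01", "T1w", 17)
    #   -> "ProjA|S0042|ses01|T1w|000017"
    ```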

  3. Application of an enhanced fuzzy algorithm for MR brain tumor image segmentation

    NASA Astrophysics Data System (ADS)

    Hemanth, D. Jude; Vijila, C. Kezi Selva; Anitha, J.

    2010-02-01

    Image segmentation is one of the significant digital image processing techniques commonly used in the medical field. One specific application is tumor detection in abnormal Magnetic Resonance (MR) brain images. Fuzzy approaches are widely preferred for tumor segmentation, as they generally yield superior results in terms of accuracy. But most fuzzy algorithms suffer from a slow convergence rate, which makes such systems practically infeasible. In this work, the application of a modified Fuzzy C-means (FCM) algorithm to tackle the convergence problem is explored in the context of brain image segmentation. This modified FCM algorithm employs the concept of quantization to improve the convergence rate besides yielding excellent segmentation efficiency. The algorithm is evaluated on real abnormal MR brain images collected from radiologists. A comprehensive feature vector is extracted from these images and used for the segmentation technique. An extensive feature selection process is performed, which reduces the convergence time and improves segmentation efficiency. After segmentation, the tumor portion is extracted from the segmented image. A comparative analysis in terms of segmentation efficiency and convergence rate is performed between the conventional FCM and the modified FCM. Experimental results show superior performance for the modified FCM algorithm. Thus, this work highlights the application of the modified algorithm for brain tumor detection in abnormal MR brain images.
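
    For reference, the standard FCM iteration that the paper modifies looks as follows (the quantization step that accelerates convergence is the paper's contribution and is not reproduced here; X is an (n_pixels, n_features) array):

    ```python
    import numpy as np

    def fcm(X, c=3, m=2.0, iters=100, tol=1e-5):
        """Plain fuzzy C-means: alternate membership and center updates."""
        n = X.shape[0]
        U = np.random.dirichlet(np.ones(c), size=n)  # memberships, rows sum to 1
        for _ in range(iters):
            Um = U ** m
            centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
            U_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1))).sum(axis=2)
            if np.abs(U_new - U).max() < tol:
                return centers, U_new
            U = U_new
        return centers, U
    ```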

  4. Cell segmentation in phase contrast microscopy images via semi-supervised classification over optics-related features.

    PubMed

    Su, Hang; Yin, Zhaozheng; Huh, Seungil; Kanade, Takeo

    2013-10-01

    Phase-contrast microscopy is one of the most common and convenient imaging modalities to observe long-term multi-cellular processes, which generates images by the interference of light passing through transparent specimens and the background medium with different retarded phases. Despite many years of study, computer-aided phase contrast microscopy analysis of cell behavior is challenged by image quality and artifacts caused by phase contrast optics. Addressing the unsolved challenges, the authors propose (1) a phase contrast microscopy image restoration method that produces phase retardation features, which are intrinsic features of phase contrast microscopy, and (2) a semi-supervised learning based algorithm for cell segmentation, which is a fundamental task for various cell behavior analyses. Specifically, the image formation process of phase contrast microscopy images is first computationally modeled with a dictionary of diffraction patterns; as a result, each pixel of a phase contrast microscopy image is represented by a linear combination of the bases, which we call phase retardation features. Images are then partitioned into phase-homogeneous atoms by clustering neighboring pixels with similar phase retardation features. Consequently, cell segmentation is performed via a semi-supervised classification technique over the phase-homogeneous atoms. Experiments demonstrate that the proposed approach produces quality segmentation of individual cells and outperforms previous approaches. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. DOCLIB: a software library for document processing

    NASA Astrophysics Data System (ADS)

    Jaeger, Stefan; Zhu, Guangyu; Doermann, David; Chen, Kevin; Sampat, Summit

    2006-01-01

    Most researchers would agree that research in the field of document processing can benefit tremendously from a common software library through which institutions are able to develop and share research-related software and applications across academic, business, and government domains. However, despite several attempts in the past, the research community still lacks a widely-accepted standard software library for document processing. This paper describes a new library called DOCLIB, which tries to overcome the drawbacks of earlier approaches. Many of DOCLIB's features are unique either in themselves or in their combination with others, e.g. the factory concept for support of different image types, the juxtaposition of image data and metadata, or the add-on mechanism. We cherish the hope that DOCLIB serves the needs of researchers better than previous approaches and will readily be accepted by a larger group of scientists.

  6. Motion-Blurred Particle Image Restoration for On-Line Wear Monitoring

    PubMed Central

    Peng, Yeping; Wu, Tonghai; Wang, Shuo; Kwok, Ngaiming; Peng, Zhongxiao

    2015-01-01

    On-line images of wear debris contain important information for real-time condition monitoring, and a dynamic imaging technique can eliminate particle overlaps commonly found in static images, for instance, acquired using ferrography. However, dynamic wear debris images captured in a running machine are unavoidably blurred because the particles in lubricant are in motion. Hence, it is difficult to acquire reliable images of wear debris with an adequate resolution for particle feature extraction. In order to obtain sharp wear particle images, an image processing approach is proposed. Blurred particles were first separated from the static background by utilizing a background subtraction method. Second, the point spread function was estimated using the power cepstrum to determine the blur direction and length. Then, the Wiener filter algorithm was adopted to perform image restoration to improve the image quality. Finally, experiments were conducted with a large number of dynamic particle images to validate the effectiveness of the proposed method, and the performance of the approach was also evaluated. This study provides a new practical approach to acquire clear images for on-line wear monitoring. PMID:25856328
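
    The restoration stage can be sketched with a linear motion-blur PSF and a frequency-domain Wiener filter. The cepstrum-based estimation of blur length and angle is the paper's step and is not shown; here they are assumed known, and k is a constant noise-to-signal ratio:

    ```python
    import numpy as np

    def motion_psf(length, angle_deg, shape):
        """Linear motion-blur PSF of given length and direction."""
        psf = np.zeros(shape)
        cy, cx = shape[0] // 2, shape[1] // 2
        a = np.deg2rad(angle_deg)
        for t in np.linspace(-length / 2, length / 2, int(length) * 4):
            y = int(round(cy + t * np.sin(a)))
            x = int(round(cx + t * np.cos(a)))
            if 0 <= y < shape[0] and 0 <= x < shape[1]:
                psf[y, x] = 1.0
        return psf / psf.sum()

    def wiener_deblur(blurred, psf, k=0.01):
        """Wiener deconvolution in the frequency domain."""
        H = np.fft.fft2(np.fft.ifftshift(psf))
        G = np.fft.fft2(blurred)
        F = np.conj(H) / (np.abs(H) ** 2 + k) * G
        return np.real(np.fft.ifft2(F))
    ```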

  7. A sparse representation-based approach for copy-move image forgery detection in smooth regions

    NASA Astrophysics Data System (ADS)

    Abdessamad, Jalila; ElAdel, Asma; Zaied, Mourad

    2017-03-01

    Copy-move image forgery is the act of cloning a restricted region in the image and pasting it once or multiple times within that same image. This procedure intends to cover a certain feature, probably a person or an object, in the processed image or emphasize it through duplication. Consequences of this malicious operation can be unexpectedly harmful. Hence, the present paper proposes a new approach that automatically detects copy-move forgery (CMF). In particular, this work addresses a common open issue in the CMF research literature: detecting CMF within smooth areas. Indeed, the proposed approach represents the image blocks as a sparse linear combination of pre-learned bases (a mixture of texture- and color-wise small patches), which allows a robust description of smooth patches. The reported experimental results demonstrate the effectiveness of the proposed approach in identifying the forged regions in CM attacks.

  8. A state-of-the-art review on segmentation algorithms in intravascular ultrasound (IVUS) images.

    PubMed

    Katouzian, Amin; Angelini, Elsa D; Carlier, Stéphane G; Suri, Jasjit S; Navab, Nassir; Laine, Andrew F

    2012-09-01

    Over the past two decades, intravascular ultrasound (IVUS) image segmentation has remained a challenge for researchers while the use of this imaging modality is rapidly growing in catheterization procedures and in research studies. IVUS provides cross-sectional grayscale images of the arterial wall and the extent of atherosclerotic plaques with high spatial resolution in real time. In this paper, we review recently developed image processing methods for the detection of media-adventitia and luminal borders in IVUS images acquired with different transducers operating at frequencies ranging from 20 to 45 MHz. We discuss methodological challenges, lack of diversity in reported datasets, and weaknesses of quantification metrics that make IVUS segmentation still an open problem despite all efforts. In conclusion, we call for a common reference database, validation metrics, and ground-truth definition with which new and existing algorithms could be benchmarked.

  9. Camera sensor arrangement for crop/weed detection accuracy in agronomic images.

    PubMed

    Romeo, Juan; Guerrero, José Miguel; Montalvo, Martín; Emmi, Luis; Guijarro, María; Gonzalez-de-Santos, Pablo; Pajares, Gonzalo

    2013-04-02

    In Precision Agriculture, images coming from camera-based sensors are commonly used for weed identification and crop line detection, either to apply specific treatments or for vehicle guidance purposes. Accuracy of identification and detection is an important issue to be addressed in image processing. There are two main types of parameters affecting the accuracy of the images, namely: (a) extrinsic, related to the sensor's positioning in the tractor; (b) intrinsic, related to the sensor specifications, such as CCD resolution, focal length or iris aperture, among others. Moreover, in agricultural applications, the uncontrolled illumination, existing in outdoor environments, is also an important factor affecting the image accuracy. This paper is exclusively focused on two main issues, always with the goal to achieve the highest image accuracy in Precision Agriculture applications, making the following two main contributions: (a) camera sensor arrangement, to adjust extrinsic parameters and (b) design of strategies for controlling the adverse illumination effects.

  10. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable fast turn-around times, which are often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
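
    The scalability analysis mentioned here follows Amdahl's law, which bounds the speedup by the serial fraction of the pipeline. A one-line sketch (the 0.95 parallel fraction is an illustrative value, not the paper's measurement):

    ```python
    def amdahl_speedup(parallel_fraction, cores):
        """Amdahl's law: speedup = 1 / ((1 - p) + p / N)."""
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

    # A 95%-parallel pipeline on 12 cores yields about 7.7x, versus the ideal 12x:
    # amdahl_speedup(0.95, 12) -> 7.74...
    ```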

  11. Seismic data enhancement and regularization using finite offset Common Diffraction Surface (CDS) stack

    NASA Astrophysics Data System (ADS)

    Garabito, German; Cruz, João Carlos Ribeiro; Oliva, Pedro Andrés Chira; Söllner, Walter

    2017-01-01

    The Common Reflection Surface stack is a robust method for simulating zero-offset and common-offset sections with high accuracy from multi-coverage seismic data. For simulating common-offset sections, the Common-Reflection-Surface stack method uses a hyperbolic traveltime approximation that depends on five kinematic parameters for each selected sample point of the common-offset section to be simulated. The main challenge of this method is to find a computationally efficient data-driven optimization strategy for accurately determining the five kinematic stacking parameters on which each sample of the stacked common-offset section depends. Several authors have applied multi-step strategies to obtain the optimal parameters by combining different pre-stack data configurations. Recently, other authors used one-step data-driven strategies based on a global optimization for estimating simultaneously the five parameters from multi-midpoint and multi-offset gathers. In order to increase the computational efficiency of the global optimization process, we use in this paper a reduced form of the Common-Reflection-Surface traveltime approximation that depends on only four parameters, the so-called Common Diffraction Surface traveltime approximation. By analyzing the convergence of both objective functions and the data enhancement effect after applying the two traveltime approximations to the Marmousi synthetic dataset and a real land dataset, we conclude that the Common-Diffraction-Surface approximation is more efficient within certain aperture limits and preserves at the same time a high image accuracy. The preserved image quality is also observed in a direct comparison after applying both approximations for simulating common-offset sections on noisy pre-stack data.

  12. ASAP (Automatic Software for ASL Processing): A toolbox for processing Arterial Spin Labeling images.

    PubMed

    Mato Abad, Virginia; García-Polo, Pablo; O'Daly, Owen; Hernández-Tamames, Juan Antonio; Zelaya, Fernando

    2016-04-01

    The method of Arterial Spin Labeling (ASL) has experienced a significant rise in its application to functional imaging, since it is the only technique capable of measuring blood perfusion in a truly non-invasive manner. Currently, there are no commercial packages for processing ASL data and there is no recognized standard for normalizing ASL data to a common frame of reference. This work describes a new Automated Software for ASL Processing (ASAP) that can automatically process several ASL datasets. ASAP includes functions for all stages of image pre-processing: quantification, skull-stripping, co-registration, partial volume correction and normalization. To assess the applicability and validity of the toolbox, this work shows its application in the study of hypoperfusion in a sample of healthy subjects at risk of progressing to Alzheimer's disease. ASAP requires limited user intervention, minimizing the possibility of random and systematic errors, and produces cerebral blood flow maps that are ready for statistical group analysis. The software is easy to operate and results in excellent quality of spatial normalization. The results found in this evaluation study are consistent with previous studies that find decreased perfusion in Alzheimer's patients in similar regions and demonstrate the applicability of ASAP. Copyright © 2015 Elsevier Inc. All rights reserved.
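
    ASAP's quantification stage computes perfusion from the label/control difference; here is a sketch using the widely cited single-compartment (p)CASL formula with typical 3T parameter values (the exact model and defaults in ASAP may differ):

    ```python
    import numpy as np

    def cbf_pcasl(dM, M0, pld=1.8, tau=1.8, t1b=1.65, alpha=0.85, lam=0.9):
        """CBF in ml/100 g/min from the control-label difference dM and
        equilibrium magnetization M0 (times in seconds)."""
        num = 6000.0 * lam * dM * np.exp(pld / t1b)
        den = 2.0 * alpha * t1b * M0 * (1.0 - np.exp(-tau / t1b))
        return num / den
    ```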

  13. Real-Time Measurement of Width and Height of Weld Beads in GMAW Processes.

    PubMed

    Pinto-Lopera, Jesús Emilio; S T Motta, José Mauricio; Absi Alfaro, Sadek Crisostomo

    2016-09-15

    Associated with weld quality, the weld bead geometry is one of the most important parameters in welding processes. It is a significant requirement in a welding project, especially in automatic welding systems where a specific width, height, or penetration of the weld bead is needed. This paper presents a novel technique for real-time measurement of the width and height of weld beads in gas metal arc welding (GMAW) using a single high-speed camera and a long-pass optical filter in a passive vision system. The measuring method is based on digital image processing techniques, and the image calibration process is based on projective transformations. The measurement process takes less than 3 milliseconds per image, which allows a transfer rate of more than 300 frames per second. The proposed methodology can be used in any metal transfer mode of a gas metal arc welding process and does not have occlusion problems. The responses of the measurement system presented here are in good agreement with off-line data collected by a common laser-based 3D scanner. Each measurement was compared using a statistical Welch's t-test of the null hypothesis, which in no case exceeded the significance level threshold of α = 0.01, validating the results and the performance of the proposed vision system.
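
    Calibration by projective transformation can be sketched with OpenCV: four reference marks of known metric position define a homography that maps measured pixel coordinates to millimetres. The coordinates below are illustrative, not the paper's calibration data:

    ```python
    import cv2
    import numpy as np

    # Pixel positions of four reference marks and their real-world
    # positions in millimetres (example values).
    src = np.float32([[102, 88], [518, 91], [530, 402], [95, 398]])
    dst = np.float32([[0, 0], [40, 0], [40, 30], [0, 30]])

    H = cv2.getPerspectiveTransform(src, dst)  # 3x3 homography

    def to_mm(px_points):
        """Map measured pixel points (N, 2) to metric coordinates."""
        pts = np.float32(px_points).reshape(-1, 1, 2)
        return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
    ```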

  14. A novel double patterning approach for 30nm dense holes

    NASA Astrophysics Data System (ADS)

    Hsu, Dennis Shu-Hao; Wang, Walter; Hsieh, Wei-Hsien; Huang, Chun-Yen; Wu, Wen-Bin; Shih, Chiang-Lin; Shih, Steven

    2011-04-01

    Double Patterning Technology (DPT) was commonly accepted as the major workhorse beyond water immersion lithography for sub-38nm half-pitch line patterning before EUV production. For dense hole patterning, classical DPT employs self-aligned spacer deposition and uses the intersection of horizontal and vertical lines to define the desired hole patterns. However, the increase in manufacturing cost and process complexity is tremendous. Several innovative approaches have been proposed and tested to address the manufacturing and technical challenges. A novel process of double-patterned pillars combined with image reverse is proposed for the realization of low-cost dense holes in the 30nm DRAM node. The nature of pillar formation lithography provides much better optical contrast compared to the counterpart hole patterning with similar CD requirements. By the utilization of a reliable freezing process, double-patterned pillars can be readily implemented. A novel image reverse process at the last stage defines the hole patterns with high fidelity. In this paper, several freezing processes for the construction of the double-patterned pillars were tested and compared, and 30nm double-patterned pillars were demonstrated successfully. A variety of different image reverse processes will be investigated and discussed for their pros and cons. An economic approach with optimized lithography performance will be proposed for application to the 30nm DRAM node.

  15. Mapping of Polar Areas Based on High-Resolution Satellite Images: The Example of the Henryk Arctowski Polish Antarctic Station

    NASA Astrophysics Data System (ADS)

    Kurczyński, Zdzisław; Różycki, Sebastian; Bylina, Paweł

    2017-12-01

    To produce orthophotomaps or digital elevation models, the most commonly used method is photogrammetric measurement. However, the use of aerial images is not easy in polar regions for logistical reasons. In these areas, remote sensing data acquired from satellite systems are much more useful. This paper presents the basic technical requirements of the different products that can be obtained (in particular orthoimages and digital elevation models (DEMs)) using Very-High-Resolution Satellite (VHRS) images. The study area was situated in the vicinity of the Henryk Arctowski Polish Antarctic Station on the Western Shore of Admiralty Bay, King George Island, Western Antarctic. Image processing was applied to two triplets of images acquired by Pléiades 1A and 1B in March 2013. The results of orthoimage generation from the Pléiades systems without control points showed that the proposed method can achieve a Root Mean Squared Error (RMSE) of 3-9 m. The presented Pléiades images are useful for thematic remote sensing analysis and measurement processing. Using satellite images to produce remote sensing products for polar regions is highly beneficial and reliable, and compares well with more expensive airborne photographs or field surveys.

  16. The universal toolbox thermal imager

    NASA Astrophysics Data System (ADS)

    Hollock, Steve; Jones, Graham; Usowicz, Paul

    2003-09-01

    The introduction of Microsoft Pocket PC 2000/2002 has seen a standardisation of the operating systems used by the majority of PDA manufacturers. This, coupled with the recent price reductions associated with these devices, has led to a rapid increase in the sales of such units; their use is now common in industrial, commercial and domestic applications throughout the world. This paper describes the results of a programme to develop a thermal imager that will interface directly to all of these units so as to take advantage of the existing and future installed base of such devices. The imager currently interfaces with virtually any Pocket PC that provides the necessary processing, display and storage capability; as an alternative, the output of the unit can be visualised and processed in real time using a PC or laptop computer. In future, the open architecture employed by this imager will allow it to support all mobile computing devices, including phones and PDAs. The imager has been designed for one-handed or two-handed operation so that it may be pointed at awkward angles or used in confined spaces; this flexibility of use, coupled with the extensive feature range and exceedingly low cost of the imager, is extending the marketplace for thermal imaging from military and professional, through industrial, to the commercial and domestic marketplaces.

  17. Design and development of an ultrasound calibration phantom and system

    NASA Astrophysics Data System (ADS)

    Cheng, Alexis; Ackerman, Martin K.; Chirikjian, Gregory S.; Boctor, Emad M.

    2014-03-01

    Image-guided surgery systems are often used to provide surgeons with informational support. Due to several unique advantages such as ease of use, real-time image acquisition, and no ionizing radiation, ultrasound is a common medical imaging modality used in image-guided surgery systems. To perform advanced forms of guidance with ultrasound, such as virtual image overlays or automated robotic actuation, an ultrasound calibration process must be performed. This process recovers the rigid body transformation between a tracked marker attached to the ultrasound transducer and the ultrasound image. A phantom or model with known geometry is also required. In this work, we design and test an ultrasound calibration phantom and software. The two main considerations in this work are utilizing our knowledge of ultrasound physics to design the phantom and delivering an easy-to-use calibration process to the user. We explore the use of a three-dimensional printer to create the phantom in its entirety without need for user assembly. We have also developed software to automatically segment the three-dimensional printed rods from the ultrasound image by leveraging knowledge about the shape and scale of the phantom. In this work, we present preliminary results from using this phantom to perform ultrasound calibration. To test the efficacy of our method, we match the projection of the points segmented from the image to the known model and calculate the sum of squared differences between corresponding points for several combinations of motion generation and filtering methods. The best-performing combination of motion and filtering techniques had an error of 1.56 mm and a standard deviation of 1.02 mm.
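
    As a rough illustration of how such a calibration can be scored, the sketch below applies a candidate rigid transform to segmented image points and compares them with the known phantom geometry; the points, transform, and units are hypothetical placeholders, not the authors' data.

      # Schematic check of an ultrasound calibration result: transform the
      # segmented points and score them against the known phantom model.
      import numpy as np

      def rigid_apply(R, t, pts):
          # Apply rotation R and translation t to an (N, 3) point array.
          return pts @ R.T + t

      model_pts = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 15.0, 0.0]])  # mm
      segmented = np.array([[0.4, -0.2, 0.1], [10.3, 0.2, -0.1], [0.2, 15.4, 0.3]])

      R, t = np.eye(3), np.zeros(3)          # candidate calibration (identity here)
      residuals = rigid_apply(R, t, segmented) - model_pts
      errors = np.linalg.norm(residuals, axis=1)
      print(f"mean error {errors.mean():.2f} mm, std {errors.std():.2f} mm")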

  18. A Neuroimaging Web Services Interface as a Cyber Physical System for Medical Imaging and Data Management in Brain Research: Design Study.

    PubMed

    Lizarraga, Gabriel; Li, Chunfei; Cabrerizo, Mercedes; Barker, Warren; Loewenstein, David A; Duara, Ranjan; Adjouadi, Malek

    2018-04-26

    Structural and functional brain images are essential imaging modalities for medical experts to study brain anatomy. These images are typically visually inspected by experts. To analyze images without any bias, they must first be converted to numeric values. Many software packages are available to process the images, but they are complex and difficult to use. The software packages are also hardware intensive. The results obtained after processing vary depending on the native operating system used and its associated software libraries; data processed in one system cannot typically be combined with data on another system. The aim of this study was to fulfill the neuroimaging community’s need for a common platform to store, process, explore, and visualize their neuroimaging data and results using Neuroimaging Web Services Interface: a series of processing pipelines designed as a cyber physical system for neuroimaging and clinical data in brain research. Neuroimaging Web Services Interface accepts magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and functional magnetic resonance imaging. These images are processed using existing and custom software packages. The output is then stored as image files, tabulated files, and MySQL tables. The system, made up of a series of interconnected servers, is password-protected and is securely accessible through a Web interface and allows (1) visualization of results and (2) downloading of tabulated data. All results were obtained using our processing servers in order to maintain data validity and consistency. The design is responsive and scalable. The processing pipeline started from a FreeSurfer reconstruction of structural magnetic resonance imaging images. The FreeSurfer and regional standardized uptake value ratio calculations were validated using Alzheimer’s Disease Neuroimaging Initiative input images, and the results were posted at the Laboratory of Neuro Imaging data archive. Leading researchers in the field of Alzheimer’s Disease and epilepsy have used the interface to access and process the data and visualize the results. Tabulated results with unique visualization mechanisms help guide more informed diagnosis and expert rating, providing a truly unique multimodal imaging platform that combines magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and resting state functional magnetic resonance imaging. A quality control component was reinforced through expert visual rating involving at least 2 experts. To our knowledge, there is no validated Web-based system offering all the services that Neuroimaging Web Services Interface offers. The intent of Neuroimaging Web Services Interface is to create a tool for clinicians and researchers with a keen interest in multimodal neuroimaging. More importantly, Neuroimaging Web Services Interface significantly augments the Alzheimer’s Disease Neuroimaging Initiative data, especially since our data contain a large cohort of Hispanic normal controls and Alzheimer’s Disease patients. The obtained results could be scrutinized visually or through the tabulated forms, informing researchers on subtle changes that characterize the different stages of the disease. ©Gabriel Lizarraga, Chunfei Li, Mercedes Cabrerizo, Warren Barker, David A Loewenstein, Ranjan Duara, Malek Adjouadi. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 26.04.2018.

  19. Use of a portable hyperspectral imaging system for monitoring the efficacy of sanitation procedures in produce processing plants

    USDA-ARS?s Scientific Manuscript database

    Cleaning and sanitation of production surfaces and equipment plays a critical role in lowering the risk of food borne illness associated with consumption of fresh-cut produce. Visual observation and sampling methods including ATP tests and cell culturing are commonly used to monitor the effectivenes...

  20. Mental Visualization of Objects from Cross-Sectional Images

    ERIC Educational Resources Information Center

    Wu, Bing; Klatzky, Roberta L.; Stetten, George D.

    2012-01-01

    We extended the classic anorthoscopic viewing procedure to test a model of visualization of 3D structures from 2D cross-sections. Four experiments were conducted to examine key processes described in the model, localizing cross-sections within a common frame of reference and spatiotemporal integration of cross sections into a hierarchical object…

  1. Opaque for the Reader but Transparent for the Brain: Neural Signatures of Morphological Complexity

    ERIC Educational Resources Information Center

    Meinzer, Marcus; Lahiri, Aditi; Flaisch, Tobias; Hannemann, Ronny; Eulitz, Carsten

    2009-01-01

    Within linguistics, words with a complex internal structure are commonly assumed to be decomposed into their constituent morphemes (e.g., un-help-ful). Nevertheless, an ongoing debate concerns the brain structures that subserve this process. Using functional magnetic resonance imaging, the present study varied the internal complexity of derived…

  2. TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badal, A; Zbijewski, W; Bolch, W

    Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation speeds on the order of 10⁷ x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represents a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging. It is very common to validate new Monte Carlo simulations by replicating previously published simulation results of similar experiments.
    This process, however, is commonly problematic due to the lack of sufficient information in the published reports of previous work to be able to replicate the simulation in detail. To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail, with all necessary data on material composition, source, geometry, scoring and other parameters provided. The results of these simulations when performed with the four most common publicly available Monte Carlo packages are also provided in tabular form. The Task Group 195 Report will be useful for researchers needing to validate their Monte Carlo work, and for trainees needing to learn Monte Carlo simulation methods. In this symposium we will review the recent advancements in high-performance computing hardware enabling the reduction in computational resources needed for Monte Carlo simulations in medical imaging. We will review variance reduction techniques commonly applied in Monte Carlo simulations of medical imaging systems and present implementation strategies for efficient combination of these techniques with GPU acceleration. Trade-offs involved in Monte Carlo acceleration by means of denoising and “sparse sampling” will be discussed. A method for rapid scatter correction in cone-beam CT (<5 min/scan) will be presented as an illustration of the simulation speeds achievable with optimized Monte Carlo simulations. We will also discuss the development, availability, and capability of the various combinations of computational phantoms for Monte Carlo simulation of medical imaging systems. Finally, we will review some examples of experimental validation of Monte Carlo simulations and will present the AAPM Task Group 195 Report. Learning Objectives: Describe the advances in hardware available for performing Monte Carlo simulations in high performance computing environments. Explain variance reduction, denoising and sparse sampling techniques available for reduction of computational time needed for Monte Carlo simulations of medical imaging. List and compare the computational anthropomorphic phantoms currently available for more accurate assessment of medical imaging parameters in Monte Carlo simulations. Describe experimental methods used for validation of Monte Carlo simulations in medical imaging. Describe the AAPM Task Group 195 Report and its use for validation and teaching of Monte Carlo simulations in medical imaging.
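
    As a toy illustration of the x-ray tracking such codes perform (and that variance reduction techniques accelerate), the sketch below estimates primary transmission through a uniform slab by sampling exponential free paths. The attenuation coefficient and geometry are assumed values; this is not a validated physics code.

      # Toy Monte Carlo estimate of primary transmission through a slab.
      import numpy as np

      rng = np.random.default_rng(0)
      mu = 0.2          # linear attenuation coefficient, 1/cm (assumed)
      thickness = 5.0   # slab thickness, cm (assumed)
      n = 1_000_000

      # Sample free path lengths from the exponential interaction law.
      paths = rng.exponential(1.0 / mu, size=n)
      transmitted = np.count_nonzero(paths > thickness)

      estimate = transmitted / n
      analytic = np.exp(-mu * thickness)
      print(f"MC {estimate:.4f} vs analytic {analytic:.4f}")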

  3. An effective detection algorithm for region duplication forgery in digital images

    NASA Astrophysics Data System (ADS)

    Yavuz, Fatih; Bal, Abdullah; Cukur, Huseyin

    2016-04-01

    Powerful image editing tools are now common and easy to use. This makes it possible to forge digital images by adding or removing information. In order to detect forgeries of this type, such as region duplication, we present an effective algorithm based on fixed-size block computation and the discrete wavelet transform (DWT). In this approach, the original image is divided into fixed-size blocks, and then the wavelet transform is applied for dimension reduction. Each block is processed by the Fourier transform and represented by circular regions. Four features are extracted from each block. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks are detected according to comparison metric results. The experimental results show that the proposed algorithm is computationally efficient owing to its fixed-size circular block architecture.
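
    The block-matching core of such a detector can be sketched as follows; the quadrant-mean feature used here is a simplified stand-in for the paper's DWT/Fourier features, and the block size and tolerance are illustrative.

      # Simplified block matching for region-duplication detection: extract a
      # small feature per fixed-size block, sort lexicographically, and flag
      # near-identical neighbors in the sorted order.
      import numpy as np

      def find_duplicates(gray, block=16, tol=1e-3):
          h, w = gray.shape
          feats, coords = [], []
          for y in range(0, h - block + 1, block):
              for x in range(0, w - block + 1, block):
                  b = gray[y:y + block, x:x + block].astype(float)
                  q = block // 2
                  # Crude 4-value feature: quadrant means (stand-in for DWT/FFT features).
                  feats.append([b[:q, :q].mean(), b[:q, q:].mean(),
                                b[q:, :q].mean(), b[q:, q:].mean()])
                  coords.append((y, x))
          feats = np.array(feats)
          order = np.lexsort(feats.T[::-1])      # lexicographic sort of feature vectors
          pairs = []
          for i, j in zip(order[:-1], order[1:]):
              if np.allclose(feats[i], feats[j], atol=tol):
                  pairs.append((coords[i], coords[j]))
          return pairs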

  4. Small blob identification in medical images using regional features from optimum scale.

    PubMed

    Zhang, Min; Wu, Teresa; Bennett, Kevin M

    2015-04-01

    Recent advances in medical imaging technology have greatly enhanced imaging-based diagnosis, which requires computationally efficient and accurate algorithms to process the images (e.g., measure the objects) for quantitative assessment. In this research, we are interested in one type of imaging object: small blobs. Examples of small blob objects are cells in histopathology images, glomeruli in MR images, etc. This problem is particularly challenging because the small blobs often have inhomogeneous intensity distributions and indistinct boundaries against the background. Yet, in general, these blobs have similar sizes. Motivated by this finding, we propose a novel detector termed Hessian-based Laplacian of Gaussian (HLoG) using scale space theory as the foundation. As in most imaging detectors, the image is first smoothed via LoG. Hessian analysis is then launched to identify the single optimal scale on which a presegmentation is conducted. The advantage of the Hessian process is that it is capable of delineating the blobs. As a result, regional features can be retrieved. These features enable an unsupervised clustering algorithm for postpruning, which should be more robust and sensitive than the traditional threshold-based postpruning commonly used in most imaging detectors. To test the performance of the proposed HLoG, two sets of 2-D grey medical images are studied. HLoG is compared against three state-of-the-art detectors: generalized LoG, Radial-Symmetry and LoG, using precision, recall, and F-score metrics. We observe that HLoG statistically outperforms the compared detectors.
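
    For reference, the standard multiscale LoG detection that HLoG builds on is available in scikit-image. The sketch below is this baseline (not the authors' HLoG, which adds Hessian analysis and clustering-based postpruning), with a hypothetical input file and illustrative parameters.

      # Baseline multiscale LoG blob detection with scikit-image.
      import numpy as np
      from skimage import io
      from skimage.feature import blob_log

      image = io.imread("cells.png", as_gray=True)   # hypothetical input image
      blobs = blob_log(image, min_sigma=2, max_sigma=8, num_sigma=7, threshold=0.05)
      # Each row is (row, col, sigma); blob radius is approximately sqrt(2) * sigma.
      radii = blobs[:, 2] * np.sqrt(2)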

  5. Autonomic markers of emotional processing: skin sympathetic nerve activity in humans during exposure to emotionally charged images.

    PubMed

    Brown, Rachael; James, Cheree; Henderson, Luke A; Macefield, Vaughan G

    2012-01-01

    The sympathetic innervation of the skin primarily subserves thermoregulation, but the system has also been commandeered as a means of expressing emotion. While it is known that the level of skin sympathetic nerve activity (SSNA) is affected by anxiety, the majority of emotional studies have utilized the galvanic skin response as a means of inferring increases in SSNA. The purpose of the present study was to characterize the changes in SSNA when showing subjects neutral or emotionally charged images from the International Affective Picture System (IAPS). SSNA was recorded via tungsten microelectrodes inserted into cutaneous fascicles of the common peroneal nerve in ten subjects. Neutral images, positively charged images (erotica) or negatively charged images (mutilation) were presented in blocks of fifteen images of a specific type, each block lasting 2 min. Images of erotica or mutilation were presented in a quasi-random fashion, each block following a block of neutral images. Both images of erotica and images of mutilation caused significant increases in SSNA, but the increases were greater for mutilation. The increases in SSNA were often coupled with sweat release and cutaneous vasoconstriction; however, these markers were not always consistent with the SSNA increases. We conclude that SSNA, comprising cutaneous vasoconstrictor and sudomotor activity, increases with both positively and negatively charged emotional images. Measurement of SSNA provides a more comprehensive assessment of sympathetic outflow to the skin than does the use of sweat release alone as a marker of emotional processing.

  6. Spaceborne electronic imaging systems

    NASA Technical Reports Server (NTRS)

    1971-01-01

    Criteria and recommended practices for the design of the spaceborne elements of electronic imaging systems are presented. A spaceborne electronic imaging system is defined as a device that collects energy in some portion of the electromagnetic spectrum with detector(s) whose direct output is an electrical signal that can be processed (using direct transmission or delayed transmission after recording) to form a pictorial image. This definition encompasses both image tube systems and scanning point-detector systems. The intent was to collect the design experience and recommended practice of the several systems possessing the common denominator of acquiring images from space electronically and to maintain the system viewpoint rather than pursuing specialization in devices. The devices may be markedly different physically, but each was designed to provide a particular type of image within particular limitations. Performance parameters which determine the type of system selected for a given mission and which influence the design include: Sensitivity, Resolution, Dynamic range, Spectral response, Frame rate/bandwidth, Optics compatibility, Image motion, Radiation resistance, Size, Weight, Power, and Reliability.

  7. An application of viola jones method for face recognition for absence process efficiency

    NASA Astrophysics Data System (ADS)

    Rizki Damanik, Rudolfo; Sitanggang, Delima; Pasaribu, Hendra; Siagian, Hendrik; Gulo, Frisman

    2018-04-01

    Attendance records are the documents a company uses to record the arrival time of each employee. The most common problems with fingerprint machines are slow sensor response and failure to recognize a finger. Employees can be made late for work by difficulties with the fingerprint system: registering attendance takes about 3-5 minutes when the finger is wet or in poor condition. To overcome this problem, this research utilizes facial recognition for the attendance process, using the Viola-Jones method. During processing, the RGB face image is converted by histogram equalization before the recognition stage. The result of this research is that attendance registration can be completed in less than 1 second, at a maximum slope of about ±70° and a distance of 20-200 cm. After implementing facial recognition, the attendance process is more efficient, taking less than 1 minute.
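
    The detection step of such a system can be sketched with OpenCV's stock Viola-Jones cascade; the input image is a hypothetical placeholder, and the parameters are typical defaults rather than the paper's settings.

      # Viola-Jones face detection with OpenCV's bundled Haar cascade.
      import cv2

      cascade = cv2.CascadeClassifier(
          cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

      frame = cv2.imread("employee.jpg")                 # hypothetical input
      gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
      gray = cv2.equalizeHist(gray)                      # histogram equalization step
      faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
      for (x, y, w, h) in faces:
          cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)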

  8. Targeted and untargeted-metabolite profiling to track the compositional integrity of ginger during processing using digitally-enhanced HPTLC pattern recognition analysis.

    PubMed

    Ibrahim, Reham S; Fathy, Hoda

    2018-03-30

    This work tracks the impact of commonly applied post-harvest and industrial processing practices on the compositional integrity of ginger rhizome. Untargeted metabolite profiling was performed using a digitally-enhanced HPTLC method in which the chromatographic fingerprints were extracted using ImageJ software and then analysed with multivariate Principal Component Analysis (PCA) for pattern recognition. A targeted approach was applied using a new, validated, simple and fast HPTLC image analysis method for simultaneous quantification of the officially recognized markers 6-, 8-, 10-gingerol and 6-shogaol in conjunction with chemometric Hierarchical Clustering Analysis (HCA). The results of both targeted and untargeted metabolite profiling revealed that the peeling, drying, and storage employed during processing have a great influence on the ginger chemo-profile; the different forms of processed ginger should not be used interchangeably. Moreover, it is deemed necessary to consider the holistic metabolic profile for comprehensive evaluation of ginger during processing. Copyright © 2018. Published by Elsevier B.V.
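
    The untargeted pattern-recognition step can be sketched as PCA on a samples-by-intensity-bins fingerprint matrix; the data here are random placeholders, not the paper's HPTLC measurements.

      # Sketch of PCA pattern recognition on HPTLC fingerprint intensities
      # (rows = ginger samples, columns = intensity bins along the track).
      import numpy as np
      from sklearn.decomposition import PCA

      fingerprints = np.random.rand(24, 500)    # hypothetical: 24 samples x 500 bins
      scores = PCA(n_components=2).fit_transform(fingerprints)
      # Plotting scores[:, 0] vs scores[:, 1] groups samples by processing history.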

  9. The importance of ray pathlengths when measuring objects in maximum intensity projection images.

    PubMed

    Schreiner, S; Dawant, B M; Paschal, C B; Galloway, R L

    1996-01-01

    It is important to understand any process that affects medical data. Once the data have changed from the original form, one must consider the possibility that the information contained in the data has also changed. In general, false negative and false positive diagnoses caused by this post-processing must be minimized. Medical imaging is one area in which post-processing is commonly performed, but there is often little or no discussion of how these algorithms affect the data. This study uncovers some interesting properties of maximum intensity projection (MIP) algorithms, which are commonly used in the post-processing of magnetic resonance (MR) and computed tomography (CT) angiographic data. The appearance of the width of vessels and the extent of malformations such as aneurysms is of interest to clinicians. This study will show how MIP algorithms interact with the shape of the object being projected. MIPs can make objects appear thinner in the projection than in the original data set and can also alter the shape of the profile of the object seen in the original data. These effects have consequences for width-measuring algorithms, which will be discussed. Each projected intensity is dependent upon the pathlength of the ray from which the projected pixel arises. The morphology (shape and intensity profile) of an object will change the pathlength that each ray experiences. This is termed the pathlength effect. In order to demonstrate the pathlength effect, simple computer models of an imaged vessel were created. Additionally, a static MR phantom verified that the derived equation for the projection-plane probability density function (pdf) predicts the projection-plane intensities well (R² = 0.96). Finally, examples of projections through in vivo MR angiography and CT angiography data are presented.
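
    The projection itself is a one-line operation: each projected pixel keeps only the brightest voxel along its ray, which is exactly why object widths and intensity profiles can change. A minimal NumPy sketch, with a random stand-in volume:

      # Maximum intensity projection of a 3-D volume along one axis.
      import numpy as np

      volume = np.random.rand(64, 256, 256)   # hypothetical MR angiography volume
      mip = volume.max(axis=0)                # project along the first axis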

  10. Extraction and analysis of the image in the sight field of comparison goniometer to measure IR mirrors assembly

    NASA Astrophysics Data System (ADS)

    Wang, Zhi-shan; Zhao, Yue-jin; Li, Zhuo; Dong, Liquan; Chu, Xuhong; Li, Ping

    2010-11-01

    The comparison goniometer is widely used to measure and inspect small angles, angle differences, and the parallelism of two surfaces. However, a comparison goniometer is commonly read by the operator inspecting the ocular with one eye, and reading an old goniometer equipped with only one adjustable ocular is difficult. In the fabrication of an IR reflecting mirror assembly, a common comparison goniometer is used to measure the angle errors between two neighboring assembled mirrors. In this paper, a quick image-based reading technique is proposed for the comparison goniometer used to inspect the parallelism of mirrors in a mirror assembly. A digital camera, a comparison goniometer, and a computer are used to construct a reading system; the image of the sight field in the comparison goniometer is extracted and recognized to obtain the angular positions of the reflecting surfaces to be measured. In order to obtain the interval distance between the scale lines, a particular technique, the left-peak-first method, based on local intensity peaks in the true-color image, is proposed. A program written in VC++6.0 has been developed to perform the color digital image processing.
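
    The scale-line location step can be sketched as peak-finding on a 1-D intensity profile of the sight-field image; the synthetic profile and parameters below are illustrative, and the paper's left-peak-first rule would then select the left edge of each peak group.

      # Locate scale lines as local peaks of a 1-D intensity profile.
      import numpy as np
      from scipy.signal import find_peaks

      x = np.arange(400)
      profile = 100 + 50 * (np.sin(x / 8.0) > 0.95)   # synthetic scale-line profile
      peaks, _ = find_peaks(profile, prominence=10, distance=5)
      spacing = np.diff(peaks)                        # intervals between scale lines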

  11. Independent component model for cognitive functions of multiple subjects using [15O]H2O PET images.

    PubMed

    Park, Hae-Jeong; Kim, Jae-Jin; Youn, Tak; Lee, Dong Soo; Lee, Myung Chul; Kwon, Jun Soo

    2003-04-01

    An independent component model of multiple subjects' positron emission tomography (PET) images is proposed to explore the overall functional components involved in a task and to explain subject-specific variations of metabolic activities under altered experimental conditions, utilizing the independent component analysis (ICA) concept. As PET images represent time-compressed activities of several cognitive components, we derived a mathematical model to decompose functional components from cross-sectional images based on two fundamental hypotheses: (1) all subjects share basic functional components that are common to subjects and spatially independent of each other in relation to the given experimental task, and (2) all subjects share common functional components across tasks, which are also spatially independent. The variations of hemodynamic activities according to subjects or tasks can be explained by variations in the usage weights of the functional components. We investigated the plausibility of the model using serial cognitive experiments of simple object perception, object recognition, two-back working memory, and divided attention of a syntactic process. We found that the independent component model satisfactorily explained the functional components involved in the task, and we discuss here the application of ICA to multiple subjects' PET images to explore the functional association of brain activations. Copyright 2003 Wiley-Liss, Inc.
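
    A schematic version of the decomposition uses FastICA with voxels treated as samples, so the recovered components are spatially independent; the data below are random placeholders rather than PET scans.

      # Spatial ICA sketch: scans = weights x spatially independent maps.
      import numpy as np
      from sklearn.decomposition import FastICA

      scans = np.random.rand(40, 20000)          # hypothetical: 40 scans x 20k voxels
      ica = FastICA(n_components=5, random_state=0)
      maps = ica.fit_transform(scans.T)          # (voxels x components): independent maps
      weights = ica.mixing_                      # (scans x components): usage weights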

  12. An optical clearing technique for plant tissues allowing deep imaging and compatible with fluorescence microscopy.

    PubMed

    Warner, Cherish A; Biedrzycki, Meredith L; Jacobs, Samuel S; Wisser, Randall J; Caplan, Jeffrey L; Sherrier, D Janine

    2014-12-01

    We report on a nondestructive clearing technique that enhances transmission of light through specimens from diverse plant species, opening unique opportunities for microscope-enabled plant research. After clearing, plant organs and thick tissue sections are amenable to deep imaging. The clearing method is compatible with immunocytochemistry techniques and can be used in concert with common fluorescent probes, including widely adopted protein tags such as GFP, which has fluorescence that is preserved during the clearing process. © 2014 American Society of Plant Biologists. All Rights Reserved.

  13. Content-addressable read/write memories for image analysis

    NASA Technical Reports Server (NTRS)

    Snyder, W. E.; Savage, C. D.

    1982-01-01

    The commonly encountered image analysis problems of region labeling and clustering are found to be cases of the search-and-rename problem, which can be solved in parallel by a system architecture that is inherently suitable for VLSI implementation. This architecture is a novel form of content-addressable memory (CAM) that provides parallel search and update functions, reducing processing time to constant time per operation. It has been proposed in related investigations by Hall (1981) that, with VLSI, CAM-based structures with enhanced instruction sets for general-purpose processing will be feasible.
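
    For comparison, the region-labeling problem that the CAM architecture solves in parallel is solved sequentially on conventional hardware as connected-component labeling, e.g.:

      # Region labeling (connected-component labeling) with SciPy.
      import numpy as np
      from scipy import ndimage

      binary = np.array([[1, 1, 0, 0],
                         [0, 1, 0, 1],
                         [0, 0, 0, 1],
                         [1, 0, 1, 1]])
      labels, n = ndimage.label(binary)   # n distinct 4-connected regions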

  14. Multispectral simulation environment for modeling low-light-level sensor systems

    NASA Astrophysics Data System (ADS)

    Ientilucci, Emmett J.; Brown, Scott D.; Schott, John R.; Raqueno, Rolando V.

    1998-11-01

    Image intensifying cameras have been found to be extremely useful in low-light-level (LLL) scenarios including military night vision and civilian rescue operations. These sensors utilize the available visible region photons and an amplification process to produce high contrast imagery. It has been demonstrated that processing techniques can further enhance the quality of this imagery. For example, fusion with matching thermal IR imagery can improve image content when very little visible region contrast is available. To aid in the improvement of current algorithms and the development of new ones, a high fidelity simulation environment capable of producing radiometrically correct multi-band imagery for low-light-level conditions is desired. This paper describes a modeling environment attempting to meet these criteria by addressing the task as two individual components: (1) prediction of a low-light-level radiance field from an arbitrary scene, and (2) simulation of the output from a low-light-level sensor for a given radiance field. The radiance prediction engine utilized in this environment is the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, which is a first-principles-based multi-spectral synthetic image generation model capable of producing an arbitrary number of bands in the 0.28 to 20 micrometer region. The DIRSIG model is utilized to produce high spatial and spectral resolution radiance field images. These images are then processed by a user-configurable multi-stage low-light-level sensor model that applies the appropriate noise and modulation transfer function (MTF) at each stage in the image processing chain. This includes the ability to reproduce common intensifying sensor artifacts such as saturation and 'blooming.' Additionally, co-registered imagery in other spectral bands may be simultaneously generated for testing fusion and exploitation algorithms. This paper discusses specific aspects of the DIRSIG radiance prediction for low-light-level conditions, including the incorporation of natural and man-made sources, which emphasizes the importance of accurate BRDF. A description of the implementation of each stage in the image processing and capture chain for the LLL model is also presented. Finally, simulated images are presented and qualitatively compared to lab-acquired imagery from a commercial system.
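
    One stage of such a sensor chain can be sketched as an MTF applied in the Fourier domain followed by signal-dependent (Poisson) shot noise, which dominates at low light levels; the Gaussian MTF model and all parameters below are illustrative, not the LLL model's actual stages.

      # Schematic single stage of a sensor-chain simulation: MTF blur + shot noise.
      import numpy as np

      radiance = np.random.rand(256, 256) * 1000.0      # hypothetical photon image

      # Apply the MTF as a multiplication in the Fourier domain.
      fy = np.fft.fftfreq(256)[:, None]
      fx = np.fft.fftfreq(256)[None, :]
      mtf = np.exp(-((fx**2 + fy**2) / (2 * 0.05**2)))  # Gaussian MTF model
      blurred = np.fft.ifft2(np.fft.fft2(radiance) * mtf).real

      # Photon (shot) noise dominates at low light levels.
      rng = np.random.default_rng(0)
      noisy = rng.poisson(np.clip(blurred, 0, None)).astype(float)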

  15. Camera processing with chromatic aberration.

    PubMed

    Korneliussen, Jan Tore; Hirakawa, Keigo

    2014-10-01

    Since the refractive index of materials commonly used for lenses depends on the wavelength of light, practical camera optics fail to converge light to a single point on an image plane. Known as chromatic aberration, this phenomenon distorts image details by introducing magnification error, defocus blur, and color fringes. Though achromatic and apochromatic lens designs reduce chromatic aberration to a degree, they are complex and expensive, and they do not offer a perfect correction. In this paper, we propose a new postcapture processing scheme designed to overcome these problems computationally. Specifically, the proposed solution comprises a chromatic aberration-tolerant demosaicking algorithm and a post-demosaicking chromatic aberration correction. Experiments with simulated and real sensor data verify that the chromatic aberration is effectively corrected.
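
    A first-order correction for the magnification (lateral) component of chromatic aberration simply rescales the red and blue channels so they register with green, as sketched below. The scale factors are illustrative (in practice they would come from lens calibration), and the paper's method is considerably more sophisticated.

      # Rescale R and B channels radially so they register with G.
      import numpy as np
      from scipy import ndimage

      def rescale_channel(ch, scale):
          # Zoom a channel about its center and crop back to the original size.
          h, w = ch.shape
          zoomed = ndimage.zoom(ch, scale, order=1)
          zh, zw = zoomed.shape
          y0, x0 = (zh - h) // 2, (zw - w) // 2
          return zoomed[y0:y0 + h, x0:x0 + w]

      rgb = np.random.rand(256, 256, 3)                        # hypothetical image
      corrected = rgb.copy()
      corrected[..., 0] = rescale_channel(rgb[..., 0], 1.002)  # red, illustrative factor
      corrected[..., 2] = rescale_channel(rgb[..., 2], 1.003)  # blue, illustrative factor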

  16. New approach to estimating variability in visual field data using an image processing technique.

    PubMed Central

    Crabb, D P; Edgar, D F; Fitzke, F W; McNaught, A I; Wynn, H P

    1995-01-01

    AIMS--A new framework for evaluating pointwise sensitivity variation in computerised visual field data is demonstrated. METHODS--A measure of local spatial variability (LSV) is generated using an image processing technique. Fifty-five eyes from a sample of normal and glaucomatous subjects, examined on the Humphrey field analyser (HFA), were used to illustrate the method. RESULTS--Significant correlations between LSV and conventional estimates--namely, HFA pattern standard deviation and short-term fluctuation--were found. CONCLUSION--LSV is not dependent on normals' reference data or repeated threshold determinations, thus potentially reducing test time. Also, the illustrated pointwise maps of LSV could provide a method for identifying areas of fluctuation commonly found in early glaucomatous field loss. PMID:7703196
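
    An image-processing analogue of the LSV idea is a local standard deviation filter over the sensitivity grid (a sketch, not the authors' exact estimator; the grid is a random placeholder):

      # Local spatial variability map as a neighborhood standard deviation.
      import numpy as np
      from scipy.ndimage import generic_filter

      sensitivity = np.random.rand(8, 8) * 35    # hypothetical dB threshold grid
      lsv_map = generic_filter(sensitivity, np.std, size=3, mode="nearest")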

  17. Automatic recognition of 3D GGO CT imaging signs through the fusion of hybrid resampling and layer-wise fine-tuning CNNs.

    PubMed

    Han, Guanghui; Liu, Xiabi; Zheng, Guangyuan; Wang, Murong; Huang, Shan

    2018-06-06

    Ground-glass opacity (GGO) is a common CT imaging sign on high-resolution CT, indicating a lesion that is more likely to be malignant than a common solid lung nodule. The automatic recognition of GGO CT imaging signs is of great importance for early diagnosis and possible cure of lung cancers. Existing GGO recognition methods employ traditional low-level features, and their performance has improved slowly. Considering the high performance of CNN models in the computer vision field, we propose an automatic recognition method for 3D GGO CT imaging signs through the fusion of hybrid resampling and layer-wise fine-tuning CNN models. Our hybrid resampling is performed on multi-views and multi-receptive fields, which reduces the risk of missing small or large GGOs by adopting representative sampling panels and processing GGOs at multiple scales simultaneously. The layer-wise fine-tuning strategy has the ability to obtain the optimal fine-tuning model. The multi-CNN model fusion strategy obtains better performance than any single trained model. We evaluated our method on the GGO nodule samples in the publicly available LIDC-IDRI dataset of chest CT scans. The experimental results show that our method yields excellent results with 96.64% sensitivity, 71.43% specificity, and 0.83 F1 score. Our method is a promising approach for applying deep learning to the computer-aided analysis of specific CT imaging signs with insufficient labeled images.
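
    The layer-wise fine-tuning idea can be sketched in a modern framework as progressively unfreezing blocks of a pretrained network; the PyTorch code below is a schematic with an illustrative backbone, not the authors' architecture or training protocol.

      # Layer-wise fine-tuning sketch: freeze a pretrained backbone, then
      # progressively unfreeze deeper blocks, keeping the best checkpoint.
      import torch
      import torchvision

      model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
      model.fc = torch.nn.Linear(model.fc.in_features, 2)   # GGO vs non-GGO head

      for p in model.parameters():
          p.requires_grad = False                  # start fully frozen
      for p in model.fc.parameters():
          p.requires_grad = True                   # always train the new head

      for block in [model.layer4, model.layer3]:   # unfreeze one stage at a time
          for p in block.parameters():
              p.requires_grad = True
          optimizer = torch.optim.Adam(
              (p for p in model.parameters() if p.requires_grad), lr=1e-4)
          # ... train for a few epochs, keep the checkpoint with best validation F1 ...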

  18. dada - a web-based 2D detector analysis tool

    NASA Astrophysics Data System (ADS)

    Osterhoff, Markus

    2017-06-01

    The data daemon, dada, is a server backend for unified access to 2D pixel detector image data recorded by different detectors, stored in different file formats, and saved with varying naming conventions and folder structures across instruments. Furthermore, dada implements basic pre-processing and analysis routines, ranging from pixel binning and azimuthal integration to raster scan processing. Users commonly interact with dada through a web frontend, but all parameters for an analysis are encoded into a Uniform Resource Identifier (URI), which can also be written by hand or by scripts for batch processing.
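
    Azimuthal integration, one of the routines dada implements, reduces a 2D detector frame to a 1D intensity-versus-radius profile; a minimal NumPy sketch (with a random stand-in frame and an assumed beam center):

      # Minimal azimuthal integration: bin pixel intensities by radius.
      import numpy as np

      image = np.random.rand(512, 512)             # hypothetical detector frame
      cy, cx = 256.0, 256.0                        # assumed beam center (pixels)

      y, x = np.indices(image.shape)
      r = np.hypot(y - cy, x - cx).astype(int)     # integer radial bin per pixel
      sums = np.bincount(r.ravel(), weights=image.ravel())
      counts = np.bincount(r.ravel())
      profile = sums / np.maximum(counts, 1)       # mean intensity vs radius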

  19. Enantiomer excesses of rare and common sugar derivatives in carbonaceous meteorites.

    PubMed

    Cooper, George; Rios, Andro C

    2016-06-14

    Biological polymers such as nucleic acids and proteins are constructed of only one (the d or the l) of the two possible nonsuperimposable mirror images (enantiomers) of selected organic compounds. However, before the advent of life, it is generally assumed that chemical reactions produced 50:50 (racemic) mixtures of enantiomers, as evidenced by common abiotic laboratory syntheses. Carbonaceous meteorites contain clues to prebiotic chemistry because they preserve a record of some of the Solar System's earliest (∼4.5 Gy) chemical and physical processes. In multiple carbonaceous meteorites, we show that both rare and common sugar monoacids (aldonic acids) contain significant excesses of the d enantiomer, whereas other (comparable) sugar acids and sugar alcohols are racemic. Although the proposed origins of such excesses are still tentative, the findings imply that meteoritic compounds and/or the processes that operated on meteoritic precursors may have played an ancient role in the enantiomer composition of life's carbohydrate-related biopolymers.

  20. Communication in science.

    PubMed

    Deda, H; Yakupoglu, H

    2002-01-01

    Science must have a common language. For centuries, Latin carried out this job, but progress in computer technology and the internet over the last 20 years has begun to produce a new language for the new century: the computer language. The bodies of information that need data language standardization are the following: digital libraries and medical education systems, consumer health informatics, World Wide Web applications, database systems, medical language processing, automatic indexing systems, image processing units, telemedicine, and the New Generation Internet (NGI).

  1. A front-end readout Detector Board for the OpenPET electronics system

    NASA Astrophysics Data System (ADS)

    Choong, W.-S.; Abu-Nimeh, F.; Moses, W. W.; Peng, Q.; Vu, C. Q.; Wu, J.-Y.

    2015-08-01

    We present a 16-channel front-end readout board for the OpenPET electronics system. A major task in developing a nuclear medical imaging system, such as a positron emission computed tomograph (PET) or a single-photon emission computed tomograph (SPECT), is the electronics system. While there are a wide variety of detector and camera design concepts, the relatively simple nature of the acquired data allows for a common set of electronics requirements that can be met by a flexible, scalable, and high-performance OpenPET electronics system. The analog signals from the different types of detectors used in medical imaging share similar characteristics, which allows for a common analog signal processing. The OpenPET electronics processes the analog signals with Detector Boards. Here we report on the development of a 16-channel Detector Board. Each signal is digitized by a continuously sampled analog-to-digital converter (ADC), which is processed by a field programmable gate array (FPGA) to extract pulse height information. A leading edge discriminator creates a timing edge that is "time stamped" by a time-to-digital converter (TDC) implemented inside the FPGA. This digital information from each channel is sent to an FPGA that services 16 analog channels, and then information from multiple channels is processed by this FPGA to perform logic for crystal lookup, DOI calculation, calibration, etc.
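
    The per-channel digital processing can be sketched as follows: pulse height from the maximum of the sampled waveform, and a leading-edge time stamp refined by linear interpolation between samples. The waveform, sampling period, and threshold below are synthetic.

      # Pulse height and leading-edge time stamp from a sampled waveform.
      import numpy as np

      t = np.arange(0, 200, 5.0)                             # 5 ns sampling (assumed)
      pulse = 80.0 * (np.exp(-t / 60.0) - np.exp(-t / 8.0))  # synthetic detector pulse

      pulse_height = pulse.max()
      threshold = 10.0                                  # leading-edge discriminator level
      i = np.argmax(pulse > threshold)                  # first sample above threshold
      frac = (threshold - pulse[i - 1]) / (pulse[i] - pulse[i - 1])
      timestamp = t[i - 1] + frac * (t[i] - t[i - 1])   # TDC-like interpolated edge time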

  2. A front-end readout Detector Board for the OpenPET electronics system

    DOE PAGES

    Choong, W. -S.; Abu-Nimeh, F.; Moses, W. W.; ...

    2015-08-12

    Here, we present a 16-channel front-end readout board for the OpenPET electronics system. A major task in developing a nuclear medical imaging system, such as a positron emission computed tomograph (PET) or a single-photon emission computed tomograph (SPECT), is the electronics system. While there are a wide variety of detector and camera design concepts, the relatively simple nature of the acquired data allows for a common set of electronics requirements that can be met by a flexible, scalable, and high-performance OpenPET electronics system. The analog signals from the different types of detectors used in medical imaging share similar characteristics, which allows for a common analog signal processing. The OpenPET electronics processes the analog signals with Detector Boards. Here we report on the development of a 16-channel Detector Board. Each signal is digitized by a continuously sampled analog-to-digital converter (ADC), which is processed by a field programmable gate array (FPGA) to extract pulse height information. A leading edge discriminator creates a timing edge that is "time stamped" by a time-to-digital converter (TDC) implemented inside the FPGA. In conclusion, this digital information from each channel is sent to an FPGA that services 16 analog channels, and then information from multiple channels is processed by this FPGA to perform logic for crystal lookup, DOI calculation, calibration, etc.

  3. imzML: Imaging Mass Spectrometry Markup Language: A common data format for mass spectrometry imaging.

    PubMed

    Römpp, Andreas; Schramm, Thorsten; Hester, Alfons; Klinkert, Ivo; Both, Jean-Pierre; Heeren, Ron M A; Stöckli, Markus; Spengler, Bernhard

    2011-01-01

    Imaging mass spectrometry is the method of scanning a sample of interest and generating an "image" of the intensity distribution of a specific analyte. The data sets consist of a large number of mass spectra which are usually acquired with identical settings. Existing data formats are not sufficient to describe an MS imaging experiment completely. The data format imzML was developed to allow the flexible and efficient exchange of MS imaging data between different instruments and data analysis software. For this purpose, the MS imaging data is divided in two separate files. The mass spectral data is stored in a binary file to ensure efficient storage. All metadata (e.g., instrumental parameters, sample details) are stored in an XML file which is based on the standard data format mzML developed by HUPO-PSI. The original mzML controlled vocabulary was extended to include specific parameters of imaging mass spectrometry (such as x/y position and spatial resolution). The two files (XML and binary) are connected by offset values in the XML file and are unambiguously linked by a universally unique identifier. The resulting datasets are comparable in size to the raw data and the separate metadata file allows flexible handling of large datasets. Several imaging MS software tools already support imzML. This allows choosing from a (growing) number of processing tools. One is no longer limited to proprietary software, but is able to use the processing software which is best suited for a specific question or application. On the other hand, measurements from different instruments can be compared within one software application using identical settings for data processing. All necessary information for evaluating and implementing imzML can be found at http://www.imzML.org .
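
    The XML-plus-binary design can be illustrated schematically: the metadata carries offset/length pairs that index into the binary file. The element and attribute names below are simplified stand-ins, not the actual imzML schema.

      # Schematic of XML metadata pointing into a separate binary file.
      import struct
      import xml.etree.ElementTree as ET

      meta = ET.fromstring(
          '<spectrum uuid="123e4567" offset="0" length="4"/>')

      with open("spectra.ibd", "rb") as f:              # hypothetical binary file
          f.seek(int(meta.get("offset")))
          raw = f.read(int(meta.get("length")) * 8)     # 8 bytes per float64 value
          mz_values = struct.unpack("<4d", raw)         # little-endian doubles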

  4. Implementation of an axisymmetric drop shape apparatus using a Raspberry-Pi single-board computer and a web camera

    NASA Astrophysics Data System (ADS)

    Carlà, Marcello; Orlando, Antonio

    2018-07-01

    This paper describes the implementation of an axisymmetric drop shape apparatus for the measurement of the surface or interfacial tension of a hanging liquid drop, using only cheap resources such as a common web camera and a single-board microcomputer. The mechanical structure of the apparatus is built from stubs of commonly available aluminium bar, with all other mechanical parts manufactured on an amateur 3D printer. All of the required software, whether for handling the camera and taking the images or for processing the drop images to extract the drop profile and fit it with the Bashforth and Adams equation, is freely available under an open source license. Despite the very limited cost of the whole setup, an extensive test has demonstrated an overall accuracy of ±0.2% or better.
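
    The profile fitting rests on integrating the Bashforth-Adams (axisymmetric Young-Laplace) equations in apex-radius units; a sketch is given below, with the apex singularity handled by its analytic limit. The shape factor beta is an assumed value and sign conventions vary with drop orientation; fitting beta to the imaged profile is what ultimately yields the surface tension.

      # Integrate the Bashforth-Adams profile: dx/ds = cos(phi), dz/ds = sin(phi),
      # dphi/ds = 2 + beta*z - sin(phi)/x, with sin(phi)/x -> 1 at the apex.
      import numpy as np
      from scipy.integrate import solve_ivp

      beta = 0.3   # shape factor (Bond number), assumed

      def rhs(s, y):
          x, z, phi = y
          curv = 1.0 if x < 1e-8 else np.sin(phi) / x   # apex limit at x = 0
          return [np.cos(phi), np.sin(phi), 2.0 + beta * z - curv]

      sol = solve_ivp(rhs, [0, 4], [0.0, 0.0, 0.0], max_step=0.01)
      x_profile, z_profile = sol.y[0], sol.y[1]   # theoretical drop contour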

  5. Variation of canine vertebral bone architecture in computed tomography

    PubMed Central

    Cheon, Byunggyu; Park, Seungjo; Lee, Sang-kwon; Park, Jun-Gyu; Cho, Kyoung-Oh

    2018-01-01

    Focal vertebral bone density changes were assessed in vertebral computed tomography (CT) images obtained from clinically healthy dogs without diseases that affect bone density. The number, location, and density of lesions were determined. A total of 429 vertebral CT images from 20 dogs were reviewed, and 99 focal vertebral changes were identified in 14 dogs. Focal vertebral bone density changes were mainly found in thoracic vertebrae (29.6%) as hyperattenuating (86.9%) lesions. All focal vertebral changes were observed at the vertebral body, except for a single hyperattenuating change in one thoracic transverse process. Among the hyperattenuating changes, multifocal changes (53.5%) were more common than single changes (46.5%). Most of the hypoattenuating changes were single (92.3%). Eight dogs, 40% of the 20 dogs in the study and 61.6% of the 13 dogs showing focal vertebral changes in the thoracic vertebra, had hyperattenuating changes at the 7th or 8th thoracic vertebra. Our results indicate that focal changes in vertebral bone density are commonly identified on vertebral CT images in healthy dogs, and these changes should be taken into consideration on interpretation of CT images. PMID:28693309

  6. A Comparative Study of Spectral Auroral Intensity Predictions From Multiple Electron Transport Models

    NASA Astrophysics Data System (ADS)

    Grubbs, Guy; Michell, Robert; Samara, Marilia; Hampton, Donald; Hecht, James; Solomon, Stanley; Jahn, Jorg-Micha

    2018-01-01

    It is important to routinely examine and update models used to predict auroral emissions resulting from precipitating electrons in Earth's magnetotail. These models are commonly used to invert spectral auroral ground-based images to infer characteristics about incident electron populations when in situ measurements are unavailable. In this work, we examine and compare auroral emission intensities predicted by three commonly used electron transport models using varying electron population characteristics. We then compare model predictions to same-volume in situ electron measurements and ground-based imaging to qualitatively examine modeling prediction error. Initial comparisons showed differences in predictions by the GLobal airglOW (GLOW) model and the other transport models examined. Chemical reaction rates and radiative rates in GLOW were updated using recent publications, and predictions showed better agreement with the other models and the same-volume data, stressing that these rates are important to consider when modeling auroral processes. Predictions by each model exhibit similar behavior for varying atmospheric constants, energies, and energy fluxes. Same-volume electron data and images are highly correlated with predictions by each model, showing that these models can be used to accurately derive electron characteristics and ionospheric parameters based solely on multispectral optical imaging data.

  7. Effect of using different cover image quality to obtain robust selective embedding in steganography

    NASA Astrophysics Data System (ADS)

    Abdullah, Karwan Asaad; Al-Jawad, Naseer; Abdulla, Alan Anwer

    2014-05-01

    One of the common types of steganography is to conceal an image as a secret message in another image, normally called a cover image; the resulting image is called a stego image. The aim of this paper is to investigate the effect of using different cover image qualities, and also to analyse the use of different bit-planes in terms of robustness against well-known active attacks such as gamma adjustment, statistical filters, and linear spatial filters. The secret messages are embedded in a higher bit-plane, i.e. other than the Least Significant Bit (LSB), in order to resist active attacks. The embedding process is performed in three major steps: first, the embedding algorithm selectively identifies useful areas (blocks) for embedding based on their lighting conditions; second, it nominates the most useful blocks for embedding based on their entropy and average; third, it selects the right bit-plane for embedding. This kind of block selection makes the embedding process scatter the secret message(s) randomly around the cover image. Different tests have been performed for selecting a proper block size, which is related to the nature of the cover image used. Our proposed method suggests a suitable embedding bit-plane as well as the right blocks for the embedding. Experimental results demonstrate that the image quality used for the cover images has an effect when the stego image is attacked by different active attacks. Although the secret messages are embedded in a higher bit-plane, they cannot be recognised visually within the stego image.
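
    The core bit-plane write can be sketched as below: clearing and setting bit k (k > 0, i.e. above the LSB) of selected pixels. The entropy/average block selection is omitted, and the embedding positions here are simply the first pixels.

      # Minimal bit-plane embedding into bit k of a grayscale cover image.
      import numpy as np

      def embed(cover, bits, k=3):
          stego = cover.copy().ravel()
          for i, b in enumerate(bits):
              # Clear bit k, then set it to the secret bit b.
              stego[i] = (stego[i] & ~np.uint8(1 << k)) | np.uint8(b << k)
          return stego.reshape(cover.shape)

      cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
      secret = [1, 0, 1, 1, 0, 0, 1, 0]
      stego = embed(cover, secret)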

  8. Martian Surface Compositions and Spectral Unit Mapping From the Thermal Emission Imaging System

    NASA Astrophysics Data System (ADS)

    Bandfield, J. L.; Christensen, P. R.; Rogers, D.

    2005-12-01

    The Thermal Emission Imaging System (THEMIS) on board the Mars Odyssey spacecraft observes Mars at nine spectral intervals between 6 and 15 microns and at 100 meter spatial sampling. This spectral and spatial resolution allows for mapping of local spectral units and coarse compositional determination of a variety of rock-forming materials such as carbonates, sulfates, and silicates. A number of data processing and atmospheric correction techniques have been developed to ease and speed the interpretation of multispectral THEMIS infrared images. These products and techniques are in the process of being made publicly available via the THEMIS website and were used to produce the results presented here. Spectral variability at kilometer scales in THEMIS data is more common in the southern highlands than in the northern lowlands. Many of the spectral units are associated with a mobile surface layer such as dune fields and mantled dust. However, a number of spectral units appear to be directly tied to the local geologic rock units. These spectral units are commonly associated with crater walls, floors, and ejecta blankets. Other surface compositions are correlated with layered volcanic materials and knobby remnant terrains. Most of the spectral variability observed to date appears to be tied to a variation in silicate mineralogy. Olivine-rich units that have been previously reported in Nili Fossae, Ares Valles, and the Valles Marineris region appear to be sparse but common in a number of regions in the southern highlands. Variations in silica content consistent with previously reported global surface units also appear to be present in THEMIS images, allowing for an examination of their local geologic context. Quartz- and feldspar-rich exposures in northern Syrtis Major appear more extensive in the region than previously reported. A coherent global and local picture of the mineralogy of the Martian surface is emerging from THEMIS measurements along with other orbital thermal and near infrared spectroscopy measurements from the Mars Express and Mars Global Surveyor spacecraft.

  9. A 16 x 16-pixel retinal-prosthesis vision chip with in-pixel digital image processing in a frequency domain by use of a pulse-frequency-modulation photosensor

    NASA Astrophysics Data System (ADS)

    Kagawa, Keiichiro; Furumiya, Tetsuo; Ng, David C.; Uehara, Akihiro; Ohta, Jun; Nunoshita, Masahiro

    2004-06-01

    We are exploring the application of pulse-frequency-modulation (PFM) photosensors to retinal prostheses for the blind because the behavior of PFM photosensors is similar to that of retinal ganglion cells, which transmit visual data from the retina toward the brain. We have developed retinal-prosthesis vision chips that reshape the output pulses of the PFM photosensor into biphasic current pulses suitable for electric stimulation of retinal cells. In this paper, we introduce image-processing functions into the pixel circuits. We have designed a 16x16-pixel retinal-prosthesis vision chip with several kinds of in-pixel digital image processing such as edge enhancement, edge detection, and low-pass filtering. This chip is a prototype demonstrator of the retinal-prosthesis vision chip applicable to in-vitro experiments. By utilizing the features of the PFM photosensor, we propose a new scheme to implement the above image processing in the frequency domain with digital circuitry. The intensity of incident light is converted to a 1-bit data stream by a PFM photosensor, and image processing is then executed by a 1-bit image processor based on the joining and annihilation of pulses. The retinal-prosthesis vision chip is composed of four blocks: a pixel array block, a row-parallel stimulation current amplifier array block, a decoder block, and a base current generator block. All blocks except the PFM photosensors and stimulation current amplifiers are embodied as digital circuitry, which contributes to robustness against noise and power-line fluctuations. With our vision chip, we can control the photosensitivity and the intensity and duration of the biphasic stimulus currents, which is necessary for a retinal-prosthesis vision chip. The designed dynamic range is more than 100 dB. The amplitude of the stimulus current is given by a base current, common to all pixels, multiplied by a value stored in the amplitude memory of each pixel. The base currents of the negative and positive pulses are common to all pixels and are set in a linear manner, whereas the value in the amplitude memory of the pixel is represented in an exponential manner to cover the wide range. The stimulus currents are output column by column by scanning. The pixel size is 240 µm × 240 µm. Each pixel has a bonding pad on which a stimulus electrode is to be formed. We will show the experimental results of the test chip.
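
    The PFM principle itself is an integrate-and-fire loop whose pulse rate grows with light intensity; a toy software model (with illustrative constants, not the chip's actual circuit parameters):

      # Toy pulse-frequency-modulation photosensor model.
      import numpy as np

      def pfm_pulse_count(intensity, dt=1e-6, duration=0.01, threshold=1.0):
          v, count = 0.0, 0
          for _ in range(int(duration / dt)):
              v += intensity * dt          # photocurrent integration on the node
              if v >= threshold:           # comparator fires and resets the node
                  count += 1
                  v = 0.0
          return count

      for lux in (100.0, 1000.0, 10000.0):
          print(lux, pfm_pulse_count(lux))   # pulse count scales with intensity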

  10. Alpha-fetoprotein-targeted reporter gene expression imaging in hepatocellular carcinoma.

    PubMed

    Kim, Kwang Il; Chung, Hye Kyung; Park, Ju Hui; Lee, Yong Jin; Kang, Joo Hyun

    2016-07-21

    Hepatocellular carcinoma (HCC) is one of the most common cancers in Eastern Asia, and its incidence is increasing globally. Numerous experimental models have been developed to better our understanding of the pathogenic mechanism of HCC and to evaluate novel therapeutic approaches. Molecular imaging is a convenient and up-to-date biomedical tool that enables the visualization, characterization and quantification of biologic processes in a living subject. Molecular imaging based on reporter gene expression, in particular, can elucidate tumor-specific events or processes by acquiring images of a reporter gene's expression driven by tumor-specific enhancers/promoters. In this review, we discuss the advantages and disadvantages of various experimental HCC mouse models and we present in vivo images of tumor-specific reporter gene expression driven by an alpha-fetoprotein (AFP) enhancer/promoter system in a mouse model of HCC. The current mouse models of HCC development are established by xenograft, carcinogen induction and genetic engineering, representing the spectrum of tumor-inducing factors and tumor locations. The imaging analysis approach of reporter genes driven by AFP enhancer/promoter is presented for these different HCC mouse models. Such molecular imaging can provide longitudinal information about carcinogenesis and tumor progression. We expect that clinical application of AFP-targeted reporter gene expression imaging systems will be useful for the detection of AFP-expressing HCC tumors and screening of increased/decreased AFP levels due to disease or drug treatment.

  11. Alpha-fetoprotein-targeted reporter gene expression imaging in hepatocellular carcinoma

    PubMed Central

    Kim, Kwang Il; Chung, Hye Kyung; Park, Ju Hui; Lee, Yong Jin; Kang, Joo Hyun

    2016-01-01

    Hepatocellular carcinoma (HCC) is one of the most common cancers in Eastern Asia, and its incidence is increasing globally. Numerous experimental models have been developed to better our understanding of the pathogenic mechanism of HCC and to evaluate novel therapeutic approaches. Molecular imaging is a convenient and up-to-date biomedical tool that enables the visualization, characterization and quantification of biologic processes in a living subject. Molecular imaging based on reporter gene expression, in particular, can elucidate tumor-specific events or processes by acquiring images of a reporter gene’s expression driven by tumor-specific enhancers/promoters. In this review, we discuss the advantages and disadvantages of various experimental HCC mouse models and we present in vivo images of tumor-specific reporter gene expression driven by an alpha-fetoprotein (AFP) enhancer/promoter system in a mouse model of HCC. The current mouse models of HCC development are established by xenograft, carcinogen induction and genetic engineering, representing the spectrum of tumor-inducing factors and tumor locations. The imaging analysis approach of reporter genes driven by AFP enhancer/promoter is presented for these different HCC mouse models. Such molecular imaging can provide longitudinal information about carcinogenesis and tumor progression. We expect that clinical application of AFP-targeted reporter gene expression imaging systems will be useful for the detection of AFP-expressing HCC tumors and screening of increased/decreased AFP levels due to disease or drug treatment. PMID:27468205

  12. Quality Control of Structural MRI Images Applied Using FreeSurfer—A Hands-On Workflow to Rate Motion Artifacts

    PubMed Central

    Backhausen, Lea L.; Herting, Megan M.; Buse, Judith; Roessner, Veit; Smolka, Michael N.; Vetter, Nora C.

    2016-01-01

    In structural magnetic resonance imaging, motion artifacts are common, especially when not scanning healthy young adults. It has been shown that motion affects analysis with automated image-processing techniques (e.g., FreeSurfer), which can bias results. Several developmental and adult studies have found reduced volume and thickness of gray matter due to motion artifacts. Thus, quality control is necessary to ensure an acceptable level of quality and to define exclusion criteria for images (i.e., to determine the participants with the most severe artifacts). However, information about the quality control workflow and image exclusion procedure is largely lacking in the current literature, and existing rating systems differ. Here, we propose a stringent workflow of quality control steps during and after acquisition of T1-weighted images, which enables researchers dealing with populations that are typically affected by motion artifacts to enhance data quality and maximize sample sizes. As an underlying aim, we established a thorough quality control rating system for T1-weighted images and applied it to the analysis of developmental clinical data using the automated processing pipeline FreeSurfer. This hands-on workflow and quality control rating system will aid researchers in minimizing motion artifacts in the final data set, and therefore enhance the quality of structural magnetic resonance imaging studies. PMID:27999528

  13. MO-B-BRC-01: Introduction [Brachytherapy]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prisciandaro, J.

    2016-06-15

    Brachytherapy has proven to be an effective treatment option for prostate cancer. Initially, prostate brachytherapy was delivered through permanently implanted low dose rate (LDR) radioactive sources; however, high dose rate (HDR) temporary brachytherapy for prostate cancer is gaining popularity. Needle insertion during prostate brachytherapy is most commonly performed under ultrasound (U/S) guidance; however, treatment planning may be performed utilizing several imaging modalities in either an intra- or post-operative setting. During intra-operative prostate HDR, the needles are imaged during implantation, and planning may be performed in real time. At present, the most common imaging modality utilized for intra-operative prostate HDR is U/S. Alternatively, in the post-operative setting, following needle implantation, patients may be simulated with computed tomography (CT) or magnetic resonance imaging (MRI). Each imaging modality and workflow provides its share of benefits and limitations. Prostate HDR has been adopted in a number of cancer centers across the nation. In this educational session, we will explore the role of U/S, CT, and MRI in HDR prostate brachytherapy. Example workflows and operational details will be shared, and we will discuss how to establish a prostate HDR program in a clinical setting. Learning Objectives: (1) Review prostate HDR techniques based on the imaging modality; (2) Discuss the challenges and pitfalls introduced by the three image-based options for prostate HDR brachytherapy; (3) Review the QA process and learn about the development of clinical workflows for these imaging options at different institutions.

  14. MO-B-BRC-04: MRI-Based Prostate HDR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mourtada, F.

    2016-06-15

    Brachytherapy has proven to be an effective treatment option for prostate cancer. Initially, prostate brachytherapy was delivered through permanently implanted low dose rate (LDR) radioactive sources; however, high dose rate (HDR) temporary brachytherapy for prostate cancer is gaining popularity. Needle insertion during prostate brachytherapy is most commonly performed under ultrasound (U/S) guidance; however, treatment planning may be performed utilizing several imaging modalities in either an intra- or post-operative setting. During intra-operative prostate HDR, the needles are imaged during implantation, and planning may be performed in real time. At present, the most common imaging modality utilized for intra-operative prostate HDR is U/S. Alternatively, in the post-operative setting, following needle implantation, patients may be simulated with computed tomography (CT) or magnetic resonance imaging (MRI). Each imaging modality and workflow provides its share of benefits and limitations. Prostate HDR has been adopted in a number of cancer centers across the nation. In this educational session, we will explore the role of U/S, CT, and MRI in HDR prostate brachytherapy. Example workflows and operational details will be shared, and we will discuss how to establish a prostate HDR program in a clinical setting. Learning Objectives: (1) Review prostate HDR techniques based on the imaging modality; (2) Discuss the challenges and pitfalls introduced by the three image-based options for prostate HDR brachytherapy; (3) Review the QA process and learn about the development of clinical workflows for these imaging options at different institutions.

  15. MO-B-BRC-02: Ultrasound Based Prostate HDR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Z.

    2016-06-15

    Brachytherapy has proven to be an effective treatment option for prostate cancer. Initially, prostate brachytherapy was delivered through permanently implanted low dose rate (LDR) radioactive sources; however, high dose rate (HDR) temporary brachytherapy for prostate cancer is gaining popularity. Needle insertion during prostate brachytherapy is most commonly performed under ultrasound (U/S) guidance; however, treatment planning may be performed utilizing several imaging modalities in either an intra- or post-operative setting. During intra-operative prostate HDR, the needles are imaged during implantation, and planning may be performed in real time. At present, the most common imaging modality utilized for intra-operative prostate HDR is U/S. Alternatively, in the post-operative setting, following needle implantation, patients may be simulated with computed tomography (CT) or magnetic resonance imaging (MRI). Each imaging modality and workflow provides its share of benefits and limitations. Prostate HDR has been adopted in a number of cancer centers across the nation. In this educational session, we will explore the role of U/S, CT, and MRI in HDR prostate brachytherapy. Example workflows and operational details will be shared, and we will discuss how to establish a prostate HDR program in a clinical setting. Learning Objectives: (1) Review prostate HDR techniques based on the imaging modality; (2) Discuss the challenges and pitfalls introduced by the three image-based options for prostate HDR brachytherapy; (3) Review the QA process and learn about the development of clinical workflows for these imaging options at different institutions.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamran, Mudassar, E-mail: kamranm@mir.wustl.edu; Fowler, Kathryn J., E-mail: fowlerk@mir.wustl.edu; Mellnick, Vincent M., E-mail: mellnickv@mir.wustl.edu

    Primary aortic neoplasms are rare. Aortic sarcoma arising after endovascular aneurysm repair (EVAR) is a scarce subset of primary aortic malignancies, reports of which are infrequent in the published literature. The diagnosis of aortic sarcoma is challenging due to its non-specific clinical presentation, and the prognosis is poor due to delayed diagnosis, rapid proliferation, and propensity for metastasis. Post-EVAR, aortic sarcomas may mimic other more common aortic processes on surveillance imaging. Radiologists are rarely familiar with this rare entity, for which multimodality imaging and awareness are invaluable in early diagnosis. A series of three pathologically confirmed cases is presented to display the multimodality imaging features and clinical presentations of aortic sarcoma arising after EVAR.

  17. Lumbar Gout Tophus Mimicking Epidural Abscess with Magnetic Resonance Imaging, Bone, and Gallium Scans

    PubMed Central

    Vicente, Justo Serrano; Gómez, Alejandro Lorente; Moreno, Rafael Lorente; Torre, Jose Rafael Infante; Bernardo, Lucía García; Madrid, Juan Ignacio Rayo

    2018-01-01

    Gout is a common metabolic disorder, typically diagnosed in peripheral joints. Tophaceous deposits in the lumbar spine are a very rare condition, with very few cases reported in the literature. The following is a case report of a 52-year-old patient with low back pain, left leg pain, and numbness. The serum uric acid level was in the normal range. Magnetic resonance imaging, bone scan, and gallium-67 images suggested an inflammatory-infectious process at L4. After a decompressive laminectomy at the L4–L5 level, histological examination showed a chalky material with extensive deposition of amorphous gouty material surrounded by macrophages and foreign-body giant cells (tophaceous deposits). PMID:29643682

  18. Lumbar Gout Tophus Mimicking Epidural Abscess with Magnetic Resonance Imaging, Bone, and Gallium Scans.

    PubMed

    Vicente, Justo Serrano; Gómez, Alejandro Lorente; Moreno, Rafael Lorente; Torre, Jose Rafael Infante; Bernardo, Lucía García; Madrid, Juan Ignacio Rayo

    2018-01-01

    Gout is a common metabolic disorder, typically diagnosed in peripheral joints. Tophaceous deposits in the lumbar spine are a very rare condition, with very few cases reported in the literature. The following is a case report of a 52-year-old patient with low back pain, left leg pain, and numbness. The serum uric acid level was in the normal range. Magnetic resonance imaging, bone scan, and gallium-67 images suggested an inflammatory-infectious process at L4. After a decompressive laminectomy at the L4-L5 level, histological examination showed a chalky material with extensive deposition of amorphous gouty material surrounded by macrophages and foreign-body giant cells (tophaceous deposits).

  19. Active point out-of-plane ultrasound calibration

    NASA Astrophysics Data System (ADS)

    Cheng, Alexis; Guo, Xiaoyu; Zhang, Haichong K.; Kang, Hyunjae; Etienne-Cummings, Ralph; Boctor, Emad M.

    2015-03-01

    Image-guided surgery systems are often used to provide surgeons with informational support. Due to several unique advantages such as ease of use, real-time image acquisition, and no ionizing radiation, ultrasound is a common intraoperative medical imaging modality used in image-guided surgery systems. To perform advanced forms of guidance with ultrasound, such as virtual image overlays or automated robotic actuation, an ultrasound calibration process must be performed. This process recovers the rigid-body transformation between a tracked marker attached to the transducer and the ultrasound image. Point-based phantoms are considered to be accurate, but their calibration framework assumes that the point is in the image plane. In this work, we present the use of an active point phantom and a calibration framework that accounts for the elevational uncertainty of the point. Given the lateral and axial position of the point in the ultrasound image, we approximate a circle in the axial-elevational plane with a radius equal to the axial position. The standard approach transforms all of the imaged points to be a single physical point; in our approach, we minimize the distances between the circular subsets of each image, with them ideally intersecting at a single point. We ran simulations for noiseless and noisy cases, presenting results on out-of-plane estimation errors, calibration estimation errors, and point reconstruction precision. We also performed an experiment using a robot arm as the tracker, obtaining a point reconstruction precision of 0.64 mm.
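
    A minimal simulation sketch of the circle-constraint idea under simplified geometry (the calibration transform itself is omitted): each frame contributes a circle of radius equal to the measured axial distance, and scipy.optimize.least_squares searches for the per-frame elevation angles that make all reconstructed points coincide. All variable names and the noise-free data are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(1)

    def random_rotation(rng):
        q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
        return q * np.sign(np.linalg.det(q))   # make it a proper rotation

    # Forward-simulate one fixed physical point imaged under n tracked poses.
    true_pt = np.array([120.0, 40.0, -15.0])   # mm, in tracker frame
    n = 25
    R, t, lat, ax = [], [], [], []
    for _ in range(n):
        Ri = random_rotation(rng)
        # point in the probe frame: (lateral, elevation, axial), in mm
        q = np.array([rng.uniform(-20, 20), rng.uniform(-2, 2), rng.uniform(30, 80)])
        ti = true_pt - Ri @ q                  # choose the pose consistently
        R.append(Ri); t.append(ti)
        lat.append(q[0])
        ax.append(np.hypot(q[1], q[2]))        # out-of-plane echo appears at full range

    def residuals(theta):
        # Each frame's candidate point lies on a circle of radius `ax` in the
        # axial-elevational plane; theta picks a position on that circle.
        pts = np.array([Ri @ np.array([l, a * np.sin(th), a * np.cos(th)]) + ti
                        for Ri, ti, l, a, th in zip(R, t, lat, ax, theta)])
        return (pts - pts.mean(axis=0)).ravel()   # circles should meet at one point

    sol = least_squares(residuals, x0=np.zeros(n))
    print("residual spread after fit (mm):", np.linalg.norm(residuals(sol.x)))
    ```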

  20. Development of a widefield reflectance and fluorescence imaging device for the detection of skin and oral cancer

    NASA Astrophysics Data System (ADS)

    Pratavieira, S.; Santos, P. L. A.; Bagnato, V. S.; Kurachi, C.

    2009-06-01

    Oral and skin cancers constitute a major global health problem with great impact on patients. The most common screening method for oral cancer is visual inspection and palpation of the mouth. Visual examination relies heavily on the experience and skill of the physician to identify and delineate early premalignant and cancerous changes, which is not simple given the similar appearance of early-stage cancers and benign lesions. Optical imaging has the potential to address these clinical challenges: contrast between normal and neoplastic areas may be increased, relative to conventional white light, by using appropriate illumination and detection conditions. Reflectance imaging can detect local changes in tissue scattering and absorption, and fluorescence imaging can probe changes in biochemical composition; such changes have been shown to be indicative of malignant progression. Widefield optical imaging systems are attractive because they may enhance screening over large regions, allowing the discrimination and delineation of neoplastic, and potentially of occult, lesions. Digital image processing allows the combination of autofluorescence and reflectance images in order to objectively identify and delineate the peripheral extent of neoplastic lesions in the skin and oral cavity. Combining information from different imaging modalities has the potential to increase diagnostic performance, since each provides distinct information. A simple widefield imaging device with fluorescence and reflectance modes, together with digital image processing, was assembled and its performance tested in an animal study.

  1. Generation of oculomotor images during tasks requiring visual recognition of polygons.

    PubMed

    Olivier, G; de Mendoza, J L

    2001-06-01

    This paper concerns the contribution of mentally simulated ocular exploration to the generation of a visual mental image. In Exp. 1, repeated exploration of the outlines of an irregular decagon allowed incidental learning of the shape; analyses showed that subjects memorized their ocular movements rather than the polygon itself. In Exp. 2, exploration of a reversible figure such as a Necker cube varied in opposite directions, and both perspective possibilities were then presented. The perspective the subjects recognized depended on the way they had explored the ambiguous figure. In both experiments, during recognition the subjects recalled a visual mental image of the polygon, which they compared with the different polygons proposed for recognition. To interpret the data, hypotheses concerning common processes underlying both the motor intention of ocular movements and the generation of a visual image are suggested.

  2. WFIRST Science Operations at STScI

    NASA Astrophysics Data System (ADS)

    Gilbert, Karoline; STScI WFIRST Team

    2018-06-01

    With sensitivity and resolution comparable to those of the Hubble Space Telescope, and a field of view 100 times larger, the Wide Field Instrument (WFI) on WFIRST will be a powerful survey instrument. STScI will be the Science Operations Center (SOC) for the WFIRST Mission, with additional science support provided by the Infrared Processing and Analysis Center (IPAC) and foreign partners. STScI will schedule and archive all WFIRST observations, calibrate and produce pipeline-reduced data products for imaging with the Wide Field Instrument, support the High Latitude Imaging and Supernova Survey Teams, and support the astronomical community in planning WFI imaging observations and analyzing the data. STScI has developed detailed concepts for WFIRST operations, including a data management system integrating data processing and the archive, which will include a novel, cloud-based framework for high-level data processing, providing a common environment accessible to all users (STScI operations, Survey Teams, General Observers, and archival investigators). To aid the astronomical community in examining the capabilities of WFIRST, STScI has built several simulation tools. We describe the functionality of each tool and give examples of its use.

  3. Cameras and settings for optimal image capture from UAVs

    NASA Astrophysics Data System (ADS)

    Smith, Mike; O'Connor, James; James, Mike R.

    2017-04-01

    Aerial image capture has become very common within the geosciences due to the increasing affordability of low-payload (<20 kg) Unmanned Aerial Vehicles (UAVs) in consumer markets. Their application to surveying has led to many studies being undertaken using UAV imagery captured with consumer-grade cameras as primary data sources. However, image quality and the principles of image capture are seldom given rigorous discussion, which can make experiments difficult to reproduce accurately. In this contribution we revisit the underpinning concepts behind image capture, from which the requirements for acquiring sharp, well-exposed and suitable imagery are derived. This leads to a discussion of how to optimise the platform, camera, lens and imaging settings relevant to image-quality planning, presenting some worked examples as a guide. Finally, we challenge the community to make their image data open for review in order to ensure confidence in the outputs/error estimates, allow reproducibility of the results and make these comparable with future studies. We recommend providing open-access imagery where possible, a range of example images, and detailed metadata to rigorously describe the image capture process.
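
    As a worked example of this kind of image-capture planning, the following sketch (with assumed, illustrative camera and flight parameters, not values from the paper) computes the ground sample distance and the longest shutter time that keeps forward motion blur below half a pixel:

    ```python
    # Hypothetical worked example: choosing a shutter speed that keeps forward
    # motion blur below a fraction of a pixel for a given flight configuration.
    pixel_pitch = 4.8e-6      # m, sensor pixel size (assumed)
    focal_length = 0.016      # m (assumed)
    altitude = 100.0          # m above ground
    speed = 12.0              # m/s ground speed
    max_blur_px = 0.5         # tolerated blur, in pixels

    gsd = pixel_pitch * altitude / focal_length   # ground sample distance, m/px
    t_max = max_blur_px * gsd / speed             # longest acceptable exposure
    print(f"GSD = {gsd*100:.1f} cm/px, max exposure = 1/{1/t_max:.0f} s")
    # -> GSD = 3.0 cm/px, max exposure = 1/800 s
    ```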

  4. What Is an Image?

    ERIC Educational Resources Information Center

    Gerber, Andrew J.; Peterson, Bradley S.

    2008-01-01

    The article aids in interpreting an image by examining what constitutes an image. A feature common to all images is a basic physical structure that can be described with a common set of terms.

  5. Sci—Thur PM: Imaging — 06: Canada's National Computed Tomography (CT) Survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wardlaw, GM; Martel, N; Blackler, W

    2014-08-15

    The value of computed tomography (CT) in medical imaging is reflected in its increased use and availability since the early 1990's; however, given CT's relatively larger exposures (vs. planar x-ray), greater care must be taken to ensure that CT procedures are optimised in terms of providing the smallest dose possible while maintaining sufficient diagnostic image quality. The development of CT Diagnostic Reference Levels (DRLs) supports this process. DRLs have been suggested/supported by international/national bodies since the early 1990's and widely adopted elsewhere, but not on a national basis in Canada. Essentially, CT DRLs provide guidance on what is considered good practice for common CT exams, but they require a representative sample of CT examination data before any recommendations can be made. Canada's National CT Survey project, in collaboration with provincial/territorial authorities, has collected a large national sample of CT practice data for 7 common examinations (with associated clinical indications) of both adult and pediatric patients. Following completion of data entry into a common database, a survey summary report will be produced and recommendations on CT DRLs will be made from these data. It is hoped that these can then be used by local regions to promote CT practice optimisation and support dose reduction initiatives.
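
    DRLs are conventionally set at the 75th percentile of the surveyed dose distribution. A minimal sketch of that computation on hypothetical survey values (not data from this survey):

    ```python
    import numpy as np

    # Hypothetical CTDIvol survey values (mGy) for one exam type across facilities.
    ctdi_vol = np.array([8.2, 11.5, 9.7, 14.3, 10.1, 13.8, 7.9, 12.6, 16.0, 9.4])

    # DRLs are conventionally set at the 75th percentile of the surveyed
    # distribution: facilities above it should review their protocols.
    drl = np.percentile(ctdi_vol, 75)
    print(f"suggested DRL: {drl:.1f} mGy")
    ```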

  6. Wavelet library for constrained devices

    NASA Astrophysics Data System (ADS)

    Ehlers, Johan Hendrik; Jassim, Sabah A.

    2007-04-01

    The wavelet transform is a powerful tool for image and video processing, useful in a range of applications. This paper is concerned with the efficiency of a certain fast-wavelet-transform (FWT) implementation and several wavelet filters better suited to constrained devices. Such constraints are typically found on mobile (cell) phones and personal digital assistants (PDAs), and can be a combination of limited memory, slow floating-point operations (compared with integer operations, most often as a result of missing hardware support), and limited local storage. Yet these devices are burdened with demanding tasks such as processing a live video or audio signal through on-board capturing sensors. In this paper we present a new wavelet software library, HeatWave, that can be used efficiently for image/video processing and analysis tasks on mobile phones and PDAs. We will demonstrate that HeatWave is suitable for real-time applications, with fine control and range to suit transform demands, and we shall present experimental results to substantiate these claims. Finally, since this library is intended for real use, we considered several well-known differences between common embedded operating system platforms, such as a lack of common routines or functions, stack limitations, etc. This makes HeatWave suitable for a range of applications and research projects.
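
    To illustrate why lifting-based integer wavelets suit such devices, here is a minimal sketch of a one-level integer-to-integer Haar transform (a generic example, not HeatWave's API) that uses only integer adds and shifts:

    ```python
    import numpy as np

    def haar_lifting_fwd(x):
        """One level of an integer-to-integer Haar transform via lifting.

        Uses only integer adds and shifts (no floating point), the property
        that makes lifting-based wavelets attractive on constrained devices."""
        x = np.asarray(x, dtype=np.int64)
        even, odd = x[0::2].copy(), x[1::2].copy()
        d = odd - even                 # predict step: detail coefficients
        s = even + (d >> 1)            # update step: approximation (rounded mean)
        return s, d

    def haar_lifting_inv(s, d):
        even = s - (d >> 1)            # undo the update
        odd = d + even                 # undo the predict
        x = np.empty(s.size + d.size, dtype=np.int64)
        x[0::2], x[1::2] = even, odd
        return x

    sig = np.array([12, 14, 200, 202, 50, 54, 7, 9])
    s, d = haar_lifting_fwd(sig)
    assert np.array_equal(haar_lifting_inv(s, d), sig)   # perfect reconstruction
    ```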

  7. We get the algorithms of our ground truths: Designing referential databases in digital image processing

    PubMed Central

    Jaton, Florian

    2017-01-01

    This article documents the practical efforts of a group of scientists designing an image-processing algorithm for saliency detection. By following the actors of this computer science project, the article shows that the problems often considered to be the starting points of computational models are in fact provisional results of time-consuming, collective and highly material processes that engage habits, desires, skills and values. In the project being studied, problematization processes lead to the constitution of referential databases called ‘ground truths’ that enable both the effective shaping of algorithms and the evaluation of their performances. Working as important common touchstones for research communities in image processing, the ground truths are inherited from prior problematization processes and may be imparted to subsequent ones. The ethnographic results of this study suggest two complementary analytical perspectives on algorithms: (1) an ‘axiomatic’ perspective that understands algorithms as sets of instructions designed to solve given problems computationally in the best possible way, and (2) a ‘problem-oriented’ perspective that understands algorithms as sets of instructions designed to computationally retrieve outputs designed and designated during specific problematization processes. If the axiomatic perspective on algorithms puts the emphasis on the numerical transformations of inputs into outputs, the problem-oriented perspective puts the emphasis on the definition of both inputs and outputs. PMID:28950802

  8. Comparison of magnetic resonance imaging and computed tomography in suspected lesions in the posterior cranial fossa.

    PubMed Central

    Teasdale, G. M.; Hadley, D. M.; Lawrence, A.; Bone, I.; Burton, H.; Grant, R.; Condon, B.; Macpherson, P.; Rowan, J.

    1989-01-01

    OBJECTIVE--To compare computed tomography and magnetic resonance imaging in investigating patients suspected of having a lesion in the posterior cranial fossa. DESIGN--Randomised allocation of newly referred patients to undergo either computed tomography or magnetic resonance imaging; the alternative investigation was performed subsequently only in response to a request from the referring doctor. SETTING--A regional neuroscience centre serving 2.7 million people. PATIENTS--1020 patients recruited between April 1986 and December 1987, all suspected by neurologists, neurosurgeons, or other specialists of having a lesion in the posterior fossa and referred for neuroradiology. The groups allocated to undergo computed tomography or magnetic resonance imaging were well matched in distributions of age, sex, specialty of referring doctor, investigation as an inpatient or an outpatient, suspected site of lesion, and presumed disease process; the referring doctor's confidence in the initial clinical diagnosis was also similar. INTERVENTIONS--After the patients had been imaged by either computed tomography or magnetic resonance (using a resistive magnet of 0.15 T), doctors were given the radiologist's report and a form asking if they considered that imaging with the alternative technique was necessary and, if so, why; it also asked for their current diagnoses and their confidence in them. MAIN OUTCOME MEASURES--Number of requests for the alternative method of investigation; assessment of characteristics of patients for whom further imaging was requested and lesions that were suspected initially; and how the results of the second imaging affected clinicians' and radiologists' opinions. RESULTS--Ninety-three of the 501 patients who initially underwent computed tomography were referred subsequently for magnetic resonance imaging, whereas only 28 of the 493 patients who initially underwent magnetic resonance imaging were referred subsequently for computed tomography. Over the study the number of patients referred for magnetic resonance imaging after computed tomography increased, while requests for computed tomography after magnetic resonance imaging decreased. The reason clinicians gave most commonly for requesting further imaging by magnetic resonance was that the results of the initial computed tomography failed to exclude their suspected diagnosis (64 patients); this was less common in patients investigated initially by magnetic resonance imaging (eight patients). Management of 28 patients (6%) imaged initially with computed tomography and 12 patients (2%) imaged initially with magnetic resonance was changed on the basis of the results of the alternative imaging. CONCLUSIONS--Magnetic resonance imaging provided doctors with the information required to manage patients suspected of having a lesion in the posterior fossa more commonly than computed tomography, but computed tomography alone was satisfactory in 80% of cases... PMID:2506965

  9. MUSE: MUlti-atlas region Segmentation utilizing Ensembles of registration algorithms and parameters, and locally optimal atlas selection

    PubMed Central

    Ou, Yangming; Resnick, Susan M.; Gur, Ruben C.; Gur, Raquel E.; Satterthwaite, Theodore D.; Furth, Susan; Davatzikos, Christos

    2016-01-01

    Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web-based platform for remote processing of medical images. PMID:26679328
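
    A toy sketch of locally weighted label fusion in the spirit of (but much simpler than) MUSE's local ranking: each atlas's vote at a voxel is weighted by the local intensity similarity between its registered image and the target. Function and variable names are illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def locally_weighted_fusion(atlas_labels, atlas_images, target, patch=2):
        """Fuse candidate segmentations by voting, weighting each atlas at
        each voxel by the local similarity of its registered image to the
        target (a simplified stand-in for locally optimal atlas selection)."""
        votes = {}
        for lab, img in zip(atlas_labels, atlas_images):
            # local mean squared difference -> per-voxel similarity weight
            mse = uniform_filter((img - target) ** 2, size=2 * patch + 1)
            w = 1.0 / (mse + 1e-6)
            for value in np.unique(lab):
                votes[value] = votes.get(value, 0) + w * (lab == value)
        keys = sorted(votes)
        stacked = np.stack([votes[k] for k in keys])
        return np.array(keys)[np.argmax(stacked, axis=0)]

    rng = np.random.default_rng(0)
    target = rng.random((32, 32))
    atlas_images = [target + rng.normal(0, 0.1, target.shape) for _ in range(3)]
    atlas_labels = [(img > 0.5).astype(int) for img in atlas_images]
    fused = locally_weighted_fusion(atlas_labels, atlas_images, target)
    ```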

  10. Dual energy x-ray imaging and scoring of coronary calcium: physics-based digital phantom and clinical studies

    NASA Astrophysics Data System (ADS)

    Zhou, Bo; Wen, Di; Nye, Katelyn; Gilkeson, Robert C.; Wilson, David L.

    2016-03-01

    Coronary artery calcification (CAC) as assessed with the CT calcium score is the best biomarker of coronary artery disease. Dual energy x-ray provides an inexpensive, low radiation-dose alternative. A two-shot system (GE Revolution-XRd) is used: raw images are processed with a custom algorithm, and a coronary calcium image (DECCI) is created, similar to the bone image but optimized for CAC visualization rather than lung visualization. In this report, we developed a physics-based digital phantom containing heart, lung, CAC, spine, rib, pulmonary artery, and adipose elements; examined effects on the DECCI; suggested physics-inspired algorithms to improve CAC contrast; and evaluated the correlation between CT calcium scores and a proposed DE calcium score. In simulation experiments, beam hardening from increasing adipose thickness (2 cm to 8 cm) reduced Cg by 19% and 27% in the 120-kVp and 60-kVp images, respectively, but by less than 7% in the DECCI. If a pulmonary artery moves or pulsates with blood filling between exposures, it can give rise to a significantly confounding PA signal in the DECCI, similar in amplitude to CAC. These observations suggest modifications to DECCI processing that can further improve CAC contrast by a factor of 2 in clinical exams. The DE score had the best correlation with the "CT mass score" among three commonly used CT scores. The results suggest that DE x-ray is a promising tool for imaging and scoring CAC, with opportunity remaining for further DECCI processing improvements.
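
    A toy NumPy sketch of dual-energy weighted log subtraction, the generic principle behind a calcium-selective image such as DECCI (this is not the authors' algorithm): the weight is chosen so soft tissue cancels, leaving calcium visible.

    ```python
    import numpy as np

    def de_subtract(low_kvp, high_kvp, w):
        """Weighted log subtraction of a low/high-kVp image pair.

        Choosing w to cancel soft tissue leaves bone-like material (such
        as calcium) in the result; a different w would cancel bone."""
        return np.log(high_kvp) - w * np.log(low_kvp)

    # Toy phantom: transmitted intensities through tissue +/- a calcium deposit.
    low = np.full((8, 8), 0.40)
    high = np.full((8, 8), 0.55)
    low[3:5, 3:5] *= 0.70    # calcium attenuates more strongly at low kVp
    high[3:5, 3:5] *= 0.85

    # Pick w from a calcium-free region so soft tissue cancels there.
    w = np.log(high[0, 0]) / np.log(low[0, 0])
    decci_like = de_subtract(low, high, w)   # calcium stands out, tissue ~ 0
    ```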

  11. Multispectral image sharpening using a shift-invariant wavelet transform and adaptive processing of multiresolution edges

    USGS Publications Warehouse

    Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.

    2002-01-01

    Enhanced false color images from mid-IR, near-IR (NIR), and visible bands of the Landsat thematic mapper (TM) are commonly used for visually interpreting land cover type. Described here is a technique for sharpening, or fusion, of NIR with higher-resolution panchromatic (Pan) imagery that uses a shift-invariant implementation of the discrete wavelet transform (SIDWT) and a reported pixel-based selection rule to combine coefficients. There can be contrast reversals (e.g., at soil-vegetation boundaries between NIR and visible-band images) and, consequently, degraded sharpening and edge artifacts. To improve performance under these conditions, a local area-based correlation technique originally reported for comparing image-pyramid-derived edges was used for the adaptive processing of wavelet-derived edge data. Using the redundant data of the SIDWT also improves edge data generation, and there is additional improvement because sharpened subband imagery is used with the edge-correlation process. A previously reported technique for sharpening three-band spectral imagery used forward and inverse intensity, hue, and saturation transforms and wavelet-based sharpening of intensity. That technique had limitations with opposite-contrast data, and in this study sharpening was applied to single-band multispectral-Pan image pairs. Sharpening used simulated 30-m NIR imagery produced by degrading the spatial resolution of a higher-resolution reference. Performance, evaluated by comparison between the sharpened and reference images, improved when sharpened subband data were used with the edge correlation.
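
    A compact sketch of shift-invariant wavelet fusion with a pixel-based max-absolute selection rule, assuming PyWavelets' swt2/iswt2 provide the shift-invariant transform; this is a generic illustration, not the paper's adaptive edge-correlation variant.

    ```python
    import numpy as np
    import pywt

    def swt_fuse(pan, ms_band, wavelet="haar", level=1):
        """Shift-invariant wavelet fusion: inject the panchromatic detail
        coefficients with larger magnitude into the multispectral band,
        keeping the multispectral approximation (its spectral content)."""
        cp = pywt.swt2(pan, wavelet, level=level)
        cm = pywt.swt2(ms_band, wavelet, level=level)
        fused = []
        for (ap, dp), (am, dm) in zip(cp, cm):
            details = tuple(np.where(np.abs(p) > np.abs(m), p, m)
                            for p, m in zip(dp, dm))
            fused.append((am, details))
        return pywt.iswt2(fused, wavelet)

    pan = np.random.rand(64, 64)    # stand-in for high-resolution panchromatic
    ms = np.random.rand(64, 64)     # stand-in for one upsampled MS band
    sharp = swt_fuse(pan, ms)
    ```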

  12. Analysis of deep inferior epigastric perforator (DIEP) arteries by using MDCTA: Comparison between 2 post-processing techniques.

    PubMed

    Saba, Luca; Atzeni, Matteo; Ribuffo, Diego; Mallarini, Giorgio; Suri, Jasjit S

    2012-08-01

    Our purpose was to compare two post-processing techniques, maximum intensity projection (MIP) and volume rendering (VR), for the study of perforator arteries. Thirty patients who underwent multi-detector-row CT angiography (MDCTA) between February 2010 and May 2010 were retrospectively analyzed. For each patient and each reconstruction method, image quality was evaluated, and inter- and intra-observer agreement was calculated according to the Cohen kappa statistic. The Hounsfield unit (HU) value in the common femoral artery was quantified, and the correlation (Pearson statistic) between image quality and HU value was explored. The Pearson r between the right and left common femoral arteries was excellent (r=0.955). The highest image-quality score was obtained using MIP for both observers (total value 75, mean 2.67, for observer 1; total value 79, mean 2.82, for observer 2). The highest agreement between the two observers was obtained with the MIP protocol, with a Cohen kappa of 0.856. The ROC area under the curve (Az) for VR was 0.786 (SD 0.086; p=0.0009), whereas the Az for MIP was 0.928 (SD 0.051; p=0.0001). MIP showed the best inter- and intra-observer agreement and the highest quality scores and therefore should be used as the post-processing technique in the analysis of perforator arteries. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
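
    For reference, a small self-contained sketch of the Cohen kappa statistic used above for inter-observer agreement, applied to hypothetical 0-3 quality scores (not the study's data):

    ```python
    import numpy as np

    def cohen_kappa(r1, r2, k):
        """Cohen's kappa for two raters assigning k ordinal categories."""
        conf = np.zeros((k, k))
        for a, b in zip(r1, r2):
            conf[a, b] += 1                               # confusion matrix
        p_o = np.trace(conf) / conf.sum()                 # observed agreement
        p_e = (conf.sum(0) @ conf.sum(1)) / conf.sum()**2 # chance agreement
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical 0-3 image-quality scores from two observers:
    obs1 = [2, 3, 3, 1, 2, 3, 2, 2, 3, 3]
    obs2 = [2, 3, 2, 1, 2, 3, 2, 3, 3, 3]
    print(f"kappa = {cohen_kappa(obs1, obs2, 4):.2f}")
    ```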

  13. Polarized light imaging of birefringence and diattenuation at high resolution and high sensitivity

    PubMed Central

    Mehta, Shalin B.; Shribak, Michael; Oldenbourg, Rudolf

    2013-01-01

    Polarized light microscopy provides unique opportunities for analyzing the molecular order in man-made and natural materials, including biological structures inside living cells, tissues, and whole organisms. Twenty years ago, the LC-PolScope was introduced as a modern version of the traditional polarizing microscope, enhanced by liquid crystal devices for the control of polarization and by electronic imaging and digital image processing for fast and comprehensive image acquisition and analysis. The LC-PolScope is commonly used for birefringence imaging, analyzing the spatial and temporal variations of the differential phase delay in ordered and transparent materials. Here we describe an alternative use of the LC-PolScope for imaging the polarization-dependent transmittance of dichroic materials. We explain the minor changes needed to convert the instrument between the two imaging modes, discuss the relationship between the quantities measured with either instrument, and touch on the physical connection between refractive index, birefringence, transmittance, diattenuation, and dichroism. PMID:24273640

  14. Pitfalls in classical nuclear medicine: myocardial perfusion imaging

    NASA Astrophysics Data System (ADS)

    Fragkaki, C.; Giannopoulou, Ch

    2011-09-01

    Scintigraphic imaging is a complex functional procedure subject to a variety of artefacts and pitfalls that may limit its clinical and diagnostic accuracy. It is important to be aware of them, to recognize them when present, and to eliminate them whenever possible. Pitfalls may occur at any stage of the imaging procedure and can be related to the γ-camera or other equipment, personnel handling, patient preparation, image processing, or the procedure itself. Often, potential causes of artefacts and pitfalls overlap. In this short review, special attention is given to cardiac scintigraphic imaging. The most common causes of artefacts in myocardial perfusion imaging are soft-tissue attenuation as well as motion and gating errors. Additionally, clinical problems such as cardiac abnormalities may cause interpretation pitfalls, and nuclear medicine physicians should be familiar with these in order to ensure correct evaluation of the study. Artefacts or suboptimal image quality can also result from infiltrated injections, misalignment in patient positioning, power instability or interruption, flood-field non-uniformities, a cracked crystal, and several other technical causes.

  15. DSP+FPGA-based real-time histogram equalization system of infrared image

    NASA Astrophysics Data System (ADS)

    Gu, Dongsheng; Yang, Nansheng; Pi, Defu; Hua, Min; Shen, Xiaoyan; Zhang, Ruolan

    2001-10-01

    Histogram modification is a simple but effective method for enhancing an infrared image. Several methods exist for equalizing an infrared image's histogram, reflecting the different characteristics of different infrared images: the traditional HE (histogram equalization) method, and the improved HP (histogram projection) and PE (plateau equalization) methods, among others. To realize these methods in a single system, the system must have a large amount of memory and extremely high speed. In our system, we introduce a DSP + FPGA based real-time processing technology to handle these methods together: the FPGA realizes the common part of the methods, while the DSP handles the parts that differ. The choice of method and its parameters can be entered via a keyboard or a computer. By this means, the system is powerful yet easy to operate and maintain. In this article, we give the block diagram of the system and the software flow chart of the methods. At the end, we show an infrared image and its histogram before and after processing with the HE method.
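
    The three equalization variants differ only in how the histogram is clipped before the cumulative mapping, which a short sketch makes clear (illustrative parameters; not the chip's fixed-point implementation):

    ```python
    import numpy as np

    def plateau_equalize(img, plateau=None, n_bins=256):
        """Histogram equalization with an optional plateau (clip) value.

        plateau=None            -> classic HE
        plateau=1               -> histogram projection (HP): each occupied
                                   bin counts once
        1 < plateau < max(hist) -> plateau equalization (PE)"""
        hist, edges = np.histogram(img, bins=n_bins)
        if plateau is not None:
            hist = np.minimum(hist, plateau)   # clip dominant bins
        cdf = np.cumsum(hist).astype(float)
        cdf /= cdf[-1]                         # normalize mapping to [0, 1]
        return np.interp(img.ravel(), edges[:-1], cdf).reshape(img.shape)

    ir = np.random.gamma(2.0, 50.0, (128, 128))  # stand-in for an IR frame
    he = plateau_equalize(ir)                    # traditional HE
    hp = plateau_equalize(ir, plateau=1)         # histogram projection
    pe = plateau_equalize(ir, plateau=40)        # plateau equalization
    ```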

  16. Hadamard multimode optical imaging transceiver

    DOEpatents

    Cooke, Bradly J; Guenther, David C; Tiee, Joe J; Kellum, Mervyn J; Olivas, Nicholas L; Weisse-Bernstein, Nina R; Judd, Stephen L; Braun, Thomas R

    2012-10-30

    Disclosed is a method and system for simultaneously acquiring and producing results for multiple image modes using a common sensor without optical filtering, scanning, or other moving parts. The system and method utilize the Walsh-Hadamard correlation detection process (e.g., functions/matrix) to provide an all-binary structure that permits seamless bridging between analog and digital domains. An embodiment may capture an incoming optical signal at an optical aperture, convert the optical signal to an electrical signal, pass the electrical signal through a Low-Noise Amplifier (LNA) to create an LNA signal, pass the LNA signal through one or more correlators where each correlator has a corresponding Walsh-Hadamard (WH) binary basis function, calculate a correlation output coefficient for each correlator as a function of the corresponding WH binary basis function in accordance with Walsh-Hadamard mathematical principles, digitize each of the correlation output coefficient by passing each correlation output coefficient through an Analog-to-Digital Converter (ADC), and performing image mode processing on the digitized correlation output coefficients as desired to produce one or more image modes. Some, but not all, potential image modes include: multi-channel access, temporal, range, three-dimensional, and synthetic aperture.
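
    A minimal numerical sketch of the Walsh-Hadamard correlation idea (the patent implements the correlators in analog hardware before digitization): the received waveform is projected onto ±1 basis functions, and the dominant coefficients identify the active modes.

    ```python
    import numpy as np
    from scipy.linalg import hadamard

    # Rows of a Hadamard matrix serve as +/-1 Walsh-Hadamard basis functions.
    n = 64
    H = hadamard(n)

    # Toy received waveform: two active basis functions plus noise.
    rng = np.random.default_rng(0)
    signal = 3.0 * H[5] - 1.5 * H[12] + rng.normal(0, 0.3, n)

    # Bank of correlators: each output coefficient is the inner product of
    # the waveform with one binary basis function (normalized).
    coeffs = H @ signal / n
    print(np.argsort(np.abs(coeffs))[-2:])   # dominant modes -> indices 12, 5
    ```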

  17. Daylight coloring for monochrome infrared imagery

    NASA Astrophysics Data System (ADS)

    Gabura, James

    2015-05-01

    The effectiveness of infrared imagery in poor-visibility situations is well established, and the range of applications is expanding as we enter a new era of inexpensive thermal imagers for mobile phones. However, there is a problem: the counterintuitive reflectance characteristics of various common scene elements can cause slowed reaction times and impaired situational awareness, consequences that can be especially detrimental in emergency situations. While multiband infrared sensors can be used, they are inherently more costly. Here we propose a technique for adding a daylight color appearance to single-band infrared images, using the normally overlooked property of local image texture. The simple method described here is illustrated with colorized images from the visible red and long-wave infrared bands. Our colorizing process not only imparts a natural daylight appearance to infrared images but also enhances the contrast and visibility of otherwise obscure detail. We anticipate that this colorizing method will lead to a better user experience, faster reaction times, and improved situational awareness for a growing community of infrared camera users. A natural extension of our process could expand upon its texture-discerning feature by adding specialized filters for discriminating specific targets.
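
    A sketch of the kind of local-texture measure such a method could rely on, using a local standard-deviation filter; the texture-to-color mapping itself is the paper's contribution and is only hinted at here.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_texture(img, size=7):
        """Local standard deviation -- a simple per-pixel texture measure
        that could drive the choice of daylight colors for a thermal frame."""
        mean = uniform_filter(img, size)
        mean_sq = uniform_filter(img * img, size)
        return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

    ir = np.random.rand(64, 64)   # stand-in for a LWIR frame
    tex = local_texture(ir)
    # e.g. smooth regions -> sky/road hues, rough -> vegetation (assumed mapping)
    ```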

  18. The COST Action IC0604 "Telepathology Network in Europe" (EURO-TELEPATH).

    PubMed

    García-Rojo, Marcial; Gonçalves, Luís; Blobel, Bernd

    2012-01-01

    The COST Action IC0604 "Telepathology Network in Europe" (EURO-TELEPATH) is a European COST Action that ran from 2007 to 2011. COST Actions are funded by the COST (European Cooperation in the field of Scientific and Technical Research) Agency, supported by the Seventh Framework Programme for Research and Technological Development (FP7) of the European Union. EURO-TELEPATH's main objectives were evaluating and validating the common technological framework and communication standards required to access, transmit, and manage digital medical records by pathologists and other medical professionals in a networked environment. The project was organized into four working groups. Working Group 1, "Business modeling in pathology", designed the main pathology processes - frozen study, formalin-fixed specimen study, telepathology, cytology, and autopsy - using Business Process Modeling Notation (BPMN). Working Group 2, "Informatics standards in pathology", was dedicated to promoting the development and application of informatics standards in pathology, collaborating with Integrating the Healthcare Enterprise (IHE), Digital Imaging and Communications in Medicine (DICOM), Health Level Seven (HL7), and other standardization bodies. Working Group 3, "Images: Analysis, Processing, Retrieval and Management", worked on the use of virtual or digital slides, which are fostering the use of image processing and analysis in pathology not only for research purposes but also in daily practice. Working Group 4, "Technology and Automation in Pathology", focused on studying the adequacy of existing technical solutions, including, for example, the quality of images obtained by slide scanners and the efficiency of image analysis applications. Major outcomes of this action are the collaboration with international health informatics standardization bodies to foster the development of standards for digital pathology, and a new approach to workflow analysis based on business process modeling. Health terminology standardization research has become a topic of high interest. Future research should focus on the standardization of automatic image analysis and tissue microarray imaging.

  19. Imaging of particles with 3D full parallax mode with two-color digital off-axis holography

    NASA Astrophysics Data System (ADS)

    Kara-Mohammed, Soumaya; Bouamama, Larbi; Picart, Pascal

    2018-05-01

    This paper proposes an approach based on two orthogonal views and two wavelengths for recording off-axis two-color holograms. The approach permits discrimination of particles aligned along the sight-view axis. The experimental set-up is based on a double Mach-Zehnder architecture in which two different wavelengths provide the reference and object beams. The digital processing used to obtain images of the particles is based on convolution, so as to obtain images with no wavelength dependence. The spatial bandwidth of the angular spectrum transfer function is adapted in order to increase the maximum reconstruction distance, which is generally limited to a few tens of millimeters. In order to obtain the images of particles in the 3D volume, a calibration process is proposed, based on the modulation theorem, to perfectly superimpose the two views in a common XYZ frame. The experimental set-up is applied to two-color hologram recording of moving, non-calibrated opaque particles with an average diameter of about 150 μm. After processing the two-color holograms with image reconstruction and view calibration, the locations of particles in the 3D volume can be obtained. In particular, the ambiguity caused by close particles, which generate hidden particles in a single-view scheme, can be removed to determine the exact number of particles in the region of interest.
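
    For context, a minimal sketch of standard angular spectrum propagation, the transform whose transfer-function bandwidth the authors adapt to extend the reconstruction distance (their adaptation is not reproduced here):

    ```python
    import numpy as np

    def angular_spectrum(field, wavelength, dx, z):
        """Propagate a complex hologram field a distance z using the
        angular spectrum method (band-limiting refinements omitted)."""
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, dx)          # spatial frequencies, 1/m
        fy = np.fft.fftfreq(ny, dx)
        FX, FY = np.meshgrid(fx, fy)
        arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
        kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
        H = np.exp(1j * kz * z) * (arg > 0)  # evanescent components suppressed
        return np.fft.ifft2(np.fft.fft2(field) * H)

    holo = np.exp(1j * np.random.rand(256, 256))   # stand-in hologram field
    img_plane = angular_spectrum(holo, wavelength=532e-9, dx=3.45e-6, z=0.05)
    ```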

  20. Quantitative fluorescence microscopy and image deconvolution.

    PubMed

    Swedlow, Jason R

    2013-01-01

    Quantitative imaging and image deconvolution have become standard techniques for the modern cell biologist because they can form the basis of an increasing number of assays for molecular function in a cellular context. A major application of fluorescence microscopy is the quantitative measurement of the localization, dynamics, and interactions of cellular factors, and the introduction of green fluorescent protein and its spectral variants has led to a significant increase in the use of fluorescence microscopy as a quantitative assay system. For quantitative imaging assays, it is critical to consider the nature of the image-acquisition system and to validate its response against known standards; any image-processing algorithm used before quantitative analysis should preserve the relative signal levels in different parts of the image. A very common image-processing algorithm, image deconvolution, is used to remove blurred signal from an image. There are two major types of deconvolution approaches: deblurring and restoration algorithms. Deblurring algorithms remove blur but treat a series of optical sections as individual two-dimensional entities and therefore sometimes mishandle blurred light. Restoration algorithms determine an object that, when convolved with the point-spread function of the microscope, could produce the image data. The advantages and disadvantages of these methods are discussed in this chapter. Image deconvolution in fluorescence microscopy has usually been applied to high-resolution imaging to improve contrast and thus detect small, dim objects that might otherwise be obscured. Proper use of these methods demands some consideration of the imaging hardware, the acquisition process, fundamental aspects of photon detection, and image processing. This can prove daunting for some cell biologists, but the power of these techniques has been proven many times in the works cited in the chapter and elsewhere. Their usage is now well defined, so they can be incorporated into the capabilities of most laboratories. Copyright © 1998 Elsevier Inc. All rights reserved.
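
    A compact sketch of the Richardson-Lucy algorithm, a classic restoration-type deconvolution of the kind described above (a generic implementation, not the chapter's code): it iteratively finds an object estimate that, convolved with the point-spread function, reproduces the recorded image.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(image, psf, n_iter=30):
        """Richardson-Lucy restoration for non-negative (photon-count) data."""
        est = np.full_like(image, image.mean())     # flat initial estimate
        psf_mirror = psf[::-1, ::-1]
        for _ in range(n_iter):
            blurred = fftconvolve(est, psf, mode="same")
            ratio = image / np.maximum(blurred, 1e-12)
            est *= fftconvolve(ratio, psf_mirror, mode="same")
        return est

    # Toy example: blur a point-like object, then restore it.
    g = np.exp(-0.5 * (np.arange(-7, 8) / 2.0) ** 2)
    psf = np.outer(g, g)
    psf /= psf.sum()
    obj = np.zeros((64, 64)); obj[30, 30] = 1.0
    blurred = fftconvolve(obj, psf, mode="same")
    restored = richardson_lucy(np.maximum(blurred, 0), psf)
    ```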

  1. Motor unit action potential conduction velocity estimated from surface electromyographic signals using image processing techniques.

    PubMed

    Soares, Fabiano Araujo; Carvalho, João Luiz Azevedo; Miosso, Cristiano Jacques; de Andrade, Marcelino Monteiro; da Rocha, Adson Ferreira

    2015-09-17

    In surface electromyography (surface EMG, or S-EMG), conduction velocity (CV) refers to the velocity at which motor unit action potentials (MUAPs) propagate along the muscle fibers during contractions. The CV is related to the type and diameter of the muscle fibers, ion concentration, pH, and the firing rate of the motor units (MUs). The CV can be used in the evaluation of the contractile properties of MUs and of muscle fatigue. The most popular methods for CV estimation are those based on maximum likelihood estimation (MLE). This work proposes an algorithm for estimating CV from S-EMG signals using digital image processing techniques. The proposed approach is demonstrated and evaluated using both simulated and experimentally acquired multichannel S-EMG signals. We show that the proposed algorithm is as precise and accurate as the MLE method under typical conditions of noise and CV. The proposed method is not susceptible to errors associated with MUAP propagation direction or inadequate initialization parameters, which are common with the MLE algorithm. Image-processing-based approaches may be useful in S-EMG analysis for extracting different physiological parameters from multichannel S-EMG signals. Other new methods based on image processing could also be developed to help solve other tasks in EMG analysis, such as estimating the CV of individual MUs, localizing and tracking innervation zones, and studying MU recruitment strategies.
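
    For comparison with image-processing approaches, a minimal sketch of the simplest delay-based CV estimate between two channels, taking the propagation delay from the cross-correlation peak (illustrative simulated signals, not the paper's method):

    ```python
    import numpy as np

    def conduction_velocity(ch1, ch2, fs, inter_electrode_dist):
        """Estimate CV from two S-EMG channels: the propagation delay is
        the lag that maximizes their cross-correlation."""
        ch1 = ch1 - ch1.mean()
        ch2 = ch2 - ch2.mean()
        xc = np.correlate(ch2, ch1, mode="full")
        lag = np.argmax(xc) - (len(ch1) - 1)   # samples ch2 lags behind ch1
        return inter_electrode_dist / (lag / fs)

    fs = 2048.0                                # Hz, assumed sampling rate
    t = np.arange(0, 0.25, 1 / fs)
    muap = np.sin(2 * np.pi * 80 * t) * np.exp(-((t - 0.05) / 0.01) ** 2)
    ch1, ch2 = muap, np.roll(muap, 4)          # simulate a 4-sample delay
    cv = conduction_velocity(ch1, ch2, fs, inter_electrode_dist=0.01)  # 10 mm
    print(f"CV ~ {cv:.1f} m/s")                # 0.01 / (4/2048) ~ 5.1 m/s
    ```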

  2. Attentional deployment is not necessary for successful emotion regulation via cognitive reappraisal or expressive suppression.

    PubMed

    Bebko, Genna M; Franconeri, Steven L; Ochsner, Kevin N; Chiao, Joan Y

    2014-06-01

    According to appraisal theories of emotion, cognitive reappraisal is a successful emotion regulation strategy because it involves cognitively changing our thoughts, which, in turn, change our emotions. However, recent evidence has challenged the importance of cognitive change and, instead, has suggested that attentional deployment may at least partly explain the emotion regulation success of cognitive reappraisal. The purpose of the current study was to examine the causal relationship between attentional deployment and emotion regulation success. We examined two commonly used emotion regulation strategies, cognitive reappraisal and expressive suppression, because both depend on attention but have divergent behavioral, experiential, and physiological outcomes. Participants were instructed to regulate emotions during either free-viewing (unrestricted image viewing) or gaze-controlled (restricted image viewing) conditions and to self-report negative emotional experience. For both emotion regulation strategies, emotion regulation success was not altered by changes in (a) participant control over the direction of attention (free-viewing vs. gaze-controlled) during image viewing or (b) the valence (negative vs. neutral) of visual stimuli viewed when gaze was controlled. Taken together, these findings provide convergent evidence that attentional deployment does not alter subjective negative emotional experience during either cognitive reappraisal or expressive suppression, suggesting that strategy-specific processes, such as cognitive appraisal and response modulation, respectively, may have a greater impact on emotion regulation success than processes common to both strategies, such as attention.

  3. Real-time simulation of the retina allowing visualization of each processing stage

    NASA Astrophysics Data System (ADS)

    Teeters, Jeffrey L.; Werblin, Frank S.

    1991-08-01

    The retina computes to let us see, but can we see the retina compute? Until now, the answer has been no, because the unconscious nature of the processing hides it from our view. Here the authors describe a method of seeing computations performed throughout the retina. This is achieved by using neurophysiological data to construct a model of the retina and implementing the model in real time on a special-purpose image-processing computer (PIPE). Processing in the model is organized into stages corresponding to computations performed by each retinal cell type; the final stage is the transient (change-detecting) ganglion cell. A CCD camera forms the input image, and the activity of a selected retinal cell type is the output, which is displayed on a TV monitor. By changing the retinal cell driving the monitor, the progressive transformations of the image by the retina can be observed. These simulations demonstrate the ubiquitous presence of temporal and spatial variations in the patterns of activity generated by the retina and fed into the brain. Their dynamical aspects make these patterns very different from those generated by the common DOG (difference-of-Gaussians) model of the receptive field. Because the retina is so successful in biological vision systems, the processing described here may be useful in machine vision.
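
    For contrast with the dynamic model, here is the static DOG (difference-of-Gaussians) receptive-field filter mentioned above, as a two-function sketch:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def dog(img, sigma_center=1.0, sigma_surround=3.0):
        """Difference-of-Gaussians: a static center-surround model of a
        retinal receptive field (narrow center minus broad surround)."""
        return (gaussian_filter(img, sigma_center) -
                gaussian_filter(img, sigma_surround))

    frame = np.random.rand(128, 128)   # stand-in for a camera frame
    response = dog(frame)              # ganglion-like center-surround output
    ```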

  4. Symmetrization for redundant channels

    NASA Technical Reports Server (NTRS)

    Tulplue, Bhalchandra R. (Inventor); Collins, Robert E. (Inventor)

    1988-01-01

    A plurality of redundant channels in a system each contain a global image of all the configuration data bases in each of the channels in the system. Each global image is updated periodically from each of the other channels via cross channel data links. The global images of the local configuration data bases in each channel are separately symmetrized using a voting process to generate a system signal configuration data base which is not written into by any other routine and is available for indicating the status of the system within each channel. Equalization may be imposed on a suspect signal and a number of chances for that signal to heal itself are provided before excluding it from future votes. Reconfiguration is accomplished upon detecting a channel which is deemed invalid. A reset function is provided which permits an externally generated reset signal to permit a previously excluded channel to be reincluded within the system. The updating of global images and/or the symmetrization process may be accomplished at substantially the same time within a synchronized time frame common to all channels.
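
    A toy software sketch of the voting-with-healing idea described in the patent (hypothetical thresholds and strike counts; the patent's mechanism operates on channel configuration data bases in hardware): the median of the active channels is voted as truth, outliers accumulate strikes, and a channel is excluded once it runs out of chances to heal.

    ```python
    import numpy as np

    def symmetrize(channels, strikes=None, tol=0.05, max_strikes=3):
        """Vote redundant channel values into one system value."""
        channels = np.asarray(channels, dtype=float)
        if strikes is None:
            strikes = np.zeros(len(channels), dtype=int)
        active = strikes < max_strikes          # excluded channels don't vote
        voted = np.median(channels[active])
        suspect = np.abs(channels - voted) > tol
        strikes = np.where(suspect, strikes + 1, 0)   # healing resets count
        return voted, strikes

    strikes = None
    for reading in ([1.00, 1.01, 0.99], [1.00, 1.30, 1.01], [1.02, 1.31, 1.00]):
        value, strikes = symmetrize(reading, strikes=strikes)
        print(value, strikes)    # channel 2 accumulates strikes
    ```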

  5. Acoustic noise and functional magnetic resonance imaging: current strategies and future prospects.

    PubMed

    Amaro, Edson; Williams, Steve C R; Shergill, Sukhi S; Fu, Cynthia H Y; MacSweeney, Mairead; Picchioni, Marco M; Brammer, Michael J; McGuire, Philip K

    2002-11-01

    Functional magnetic resonance imaging (fMRI) has become the method of choice for studying the neural correlates of cognitive tasks. Nevertheless, the scanner produces acoustic noise during the image acquisition process, which is a problem for studies of the auditory pathway and of language generally. The scanner's acoustic noise not only produces activation in brain regions involved in auditory processing but also interferes with stimulus presentation. Several strategies can be used to address this problem, including modifications of hardware and software. Although reduction of the noise at its source would be ideal, it would require substantial hardware modifications to the current base of installed MRI systems. Therefore, the most common strategy employed to minimize the problem involves software modifications. In this work we consider three main types of acquisition: compressed, partially silent, and silent. For each implementation, paradigms using block and event-related designs are assessed. We also provide new data, using a silent event-related (SER) design, which demonstrate a higher blood oxygen level-dependent (BOLD) response to a simple auditory cue than a conventional image acquisition. Copyright 2002 Wiley-Liss, Inc.

  6. Image Processing Diagnostics: Emphysema

    NASA Astrophysics Data System (ADS)

    McKenzie, Alex

    2009-10-01

    Currently the computerized tomography (CT) scan can detect emphysema sooner than traditional x-rays, but other tests are required to measure more accurately the amount of affected lung. CT scan images show clearly whether a patient has emphysema, but visual inspection alone cannot quantify the degree of the disease, which appears merely as subtle, barely distinct dark spots on the lung. Our goal is to create a software plug-in that interfaces with existing open-source medical imaging software to automate the process of accurately diagnosing and determining emphysema severity levels in patients. This will be accomplished by performing a number of statistical calculations on data taken from CT scan images of several patients representing a wide range of severity of the disease. These analyses include an examination of the deviation from a normal distribution curve to determine skewness, a commonly used statistical parameter. Our preliminary results show that this method of assessment appears to be more accurate and robust than currently utilized methods, which involve looking at percentages of radiodensities in air passages of the lung.
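
    The skewness calculation described above reduces to a one-liner once the lung voxels are isolated. A minimal sketch (ours, assuming SciPy is available; the Hounsfield-unit thresholds are illustrative, not the study's):

        import numpy as np
        from scipy.stats import skew

        def lung_histogram_skewness(hu_values):
            # Emphysema shifts voxels toward very low attenuation, so the
            # asymmetry (skewness) of the lung HU histogram changes.
            lung = hu_values[(hu_values > -1024) & (hu_values < -300)]
            return skew(lung, bias=False)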

  7. Lightning attachment process to common buildings

    NASA Astrophysics Data System (ADS)

    Saba, M. M. F.; Paiva, A. R.; Schumann, C.; Ferro, M. A. S.; Naccarato, K. P.; Silva, J. C. O.; Siqueira, F. V. C.; Custódio, D. M.

    2017-05-01

    The physical mechanism of lightning attachment to grounded structures is one of the most important issues in lightning physics research, and it is the basis for the design of lightning protection systems. Most of what is known about the attachment process comes from leader propagation models that are mostly based on laboratory observations of long electrical discharges, or from observations of lightning attachment to tall structures. In this paper we use high-speed videos to analyze the attachment process of downward lightning flashes to an ordinary residential building. For the first time, we present characteristics of the attachment process to common structures that are present in almost every city (in this case, two buildings under 60 m in São Paulo City, Brazil). Parameters like striking distance and connecting leader speed, largely used in lightning attachment models and in lightning protection standards, are revealed in this work. Plain Language Summary: Since the time of Benjamin Franklin, no one has ever recorded high-speed video images of a lightning connection to a common building. It is very difficult to do. Cameras need to be very close to the structure chosen to be observed, and long observation time is required to register one lightning strike to that particular structure. Models and theories used to determine the zone of protection of a lightning rod have been developed, but they all suffer from the lack of field data. This manuscript provides results from high-speed video observations of lightning attachment to low buildings that are commonly found in almost every populated area around the world. The proximity of the camera and the high frame rate allowed us to see interesting details that will improve the understanding of the attachment process and, consequently, the models and theories used by lightning protection standards. This paper also presents spectacular images and videos of lightning flashes connecting to lightning rods that will be of interest not only to the lightning physics scientific community and to engineers who struggle with lightning protection, but also to all those who want to understand how a lightning rod works.

  8. Bayesian multi-scale smoothing of photon-limited images with applications to astronomy and medicine

    NASA Astrophysics Data System (ADS)

    White, John

    Multi-scale models for smoothing Poisson signals or images have gained much attention over the past decade. A new Bayesian model is developed using the concept of the Chinese restaurant process to find structures in two-dimensional images when performing image reconstruction or smoothing. This new model performs very well when compared to other leading methodologies for the same problem. It is developed and evaluated theoretically and empirically throughout Chapter 2. The newly developed Bayesian model is extended to three-dimensional images in Chapter 3. The third dimension has numerous different applications, such as different energy spectra, another spatial index, or possibly a temporal dimension. Empirically, this method shows promise in reducing error with the use of simulation studies.
A further development removes background noise in the image. This removal can further reduce the error and is done using a modeling adjustment and post-processing techniques. These details are given in Chapter 4. Applications to real-world problems are given throughout. Photon-based images are common in astronomical imaging due to the collection of different types of energy such as X-rays. Applications to real astronomical images are given; these consist of X-ray images from the Chandra X-ray Observatory satellite. Diagnostic medicine uses many types of imaging, such as magnetic resonance imaging and computed tomography, that can also benefit from smoothing techniques such as the one developed here. Reducing the amount of radiation a patient receives makes images noisier, but this can be mitigated through the use of image smoothing techniques. Both types of images represent potential real-world uses for these methods.

  9. Noise Gating Solar Images

    NASA Astrophysics Data System (ADS)

    DeForest, Craig; Seaton, Daniel B.; Darnell, John A.

    2017-08-01

    I present and demonstrate a new, general-purpose post-processing technique, "3D noise gating", that can reduce image noise by an order of magnitude or more without effective loss of spatial or temporal resolution in typical solar applications. Nearly all scientific images are, ultimately, limited by noise. Noise can be direct Poisson "shot noise" from photon counting effects, or introduced by other means such as detector read noise. Noise is typically represented as a random variable (perhaps with location- or image-dependent characteristics) that is sampled once per pixel or once per resolution element of an image sequence. Noise limits many aspects of image analysis, including photometry, spatiotemporal resolution, feature identification, morphology extraction, and background modeling and separation. Identifying and separating noise from image signal is difficult. The common practice of blurring in space and/or time works because most image "signal" is concentrated in the low Fourier components of an image, while noise is evenly distributed. Blurring in space and/or time attenuates the high spatial and temporal frequencies, reducing noise at the expense of also attenuating image detail. Noise gating exploits the same property -- "coherence" -- that we use to identify features in images, to separate image features from noise. Processing image sequences through 3-D noise gating results in spectacular (more than 10x) improvements in signal-to-noise ratio, while not blurring bright, resolved features in either space or time.
This improves most types of image analysis, including feature identification, time sequence extraction, absolute and relative photometry (including differential emission measure analysis), feature tracking, computer vision, correlation tracking, background modeling, cross-scale analysis, visual display/presentation, and image compression. I will introduce noise gating, describe the method, and show examples from several instruments (including SDO/AIA, SDO/HMI, STEREO/SECCHI, and GOES-R/SUVI) that explore the benefits and limits of the technique.
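
    The gating idea -- keep Fourier components that rise above the expected noise floor, zero the rest -- can be sketched in a few lines. This is a deliberately simplified global version (ours; the published method gates locally in overlapping windows and estimates the noise spectrum adaptively):

        import numpy as np

        def spectral_noise_gate(cube, noise_sigma, k=3.0):
            # cube: (t, y, x) image sequence; noise_sigma: per-pixel
            # white-noise standard deviation. White noise of std sigma
            # spreads evenly over the FFT, giving each coefficient an
            # RMS amplitude of sigma * sqrt(N).
            spec = np.fft.fftn(cube)
            noise_amp = noise_sigma * np.sqrt(cube.size)
            gate = np.abs(spec) > k * noise_amp
            return np.real(np.fft.ifftn(spec * gate))

    Coherent features concentrate their power in a few coefficients and survive the gate; noise, spread thinly over all coefficients, is removed rather than merely blurred.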

  10. Segmentation of skin lesions in chronic graft versus host disease photographs with fully convolutional networks

    NASA Astrophysics Data System (ADS)

    Wang, Jianing; Chen, Fuyao; Dellalana, Laura E.; Jagasia, Madan H.; Tkaczyk, Eric R.; Dawant, Benoit M.

    2018-02-01

    Chronic graft-versus-host disease (cGVHD) is a frequent and potentially life-threatening complication of allogeneic hematopoietic stem cell transplantation (HCT) and commonly affects the skin, resulting in distressing patient morbidity. The percentage of involved body surface area (BSA) is commonly used for diagnosing and scoring the severity of cGVHD. However, segmentation of the involved BSA from patient whole-body serial photography is challenging because (1) it is difficult to design traditional segmentation methods that rely on hand-crafted features, as the appearance of cGVHD lesions can differ drastically from patient to patient; and (2) to the best of our knowledge, there is currently no publicly available labelled image set of cGVHD skin for training deep networks to segment the involved BSA. In this preliminary study we create a small labelled image set of skin cGVHD, and we explore the possibility of using a fully convolutional neural network (FCN) to segment the skin lesions in the images. We use a commercial stereoscopic Vectra H1 camera (Canfield Scientific) to acquire 400 3D photographs of 17 cGVHD patients aged between 22 and 72. A rotational data augmentation process is then applied, which rotates the 3D photos through 10 predefined angles, producing one 2D projection image at each position. This results in 4000 2D images that constitute our cGVHD image set. A FCN model is trained and tested using our images. We show that our method achieves encouraging results for segmenting cGVHD skin lesions in photographic images.

  11. Region Templates: Data Representation and Management for High-Throughput Image Analysis

    PubMed Central

    Pan, Tony; Kurc, Tahsin; Kong, Jun; Cooper, Lee; Klasky, Scott; Saltz, Joel

    2015-01-01

    We introduce a region template abstraction and framework for the efficient storage, management and processing of common data types in the analysis of large datasets of high-resolution images on clusters of hybrid computing nodes. The region template abstraction provides a generic container template for common data structures, such as points, arrays, regions, and object sets, within a spatial and temporal bounding box. It allows for different data management strategies and I/O implementations, while providing a homogeneous, unified interface to applications for data storage and retrieval. A region template application is represented as a hierarchical dataflow in which each computing stage may be represented as another dataflow of finer-grain tasks. The execution of the application is coordinated by a runtime system that implements optimizations for hybrid machines, including performance-aware scheduling for maximizing the utilization of computing devices and techniques to reduce the impact of data transfers between CPUs and GPUs. An experimental evaluation on a state-of-the-art hybrid cluster using a microscopy imaging application shows that the abstraction adds negligible overhead (about 3%) and achieves good scalability and high data transfer rates. Optimizations in a high-speed disk-based storage implementation of the abstraction to support asynchronous data transfers and computation result in an application performance gain of about 1.13×. Finally, a processing rate of 11,730 4K×4K tiles per minute was achieved for the microscopy imaging application on a cluster with 100 nodes (300 GPUs and 1,200 CPU cores). This computation rate enables studies with very large datasets. PMID:26139953

  12. Social Cognition in Williams Syndrome: Face Tuning

    PubMed Central

    Pavlova, Marina A.; Heiz, Julie; Sokolov, Alexander N.; Barisnikov, Koviljka

    2016-01-01

    Many neurological, neurodevelopmental, neuropsychiatric, and psychosomatic disorders are characterized by impairments in visual social cognition, body language reading, and facial assessment of a social counterpart. Yet a wealth of research indicates that individuals with Williams syndrome exhibit remarkable concern for social stimuli and face fascination.
Here individuals with Williams syndrome were presented with a set of Face-n-Food images composed of food ingredients that resemble a face to different degrees (slightly bordering on the Giuseppe Arcimboldo style). The primary advantage of these images is that single components do not explicitly trigger face-specific processing, whereas in face images commonly used for investigating face perception (such as photographs or depictions), the mere occurrence of typical cues already implicates face presence. In a spontaneous recognition task, participants were shown a set of images in a predetermined order, from the least to the most resembling a face. Strikingly, individuals with Williams syndrome exhibited profound deficits in recognizing the Face-n-Food images as a face: they did not report seeing a face on images that typically developing controls effortlessly recognized as a face, and gave overall fewer face responses. This suggests atypical face tuning in Williams syndrome. The outcome is discussed in the light of a general pattern of social cognition in Williams syndrome and the brain mechanisms underpinning face processing. PMID:27531986

  13. Real-time imaging through strongly scattering media: seeing through turbid media, instantly

    PubMed Central

    Sudarsanam, Sriram; Mathew, James; Panigrahi, Swapnesh; Fade, Julien; Alouini, Mehdi; Ramachandran, Hema

    2016-01-01

    Numerous everyday situations like navigation, medical imaging and rescue operations require viewing through optically inhomogeneous media. This is a challenging task, as photons propagate predominantly diffusively (rather than ballistically) due to random multiple scattering off the inhomogeneities. Real-time imaging with ballistic light under continuous-wave illumination is even more challenging due to the extremely weak signal, necessitating voluminous data processing. Here we report imaging through strongly scattering media in real time and at rates several times the critical flicker frequency of the eye, so that motion is perceived as continuous. Two factors contributed to the speedup of more than three orders of magnitude over conventional techniques -- the use of a simplified algorithm enabling processing of data on the fly, and the utilisation of the task- and data-parallelization capabilities of typical desktop computers. The extreme simplicity of the technique, and its implementation with present-day low-cost technology, promises its utility in a variety of devices in maritime, aerospace, rail and road transport, in medical imaging and defence. It is of equal interest to the common man and to adventure sportspersons such as hikers, divers and mountaineers, who frequently encounter situations requiring real-time imaging through obscuring media. As a specific example, navigation under poor visibility is examined. PMID:27114106

  14. Feature Detection of Curve Traffic Sign Image on The Bandung - Jakarta Highway

    NASA Astrophysics Data System (ADS)

    Naseer, M.; Supriadi, I.; Supangkat, S. H.

    2018-03-01

    Unsealed roadsides and problems with the road surface are common causes of road crashes, particularly when combined with curves. The curve traffic sign is an important component for giving early warning to drivers, especially in high-speed traffic such as on a highway. Traffic sign detection has become a very active research area, and in this paper the detection of curve traffic signs is discussed. Two types of curve signs are considered, the curve turning to the left and the curve turning to the right, and all data samples used are curves recorded from signs on the Bandung - Jakarta Highway. Feature detection of the curve signs uses the Speeded-Up Robust Features (SURF) method, with detected scene images of 800x450 pixels. Of 45 curve-turn-to-the-right images, the system detected the features correctly in 35, a success rate of 77.78%; of the 45 curve-turn-to-the-left images, the system detected the features correctly in 34, a success rate of 75.56%. The average detection accuracy is therefore 76.67%, and the average time for the detection process is 0.411 seconds.
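
    The detection pipeline above is a standard local-feature template match. The sketch below (ours) shows the general shape of such a pipeline in OpenCV; ORB is substituted for SURF because SURF ships only in OpenCV's non-free contrib builds, and the paths and match threshold are illustrative:

        import cv2

        def detect_sign(template_path, scene_path, min_matches=10):
            # Match local features between a sign template and a road
            # scene; enough consistent matches counts as a detection.
            orb = cv2.ORB_create(nfeatures=1000)
            tmpl = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
            scene = cv2.imread(scene_path, cv2.IMREAD_GRAYSCALE)
            _, des_t = orb.detectAndCompute(tmpl, None)
            _, des_s = orb.detectAndCompute(scene, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(des_t, des_s),
                             key=lambda m: m.distance)[:50]
            return len(matches) >= min_matches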

  15. Statistical ultrasonics: the influence of Robert F. Wagner

    NASA Astrophysics Data System (ADS)

    Insana, Michael F.

    2009-02-01

    An important ongoing question for higher education is how to successfully mentor the next generation of scientists and engineers. It has been my privilege to have been mentored by one of the best, Dr Robert F. Wagner and his colleagues at the CDRH/FDA during the mid-1980s. Bob introduced many of us in medical ultrasonics to statistical imaging techniques. These ideas continue to broadly influence studies on adaptive aperture management (beamforming, speckle suppression, compounding), tissue characterization (texture features, Rayleigh/Rician statistics, scatterer size and number density estimators), and fundamental questions about how limitations of the human eye-brain system for extracting information from textured images can motivate image processing. He adapted the classical techniques of signal detection theory to coherent imaging systems in a way that, for the first time in ultrasonics, related common engineering metrics for image quality to task-based clinical performance. This talk summarizes my wonderfully exciting three years with Bob as I watched him explore topics in statistical image analysis that formed a rational basis for many of the signal processing techniques used in commercial systems today. It is a story of an exciting time in medical ultrasonics, and of how a sparkling personality guided and motivated the development of the junior scientists who flocked around him in admiration and amazement.

  16. Towards nonionizing photoacoustic cystography

    NASA Astrophysics Data System (ADS)

    Kim, Chulhong; Jeon, Mansik; Wang, Lihong V.

    2012-02-01

    Normally, urine flows down from the kidneys to the bladder. Vesicoureteral reflux (VUR) is the abnormal flow of urine from the bladder back to the kidneys. VUR commonly follows urinary tract infection and can lead to renal infection. Fluoroscopic voiding cystourethrography and direct radionuclide voiding cystography have been the clinical gold standards for VUR imaging, but these methods are ionizing. Here, we demonstrate the feasibility of a novel and nonionizing process for VUR mapping in vivo, called photoacoustic cystography (PAC). Using a photoacoustic (PA) imaging system, we have successfully imaged a rat bladder filled with methylene blue, a dye in routine clinical use. An image contrast of ~8 was achieved. Further, spectroscopic PAC confirmed the accumulation of methylene blue in the bladder.
Using a laser pulse energy of less than 1 mJ/cm2, the bladder was clearly visible in the PA image. Our results suggest that this technology could be a useful clinical tool, allowing clinicians to identify the bladder noninvasively in vivo.

  17. Intelligent imaging systems for automotive applications

    NASA Astrophysics Data System (ADS)

    Thompson, Chris; Huang, Yingping; Fu, Shan

    2004-03-01

    In common with many other application areas, visual signals are becoming an increasingly important information source for many automotive applications. For several years CCD cameras have been used as research tools for a range of automotive applications. Infrared cameras, RADAR and LIDAR are other types of imaging sensors that have also been widely investigated for use in cars. This paper describes work in this field performed in C2VIP over the last decade, starting with Night Vision Systems and looking at various other Advanced Driver Assistance Systems. Emerging from this experience, we make the following observations, which are crucial for "intelligent" imaging systems: 1. Careful arrangement of the sensor array. 2. Dynamic self-calibration. 3. Networking and processing. 4. Fusion with other imaging sensors, both at the image level and the feature level, which provides much more flexibility and reliability in complex situations. We discuss how these problems can be addressed and what the outstanding issues are.

  18. Advanced image fusion algorithms for Gamma Knife treatment planning. Evaluation and proposal for clinical use.

    PubMed

    Apostolou, N; Papazoglou, Th; Koutsouris, D

    2006-01-01

    Image fusion is a process of combining information from multiple sensors. It is a useful tool implemented in the treatment planning programme of Gamma Knife radiosurgery. In this paper we evaluate advanced image fusion algorithms for head images on the Matlab platform. We implement nine grayscale image fusion methods in Matlab: average, principal component analysis (PCA), discrete wavelet transform (DWT), and the Laplacian, filter-subtract-decimate (FSD), contrast, gradient and morphological pyramids, plus a shift-invariant discrete wavelet transform (SIDWT) method. We test these methods qualitatively and quantitatively. The quantitative criteria we use are the Root Mean Square Error (RMSE), the Mutual Information (MI), the Standard Deviation (STD), the Entropy (H), the Difference Entropy (DH) and the Cross Entropy (CEN). The qualitative criteria are: natural appearance, brilliance contrast, presence of complementary features and enhancement of common features.
Finally, we make clinically useful suggestions.
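
    The simplest of the nine fusion rules above, together with one of the quantitative criteria, fits in a few lines of Python (our sketch, not the paper's Matlab code):

        import numpy as np

        def fuse_average(img_a, img_b):
            # Pixelwise average of two co-registered grayscale images:
            # the baseline fusion rule among those evaluated.
            return (img_a.astype(float) + img_b.astype(float)) / 2.0

        def rmse(reference, fused):
            # Root Mean Square Error, one of the quantitative criteria.
            diff = reference.astype(float) - fused.astype(float)
            return float(np.sqrt(np.mean(diff ** 2)))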

  19. Web-accessible cervigram automatic segmentation tool

    NASA Astrophysics Data System (ADS)

    Xue, Zhiyun; Antani, Sameer; Long, L. Rodney; Thoma, George R.

    2010-03-01

    Uterine cervix image analysis is of great importance to the study of uterine cervix cancer, which is among the leading cancers affecting women worldwide. In this paper, we describe our proof-of-concept, Web-accessible system for automated segmentation of significant tissue regions in uterine cervix images, which also demonstrates our research efforts toward promoting collaboration between engineers and physicians on medical image analysis projects. Our design and implementation unifies the merits of two commonly used languages, MATLAB and Java. It circumvents the heavy workload of recoding the sophisticated segmentation algorithms originally developed in MATLAB into Java, while allowing remote users who are not experienced programmers and algorithm developers to apply those processing methods to their own cervicographic images and evaluate the algorithms. Several other practical issues of the system are also discussed, such as the compression of images and the format of the segmentation results.

  20. Optical diagnostics of turbulent mixing in explosively-driven shock tube

    NASA Astrophysics Data System (ADS)

    Anderson, James; Hargather, Michael

    2016-11-01

    Explosively-driven shock tube experiments were performed to investigate the turbulent mixing of explosive product gases and ambient air. A small detonator initiated Al / I2O5 thermite, which produced a shock wave and expanding product gases. Schlieren and imaging spectroscopy were applied simultaneously along a common optical path to identify correlations between turbulent structures and spatially resolved absorbance. The schlieren imaging identifies flow features, including shock waves and turbulent structures, while the imaging spectroscopy identifies regions of iodine gas in the product gases. Pressure transducers located before and after the optical diagnostic section measure time-resolved pressure. Shock speed is measured by tracking the leading edge of the shock wave in the schlieren images and from the pressure transducers. The turbulent mixing characteristics were determined using digital image processing. Results show changes in shock speed, product gas propagation, and species concentrations for varied explosive charge mass. Funded by DTRA Grant HDTRA1-14-1-0070.

  21. Monitoring human melanocytic cell responses to piperine using multispectral imaging

    NASA Astrophysics Data System (ADS)

    Samatham, Ravikant; Phillips, Kevin G.; Sonka, Julia; Yelma, Aznegashe; Reddy, Neha; Vanka, Meenakshi; Thuillier, Philippe; Soumyanath, Amala; Jacques, Steven

    2011-03-01

    Vitiligo is a depigmentary disease characterized by melanocyte loss, attributed most commonly to autoimmune mechanisms. Vitiligo has a high incidence (1% worldwide) but a poor set of treatment options. Piperine, a compound found in black pepper, is a potential treatment for vitiligo due to its ability to stimulate mouse epidermal melanocyte proliferation in vitro and in vivo. The present study investigates the use of multispectral imaging and an image processing technique based on local contrast to quantify the stimulatory effects of piperine on human melanocyte proliferation in reconstructed epidermis. We demonstrate the ability of the imaging method to quantify increased pigmentation in response to piperine treatment. The quantification of melanocyte stimulation by the proposed imaging technique illustrates the potential of this technology to quickly and noninvasively assess the therapeutic responses of vitiligo tissue culture models to treatment.

  22. Operational GPS Imaging System at Multiple Scales for Earth Science and Monitoring of Geohazards

    NASA Astrophysics Data System (ADS)

    Blewitt, Geoffrey; Hammond, William; Kreemer, Corné

    2016-04-01

    Toward scientific targets that range from slow deep-Earth processes to rapid response to geohazards, our operational GPS data analysis system produces smooth yet detailed maps of 3-dimensional land motion with respect to Earth's center of mass, at multiple spatio-temporal scales and with various latencies.
"GPS Imaging" is implemented operationally as a back-end processor to our GPS data processing facility, which uses JPL's GIPSY OASIS II software to produce positions from 14,000 GPS stations in ITRF every 5 minutes, with coordinate precision that gradually improves as latency increases upward from 1 hour to 2 weeks. Our GPS Imaging system then applies sophisticated signal processing and image filtering techniques to generate images of land motion covering our Earth's continents with high levels of robustness, accuracy, spatial resolution, and temporal resolution. Techniques employed by our GPS Imaging system include: (1) similarity transformation of polyhedron coordinates to ITRF with optional common-mode filtering to enhance local transient signal to noise ratio, (2) a comprehensive database of ~100,000 potential step events based on earthquake catalogs and equipment logs, (3) an automatic, robust, and accurate non-parametric estimator of station velocity that is insensitive to prevalent step discontinuities, outliers, seasonality, and heteroscedasticity; (4) a realistic estimator of velocity error bars based on subsampling statistics; (5) image processing to create a map of land motion that is based on median spatial filtering on the Delauney triangulation, which is effective at despeckling the data while faithfully preserving edge features; (6) a velocity time series estimator to assist identification of transient behavior, such as unloading caused by drought, and (7) a method of integrating InSAR and GPS for fine-scale seamless imaging in ITRF. Our system is being used to address three main scientific focus areas, including (1) deep Earth processes, (2) anthropogenic lithospheric processes, and (3) dynamic solid Earth events. Our prototype images show that the striking, first-order signal in North America and Europe is large scale uplift and subsidence from mantle flow driven by Glacial Isostatic Adjustment. At regional scales, the images reveal that anthropogenic lithospheric processes can dominate vertical land motion in extended regions, such as the rapid subsidence of California's Central Valley (CV) exacerbated by drought. The Earth's crust is observed to rebound elastically as evidenced by uplift of surrounding mountain ranges. Images also reveal natural uplift of mountains, mantle relaxation associated with earthquakes over the last century, and uplift at plate boundaries driven by interseismic locking. Using the high-rate positions at low latency, earthquake events can be rapidly imaged, modeled, and monitored for afterslip, potential aftershocks, and subsequent deeper relaxation. Thus we are imaging deep Earth processes with unprecedented scope, resolution and accuracy. 
In addition to supporting these scientific focus areas, the data products are also being used to support the global reference frame (ITRF), and show potential to enhance missions such as GRACE and NISAR by providing complementary information on Earth processes.
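
    Item (3) above, the step- and outlier-insensitive velocity estimator, can be illustrated with a Theil-Sen-style median of pairwise slopes; pairing points separated by roughly one year also cancels the seasonal cycle. This sketch is our own illustration of that general idea, not the operational estimator:

        import numpy as np

        def median_slope_velocity(t_years, pos_mm, gap=1.0, tol=0.1):
            # Median of slopes between data pairs ~one year apart:
            # steps and outliers get little leverage, and annual
            # seasonality largely cancels within each pair.
            slopes = []
            for i in range(len(t_years)):
                dt = t_years - t_years[i]
                for j in np.where(np.abs(dt - gap) < tol)[0]:
                    slopes.append((pos_mm[j] - pos_mm[i]) / dt[j])
            return float(np.median(slopes)) if slopes else float("nan")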

  23. Measurements and analysis in imaging for biomedical applications

    NASA Astrophysics Data System (ADS)

    Hoeller, Timothy L.

    2009-02-01

    A Total Quality Management (TQM) approach can be used to analyze data from biomedical optical and imaging platforms of tissues. A shift from individuals to teams, partnerships, and total participation is necessary from health care groups for improved prognostics using measurement analysis. Proprietary measurement analysis software is available for calibrated, pixel-to-pixel measurements of angles and distances in digital images. Feature size, count, and color are determinable on an absolute and comparative basis. Although changes in histomic images depend on numerous complex factors, correlations of imaging changes with the time, extent, and progression of illness can be derived. Statistical methods are preferred. Applications of the proprietary measurement software are available for any imaging platform. Quantification of results provides improved categorization of illness towards better health. As health care practitioners try to use quantified measurement data for patient diagnosis, the techniques reported here can be used to track and isolate causes better. Comparisons, norms, and trends are available from processing of measurement data, which is obtained easily and quickly with scientific software and methods. Example results for the class actions of Preventative and Corrective Care in Ophthalmology and Dermatology, respectively, are provided. Improved and quantified diagnosis can lead to better health and lower costs associated with health care. Systems support improvements towards Lean and Six Sigma affecting all branches of biology and medicine. As an example of the use of statistics, the major types of variation in a study of Bone Mineral Density (BMD) are examined. Typically, special causes in medicine relate to illness and activities, whereas common causes are known to be associated with gender, race, size, and genetic make-up. Such a strategy of Continuous Process Improvement (CPI) involves comparison of patient results to baseline data using F-statistics. Self-pairings over time are also useful. Special and common causes are identified apart from aging in applying the statistical methods. In the future, implementation of imaging measurement methods by research staff, doctors, and concerned patient partners will result in improved health diagnosis, reporting, and cause determination. The long-term prospect for quantified measurements is better quality in imaging analysis, with applications of higher utility for health care providers.

  24. A new method of content based medical image retrieval and its applications to CT imaging sign retrieval

    PubMed

    Ma, Ling; Liu, Xiabi; Gao, Yan; Zhao, Yanfeng; Zhao, Xinming; Zhou, Chunwu

    2017-02-01

    This paper proposes a new method of content based medical image retrieval through considering fused, context-sensitive similarity. Firstly, we fuse the semantic and visual similarities between the query image and each image in the database as their pairwise similarities. Then, we construct a weighted graph whose nodes represent the images and whose edges measure their pairwise similarities. By using the shortest path algorithm over the weighted graph, we obtain a new similarity measure, the context-sensitive similarity measure, between the query image and each database image to complete the retrieval process. In effect, we use the fused pairwise similarity to narrow the semantic gap and obtain a more accurate pairwise similarity measure, and then spread it on the intrinsic data manifold to achieve the context-sensitive similarity for better retrieval performance. The proposed method has been evaluated on the retrieval of the Common CT Imaging Signs of Lung Diseases (CISLs) and achieved not only better retrieval results but also satisfactory computational efficiency. Copyright © 2017 Elsevier Inc. All rights reserved.
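
    The graph step above has a compact rendering: turn fused similarities into edge lengths, run shortest paths from the query node, and map path lengths back to similarities. A minimal sketch (ours, assuming SciPy; the -log mapping is one common choice, not necessarily the paper's):

        import numpy as np
        from scipy.sparse.csgraph import shortest_path

        def context_sensitive_similarity(sim_visual, sim_semantic,
                                         alpha=0.5):
            # Fuse the two pairwise-similarity matrices (query at row 0),
            # map similarity in (0, 1) to a positive edge length so that
            # path lengths add, then run Dijkstra from the query node.
            sim = alpha * sim_visual + (1 - alpha) * sim_semantic
            dist = -np.log(np.clip(sim, 1e-12, 1.0 - 1e-9))
            d = shortest_path(dist, method="D", directed=False)
            return np.exp(-d[0])  # context-sensitive similarity to query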

  25. Visible-to-visible four-photon ultrahigh resolution microscopic imaging with 730-nm diode laser excited nanocrystals

    PubMed

    Wang, Baoju; Zhan, Qiuqiang; Zhao, Yuxiang; Wu, Ruitao; Liu, Jing; He, Sailing

    2016-01-25

    Further development of multiphoton microscopic imaging is confronted with a number of limitations, including high cost, high complexity and relatively low spatial resolution due to the long excitation wavelength. To overcome these problems, for the first time, we propose visible-to-visible four-photon ultrahigh-resolution microscopic imaging using a common, cost-effective 730-nm laser diode to excite the prepared Nd(3+)-sensitized upconversion nanoparticles (Nd(3+)-UCNPs). An ordinary multiphoton scanning microscope system was built using a visible CW diode laser, and a lateral imaging resolution as high as 161 nm was achieved via the four-photon upconversion process. The demonstrated large saturation excitation power for Nd(3+)-UCNPs makes four-photon imaging more practical in application. A sample with fine structure was imaged to demonstrate the advantages of visible-to-visible four-photon ultrahigh-resolution microscopic imaging with 730-nm diode-laser-excited nanocrystals. Combining the unique properties of UCNPs, the proposed visible-to-visible four-photon imaging is highly promising and attractive for the field of multiphoton imaging.

  26. Photoacoustic projection imaging using an all-optical detector array

    NASA Astrophysics Data System (ADS)

    Bauer-Marschallinger, J.; Felbermayer, K.; Berer, T.

    2018-02-01

    We present a prototype for all-optical photoacoustic projection imaging. By generating projection images, photoacoustic information from large volumes can be retrieved with less effort than in common photoacoustic computed tomography, where many detectors and/or multiple measurements are required. In our approach, an array of 60 integrating line detectors is used to acquire the photoacoustic waves. The line detector array consists of fiber-optic Mach-Zehnder interferometers distributed on a cylindrical surface. From the measured variation of the optical path lengths of the interferometers, induced by photoacoustic waves, a photoacoustic projection image can be reconstructed. The resulting images represent the projection of the three-dimensional spatial light absorbance within the imaged object onto a two-dimensional plane perpendicular to the line detector array. The fiber-optic detectors achieve a noise-equivalent pressure of 24 Pa at a 10 MHz bandwidth. We present the operational principle, the structure of the array, and the resulting images. The system can acquire high-resolution projection images of large volumes within a short period of time. Imaging large volumes at high frame rates facilitates the monitoring of dynamic processes.

  27. True color blood flow imaging using a high-speed laser photography system

    NASA Astrophysics Data System (ADS)

    Liu, Chien-Sheng; Lin, Cheng-Hsien; Sun, Yung-Nien; Ho, Chung-Liang; Hsu, Chung-Chi

    2012-10-01

    Physiological changes in the retinal vasculature are commonly indicative of such disorders as diabetic retinopathy, glaucoma, and age-related macular degeneration. Thus, various methods have been developed for noninvasive clinical evaluation of ocular hemodynamics. However, to the best of our knowledge, current ophthalmic instruments do not provide a true color blood flow imaging capability. Accordingly, we propose a new method for true color imaging of blood flow using a high-speed pulsed laser photography system. In the proposed approach, monochromatic images of the blood flow are acquired using a system of three cameras and three color lasers (red, green, and blue). A high-quality true color image of the blood flow is obtained by assembling the monochromatic images by means of image realignment and color calibration processes. The effectiveness of the proposed approach is demonstrated by imaging the flow of mouse blood within a microfluidic channel device.
The experimental results confirm that the proposed system provides a high-quality true color blood flow imaging capability, and therefore has potential for noninvasive clinical evaluation of ocular hemodynamics.

  28. Short-term solar flare prediction using image-case-based reasoning

    NASA Astrophysics Data System (ADS)

    Liu, Jin-Fu; Li, Fei; Zhang, Huai-Peng; Yu, Da-Ren

    2017-10-01

    Solar flares strongly influence space weather and human activities, and their prediction is highly complex. Existing solutions, such as data-based and model-based approaches, share a common shortcoming: the lack of human engagement in the forecasting process. An image-case-based reasoning method is introduced to address this. The image case library is composed of SOHO/MDI longitudinal magnetograms, from which the maximum horizontal gradient, the length of the neutral line and the number of singular points are extracted for retrieving similar image cases. Genetic optimization algorithms are employed to optimize the weights assigned to the image features and the number of similar image cases retrieved. The similar image cases, together with a prediction derived by majority voting over them, are output and shown to the forecaster, so that his or her experience can be integrated into the final prediction. Experimental results demonstrate that the case-based reasoning approach performs slightly better than other methods, and is more efficient when forecasts are improved by humans.
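
    The retrieval-and-vote core of such a system is small. A sketch of the idea (ours; in the paper the feature weights and the neighbourhood size are tuned by a genetic algorithm, whereas here they are plain inputs):

        import numpy as np
        from collections import Counter

        def predict_flare(query, cases, labels, weights, k=9):
            # Weighted nearest-neighbour retrieval over the three
            # magnetogram features, then a majority vote; the retrieved
            # cases themselves are returned for the forecaster to review.
            d = np.sqrt((weights * (cases - query) ** 2).sum(axis=1))
            nearest = np.argsort(d)[:k]
            vote = Counter(labels[i] for i in nearest).most_common(1)
            return vote[0][0], nearest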

  29. A new algorithm to reduce noise in microscopy images implemented with a simple program in python

    PubMed

    Papini, Alessio

    2012-03-01

    All microscopical images contain noise, which increases as the instrument (e.g., a transmission electron microscope or light microscope) approaches its resolution limit. Many methods are available to reduce noise. One of the most commonly used is image averaging. We propose here to use the mode of the pixel values instead. Simple Python programs process a given number of images recorded consecutively from the same subject. The programs calculate the mode of the pixel values at a given position (a, b). The result is a new image containing at (a, b) the mode of the values. The final pixel value therefore corresponds to one read in at least two of the images at position (a, b). Applying the program to sets of images degraded with salt-and-pepper noise and GIMP hurl noise with 10-90% standard deviation showed that the mode performs better than averaging with three to eight images. The data suggest that the mode would be more efficient (in the sense of a lower number of recorded images needed to reduce noise below a given limit) when the number of noisy pixels is low and the standard deviation is high (as with impulse and salt-and-pepper noise), while averaging would be more efficient when the number of varying pixels is high and the standard deviation is low, as in many Gaussian-noise-affected images. The two methods may be used serially. Copyright © 2011 Wiley Periodicals, Inc.
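
    The described per-pixel mode reduces to a few lines with NumPy and SciPy. A minimal sketch of the idea (ours, not the paper's program; assumes an integer-valued image stack):

        import numpy as np
        from scipy import stats

        def mode_denoise(frames):
            # frames: (n, h, w) stack of consecutive exposures of the
            # same field. Impulse outliers rarely repeat at the same
            # pixel, so the per-pixel mode votes them out, while the
            # true value recurs and wins.
            stack = np.asarray(frames)
            m, _ = stats.mode(stack, axis=0, keepdims=False)
            return m.astype(stack.dtype)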

  30. Teaching People and Machines to Enhance Images

    NASA Astrophysics Data System (ADS)

    Berthouzoz, Floraine Sara Martianne

    Procedural tasks such as following a recipe or editing an image are very common. They require a person to execute a sequence of operations (e.g. chop the onions, or sharpen the image) in order to achieve the goal of the task. People commonly use step-by-step tutorials to learn these tasks. We focus on software tutorials, more specifically photo manipulation tutorials, and present a set of tools and techniques to help people learn, compare and automate photo manipulation procedures. We describe three different systems, each designed to help with a different stage in acquiring procedural knowledge. Today, people primarily rely on hand-crafted tutorials in books and on websites to learn photo manipulation procedures. However, putting together a high-quality step-by-step tutorial is a time-consuming process. As a consequence, many online tutorials are poorly designed, which can lead to confusion and slow down the learning process. We present a demonstration-based system for automatically generating succinct step-by-step visual tutorials of photo manipulations. An author first demonstrates the manipulation using an instrumented version of GIMP (GNU Image Manipulation Program) that records all changes in interface and application state. From the example recording, our system automatically generates tutorials that illustrate the manipulation using images, text, and annotations. It leverages automated image labeling (recognition of facial features and outdoor scene structures in our implementation) to generate more precise text descriptions of many of the steps in the tutorials. A user study finds that our tutorials are effective for learning the steps of a procedure; users are 20-44% faster and make 60-95% fewer errors when using our tutorials than when using screen-capture video tutorials or hand-designed tutorials. We also demonstrate a new interface that allows learners to navigate, explore and compare large collections (i.e. thousands) of photo manipulation tutorials based on their command-level structure. Sites such as tutorialized.com or good-tutorials.com collect tens of thousands of photo manipulation tutorials. These collections typically contain many different tutorials for the same task. For example, there are many different tutorials that describe how to recolor the hair of a person in an image. Learners often want to compare these tutorials to understand the different ways a task can be done. They may also want to identify common strategies that are used across tutorials for a variety of tasks. However, the large number of tutorials in these collections and their inconsistent formats can make it difficult for users to systematically explore and compare them. Current tutorial collections do not exploit the underlying command-level structure of tutorials, and to explore the collection users have to either page through long lists of tutorial titles or perform keyword searches on the natural-language tutorial text. We present a new browsing interface to help learners navigate, explore and compare collections of photo manipulation tutorials based on their command-level structure. Our browser indexes tutorials by their commands, identifies common strategies within the tutorial collection, and highlights the similarities and differences between sets of tutorials that execute the same task. User feedback suggests that our interface is easy to understand and use, and that users find command-level browsing useful for exploring large tutorial collections. They strongly preferred to explore tutorial collections with our browser over keyword search. Finally, we present a framework for generating content-adaptive macros (programs) that can transfer complex photo manipulation procedures to new target images. After learners master a photo manipulation procedure, they often apply it repeatedly to multiple images. For example, they might routinely apply the same vignetting effect to all their photographs. This process can be very tedious, especially for procedures that involve many steps. While image manipulation programs provide basic macro-authoring tools that allow users to record and then replay a sequence of operations, these macros are very brittle and cannot adapt to new images. We present a more comprehensive approach for generating content-adaptive macros that can automatically transfer operations to new target images. To create these macros, we make use of multiple training demonstrations. Specifically, we use automated image labeling and machine learning techniques to adapt the parameters of each operation to the new image content. We show that our framework is able to learn a large class of the most commonly used manipulations from as few as 20 training demonstrations. Our content-adaptive macros allow users to transfer photo manipulation procedures with a single button click, and thereby significantly simplify repetitive procedures.

  31. The application of terahertz pulsed imaging in characterising density distribution of roll-compacted ribbons

    PubMed

    Zhang, Jianyi; Pei, Chunlei; Schiano, Serena; Heaps, David; Wu, Chuan-Yu

    2016-09-01

    Roll compaction is a commonly used dry granulation process in the pharmaceutical, fine chemical and agrochemical industries for materials sensitive to heat or moisture. The ribbon density distribution plays an important role in controlling the properties of granules (e.g. granule size distribution, porosity and strength). Accurate characterisation of the ribbon density distribution is critical for process control and quality assurance.
  472. The application of terahertz pulsed imaging in characterising density distribution of roll-compacted ribbons.

    PubMed

    Zhang, Jianyi; Pei, Chunlei; Schiano, Serena; Heaps, David; Wu, Chuan-Yu

    2016-09-01

    Roll compaction is a commonly used dry granulation process in pharmaceutical, fine chemical and agrochemical industries for materials sensitive to heat or moisture. The ribbon density distribution plays an important role in controlling properties of granules (e.g. granule size distribution, porosity and strength). Accurate characterisation of ribbon density distribution is critical in process control and quality assurance. The terahertz imaging system has great application potential in achieving this, as terahertz radiation has the ability to penetrate most pharmaceutical excipients and the refractive index reflects variations in density and chemical composition. The aim of this study is to explore whether terahertz pulsed imaging is a feasible technique for quantifying ribbon density distribution. Ribbons were made of two grades of microcrystalline cellulose (MCC), Avicel PH102 and DG, using a roll compactor at various process conditions, and the ribbon density variation was investigated using terahertz imaging and section methods. The density variations obtained from both methods were compared to explore the reliability and accuracy of the terahertz imaging system. An average refractive index is calculated from the refractive index values in the frequency range between 0.5 and 1.5 THz. It is shown that the refractive index gradually decreases from the middle of the ribbon towards the edges. Variations of density distribution across the width of the ribbons are also obtained using both the section method and the terahertz imaging system. It is found that the terahertz imaging results are in excellent agreement with those obtained using the section method, demonstrating that terahertz imaging is a feasible and rapid tool to characterise ribbon density distributions. Copyright © 2016 Elsevier B.V. All rights reserved.
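    The band-averaged refractive index described above is straightforward to compute from a per-pixel terahertz spectrum. The sketch below assumes a hypothetical (H, W, F) refractive-index array and a linear index-to-density calibration whose coefficients a and b are made up for illustration:

        import numpy as np

        # n_spectrum: (H, W, F) per-pixel refractive index vs. frequency
        # freqs: (F,) frequency axis in THz
        freqs = np.linspace(0.2, 3.0, 141)
        n_spectrum = np.full((32, 256, freqs.size), 1.6)   # stand-in data

        band = (freqs >= 0.5) & (freqs <= 1.5)
        n_avg = n_spectrum[..., band].mean(axis=-1)        # average refractive index map

        # Hypothetical linear calibration n = a*rho + b, fitted beforehand
        # on ribbon sections of known density.
        a, b = 0.35, 1.0
        density_map = (n_avg - b) / a                      # ribbon density estimate
        width_profile = density_map.mean(axis=0)           # variation across ribbon width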
  473. Optical coherence tomography: A guide to interpretation of common macular diseases

    PubMed Central

    Bhende, Muna; Shetty, Sharan; Parthasarathy, Mohana Kuppuswamy; Ramya, S

    2018-01-01

    Optical coherence tomography is a quick, non-invasive and reproducible imaging tool for macular lesions and has become an essential part of retina practice. This review addresses the common protocols for imaging the macula, the basics of image interpretation, features of common macular disorders with clues to differentiate mimickers, and an introduction to choroidal imaging. It includes case examples and also a practical algorithm for interpretation. PMID:29283118

  474. Integrating Robotic Observatories into Astronomy Labs

    NASA Astrophysics Data System (ADS)

    Ruch, Gerald T.

    2015-01-01

    The University of St. Thomas (UST) and a consortium of five local schools are using the UST Robotic Observatory, housing a 17' telescope, to develop labs and image processing tools that allow easy integration of observational labs into existing introductory astronomy curricula. Our lab design removes the burden of equipment ownership by sharing access to a common resource and removes the burden of data processing by automating processing tasks that are not relevant to the learning objectives. Each laboratory exercise takes place over two lab periods. During period one, students design and submit observation requests via the lab website. Between periods, the telescope automatically acquires the data and our image processing pipeline produces data ready for student analysis. During period two, the students retrieve their data from the website and perform the analysis. The first lab, 'Weighing Jupiter,' was successfully implemented at UST and several of our partner schools. We are currently developing a second lab to measure the age of and distance to a globular cluster.
  475. A stereoscopic imaging system for laser back scatter based trajectory measurement in ballistics: part 2

    NASA Astrophysics Data System (ADS)

    Chalupka, Uwe; Rothe, Hendrik

    2012-03-01

    We report progress on the laser- and stereo-camera-based trajectory measurement system proposed and described in recent publications. The system design was extended from one to two more powerful, DSP-controllable laser systems. Experimental results of the extended system using different projectile/weapon combinations will be shown and discussed. Automatic processing of acquired images using common 3DIP techniques was realized. Processing steps to extract trajectory segments from images, representative of the current application, will be presented. The algorithms used for backward-calculation of the projectile trajectory will be shown. Verification of the produced results is done against simulated trajectories, once in terms of detection robustness and once in terms of detection accuracy. Fields of use for the current system are within the ballistic domain. The first purpose is trajectory measurement of small and middle caliber projectiles on a shooting range. Extension to large caliber projectiles as well as an application for sniper detection is imaginable, but would require further work. Besides classical radar, acoustic and optical projectile detection methods, the current system represents a further projectile location method within the new class of electro-optical methods that have evolved in recent decades and that use 3D image acquisition and processing techniques.

  476. System for verifiable CT radiation dose optimization based on image quality. Part II. Process control system.

    PubMed

    Larson, David B; Malarik, Remo J; Hall, Seth M; Podberesky, Daniel J

    2013-10-01

    To evaluate the effect of an automated computed tomography (CT) radiation dose optimization and process control system on the consistency of estimated image noise and size-specific dose estimates (SSDEs) of radiation in CT examinations of the chest, abdomen, and pelvis. This quality improvement project was determined not to constitute human subject research. An automated system was developed to analyze each examination immediately after completion, and to report individual axial-image-level and study-level summary data for patient size, image noise, and SSDE. The system acquired data for 4 months beginning October 1, 2011. Protocol changes were made by using parameters recommended by the prediction application, and 3 months of additional data were acquired. Preimplementation and postimplementation mean image noise and SSDE were compared by using unpaired t tests and F tests. Common-cause variation was differentiated from special-cause variation by using a statistical process control individual chart. A total of 817 CT examinations, 490 acquired before and 327 acquired after the initial protocol changes, were included in the study. Mean patient age and water-equivalent diameter were 12.0 years and 23.0 cm, respectively. The difference between actual and target noise increased from -1.4 to 0.3 HU (P < .01) and the standard deviation decreased from 3.9 to 1.6 HU (P < .01). Mean SSDE decreased from 11.9 to 7.5 mGy, a 37% reduction (P < .01). The process control chart identified several special causes of variation. Implementation of an automated CT radiation dose optimization system led to a verifiable simultaneous decrease in image noise variation and SSDE. The automated nature of the system provides the opportunity for consistent CT radiation dose optimization on a broad scale. © RSNA, 2013.
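    A sketch of the statistical process control individuals chart used above to separate common-cause from special-cause variation, applied to per-examination SSDE values. The 2.66 factor is the standard I-chart constant (3 divided by the d2 value 1.128 for a moving range of span 2); the SSDE numbers are illustrative only:

        import numpy as np

        def individuals_chart_limits(x):
            """Center line and control limits for an individuals (I) chart."""
            x = np.asarray(x, dtype=float)
            center = x.mean()
            mr_bar = np.abs(np.diff(x)).mean()   # average moving range
            ucl = center + 2.66 * mr_bar
            lcl = center - 2.66 * mr_bar
            return center, lcl, ucl

        # e.g. per-examination SSDE values streaming from a dose-monitoring system
        ssde = [11.2, 12.5, 10.9, 11.8, 13.0, 7.4, 7.7, 7.2]
        center, lcl, ucl = individuals_chart_limits(ssde)
        special_causes = [(i, v) for i, v in enumerate(ssde)
                          if not lcl <= v <= ucl]   # points flagged for review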
  477. New Developments in Observer Performance Methodology in Medical Imaging

    PubMed Central

    Chakraborty, Dev P.

    2011-01-01

    A common task in medical imaging is assessing whether a new imaging system, or a variant of an existing one, is an improvement over an existing imaging technology. Imaging systems are generally quite complex, consisting of several components – e.g., image acquisition hardware, image processing and display hardware and software, and image interpretation by radiologists – each of which can affect performance. While it may appear odd to include the radiologist as a “component” of the imaging chain, since the radiologist’s decision determines subsequent patient care, the effect of the human interpretation has to be included. Physical measurements like modulation transfer function, signal to noise ratio, etc., are useful for characterizing the non-human parts of the imaging chain under idealized and often unrealistic conditions, such as uniform background phantoms, target objects with sharp edges, etc. Measuring the effect on performance of the entire imaging chain, including the radiologist, and using real clinical images, requires different methods that fall under the rubric of observer performance methods or “ROC analysis”. The purpose of this paper is to review recent developments in this field, particularly with respect to the free-response method. PMID:21978444

  478. Incorporating reversible and irreversible transverse relaxation effects into Steady State Free Precession (SSFP) signal intensity expressions for fMRI considerations.

    PubMed

    Mulkern, Robert V; Balasubramanian, Mukund; Orbach, Darren B; Mitsouras, Dimitrios; Haker, Steven J

    2013-04-01

    Among the multiple sequences available for functional magnetic resonance imaging (fMRI), the Steady State Free Precession (SSFP) sequence offers the highest signal-to-noise ratio (SNR) per unit time as well as distortion-free images not feasible with the more commonly employed single-shot echo planar imaging (EPI) approaches. Signal changes occurring with activation in SSFP sequences reflect underlying changes in both irreversible and reversible transverse relaxation processes. The latter are characterized by changes in the central frequencies and widths of the inherent frequency distribution present within a voxel. In this work, the well-known frequency response of the SSFP signal intensity is generalized to include the effects of the widths and central frequencies of some common frequency distributions on SSFP signal intensities. The approach, using a previously unnoted series expansion, allows for a separation of reversible from irreversible transverse relaxation effects on SSFP signal intensity changes. The formalism described here should prove useful for identifying and modeling mechanisms associated with SSFP signal changes accompanying neural activation. Copyright © 2013 Elsevier Inc. All rights reserved.
  479. Segmentation of anatomical structures of the heart based on echocardiography

    NASA Astrophysics Data System (ADS)

    Danilov, V. V.; Skirnevskiy, I. P.; Gerget, O. M.

    2017-01-01

    Nowadays, many practical applications in the field of medical image processing require valid and reliable segmentation of images as input data. Some of the commonly used imaging techniques are ultrasound, CT, and MRI. However, the main difference between EchoCG and other medical imaging equipment is that EchoCG is safer, lower cost, non-invasive and non-traumatic. Three-dimensional EchoCG is a non-invasive imaging modality that is complementary and supplementary to two-dimensional imaging and can be used to examine cardiovascular function and anatomy in different medical settings. The challenging problems presented by EchoCG image processing, such as speckle phenomena, noise, temporal non-stationarity of processes, unsharp boundaries and attenuation, forced us to consider and compare existing methods and then to develop an innovative approach that can tackle the problems connected with clinical applications. The present study concerns the analysis and development of a system for automatic detection of cardiac parameters from EchoCG that will provide new data on the dynamics of changes in cardiac parameters and improve the accuracy and reliability of the diagnosis. Research in image segmentation has highlighted the capabilities of image-based methods for medical applications. The focus of the research is on both theoretical and practical aspects of applying these methods. Some of the segmentation approaches may be of interest to the imaging and medical community. Performance evaluation is carried out by comparing the borders obtained from the considered methods to those manually delineated by a medical specialist. Promising results demonstrate the possibilities and the limitations of each technique for image segmentation problems. The developed approach makes it possible to eliminate errors in calculating the geometric parameters of the heart; to meet requirements such as speed, accuracy and reliability; and to build a master model that will be an indispensable assistant for operations on a beating heart.
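    As an illustration of one commonly used region-based approach to speckle-laden echo images (not necessarily the authors' method), a sketch combining median prefiltering with the morphological Chan-Vese active contour from a recent scikit-image:

        import numpy as np
        from skimage import filters, segmentation

        def segment_chamber(echo_frame, iters=150):
            """Rough chamber segmentation of a single EchoCG frame.

            Median filtering tames speckle, then a morphological Chan-Vese
            active contour evolves toward the region boundary.
            """
            smoothed = filters.median(echo_frame)
            mask = segmentation.morphological_chan_vese(
                smoothed, num_iter=iters,
                init_level_set="checkerboard", smoothing=3)
            return mask

    The boundary of the returned mask is what would be compared against a specialist's manual delineation in the kind of evaluation described above.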
  480. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; McDougal, Matthew; Russell, Sam

    2013-01-01

    Objective: To develop a software application utilizing general purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, an increasing effort among scientists and engineers to utilize the GPU in a more general purpose fashion is allowing for supercomputer-level results at individual workstations. As data sets grow, the methods to work them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to allow for throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques.
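    The data-parallel pattern described above maps directly onto GPU array libraries. A sketch using CuPy (a NumPy-compatible GPU array library, assumed to be installed alongside a CUDA device) on a hypothetical thermographic sequence, where each output pixel depends only on its own time series:

        import numpy as np
        import cupy as cp  # NumPy-compatible arrays that execute on the GPU

        # stack: (frames, H, W) thermographic sequence; per-pixel reductions
        # are independent across pixels, i.e. fully data-parallel.
        stack_cpu = np.random.rand(128, 320, 256).astype(np.float32)

        stack = cp.asarray(stack_cpu)               # host -> device copy
        baseline = stack[:16].mean(axis=0)          # per-pixel cold-image baseline
        contrast = (stack - baseline).max(axis=0)   # peak thermal contrast per pixel

        result = cp.asnumpy(contrast)               # device -> host for display/saving

    Because CuPy mirrors the NumPy API, swapping cp for np runs the identical computation on the CPU, which makes the GPU speedup easy to benchmark.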
  481. Quantitative evaluation of phase processing approaches in susceptibility weighted imaging

    NASA Astrophysics Data System (ADS)

    Li, Ningzhi; Wang, Wen-Tung; Sati, Pascal; Pham, Dzung L.; Butman, John A.

    2012-03-01

    Susceptibility weighted imaging (SWI) takes advantage of the local variation in susceptibility between different tissues to enable highly detailed visualization of the cerebral venous system and sensitive detection of intracranial hemorrhages. Thus, it has been increasingly used in magnetic resonance imaging studies of traumatic brain injury as well as other intracranial pathologies. In SWI, magnitude information is combined with phase information to enhance the susceptibility-induced image contrast. Because of global susceptibility variations across the image, the rate of phase accumulation varies widely across the image, resulting in phase wrapping artifacts that interfere with the local assessment of phase variation. Homodyne filtering is a common approach to eliminate this global phase variation. However, the filter size requires careful selection in order to preserve image contrast and avoid errors resulting from residual phase wraps. An alternative approach is to apply phase unwrapping prior to high-pass filtering. A suitable phase unwrapping algorithm guarantees no residual phase wraps, but additional computational steps are required. In this work, we quantitatively evaluate these two phase processing approaches on both simulated and real data using different filters and cutoff frequencies. Our analysis leads to an improved understanding of the relationship between phase wraps, susceptibility effects, and acquisition parameters. Although homodyne filtering approaches are faster and more straightforward, phase unwrapping approaches perform more accurately in a wider variety of acquisition scenarios.
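    A minimal sketch of the homodyne filtering step discussed above: the complex image is divided by a low-pass-filtered copy of itself, which removes the slowly varying background phase and leaves the local phase of interest. The sigma parameter plays the role of the filter size whose selection the authors caution about:

        import numpy as np

        def homodyne_phase(complex_img, sigma=8.0):
            """High-pass filtered phase via homodyne filtering.

            complex_img: 2D complex MR image. sigma is the spatial width
            (in pixels) of the Gaussian low-pass used for the background.
            """
            ny, nx = complex_img.shape
            ky = np.fft.fftfreq(ny)[:, None]
            kx = np.fft.fftfreq(nx)[None, :]
            # Gaussian low-pass in k-space (Fourier transform of a spatial
            # Gaussian with standard deviation sigma)
            lowpass = np.exp(-2 * (np.pi * sigma) ** 2 * (kx ** 2 + ky ** 2))
            smooth = np.fft.ifft2(np.fft.fft2(complex_img) * lowpass)
            return np.angle(complex_img / (smooth + 1e-12))

    Too small a sigma leaves residual wraps in the background; too large a sigma suppresses the venous phase contrast, which is exactly the trade-off the quantitative comparison above examines.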
  482. A novel method of the image processing on irregular triangular meshes

    NASA Astrophysics Data System (ADS)

    Vishnyakov, Sergey; Pekhterev, Vitaliy; Sokolova, Elizaveta

    2018-04-01

    The paper describes a novel method of image processing based on irregular triangular meshes. The triangular mesh is adaptive to the image content, and least mean square linear approximation is proposed for the basic interpolation within each triangle. It is proposed to use triangular numbers to simplify the use of local (barycentric) coordinates for further analysis: each triangular element of the initial irregular mesh is represented through a set of four equilateral triangles. This allows fast and simple pixel indexing in local coordinates, e.g. "for" or "while" loops for access to the pixels. Moreover, the proposed representation allows the use of a discrete cosine transform of the simple "rectangular" symmetric form without additional pixel reordering (as is used for shape-adaptive DCT forms). Furthermore, this approach leads to a simple form of the wavelet transform on a triangular mesh. The results of applying the method are presented. It is shown that the advantage of the proposed method is its combination of the flexibility of image-adaptive irregular meshes with simple pixel indexing in local triangular coordinates and the use of common forms of discrete transforms for triangular meshes. The method is proposed for image compression, pattern recognition, image quality improvement, and image search and indexing. It may also be used as part of video coding (intra-frame or inter-frame coding, motion detection).

  483. Increasing feasibility of the field-programmable gate array implementation of an iterative image registration using a kernel-warping algorithm

    NASA Astrophysics Data System (ADS)

    Nguyen, An Hung; Guillemette, Thomas; Lambert, Andrew J.; Pickering, Mark R.; Garratt, Matthew A.

    2017-09-01

    Image registration is a fundamental image processing technique. It is used to spatially align two or more images that have been captured at different times, from different sensors, or from different viewpoints. Many algorithms have been proposed for this task, the most common being the well-known Lucas-Kanade (LK) and Horn-Schunck approaches. However, the main limitation of these approaches is the computational complexity required to implement the large number of iterations necessary for successful alignment of the images. Previously, a multi-pass image interpolation algorithm (MP-I2A) was developed to considerably reduce the number of iterations required for successful registration compared with the LK algorithm. This paper develops a kernel-warping algorithm (KWA), a modified version of the MP-I2A, which requires fewer iterations to successfully register two images and less memory space for the field-programmable gate array (FPGA) implementation than the MP-I2A. These reductions increase the feasibility of implementing the proposed algorithm on FPGAs with very limited memory space and other hardware resources. A two-FPGA system, rather than a single-FPGA system, is successfully developed to implement the KWA in order to compensate for the insufficient hardware resources supported by one FPGA, and to increase the parallel processing ability and scalability of the system.
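    To make the iteration-count issue concrete, here is a single-scale Lucas-Kanade-style sketch that estimates a pure translation between two images; this is not the paper's KWA. Production registration adds richer warps and coarse-to-fine pyramids, which is where the iteration and memory costs addressed by the KWA arise:

        import numpy as np
        from scipy import ndimage

        def lk_translation(ref, mov, max_iters=50, tol=1e-3):
            """Estimate the (dy, dx) shift aligning mov to ref, LK style.

            Valid for small displacements; each iteration solves a 2x2
            least-squares system built from the reference gradients and
            the current intensity residual.
            """
            ref = ref.astype(float)
            mov = mov.astype(float)
            gy, gx = np.gradient(ref)
            A = np.array([[(gy * gy).sum(), (gy * gx).sum()],
                          [(gy * gx).sum(), (gx * gx).sum()]])
            p = np.zeros(2)
            for _ in range(max_iters):
                warped = ndimage.shift(mov, p, order=1)
                err = ref - warped
                b = np.array([(gy * err).sum(), (gx * err).sum()])
                delta = np.linalg.solve(A, b)   # remaining misalignment estimate
                p -= delta
                if np.linalg.norm(delta) < tol:
                    break
            return p  # pass to ndimage.shift(mov, p) to align mov with ref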
  484. SR high-speed K-edge subtraction angiography in the small animal (abstract)

    NASA Astrophysics Data System (ADS)

    Takeda, T.; Akisada, M.; Nakajima, T.; Anno, I.; Ueda, K.; Umetani, K.; Yamaguchi, C.

    1989-07-01

    To assess the ability of the high-speed K-edge energy subtraction system built at beamline 8C of the Photon Factory, Tsukuba, we performed an animal experiment. Rabbits were used for intravenous K-edge subtraction angiography. In this paper, the actual images of the artery obtained by this system are demonstrated. The high-speed K-edge subtraction system consisted of movable silicon (111) monocrystals, II-ITV, and a digital memory system. Image processing was performed by a 68000-IP computer. The monochromatic x-ray beam size was 50×60 mm. The photon energy was switched between values above and below the iodine K edge within 16 ms, and 32 frames of images were obtained sequentially. The rabbits were anaesthetized with phenobarbital and a 5F catheter was inserted into the inferior vena cava via the femoral vein. 1.5 ml/kg of contrast material (Conlaxin H) was injected at a rate of 0.5 ml/kg/s. TV images were obtained 3 s after the start of injection. Using this system, clear K-edge subtracted images were obtained sequentially, as with a conventional DSA system. The quality of the images was better than that obtained by DSA. The dynamic blood flow was analyzed, and the best arterial image could be selected from the sequential images. The structures of the aortic arch, common carotid arteries, right subclavian artery, and internal thoracic artery were obtained at the chest. Both common carotid arteries and vertebral arteries were recorded at the neck. Arteries of about 0.3-0.4 mm in diameter could be clearly revealed. The high-speed K-edge subtraction system demonstrates very sharp arterial images clearly and dynamically.
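    The subtraction step itself reduces to a log-difference of the transmission images acquired just above and just below the iodine K-edge (33.17 keV); a minimal sketch, assuming flat-field (unattenuated beam) images are available for each energy:

        import numpy as np

        def k_edge_subtraction(above, below, flat_above, flat_below):
            """Log-subtraction of frames straddling the iodine K-edge.

            -log(I/I0) converts each frame to line integrals of attenuation;
            subtracting the below-edge from the above-edge frame cancels
            most of the soft-tissue and bone signal, since only iodine's
            attenuation jumps sharply across the edge.
            """
            eps = 1e-6
            mu_above = -np.log(np.clip(above / flat_above, eps, None))
            mu_below = -np.log(np.clip(below / flat_below, eps, None))
            return mu_above - mu_below   # bright where iodinated vessels are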
  485. Real-Time Measurement of Width and Height of Weld Beads in GMAW Processes

    PubMed Central

    Pinto-Lopera, Jesús Emilio; S. T. Motta, José Mauricio; Absi Alfaro, Sadek Crisostomo

    2016-01-01

    Weld bead geometry is one of the most important parameters associated with weld quality in welding processes. It is a significant requirement in a welding project, especially in automatic welding systems where a specific width, height, or penetration of the weld bead is needed. This paper presents a novel technique for real-time measurement of the width and height of weld beads in gas metal arc welding (GMAW) using a single high-speed camera and a long-pass optical filter in a passive vision system. The measuring method is based on digital image processing techniques, and the image calibration process is based on projective transformations. The measurement process takes less than 3 milliseconds per image, which allows a transfer rate of more than 300 frames per second. The proposed methodology can be used in any metal transfer mode of a gas metal arc welding process and does not have occlusion problems. The responses of the measurement system presented here are in good agreement with off-line data collected by a common laser-based 3D scanner. Each measurement is compared using a statistical Welch's t-test of the null hypothesis, which in no case exceeds the significance level α = 0.01, validating the results and the performance of the proposed vision system. PMID:27649198
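    A sketch of the two ingredients named above, projective image calibration and bead geometry extraction, using scikit-image. The fiducial coordinates are illustrative, and using the bounding box of the largest region as the width/height proxy is a simplification of a real bead-profile measurement:

        import numpy as np
        from skimage import measure, transform

        # Projective calibration from four fiducial points on the workpiece
        # plane: pixel coordinates (src, as x=col, y=row) -> millimetres (dst).
        src = np.array([[102, 88], [530, 91], [525, 402], [98, 398]], float)
        dst = np.array([[0, 0], [40, 0], [40, 30], [0, 30]], float)
        px_to_mm = transform.estimate_transform("projective", src, dst)

        def bead_size_mm(bead_mask):
            """Width/height of the largest segmented bead region, in mm."""
            labels = measure.label(bead_mask)
            region = max(measure.regionprops(labels), key=lambda r: r.area)
            r0, c0, r1, c1 = region.bbox
            corners_mm = px_to_mm(np.array([[c0, r0], [c1, r1]], float))
            width_mm, height_mm = np.abs(corners_mm[1] - corners_mm[0])
            return width_mm, height_mm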
  486. Optical asymmetric cryptography based on amplitude reconstruction of elliptically polarized light

    NASA Astrophysics Data System (ADS)

    Cai, Jianjun; Shen, Xueju; Lei, Ming

    2017-11-01

    We propose a novel optical asymmetric image encryption method based on amplitude reconstruction of elliptically polarized light, which is free from the silhouette problem. The original image is first analytically separated into two phase-only masks, and then the two masks are encoded into the amplitudes of the orthogonal polarization components of an elliptically polarized light. Finally, the elliptically polarized light propagates through a linear polarizer, and the output intensity distribution is recorded by a CCD camera to obtain the ciphertext. The whole encryption procedure can be implemented using common optical elements, and it combines a diffusion process with a confusion process. As a result, the proposed method achieves high robustness against iterative-algorithm-based attacks. Simulation results are presented to prove the validity of the proposed cryptography.

  487. [Three-dimensional reconstruction of functional brain images].

    PubMed

    Inoue, M; Shoji, K; Kojima, H; Hirano, S; Naito, Y; Honjo, I

    1999-08-01

    We consider PET (positron emission tomography) measurement with SPM (Statistical Parametric Mapping) analysis to be one of the most useful methods to identify activated areas of the brain involved in language processing. SPM is an effective analytical method that detects markedly activated areas over the whole brain. However, conventional presentations of these functional brain images, such as horizontal slices, three-directional projections, or brain surface coloring, make it difficult to understand and interpret the positional relationships among various brain areas. Therefore, we developed three-dimensionally reconstructed images from these functional brain images to improve interpretation. The subjects were 12 normal volunteers. After PET images were analyzed by SPM during daily dialog listening, the following three types of images were constructed: 1) routine images by SPM, 2) three-dimensional static images, and 3) three-dimensional dynamic images. The creation of both the three-dimensional static and dynamic images employed the volume rendering method in VTK (The Visualization Toolkit). Since the functional brain images did not include original brain images, we synthesized SPM and MRI brain images with self-made C++ programs. The three-dimensional dynamic images were made by sequencing static images with available software. Both the three-dimensional static and dynamic images were processed on a personal computer system. Our newly created images showed clearer positional relationships among activated brain areas compared to the conventional methods. To date, functional brain images have been employed in fields such as neurology or neurosurgery; however, these images may be useful even in the field of otorhinolaryngology, to assess hearing and speech. Exact three-dimensional images based on functional brain images are important for exact and intuitive interpretation, and may lead to new developments in brain science. Currently, the surface model is the most common method of three-dimensional display. However, the volume rendering method may be more effective for imaging regions such as the brain.
  488. Dual-wavelength common-path digital holographic microscopy for quantitative phase imaging of biological cells

    NASA Astrophysics Data System (ADS)

    Di, Jianglei; Song, Yu; Xi, Teli; Zhang, Jiwei; Li, Ying; Ma, Chaojie; Wang, Kaiqiang; Zhao, Jianlin

    2017-11-01

    Biological cells are usually transparent with a small refractive index gradient. Digital holographic interferometry can be used in the measurement of biological cells. We propose dual-wavelength common-path digital holographic microscopy for quantitative phase imaging of biological cells. In the proposed configuration, a parallel glass plate is inserted in the light path to create the lateral shearing, and two lasers with different wavelengths are used as the light source to form a dual-wavelength composite digital hologram. The information of the biological cells for the different wavelengths is separated and extracted in the Fourier domain of the hologram, and then combined to a shorter wavelength in the measurement process. This method can improve the system's temporal stability and reduce speckle noise simultaneously. Mouse osteoblastic cells and peony pollens are measured to show the feasibility of the method.
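    The dual-wavelength combination can be sketched directly from the two wrapped phase maps: their difference behaves like a long synthetic wavelength (useful for unwrapping), while their sum behaves like a shorter equivalent wavelength with higher phase sensitivity, which is the direction the method above exploits. The wavelength values below are examples only:

        import numpy as np

        def combine_dual_wavelength(phi1, phi2, lam1, lam2):
            """Combine wrapped phase maps measured at two wavelengths.

            Returns (phase, effective wavelength) pairs for the difference
            (long synthetic wavelength) and the sum (short equivalent
            wavelength) combinations.
            """
            wrap = lambda p: np.angle(np.exp(1j * p))
            lam_diff = lam1 * lam2 / abs(lam1 - lam2)   # synthetic (beat) wavelength
            lam_sum = lam1 * lam2 / (lam1 + lam2)       # short equivalent wavelength
            return ((wrap(phi1 - phi2), lam_diff),
                    (wrap(phi1 + phi2), lam_sum))

        # e.g. 632.8 nm (He-Ne) and 532 nm (frequency-doubled Nd:YAG) sources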
  489. Multispectral Photogrammetric Data Acquisition and Processing for Wall Paintings Studies

    NASA Astrophysics Data System (ADS)

    Pamart, A.; Guillon, O.; Faraci, S.; Gattet, E.; Genevois, M.; Vallet, J. M.; De Luca, L.

    2017-02-01

    In the field of wall paintings studies, different imaging techniques are commonly used for documentation and for decision making in terms of conservation and restoration. There are nowadays challenging issues in merging scientific imaging techniques in a multimodal context (i.e. multi-sensor, multi-dimensional, multi-spectral and multi-temporal approaches). For decades these CH objects have been widely documented with Technical Photography (TP), which gives precious information to understand or retrieve the painting layouts and history. More recently there is an increasing demand for the use of digital photogrammetry in order to provide, as one of the possible outputs, an orthophotomosaic, which offers a possibility of metrical quantification of conservators'/restorers' observations and action planning. This paper presents some ongoing experiments of the LabCom MAP-CICRP relying on the assumption that those techniques can be merged through a common pipeline to share their benefits and create a more complete documentation.

  490. Calcium Apatite Deposition Disease: Diagnosis and Treatment

    PubMed Central

    2016-01-01

    Calcium apatite deposition disease (CADD) is a common entity characterized by deposition of calcium apatite crystals within and around connective tissues, usually in a periarticular location. CADD most frequently involves the rotator cuff. However, it can theoretically occur in almost any location in the musculoskeletal system, and many different locations of CADD have been described. When CADD presents in an unexpected location it can pose a diagnostic challenge, particularly when associated with pain or swelling, and can be confused with other pathologic processes, such as infection or malignancy. However, CADD has typical imaging characteristics that usually allow a correct diagnosis to be made without additional imaging or laboratory workup, even when presenting in unusual locations. This is a review of the common and uncommon presentations of CADD in the appendicular and axial skeleton, as well as an updated review of the pathophysiology of CADD and current treatments. PMID:28042481

  491. Department of Defense picture archiving and communication system acceptance testing: results and identification of problem components.

    PubMed

    Allison, Scott A; Sweet, Clifford F; Beall, Douglas P; Lewis, Thomas E; Monroe, Thomas

    2005-09-01

    The PACS implementation process is complicated, requiring a tremendous amount of time, resources, and planning. The Department of Defense (DOD) has significant experience in developing and refining PACS acceptance testing (AT) protocols that assure contract compliance, clinical safety, and functionality. The DOD's AT experience under the initial Medical Diagnostic Imaging Support System contract led to the current Digital Imaging Network-Picture Archiving and Communications Systems (DIN-PACS) contract AT protocol. To identify the most common system and component deficiencies under the current DIN-PACS AT protocol, 14 tri-service sites were evaluated during 1998-2000. Sixteen system deficiency citations with 154 separate types of limitations were noted, with problems involving the workstation, interfaces, and the Radiology Information System comprising more than 50% of the citations. Larger PACS deployments were associated with a higher number of deficiencies. The most commonly cited system deficiencies were among the most expensive components of the PACS.

  492. Multispectral image analysis for object recognition and classification

    NASA Astrophysics Data System (ADS)

    Viau, C. R.; Payeur, P.; Cretu, A.-M.

    2016-05-01

    Computer and machine vision applications are used in numerous fields to analyze static and dynamic imagery in order to assist or automate decision-making processes. Advancements in sensor technologies now make it possible to capture and visualize imagery at various wavelengths (or bands) of the electromagnetic spectrum. Multispectral imaging has countless applications in various fields, including (but not limited to) security, defense, space, medicine, manufacturing and archeology. The development of advanced algorithms to process and extract salient information from the imagery is a critical component of overall system performance. The fundamental objective of this research project was to investigate the benefits of combining imagery from the visual and thermal bands of the electromagnetic spectrum to improve the recognition rates and accuracy of commonly found objects in an office setting. A multispectral dataset (visual and thermal) was captured, and features from the visual and thermal images were extracted and used to train support vector machine (SVM) classifiers. The SVM's class prediction ability was evaluated separately on the visual, thermal and multispectral testing datasets.
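    A sketch of the early-fusion classification scheme described above, concatenating per-object feature vectors from the two bands before training an SVM with scikit-learn. The feature vectors here are random stand-ins for whatever histogram or texture descriptors are extracted from the co-registered visual and thermal images:

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Hypothetical per-object features from each band
        vis_feats = np.random.rand(200, 32)
        thermal_feats = np.random.rand(200, 16)
        labels = np.random.randint(0, 5, 200)      # 5 object classes

        # Early fusion: concatenate the bands' features per object
        fused = np.hstack([vis_feats, thermal_feats])
        Xtr, Xte, ytr, yte = train_test_split(fused, labels,
                                              test_size=0.25, random_state=0)

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
        clf.fit(Xtr, ytr)
        print("multispectral accuracy:", clf.score(Xte, yte))

    Training the same pipeline on vis_feats or thermal_feats alone reproduces the per-band baselines against which the multispectral gain is judged.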
  493. The effect of different standard illumination conditions on color balance failure in offset printed images on glossy coated paper expressed by color difference

    NASA Astrophysics Data System (ADS)

    Spiridonov, I.; Shopova, M.; Boeva, R.; Nikolov, M.

    2012-05-01

    One of the biggest problems in color reproduction processes is the color shift occurring when images are viewed under different illuminants. Process ink colors and their combinations that match under one light source will often appear different under another light source. This problem is referred to as color balance failure or color inconstancy. The main goals of the present study are to investigate and determine the color balance failure (color inconstancy) of offset printed images, expressed by color difference and color gamut changes, for three of the illuminants most commonly used in practice: CIE D50, CIE F2 and CIE A. The results obtained are important from both a scientific and a practical point of view. For the first time, a methodology is suggested and implemented for the examination and estimation of color shifts by studying a large number of color and gamut changes in various ink combinations for different illuminants.
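    A rough per-patch approximation of such color inconstancy: convert measurements made under each illuminant to CIELAB against the corresponding white point and take a CIEDE2000 difference. The sketch uses scikit-image (whose white-point tables include D50 and A, but not F2, which would need its own white point) and random stand-in XYZ data; a rigorous inconstancy index would also apply a chromatic adaptation transform such as CMCCON02:

        import numpy as np
        from skimage import color

        # xyz_d50, xyz_a: XYZ measurements of the same printed patches
        # under CIE D50 and CIE A (e.g. from a spectrophotometer).
        xyz_d50 = np.random.rand(10, 3)
        xyz_a = np.random.rand(10, 3)

        # CIELAB relative to each illuminant's white point, then dE00
        lab_d50 = color.xyz2lab(xyz_d50, illuminant="D50")
        lab_a = color.xyz2lab(xyz_a, illuminant="A")
        delta_e = color.deltaE_ciede2000(lab_d50, lab_a)
        print("mean color inconstancy (dE00):", delta_e.mean())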
  494. Ross filter pairs for metal artefact reduction in x-ray tomography: a case study based on imaging and segmentation of metallic implants

    NASA Astrophysics Data System (ADS)

    Arhatari, Benedicta D.; Abbey, Brian

    2018-01-01

    Ross filter pairs have recently been demonstrated as a highly effective means of producing quasi-monoenergetic beams from polychromatic X-ray sources. They have found applications in both X-ray spectroscopy and for elemental separation in X-ray computed tomography (XCT). Here we explore whether they could be applied to the problem of metal artefact reduction (MAR) for applications in medical imaging. Metal artefacts are a common problem in X-ray imaging of metal implants embedded in bone and soft tissue. A number of data post-processing approaches to MAR have been proposed in the literature; however, these can be time-consuming and sometimes have limited efficacy. Here we describe and demonstrate an alternative approach based on beam conditioning using Ross filter pairs. This approach obviates the need for any complex post-processing of the data and enables MAR and segmentation from the surrounding tissue by exploiting the absorption edge contrast of the implant.

  495. Hazard mitigation with cloud model based rainfall and convective data

    NASA Astrophysics Data System (ADS)

    Gernowo, R.; Adi, K.; Yulianto, T.; Seniyatis, S.; Yatunnisa, A. A.

    2018-05-01

    Heavy rain in Semarang on 15 January 2013 caused flooding. The event is related to the dynamics of weather parameters, especially the convection process, clouds and rainfall data. Weather conditions for this case are analyzed using the Weather Research and Forecasting (WRF) model. Several weather parameters show significant results; their fluctuations indicate strong convection producing convective cloud (cumulonimbus). Nesting with two domains in the WRF model gives good output for representing the general weather conditions. The roughly 6-12 hour discrepancy between the observed cloud cover and the model output is attributed to the spin-up of the processing. Satellite images from MTSAT (Multifunctional Transport Satellite) are used as verification data for the WRF results. The white color in the satellite images is the Coldest Dark Grey (CDG), indicating cloud tops. These images confirm that the WRF output is good enough to analyze Semarang's conditions when the event happened.

  496. 3D Imaging for Museum Artefacts: a Portable Test Object for Heritage and Museum Documentation of Small Objects

    NASA Astrophysics Data System (ADS)

    Hess, M.; Robson, S.

    2012-07-01

    3D colour image data generated for the recording of small museum objects and archaeological finds are highly variable in quality and fitness for purpose. Whilst current technology is capable of extremely high quality outputs, there are currently no common standards or applicable guidelines in either the museum or engineering domain suited to scientific evaluation, understanding and tendering for 3D colour digital data. This paper firstly explains the rationale towards and requirements for 3D digital documentation in museums. Secondly it describes the design process, development and use of a new portable test object suited to sensor evaluation and the provision of user acceptance metrics. The test object is specifically designed for museums and heritage institutions and includes known surface and geometric properties which support quantitative and comparative imaging on different systems. The development of a supporting protocol will allow object reference data to be included in the data processing workflow with specific reference to conservation and curation.

  497. Melanoma detection using a mobile phone app

    NASA Astrophysics Data System (ADS)

    Diniz, Luciano E.; Ennser, K.

    2016-03-01

    Mobile phones have had their processing power greatly increased since their invention a few decades ago. As a direct result of Moore's Law, this improvement has made available several applications that were impossible before. The aim of this project is to develop a mobile phone app, integrated with the phone's camera coupled to an amplifying lens, to help distinguish melanoma. The proposed device has the capability of processing skin mole images and suggesting, using a score system, whether or not it is a case of melanoma. This score system is based on the ABCDE signs of melanoma and takes into account the area, the perimeter and the colors present in the nevus. It was calibrated and tested using images from the PH2 Dermoscopic Image Database of Pedro Hispano Hospital. The results show that the system created can be useful, with an accuracy of up to 100% for malignant cases and 80% for benign cases (including common and atypical moles), when used in the test group.
  498. Ultrasound of pediatric breast masses: what to do with lumps and bumps.

    PubMed

    Valeur, Natalie S; Rahbar, Habib; Chapman, Teresa

    2015-10-01

    The approach to breast masses in children differs from that in adults in many ways, including the differential diagnostic considerations, the imaging algorithm and the appropriateness of biopsy as a means of further characterization. Most pediatric breast masses are benign, related either to breast development or to benign neoplastic processes. Biopsy is rarely needed and can damage the developing breast; thus radiologists must be familiar with the imaging appearance of common entities so that biopsies are judiciously recommended. The purpose of this article is to describe the imaging appearances of the normally developing pediatric breast as well as to illustrate the imaging findings of a spectrum of diseases, including those that are benign (fibroadenoma, juvenile papillomatosis, pseudoangiomatous stromal hyperplasia, gynecomastia, abscess and fat necrosis), malignant (breast carcinoma and metastases), and of variable malignant potential (phyllodes tumor).

  499. Augmented microscopy: real-time overlay of bright-field and near-infrared fluorescence images.

    PubMed

    Watson, Jeffrey R; Gainer, Christian F; Martirosyan, Nikolay; Skoch, Jesse; Lemole, G Michael; Anton, Rein; Romanowski, Marek

    2015-10-01

    Intraoperative applications of near-infrared (NIR) fluorescent contrast agents can be aided by instrumentation capable of merging the view of the surgical field with that of the NIR fluorescence. We demonstrate augmented microscopy, an intraoperative imaging technique in which bright-field (real) and electronically processed NIR fluorescence (synthetic) images are merged within the optical path of a stereomicroscope. Under luminance of 100,000 lx, representing typical illumination of the surgical field, the augmented microscope detects a 189 nM concentration of indocyanine green and produces a composite of the real and synthetic images within the eyepiece of the microscope at 20 fps. Augmentation described here can be implemented as an add-on module to visualize NIR contrast agents, laser beams, or various types of electronic data within the surgical microscopes commonly used in neurosurgical, cerebrovascular, otolaryngological, and ophthalmic procedures.
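    A software analogue of the merge that the augmented microscope performs optically: normalize the co-registered NIR fluorescence map, threshold it, and add it to the green channel of the bright-field view. The threshold and gain are arbitrary display choices, not parameters from the paper:

        import numpy as np

        def augment_overlay(bright_rgb, nir, threshold=0.1, gain=1.0):
            """Blend a pseudo-colored NIR fluorescence map into a bright-field view.

            bright_rgb: (H, W, 3) float image in [0, 1] (the "real" view)
            nir:        (H, W) co-registered NIR fluorescence intensity
            """
            nir = nir.astype(float)
            nir = (nir - nir.min()) / (np.ptp(nir) + 1e-12)   # normalize to [0, 1]
            nir = np.where(nir > threshold, nir, 0.0)         # suppress background
            out = bright_rgb.astype(float).copy()
            out[..., 1] = np.clip(out[..., 1] + gain * nir, 0, 1)
            return out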