Sample records for image based information

  1. Quantum Image Steganography and Steganalysis Based On LSQu-Blocks Image Information Concealing Algorithm

    NASA Astrophysics Data System (ADS)

    A. AL-Salhi, Yahya E.; Lu, Songfeng

    2016-08-01

    Quantum steganography can solve some problems that are considered inefficient in image information concealing. Research on quantum image information concealing has been widely pursued in recent years. Quantum image information concealing can be categorized into quantum image digital blocking, quantum image steganography, anonymity and other branches. Least significant bit (LSB) information concealing plays a vital role in the classical world because many image information concealing algorithms are designed based on it. Firstly, based on the novel enhanced quantum representation (NEQR) and uniform clustering of image blocks, the least significant Qu-block (LSQu-block) information concealing algorithm for quantum image steganography is presented. Secondly, a clustering algorithm is proposed to optimize the concealment of important data. Thirdly, the Con-Steg algorithm is used to conceal the clustered image blocks. Because information concealing located in the Fourier domain of an image can improve the security of the image information, we further discuss the Fourier-domain LSQu-block information concealing algorithm for quantum images based on the quantum Fourier transform. In our algorithms, the corresponding unitary transformations are designed to conceal the secret information in the least significant Qu-block representing the color of the quantum cover image. Finally, the procedures for extracting the secret information are illustrated. The quantum image LSQu-block information concealing algorithm can be applied in many fields according to different needs.
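
    A minimal classical sketch of the LSB embedding idea this record builds on, in Python with NumPy. It is illustrative only: the record's LSQu-block algorithm operates on NEQR-encoded quantum states via unitary transformations, which a classical array cannot reproduce, and the function names here are hypothetical.

    ```python
    import numpy as np

    def lsb_embed(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
        # Write each message bit into the least significant bit of successive
        # uint8 cover pixels (row-major order).
        stego = cover.copy().ravel()
        if bits.size > stego.size:
            raise ValueError("message longer than cover capacity")
        stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits.astype(np.uint8)
        return stego.reshape(cover.shape)

    def lsb_extract(stego: np.ndarray, n_bits: int) -> np.ndarray:
        # Recover the first n_bits embedded bits.
        return stego.ravel()[:n_bits] & 1
    ```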

  2. A Probability-Based Statistical Method to Extract Water Body of TM Images with Missing Information

    NASA Astrophysics Data System (ADS)

    Lian, Shizhong; Chen, Jiangping; Luo, Minghai

    2016-06-01

    Water information cannot be accurately extracted from some TM images because true information is lost owing to cloud cover and missing data stripes. Water is continuously distributed under natural conditions; thus, this paper proposes a new method of water body extraction based on probability statistics to improve the accuracy of water information extraction from TM images with missing information. Different disturbing information from clouds and missing data stripes is simulated. Water information is extracted from the simulated images using global histogram matching, local histogram matching, and the probability-based statistical method. Experiments show that a smaller Areal Error and a higher Boundary Recall can be obtained using this method compared with the conventional methods.
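
    A short sketch of the global histogram matching baseline that the record compares against (not the probability-based statistical method itself), assuming single-band uint8 images; the function name is illustrative.

    ```python
    import numpy as np

    def match_histogram(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
        # Map the gray levels of `source` so that its histogram approximates
        # that of `reference` by matching cumulative distribution functions.
        src_vals, src_idx, src_counts = np.unique(source.ravel(),
                                                  return_inverse=True,
                                                  return_counts=True)
        ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
        src_cdf = np.cumsum(src_counts) / source.size
        ref_cdf = np.cumsum(ref_counts) / reference.size
        # For each source level, pick the reference level with the closest CDF value.
        mapped = np.interp(src_cdf, ref_cdf, ref_vals)
        return mapped[src_idx].reshape(source.shape).astype(source.dtype)
    ```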

  3. [Non-rigid medical image registration based on mutual information and thin-plate spline].

    PubMed

    Cao, Guo-gang; Luo, Li-min

    2009-01-01

    To obtain precise and complete details, the comparison of different images is needed in medical diagnosis and computer-assisted treatment. Image registration is the basis of such comparison, but regular rigid registration does not satisfy clinical requirements. A non-rigid medical image registration method based on mutual information and thin-plate splines is presented. First, the two images are registered globally based on mutual information; second, the reference image and the globally registered image are divided into blocks and the blocks are registered; then the thin-plate spline transformation is computed from the shifts of the block centers; finally, the transformation is applied to the globally registered image. The results show that the method is more precise than global rigid registration based on mutual information, and by obtaining the control points of the thin-plate transformation automatically it reduces the complexity of selecting control points and better satisfies clinical requirements.
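
    The similarity measure at the heart of this method is mutual information between the two images; a minimal histogram-based estimate is sketched below. The block-wise registration and thin-plate spline warping described in the record are not shown, and the bin count is an assumption.

    ```python
    import numpy as np

    def mutual_information(img_a: np.ndarray, img_b: np.ndarray, bins: int = 32) -> float:
        # Estimate mutual information (in nats) from the joint gray-level
        # histogram of two images of identical size.
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)   # marginal of image A
        py = pxy.sum(axis=0, keepdims=True)   # marginal of image B
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
    ```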

  4. Edge-based correlation image registration for multispectral imaging

    DOEpatents

    Nandy, Prabal [Albuquerque, NM]

    2009-11-17

    Registration information for images of a common target obtained from a plurality of different spectral bands can be obtained by combining edge detection and phase correlation. The images are edge-filtered, and pairs of the edge-filtered images are then phase correlated to produce phase correlation images. The registration information can be determined based on these phase correlation images.
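
    A compact sketch of the phase correlation step described in this record, assuming the inputs are already edge-filtered images of equal size; the shift-recovery convention (wrapping peaks past half the image size to negative offsets) is one common choice, not necessarily the patent's.

    ```python
    import numpy as np

    def phase_correlation_shift(img_a: np.ndarray, img_b: np.ndarray):
        # Estimate the integer (row, col) translation between two images from
        # the peak of their phase correlation surface.
        fa = np.fft.fft2(img_a)
        fb = np.fft.fft2(img_b)
        cross = fa * np.conj(fb)
        cross /= np.abs(cross) + 1e-12          # keep phase information only
        corr = np.fft.ifft2(cross).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Wrap shifts larger than half the image size to negative offsets.
        shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
        return tuple(shift)
    ```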

  5. Research on polarization imaging information parsing method

    NASA Astrophysics Data System (ADS)

    Yuan, Hongwu; Zhou, Pucheng; Wang, Xiaolong

    2016-11-01

    Polarization information parsing plays an important role in polarization imaging detection. This paper focuses on the polarization information parsing method. Firstly, the general process of polarization information parsing is given, mainly including polarization image preprocessing, calculation of multiple polarization parameters, polarization image fusion and polarization image tracking. Then the research achievements of the polarization information parsing method are presented. In terms of polarization image preprocessing, a polarization image registration method based on maximum mutual information is designed; experiments show that this method can improve the precision of registration and satisfy the needs of polarization information parsing. In terms of calculating multiple polarization parameters, an omnidirectional polarization inversion model is built, a variety of polarization parameter images are obtained, and the precision of inversion is improved markedly. In terms of polarization image fusion, an adaptive optimal fusion method for multiple polarization parameters is given using the fuzzy integral and sparse representation, and target detection in complex scenes is completed using a clustering-based image segmentation algorithm built on fractal characteristics. For polarization image tracking, a fusion tracking algorithm based on mean shift, polarization image features and auxiliary particle filtering is put forward to achieve smooth tracking of moving targets. Finally, the polarization information parsing method is applied to the polarization imaging detection of typical targets such as camouflaged targets, fog and latent fingerprints.
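
    For the "multiple polarization parameters calculation" stage, the standard computation of the linear Stokes parameters and the degree of linear polarization (DoLP) from four polarizer-angle images is sketched below; the record's inversion model, fusion and tracking stages are not reproduced, and the function name is illustrative.

    ```python
    import numpy as np

    def stokes_and_dolp(i0, i45, i90, i135):
        # Linear Stokes parameters from intensity images taken through a
        # polarizer at 0, 45, 90 and 135 degrees, plus the degree of linear
        # polarization (DoLP).
        s0 = 0.5 * (i0 + i45 + i90 + i135)
        s1 = i0 - i90
        s2 = i45 - i135
        dolp = np.sqrt(s1**2 + s2**2) / (s0 + 1e-12)
        return s0, s1, s2, dolp
    ```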

  6. Change detection in synthetic aperture radar images based on image fusion and fuzzy clustering.

    PubMed

    Gong, Maoguo; Zhou, Zhiqiang; Ma, Jingjing

    2012-04-01

    This paper presents an unsupervised distribution-free change detection approach for synthetic aperture radar (SAR) images based on an image fusion strategy and a novel fuzzy clustering algorithm. The image fusion technique is introduced to generate a difference image by using complementary information from a mean-ratio image and a log-ratio image. In order to restrain the background information and enhance the information of changed regions in the fused difference image, wavelet fusion rules based on an average operator and minimum local area energy are chosen to fuse the wavelet coefficients for a low-frequency band and a high-frequency band, respectively. A reformulated fuzzy local-information C-means clustering algorithm is proposed for classifying changed and unchanged regions in the fused difference image. It incorporates the information about spatial context in a novel fuzzy way for the purpose of enhancing the changed information and of reducing the effect of speckle noise. Experiments on real SAR images show that the image fusion strategy integrates the advantages of the log-ratio operator and the mean-ratio operator and gains a better performance. The change detection results obtained by the improved fuzzy clustering algorithm exhibit lower error than those of its predecessors.
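
    A brief sketch of the two difference-image operators whose complementary information the record fuses; the wavelet fusion rules and the reformulated fuzzy local-information C-means clustering are not shown. The mean-ratio form and the window size below follow common usage and are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def difference_images(img1, img2, win=3, eps=1e-6):
        # Log-ratio operator: emphasizes relative change and compresses speckle.
        log_ratio = np.abs(np.log((img2 + eps) / (img1 + eps)))
        # Mean-ratio operator on local means: 1 - min(mu1, mu2) / max(mu1, mu2).
        m1 = uniform_filter(img1.astype(float), size=win)
        m2 = uniform_filter(img2.astype(float), size=win)
        mean_ratio = 1.0 - np.minimum(m1, m2) / (np.maximum(m1, m2) + eps)
        return log_ratio, mean_ratio
    ```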

  7. An effective approach of lesion segmentation within the breast ultrasound image based on the cellular automata principle.

    PubMed

    Liu, Yan; Cheng, H D; Huang, Jianhua; Zhang, Yingtao; Tang, Xianglong

    2012-10-01

    In this paper, a novel method for lesion segmentation within breast ultrasound (BUS) images based on the cellular automata principle is proposed. Its energy transition function is formulated based on the global image information difference and local image information difference using different energy transfer strategies. First, an energy decrease strategy is used for modeling the spatial relation information of pixels. For modeling the global image information difference, a seed information comparison function is developed using an energy preserve strategy. Then, a texture information comparison function is proposed for considering local image difference in different regions, which is helpful for handling blurry boundaries. Moreover, two neighborhood systems (von Neumann and Moore neighborhood systems) are integrated as the evolution environment, and a similarity-based criterion is used for suppressing noise and reducing computation complexity. The proposed method was applied to 205 clinical BUS images for studying its characteristics and functionality, and several overlapping-area error metrics and statistical evaluation methods are utilized for evaluating its performance. The experimental results demonstrate that the proposed method can handle BUS images with blurry boundaries and low contrast well and can segment breast lesions accurately and effectively.

  8. Medical image registration based on normalized multidimensional mutual information

    NASA Astrophysics Data System (ADS)

    Li, Qi; Ji, Hongbing; Tong, Ming

    2009-10-01

    Registration of medical images is an essential research topic in medical image processing and applications, and especially a preliminary and key step for multimodality image fusion. This paper offers a solution to medical image registration based on normalized multi-dimensional mutual information. Firstly, affine transformation with translational and rotational parameters is applied to the floating image. Then ordinal features are extracted by ordinal filters with different orientations to represent spatial information in medical images. Integrating ordinal features with pixel intensities, the normalized multi-dimensional mutual information is defined as similarity criterion to register multimodality images. Finally the immune algorithm is used to search registration parameters. The experimental results demonstrate the effectiveness of the proposed registration scheme.

  9. 3D Reconstruction from UAV-Based Hyperspectral Images

    NASA Astrophysics Data System (ADS)

    Liu, L.; Xu, L.; Peng, J.

    2018-04-01

    Reconstructing the 3D profile from a set of UAV-based images can provide hyperspectral information as well as the 3D coordinates of any point on the profile. Our images are captured by the Cubert UHD185 (UHD) hyperspectral camera, a new type of high-speed onboard imaging spectrometer that acquires a hyperspectral image and a panchromatic image simultaneously. The panchromatic image has a higher spatial resolution than the hyperspectral image, but each hyperspectral image provides considerable information on the spatial spectral distribution of the object. Thus there is an opportunity to derive a high-quality 3D point cloud from the panchromatic images and considerable spectral information from the hyperspectral images. The purpose of this paper is to introduce our processing chain, which derives a database providing the hyperspectral information and 3D position of each point. First, we adopt a free and open-source software package, VisualSFM, which is based on the structure-from-motion (SfM) algorithm, to recover a 3D point cloud from the panchromatic images. Then the spectral information of each point is obtained from the hyperspectral images by a self-developed program written in MATLAB. The product can be used to support further research and applications.

  10. Improved medical image fusion based on cascaded PCA and shift invariant wavelet transforms.

    PubMed

    Reena Benjamin, J; Jayasree, T

    2018-02-01

    In the medical field, radiologists need more informative and high-quality medical images to diagnose diseases. Image fusion plays a vital role in the field of biomedical image analysis. It aims to integrate the complementary information from multimodal images, producing a new composite image which is expected to be more informative for visual perception than any of the individual input images. The main objective of this paper is to improve the information, to preserve the edges and to enhance the quality of the fused image using cascaded principal component analysis (PCA) and shift invariant wavelet transforms. A novel image fusion technique based on cascaded PCA and shift invariant wavelet transforms is proposed in this paper. PCA in the spatial domain extracts relevant information from the large dataset based on eigenvalue decomposition, and the wavelet transform operating in the complex domain with shift invariant properties brings out more directional and phase details of the image. The maximum fusion rule applied in the dual-tree complex wavelet transform domain enhances the average information and morphological details. The input images of the human brain of two different modalities (MRI and CT) are collected from the whole brain atlas data distributed by Harvard University. Both MRI and CT images are fused using the cascaded PCA and shift invariant wavelet transform method. The proposed method is evaluated based on three main key factors, namely structure preservation, edge preservation, and contrast preservation. The experimental results and comparison with other existing fusion methods show the superior performance of the proposed image fusion framework in terms of visual and quantitative evaluations. In this paper, a complex wavelet-based image fusion has been discussed. The experimental results demonstrate that the proposed method enhances the directional features as well as fine edge details. Also, it reduces the redundant details, artifacts, and distortions.
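
    A minimal sketch of the PCA stage alone, deriving fusion weights from the dominant eigenvector of the two sources' covariance matrix; the record cascades this with shift-invariant (dual-tree complex) wavelet fusion and a maximum rule, which are not reproduced here, and the function names are illustrative.

    ```python
    import numpy as np

    def pca_fusion_weights(img_a: np.ndarray, img_b: np.ndarray):
        # Dominant eigenvector of the 2x2 covariance of the two source images,
        # normalized so the weights sum to one.
        data = np.stack([img_a.ravel(), img_b.ravel()]).astype(float)
        eigvals, eigvecs = np.linalg.eigh(np.cov(data))
        v = np.abs(eigvecs[:, np.argmax(eigvals)])
        w = v / v.sum()
        return w[0], w[1]

    def pca_fuse(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
        wa, wb = pca_fusion_weights(img_a, img_b)
        return wa * img_a.astype(float) + wb * img_b.astype(float)
    ```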

  11. A color fusion method of infrared and low-light-level images based on visual perception

    NASA Astrophysics Data System (ADS)

    Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa

    2014-11-01

    Color fusion images can be obtained through the fusion of infrared and low-light-level images and contain the information of both. The fusion images can help observers to understand the multichannel images comprehensively. However, simple fusion may lose the target information due to inconspicuous targets in long-distance infrared and low-light-level images; and if target extraction is adopted blindly, the perception of the scene information will be affected seriously. To solve this problem, a new fusion method based on visual perception is proposed in this paper. The extraction of visual targets ("what" information) and a parallel processing mechanism are applied to traditional color fusion methods. The infrared and low-light-level color fusion images are achieved based on efficient learning of typical targets. Experimental results show the effectiveness of the proposed method. The fusion images achieved by our algorithm can not only improve the detection rate of targets, but also provide rich natural information of the scenes.

  12. Different source image fusion based on FPGA

    NASA Astrophysics Data System (ADS)

    Luo, Xiao; Piao, Yan

    2016-03-01

    The fusion technology of video images uses technical means to make the video obtained by different image sensors complement each other, so as to obtain video that is rich in information and suited to the human visual system. Infrared cameras have strong penetrating power in harsh environments such as smoke, fog and low-light situations, but their ability to capture image detail is poor and the result does not suit the human visual system. Visible-light imaging alone can provide detailed, high-resolution images suited to the visual system, but the visible image is easily affected by the external environment. The fusion of infrared and visible video involves algorithms of high complexity and heavy computation, occupying considerable memory resources and demanding high clock rates; software implementations (C++, C, etc.) are common, but implementations on hardware platforms are few. In this paper, based on the imaging characteristics of infrared and visible-light images, software and hardware are combined: the registration parameters are obtained through MATLAB, and the gray-level weighted average method is implemented on the hardware platform to perform the information fusion. The resulting fused image effectively improves the acquisition of information and increases the amount of information in the image.

  13. An information gathering system for medical image inspection

    NASA Astrophysics Data System (ADS)

    Lee, Young-Jin; Bajcsy, Peter

    2005-04-01

    We present an information gathering system for medical image inspection that consists of software tools for capturing computer-centric and human-centric information. Computer-centric information includes (1) static annotations, such as (a) image drawings enclosing any selected area, a set of areas with similar colors, a set of salient points, and (b) textual descriptions associated with either image drawings or links between pairs of image drawings, and (2) dynamic (or temporal) information, such as mouse movements, zoom level changes, image panning and frame selections from an image stack. Human-centric information is represented by video and audio signals that are acquired by computer-mounted cameras and microphones. The short-term goal of the presented system is to facilitate learning of medical novices from medical experts, while the long-term goal is to data mine all information about image inspection for assisting in making diagnoses. In this work, we built basic software functionality for gathering computer-centric and human-centric information of the aforementioned variables. Next, we developed the information playback capabilities of all gathered information for educational purposes. Finally, we prototyped text-based and image template-based search engines to retrieve information from recorded annotations, for example, (a) find all annotations containing the word "blood vessels", or (b) search for similar areas to a selected image area. The information gathering system for medical image inspection reported here has been tested with images from the Histology Atlas database.

  14. Structural Information Detection Based Filter for GF-3 SAR Images

    NASA Astrophysics Data System (ADS)

    Sun, Z.; Song, Y.

    2018-04-01

    The GF-3 satellite, with its high resolution, large swath, multiple imaging modes, long service life and other characteristics, can achieve all-weather, all-day monitoring of global land and ocean. With its C-band multi-polarized synthetic aperture radar (SAR), it has become the highest-resolution satellite system of its kind in the world. However, owing to the coherent imaging system, speckle appears in GF-3 SAR images and seriously hinders the understanding and interpretation of the images. Therefore, the processing of SAR images faces big challenges owing to the presence of speckle. The high-resolution SAR images produced by the GF-3 satellite are rich in information and have obvious feature structures such as points, edges, lines and so on. Traditional filters such as the Lee filter and the Gamma MAP filter are not appropriate for GF-3 SAR images since they ignore the structural information of the images. In this paper, a structural information detection based filter is constructed, successively including point target detection in the smallest window, an adaptive windowing method based on regional characteristics, and selection of the most homogeneous sub-window. Despeckling experiments on GF-3 SAR images demonstrate that, compared with the traditional filters, the proposed structural information detection based filter can well preserve points, edges and lines as well as smooth the speckle more sufficiently.

  15. The clinical information system GastroBase: integration of image processing and laboratory communication.

    PubMed

    Kocna, P

    1995-01-01

    GastroBase, a clinical information system, incorporates patient identification, medical records, images, laboratory data, patient history, physical examination, and other patient-related information. Program modules are written in C; all data are processed using the Novell Btrieve data manager. The patient identification database represents the core of this information system. A graphic library developed in the past year and graphic modules with a special video card enable the storing, archiving, and linking of different images to the electronic patient medical record. GastroBase has been running in daily routine for more than four years, and the database contains more than 25,000 medical records and 1,500 images. This new version of GastroBase is now incorporated into the clinical information system of the University Clinic in Prague.

  16. Novel active contour model based on multi-variate local Gaussian distribution for local segmentation of MR brain images

    NASA Astrophysics Data System (ADS)

    Zheng, Qiang; Li, Honglun; Fan, Baode; Wu, Shuanhu; Xu, Jindong

    2017-12-01

    Active contour model (ACM) has been one of the most widely utilized methods in magnetic resonance (MR) brain image segmentation because of its ability of capturing topology changes. However, most of the existing ACMs only consider single-slice information in MR brain image data, i.e., the information used in ACMs based segmentation method is extracted only from one slice of MR brain image, which cannot take full advantage of the adjacent slice images' information, and cannot satisfy the local segmentation of MR brain images. In this paper, a novel ACM is proposed to solve the problem discussed above, which is based on multi-variate local Gaussian distribution and combines the adjacent slice images' information in MR brain image data to satisfy segmentation. The segmentation is finally achieved through maximizing the likelihood estimation. Experiments demonstrate the advantages of the proposed ACM over the single-slice ACM in local segmentation of MR brain image series.

  17. Image acquisition context: procedure description attributes for clinically relevant indexing and selective retrieval of biomedical images.

    PubMed

    Bidgood, W D; Bray, B; Brown, N; Mori, A R; Spackman, K A; Golichowski, A; Jones, R H; Korman, L; Dove, B; Hildebrand, L; Berg, M

    1999-01-01

    To support clinically relevant indexing of biomedical images and image-related information based on the attributes of image acquisition procedures and the judgments (observations) expressed by observers in the process of image interpretation. The authors introduce the notion of "image acquisition context," the set of attributes that describe image acquisition procedures, and present a standards-based strategy for utilizing the attributes of image acquisition context as indexing and retrieval keys for digital image libraries. The authors' indexing strategy is based on an interdependent message/terminology architecture that combines the Digital Imaging and Communication in Medicine (DICOM) standard, the SNOMED (Systematized Nomenclature of Human and Veterinary Medicine) vocabulary, and the SNOMED DICOM microglossary. The SNOMED DICOM microglossary provides context-dependent mapping of terminology to DICOM data elements. The capability of embedding standard coded descriptors in DICOM image headers and image-interpretation reports improves the potential for selective retrieval of image-related information. This favorably affects information management in digital libraries.

  18. Content Based Image Retrieval and Information Theory: A General Approach.

    ERIC Educational Resources Information Center

    Zachary, John; Iyengar, S. S.; Barhen, Jacob

    2001-01-01

    Proposes an alternative real valued representation of color based on the information theoretic concept of entropy. A theoretical presentation of image entropy is accompanied by a practical description of the merits and limitations of image entropy compared to color histograms. Results suggest that image entropy is a promising approach to image…
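
    A small sketch of the Shannon entropy of an image histogram, the quantity this record proposes as a compact alternative to color histograms. Applying it per color channel or per region is an assumption, since the abstract does not specify the exact formulation, and the function name is illustrative.

    ```python
    import numpy as np

    def image_entropy(img: np.ndarray, bins: int = 256) -> float:
        # Shannon entropy (bits) of the gray-level histogram of a single
        # channel; one scalar per channel can serve as a compact signature.
        hist, _ = np.histogram(img.ravel(), bins=bins, range=(0, 256))
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))
    ```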

  19. Image Acquisition Context

    PubMed Central

    Bidgood, W. Dean; Bray, Bruce; Brown, Nicolas; Mori, Angelo Rossi; Spackman, Kent A.; Golichowski, Alan; Jones, Robert H.; Korman, Louis; Dove, Brent; Hildebrand, Lloyd; Berg, Michael

    1999-01-01

    Objective: To support clinically relevant indexing of biomedical images and image-related information based on the attributes of image acquisition procedures and the judgments (observations) expressed by observers in the process of image interpretation. Design: The authors introduce the notion of “image acquisition context,” the set of attributes that describe image acquisition procedures, and present a standards-based strategy for utilizing the attributes of image acquisition context as indexing and retrieval keys for digital image libraries. Methods: The authors' indexing strategy is based on an interdependent message/terminology architecture that combines the Digital Imaging and Communication in Medicine (DICOM) standard, the SNOMED (Systematized Nomenclature of Human and Veterinary Medicine) vocabulary, and the SNOMED DICOM microglossary. The SNOMED DICOM microglossary provides context-dependent mapping of terminology to DICOM data elements. Results: The capability of embedding standard coded descriptors in DICOM image headers and image-interpretation reports improves the potential for selective retrieval of image-related information. This favorably affects information management in digital libraries. PMID:9925229

  20. Secure biometric image sensor and authentication scheme based on compressed sensing.

    PubMed

    Suzuki, Hiroyuki; Suzuki, Masamichi; Urabe, Takuya; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2013-11-20

    It is important to ensure the security of biometric authentication information, because its leakage causes serious risks, such as replay attacks using the stolen biometric data, and also because it is almost impossible to replace raw biometric information. In this paper, we propose a secure biometric authentication scheme that protects such information by employing an optical data ciphering technique based on compressed sensing. The proposed scheme is based on two-factor authentication, the biometric information being supplemented by secret information that is used as a random seed for a cipher key. In this scheme, a biometric image is optically encrypted at the time of image capture, and a pair of restored biometric images for enrollment and verification are verified in the authentication server. If any of the biometric information is exposed to risk, it can be reenrolled by changing the secret information. Through numerical experiments, we confirm that finger vein images can be restored from the compressed sensing measurement data. We also present results that verify the accuracy of the scheme.
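
    A conceptual sketch of the measurement step only: random projections of the flattened biometric image whose sensing matrix is seeded by the user's secret, so the measurements are useless without that second factor. The record performs the ciphering optically at capture time and verifies restored images on a server; neither the optics nor the sparse reconstruction is shown, and the seeding scheme and function name here are hypothetical.

    ```python
    import numpy as np

    def cs_measure(image: np.ndarray, m: int, secret_seed: int) -> np.ndarray:
        # Acquire m random-projection measurements of the flattened image;
        # the secret seed determines the Gaussian sensing matrix.
        x = image.ravel().astype(float)
        rng = np.random.default_rng(secret_seed)
        phi = rng.standard_normal((m, x.size)) / np.sqrt(m)
        return phi @ x
    ```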

  1. Image barcodes

    NASA Astrophysics Data System (ADS)

    Damera-Venkata, Niranjan; Yen, Jonathan

    2003-01-01

    The visually significant two-dimensional barcode (VSB) developed by Shaked et al. is a method used to design an information-carrying two-dimensional barcode which has the appearance of a given graphical entity, such as a company logo. The encoding and decoding of information using the VSB use a base image with very few gray levels (typically only two). This typically requires the image histogram to be bi-modal. For continuous-tone images such as digital photographs of individuals, the representation of tone or "shades of gray" is not only important to obtain a pleasing rendition of the face, but in most cases the VSB renders these images unrecognizable due to its inability to represent true gray-tone variations. This paper extends the concept of the VSB to an image bar code (IBC). We enable the encoding and subsequent decoding of information embedded in the hardcopy version of continuous-tone base images such as those acquired with a digital camera. The encoding-decoding process is modeled as robust data transmission through a noisy print-scan channel that is explicitly modeled. The IBC supports a high information capacity that differentiates it from common hardcopy watermarks. The reason for the improved image quality over the VSB is a joint encoding/halftoning strategy based on a modified version of block error diffusion. Encoder stability, image quality versus information capacity tradeoffs, and decoding issues with and without explicit knowledge of the base image are discussed.

  2. High-Resolution Remote Sensing Image Building Extraction Based on Markov Model

    NASA Astrophysics Data System (ADS)

    Zhao, W.; Yan, L.; Chang, Y.; Gong, L.

    2018-04-01

    With the increase of resolution, remote sensing images are characterized by increased information load, increased noise, and more complex feature geometry and texture information, which makes the extraction of building information more difficult. To solve this problem, this paper designs a high-resolution remote sensing image building extraction method based on a Markov model. The method introduces Contourlet-domain map clustering and the Markov model, captures and enhances the contour and texture information of high-resolution remote sensing image features in multiple directions, and further designs a spectral feature index that can characterize "pseudo-buildings" in the building area. Through multi-scale segmentation and extraction of image features, fine extraction from the building area down to individual buildings is realized. Experiments show that this method can restrain the noise of high-resolution remote sensing images, reduce the interference of non-target ground texture information, and remove shadows, vegetation and other pseudo-building information; compared with traditional pixel-level image information extraction, it achieves better performance in building extraction precision, accuracy and completeness.

  3. Registration of 2D to 3D joint images using phase-based mutual information

    NASA Astrophysics Data System (ADS)

    Dalvi, Rupin; Abugharbieh, Rafeef; Pickering, Mark; Scarvell, Jennie; Smith, Paul

    2007-03-01

    Registration of two dimensional to three dimensional orthopaedic medical image data has important applications particularly in the area of image guided surgery and sports medicine. Fluoroscopy to computer tomography (CT) registration is an important case, wherein digitally reconstructed radiographs derived from the CT data are registered to the fluoroscopy data. Traditional registration metrics such as intensity-based mutual information (MI) typically work well but often suffer from gross misregistration errors when the image to be registered contains a partial view of the anatomy visible in the target image. Phase-based MI provides a robust alternative similarity measure which, in addition to possessing the general robustness and noise immunity that MI provides, also employs local phase information in the registration process which makes it less susceptible to the aforementioned errors. In this paper, we propose using the complex wavelet transform for computing image phase information and incorporating that into a phase-based MI measure for image registration. Tests on a CT volume and 6 fluoroscopy images of the knee are presented. The femur and the tibia in the CT volume were individually registered to the fluoroscopy images using intensity-based MI, gradient-based MI and phase-based MI. Errors in the coordinates of fiducials present in the bone structures were used to assess the accuracy of the different registration schemes. Quantitative results demonstrate that the performance of intensity-based MI was the worst. Gradient-based MI performed slightly better, while phase-based MI results were the best consistently producing the lowest errors.

  4. Joint sparse coding based spatial pyramid matching for classification of color medical image.

    PubMed

    Shi, Jun; Li, Yi; Zhu, Jie; Sun, Haojie; Cai, Yin

    2015-04-01

    Although color medical images are important in clinical practice, they are usually converted to grayscale for further processing in pattern recognition, resulting in loss of rich color information. The sparse coding based linear spatial pyramid matching (ScSPM) and its variants are popular for grayscale image classification, but cannot extract color information. In this paper, we propose a joint sparse coding based SPM (JScSPM) method for the classification of color medical images. A joint dictionary can represent both the color information in each color channel and the correlation between channels. Consequently, the joint sparse codes calculated from a joint dictionary can carry color information, and therefore this method can easily transform a feature descriptor originally designed for grayscale images to a color descriptor. A color hepatocellular carcinoma histological image dataset was used to evaluate the performance of the proposed JScSPM algorithm. Experimental results show that JScSPM provides significant improvements as compared with the majority voting based ScSPM and the original ScSPM for color medical image classification. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Fusion of infrared polarization and intensity images based on improved toggle operator

    NASA Astrophysics Data System (ADS)

    Zhu, Pan; Ding, Lei; Ma, Xiaoqing; Huang, Zhanhua

    2018-01-01

    Integration of infrared polarization and intensity images has become a new topic in infrared image understanding and interpretation. The abundant infrared details and targets from the infrared image and the salient edge and shape information from the polarization image should be preserved or even enhanced in the fused result. In this paper, a new fusion method is proposed for infrared polarization and intensity images based on an improved multi-scale toggle operator with spatial scale, which can effectively extract the feature information of the source images and heavily reduce redundancy among different scales. Firstly, the multi-scale image features of the infrared polarization and intensity images are respectively extracted at different scale levels by the improved multi-scale toggle operator. Secondly, the redundancy of the features among different scales is reduced by using the spatial scale. Thirdly, the final image features are combined by simply adding all scales of feature images together, and a base image is calculated by applying a mean-value weighting to the smoothed source images. Finally, the fusion image is obtained by importing the combined image features into the base image with a suitable strategy. Both the objective assessment and subjective visual inspection of the experimental results indicate that the proposed method obtains better performance in preserving detail and edge information as well as improving image contrast.

  6. Iteration and superposition encryption scheme for image sequences based on multi-dimensional keys

    NASA Astrophysics Data System (ADS)

    Han, Chao; Shen, Yuzhen; Ma, Wenlin

    2017-12-01

    An iteration and superposition encryption scheme for image sequences based on multi-dimensional keys is proposed for high security, big capacity and low noise information transmission. Multiple images to be encrypted are transformed into phase-only images with the iterative algorithm and then are encrypted by different random phase, respectively. The encrypted phase-only images are performed by inverse Fourier transform, respectively, thus new object functions are generated. The new functions are located in different blocks and padded zero for a sparse distribution, then they propagate to a specific region at different distances by angular spectrum diffraction, respectively and are superposed in order to form a single image. The single image is multiplied with a random phase in the frequency domain and then the phase part of the frequency spectrums is truncated and the amplitude information is reserved. The random phase, propagation distances, truncated phase information in frequency domain are employed as multiple dimensional keys. The iteration processing and sparse distribution greatly reduce the crosstalk among the multiple encryption images. The superposition of image sequences greatly improves the capacity of encrypted information. Several numerical experiments based on a designed optical system demonstrate that the proposed scheme can enhance encrypted information capacity and make image transmission at a highly desired security level.

  7. Taxonomy of multi-focal nematode image stacks by a CNN based image fusion approach.

    PubMed

    Liu, Min; Wang, Xueping; Zhang, Hongzhong

    2018-03-01

    In the biomedical field, digital multi-focal images are very important for documentation and communication of specimen data, because the morphological information for a transparent specimen can be captured in the form of a stack of high-quality images. Given biomedical image stacks containing multi-focal images, how to efficiently extract effective features from all layers to classify the image stacks is still an open question. We present a deep convolutional neural network (CNN) image fusion based multilinear approach for the taxonomy of multi-focal image stacks. A deep CNN based image fusion technique is used to combine relevant information of multi-focal images within a given image stack into a single image, which is more informative and complete than any single image in the given stack. Besides, multi-focal images within a stack are fused along 3 orthogonal directions, and multiple features extracted from the fused images along different directions are combined by canonical correlation analysis (CCA). Because multi-focal image stacks represent the effect of different factors - texture, shape, different instances within the same class and different classes of objects, we embed the deep CNN based image fusion method within a multilinear framework to propose an image fusion based multilinear classifier. The experimental results on nematode multi-focal image stacks demonstrated that the deep CNN image fusion based multilinear classifier can reach a higher classification rate (95.7%) than that of the previous multilinear based approach (88.7%), even though we only use the texture feature instead of the combination of texture and shape features as in the previous work. The proposed deep CNN image fusion based multilinear approach shows great potential in building an automated nematode taxonomy system for nematologists. It is effective for classifying multi-focal image stacks. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Level set method for image segmentation based on moment competition

    NASA Astrophysics Data System (ADS)

    Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai

    2015-05-01

    We propose a level set method for image segmentation which introduces the moment competition and weakly supervised information into the energy functional construction. Different from the region-based level set methods which use force competition, the moment competition is adopted to drive the contour evolution. Here, a so-called three-point labeling scheme is proposed to manually label three independent points (weakly supervised information) on the image. Then the intensity differences between the three points and the unlabeled pixels are used to construct the force arms for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, the moment can be constructed and incorporated into the energy functional to drive the evolving contour to approach the object boundary. In our method, the force arm can take full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and weakly supervised information are successfully integrated, which makes the proposed method more robust than traditional methods for initial contour placement and parameter setting. Experimental results with performance analysis also show the superiority of the proposed method on segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.

  9. Document Indexing for Image-Based Optical Information Systems.

    ERIC Educational Resources Information Center

    Thiel, Thomas J.; And Others

    1991-01-01

    Discussion of image-based information retrieval systems focuses on indexing. Highlights include computerized information retrieval; multimedia optical systems; optical mass storage and personal computers; and a case study that describes an optical disk system which was developed to preserve, access, and disseminate military documents. (19…

  10. An Image Encryption Algorithm Based on Information Hiding

    NASA Astrophysics Data System (ADS)

    Ge, Xin; Lu, Bin; Liu, Fenlin; Gong, Daofu

    Aiming at resolving the conflict between security and efficiency in the design of chaotic image encryption algorithms, an image encryption algorithm based on information hiding is proposed based on the “one-time pad” idea. A random parameter is introduced to ensure a different keystream for each encryption, which has the characteristics of “one-time pad”, improving the security of the algorithm rapidly without significant increase in algorithm complexity. The random parameter is embedded into the ciphered image with information hiding technology, which avoids negotiation for its transport and makes the application of the algorithm easier. Algorithm analysis and experiments show that the algorithm is secure against chosen plaintext attack, differential attack and divide-and-conquer attack, and has good statistical properties in ciphered images.

  11. Improved patch-based learning for image deblurring

    NASA Astrophysics Data System (ADS)

    Dong, Bo; Jiang, Zhiguo; Zhang, Haopeng

    2015-05-01

    Most recent image deblurring methods only use valid information found in input image as the clue to fill the deblurring region. These methods usually have the defects of insufficient prior information and relatively poor adaptiveness. Patch-based method not only uses the valid information of the input image itself, but also utilizes the prior information of the sample images to improve the adaptiveness. However the cost function of this method is quite time-consuming and the method may also produce ringing artifacts. In this paper, we propose an improved non-blind deblurring algorithm based on learning patch likelihoods. On one hand, we consider the effect of the Gaussian mixture model with different weights and normalize the weight values, which can optimize the cost function and reduce running time. On the other hand, a post processing method is proposed to solve the ringing artifacts produced by traditional patch-based method. Extensive experiments are performed. Experimental results verify that our method can effectively reduce the execution time, suppress the ringing artifacts effectively, and keep the quality of deblurred image.

  12. Gradient-based reliability maps for ACM-based segmentation of hippocampus.

    PubMed

    Zarpalas, Dimitrios; Gkontra, Polyxeni; Daras, Petros; Maglaveras, Nicos

    2014-04-01

    Automatic segmentation of deep brain structures, such as the hippocampus (HC), in MR images has attracted considerable scientific attention due to the widespread use of MRI and to the principal role of some structures in various mental disorders. In the literature, there exists a substantial amount of work relying on deformable models incorporating prior knowledge about structures' anatomy and shape information. However, shape priors capture global shape characteristics and thus fail to model boundaries of varying properties; HC boundaries present rich, poor, and missing gradient regions. On top of that, shape prior knowledge is blended with image information in the evolution process, through global weighting of the two terms, again neglecting the spatially varying boundary properties, causing segmentation faults. An innovative method is hereby presented that aims to achieve highly accurate HC segmentation in MR images, based on the modeling of boundary properties at each anatomical location and the inclusion of appropriate image information for each of those, within an active contour model framework. Hence, blending of image information and prior knowledge is based on a local weighting map, which mixes gradient information, regional and whole brain statistical information with a multi-atlas-based spatial distribution map of the structure's labels. Experimental results on three different datasets demonstrate the efficacy and accuracy of the proposed method.

  13. A Novel Quantum Image Steganography Scheme Based on LSB

    NASA Astrophysics Data System (ADS)

    Zhou, Ri-Gui; Luo, Jia; Liu, XingAo; Zhu, Changming; Wei, Lai; Zhang, Xiafen

    2018-06-01

    Based on the NEQR representation of quantum images and the least significant bit (LSB) scheme, a novel quantum image steganography scheme is proposed. The sizes of the cover image and the original information image are assumed to be 4n × 4n and n × n, respectively. Firstly, the bit-plane scrambling method is used to scramble the original information image. Then the scrambled information image is expanded to the same size as the cover image by using the key known only to the operator. The expanded image is scrambled to be a meaningless image with the Arnold scrambling. The embedding procedure and extracting procedure are carried out by keys K1 and K2, which are under the control of the operator. For validation of the presented scheme, the peak signal-to-noise ratio (PSNR), the capacity, the security of the images and the circuit complexity are analyzed.

  14. A novel methodology for querying web images

    NASA Astrophysics Data System (ADS)

    Prabhakara, Rashmi; Lee, Ching Cheng

    2005-01-01

    Ever since the advent of Internet, there has been an immense growth in the amount of image data that is available on the World Wide Web. With such a magnitude of image availability, an efficient and effective image retrieval system is required to make use of this information. This research presents an effective image matching and indexing technique that improvises on existing integrated image retrieval methods. The proposed technique follows a two-phase approach, integrating query by topic and query by example specification methods. The first phase consists of topic-based image retrieval using an improved text information retrieval (IR) technique that makes use of the structured format of HTML documents. It consists of a focused crawler that not only provides for the user to enter the keyword for the topic-based search but also, the scope in which the user wants to find the images. The second phase uses the query by example specification to perform a low-level content-based image match for the retrieval of smaller and relatively closer results of the example image. Information related to the image feature is automatically extracted from the query image by the image processing system. A technique that is not computationally intensive based on color feature is used to perform content-based matching of images. The main goal is to develop a functional image search and indexing system and to demonstrate that better retrieval results can be achieved with this proposed hybrid search technique.

  15. A novel methodology for querying web images

    NASA Astrophysics Data System (ADS)

    Prabhakara, Rashmi; Lee, Ching Cheng

    2004-12-01

    Ever since the advent of Internet, there has been an immense growth in the amount of image data that is available on the World Wide Web. With such a magnitude of image availability, an efficient and effective image retrieval system is required to make use of this information. This research presents an effective image matching and indexing technique that improvises on existing integrated image retrieval methods. The proposed technique follows a two-phase approach, integrating query by topic and query by example specification methods. The first phase consists of topic-based image retrieval using an improved text information retrieval (IR) technique that makes use of the structured format of HTML documents. It consists of a focused crawler that not only provides for the user to enter the keyword for the topic-based search but also, the scope in which the user wants to find the images. The second phase uses the query by example specification to perform a low-level content-based image match for the retrieval of smaller and relatively closer results of the example image. Information related to the image feature is automatically extracted from the query image by the image processing system. A technique that is not computationally intensive based on color feature is used to perform content-based matching of images. The main goal is to develop a functional image search and indexing system and to demonstrate that better retrieval results can be achieved with this proposed hybrid search technique.

  16. High dynamic range image acquisition based on multiplex cameras

    NASA Astrophysics Data System (ADS)

    Zeng, Hairui; Sun, Huayan; Zhang, Tinghua

    2018-03-01

    High dynamic range imaging is an important technology for photoelectric information acquisition, providing higher dynamic range and more image details, and it can better reflect the real environment, light and color information. Currently, methods that synthesize a high dynamic range image from a sequence of differently exposed images cannot adapt to dynamic scenes: they fail to overcome the effects of moving targets, resulting in ghosting. Therefore, a new high dynamic range image acquisition method based on a multiplex camera system is proposed. Firstly, sequences of differently exposed images are captured with the camera array, and a derivative optical flow method based on color gradients is used to obtain the deviation between images and align them. Then, a high dynamic range fusion weighting function is established by combining the inverse camera response function with the deviation between images, and it is applied to generate a high dynamic range image. The experiments show that the proposed method can effectively obtain high dynamic range images of dynamic scenes and achieves good results.
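
    A simplified radiance-merging sketch for an already registered exposure bracket, assuming a linear camera response and a hat-shaped weighting; the record instead estimates the inverse camera response function and aligns frames with a color-gradient optical flow, which this sketch omits, and the function name is illustrative.

    ```python
    import numpy as np

    def merge_exposures(images, exposure_times):
        # Merge registered uint8 exposures into a relative radiance map; the
        # hat-shaped weight de-emphasizes under- and over-exposed pixels.
        acc = np.zeros(images[0].shape, dtype=float)
        wsum = np.zeros_like(acc)
        for img, t in zip(images, exposure_times):
            z = img.astype(float) / 255.0
            w = 1.0 - np.abs(2.0 * z - 1.0)       # maximal weight at mid-gray
            acc += w * z / t
            wsum += w
        return acc / (wsum + 1e-12)
    ```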

  17. Distributed single source coding with side information

    NASA Astrophysics Data System (ADS)

    Vila-Forcen, Jose E.; Koval, Oleksiy; Voloshynovskiy, Sviatoslav V.

    2004-01-01

    In the paper we advocate image compression technique in the scope of distributed source coding framework. The novelty of the proposed approach is twofold: classical image compression is considered from the positions of source coding with side information and, contrarily to the existing scenarios, where side information is given explicitly, side information is created based on deterministic approximation of local image features. We consider an image in the transform domain as a realization of a source with a bounded codebook of symbols where each symbol represents a particular edge shape. The codebook is image independent and plays the role of auxiliary source. Due to the partial availability of side information at both encoder and decoder we treat our problem as a modification of Berger-Flynn-Gray problem and investigate a possible gain over the solutions when side information is either unavailable or available only at decoder. Finally, we present a practical compression algorithm for passport photo images based on our concept that demonstrates the superior performance in very low bit rate regime.

  18. Resolution improvement in positron emission tomography using anatomical Magnetic Resonance Imaging.

    PubMed

    Chu, Yong; Su, Min-Ying; Mandelkern, Mark; Nalcioglu, Orhan

    2006-08-01

    An ideal imaging system should provide information with high-sensitivity, high spatial, and temporal resolution. Unfortunately, it is not possible to satisfy all of these desired features in a single modality. In this paper, we discuss methods to improve the spatial resolution in positron emission imaging (PET) using a priori information from Magnetic Resonance Imaging (MRI). Our approach uses an image restoration algorithm based on the maximization of mutual information (MMI), which has found significant success for optimizing multimodal image registration. The MMI criterion is used to estimate the parameters in the Sharpness-Constrained Wiener filter. The generated filter is then applied to restore PET images of a realistic digital brain phantom. The resulting restored images show improved resolution and better signal-to-noise ratio compared to the interpolated PET images. We conclude that a Sharpness-Constrained Wiener filter having parameters optimized from a MMI criterion may be useful for restoring spatial resolution in PET based on a priori information from correlated MRI.

  19. Research on image complexity evaluation method based on color information

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Duan, Jin; Han, Xue-hui; Xiao, Bo

    2017-11-01

    In order to evaluate the complexity of a color image more effectively and find the connection between image complexity and image information, this paper presents a method to compute image complexity based on color information. The theoretical analysis first divides complexity at the subjective level into three grades: low complexity, medium complexity and high complexity; image features are then extracted, and finally a function is established between the complexity value and the color characteristic model. The experimental results show that this evaluation method can objectively reconstruct the complexity of the image from the image features, and the results are in good agreement with the complexity perceived by human vision, so the color image complexity measure has a certain reference value.

  20. Adaptive polarization image fusion based on regional energy dynamic weighted average

    NASA Astrophysics Data System (ADS)

    Zhao, Yong-Qiang; Pan, Quan; Zhang, Hong-Cai

    2005-11-01

    According to the principle of polarization imaging and the relation between the Stokes parameters and the degree of linear polarization, there is much redundant and complementary information in polarized images. Since man-made objects and natural objects can be easily distinguished in images of the degree of linear polarization, and images of the Stokes parameters contain rich detailed information of the scene, the clutter in the images can be removed efficiently while the detailed information is maintained by combining these images. An algorithm of adaptive polarization image fusion based on regional energy dynamic weighted averaging is proposed in this paper to combine these images. Through an experiment and simulations, most clutter is removed by this algorithm. The fusion method is applied to different lighting conditions in simulation, and the influence of lighting conditions on the fusion results is analyzed.
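
    A rough sketch of regional-energy weighted averaging of two co-registered images: the locally more energetic source gets the larger weight at each pixel. The record applies an adaptive, dynamic version of this idea to Stokes-parameter and degree-of-linear-polarization images; the exact weighting rule is not given in the abstract, so the window size and normalization below are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def regional_energy_fusion(img_a: np.ndarray, img_b: np.ndarray, win: int = 5):
        # Local (regional) energy of each source, then per-pixel weights
        # proportional to that energy.
        ea = uniform_filter(img_a.astype(float) ** 2, size=win)
        eb = uniform_filter(img_b.astype(float) ** 2, size=win)
        wa = ea / (ea + eb + 1e-12)
        return wa * img_a + (1.0 - wa) * img_b
    ```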

  1. Despeckling PolSAR Images Based on Relative Total Variation Model

    NASA Astrophysics Data System (ADS)

    Jiang, C.; He, X. F.; Yang, L. J.; Jiang, J.; Wang, D. Y.; Yuan, Y.

    2018-04-01

    The relative total variation (RTV) algorithm, which can effectively separate structure information from texture in an image, is employed to extract the main structures of the image. However, applying the RTV directly to polarimetric SAR (PolSAR) image filtering will not preserve polarimetric information. A new RTV approach based on the complex Wishart distribution is proposed, taking into account the polarimetric properties of PolSAR. The proposed polarization RTV (PolRTV) algorithm can be used for PolSAR image filtering. The L-band Airborne SAR (AIRSAR) San Francisco data are used to demonstrate the effectiveness of the proposed algorithm in speckle suppression, structural information preservation, and polarimetric property preservation.

  2. An Improved Image Matching Method Based on Surf Algorithm

    NASA Astrophysics Data System (ADS)

    Chen, S. J.; Zheng, S. Z.; Xu, Z. G.; Guo, C. C.; Ma, X. L.

    2018-04-01

    Many state-of-the-art image matching methods based on feature matching have been widely studied in the remote sensing field. These feature matching methods achieve high operating efficiency but suffer from limited accuracy and robustness. This paper proposes an improved image matching method based on the SURF algorithm. The proposed method introduces a color invariant transformation, information entropy theory and a series of constraint conditions to improve feature point detection and matching accuracy. First, the color invariant transformation is applied to the two images to be matched in order to retain more color information during matching, and information entropy theory is used to capture as much of the information content of the two images as possible. The SURF algorithm is then applied to detect and describe feature points in the images. Finally, constraint conditions including Delaunay triangulation construction, a similarity function and a projective invariant are employed to eliminate mismatches and improve matching precision. The proposed method has been validated on remote sensing images, and the results show high precision and robustness.

  3. Sub-pattern based multi-manifold discriminant analysis for face recognition

    NASA Astrophysics Data System (ADS)

    Dai, Jiangyan; Guo, Changlu; Zhou, Wei; Shi, Yanjiao; Cong, Lin; Yi, Yugen

    2018-04-01

    In this paper, we present a Sub-pattern based Multi-manifold Discriminant Analysis (SpMMDA) algorithm for face recognition. Unlike the existing Multi-manifold Discriminant Analysis (MMDA) approach, which is based on holistic information of the face image, SpMMDA operates on sub-images partitioned from the original face image and extracts discriminative local features from each sub-image separately. Moreover, the structure information shared by different sub-images of the same face image is considered in the proposed method with the aim of further improving the recognition performance. Extensive experiments on three standard face databases (Extended YaleB, CMU PIE and AR) demonstrate that the proposed method is effective and outperforms some other sub-pattern based face recognition methods.

  4. Multispectral imaging for biometrics

    NASA Astrophysics Data System (ADS)

    Rowe, Robert K.; Corcoran, Stephen P.; Nixon, Kristin A.; Ostrom, Robert E.

    2005-03-01

    Automated identification systems based on fingerprint images are subject to two significant types of error: an incorrect decision about the identity of a person due to a poor quality fingerprint image and incorrectly accepting a fingerprint image generated from an artificial sample or altered finger. This paper discusses the use of multispectral sensing as a means to collect additional information about a finger that significantly augments the information collected using a conventional fingerprint imager based on total internal reflectance. In the context of this paper, "multispectral sensing" is used broadly to denote a collection of images taken under different polarization conditions and illumination configurations, as well as using multiple wavelengths. Background information is provided on conventional fingerprint imaging. A multispectral imager for fingerprint imaging is then described and a means to combine the two imaging systems into a single unit is discussed. Results from an early-stage prototype of such a system are shown.

  5. A hybrid multimodal non-rigid registration of MR images based on diffeomorphic demons.

    PubMed

    Lu, Huanxiang; Cattin, Philippe C; Reyes, Mauricio

    2010-01-01

    In this paper we present a novel hybrid approach for multimodal medical image registration based on diffeomorphic demons. Diffeomorphic demons have proven to be a robust and efficient method for intensity-based image registration, and a recent extension even allows mutual information (MI) to be used as a similarity measure for registering multimodal images. However, due to the intensity correspondence uncertainty in some anatomical regions, it is difficult for a purely intensity-based algorithm to solve the registration problem. We therefore propose to combine the transformations obtained from intensity-based and landmark-based methods for multimodal non-rigid registration based on diffeomorphic demons. Several experiments on different types of MR images show that a better anatomical correspondence between the images can be obtained with the hybrid approach than with either intensity information or landmarks alone.

  6. Material classification and automatic content enrichment of images using supervised learning and knowledge bases

    NASA Astrophysics Data System (ADS)

    Mallepudi, Sri Abhishikth; Calix, Ricardo A.; Knapp, Gerald M.

    2011-02-01

    In recent years there has been a rapid increase in the size of video and image databases. Effective searching and retrieving of images from these databases is a significant current research area. In particular, there is a growing interest in query capabilities based on semantic image features such as objects, locations, and materials, known as content-based image retrieval. This study investigated mechanisms for identifying materials present in an image. These capabilities provide additional information impacting conditional probabilities about images (e.g. objects made of steel are more likely to be buildings). These capabilities are useful in Building Information Modeling (BIM) and in automatic enrichment of images. I2T methodologies are a way to enrich an image by generating text descriptions based on image analysis. In this work, a learning model is trained to detect certain materials in images. To train the model, an image dataset was constructed containing single material images of bricks, cloth, grass, sand, stones, and wood. For generalization purposes, an additional set of 50 images containing multiple materials (some not used in training) was constructed. Two different supervised learning classification models were investigated: a single multi-class SVM classifier, and multiple binary SVM classifiers (one per material). Image features included Gabor filter parameters for texture, and color histogram data for RGB components. All classification accuracy scores using the SVM-based method were above 85%. The second model helped in gathering more information from the images since it assigned multiple classes to the images. A framework for the I2T methodology is presented.
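
    The feature-plus-classifier pipeline described above can be sketched as follows; this is a simplified illustration rather than the study's exact configuration, and the Gabor frequency, histogram bins and the training data (training_images, labels) are assumed placeholders.

        import numpy as np
        from skimage.color import rgb2gray
        from skimage.filters import gabor
        from sklearn.svm import SVC

        def material_features(rgb):
            """Texture (Gabor) plus color (per-channel histogram) features for one image scaled to [0, 1]."""
            gray = rgb2gray(rgb)
            tex = []
            for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
                real, _ = gabor(gray, frequency=0.3, theta=theta)
                tex += [real.mean(), real.var()]
            color = [np.histogram(rgb[..., c], bins=16, range=(0, 1))[0] for c in range(3)]
            return np.concatenate([tex, np.concatenate(color)])

        # X = np.stack([material_features(img) for img in training_images])
        # clf = SVC(kernel='rbf').fit(X, labels)   # single multi-class SVM variant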

  7. 2D-3D registration using gradient-based MI for image guided surgery systems

    NASA Astrophysics Data System (ADS)

    Yim, Yeny; Chen, Xuanyi; Wakid, Mike; Bielamowicz, Steve; Hahn, James

    2011-03-01

    Registration of preoperative CT data to intra-operative video images is necessary not only to compare the outcome of the vocal fold after surgery with the preplanned shape but also to provide image guidance for fusion of all imaging modalities. We propose a 2D-3D registration method using gradient-based mutual information. The 3D CT scan is aligned to 2D endoscopic images by finding the corresponding viewpoint between the real camera for endoscopic images and the virtual camera for CT scans. Even though mutual information has been successfully used to register different imaging modalities, it is difficult to robustly register the CT rendered image to the endoscopic image due to varying light patterns and the shape of the vocal fold. The proposed method calculates the mutual information in the gradient images as well as the original images, assigning more weight to the high gradient regions. The proposed method emphasizes the vocal fold and allows robust matching regardless of surface illumination. To find the viewpoint with maximum mutual information, a downhill simplex method is applied in a conditional multi-resolution scheme, which makes the result less sensitive to local maxima. To validate the registration accuracy, we evaluated the sensitivity to the initial viewpoint of the preoperative CT. Experimental results showed that gradient-based mutual information provided robust matching not only for two identical images with different viewpoints but also for different images acquired before and after surgery. The results also showed that the conditional multi-resolution scheme led to a more accurate registration than a single-resolution scheme.

  8. A new region-edge based level set model with applications to image segmentation

    NASA Astrophysics Data System (ADS)

    Zhi, Xuhao; Shen, Hong-Bin

    2018-04-01

    The level set model has advantages in handling complex shapes and topological changes, and is widely used in image processing tasks. Level set models for image segmentation can be grouped into region-based and edge-based models, both of which have merits and drawbacks. Region-based level set models rely on fitting the color intensity of separated regions but are not sensitive to edge information. Edge-based level set models evolve by fitting local gradient information but are easily affected by noise. We propose a region-edge based level set model that incorporates saliency information into the energy function and fuses color intensity with local gradient information. The evolution of the proposed model is implemented by a hierarchical two-stage protocol, and the experimental results show flexible initialization, robust evolution and precise segmentation.

  9. Spatio-Temporal Super-Resolution Reconstruction of Remote-Sensing Images Based on Adaptive Multi-Scale Detail Enhancement

    PubMed Central

    Zhu, Hong; Tang, Xinming; Xie, Junfeng; Song, Weidong; Mo, Fan; Gao, Xiaoming

    2018-01-01

    There are many problems in existing reconstruction-based super-resolution algorithms, such as the lack of texture-feature representation and of high-frequency details. Multi-scale detail enhancement can produce more texture information and high-frequency information. Therefore, super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement (AMDE-SR) is proposed in this paper. First, the information entropy of each remote-sensing image is calculated, and the image with the maximum entropy value is regarded as the reference image. Subsequently, spatio-temporal remote-sensing images are processed using phase normalization, which reduces the time phase difference of the image data and enhances the complementarity of information. The multi-scale image information is then decomposed using the L0 gradient minimization model, and the non-redundant information is processed by difference calculation and expanding non-redundant layers and the redundant layer by the iterative back-projection (IBP) technique. The different-scale non-redundant information is adaptively weighted and fused using cross-entropy. Finally, a nonlinear texture-detail-enhancement function is built to improve the scope of small details, and the peak signal-to-noise ratio (PSNR) is used as an iterative constraint. Ultimately, high-resolution remote-sensing images with abundant texture information are obtained by iterative optimization. Real results show an average gain in entropy of up to 0.42 dB for an up-scaling of 2 and a significant promotion gain in enhancement measure evaluation for an up-scaling of 2. The experimental results show that the performance of the AMDE-SR method is better than existing super-resolution reconstruction methods in terms of visual and accuracy improvements. PMID:29414893

  10. Spatio-Temporal Super-Resolution Reconstruction of Remote-Sensing Images Based on Adaptive Multi-Scale Detail Enhancement.

    PubMed

    Zhu, Hong; Tang, Xinming; Xie, Junfeng; Song, Weidong; Mo, Fan; Gao, Xiaoming

    2018-02-07

    There are many problems in existing reconstruction-based super-resolution algorithms, such as the lack of texture-feature representation and of high-frequency details. Multi-scale detail enhancement can produce more texture information and high-frequency information. Therefore, super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement (AMDE-SR) is proposed in this paper. First, the information entropy of each remote-sensing image is calculated, and the image with the maximum entropy value is regarded as the reference image. Subsequently, spatio-temporal remote-sensing images are processed using phase normalization, which reduces the time phase difference of the image data and enhances the complementarity of information. The multi-scale image information is then decomposed using the L₀ gradient minimization model, and the non-redundant information is processed by difference calculation and expanding non-redundant layers and the redundant layer by the iterative back-projection (IBP) technique. The different-scale non-redundant information is adaptively weighted and fused using cross-entropy. Finally, a nonlinear texture-detail-enhancement function is built to improve the scope of small details, and the peak signal-to-noise ratio (PSNR) is used as an iterative constraint. Ultimately, high-resolution remote-sensing images with abundant texture information are obtained by iterative optimization. Real results show an average gain in entropy of up to 0.42 dB for an up-scaling of 2 and a significant promotion gain in enhancement measure evaluation for an up-scaling of 2. The experimental results show that the performance of the AMDE-SR method is better than existing super-resolution reconstruction methods in terms of visual and accuracy improvements.

  11. Tag-Based Social Image Search: Toward Relevant and Diverse Results

    NASA Astrophysics Data System (ADS)

    Yang, Kuiyuan; Wang, Meng; Hua, Xian-Sheng; Zhang, Hong-Jiang

    Recent years have witnessed the great success of social media websites. Tag-based image search is an important approach to accessing the image content of interest on these websites. However, the existing ranking methods for tag-based image search frequently return results that are irrelevant or lack diversity. This chapter presents a diverse relevance ranking scheme which simultaneously takes relevance and diversity into account by exploring the content of images and their associated tags. First, it estimates the relevance scores of images with respect to the query term based on both visual information of images and semantic information of associated tags. Then semantic similarities of social images are estimated based on their tags. Based on the relevance scores and the similarities, the ranking list is generated by a greedy ordering algorithm which optimizes Average Diverse Precision (ADP), a novel measure that is extended from the conventional Average Precision (AP). Comprehensive experiments and user studies demonstrate the effectiveness of the approach.
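
    A simplified greedy ordering of the kind described above can be written as below; the trade-off parameter and the redundancy term are assumptions, and this is not the chapter's exact ADP optimizer.

        import numpy as np

        def greedy_diverse_ranking(relevance, similarity, lam=0.5):
            """Rank items by relevance penalized by similarity to items already ranked."""
            n = len(relevance)
            remaining, ranking = set(range(n)), []
            while remaining:
                def score(i):
                    redundancy = max((similarity[i, j] for j in ranking), default=0.0)
                    return relevance[i] - lam * redundancy
                best = max(remaining, key=score)
                ranking.append(best)
                remaining.remove(best)
            return ranking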

  12. Direct Patlak Reconstruction From Dynamic PET Data Using the Kernel Method With MRI Information Based on Structural Similarity.

    PubMed

    Gong, Kuang; Cheng-Liao, Jinxiu; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2018-04-01

    Positron emission tomography (PET) is a functional imaging modality widely used in oncology, cardiology, and neuroscience. It is highly sensitive, but suffers from relatively poor spatial resolution, as compared with anatomical imaging modalities, such as magnetic resonance imaging (MRI). With the recent development of combined PET/MR systems, we can improve the PET image quality by incorporating MR information into image reconstruction. Previously, kernel learning has been successfully embedded into static and dynamic PET image reconstruction using either PET temporal or MRI information. Here, we combine both PET temporal and MRI information adaptively to improve the quality of direct Patlak reconstruction. We examined different approaches to combine the PET and MRI information in kernel learning to address the issue of potential mismatches between MRI and PET signals. Computer simulations and hybrid real-patient data acquired on a simultaneous PET/MR scanner were used to evaluate the proposed methods. Results show that the method that combines PET temporal information and MRI spatial information adaptively based on the structure similarity index has the best performance in terms of noise reduction and resolution improvement.

  13. Spiking Cortical Model Based Multimodal Medical Image Fusion by Combining Entropy Information with Weber Local Descriptor

    PubMed Central

    Zhang, Xuming; Ren, Jinxia; Huang, Zhiwen; Zhu, Fei

    2016-01-01

    Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lead to loss of image details or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method has been proposed in this paper. The proposed method can generate high-quality fused images using the weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. The extensive experiments on multimodal medical images show that compared with the numerous state-of-the-art MIF methods, the proposed method can preserve image details very well and avoid the introduction of artifacts effectively, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural similarity based metric, fusion quality index, fusion similarity metric and standard deviation. PMID:27649190

  14. Spiking Cortical Model Based Multimodal Medical Image Fusion by Combining Entropy Information with Weber Local Descriptor.

    PubMed

    Zhang, Xuming; Ren, Jinxia; Huang, Zhiwen; Zhu, Fei

    2016-09-15

    Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lead to loss of image details or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method has been proposed in this paper. The proposed method can generate high-quality fused images using the weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. The extensive experiments on multimodal medical images show that compared with the numerous state-of-the-art MIF methods, the proposed method can preserve image details very well and avoid the introduction of artifacts effectively, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural similarity based metric, fusion quality index, fusion similarity metric and standard deviation.

  15. Non-rigid registration between 3D ultrasound and CT images of the liver based on intensity and gradient information

    NASA Astrophysics Data System (ADS)

    Lee, Duhgoon; Nam, Woo Hyun; Lee, Jae Young; Ra, Jong Beom

    2011-01-01

    In order to utilize both ultrasound (US) and computed tomography (CT) images of the liver concurrently for medical applications such as diagnosis and image-guided intervention, non-rigid registration between these two types of images is an essential step, as local deformation between US and CT images exists due to the different respiratory phases involved and due to the probe pressure that occurs in US imaging. This paper introduces a voxel-based non-rigid registration algorithm between the 3D B-mode US and CT images of the liver. In the proposed algorithm, to improve the registration accuracy, we utilize the surface information of the liver and gallbladder in addition to the information of the vessels inside the liver. For an effective correlation between US and CT images, we treat those anatomical regions separately according to their characteristics in US and CT images. Based on a novel objective function using a 3D joint histogram of the intensity and gradient information, vessel-based non-rigid registration is followed by surface-based non-rigid registration in sequence, which improves the registration accuracy. The proposed algorithm is tested for ten clinical datasets and quantitative evaluations are conducted. Experimental results show that the registration error between anatomical features of US and CT images is less than 2 mm on average, even with local deformation due to different respiratory phases and probe pressure. In addition, the lesion registration error is less than 3 mm on average with a maximum of 4.5 mm that is considered acceptable for clinical applications.

  16. An adaptive band selection method for dimension reduction of hyper-spectral remote sensing image

    NASA Astrophysics Data System (ADS)

    Yu, Zhijie; Yu, Hui; Wang, Chen-sheng

    2014-11-01

    Hyper-spectral remote sensing data are acquired by imaging the same area at multiple wavelengths and normally consist of hundreds of band images. Hyper-spectral images provide not only spatial information but also high-resolution spectral information, and they have been widely used in environmental monitoring, mineral investigation and military reconnaissance. However, because of the correspondingly large data volume, hyper-spectral images are very difficult to transmit and store, and dimension reduction techniques are desired to resolve this problem. Because of the high correlation and high redundancy among hyper-spectral bands, dimension reduction is a feasible way to compress the data volume. This paper proposes a novel band selection-based dimension reduction method that adaptively selects the bands containing more information and detail. The proposed method is based on principal component analysis (PCA) and computes an index for every band. The obtained indexes are then ranked in descending order of magnitude, and, based on a threshold, the system adaptively and reasonably selects the bands. The proposed method overcomes the shortcomings of transform-based dimension reduction methods and prevents the original spectral information from being lost. Its performance has been validated by several experiments, whose results show that the proposed algorithm can reduce the dimensionality of hyper-spectral images with little information loss by adaptively selecting band images.
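
    A rough sketch of PCA-driven band ranking with a top-k cut-off is shown below; the paper's actual per-band index is not reproduced here, so a loading-magnitude score weighted by explained variance is assumed instead.

        import numpy as np
        from sklearn.decomposition import PCA

        def select_bands(cube, energy_keep=0.99, top_k=30):
            """Rank the bands of a (rows, cols, bands) cube and return the indices of the top_k bands."""
            pixels = cube.reshape(-1, cube.shape[-1])                 # pixels x bands
            pca = PCA().fit(pixels)
            n_pc = np.searchsorted(np.cumsum(pca.explained_variance_ratio_), energy_keep) + 1
            # assumed index: variance-weighted magnitude of each band's loadings on the kept PCs
            index = np.abs(pca.components_[:n_pc]).T @ pca.explained_variance_ratio_[:n_pc]
            return np.argsort(index)[::-1][:top_k]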

  17. Visual information mining in remote sensing image archives

    NASA Astrophysics Data System (ADS)

    Pelizzari, Andrea; Descargues, Vincent; Datcu, Mihai P.

    2002-01-01

    The present article focuses on the development of interactive exploratory tools for visually mining the image content in large remote sensing archives. Two aspects are treated: the iconic visualization of the global information in the archive and the progressive visualization of the image details. The proposed methods are integrated in the Image Information Mining (I2M) system. The images and image structure in the I2M system are indexed based on a probabilistic approach. The resulting links are managed by a relational data base. Both the intrinsic complexity of the observed images and the diversity of user requests result in a great number of associations in the data base. Thus new tools have been designed to visualize, in iconic representation the relationships created during a query or information mining operation: the visualization of the query results positioned on the geographical map, quick-looks gallery, visualization of the measure of goodness of the query, visualization of the image space for statistical evaluation purposes. Additionally the I2M system is enhanced with progressive detail visualization in order to allow better access for operator inspection. I2M is a three-tier Java architecture and is optimized for the Internet.

  18. Application of content-based image compression to telepathology

    NASA Astrophysics Data System (ADS)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.

  19. A Graph Based Interface for Representing Volume Visualization Results

    NASA Technical Reports Server (NTRS)

    Patten, James M.; Ma, Kwan-Liu

    1998-01-01

    This paper discusses a graph based user interface for representing the results of the volume visualization process. As images are rendered, they are connected to other images in a graph based on their rendering parameters. The user can take advantage of the information in this graph to understand how certain rendering parameter changes affect a dataset, making the visualization process more efficient. Because the graph contains more information than is contained in an unstructured history of images, the image graph is also helpful for collaborative visualization and animation.

  20. A novel blinding digital watermark algorithm based on lab color space

    NASA Astrophysics Data System (ADS)

    Dong, Bing-feng; Qiu, Yun-jie; Lu, Hong-tao

    2010-02-01

    A blind digital image watermarking algorithm must extract the watermark information without any extra information except the watermarked image itself. However, most current blind watermark algorithms share the same disadvantage: besides the watermarked image, they also need the size and other information about the original image when extracting the watermark. This paper presents an innovative blind color image watermark algorithm based on the Lab color space, which does not have the disadvantages mentioned above. The algorithm first marks the watermark region size and position by embedding some regular blocks, called anchor points, in the image spatial domain, and then embeds the watermark into the image. In doing so, the watermark information can easily be extracted even after the image has been cropped or rescaled. Experimental results show that the algorithm is particularly robust against color adjustment and geometric transformation. The algorithm has already been used in a copyright protection project and works very well.

  1. Significance of perceptually relevant image decolorization for scene classification

    NASA Astrophysics Data System (ADS)

    Viswanathan, Sowmya; Divakaran, Govind; Soman, Kutti Padanyl

    2017-11-01

    Color images contain luminance and chrominance components representing the intensity and color information, respectively. The objective of this paper is to show the significance of incorporating chrominance information to the task of scene classification. An improved color-to-grayscale image conversion algorithm that effectively incorporates chrominance information is proposed using the color-to-gray structure similarity index and singular value decomposition to improve the perceptual quality of the converted grayscale images. The experimental results based on an image quality assessment for image decolorization and its success rate (using the Cadik and COLOR250 datasets) show that the proposed image decolorization technique performs better than eight existing benchmark algorithms for image decolorization. In the second part of the paper, the effectiveness of incorporating the chrominance component for scene classification tasks is demonstrated using a deep belief network-based image classification system developed using dense scale-invariant feature transforms. The amount of chrominance information incorporated into the proposed image decolorization technique is confirmed with the improvement to the overall scene classification accuracy. Moreover, the overall scene classification performance improved by combining the models obtained using the proposed method and conventional decolorization methods.

  2. Web image retrieval using an effective topic and content-based technique

    NASA Astrophysics Data System (ADS)

    Lee, Ching-Cheng; Prabhakara, Rashmi

    2005-03-01

    There has been an exponential growth in the amount of image data available on the World Wide Web since the early development of the Internet. With such a large amount of useful images and information available, an effective image retrieval system is greatly needed. In this paper, we present an effective approach with both image matching and indexing techniques that improves on existing integrated image retrieval methods. The technique follows a two-phase approach integrating query-by-topic and query-by-example specification methods. In the first phase, topic-based image retrieval is performed using an improved text information retrieval (IR) technique that makes use of the structured format of HTML documents. This technique relies on a focused crawler that lets the user enter not only the keyword for the topic-based search but also the scope in which the user wants to find the images. In the second phase, we use query-by-example specification to perform a low-level content-based image match in order to retrieve a smaller set of results that more closely match the example image; the relevant image features are automatically extracted from the query image. The main objective of our approach is to develop a functional image search and indexing technique and to demonstrate that better retrieval results can be achieved.

  3. Adaptive Markov Random Fields for Example-Based Super-resolution of Faces

    NASA Astrophysics Data System (ADS)

    Stephenson, Todd A.; Chen, Tsuhan

    2006-12-01

    Image enhancement of low-resolution images can be done through methods such as interpolation, super-resolution using multiple video frames, and example-based super-resolution. Example-based super-resolution, in particular, is suited to images that have a strong prior (for those frameworks that work on only a single image, it is more like image restoration than traditional, multiframe super-resolution). For example, hallucination and Markov random field (MRF) methods use examples drawn from the same domain as the image being enhanced to determine what the missing high-frequency information is likely to be. We propose to use even stronger prior information by extending MRF-based super-resolution to use adaptive observation and transition functions, that is, to make these functions region-dependent. We show with face images how we can adapt the modeling for each image patch so as to improve the resolution.

  4. Some new classification methods for hyperspectral remote sensing

    NASA Astrophysics Data System (ADS)

    Du, Pei-jun; Chen, Yun-hao; Jones, Simon; Ferwerda, Jelle G.; Chen, Zhi-jun; Zhang, Hua-peng; Tan, Kun; Yin, Zuo-xia

    2006-10-01

    Hyperspectral Remote Sensing (HRS) is one of the most significant recent achievements of Earth observation technology, and classification is the most commonly employed processing methodology. In this paper three new HRS image classification methods are analyzed: object-oriented HRS image classification, HRS image classification based on information fusion, and HRS image classification by Back Propagation Neural Network (BPNN). An OMIS HRS image is used as the example data. Object-oriented techniques have gained popularity for RS image classification in recent years. In such methods, image segmentation is first used to extract regions from the pixel information based on homogeneity criteria; spectral parameters such as the mean vector, texture and NDVI, and spatial/shape parameters such as aspect ratio, convexity, solidity, roundness and orientation, are then calculated for each region; finally the image is classified using the region feature vectors with suitable classifiers such as an artificial neural network (ANN). This shows that object-oriented methods can improve classification accuracy, since they utilize information and features from both the pixel and its neighborhood, and the processing unit is a polygon in which all pixels are homogeneous and belong to the same class. HRS image classification based on information fusion initially divides all bands of the image into different groups and extracts features from every group according to its properties. Three levels of information fusion, data-level fusion, feature-level fusion and decision-level fusion, are used for HRS image classification. An artificial neural network can perform well in RS image classification; to promote the use of ANNs for HRS image classification, the Back Propagation Neural Network (BPNN), the most commonly used neural network, is applied.

  5. A Minimum Spanning Forest Based Method for Noninvasive Cancer Detection with Hyperspectral Imaging

    PubMed Central

    Pike, Robert; Lu, Guolan; Wang, Dongsheng; Chen, Zhuo Georgia; Fei, Baowei

    2016-01-01

    Goal: The purpose of this paper is to develop a classification method that combines both spectral and spatial information for distinguishing cancer from healthy tissue on hyperspectral images in an animal model. Methods: An automated algorithm based on a minimum spanning forest (MSF) and optimal band selection has been proposed to classify healthy and cancerous tissue on hyperspectral images. A support vector machine (SVM) classifier is trained to create a pixel-wise classification probability map of cancerous and healthy tissue. This map is then used to identify markers that are used to compute mutual information for a range of bands in the hyperspectral image and thus select the optimal bands. An MSF is finally grown to segment the image using spatial and spectral information. Conclusion: The MSF based method with automatically selected bands proved to be accurate in determining the tumor boundary on hyperspectral images. Significance: Hyperspectral imaging combined with the proposed classification technique has the potential to provide a noninvasive tool for cancer detection. PMID:26285052

  6. Speckle noise reduction in ultrasound images using a discrete wavelet transform-based image fusion technique.

    PubMed

    Choi, Hyun Ho; Lee, Ju Hwan; Kim, Sung Min; Park, Sung Yun

    2015-01-01

    Here, the speckle noise in ultrasound images is removed using an image fusion-based denoising method. To optimize the denoising performance, each discrete wavelet transform (DWT) and filtering technique was analyzed and compared, and the performances were compared in order to derive the optimal input conditions. To evaluate speckle noise removal, the image fusion algorithm was applied to ultrasound images and compared with the original images processed without the algorithm. Applying the DWT and filtering techniques alone caused information loss and residual noise, and did not give the most significant noise reduction. Conversely, an image fusion method using the SRAD-original input conditions preserved the key information in the original image while the speckle noise was removed. Based on these characteristics, the SRAD-original input conditions gave the best denoising performance for the ultrasound images, and the resulting denoising technique was confirmed to have high potential for clinical application.
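
    A minimal sketch of DWT-based fusion of a filtered image with its original counterpart is given below; the SRAD filter itself is not implemented here, and the averaging and max-abs fusion rules are assumptions rather than the study's exact settings.

        import numpy as np
        import pywt

        def dwt_fusion(img_a, img_b, wavelet='db2'):
            """Fuse two co-registered images: average the approximations, keep the stronger details."""
            (ca_a, det_a), (ca_b, det_b) = pywt.dwt2(img_a, wavelet), pywt.dwt2(img_b, wavelet)
            ca = 0.5 * (ca_a + ca_b)                                   # low-frequency band: average
            det = tuple(np.where(np.abs(da) >= np.abs(db), da, db)     # detail bands: max-abs rule
                        for da, db in zip(det_a, det_b))
            return pywt.idwt2((ca, det), wavelet)

        # fused = dwt_fusion(srad_filtered, original)   # srad_filtered assumed to come from an SRAD step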

  7. Information theoretical assessment of digital imaging systems

    NASA Technical Reports Server (NTRS)

    John, Sarah; Rahman, Zia-Ur; Huck, Friedrich O.; Reichenbach, Stephen E.

    1990-01-01

    The end-to-end performance of image gathering, coding, and restoration as a whole is considered. This approach is based on the pivotal relationship that exists between the spectral information density of the transmitted signal and the restorability of images from this signal. The information-theoretical assessment accounts for (1) the information density and efficiency of the acquired signal as a function of the image-gathering system design and the radiance-field statistics, and (2) the improvement in information efficiency and data compression that can be gained by combining image gathering with coding to reduce the signal redundancy and irrelevancy. It is concluded that images can be restored with better quality and from fewer data as the information efficiency of the data is increased. The restoration correctly explains the image gathering and coding processes and effectively suppresses the image-display degradations.

  8. Information theoretical assessment of digital imaging systems

    NASA Astrophysics Data System (ADS)

    John, Sarah; Rahman, Zia-Ur; Huck, Friedrich O.; Reichenbach, Stephen E.

    1990-10-01

    The end-to-end performance of image gathering, coding, and restoration as a whole is considered. This approach is based on the pivotal relationship that exists between the spectral information density of the transmitted signal and the restorability of images from this signal. The information-theoretical assessment accounts for (1) the information density and efficiency of the acquired signal as a function of the image-gathering system design and the radiance-field statistics, and (2) the improvement in information efficiency and data compression that can be gained by combining image gathering with coding to reduce the signal redundancy and irrelevancy. It is concluded that images can be restored with better quality and from fewer data as the information efficiency of the data is increased. The restoration correctly explains the image gathering and coding processes and effectively suppresses the image-display degradations.

  9. A concept-based interactive biomedical image retrieval approach using visualness and spatial information

    NASA Astrophysics Data System (ADS)

    Rahman, Md M.; Antani, Sameer K.; Demner-Fushman, Dina; Thoma, George R.

    2015-03-01

    This paper presents a novel approach to biomedical image retrieval that maps image regions to local concepts and represents images in a weighted entropy-based concept feature space. The term concept refers to perceptually distinguishable visual patches that are identified locally in image regions and can be mapped to a glossary of imaging terms. The visual significance (e.g., visualness) of a concept is measured as the Shannon entropy of pixel values in image patches and is used to refine the feature vector. Moreover, the system can assist the user in interactively selecting a Region-Of-Interest (ROI) and searching for similar image ROIs. Finally, a spatial verification step is used as post-processing to improve retrieval results based on location information. The hypothesis that these approaches improve biomedical image retrieval is validated through experiments on a data set of 450 lung CT images extracted from journal articles from four different collections.
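
    The visualness measure described above reduces to the Shannon entropy of a patch's intensity distribution; a minimal sketch (the bin count and the normalization in the comment are assumptions) is:

        import numpy as np

        def patch_entropy(patch, bins=32):
            """Shannon entropy (in bits) of the pixel-value distribution of an image patch."""
            hist, _ = np.histogram(patch.ravel(), bins=bins)
            p = hist / hist.sum()
            p = p[p > 0]
            return float(-np.sum(p * np.log2(p)))

        # e.g. weight a concept by its normalized patch entropy:
        # weight = patch_entropy(patch) / np.log2(32)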

  10. Robust active contour via additive local and global intensity information based on local entropy

    NASA Astrophysics Data System (ADS)

    Yuan, Shuai; Monkam, Patrice; Zhang, Feng; Luan, Fangjun; Koomson, Ben Alfred

    2018-01-01

    Active contour-based image segmentation can be a very challenging task due to many factors such as high intensity inhomogeneity, presence of noise, complex shapes, objects with weak boundaries, and dependence on the position of the initial contour. We propose a level set-based active contour method to segment complex-shaped objects from images corrupted by noise and high intensity inhomogeneity. The energy function of the proposed method results from combining global intensity information and local intensity information with some regularization factors. First, the global intensity term is proposed based on a formulation that considers two intensity values for each region instead of one, which outperforms the well-known Chan-Vese model in delineating the image information. Second, the local intensity term is formulated based on local entropy computed from the distribution of the image brightness, using the generalized Gaussian distribution as the kernel function. Therefore, it can accurately handle high intensity inhomogeneity and noise. Moreover, our model does not depend on the position occupied by the initial curve. Finally, extensive experiments using various images have been carried out to illustrate the performance of the proposed method.

  11. A rapid extraction of landslide disaster information research based on GF-1 image

    NASA Astrophysics Data System (ADS)

    Wang, Sai; Xu, Suning; Peng, Ling; Wang, Zhiyi; Wang, Na

    2015-08-01

    In recent years, landslide disasters have occurred frequently because of seismic activity. They cause great harm to people's lives and have drawn high attention from the state and extensive concern from society. In the field of geological disasters, landslide information extraction based on remote sensing has been controversial, but high-resolution remote sensing images, with their rich texture and geometric information, can effectively improve the accuracy of information extraction. It is therefore feasible to extract information on large-scale, earthquake-triggered landslides with serious surface damage. Taking Wenchuan county as the study area, this paper uses a multi-scale segmentation method on domestic GF-1 images and DEM data to extract landslide image objects, with the estimation-of-scale-parameter tool used to determine the optimal segmentation scale. After comprehensively analyzing the characteristics of landslides in high-resolution images and selecting spectral, texture, geometric and landform features, extraction rules for landslide disaster information are established. The extraction results show 20 landslides with a total area of 521279.31. Compared with visual interpretation results, the extraction accuracy is 72.22%. This study indicates that it is efficient and feasible to extract earthquake landslide disaster information based on high-resolution remote sensing, which provides important technical support for post-disaster emergency investigation and disaster assessment.

  12. Linear information retrieval method in X-ray grating-based phase contrast imaging and its interchangeability with tomographic reconstruction

    NASA Astrophysics Data System (ADS)

    Wu, Z.; Gao, K.; Wang, Z. L.; Shao, Q. G.; Hu, R. F.; Wei, C. X.; Zan, G. B.; Wali, F.; Luo, R. H.; Zhu, P. P.; Tian, Y. C.

    2017-06-01

    In X-ray grating-based phase contrast imaging, information retrieval is necessary for quantitative research, especially for phase tomography. However, numerous and repetitive processes have to be performed for tomographic reconstruction. In this paper, we report a novel information retrieval method, which enables retrieving phase and absorption information by means of a linear combination of two mutually conjugate images. Thanks to the distributive law of the multiplication as well as the commutative law and associative law of the addition, the information retrieval can be performed after tomographic reconstruction, thus simplifying the information retrieval procedure dramatically. The theoretical model of this method is established in both parallel beam geometry for Talbot interferometer and fan beam geometry for Talbot-Lau interferometer. Numerical experiments are also performed to confirm the feasibility and validity of the proposed method. In addition, we discuss its possibility in cone beam geometry and its advantages compared with other methods. Moreover, this method can also be employed in other differential phase contrast imaging methods, such as diffraction enhanced imaging, non-interferometric imaging, and edge illumination.

  13. Solution of the problem of superposing image and digital map for detection of new objects

    NASA Astrophysics Data System (ADS)

    Rizaev, I. S.; Miftakhutdinov, D. I.; Takhavova, E. G.

    2018-01-01

    The problem of superposing a terrain map onto an image of the same terrain is considered; the terrain image may be represented in different frequency bands. Further analysis of the results of collating the digital map with the image of the corresponding terrain is described. An approach to detecting differences between the information represented on the digital map and the information in the image of the corresponding area is also offered, along with an algorithm for calculating the brightness values of the converted image area on the original picture. The calculation is based on information about the navigation parameters and on arranged benchmarks. Experiments were performed to solve the posed problem, and their results are shown in this paper. The presented algorithms are applicable in ground-based remote sensing data processing complexes to assess differences between the resulting images and accurate geopositional data. They are also suitable for detecting new objects in the image, based on analysis of the match between the digital map and the image of the corresponding locality.

  14. Photoacoustic phasoscopy super-contrast imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Fei; Feng, Xiaohua; Zheng, Yuanjin, E-mail: yjzheng@ntu.edu.sg

    2014-05-26

    Phasoscopy is a recently proposed concept correlating electromagnetic (EM) absorption and scattering properties based on energy conservation. Phase information can be extracted from the EM-absorption-induced acoustic wave and the scattered EM wave for biological tissue characterization. In this paper, an imaging modality termed photoacoustic phasoscopy imaging (PAPS) is proposed and verified experimentally based on the phasoscopy concept with laser illumination. Both the endogenous photoacoustic wave and the scattered photons are collected simultaneously to extract the phase information. PAPS images are then reconstructed for a vessel-mimicking phantom and ex vivo porcine tissues, showing significantly improved contrast compared with conventional photoacoustic imaging.

  15. Stability of cooperation under image scoring in group interactions.

    PubMed

    Nax, Heinrich H; Perc, Matjaž; Szolnoki, Attila; Helbing, Dirk

    2015-07-15

    Image scoring sustains cooperation in the repeated two-player prisoner's dilemma through indirect reciprocity, even though defection is the uniquely dominant selfish behaviour in the one-shot game. Many real-world dilemma situations, however, firstly, take place in groups and, secondly, lack the necessary transparency to inform subjects reliably of others' individual past actions. Instead, there is revelation of information regarding groups, which allows for 'group scoring' but not for image scoring. Here, we study how sensitive the positive results related to image scoring are to information based on group scoring. We combine analytic results and computer simulations to specify the conditions for the emergence of cooperation. We show that under pure group scoring, that is, under the complete absence of image-scoring information, cooperation is unsustainable. Away from this extreme case, however, the necessary degree of image scoring relative to group scoring depends on the population size and is generally very small. We thus conclude that the positive results based on image scoring apply to a much broader range of informational settings that are relevant in the real world than previously assumed.

  16. Stability of cooperation under image scoring in group interactions

    NASA Astrophysics Data System (ADS)

    Nax, Heinrich H.; Perc, Matjaž; Szolnoki, Attila; Helbing, Dirk

    2015-07-01

    Image scoring sustains cooperation in the repeated two-player prisoner’s dilemma through indirect reciprocity, even though defection is the uniquely dominant selfish behaviour in the one-shot game. Many real-world dilemma situations, however, firstly, take place in groups and, secondly, lack the necessary transparency to inform subjects reliably of others’ individual past actions. Instead, there is revelation of information regarding groups, which allows for ‘group scoring’ but not for image scoring. Here, we study how sensitive the positive results related to image scoring are to information based on group scoring. We combine analytic results and computer simulations to specify the conditions for the emergence of cooperation. We show that under pure group scoring, that is, under the complete absence of image-scoring information, cooperation is unsustainable. Away from this extreme case, however, the necessary degree of image scoring relative to group scoring depends on the population size and is generally very small. We thus conclude that the positive results based on image scoring apply to a much broader range of informational settings that are relevant in the real world than previously assumed.

  17. A new Watermarking System based on Discrete Cosine Transform (DCT) in color biometric images.

    PubMed

    Dogan, Sengul; Tuncer, Turker; Avci, Engin; Gulten, Arif

    2012-08-01

    This paper recommends a biometric color image hiding approach, a watermarking system based on the Discrete Cosine Transform (DCT), which is used to protect the security and integrity of transmitted biometric color images. Watermarking is a very important information hiding technique for audio, video, color images and gray images, and it has been commonly applied to digital objects as the technology has developed in the last few years. One of the common methods used for hiding information in image files is the DCT method, which operates in the frequency domain. In this study, DCT methods are used to embed watermark data into face images without corrupting their features.
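
    A generic block-DCT embedding step of the kind alluded to above can be sketched as follows; the chosen coefficient, the strength and the sign-based rule are illustrative assumptions, not the paper's embedding scheme.

        import numpy as np
        from scipy.fft import dctn, idctn

        def embed_bit(block, bit, coeff=(3, 2), strength=8.0):
            """Embed one watermark bit by forcing the sign of a mid-frequency DCT coefficient of an 8x8 block."""
            d = dctn(block.astype(float), norm='ortho')
            d[coeff] = strength if bit else -strength
            return idctn(d, norm='ortho')

        def extract_bit(block, coeff=(3, 2)):
            """Recover the embedded bit from the sign of the same coefficient."""
            return dctn(block.astype(float), norm='ortho')[coeff] > 0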

  18. Supervised guiding long-short term memory for image caption generation based on object classes

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Cao, Zhiguo; Xiao, Yang; Qi, Xinyuan

    2018-03-01

    The present models of image caption generation have the problems of image visual semantic information attenuation and errors in guidance information. In order to solve these problems, we propose a supervised guiding Long Short Term Memory model based on object classes, named S-gLSTM for short. It uses the object detection results from R-FCN as supervisory information with high confidence, and updates the guidance word set by judging whether the last output matches the supervisory information. S-gLSTM learns how to extract the currently relevant information from the image visual semantic information based on the guidance word set. This information is fed into the S-gLSTM at each iteration as guidance information to guide the caption generation. To acquire the text-related visual semantic information, the S-gLSTM fine-tunes the weights of the network through the back-propagation of the guiding loss. Complementing the guidance information at each iteration solves the problem of visual semantic information attenuation in the traditional LSTM model. Besides, the supervised guidance information in our model can reduce the impact of mismatched words on the caption generation. We test our model on the MSCOCO2014 dataset and obtain better performance than the state-of-the-art models.

  19. Image retrieval by information fusion based on scalable vocabulary tree and robust Hausdorff distance

    NASA Astrophysics Data System (ADS)

    Che, Chang; Yu, Xiaoyang; Sun, Xiaoming; Yu, Boyang

    2017-12-01

    In recent years, the Scalable Vocabulary Tree (SVT) has been shown to be effective in image retrieval. However, for general images where the foreground is the object to be recognized while the background is cluttered, the performance of the current SVT framework is restricted. In this paper, a new image retrieval framework that incorporates a robust distance metric and information fusion is proposed, which improves the retrieval performance relative to the baseline SVT approach. First, the visual words that represent the background are diminished by using a robust Hausdorff distance between different images. Second, image matching results based on three image signature representations are fused, which enhances the retrieval precision. We conducted intensive experiments on small-scale to large-scale image datasets: Corel-9, Corel-48, and PKU-198, where the proposed Hausdorff metric and information fusion outperform the state-of-the-art methods by about 13, 15, and 15%, respectively.
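
    A rank-based (partial) Hausdorff distance of the kind used to suppress background visual words can be sketched as below; the quantile value is an assumption.

        import numpy as np
        from scipy.spatial.distance import cdist

        def robust_hausdorff(pts_a, pts_b, quantile=0.8):
            """Quantile-based Hausdorff distance between two descriptor/point sets, resistant to outliers."""
            d = cdist(pts_a, pts_b)                        # pairwise distances
            fwd = np.quantile(d.min(axis=1), quantile)     # a -> b directed distance at the chosen rank
            bwd = np.quantile(d.min(axis=0), quantile)     # b -> a directed distance
            return max(fwd, bwd)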

  20. Volumetric three-dimensional intravascular ultrasound visualization using shape-based nonlinear interpolation

    PubMed Central

    2013-01-01

    Background: Intravascular ultrasound (IVUS) is a standard imaging modality for identification of plaque formation in the coronary and peripheral arteries. Volumetric three-dimensional (3D) IVUS visualization provides a powerful tool to overcome the limited comprehensive information of 2D IVUS in terms of complex spatial distribution of arterial morphology and acoustic backscatter information. Conventional 3D IVUS techniques provide sub-optimal visualization of arterial morphology or lack acoustic information concerning arterial structure due in part to low quality of image data and the use of pixel-based IVUS image reconstruction algorithms. In the present study, we describe a novel volumetric 3D IVUS reconstruction algorithm to utilize IVUS signal data and a shape-based nonlinear interpolation. Methods: We developed an algorithm to convert a series of IVUS signal data into a fully volumetric 3D visualization. Intermediary slices between original 2D IVUS slices were generated utilizing the natural cubic spline interpolation to consider the nonlinearity of both vascular structure geometry and acoustic backscatter in the arterial wall. We evaluated differences in image quality between the conventional pixel-based interpolation and the shape-based nonlinear interpolation methods using both virtual vascular phantom data and in vivo IVUS data of a porcine femoral artery. Volumetric 3D IVUS images of the arterial segment reconstructed using the two interpolation methods were compared. Results: In vitro validation and in vivo comparative studies with the conventional pixel-based interpolation method demonstrated more robustness of the shape-based nonlinear interpolation algorithm in determining intermediary 2D IVUS slices. Our shape-based nonlinear interpolation demonstrated improved volumetric 3D visualization of the in vivo arterial structure and more realistic acoustic backscatter distribution compared to the conventional pixel-based interpolation method. Conclusions: This novel 3D IVUS visualization strategy has the potential to improve ultrasound imaging of vascular structure information, particularly atheroma determination. Improved volumetric 3D visualization with accurate acoustic backscatter information can help with ultrasound molecular imaging of atheroma component distribution. PMID:23651569
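
    As a much-simplified illustration of the interpolation step (per-pixel along the pullback axis, rather than the paper's shape-based formulation), intermediary slices can be generated with a natural cubic spline; the up-sampling factor is an assumption.

        import numpy as np
        from scipy.interpolate import CubicSpline

        def interpolate_slices(stack, factor=4):
            """Insert intermediary slices along axis 0 of a (slices, rows, cols) IVUS stack."""
            z = np.arange(stack.shape[0])
            spline = CubicSpline(z, stack, axis=0, bc_type='natural')   # natural cubic spline per pixel
            z_fine = np.linspace(z[0], z[-1], (stack.shape[0] - 1) * factor + 1)
            return spline(z_fine)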

  1. Brain medical image diagnosis based on corners with importance-values.

    PubMed

    Gao, Linlin; Pan, Haiwei; Li, Qing; Xie, Xiaoqin; Zhang, Zhiqiang; Han, Jinming; Zhai, Xiao

    2017-11-21

    Brain disorders are one of the top causes of human death. Generally, neurologists analyze brain medical images for diagnosis. In the image analysis field, corners are one of the most important features, which makes corner detection and matching studies essential. However, existing corner detection studies do not consider the domain information of brain. This leads to many useless corners and the loss of significant information. Regarding corner matching, the uncertainty and structure of brain are not employed in existing methods. Moreover, most corner matching studies are used for 3D image registration. They are inapplicable for 2D brain image diagnosis because of the different mechanisms. To address these problems, we propose a novel corner-based brain medical image classification method. Specifically, we automatically extract multilayer texture images (MTIs) which embody diagnostic information from neurologists. Moreover, we present a corner matching method utilizing the uncertainty and structure of brain medical images and a bipartite graph model. Finally, we propose a similarity calculation method for diagnosis. Brain CT and MRI image sets are utilized to evaluate the proposed method. First, classifiers are trained in N-fold cross-validation analysis to produce the best θ and K. Then independent brain image sets are tested to evaluate the classifiers. Moreover, the classifiers are also compared with advanced brain image classification studies. For the brain CT image set, the proposed classifier outperforms the comparison methods by at least 8% on accuracy and 2.4% on F1-score. Regarding the brain MRI image set, the proposed classifier is superior to the comparison methods by more than 7.3% on accuracy and 4.9% on F1-score. Results also demonstrate that the proposed method is robust to different intensity ranges of brain medical image. In this study, we develop a robust corner-based brain medical image classifier. Specifically, we propose a corner detection method utilizing the diagnostic information from neurologists and a corner matching method based on the uncertainty and structure of brain medical images. Additionally, we present a similarity calculation method for brain image classification. Experimental results on two brain image sets show the proposed corner-based brain medical image classifier outperforms the state-of-the-art studies.
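
    The bipartite matching step can be illustrated with a generic sketch: Shi-Tomasi corners (OpenCV goodFeaturesToTrack) described by small intensity patches and paired by a minimum-cost assignment. This shows only the generic bipartite idea, assuming OpenCV and SciPy are available; the paper's importance-value corner detector and its uncertainty- and structure-based matching costs are not reproduced.

```python
import numpy as np
import cv2
from scipy.optimize import linear_sum_assignment

def corner_descriptors(gray, max_corners=50, r=4):
    """Detect corners and describe each one by its surrounding (2r+1)x(2r+1) patch."""
    pts = cv2.goodFeaturesToTrack(gray, max_corners, 0.01, 10)
    pts = pts.reshape(-1, 2).astype(int) if pts is not None else np.empty((0, 2), int)
    padded = np.pad(gray.astype(float), r, mode='edge')
    desc = np.array([padded[y:y + 2 * r + 1, x:x + 2 * r + 1].ravel() for x, y in pts])
    return pts, desc

def match_corners(gray_a, gray_b):
    """Match corners between two images as a minimum-cost bipartite assignment."""
    pa, da = corner_descriptors(gray_a)
    pb, db = corner_descriptors(gray_b)
    cost = np.linalg.norm(da[:, None, :] - db[None, :, :], axis=2)  # pairwise distances
    rows, cols = linear_sum_assignment(cost)                        # Hungarian algorithm
    return [(tuple(pa[i]), tuple(pb[j])) for i, j in zip(rows, cols)]
```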

  2. Multi-viewpoint Image Array Virtual Viewpoint Rapid Generation Algorithm Based on Image Layering

    NASA Astrophysics Data System (ADS)

    Jiang, Lu; Piao, Yan

    2018-04-01

The use of a multi-view image array combined with virtual viewpoint generation technology to record 3D scene information in large scenes has become one of the key technologies for the development of integrated imaging. This paper presents a virtual viewpoint rendering method based on an image layering algorithm. Firstly, the depth information of the reference viewpoint image is quickly obtained, with SAD chosen as the similarity measure function. Then the reference image is layered and the parallax of each layer is calculated from the depth information. Through the relative distance between the virtual viewpoint and the reference viewpoint, the image layers are weighted and panned. Finally the virtual viewpoint image is rendered layer by layer according to the distance between the image layers and the viewer. This method avoids the disadvantages of the DIBR algorithm, such as the high-precision requirements on the depth map and complex mapping operations. Experiments show that this algorithm can synthesize virtual viewpoints at any position within a 2×2 viewpoint range, and the rendering speed is high. On average, the method achieves satisfactory image quality: relative to real viewpoint images, the SSIM value of the results reaches 0.9525, the PSNR reaches 38.353 dB, and the image histogram similarity reaches 93.77%.
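
    A brute-force sketch of SAD-based disparity estimation between a reference view and a neighbouring view, assuming rectified grayscale images; block size and disparity range are illustrative, and the layering and weighted-panning steps of the paper are not reproduced.

```python
import numpy as np

def sad_disparity(left, right, block=7, max_disp=32):
    """Per-pixel disparity chosen by minimising the Sum of Absolute Differences
    (SAD) over horizontal shifts between two rectified viewpoint images."""
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    L, R = left.astype(np.float32), right.astype(np.float32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            ref = L[y - r:y + r + 1, x - r:x + r + 1]
            costs = [np.abs(ref - R[y - r:y + r + 1, x - d - r:x - d + r + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))        # disparity with minimum SAD
    return disp
```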

  3. Scheme of Optical Image Encryption with Digital Information Input and Dynamic Encryption Key based on Two LC SLMs

    NASA Astrophysics Data System (ADS)

    Bondareva, A. P.; Cheremkhin, P. A.; Evtikhiev, N. N.; Krasnov, V. V.; Starikov, S. N.

A scheme of optical image encryption with digital information input and a dynamic encryption key, based on two liquid crystal spatial light modulators and operating with spatially incoherent monochromatic illumination, is experimentally implemented. Results of experiments on optical encryption and numerical decryption of images are presented. A satisfactory decryption error of 0.20–0.27 is achieved.

  4. Magnetic resonance imaging based functional imaging in paediatric oncology.

    PubMed

    Manias, Karen A; Gill, Simrandip K; MacPherson, Lesley; Foster, Katharine; Oates, Adam; Peet, Andrew C

    2017-02-01

Imaging is central to the management of solid tumours in children. Conventional magnetic resonance imaging (MRI) is the standard imaging modality for tumours of the central nervous system (CNS) and limbs and is increasingly used in the abdomen. It provides excellent structural detail, but imparts limited information about tumour type, aggressiveness, metastatic potential or early treatment response. MRI-based functional imaging techniques, such as magnetic resonance spectroscopy, diffusion and perfusion weighted imaging, probe tissue properties to provide clinically important information about metabolites, structure and blood flow. This review describes the role of and evidence behind these functional imaging techniques in paediatric oncology and the implications for integrating them into routine clinical practice.

  5. A multimodal imaging platform with integrated simultaneous photoacoustic microscopy, optical coherence tomography, optical Doppler tomography and fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Dadkhah, Arash; Zhou, Jun; Yeasmin, Nusrat; Jiao, Shuliang

    2018-02-01

Various optical imaging modalities with different optical contrast mechanisms have been developed over the past years. Although most of these imaging techniques are used in many biomedical applications and research areas, integration of these techniques will allow researchers to reach the full potential of these technologies. Nevertheless, combining different imaging techniques is always challenging due to differences in the optical and hardware requirements of different imaging systems. Here, we developed a multimodal optical imaging system capable of providing comprehensive structural, functional and molecular information of living tissue at the micrometer scale. This imaging system integrates photoacoustic microscopy (PAM), optical coherence tomography (OCT), optical Doppler tomography (ODT) and fluorescence microscopy in one platform. Optical-resolution PAM (OR-PAM) provides absorption-based imaging of biological tissues. Spectral domain OCT provides structural information based on the scattering properties of the biological sample with no need for exogenous contrast agents. In addition, ODT is a functional extension of OCT capable of measuring and visualizing blood flow based on the Doppler effect. Fluorescence microscopy reveals molecular information of biological tissue using autofluorescence or exogenous fluorophores. In-vivo as well as ex-vivo imaging studies demonstrated the capability of our multimodal imaging system to provide comprehensive microscopic information on biological tissues. Integrating all the aforementioned imaging modalities for simultaneous multimodal imaging has promising potential for preclinical research and clinical practice in the near future.

  6. A novel scatter-matrix eigenvalues-based total variation (SMETV) regularization for medical image restoration

    NASA Astrophysics Data System (ADS)

    Huang, Zhenghua; Zhang, Tianxu; Deng, Lihua; Fang, Hao; Li, Qian

    2015-12-01

Total variation (TV) regularization has proven to be a popular and effective model for image restoration because of its edge-preserving ability. However, because TV favors a piecewise-constant solution, flat regions of the processed image easily exhibit "staircase effects" and the amplitude of edges is underestimated; the underlying cause is that the regularization parameter cannot adapt to the spatially local information of the image. In this paper, we propose a novel scatter-matrix eigenvalues-based TV (SMETV) regularization with an image blind restoration algorithm for deblurring medical images. The spatial information in different image regions is incorporated into the regularization by using an edge indicator called the difference eigenvalue to distinguish edges from flat areas. The proposed algorithm can effectively reduce the noise in flat regions as well as preserve edge and detailed information. Moreover, it is more robust to changes of the regularization parameter. Extensive experiments demonstrate that the proposed approach produces results superior to most methods in both visual image quality and quantitative measures.
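
    For reference, a sketch of the baseline TV model that the paper builds on, using gradient descent on a smoothed TV energy with a single global regularization parameter lam; the spatially adaptive, scatter-matrix-eigenvalue weighting that defines SMETV is the paper's contribution and is not reproduced here.

```python
import numpy as np

def tv_denoise(img, lam=0.1, n_iter=100, eps=1e-3, step=0.2):
    """Smoothed total-variation denoising by gradient descent on
    0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps^2)."""
    f = img.astype(float)
    u = f.copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u               # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - f) - lam * div)             # gradient step
    return u
```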

  7. Land Cover Analysis by Using Pixel-Based and Object-Based Image Classification Method in Bogor

    NASA Astrophysics Data System (ADS)

    Amalisana, Birohmatin; Rokhmatullah; Hernina, Revi

    2017-12-01

The advantage of image classification is that it provides earth's surface information, such as landcover and its time-series changes. Nowadays, pixel-based image classification is commonly performed with a variety of algorithms such as minimum distance, parallelepiped, maximum likelihood, and Mahalanobis distance. On the other hand, landcover classification can also be obtained by object-based image classification, in which image segmentation is driven by parameters such as scale, form, colour, smoothness and compactness. This research aims to compare the landcover classification results and change detection between the parallelepiped pixel-based and the object-based classification methods. The study area is Bogor, observed over a 20-year period from 1996 until 2016. This region is a well-known urban area that changes continuously due to rapid development, so its time-series landcover information is of particular interest.

  8. Tracking target objects orbiting earth using satellite-based telescopes

    DOEpatents

    De Vries, Willem H; Olivier, Scot S; Pertica, Alexander J

    2014-10-14

    A system for tracking objects that are in earth orbit via a constellation or network of satellites having imaging devices is provided. An object tracking system includes a ground controller and, for each satellite in the constellation, an onboard controller. The ground controller receives ephemeris information for a target object and directs that ephemeris information be transmitted to the satellites. Each onboard controller receives ephemeris information for a target object, collects images of the target object based on the expected location of the target object at an expected time, identifies actual locations of the target object from the collected images, and identifies a next expected location at a next expected time based on the identified actual locations of the target object. The onboard controller processes the collected image to identify the actual location of the target object and transmits the actual location information to the ground controller.

  9. Change detection from synthetic aperture radar images based on neighborhood-based ratio and extreme learning machine

    NASA Astrophysics Data System (ADS)

    Gao, Feng; Dong, Junyu; Li, Bo; Xu, Qizhi; Xie, Cui

    2016-10-01

Change detection is of high practical value to hazard assessment, crop growth monitoring, and urban sprawl detection. A synthetic aperture radar (SAR) image is an ideal information source for performing change detection since it is independent of atmospheric and sunlight conditions. Existing SAR image change detection methods usually generate a difference image (DI) first and use clustering methods to classify the pixels of the DI into changed and unchanged classes. Some useful information may get lost in the DI generation process. This paper proposes an SAR image change detection method based on the neighborhood-based ratio (NR) and extreme learning machine (ELM). The NR operator is utilized to obtain pixels of interest that have a high probability of being changed or unchanged. Then, image patches centered at these pixels are generated, and an ELM is trained on these patches. Finally, pixels in both original SAR images are classified by the pretrained ELM model. The preclassification result and the ELM classification result are combined to form the final change map. The experimental results obtained on three real SAR image datasets and one simulated dataset show that the proposed method is robust to speckle noise and is effective in detecting change information among multitemporal SAR images.
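
    A compact sketch of the ELM classifier used in the second stage, assuming the training patches are flattened into row vectors with 0/1 change labels; hidden weights are random and output weights come from a pseudo-inverse, which is the defining property of an ELM. Hidden-layer size and threshold are illustrative.

```python
import numpy as np

def train_elm(X, y, n_hidden=100, seed=0):
    """Extreme Learning Machine: random hidden layer + least-squares output weights.
    X: (n_samples, n_features) patch vectors, y: (n_samples,) labels in {0, 1}."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                            # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                      # Moore-Penrose pseudo-inverse
    return W, b, beta

def predict_elm(X, W, b, beta, threshold=0.5):
    """Label each patch as changed (1) or unchanged (0)."""
    return (np.tanh(X @ W + b) @ beta > threshold).astype(int)
```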

  10. TOF-SIMS imaging technique with information entropy

    NASA Astrophysics Data System (ADS)

    Aoyagi, Satoka; Kawashima, Y.; Kudo, Masahiro

    2005-05-01

Time-of-flight secondary ion mass spectrometry (TOF-SIMS) is, in principle, capable of chemical imaging of proteins on insulated samples. However, selecting the specific peaks related to a particular protein, which are necessary for chemical imaging, out of numerous candidates had been difficult without an appropriate spectrum analysis technique. Therefore multivariate analysis techniques, such as principal component analysis (PCA), and analysis with mutual information as defined by information theory, have been applied to interpret SIMS spectra of protein samples. In this study mutual information was applied to select specific peaks related to proteins in order to obtain chemical images. Proteins on insulated materials were measured with TOF-SIMS and the SIMS spectra were then analyzed by means of a comparison based on mutual information. A chemical map of each protein was obtained using the specific peaks selected according to their mutual information values. The resulting TOF-SIMS images of proteins on the materials provide useful information on protein adsorption properties, the optimality of immobilization processes and reactions between proteins. Thus chemical images of proteins by TOF-SIMS contribute to understanding the interactions between material surfaces and proteins and to developing sophisticated biomaterials.
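
    A small sketch of mutual-information-based peak selection, assuming a peak-intensity matrix (one row per spectrum or pixel) and a protein label per row; the histogram binning and top-k rule are illustrative choices rather than the paper's exact procedure.

```python
import numpy as np

def mutual_information(peak_intensity, label, bins=16):
    """Estimate MI between one peak's intensity across spectra and a protein label."""
    joint, _, _ = np.histogram2d(peak_intensity, label, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def rank_peaks(spectra, labels, top_k=10):
    """spectra: (n_spectra, n_peaks) intensities; labels: (n_spectra,) protein class."""
    mi = np.array([mutual_information(spectra[:, j], labels)
                   for j in range(spectra.shape[1])])
    return np.argsort(mi)[::-1][:top_k]               # most informative peaks first
```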

  11. Display system for imaging scientific telemetric information

    NASA Technical Reports Server (NTRS)

    Zabiyakin, G. I.; Rykovanov, S. N.

    1979-01-01

A system for imaging scientific telemetric information, based on the M-6000 minicomputer and the SIGD graphic display, is described. It provides two-dimensional graphic display of telemetric information and interaction with the computer for the analysis and processing of telemetric parameters displayed on the screen. The running parameter information output method is presented. User capabilities in the analysis and processing of telemetric information imaged on the display screen, and the user language, are discussed and illustrated.

  12. Optical information authentication using compressed double-random-phase-encoded images and quick-response codes.

    PubMed

    Wang, Xiaogang; Chen, Wen; Chen, Xudong

    2015-03-09

    In this paper, we develop a new optical information authentication system based on compressed double-random-phase-encoded images and quick-response (QR) codes, where the parameters of optical lightwave are used as keys for optical decryption and the QR code is a key for verification. An input image attached with QR code is first optically encoded in a simplified double random phase encoding (DRPE) scheme without using interferometric setup. From the single encoded intensity pattern recorded by a CCD camera, a compressed double-random-phase-encoded image, i.e., the sparse phase distribution used for optical decryption, is generated by using an iterative phase retrieval technique with QR code. We compare this technique to the other two methods proposed in literature, i.e., Fresnel domain information authentication based on the classical DRPE with holographic technique and information authentication based on DRPE and phase retrieval algorithm. Simulation results show that QR codes are effective on improving the security and data sparsity of optical information encryption and authentication system.

  13. Multistage morphological segmentation of bright-field and fluorescent microscopy images

    NASA Astrophysics Data System (ADS)

    Korzyńska, A.; Iwanowski, M.

    2012-06-01

This paper describes the multistage morphological segmentation method (MSMA) for microscopic cell images. The proposed method enables the study of cell behaviour by using a sequence of two types of microscopic images: bright field images and/or fluorescent images. The proposed method is based on two types of information: the cell texture coming from the bright field images and the intensity of light emission produced by fluorescent markers. The method is dedicated to the segmentation of image sequences and is based on mathematical morphology methods supported by other image processing techniques. It detects cells in an image regardless of their degree of flattening and of the presence of structures that produce texture, making use of synergistic information from the fluorescent light emission image as supporting information. The MSMA method has been applied to images acquired during experiments on neural stem cells as well as to artificial images. In order to validate the method, two types of errors have been considered: the error of cell area detection and the error of cell position, using artificial images as the "gold standard".

  14. Large scale track analysis for wide area motion imagery surveillance

    NASA Astrophysics Data System (ADS)

    van Leeuwen, C. J.; van Huis, J. R.; Baan, J.

    2016-10-01

Wide Area Motion Imagery (WAMI) enables image based surveillance of areas that can cover multiple square kilometers. Interpreting and analyzing information from such sources becomes increasingly time consuming as more data is added from newly developed methods for information extraction. Captured from a moving Unmanned Aerial Vehicle (UAV), the high-resolution images allow detection and tracking of moving vehicles, but this is a highly challenging task. By using a chain of computer vision detectors and machine learning techniques, we are capable of producing high quality track information for more than 40 thousand vehicles per five minutes. When faced with such a vast number of vehicular tracks, it is useful for analysts to be able to quickly query information based on region of interest, color, maneuvers or other high-level types of information, to gain insight and find relevant activities in the flood of information. In this paper we propose a set of tools, combined in a graphical user interface, which allows data analysts to survey vehicles in a large observed area. In order to retrieve (parts of) images from the high-resolution data, we developed a multi-scale tile-based video file format that allows us to quickly obtain only a part, or a sub-sampling, of the original high resolution image. By storing tiles of a still image according to a predefined order, we can quickly retrieve a particular region of the image at any relevant scale by skipping to the correct frames and reconstructing the image. Location based queries allow a user to select tracks around a particular region of interest such as a landmark, building or street. By using an integrated search engine, users can quickly select tracks that are in the vicinity of locations of interest. Another time-reducing method when searching for a particular vehicle is to filter on color or color intensity. Automatic maneuver detection adds information to the tracks that can be used to find vehicles based on their behavior.
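
    The tile-based retrieval idea can be illustrated with a generic multi-scale tile addressing function; the tile size, pyramid-level convention and coordinate layout here are assumptions for illustration, not the authors' file format.

```python
def tiles_for_region(x, y, w, h, level, tile=256):
    """Return the (col, row) tile indices needed to reconstruct a region of
    interest at a given pyramid level (level 0 = full resolution, each level
    halves the resolution). Coordinates are given at full resolution."""
    scale = 2 ** level
    x0, y0 = x // scale, y // scale                   # region in level coordinates
    x1, y1 = (x + w - 1) // scale, (y + h - 1) // scale
    return [(cx, cy)
            for cy in range(y0 // tile, y1 // tile + 1)
            for cx in range(x0 // tile, x1 // tile + 1)]

# Example: a 1000x800 region around a landmark, at quarter resolution (level 2)
print(tiles_for_region(12000, 9000, 1000, 800, level=2))
```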

  15. Study on the key technology of optical encryption based on compressive ghost imaging with double random-phase encoding

    NASA Astrophysics Data System (ADS)

    Zhang, Leihong; Pan, Zilan; Liang, Dong; Ma, Xiuhua; Zhang, Dawei

    2015-12-01

An optical encryption method based on compressive ghost imaging (CGI) with double random-phase encoding (DRPE), named DRPE-CGI, is proposed. The information is first encrypted by the sender with DRPE; the DRPE-coded image is then encrypted by the computational ghost imaging system with a secret key. The key of N random-phase vectors is generated by the sender and is shared with the receiver, who is the authorized user. The receiver decrypts the DRPE-coded image with the key, with the aid of CGI and a compressive sensing technique, and then reconstructs the original information by DRPE decoding. The experiments suggest that cryptanalysts cannot obtain any useful information about the original image even if they eavesdrop on 60% of the key at a given time, so the security of DRPE-CGI is higher than that of conventional ghost imaging. Furthermore, this method can reduce the information quantity by 40% compared with ghost imaging while the quality of the reconstructed information remains the same. It can also improve the quality of the reconstructed plaintext information compared with DRPE-GI at the same number of sampling times. This technique can be immediately applied to encryption and data storage with the advantages of high security, fast transmission, and high quality of reconstructed information.

  16. Atmospheric correction for remote sensing image based on multi-spectral information

    NASA Astrophysics Data System (ADS)

    Wang, Yu; He, Hongyan; Tan, Wei; Qi, Wenwen

    2018-03-01

The light collected by remote sensors in space must transit the Earth's atmosphere. All satellite images are affected at some level by lightwave scattering and absorption from aerosols, water vapor and particulates in the atmosphere. To generate high-quality scientific data, atmospheric correction is required to remove atmospheric effects and to convert digital number (DN) values to surface reflectance (SR). Every optical satellite in orbit observes the earth through the same atmosphere, but each satellite image is impacted differently because atmospheric conditions are constantly changing. The physics-based detailed radiative transfer model 6SV requires key ancillary information about the atmospheric conditions at the acquisition time. This paper investigates the simultaneous acquisition of atmospheric radiation parameters from multi-spectral information, in order to improve the estimates of surface reflectance through physics-based atmospheric correction. Ancillary information on the aerosol optical depth (AOD) and total water vapor (TWV), derived from the multi-spectral information based on specific spectral properties, was used for the 6SV model. The experiments were carried out on images from Sentinel-2, which carries a Multispectral Instrument (MSI) recording in 13 spectral bands covering a wide range of wavelengths from 440 up to 2200 nm. The results suggest that per-pixel atmospheric correction through the 6SV model, integrating AOD and TWV derived from multispectral information, is better suited for accurate analysis of satellite images and quantitative remote sensing applications.

  17. An information hiding method based on LSB and tent chaotic map

    NASA Astrophysics Data System (ADS)

    Song, Jianhua; Ding, Qun

    2011-06-01

In order to protect information security more effectively, a novel information hiding method based on LSB and the Tent chaotic map is proposed: first the secret message is encrypted with the Tent chaotic map, and then LSB steganography embeds the encrypted message in the cover image. Compared to traditional image information hiding methods, the simulation results indicate that the method greatly improves imperceptibility and security.
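
    A minimal sketch of the described pipeline, with the tent map used as a keystream generator for XOR encryption before LSB embedding; the map parameters, byte-level keystream construction and raster-order embedding are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def tent_keystream(n_bytes, x0=0.37, mu=1.99):
    """Generate a pseudo-random byte stream from the tent map x -> mu*min(x, 1-x)."""
    x, out = x0, np.empty(n_bytes, dtype=np.uint8)
    for i in range(n_bytes):
        x = mu * (x if x < 0.5 else 1.0 - x)
        out[i] = int(x * 256) & 0xFF
    return out

def embed_lsb(cover, message_bytes, x0=0.37):
    """XOR-encrypt the message with the tent keystream, then hide it in the
    least significant bits of the flattened cover image (uint8)."""
    cipher = np.bitwise_xor(np.frombuffer(message_bytes, dtype=np.uint8),
                            tent_keystream(len(message_bytes), x0))
    bits = np.unpackbits(cipher)
    stego = cover.astype(np.uint8).flatten().copy()
    stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits   # overwrite LSBs
    return stego.reshape(cover.shape)
```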

  18. An information based approach to improving overhead imagery collection

    NASA Astrophysics Data System (ADS)

    Sourwine, Matthew J.; Hintz, Kenneth J.

    2011-06-01

    Recent growth in commercial imaging satellite development has resulted in a complex and diverse set of systems. To simplify this environment for both customer and vendor, an information based sensor management model was built to integrate tasking and scheduling systems. By establishing a relationship between image quality and information, tasking by NIIRS can be utilized to measure the customer's required information content. Focused on a reduction in uncertainty about a target of interest, the sensor manager finds the best sensors to complete the task given the active suite of imaging sensors' functions. This is done through determination of which satellite will meet customer information and timeliness requirements with low likelihood of interference at the highest rate of return.

  19. Information recovery through image sequence fusion under wavelet transformation

    NASA Astrophysics Data System (ADS)

    He, Qiang

    2010-04-01

Remote sensing is widely applied to provide information about areas with limited ground access, with applications such as assessing the destruction from natural disasters and planning relief and recovery operations. However, the collection of aerial digital images is constrained by bad weather, atmospheric conditions, and unstable cameras or camcorders. Therefore, recovering information from low-quality remote sensing images and enhancing image quality becomes very important for many visual understanding tasks, such as feature detection, object segmentation, and object recognition. The quality of remote sensing imagery can be improved through a meaningful combination of the employed images, captured from different sensors or under different conditions, through information fusion. Here we particularly address information fusion for remote sensing images under multi-resolution analysis of the employed image sequences. Image fusion recovers complete information by integrating multiple images captured from the same scene. Through image fusion, a new image with higher resolution, and more perceptible to humans and machines, is created from a time series of low-quality images based on image registration between the different video frames.
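
    A short sketch of multi-resolution fusion of a registered image sequence using PyWavelets, averaging approximation coefficients and keeping the maximum-magnitude detail coefficients; the wavelet, level and fusion rules are illustrative choices rather than the paper's exact scheme.

```python
import numpy as np
import pywt

def wavelet_fuse(images, wavelet='db2', level=3):
    """Fuse a sequence of registered frames: average the approximation bands and
    keep, per coefficient, the detail value with the largest magnitude."""
    decomps = [pywt.wavedec2(img.astype(float), wavelet, level=level) for img in images]
    fused = [np.mean([d[0] for d in decomps], axis=0)]          # approximation: mean
    for lvl in range(1, level + 1):
        fused_lvl = []
        for band in range(3):                                   # (cH, cV, cD)
            stack = np.stack([d[lvl][band] for d in decomps])
            idx = np.abs(stack).argmax(axis=0)                  # max-absolute rule
            fused_lvl.append(np.take_along_axis(stack, idx[None], axis=0)[0])
        fused.append(tuple(fused_lvl))
    return pywt.waverec2(fused, wavelet)
```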

  20. Research on HDR image fusion algorithm based on Laplace pyramid weight transform with extreme low-light CMOS

    NASA Astrophysics Data System (ADS)

    Guan, Wen; Li, Li; Jin, Weiqi; Qiu, Su; Zou, Yan

    2015-10-01

The extreme-low-light CMOS sensor has been widely applied in the field of night vision as a new type of solid-state image sensor. However, if the illumination in the scene changes drastically or is too strong, an extreme-low-light CMOS sensor cannot clearly present both the high-light and the low-light regions. To address this partial saturation problem in night vision, an HDR image fusion algorithm based on the Laplacian pyramid is investigated. Because the overall gray level and contrast of the low-light image are very low, a fusion strategy based on the regional average gradient is chosen for the top layer of the long-exposure and short-exposure images, which carries rich brightness and textural features; the remaining layers, which represent the edge feature information of the target, are fused with a strategy based on regional energy. In the process of reconstructing the source image from the Laplacian pyramid, we compare the fusion results obtained with four kinds of base images. The algorithm is tested in Matlab and compared with different fusion strategies. We use three objective evaluation parameters, information entropy, average gradient and standard deviation, for further analysis of the fusion results. Experiments in different low-illumination environments show that the algorithm can rapidly achieve a wide dynamic range while keeping high entropy. The verification of the algorithm's features indicates good prospects for further application of the optimized algorithm. Keywords: high dynamic range imaging, image fusion, multi-exposure image, weight coefficient, information fusion, Laplacian pyramid transform.
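
    A sketch of Laplacian-pyramid exposure fusion in the spirit described above, assuming grayscale long- and short-exposure frames; a mean absolute Laplacian stands in for the regional average gradient, and the window sizes and weights are illustrative choices rather than the paper's exact rules.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    lp = [gp[i] - cv2.resize(cv2.pyrUp(gp[i + 1]), (gp[i].shape[1], gp[i].shape[0]))
          for i in range(levels)]
    return lp + [gp[-1]]                                # detail levels + coarsest top

def fuse_exposures(long_exp, short_exp, levels=4):
    """Fuse long/short exposure frames: regional-energy selection for detail levels,
    gradient-weighted averaging for the top (coarsest) level."""
    la, lb = laplacian_pyramid(long_exp, levels), laplacian_pyramid(short_exp, levels)
    fused = []
    for i, (a, b) in enumerate(zip(la, lb)):
        if i < levels:                                  # detail: pick higher local energy
            ea = cv2.blur(a * a, (5, 5))
            eb = cv2.blur(b * b, (5, 5))
            fused.append(np.where(ea >= eb, a, b))
        else:                                           # top: weight by average gradient
            ga = np.abs(cv2.Laplacian(a, cv2.CV_32F)).mean()
            gb = np.abs(cv2.Laplacian(b, cv2.CV_32F)).mean()
            fused.append((ga * a + gb * b) / (ga + gb + 1e-6))
    out = fused[-1]
    for lap in reversed(fused[:-1]):                    # collapse the pyramid
        out = cv2.resize(cv2.pyrUp(out), (lap.shape[1], lap.shape[0])) + lap
    return np.clip(out, 0, 255).astype(np.uint8)
```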

  1. The crack detection algorithm of pavement image based on edge information

    NASA Astrophysics Data System (ADS)

    Yang, Chunde; Geng, Mingyue

    2018-05-01

    As the images of pavement cracks are affected by a large amount of complicated noises, such as uneven illumination and water stains, the detected cracks are discontinuous and the main body information at the edge of the cracks is easily lost. In order to solve the problem, a crack detection algorithm in pavement image based on edge information is proposed. Firstly, the image is pre-processed by the nonlinear gray-scale transform function and reconstruction filter to enhance the linear characteristic of the crack. At the same time, an adaptive thresholding method is designed to coarsely extract the cracks edge according to the gray-scale gradient feature and obtain the crack gradient information map. Secondly, the candidate edge points are obtained according to the gradient information, and the edge is detected based on the single pixel percolation processing, which is improved by using the local difference between pixels in the fixed region. Finally, complete crack is obtained by filling the crack edge. Experimental results show that the proposed method can accurately detect pavement cracks and preserve edge information.

  2. Information based universal feature extraction

    NASA Astrophysics Data System (ADS)

    Amiri, Mohammad; Brause, Rüdiger

    2015-02-01

In many real world image based pattern recognition tasks, the extraction and usage of task-relevant features are the most crucial part of the diagnosis. In the standard approach, they mostly remain task-specific, although humans who perform such a task always use the same image features, trained in early childhood. It seems that universal feature sets exist, but they have not yet been systematically found. In our contribution, we tried to find those universal image feature sets that are valuable for most image related tasks. In our approach, we trained a neural network on natural and non-natural images of objects and background, using a Shannon information-based algorithm and learning constraints. The goal was to extract those features that give the most valuable information for the classification of visual objects and hand-written digits. This gives a good start and a performance increase for other image learning tasks, implementing a transfer learning approach. As a result, we found that we could indeed extract features which are valid in all three kinds of tasks.

  3. Three-dimensional image authentication scheme using sparse phase information in double random phase encoded integral imaging.

    PubMed

    Yi, Faliu; Jeoung, Yousun; Moon, Inkyu

    2017-05-20

    In recent years, many studies have focused on authentication of two-dimensional (2D) images using double random phase encryption techniques. However, there has been little research on three-dimensional (3D) imaging systems, such as integral imaging, for 3D image authentication. We propose a 3D image authentication scheme based on a double random phase integral imaging method. All of the 2D elemental images captured through integral imaging are encrypted with a double random phase encoding algorithm and only partial phase information is reserved. All the amplitude and other miscellaneous phase information in the encrypted elemental images is discarded. Nevertheless, we demonstrate that 3D images from integral imaging can be authenticated at different depths using a nonlinear correlation method. The proposed 3D image authentication algorithm can provide enhanced information security because the decrypted 2D elemental images from the sparse phase cannot be easily observed by the naked eye. Additionally, using sparse phase images without any amplitude information can greatly reduce data storage costs and aid in image compression and data transmission.

  4. Prospective comparison of the usage of conventional film and PACS based computed radiography for portable chest x-ray imaging in a medical intensive care unit

    NASA Astrophysics Data System (ADS)

    Kundel, Harold L.; Seshadri, Sridhar B.; Langlotz, Curtis P.; Lanken, Paul N.; Horii, Steven C.; Polansky, Marcia; Kishore, Sheel; Finegold, Eric; Brikman, Inna; Bozzo, Mary T.; Redfern, Regina O.

    1995-05-01

The purpose of this study was to compare the efficiency of image delivery, the effectiveness of image information transfer, and the timeliness of clinical actions in a medical intensive care unit (MICU) using either conventional screen-film imaging (SF-HC), computed radiography (CR-HC) or a CR based PACS. When the CR based PACS was in use, images could be viewed in the MICU on a digital workstation (CR-WS) or in the radiology department as laser printed hard copy (CR-HC). Data were collected by daily interviews with the house staff, by monitoring computer log-ons and other time-stamped activities, and by observing film viewing times in the radiology department with surveillance cameras. The time at which image information was made available to the MICU physicians was decreased during the CR-PACS period compared with either the SF-HC or the CR-HC periods, but the image information was not accessed more quickly by the clinical staff. However, the time required to perform image-related clinical actions for pulmonary and pleural problems was decreased when images were viewed on the workstation.

  5. Fusion of infrared and visible images based on saliency scale-space in frequency domain

    NASA Astrophysics Data System (ADS)

    Chen, Yanfei; Sang, Nong; Dan, Zhiping

    2015-12-01

A fusion algorithm for infrared and visible images based on saliency scale-space in the frequency domain is proposed. Human attention is directed towards salient targets, which convey the most important information in the image. For the given registered infrared and visible images, firstly, visual features are extracted to obtain the input hypercomplex matrix. Secondly, the Hypercomplex Fourier Transform (HFT) is used to obtain the salient regions of the infrared and visible images respectively: the amplitude spectrum of the input hypercomplex matrix is convolved with a low-pass Gaussian kernel of an appropriate scale, which is equivalent to an image saliency detector. The saliency maps are obtained by reconstructing the 2D signal using the original phase and the amplitude spectrum, filtered at a scale selected by minimizing saliency map entropy. Thirdly, the salient regions are fused with adaptive weighting fusion rules, and the non-salient regions are fused with a rule based on region energy (RE) and region sharpness (RS); the fused image is then obtained. Experimental results show that the presented algorithm preserves the rich spectral information of the visible image and effectively captures thermal target information at different scales of the infrared image.
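
    A simplified single-channel sketch of the frequency-domain saliency step: the amplitude spectrum is smoothed at several scales, the image is reconstructed with the original phase, and the lowest-entropy map is kept. The hypercomplex (multi-feature) representation of the paper is collapsed to one grayscale channel here, and the scale set is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def entropy(p):
    p = p / (p.sum() + 1e-12)
    nz = p > 0
    return float(-(p[nz] * np.log2(p[nz])).sum())

def saliency_map(img, scales=(1, 2, 4, 8, 16)):
    """Filter the amplitude spectrum with Gaussian kernels of several scales,
    reconstruct with the original phase, and keep the map with minimal entropy."""
    f = np.fft.fft2(img.astype(float))
    amplitude, phase = np.abs(f), np.angle(f)
    best, best_h = None, np.inf
    for s in scales:
        smoothed = gaussian_filter(amplitude, s)              # low-pass on the spectrum
        sal = np.abs(np.fft.ifft2(smoothed * np.exp(1j * phase))) ** 2
        sal = gaussian_filter(sal, 3)                         # post-smoothing
        h = entropy(sal)
        if h < best_h:
            best, best_h = sal, h
    return best / (best.max() + 1e-12)
```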

  6. Image classification using multiscale information fusion based on saliency driven nonlinear diffusion filtering.

    PubMed

    Hu, Weiming; Hu, Ruiguang; Xie, Nianhua; Ling, Haibin; Maybank, Stephen

    2014-04-01

    In this paper, we propose saliency driven image multiscale nonlinear diffusion filtering. The resulting scale space in general preserves or even enhances semantically important structures such as edges, lines, or flow-like structures in the foreground, and inhibits and smoothes clutter in the background. The image is classified using multiscale information fusion based on the original image, the image at the final scale at which the diffusion process converges, and the image at a midscale. Our algorithm emphasizes the foreground features, which are important for image classification. The background image regions, whether considered as contexts of the foreground or noise to the foreground, can be globally handled by fusing information from different scales. Experimental tests of the effectiveness of the multiscale space for the image classification are conducted on the following publicly available datasets: 1) the PASCAL 2005 dataset; 2) the Oxford 102 flowers dataset; and 3) the Oxford 17 flowers dataset, with high classification rates.

  7. Small-target leak detection for a closed vessel via infrared image sequences

    NASA Astrophysics Data System (ADS)

    Zhao, Ling; Yang, Hongjiu

    2017-03-01

This paper focuses on a leak diagnosis and localization method based on infrared image sequences. The problems of a high probability of false warnings and the negative effect of marginal information are addressed in leak detection. An experimental model is established for leak diagnosis and localization on infrared image sequences. Differential background prediction, based on a kernel regression method, is presented to eliminate the negative effect of marginal information from the test vessel. A pipeline filter based on layered voting is designed to reduce the probability of false warnings at leak points. A comprehensive leak diagnosis and localization algorithm based on infrared image sequences is proposed. The effectiveness and potential of the developed techniques are shown through experimental results.

  8. Using image mapping towards biomedical and biological data sharing

    PubMed Central

    2013-01-01

    Image-based data integration in eHealth and life sciences is typically concerned with the method used for anatomical space mapping, needed to retrieve, compare and analyse large volumes of biomedical data. In mapping one image onto another image, a mechanism is used to match and find the corresponding spatial regions which have the same meaning between the source and the matching image. Image-based data integration is useful for integrating data of various information structures. Here we discuss a broad range of issues related to data integration of various information structures, review exemplary work on image representation and mapping, and discuss the challenges that these techniques may bring. PMID:24059352

  9. Image BOSS: a biomedical object storage system

    NASA Astrophysics Data System (ADS)

    Stacy, Mahlon C.; Augustine, Kurt E.; Robb, Richard A.

    1997-05-01

    Researchers using biomedical images have data management needs which are oriented perpendicular to clinical PACS. The image BOSS system is designed to permit researchers to organize and select images based on research topic, image metadata, and a thumbnail of the image. Image information is captured from existing images in a Unix based filesystem, stored in an object oriented database, and presented to the user in a familiar laboratory notebook metaphor. In addition, the ImageBOSS is designed to provide an extensible infrastructure for future content-based queries directly on the images.

  10. Spatial Mutual Information Based Hyperspectral Band Selection for Classification

    PubMed Central

    2015-01-01

    The amount of information involved in hyperspectral imaging is large. Hyperspectral band selection is a popular method for reducing dimensionality. Several information based measures such as mutual information have been proposed to reduce information redundancy among spectral bands. Unfortunately, mutual information does not take into account the spatial dependency between adjacent pixels in images thus reducing its robustness as a similarity measure. In this paper, we propose a new band selection method based on spatial mutual information. As validation criteria, a supervised classification method using support vector machine (SVM) is used. Experimental results of the classification of hyperspectral datasets show that the proposed method can achieve more accurate results. PMID:25918742
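
    A rough sketch of ranking bands by a spatially aware mutual information score, in which a local mean filter injects neighbourhood dependency before the joint histogram is built; the reference map, filter, bin count and greedy top-k selection are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_mi(band, reference, bins=32, win=3):
    """Mutual information between two images after a local mean filter, so that
    each pixel value carries information from its neighbourhood."""
    a = uniform_filter(band.astype(float), win).ravel()
    b = uniform_filter(reference.astype(float), win).ravel()
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def select_bands(cube, ref_map, k=10):
    """cube: (H, W, n_bands) hyperspectral image; ref_map: (H, W) reference map
    (e.g. a class or reference band). Keep the k bands with the highest score."""
    scores = [spatial_mi(cube[:, :, i], ref_map) for i in range(cube.shape[2])]
    return np.argsort(scores)[::-1][:k]
```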

  11. Image-based change estimation (ICE): monitoring land use, land cover and agent of change information for all lands

    Treesearch

    Kevin Megown; Andy Lister; Paul Patterson; Tracey Frescino; Dennis Jacobs; Jeremy Webb; Nicholas Daniels; Mark Finco

    2015-01-01

    The Image-based Change Estimation (ICE) protocols have been designed to respond to several Agency and Department information requirements. These include provisions set forth by the 2014 Farm Bill, the Forest Service Action Plan and Strategic Plan, the 2012 Planning Rule, and the 2015 Planning Directives. ICE outputs support the information needs by providing estimates...

  12. An initial study on the estimation of time-varying volumetric treatment images and 3D tumor localization from single MV cine EPID images

    PubMed Central

    Mishra, Pankaj; Li, Ruijiang; Mak, Raymond H.; Rottmann, Joerg; Bryant, Jonathan H.; Williams, Christopher L.; Berbeco, Ross I.; Lewis, John H.

    2014-01-01

    Purpose: In this work the authors develop and investigate the feasibility of a method to estimate time-varying volumetric images from individual MV cine electronic portal image device (EPID) images. Methods: The authors adopt a two-step approach to time-varying volumetric image estimation from a single cine EPID image. In the first step, a patient-specific motion model is constructed from 4DCT. In the second step, parameters in the motion model are tuned according to the information in the EPID image. The patient-specific motion model is based on a compact representation of lung motion represented in displacement vector fields (DVFs). DVFs are calculated through deformable image registration (DIR) of a reference 4DCT phase image (typically peak-exhale) to a set of 4DCT images corresponding to different phases of a breathing cycle. The salient characteristics in the DVFs are captured in a compact representation through principal component analysis (PCA). PCA decouples the spatial and temporal components of the DVFs. Spatial information is represented in eigenvectors and the temporal information is represented by eigen-coefficients. To generate a new volumetric image, the eigen-coefficients are updated via cost function optimization based on digitally reconstructed radiographs and projection images. The updated eigen-coefficients are then multiplied with the eigenvectors to obtain updated DVFs that, in turn, give the volumetric image corresponding to the cine EPID image. Results: The algorithm was tested on (1) Eight digital eXtended CArdiac-Torso phantom datasets based on different irregular patient breathing patterns and (2) patient cine EPID images acquired during SBRT treatments. The root-mean-squared tumor localization error is (0.73 ± 0.63 mm) for the XCAT data and (0.90 ± 0.65 mm) for the patient data. Conclusions: The authors introduced a novel method of estimating volumetric time-varying images from single cine EPID images and a PCA-based lung motion model. This is the first method to estimate volumetric time-varying images from single MV cine EPID images, and has the potential to provide volumetric information with no additional imaging dose to the patient. PMID:25086523
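
    A compact sketch of the PCA motion-model part, assuming the DVFs from deformable registration are flattened into the rows of a matrix; the EPID-driven cost-function optimization of the eigen-coefficients is not reproduced, and the function names are illustrative.

```python
import numpy as np

def build_pca_motion_model(dvfs, n_components=3):
    """dvfs: (n_phases, 3 * n_voxels) displacement vector fields from DIR of the
    reference phase to each 4DCT phase. Returns the mean DVF and principal modes."""
    mean = dvfs.mean(axis=0)
    centered = dvfs - mean
    # thin SVD: with n_phases rows there are at most n_phases - 1 meaningful modes
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def dvf_from_coefficients(mean, components, coeffs):
    """Reconstruct a DVF from eigen-coefficients (the parameters that would be tuned
    against the cine EPID projection); warping the reference CT with this DVF gives
    the estimated volumetric image."""
    return mean + np.asarray(coeffs) @ components
```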

  13. An initial study on the estimation of time-varying volumetric treatment images and 3D tumor localization from single MV cine EPID images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, Pankaj, E-mail: pankaj.mishra@varian.com; Mak, Raymond H.; Rottmann, Joerg

    2014-08-15

Purpose: In this work the authors develop and investigate the feasibility of a method to estimate time-varying volumetric images from individual MV cine electronic portal image device (EPID) images. Methods: The authors adopt a two-step approach to time-varying volumetric image estimation from a single cine EPID image. In the first step, a patient-specific motion model is constructed from 4DCT. In the second step, parameters in the motion model are tuned according to the information in the EPID image. The patient-specific motion model is based on a compact representation of lung motion represented in displacement vector fields (DVFs). DVFs are calculated through deformable image registration (DIR) of a reference 4DCT phase image (typically peak-exhale) to a set of 4DCT images corresponding to different phases of a breathing cycle. The salient characteristics in the DVFs are captured in a compact representation through principal component analysis (PCA). PCA decouples the spatial and temporal components of the DVFs. Spatial information is represented in eigenvectors and the temporal information is represented by eigen-coefficients. To generate a new volumetric image, the eigen-coefficients are updated via cost function optimization based on digitally reconstructed radiographs and projection images. The updated eigen-coefficients are then multiplied with the eigenvectors to obtain updated DVFs that, in turn, give the volumetric image corresponding to the cine EPID image. Results: The algorithm was tested on (1) Eight digital eXtended CArdiac-Torso phantom datasets based on different irregular patient breathing patterns and (2) patient cine EPID images acquired during SBRT treatments. The root-mean-squared tumor localization error is (0.73 ± 0.63 mm) for the XCAT data and (0.90 ± 0.65 mm) for the patient data. Conclusions: The authors introduced a novel method of estimating volumetric time-varying images from single cine EPID images and a PCA-based lung motion model. This is the first method to estimate volumetric time-varying images from single MV cine EPID images, and has the potential to provide volumetric information with no additional imaging dose to the patient.

  14. Fusion and quality analysis for remote sensing images using contourlet transform

    NASA Astrophysics Data System (ADS)

    Choi, Yoonsuk; Sharifahmadian, Ershad; Latifi, Shahram

    2013-05-01

    Recent developments in remote sensing technologies have provided various images with high spatial and spectral resolutions. However, multispectral images have low spatial resolution and panchromatic images have low spectral resolution. Therefore, image fusion techniques are necessary to improve the spatial resolution of spectral images by injecting spatial details of high-resolution panchromatic images. The objective of image fusion is to provide useful information by improving the spatial resolution and the spectral information of the original images. The fusion results can be utilized in various applications, such as military, medical imaging, and remote sensing. This paper addresses two issues in image fusion: i) image fusion method and ii) quality analysis of fusion results. First, a new contourlet-based image fusion method is presented, which is an improvement over the wavelet-based fusion. This fusion method is then applied to a case study to demonstrate its fusion performance. Fusion framework and scheme used in the study are discussed in detail. Second, quality analysis for the fusion results is discussed. We employed various quality metrics in order to analyze the fusion results both spatially and spectrally. Our results indicate that the proposed contourlet-based fusion method performs better than the conventional wavelet-based fusion methods.

  15. Medical Image Fusion Based on Feature Extraction and Sparse Representation

    PubMed Central

    Wei, Gao; Zongxi, Song

    2017-01-01

As a novel multiscale geometric analysis tool, sparse representation has shown many advantages over conventional image representation methods. However, the standard sparse representation does not take intrinsic structure and time complexity into consideration. In this paper, a new fusion mechanism for multimodal medical images based on sparse representation and a decision map is proposed to deal with these problems simultaneously. Three decision maps are designed, including a structure information map (SM) and an energy information map (EM), as well as a structure and energy map (SEM), to make the results preserve more energy and edge information. SM contains the local structure feature captured by the Laplacian of a Gaussian (LOG) and EM contains the energy and energy distribution feature detected by the mean square deviation. The decision map is added to the normal sparse representation based method to improve the speed of the algorithm. The proposed approach also improves the quality of the fused results by enhancing the contrast and preserving more structure and energy information from the source images. The experimental results on 36 groups of CT/MR, MR-T1/MR-T2, and CT/PET images demonstrate that the method based on SR and SEM outperforms five state-of-the-art methods. PMID:28321246

  16. Restoration of Motion-Blurred Image Based on Border Deformation Detection: A Traffic Sign Restoration Model

    PubMed Central

    Zeng, Yiliang; Lan, Jinhui; Ran, Bin; Wang, Qi; Gao, Jing

    2015-01-01

    Due to the rapid development of motor vehicle Driver Assistance Systems (DAS), the safety problems associated with automatic driving have become a hot issue in Intelligent Transportation. The traffic sign is one of the most important tools used to reinforce traffic rules. However, traffic sign image degradation based on computer vision is unavoidable during the vehicle movement process. In order to quickly and accurately recognize traffic signs in motion-blurred images in DAS, a new image restoration algorithm based on border deformation detection in the spatial domain is proposed in this paper. The border of a traffic sign is extracted using color information, and then the width of the border is measured in all directions. According to the width measured and the corresponding direction, both the motion direction and scale of the image can be confirmed, and this information can be used to restore the motion-blurred image. Finally, a gray mean grads (GMG) ratio is presented to evaluate the image restoration quality. Compared to the traditional restoration approach which is based on the blind deconvolution method and Lucy-Richardson method, our method can greatly restore motion blurred images and improve the correct recognition rate. Our experiments show that the proposed method is able to restore traffic sign information accurately and efficiently. PMID:25849350

  17. Restoration of motion-blurred image based on border deformation detection: a traffic sign restoration model.

    PubMed

    Zeng, Yiliang; Lan, Jinhui; Ran, Bin; Wang, Qi; Gao, Jing

    2015-01-01

    Due to the rapid development of motor vehicle Driver Assistance Systems (DAS), the safety problems associated with automatic driving have become a hot issue in Intelligent Transportation. The traffic sign is one of the most important tools used to reinforce traffic rules. However, traffic sign image degradation based on computer vision is unavoidable during the vehicle movement process. In order to quickly and accurately recognize traffic signs in motion-blurred images in DAS, a new image restoration algorithm based on border deformation detection in the spatial domain is proposed in this paper. The border of a traffic sign is extracted using color information, and then the width of the border is measured in all directions. According to the width measured and the corresponding direction, both the motion direction and scale of the image can be confirmed, and this information can be used to restore the motion-blurred image. Finally, a gray mean grads (GMG) ratio is presented to evaluate the image restoration quality. Compared to the traditional restoration approach which is based on the blind deconvolution method and Lucy-Richardson method, our method can greatly restore motion blurred images and improve the correct recognition rate. Our experiments show that the proposed method is able to restore traffic sign information accurately and efficiently.

  18. Visual information processing II; Proceedings of the Meeting, Orlando, FL, Apr. 14-16, 1993

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)

    1993-01-01

Various papers on visual information processing are presented. Individual topics addressed include: aliasing as noise, satellite image processing using a hammering neural network, an edge-detection method using visual perception, adaptive vector median filters, design of a reading test for low vision, image warping, spatial transformation architectures, an automatic image-enhancement method, redundancy reduction in image coding, lossless gray-scale image compression by predictive GDF, information efficiency in visual communication, optimizing JPEG quantization matrices for different applications, use of forward error correction to maintain image fidelity, and the effect of Peano scanning on image compression. Also discussed are: computer vision for autonomous robotics in space, an optical processor for zero-crossing edge detection, fractal-based image edge detection, simulation of the neon spreading effect by bandpass filtering, the wavelet transform (WT) on parallel SIMD architectures, nonseparable 2D wavelet image representation, adaptive image halftoning based on the WT, wavelet analysis of global warming, use of the WT for signal detection, perfect reconstruction two-channel rational filter banks, N-wavelet coding for pattern classification, simulation of images of natural objects, and number-theoretic coding for iconic systems.

  19. Steganography on quantum pixel images using Shannon entropy

    NASA Astrophysics Data System (ADS)

    Laurel, Carlos Ortega; Dong, Shi-Hai; Cruz-Irisson, M.

    2016-07-01

This paper presents a steganographic algorithm based on the least significant bit (LSB) from the most significant bit information (MSBI) and the equivalence of a bit pixel image to a quantum pixel image, which permits information to be concealed in quantum pixel images for secure transmission through insecure channels. The algorithm offers higher security since it exploits the Shannon entropy of an image.

  20. A content-based image retrieval method for optical colonoscopy images based on image recognition techniques

    NASA Astrophysics Data System (ADS)

    Nosato, Hirokazu; Sakanashi, Hidenori; Takahashi, Eiichi; Murakawa, Masahiro

    2015-03-01

This paper proposes a content-based image retrieval method for optical colonoscopy images that can find images similar to the ones being diagnosed. Optical colonoscopy is a method of direct observation of the colon and rectum to diagnose bowel diseases. It is the most common procedure for screening, surveillance and treatment. However, diagnostic accuracy for intractable inflammatory bowel diseases, such as ulcerative colitis (UC), is highly dependent on the experience and knowledge of the medical doctor, because there is considerable variety in the appearance of the colonic mucosa within UC inflammations. In order to solve this issue, this paper proposes a content-based image retrieval method based on image recognition techniques. The proposed retrieval method can find similar images in a database of images diagnosed as UC, and can potentially furnish the medical records associated with the retrieved images to assist the UC diagnosis. Within the proposed method, color histogram features and higher order local auto-correlation (HLAC) features are adopted to represent the color information and geometrical information of optical colonoscopy images, respectively. Moreover, considering various characteristics of UC colonoscopy images, such as vascular patterns and the roughness of the colonic mucosa, we also propose an image enhancement method to highlight the appearance of the colonic mucosa in UC. In an experiment using 161 UC images from 32 patients, we demonstrate that our method improves the accuracy of retrieving similar UC images.
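
    A minimal sketch of the colour-feature half of the retrieval idea, using a joint colour histogram and histogram intersection as the similarity; the HLAC geometric features and the UC-specific enhancement step are not reproduced, and the bin count and database layout are illustrative assumptions.

```python
import numpy as np
import cv2

def color_histogram(img_bgr, bins=8):
    """Joint 3D colour histogram, L1-normalised, as a global colour feature."""
    hist = cv2.calcHist([img_bgr], [0, 1, 2], None, [bins] * 3, [0, 256] * 3)
    hist = hist.flatten()
    return hist / (hist.sum() + 1e-9)

def retrieve_similar(query_bgr, database):
    """database: list of (record_id, image) pairs. Rank records by histogram
    intersection with the query, highest similarity first."""
    q = color_histogram(query_bgr)
    scores = [(rid, float(np.minimum(q, color_histogram(img)).sum()))
              for rid, img in database]
    return sorted(scores, key=lambda t: t[1], reverse=True)
```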

  1. A novel image watermarking method based on singular value decomposition and digital holography

    NASA Astrophysics Data System (ADS)

    Cai, Zhishan

    2016-10-01

Based on information optics theory, a novel watermarking method using Fourier-transformed digital holography and singular value decomposition (SVD) is proposed in this paper. First of all, a watermark image is converted to a digital hologram using the Fourier transform. After that, the original image is divided into many non-overlapping blocks. All the blocks and the hologram are decomposed using SVD. The singular value components of the hologram are then embedded into the singular value components of each block using an addition rule. Finally, an inverse SVD transformation is carried out on the blocks and the hologram to generate the watermarked image. During extraction, the watermark information embedded in each block is first recovered; an averaging operation is then carried out on the extracted information to generate the final watermark. Finally, the algorithm is simulated. Furthermore, to test the watermarked image's resistance against attacks, various attack tests are carried out. The results show that the proposed algorithm has very good robustness against noise interference, image cropping, compression, brightness stretching, etc. In particular, even when the image is rotated by a large angle, the watermark information can still be extracted correctly.
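
    A toy sketch of the additive SVD embedding rule, assuming the watermark hologram has been reduced to a block-sized array so that its singular values align with each cover block's singular values; alpha and the block size are illustrative, and the hologram generation and extraction/averaging steps are not reproduced.

```python
import numpy as np

def embed_svd_watermark(cover, hologram, block=8, alpha=0.05):
    """Embed the singular values of a (block x block) watermark hologram into the
    singular values of every non-overlapping cover block (additive rule)."""
    wm_s = np.linalg.svd(hologram.astype(float), compute_uv=False)   # watermark SVs
    out = cover.astype(float).copy()
    h, w = cover.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            u, s, vt = np.linalg.svd(out[y:y + block, x:x + block])
            s = s + alpha * wm_s                                      # additive embedding
            out[y:y + block, x:x + block] = (u * s) @ vt              # u @ diag(s) @ vt
    return np.clip(out, 0, 255)
```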

  2. [Design of the image browser for PACS image workstation].

    PubMed

    Li, Feng; Zhou, He-Qin

    2006-09-01

    The design of a PACS image workstation based on DICOM 3.0 is introduced in this paper, and the design method for the PACS image browser, based on control system theory, is then presented, focusing on two main units: the DICOM analyzer and the information mapping transformer.

  3. Evaluation of GMI and PMI diffeomorphic‐based demons algorithms for aligning PET and CT Images

    PubMed Central

    Yang, Juan; Zhang, You; Yin, Yong

    2015-01-01

    Fusion of anatomic information in computed tomography (CT) and functional information in F18‐FDG positron emission tomography (PET) is crucial for accurate differentiation of tumor from benign masses, designing radiotherapy treatment plans, and staging of cancer. Although current PET and CT images can be acquired from a combined F18‐FDG PET/CT scanner, the two acquisitions are scanned separately and take a long time, which may introduce global and local positional errors caused by respiratory motion or organ peristalsis. So registration (alignment) of whole‐body PET and CT images is a prerequisite for their meaningful fusion. The purpose of this study was to assess the performance of two multimodal registration algorithms for aligning PET and CT images. The proposed gradient of mutual information (GMI)‐based demons algorithm, which incorporated the GMI between two images as an external force to facilitate the alignment, was compared with the point‐wise mutual information (PMI) diffeomorphic‐based demons algorithm, whose external force was modified by replacing the image intensity difference in the diffeomorphic demons algorithm with the PMI to make it appropriate for multimodal image registration. Eight patients with esophageal cancer(s) were enrolled in this IRB‐approved study. Whole‐body PET and CT images were acquired from a combined F18‐FDG PET/CT scanner for each patient. The modified Hausdorff distance (dMH) was used to evaluate the registration accuracy of the two algorithms. Over all patients, the mean values and standard deviations (SDs) of dMH were 6.65 (± 1.90) voxels and 6.01 (± 1.90) voxels after the GMI‐based demons and the PMI diffeomorphic‐based demons registration algorithms, respectively. Preliminary results on oncological patients showed that the respiratory motion and organ peristalsis in PET/CT esophageal images could not be neglected, although a combined F18‐FDG PET/CT scanner was used for image acquisition. The PMI diffeomorphic‐based demons algorithm was more accurate than the GMI‐based demons algorithm in registering PET/CT esophageal images. PACS numbers: 87.57.nj, 87.57. Q‐, 87.57.uk PMID:26218993
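    The evaluation metric used above, the modified Hausdorff distance, is a generic point-set distance (the Dubuisson-Jain variant) and can be computed as sketched below. This is an illustrative implementation, not the authors' code; it assumes the registered structures are available as binary voxel masks whose coordinate sets are compared.

    ```python
    import numpy as np
    from scipy.spatial.distance import cdist

    def modified_hausdorff(a_pts, b_pts):
        """Dubuisson-Jain modified Hausdorff distance between two point sets,
        given as (N, 3) and (M, 3) arrays of voxel coordinates."""
        d = cdist(a_pts, b_pts)            # pairwise Euclidean distances
        d_ab = d.min(axis=1).mean()        # mean nearest-neighbour distance A -> B
        d_ba = d.min(axis=0).mean()        # mean nearest-neighbour distance B -> A
        return max(d_ab, d_ba)

    # Usage with hypothetical binary masks of a structure in CT and registered PET:
    # dmh = modified_hausdorff(np.argwhere(mask_ct > 0), np.argwhere(mask_pet > 0))
    ```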

  4. Evaluation of GMI and PMI diffeomorphic-based demons algorithms for aligning PET and CT Images.

    PubMed

    Yang, Juan; Wang, Hongjun; Zhang, You; Yin, Yong

    2015-07-08

    Fusion of anatomic information in computed tomography (CT) and functional information in 18F-FDG positron emission tomography (PET) is crucial for accurate differentiation of tumor from benign masses, designing radiotherapy treatment plans, and staging of cancer. Although current PET and CT images can be acquired from a combined 18F-FDG PET/CT scanner, the two acquisitions are scanned separately and take a long time, which may introduce global and local positional errors caused by respiratory motion or organ peristalsis. So registration (alignment) of whole-body PET and CT images is a prerequisite for their meaningful fusion. The purpose of this study was to assess the performance of two multimodal registration algorithms for aligning PET and CT images. The proposed gradient of mutual information (GMI)-based demons algorithm, which incorporated the GMI between two images as an external force to facilitate the alignment, was compared with the point-wise mutual information (PMI) diffeomorphic-based demons algorithm, whose external force was modified by replacing the image intensity difference in the diffeomorphic demons algorithm with the PMI to make it appropriate for multimodal image registration. Eight patients with esophageal cancer(s) were enrolled in this IRB-approved study. Whole-body PET and CT images were acquired from a combined 18F-FDG PET/CT scanner for each patient. The modified Hausdorff distance (d(MH)) was used to evaluate the registration accuracy of the two algorithms. Over all patients, the mean values and standard deviations (SDs) of d(MH) were 6.65 (± 1.90) voxels and 6.01 (± 1.90) voxels after the GMI-based demons and the PMI diffeomorphic-based demons registration algorithms, respectively. Preliminary results on oncological patients showed that the respiratory motion and organ peristalsis in PET/CT esophageal images could not be neglected, although a combined 18F-FDG PET/CT scanner was used for image acquisition. The PMI diffeomorphic-based demons algorithm was more accurate than the GMI-based demons algorithm in registering PET/CT esophageal images.

  5. Effective spatial database support for acquiring spatial information from remote sensing images

    NASA Astrophysics Data System (ADS)

    Jin, Peiquan; Wan, Shouhong; Yue, Lihua

    2009-12-01

    In this paper, a new approach to maintaining spatial information acquired from remote-sensing images is presented, which is based on an Object-Relational DBMS (ORDBMS). With this approach, the detection and recognition results of targets are stored in an ORDBMS-based spatial database system where they can be further accessed, and users can access the spatial information using the standard SQL interface. This approach differs from the traditional ArcSDE-based method, because the spatial information management module is totally integrated into the DBMS and becomes one of its core modules. We focus on three issues, namely the general framework for the ORDBMS-based spatial database system, the definitions of the add-in spatial data types and operators, and the process of developing a spatial DataBlade on Informix. The results show that ORDBMS-based spatial database support for image-based target detection and recognition is easy and practical to implement.

  6. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition.

    PubMed

    Park, Chulhee; Kang, Moon Gi

    2016-05-18

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications, with the advantage that it obtains color and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible band component and the NIR band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors.
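    As a rough illustration of the spectral-decomposition idea (not the paper's actual estimation procedure), the sketch below removes a fixed, assumed NIR crosstalk fraction from each RGB channel of an RGBN image; the coefficients `k` are placeholders for the sensor-dependent values that the method estimates from the MSFA spectral characteristics.

    ```python
    import numpy as np

    def restore_rgb(rgbn, k=(0.3, 0.25, 0.2)):
        """Toy spectral-decomposition step: subtract an assumed NIR contribution
        from each RGB channel of an RGBN image of shape (H, W, 4), keeping the
        N channel unchanged. Returns (restored RGB, NIR)."""
        rgbn = rgbn.astype(np.float64)
        nir = rgbn[..., 3]
        restored = np.empty_like(rgbn[..., :3])
        for c in range(3):
            restored[..., c] = np.clip(rgbn[..., c] - k[c] * nir, 0.0, None)
        return restored, nir
    ```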

  7. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition

    PubMed Central

    Park, Chulhee; Kang, Moon Gi

    2016-01-01

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications, with the advantage that it obtains color and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible band component and the NIR band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors. PMID:27213381

  8. Development of an information data base for watershed monitoring

    NASA Technical Reports Server (NTRS)

    Smith, A. Y.; Blackwell, R. J.

    1980-01-01

    Landsat multispectral scanner data, Defense Mapping Agency digital terrain data, conventional maps, and ground data were integrated to create a comprehensive information data base (the Image Based Information System), to monitor the water quality of the Lake Tahoe Basin. Landsat imagery was used as the planimetric base to which all other data were registered. A georeference image plane, which provided an interface between all data planes for the Lake Tahoe Basin data base, was created from the drainage basin map. The data base was used to extract each drainage basin for separate display. The Defense Mapping Agency-created elevation image was processed with VICAR software to produce a component representing slope magnitude, which was cross-tabulated with the drainage basin georeference table. Future applications of the data base include the development of precipitation modeling, surface runoff models, and classification of drainage basin cover types.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kohli, K; Liu, F; Krishnan, K

    Purpose: Multi-frequency EIT has been reported to be a potential tool for distinguishing a tissue anomaly from background. In this study, we investigate the feasibility of acquiring functional information by comparing multi-frequency EIT images in reference to the structural information from the CT image through fusion. Methods: EIT data were acquired from a slice of winter melon using sixteen electrodes around the phantom, injecting a current of 0.4 mA at 100, 66, 24.8 and 9.9 kHz. Differential EIT images were generated by considering different combinations of frequency pairs, one serving as reference data and the other as test data. The experiment was repeated after creating an anomaly in the form of an off-centered cavity of diameter 4.5 cm inside the melon. All EIT images were reconstructed using the Electrical Impedance Tomography and Diffuse Optical Tomography Reconstruction Software (EIDORS) package in 2-D differential imaging mode with a one-step Gauss-Newton minimization solver. A CT image of the melon was obtained using a Philips CT scanner. A segmented binary mask image was generated based on the reference electrode position and the CT image to define the regions of interest. The region selected by the user was fused with the CT image through logical indexing. Results: Differential images based on the reference and test signal frequencies were reconstructed from EIT data. Results illustrated distinct structural inhomogeneity in the seeded region compared to the fruit flesh. The seeded region was seen as a higher-impedance region if the test frequency was lower than the base frequency in the differential EIT reconstruction. When the test frequency was higher than the base frequency, the signal experienced less electrical impedance in the seeded region during the EIT data acquisition. Conclusion: Frequency-based differential EIT imaging can be explored to provide additional functional information along with structural information from CT for identifying different tissues.

  10. On the usefulness of gradient information in multi-objective deformable image registration using a B-spline-based dual-dynamic transformation model: comparison of three optimization algorithms

    NASA Astrophysics Data System (ADS)

    Pirpinia, Kleopatra; Bosman, Peter A. N.; Sonke, Jan-Jakob; van Herk, Marcel; Alderliesten, Tanja

    2015-03-01

    The use of gradient information is well-known to be highly useful in single-objective optimization-based image registration methods. However, its usefulness has not yet been investigated for deformable image registration from a multi-objective optimization perspective. To this end, within a previously introduced multi-objective optimization framework, we use a smooth B-spline-based dual-dynamic transformation model that allows us to derive gradient information analytically, while still being able to account for large deformations. Within the multi-objective framework, we previously employed a powerful evolutionary algorithm (EA) that computes and advances multiple outcomes at once, resulting in a set of solutions (a so-called Pareto front) that represents efficient trade-offs between the objectives. With the addition of the B-spline-based transformation model, we studied the usefulness of gradient information in multi-objective deformable image registration using three different optimization algorithms: the (gradient-less) EA, a gradient-only algorithm, and a hybridization of these two. We evaluated the algorithms by registering highly deformed images: 2D MRI slices of the breast in prone and supine positions. Results demonstrate that gradient-based multi-objective optimization significantly speeds up optimization in its initial stages. However, when sufficient computational resources are allowed, better results could still be obtained with the EA. Ultimately, the hybrid EA found the best overall approximation of the optimal Pareto front, further indicating that adding gradient-based optimization for multi-objective optimization-based deformable image registration can indeed be beneficial.

  11. Optical threshold secret sharing scheme based on basic vector operations and coherence superposition

    NASA Astrophysics Data System (ADS)

    Deng, Xiaopeng; Wen, Wei; Mi, Xianwu; Long, Xuewen

    2015-04-01

    We propose, to our knowledge for the first time, a simple optical algorithm for secret image sharing with the (2,n) threshold scheme based on basic vector operations and coherence superposition. The secret image to be shared is first divided into n shadow images by means of basic vector operations. In the reconstruction stage, the secret image can be retrieved by recording the intensity of the coherence superposition of any two shadow images. Compared with published encryption techniques, which focus narrowly on information encryption, the proposed method can realize information encryption as well as secret sharing, which further ensures the safety and integrity of the secret information and prevents authority from being centralized and abused. The feasibility and effectiveness of the proposed method are demonstrated by numerical results.

  12. Spatial data software integration - Merging CAD/CAM/mapping with GIS and image processing

    NASA Technical Reports Server (NTRS)

    Logan, Thomas L.; Bryant, Nevin A.

    1987-01-01

    The integration of CAD/CAM/mapping with image processing using geographic information systems (GISs) as the interface is examined. Particular emphasis is given to the development of software interfaces between JPL's Video Image Communication and Retrieval (VICAR)/Image Based Information System (IBIS) raster-based GIS and the CAD/CAM/mapping system. The design and functions of the VICAR and IBIS are described. Vector data capture and editing are studied. Various software programs for interfacing between the VICAR/IBIS and CAD/CAM/mapping are presented and analyzed.

  13. Content-Based Medical Image Retrieval

    NASA Astrophysics Data System (ADS)

    Müller, Henning; Deserno, Thomas M.

    This chapter details the necessity for alternative access concepts beyond the currently mainly text-based methods in medical information retrieval. This need is partly due to the large amount of visual data produced, the increasing variety of medical imaging data and changing user patterns. The stored visual data contain large amounts of unused information that, if well exploited, can help diagnosis, teaching and research. The chapter briefly reviews the history of image retrieval and its general methods before focusing on technologies that have been developed in the medical domain. We also discuss the evaluation of medical content-based image retrieval (CBIR) systems and conclude by pointing out their strengths, gaps, and further developments. As examples, the MedGIFT project and the Image Retrieval in Medical Applications (IRMA) framework are presented.

  14. Neural networks for data compression and invariant image recognition

    NASA Technical Reports Server (NTRS)

    Gardner, Sheldon

    1989-01-01

    An approach to invariant image recognition (I2R), based upon a model of biological vision in the mammalian visual system (MVS), is described. The complete I2R model incorporates several biologically inspired features: exponential mapping of retinal images, Gabor spatial filtering, and a neural network associative memory. In the I2R model, exponentially mapped retinal images are filtered by a hierarchical set of Gabor spatial filters (GSF) which provide compression of the information contained within a pixel-based image. A neural network associative memory (AM) is used to process the GSF coded images. We describe a 1-D shape function method for coding of scale and rotationally invariant shape information. This method reduces image shape information to a periodic waveform suitable for coding as an input vector to a neural network AM. The shape function method is suitable for near term applications on conventional computing architectures equipped with VLSI FFT chips to provide a rapid image search capability.

  15. An image based information system - Architecture for correlating satellite and topological data bases

    NASA Technical Reports Server (NTRS)

    Bryant, N. A.; Zobrist, A. L.

    1978-01-01

    The paper describes the development of an image based information system and its use to process a Landsat thematic map showing land use or land cover in conjunction with a census tract polygon file to produce a tabulation of land use acreages per census tract. The system permits the efficient cross-tabulation of two or more geo-coded data sets, thereby setting the stage for the practical implementation of models of diffusion processes or cellular transformation. Characteristics of geographic information systems are considered, and functional requirements, such as data management, geocoding, image data management, and data analysis are discussed. The system is described, and the potentialities of its use are examined.

  16. Optical image encryption method based on incoherent imaging and polarized light encoding

    NASA Astrophysics Data System (ADS)

    Wang, Q.; Xiong, D.; Alfalou, A.; Brosseau, C.

    2018-05-01

    We propose an incoherent encoding system for image encryption based on a polarized encoding method combined with incoherent imaging. Incoherent imaging is the core component of this proposal, in which the incoherent point-spread function (PSF) of the imaging system serves as the main key to encode the input intensity distribution through a convolution operation. An array of retarders and polarizers is placed on the input plane of the imaging structure to encrypt the polarization state of light based on Mueller polarization calculus. The proposal makes full use of the randomness of the polarization parameters and the incoherent PSF, so that a multidimensional key space is generated to deal with illegal attacks. Mueller polarization calculus and incoherent illumination of the imaging structure ensure that only intensity information is manipulated. Another key advantage is that complicated processing and recording related to a complex-valued signal are avoided. The encoded information is just an intensity distribution, which is advantageous for data storage and transmission because the information expansion accompanying conventional encryption methods is also avoided. The decryption procedure can be performed digitally or using optoelectronic devices. Numerical simulation tests demonstrate the validity of the proposed scheme.
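    The central encoding operation, convolving the input intensity with the incoherent PSF that serves as the key, can be sketched as follows. This is a minimal illustration under simplified assumptions (grayscale intensity image, a random non-negative PSF as a stand-in key); the polarization-encoding stage with retarders and polarizers is omitted.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def incoherent_encode(intensity, psf):
        """Ciphertext = input intensity convolved with the incoherent PSF (the key).
        Only non-negative intensity values are involved, mirroring the incoherent
        setting described above."""
        enc = fftconvolve(intensity, psf, mode='same')
        return enc / enc.max()                 # normalise to a displayable range

    # A random non-negative PSF as a stand-in key:
    # psf = np.abs(np.random.randn(64, 64)); psf /= psf.sum()
    # cipher = incoherent_encode(plain_image, psf)
    ```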

  17. Corpus callosum segmentation using deep neural networks with prior information from multi-atlas images

    NASA Astrophysics Data System (ADS)

    Park, Gilsoon; Hong, Jinwoo; Lee, Jong-Min

    2018-03-01

    In the human brain, the Corpus Callosum (CC) is the largest white matter structure, connecting the right and left hemispheres. Structural features such as the shape and size of the CC in the midsagittal plane are of great significance for analyzing various neurological diseases, for example Alzheimer's disease, autism and epilepsy. For quantitative and qualitative studies of the CC in brain MR images, robust segmentation of the CC is important. In this paper, we present a novel method for CC segmentation. Our approach is based on deep neural networks and prior information generated from multi-atlas images. Deep neural networks have recently shown good performance in various image processing fields, and convolutional neural networks (CNNs) have shown outstanding performance for classification and segmentation in medical imaging. We used convolutional neural networks for CC segmentation. Multi-atlas-based segmentation models have been widely used in medical image segmentation because an atlas, consisting of MR images and corresponding manual segmentations of the target structure, carries powerful information about the structure to be segmented. We combined prior information, such as the location and intensity distribution of the target structure (i.e., the CC), generated from multi-atlas images, into the CNN training process to further improve training. The CNN with prior information showed better segmentation performance than the CNN without it.

  18. High performance optical encryption based on computational ghost imaging with QR code and compressive sensing technique

    NASA Astrophysics Data System (ADS)

    Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan

    2015-10-01

    In this paper, we propose a high performance optical encryption (OE) scheme based on computational ghost imaging (GI) with a QR code and the compressive sensing (CS) technique, named the QR-CGI-OE scheme. N random phase screens, generated by Alice, serve as the secret key and are shared with the authorized user, Bob. The information is first encoded by Alice with a QR code, and the QR-coded image is then encrypted with the aid of a computational ghost imaging optical system. Here, the measurement results from the GI optical system's bucket detector constitute the encrypted information and are transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image using GI and CS techniques, and further recovers the information by QR decoding. The experimental and numerically simulated results show that authorized users can recover the original image completely, whereas eavesdroppers cannot acquire any information about the image even when the eavesdropping ratio (ER) is up to 60% for the given number of measurements. For the proposed scheme, the number of bits sent from Alice to Bob is reduced considerably and the robustness is enhanced significantly. Meanwhile, the number of measurements in the GI system is reduced and the quality of the reconstructed QR-coded image is improved.
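    For context, the conventional correlation-based ghost-imaging reconstruction (without the compressive-sensing refinement used in the paper) can be sketched as below: Bob correlates the shared illumination patterns with the received bucket-detector values to estimate the QR-coded image. The array names are illustrative only.

    ```python
    import numpy as np

    def gi_reconstruct(patterns, bucket):
        """Correlation ghost-imaging estimate <B*I> - <B><I>.
        patterns: (K, H, W) array of the K shared illumination patterns (the key).
        bucket:   (K,) array of single-pixel (bucket) measurements sent to Bob."""
        patterns = patterns.astype(np.float64)
        bucket = bucket.astype(np.float64)
        return (bucket[:, None, None] * patterns).mean(axis=0) \
            - bucket.mean() * patterns.mean(axis=0)
    ```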

  19. 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display.

    PubMed

    Fan, Zhencheng; Weng, Yitong; Chen, Guowen; Liao, Hongen

    2017-07-01

    Three-dimensional (3D) visualization of preoperative and intraoperative medical information is becoming more and more important in minimally invasive surgery. We develop a 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display for surgeons to observe the surgical target intuitively. The spatial information of regions of interest (ROIs) is captured by the mobile device and transferred to a server for further image processing. Triangular patches of intraoperative data with texture are calculated with a dimension-reduced triangulation algorithm and a projection-weighted mapping algorithm. A point cloud selection-based warm-start iterative closest point (ICP) algorithm is also developed for fusion of the reconstructed 3D intraoperative image and the preoperative image. The fusion images are rendered for 3D autostereoscopic display using integral videography (IV) technology. Moreover, the 3D visualization of the medical image corresponding to the observer's viewing direction is updated automatically using a mutual information registration method. Experimental results show that the spatial position error between the IV-based 3D autostereoscopic fusion image and the actual object was 0.38±0.92 mm (n=5). The system can be utilized in telemedicine, operating education, surgical planning, navigation, etc. to acquire spatial information conveniently and display surgical information intuitively. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Infrared Ship Target Segmentation Based on Spatial Information Improved FCM.

    PubMed

    Bai, Xiangzhi; Chen, Zhiguo; Zhang, Yu; Liu, Zhaoying; Lu, Yi

    2016-12-01

    Segmentation of infrared (IR) ship images is always a challenging task because of intensity inhomogeneity and noise. Fuzzy C-means (FCM) clustering is a classical method widely used in image segmentation. However, it has some shortcomings, such as not considering spatial information and being sensitive to noise. In this paper, an improved FCM method based on spatial information is proposed for IR ship target segmentation. The improvements include two parts: 1) adding nonlocal spatial information based on the ship target and 2) using the spatial shape information of the contour of the ship target to refine the local spatial constraint by a Markov random field. In addition, the results of K-means are used to initialize the improved FCM method. Experimental results show that the improved method is effective and performs better than the existing methods, including the existing FCM methods, for segmentation of IR ship images.
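    For reference, the baseline FCM algorithm that the paper improves (with no spatial term) can be sketched as follows; it alternates membership and center updates on flattened pixel intensities. This is a generic sketch, not the proposed spatially constrained method.

    ```python
    import numpy as np

    def fcm(pixels, n_clusters=2, m=2.0, n_iter=50, seed=0):
        """Baseline fuzzy C-means on a flattened intensity image.
        Returns the membership matrix (N, C) and the cluster centers (C,)."""
        rng = np.random.default_rng(seed)
        x = pixels.reshape(-1, 1).astype(np.float64)
        u = rng.random((x.shape[0], n_clusters))
        u /= u.sum(axis=1, keepdims=True)
        for _ in range(n_iter):
            um = u ** m
            centers = (um.T @ x) / um.sum(axis=0)[:, None]     # (C, 1)
            dist = np.abs(x - centers.T) + 1e-12               # (N, C)
            u = 1.0 / (dist ** (2.0 / (m - 1.0)))              # membership update
            u /= u.sum(axis=1, keepdims=True)
        return u, centers.ravel()
    ```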

  1. A Method of Spatial Mapping and Reclassification for High-Spatial-Resolution Remote Sensing Image Classification

    PubMed Central

    Wang, Guizhou; Liu, Jianbo; He, Guojin

    2013-01-01

    This paper presents a new classification method for high-spatial-resolution remote sensing images based on a strategic mechanism of spatial mapping and reclassification. The proposed method includes four steps. First, the multispectral image is classified by a traditional pixel-based classification method (support vector machine). Second, the panchromatic image is subdivided by watershed segmentation. Third, the pixel-based multispectral image classification result is mapped to the panchromatic segmentation result based on a spatial mapping mechanism and the area dominant principle. During the mapping process, an area proportion threshold is set, and the regional property is defined as unclassified if the maximum area proportion does not surpass the threshold. Finally, unclassified regions are reclassified based on spectral information using the minimum distance to mean algorithm. Experimental results show that the classification method for high-spatial-resolution remote sensing images based on the spatial mapping mechanism and reclassification strategy can make use of both panchromatic and multispectral information, integrate the pixel- and object-based classification methods, and improve classification accuracy. PMID:24453808
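    The spatial mapping step with the area dominant principle can be sketched as below: each watershed segment takes the majority pixel-based class if its area proportion exceeds a threshold, and is otherwise left unclassified for the later spectral reclassification. This is an illustrative sketch with assumed array inputs, not the authors' implementation.

    ```python
    import numpy as np

    def map_and_reclassify(pixel_labels, segments, area_threshold=0.6, unclassified=-1):
        """Assign each segment the dominant pixel-based class if its area proportion
        exceeds `area_threshold`; otherwise mark the segment as unclassified.
        pixel_labels and segments are integer arrays of the same (H, W) shape."""
        seg_labels = np.full(pixel_labels.shape, unclassified, dtype=int)
        for seg_id in np.unique(segments):
            mask = segments == seg_id
            classes, counts = np.unique(pixel_labels[mask], return_counts=True)
            if counts.max() / counts.sum() >= area_threshold:
                seg_labels[mask] = classes[counts.argmax()]
        return seg_labels
    ```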

  2. Feedback mechanism for smart nozzles and nebulizers

    DOEpatents

    Montaser, Akbar [Potomac, MD; Jorabchi, Kaveh [Arlington, VA; Kahen, Kaveh [Kleinburg, CA

    2009-01-27

    Nozzles and nebulizers able to produce aerosol with optimum and reproducible quality based on feedback information obtained using laser imaging techniques. Two laser-based imaging techniques, based on particle image velocimetry (PIV) and optical patternation, map and contrast size and velocity distributions for indirect and direct pneumatic nebulizations in plasma spectrometry. Two pulses from a thin laser sheet with a known time difference illuminate the droplet flow field. A charge-coupled device (CCD) captures the scattering of laser light from the droplets, providing two instantaneous particle images. Pointwise cross-correlation of corresponding images yields a two-dimensional map of the aerosol velocity field. For droplet size distribution studies, the solution is doped with a fluorescent dye and both laser-induced fluorescence (LIF) and Mie scattering images are captured simultaneously by two CCDs with the same field of view. The ratio of the LIF and Mie images provides relative droplet size information, which is then scaled by a point calibration method via a phase Doppler particle analyzer.

  3. Image enhancement based on in vivo hyperspectral gastroscopic images: a case study

    NASA Astrophysics Data System (ADS)

    Gu, Xiaozhou; Han, Zhimin; Yao, Liqing; Zhong, Yunshi; Shi, Qiang; Fu, Ye; Liu, Changsheng; Wang, Xiguang; Xie, Tianyu

    2016-10-01

    Hyperspectral imaging (HSI) has been recognized as a powerful tool for noninvasive disease detection in the gastrointestinal field. However, most of the studies on HSI in this field have involved ex vivo biopsies or resected tissues. We proposed an image enhancement method based on in vivo hyperspectral gastroscopic images. First, we developed a flexible gastroscopy system capable of obtaining in vivo hyperspectral images of different types of stomach disease mucosa. Then, for a specific object, an appropriate band selection algorithm based on information dependence was employed to determine a subset of spectral bands that yield useful spatial information. Finally, these bands were assigned as the color components of an enhanced image of the object. A gastric ulcer case study demonstrated that our method yields higher color tone contrast, which enhances the display of the gastric ulcer regions, and that it will be valuable in clinical applications.

  4. Knowledge Translation and Barriers to Imaging Optimization in the Emergency Department: A Research Agenda.

    PubMed

    Probst, Marc A; Dayan, Peter S; Raja, Ali S; Slovis, Benjamin H; Yadav, Kabir; Lam, Samuel H; Shapiro, Jason S; Farris, Coreen; Babcock, Charlene I; Griffey, Richard T; Robey, Thomas E; Fortin, Emily M; Johnson, Jamlik O; Chong, Suzanne T; Davenport, Moira; Grigat, Daniel W; Lang, Eddy L

    2015-12-01

    Researchers have attempted to optimize imaging utilization by describing which clinical variables are more predictive of acute disease and, conversely, what combination of variables can obviate the need for imaging. These results are then used to develop evidence-based clinical pathways, clinical decision instruments, and clinical practice guidelines. Despite the validation of these results in subsequent studies, with some demonstrating improved outcomes, their actual use is often limited. This article outlines a research agenda to promote the dissemination and implementation (also known as knowledge translation) of evidence-based interventions for emergency department (ED) imaging, i.e., clinical pathways, clinical decision instruments, and clinical practice guidelines. We convened a multidisciplinary group of stakeholders and held online and telephone discussions over a 6-month period culminating in an in-person meeting at the 2015 Academic Emergency Medicine consensus conference. We identified the following four overarching research questions: 1) what determinants (barriers and facilitators) influence emergency physicians' use of evidence-based interventions when ordering imaging in the ED; 2) what implementation strategies at the institutional level can improve the use of evidence-based interventions for ED imaging; 3) what interventions at the health care policy level can facilitate the adoption of evidence-based interventions for ED imaging; and 4) how can health information technology, including electronic health records, clinical decision support, and health information exchanges, be used to increase awareness, use, and adherence to evidence-based interventions for ED imaging? Advancing research that addresses these questions will provide valuable information as to how we can use evidence-based interventions to optimize imaging utilization and ultimately improve patient care. © 2015 by the Society for Academic Emergency Medicine.

  5. Study on the application of MRF and the D-S theory to image segmentation of the human brain and quantitative analysis of the brain tissue

    NASA Astrophysics Data System (ADS)

    Guan, Yihong; Luo, Yatao; Yang, Tao; Qiu, Lei; Li, Junchang

    2012-01-01

    The spatial information features of the Markov random field (MRF) image model were used in image segmentation; they can effectively remove noise and yield more accurate segmentation results. Based on the fuzziness and clustering of pixel grayscale information, we find the clustering centers of the different tissues and the background of a medical image through the fuzzy c-means clustering method. We then find the threshold points for multi-threshold segmentation through a two-dimensional histogram method and segment the image accordingly. Multivariate information is fused based on the Dempster-Shafer evidence theory to obtain image fusion and segmentation. This paper adopts the above three theories to propose a new human brain image segmentation method. Experimental results show that the segmentation result is more in line with human vision and is of vital significance for the accurate analysis and application of brain tissues.

  6. Biomedical image representation approach using visualness and spatial information in a concept feature space for interactive region-of-interest-based retrieval.

    PubMed

    Rahman, Md Mahmudur; Antani, Sameer K; Demner-Fushman, Dina; Thoma, George R

    2015-10-01

    This article presents an approach to biomedical image retrieval by mapping image regions to local concepts where images are represented in a weighted entropy-based concept feature space. The term "concept" refers to perceptually distinguishable visual patches that are identified locally in image regions and can be mapped to a glossary of imaging terms. Further, the visual significance (e.g., visualness) of concepts is measured as the Shannon entropy of pixel values in image patches and is used to refine the feature vector. Moreover, the system can assist the user in interactively selecting a region-of-interest (ROI) and searching for similar image ROIs. Further, a spatial verification step is used as a postprocessing step to improve retrieval results based on location information. The hypothesis that such approaches would improve biomedical image retrieval is validated through experiments on two different data sets, which are collected from open access biomedical literature.
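    The "visualness" weight described above is the Shannon entropy of pixel values within a patch, which can be computed as in the following sketch (a generic histogram-based entropy with an assumed bin count, not the authors' exact implementation).

    ```python
    import numpy as np

    def patch_visualness(patch, bins=32):
        """Shannon entropy (in bits) of the pixel values in an image patch,
        used as a 'visualness' weight for the concept mapped to that patch."""
        hist, _ = np.histogram(patch, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())
    ```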

  7. Biomedical image representation approach using visualness and spatial information in a concept feature space for interactive region-of-interest-based retrieval

    PubMed Central

    Rahman, Md. Mahmudur; Antani, Sameer K.; Demner-Fushman, Dina; Thoma, George R.

    2015-01-01

    Abstract. This article presents an approach to biomedical image retrieval by mapping image regions to local concepts where images are represented in a weighted entropy-based concept feature space. The term “concept” refers to perceptually distinguishable visual patches that are identified locally in image regions and can be mapped to a glossary of imaging terms. Further, the visual significance (e.g., visualness) of concepts is measured as the Shannon entropy of pixel values in image patches and is used to refine the feature vector. Moreover, the system can assist the user in interactively selecting a region-of-interest (ROI) and searching for similar image ROIs. Further, a spatial verification step is used as a postprocessing step to improve retrieval results based on location information. The hypothesis that such approaches would improve biomedical image retrieval is validated through experiments on two different data sets, which are collected from open access biomedical literature. PMID:26730398

  8. [An improved medical image fusion algorithm and quality evaluation].

    PubMed

    Chen, Meiling; Tao, Ling; Qian, Zhiyu

    2009-08-01

    Medical image fusion is of great value for applications in medical image analysis and diagnosis. In this paper, the conventional wavelet fusion method is improved: a new medical image fusion algorithm is presented in which the high-frequency and low-frequency coefficients are treated separately. When high-frequency coefficients are chosen, the regional edge intensities of each sub-image are calculated to realize adaptive fusion. The choice of low-frequency coefficients is based on the edges of the images, so that the fused image preserves all useful information and appears more distinct. We apply the conventional and the improved wavelet-transform fusion algorithms to fuse two images of the human body and evaluate the fusion results through a quality evaluation method. Experimental results show that this algorithm can effectively retain the details of the original images and enhance their edge and texture features. This new algorithm is better than the conventional fusion algorithm based on the wavelet transform.
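    For comparison, the conventional wavelet-transform fusion baseline that the improved algorithm builds on can be sketched with PyWavelets as below: the approximation (low-frequency) coefficients are averaged and the larger-magnitude detail (high-frequency) coefficient is kept at each position. The paper's improvement instead selects detail coefficients by regional edge intensity and low-frequency coefficients from image edges; the wavelet name and decomposition level here are arbitrary choices.

    ```python
    import numpy as np
    import pywt

    def wavelet_fuse(img_a, img_b, wavelet='db2', level=2):
        """Conventional wavelet-domain fusion of two same-sized grayscale images:
        average the approximation band, keep the max-magnitude detail coefficients."""
        ca = pywt.wavedec2(img_a.astype(np.float64), wavelet, level=level)
        cb = pywt.wavedec2(img_b.astype(np.float64), wavelet, level=level)
        fused = [(ca[0] + cb[0]) / 2.0]                    # low-frequency: average
        for da, db in zip(ca[1:], cb[1:]):                 # detail bands per level
            fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                               for a, b in zip(da, db)))
        return pywt.waverec2(fused, wavelet)
    ```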

  9. Radiomics: Extracting more information from medical images using advanced feature analysis

    PubMed Central

    Lambin, Philippe; Rios-Velazquez, Emmanuel; Leijenaar, Ralph; Carvalho, Sara; van Stiphout, Ruud G.P.M.; Granton, Patrick; Zegers, Catharina M.L.; Gillies, Robert; Boellard, Ronald; Dekker, André; Aerts, Hugo J.W.L.

    2015-01-01

    Solid cancers are spatially and temporally heterogeneous. This limits the use of invasive biopsy-based molecular assays but gives huge potential to medical imaging, which has the ability to capture intra-tumoural heterogeneity in a non-invasive way. During the past decades, medical imaging innovations, with new hardware, new imaging agents and standardised protocols, have allowed the field to move towards quantitative imaging. Therefore, the development of automated and reproducible analysis methodologies to extract more information from image-based features is also a requirement. Radiomics – the high-throughput extraction of large amounts of image features from radiographic images – addresses this problem and is one of the approaches that hold great promise but need further validation in multi-centric settings and in the laboratory. PMID:22257792

  10. Image/video understanding systems based on network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-03-01

    Vision is a part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, which is an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks, and the human brain is found to emulate similar graph/network models. Symbols, predicates and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns; spatial logic and topology are naturally present in such structures. Mid-level vision processes, like perceptual grouping and separation of figure from ground, are special kinds of network transformations. They convert the primary image structure into a set of more abstract ones, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. Composition of network-symbolic models combines learning, classification, and analogy together with higher-level model-based reasoning into a single framework, and it works similarly to frames and agents. Computational intelligence methods transform images into model-based knowledge representations. Based on such principles, an image/video understanding system can convert images into knowledge models and resolve uncertainty and ambiguity. This allows the creation of intelligent computer vision systems for design and manufacturing.

  11. Accelerometer-Based Method for Extracting Respiratory and Cardiac Gating Information for Dual Gating during Nuclear Medicine Imaging

    PubMed Central

    Pänkäälä, Mikko; Paasio, Ari

    2014-01-01

    Both respiratory and cardiac motions reduce the quality and consistency of medical imaging, specifically in nuclear medicine imaging. Motion artifacts can be eliminated by gating the image acquisition based on the respiratory phase and cardiac contractions throughout the medical imaging procedure. Electrocardiography (ECG), 3-axis accelerometer, and respiration belt data were processed and analyzed from ten healthy volunteers. Seismocardiography (SCG) is a noninvasive accelerometer-based method that measures accelerations caused by respiration and myocardial movements. This study was conducted to investigate the feasibility of the accelerometer-based method in a dual gating technique. The SCG provides accelerometer-derived respiratory (ADR) data and accurate information about quiescent phases within the cardiac cycle. Correct information about the status of the ventricles and atria helps us to create an improved estimate of the quiescent phases within a cardiac cycle. The correlation of ADR signals with the reference respiration belt was investigated using the Pearson correlation. High linear correlation was observed between the accelerometer-based measurement and the reference measurement methods (ECG and respiration belt). Above all, due to the simplicity of the proposed method, the technique has high potential to be applied in dual gating in clinical cardiac positron emission tomography (PET) to obtain motion-free images in the future. PMID:25120563
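    A simple way to obtain an accelerometer-derived respiration (ADR) curve, in the spirit described above, is zero-phase low-pass filtering of one accelerometer axis, since respiratory motion occupies much lower frequencies than the cardiac SCG components. The cutoff frequency and filter order below are illustrative assumptions, not values from the study.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def accelerometer_derived_respiration(acc, fs, cutoff=0.5, order=4):
        """Extract an ADR curve from a 1-D chest-accelerometer signal `acc`
        sampled at `fs` Hz by zero-phase low-pass filtering below `cutoff` Hz."""
        b, a = butter(order, cutoff / (fs / 2.0), btype='low')
        return filtfilt(b, a, np.asarray(acc, dtype=np.float64))
    ```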

  12. A Dynamic Graph Cuts Method with Integrated Multiple Feature Maps for Segmenting Kidneys in 2D Ultrasound Images.

    PubMed

    Zheng, Qiang; Warner, Steven; Tasian, Gregory; Fan, Yong

    2018-02-12

    Automatic segmentation of kidneys in ultrasound (US) images remains a challenging task because of high speckle noise, low contrast, and large appearance variations of kidneys in US images. Because texture features may improve the US image segmentation performance, we propose a novel graph cuts method to segment kidney in US images by integrating image intensity information and texture feature maps. We develop a new graph cuts-based method to segment kidney US images by integrating original image intensity information and texture feature maps extracted using Gabor filters. To handle large appearance variation within kidney images and improve computational efficiency, we build a graph of image pixels close to kidney boundary instead of building a graph of the whole image. To make the kidney segmentation robust to weak boundaries, we adopt localized regional information to measure similarity between image pixels for computing edge weights to build the graph of image pixels. The localized graph is dynamically updated and the graph cuts-based segmentation iteratively progresses until convergence. Our method has been evaluated based on kidney US images of 85 subjects. The imaging data of 20 randomly selected subjects were used as training data to tune parameters of the image segmentation method, and the remaining data were used as testing data for validation. Experiment results demonstrated that the proposed method obtained promising segmentation results for bilateral kidneys (average Dice index = 0.9446, average mean distance = 2.2551, average specificity = 0.9971, average accuracy = 0.9919), better than other methods under comparison (P < .05, paired Wilcoxon rank sum tests). The proposed method achieved promising performance for segmenting kidneys in two-dimensional US images, better than segmentation methods built on any single channel of image information. This method will facilitate extraction of kidney characteristics that may predict important clinical outcomes such as progression of chronic kidney disease. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
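    The Gabor texture maps used alongside image intensity can be generated as in the following sketch (a generic OpenCV-based filter bank with assumed kernel parameters; the paper's actual filter settings and the graph-cuts machinery are not reproduced here).

    ```python
    import cv2
    import numpy as np

    def gabor_feature_maps(image, n_orient=6, ksize=21, sigma=4.0, lambd=10.0, gamma=0.5):
        """Stack of Gabor filter responses (one per orientation) for a grayscale
        ultrasound image; such texture maps can complement raw intensity when
        building a segmentation graph."""
        img = image.astype(np.float32)
        maps = []
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            kern = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, 0,
                                      ktype=cv2.CV_32F)
            maps.append(cv2.filter2D(img, cv2.CV_32F, kern))
        return np.stack(maps, axis=-1)
    ```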

  13. Combined X-ray CT and mass spectrometry for biomedical imaging applications

    NASA Astrophysics Data System (ADS)

    Schioppa, E., Jr.; Ellis, S.; Bruinen, A. L.; Visser, J.; Heeren, R. M. A.; Uher, J.; Koffeman, E.

    2014-04-01

    Imaging technologies play a key role in many branches of science, especially in biology and medicine. They provide an invaluable insight into both internal structure and processes within a broad range of samples. There are many techniques that allow one to obtain images of an object. Different techniques are based on the analysis of a particular sample property by means of a dedicated imaging system, and as such, each imaging modality provides the researcher with different information. The use of multimodal imaging (imaging with several different techniques) can provide additional and complementary information that is not possible when employing a single imaging technique alone. In this study, we present for the first time a multi-modal imaging technique where X-ray computerized tomography (CT) is combined with mass spectrometry imaging (MSI). While X-ray CT provides 3-dimensional information regarding the internal structure of the sample based on X-ray absorption coefficients, MSI of thin sections acquired from the same sample allows the spatial distribution of many elements/molecules, each distinguished by its unique mass-to-charge ratio (m/z), to be determined within a single measurement and with a spatial resolution as low as 1 μm or even less. The aim of the work is to demonstrate how molecular information from MSI can be spatially correlated with 3D structural information acquired from X-ray CT. In these experiments, frozen samples are imaged in an X-ray CT setup using Medipix based detectors equipped with a CO2 cooled sample holder. Single projections are pre-processed before tomographic reconstruction using a signal-to-thickness calibration. In the second step, the object is sliced into thin sections (circa 20 μm) that are then imaged using both matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) and secondary ion (SIMS) mass spectrometry, where the spatial distribution of specific molecules within the sample is determined. The combination of two vastly different imaging approaches provides complementary information (i.e., anatomical and molecular distributions) that allows the correlation of distinct structural features with specific molecules distributions leading to unique insights in disease development.

  14. Medical Image Databases

    PubMed Central

    Tagare, Hemant D.; Jaffe, C. Carl; Duncan, James

    1997-01-01

    Abstract Information contained in medical images differs considerably from that residing in alphanumeric format. The difference can be attributed to four characteristics: (1) the semantics of medical knowledge extractable from images is imprecise; (2) image information contains form and spatial data, which are not expressible in conventional language; (3) a large part of image information is geometric; (4) diagnostic inferences derived from images rest on an incomplete, continuously evolving model of normality. This paper explores the differentiating characteristics of text versus images and their impact on design of a medical image database intended to allow content-based indexing and retrieval. One strategy for implementing medical image databases is presented, which employs object-oriented iconic queries, semantics by association with prototypes, and a generic schema. PMID:9147338

  15. Using In Vitro Live-cell Imaging to Explore Chemotherapeutics Delivered by Lipid-based Nanoparticles.

    PubMed

    Seynhaeve, Ann L B; Ten Hagen, Timo L M

    2017-11-01

    Conventional imaging techniques can provide detailed information about cellular processes. However, this information is based on static images in an otherwise dynamic system, and successive phases are easily overlooked or misinterpreted. Live-cell imaging and time-lapse microscopy, in which living cells can be followed for hours or even days in a more or less continuous fashion, are therefore very informative. The protocol described here allows for the investigation of the fate of chemotherapeutic nanoparticles after the delivery of doxorubicin (dox) in living cells. Dox is an intercalating agent that must be released from its nanocarrier to become biologically active. In spite of its clinical registration for more than two decades, its uptake, breakdown, and drug release are still not fully understood. This article explores the hypothesis that lipid-based nanoparticles are taken up by the tumor cells and are slowly degraded. Released dox is then translocated to the nucleus. To prevent fixation artifacts, live-cell imaging and time-lapse microscopy, described in this experimental procedure, can be applied.

  16. Breast Histopathological Image Retrieval Based on Latent Dirichlet Allocation.

    PubMed

    Ma, Yibing; Jiang, Zhiguo; Zhang, Haopeng; Xie, Fengying; Zheng, Yushan; Shi, Huaqiang; Zhao, Yu

    2017-07-01

    In the field of pathology, the whole slide image (WSI) has become the major carrier of visual and diagnostic information. Content-based image retrieval among WSIs can aid the diagnosis of an unknown pathological image by finding its similar regions in WSIs with diagnostic information. However, the huge size and complex content of WSIs pose several challenges for retrieval. In this paper, we propose an unsupervised, accurate, and fast retrieval method for breast histopathological images. Specifically, the method presents a local statistical feature describing the morphology and distribution of nuclei, and employs Gabor features to describe the texture information. The latent Dirichlet allocation model is utilized for high-level semantic mining. Locality-sensitive hashing is used to speed up the search. Experiments on a WSI database with more than 8000 images from 15 types of breast histopathology demonstrate that our method achieves about 0.9 retrieval precision as well as promising efficiency. Based on the proposed framework, we are developing a search engine for an online digital slide browsing and retrieval platform, which can be applied in computer-aided diagnosis, pathology education, and WSI archiving and management.

  17. Mobile object retrieval in server-based image databases

    NASA Astrophysics Data System (ADS)

    Manger, D.; Pagel, F.; Widak, H.

    2013-05-01

    The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To utilize the information in the images on site, image retrieval systems that search for similar objects in one's own image database are becoming more and more popular. As the computational performance and the memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client-server architecture working with local features. On the server side, the scalability to large image databases is addressed with the popular bag-of-words model with state-of-the-art extensions. The client end of the system focuses on a lightweight user interface presenting the most similar images of the database and highlighting the visual information that is common with the query image. Additionally, new images can be added to the database, making it a powerful and interactive tool for mobile content-based image retrieval.
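    The server-side bag-of-words representation can be sketched as below: local descriptors from many images are clustered into a visual vocabulary, and each image is then represented by a normalized histogram of visual-word occurrences. This is a minimal scikit-learn-based illustration with an assumed vocabulary size, without the state-of-the-art extensions mentioned above.

    ```python
    import numpy as np
    from sklearn.cluster import MiniBatchKMeans

    def build_vocabulary(descriptors, n_words=1000):
        """Cluster local descriptors (e.g. SIFT, stacked as an (N, 128) array)
        into a visual vocabulary; the cluster centers are the visual words."""
        return MiniBatchKMeans(n_clusters=n_words, random_state=0).fit(descriptors)

    def bow_histogram(vocab, image_descriptors):
        """Represent one image as a normalized histogram of visual-word counts."""
        words = vocab.predict(image_descriptors)
        hist = np.bincount(words, minlength=vocab.n_clusters).astype(np.float64)
        return hist / max(hist.sum(), 1.0)
    ```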

  18. Medical Image Retrieval: A Multimodal Approach

    PubMed Central

    Cao, Yu; Steffey, Shawn; He, Jianbiao; Xiao, Degui; Tao, Cui; Chen, Ping; Müller, Henning

    2014-01-01

    Medical imaging is becoming a vital component of the war on cancer. Tremendous amounts of medical image data are captured and recorded in a digital format during cancer care and cancer research. Facing such an unprecedented volume of image data with heterogeneous image modalities, it is necessary to develop effective and efficient content-based medical image retrieval systems for cancer clinical practice and research. While substantial progress has been made in different areas of content-based image retrieval (CBIR) research, direct applications of existing CBIR techniques to medical images have produced unsatisfactory results because of the unique characteristics of medical images. In this paper, we develop a new multimodal medical image retrieval approach based on recent advances in statistical graphical models and deep learning. Specifically, we first investigate a new extended probabilistic Latent Semantic Analysis model to integrate the visual and textual information from medical images to bridge the semantic gap. We then develop a new deep Boltzmann machine-based multimodal learning model to learn the joint density model from multimodal information in order to derive the missing modality. Experimental results with a large volume of real-world medical images have shown that our new approach is a promising solution for the next-generation medical imaging indexing and retrieval system. PMID:26309389

  19. The algorithm of fast image stitching based on multi-feature extraction

    NASA Astrophysics Data System (ADS)

    Yang, Chunde; Wu, Ge; Shi, Jing

    2018-05-01

    This paper proposes an improved image registration method combining Hu-invariant-moment contour information with feature point detection, aiming to solve problems of traditional image stitching algorithms such as the time-consuming feature point extraction process, redundant invalid information, and inefficiency. First, the neighborhood of pixels is used to extract contour information, and the Hu invariant moments are employed as a similarity measure so that SIFT feature points are extracted only in similar regions. Then the Euclidean distance is replaced with the Hellinger kernel function to improve the initial matching efficiency and reduce mismatched points, and the affine transformation matrix between the images is estimated. Finally, a local color mapping method is adopted to correct uneven exposure, and an improved multi-resolution fusion algorithm is used to fuse the mosaic images and realize seamless stitching. Experimental results confirm the high accuracy and efficiency of the method proposed in this paper.
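    The Hu invariant moments used as the contour similarity measure can be computed with OpenCV as in the sketch below (assuming an 8-bit grayscale image and OpenCV 4; Otsu thresholding and largest-contour selection are simplified stand-ins for the paper's neighborhood-based contour extraction).

    ```python
    import cv2
    import numpy as np

    def contour_hu_signature(gray):
        """Hu invariant-moment signature of the dominant contour in an 8-bit
        grayscale image; comparing such signatures gives a scale- and
        rotation-invariant contour similarity."""
        _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        largest = max(contours, key=cv2.contourArea)
        hu = cv2.HuMoments(cv2.moments(largest)).flatten()
        # Log-scale the moments so the seven values are comparable in magnitude.
        return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
    ```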

  20. Infrared and visible image fusion based on visual saliency map and weighted least square optimization

    NASA Astrophysics Data System (ADS)

    Ma, Jinlei; Zhou, Zhiqiang; Wang, Bo; Zong, Hua

    2017-05-01

    The goal of infrared (IR) and visible image fusion is to produce a more informative image for human observation or some other computer vision tasks. In this paper, we propose a novel multi-scale fusion method based on visual saliency map (VSM) and weighted least square (WLS) optimization, aiming to overcome some common deficiencies of conventional methods. Firstly, we introduce a multi-scale decomposition (MSD) using the rolling guidance filter (RGF) and Gaussian filter to decompose input images into base and detail layers. Compared with conventional MSDs, this MSD can achieve the unique property of preserving the information of specific scales and reducing halos near edges. Secondly, we argue that the base layers obtained by most MSDs would contain a certain amount of residual low-frequency information, which is important for controlling the contrast and overall visual appearance of the fused image, and the conventional "averaging" fusion scheme is unable to achieve desired effects. To address this problem, an improved VSM-based technique is proposed to fuse the base layers. Lastly, a novel WLS optimization scheme is proposed to fuse the detail layers. This optimization aims to transfer more visual details and less irrelevant IR details or noise into the fused image. As a result, the fused image details would appear more naturally and be suitable for human visual perception. Experimental results demonstrate that our method can achieve a superior performance compared with other fusion methods in both subjective and objective assessments.

  1. A comparative study of multi-focus image fusion validation metrics

    NASA Astrophysics Data System (ADS)

    Giansiracusa, Michael; Lutz, Adam; Messer, Neal; Ezekiel, Soundararajan; Alford, Mark; Blasch, Erik; Bubalo, Adnan; Manno, Michael

    2016-05-01

    Fusion of visual information from multiple sources is relevant for applications in security, transportation, and safety. One way that image fusion can be particularly useful is when fusing imagery data from multiple levels of focus. Different focus levels create different visual qualities in different regions of the imagery, which can provide much more visual information to analysts when fused. Multi-focus image fusion would benefit a user through automation, which requires evaluating the fused images to determine whether the focused regions of each image have been properly fused. Many no-reference metrics, such as information-theory-based, image-feature-based, and structural-similarity-based measures, have been developed to accomplish such comparisons. However, accurate assessment of visual quality is hard to scale, and these metrics must be validated for different types of applications. To do this, human-perception-based validation methods have been developed, particularly using receiver operating characteristic (ROC) curves and the area under them (AUC). Our study uses these to analyze the effectiveness of no-reference image fusion metrics applied to multi-resolution fusion methods in order to determine which should be used when dealing with multi-focus data. Preliminary results show that the Tsallis, SF, and spatial frequency metrics are consistent with the image quality and peak signal to noise ratio (PSNR).
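
    The ROC/AUC validation step can be illustrated with the snippet below; the labels and metric scores are invented placeholders for human quality judgments and one candidate no-reference metric, and scikit-learn's roc_auc_score stands in for the study's analysis pipeline.

        # Hypothetical example: how well a no-reference fusion metric separates
        # fused images that humans judged "good" from those judged "poor".
        import numpy as np
        from sklearn.metrics import roc_auc_score

        human_labels = np.array([1, 1, 0, 1, 0, 0, 1, 0])                            # 1 = judged good (placeholder)
        metric_scores = np.array([0.82, 0.74, 0.40, 0.66, 0.35, 0.52, 0.91, 0.30])   # candidate metric values

        auc = roc_auc_score(human_labels, metric_scores)
        print(f"AUC of the metric against human perception: {auc:.2f}")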

  2. Ontology-guided organ detection to retrieve web images of disease manifestation: towards the construction of a consumer-based health image library.

    PubMed

    Chen, Yang; Ren, Xiaofeng; Zhang, Guo-Qiang; Xu, Rong

    2013-01-01

    Visual information is a crucial aspect of medical knowledge. Building a comprehensive medical image base, in the spirit of the Unified Medical Language System (UMLS), would greatly benefit patient education and self-care. However, collection and annotation of such a large-scale image base is challenging. To combine visual object detection techniques with medical ontology to automatically mine web photos and retrieve a large number of disease manifestation images with minimal manual labeling effort. As a proof of concept, we first learnt five organ detectors on three detection scales for eyes, ears, lips, hands, and feet. Given a disease, we used information from the UMLS to select affected body parts, ran the pretrained organ detectors on web images, and combined the detection outputs to retrieve disease images. Compared with a supervised image retrieval approach that requires training images for every disease, our ontology-guided approach exploits shared visual information of body parts across diseases. In retrieving 2220 web images of 32 diseases, we reduced manual labeling effort to 15.6% while improving the average precision by 3.9% from 77.7% to 81.6%. For 40.6% of the diseases, we improved the precision by 10%. The results confirm the concept that the web is a feasible source for automatic disease image retrieval for health image database construction. Our approach requires a small amount of manual effort to collect complex disease images, and to annotate them by standard medical ontology terms.

  3. Image-based aircraft pose estimation: a comparison of simulations and real-world data

    NASA Astrophysics Data System (ADS)

    Breuers, Marcel G. J.; de Reus, Nico

    2001-10-01

    The problem of estimating aircraft pose information from monocular image data is considered using a Fourier descriptor based algorithm. The dependence of pose estimation accuracy on image resolution and aspect angle is investigated through simulations using sets of synthetic aircraft images. Further evaluation shows that good pose estimation accuracy can be obtained in real-world image sequences.
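
    As a sketch of the descriptor computation, the snippet below extracts Fourier descriptors from an aircraft silhouette contour; the file name, the Otsu thresholding, and the number of retained coefficients are assumptions, and the matching against a library of rendered aspect angles is omitted.

        # Fourier descriptors of the dominant silhouette contour.
        import cv2
        import numpy as np

        img = cv2.imread("aircraft.png", cv2.IMREAD_GRAYSCALE)            # placeholder image
        _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        contour = max(contours, key=cv2.contourArea).squeeze()            # largest object

        z = contour[:, 0] + 1j * contour[:, 1]                            # boundary as complex signal
        coeffs = np.fft.fft(z)

        # drop the DC term (translation) and normalize by the first harmonic (scale);
        # the resulting low-order descriptor is compared against descriptors of
        # synthetic views rendered at known aspect angles
        descriptor = coeffs[1:16] / np.abs(coeffs[1])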

  4. Multi-sensor image registration based on algebraic projective invariants.

    PubMed

    Li, Bin; Wang, Wei; Ye, Hao

    2013-04-22

    A new automatic feature-based registration algorithm is presented for multi-sensor images with projective deformation. Contours are firstly extracted from both reference and sensed images as basic features in the proposed method. Since it is difficult to design a projective-invariant descriptor from the contour information directly, a new feature named Five Sequential Corners (FSC) is constructed based on the corners detected from the extracted contours. By introducing algebraic projective invariants, we design a descriptor for each FSC that is ensured to be robust against projective deformation. Further, no gray scale related information is required in calculating the descriptor, thus it is also robust against the gray scale discrepancy between the multi-sensor image pairs. Experimental results utilizing real image pairs are presented to show the merits of the proposed registration method.
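
    The kind of algebraic projective invariant used for such descriptors can be sketched as below: for five coplanar points in homogeneous coordinates, suitable ratios of 3x3 determinants are unchanged by any projective transformation. The specific ratio, corner coordinates, and function names here are illustrative and are not taken from the paper's FSC construction.

        # One projective invariant of five coplanar points (ratio of determinant products).
        import numpy as np

        def det3(p, i, j, k):
            """Determinant of the matrix whose columns are homogeneous points i, j, k."""
            return np.linalg.det(np.stack([p[i], p[j], p[k]], axis=1))

        def projective_invariant(pts2d):
            p = [np.array([x, y, 1.0]) for x, y in pts2d]      # homogeneous coordinates
            # each point index appears equally often in numerator and denominator,
            # so the unknown projective scale factors cancel out
            return (det3(p, 0, 1, 2) * det3(p, 0, 3, 4)) / (det3(p, 0, 1, 3) * det3(p, 0, 2, 4))

        corners = [(10.0, 12.0), (54.0, 18.0), (61.0, 47.0), (33.0, 70.0), (8.0, 44.0)]  # hypothetical corner set
        print(projective_invariant(corners))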

  5. New segmentation-based tone mapping algorithm for high dynamic range image

    NASA Astrophysics Data System (ADS)

    Duan, Weiwei; Guo, Huinan; Zhou, Zuofeng; Huang, Huimin; Cao, Jianzhong

    2017-07-01

    Traditional tone mapping algorithms for the display of high dynamic range (HDR) images have the drawback of losing the impression of brightness, contrast and color information. To overcome this, we propose in this paper a new tone mapping algorithm based on dividing the image into different exposure regions. Firstly, the over-exposure region is determined using the Local Binary Pattern information of the HDR image. Then, based on the peak and average gray level of the histogram, the under-exposure and normal-exposure regions of the HDR image are selected separately. Finally, the different exposure regions are mapped by differentiated tone mapping methods to obtain the final result. Experimental results show that the proposed algorithm achieves better performance than other algorithms in both visual quality and an objective contrast criterion.

  6. A sparsity-based simplification method for segmentation of spectral-domain optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Meiniel, William; Gan, Yu; Olivo-Marin, Jean-Christophe; Angelini, Elsa

    2017-08-01

    Optical coherence tomography (OCT) has emerged as a promising image modality to characterize biological tissues. With axio-lateral resolutions at the micron-level, OCT images provide detailed morphological information and enable applications such as optical biopsy and virtual histology for clinical needs. Image enhancement is typically required for morphological segmentation, to improve boundary localization, rather than enrich detailed tissue information. We propose to formulate image enhancement as an image simplification task such that tissue layers are smoothed while contours are enhanced. For this purpose, we exploit a Total Variation sparsity-based image reconstruction, inspired by the Compressed Sensing (CS) theory, but specialized for images with structures arranged in layers. We demonstrate the potential of our approach on OCT human heart and retinal images for layers segmentation. We also compare our image enhancement capabilities to the state-of-the-art denoising techniques.
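
    A minimal stand-in for the simplification step is shown below, using scikit-image's Chambolle Total Variation denoiser in place of the paper's CS-inspired, layer-aware reconstruction; the file names and the regularization weight are assumed values.

        # TV-based simplification of an OCT B-scan: layers become piecewise smooth,
        # boundaries stay sharp, which eases subsequent layer segmentation.
        import numpy as np
        from skimage import io, img_as_float
        from skimage.restoration import denoise_tv_chambolle

        bscan = img_as_float(io.imread("oct_bscan.png", as_gray=True))    # placeholder input
        simplified = denoise_tv_chambolle(bscan, weight=0.15)             # larger weight -> flatter layers

        io.imsave("oct_simplified.png", (np.clip(simplified, 0, 1) * 255).astype(np.uint8))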

  7. Mitigating illumination gradients in a SAR image based on the image data and antenna beam pattern

    DOEpatents

    Doerry, Armin W.

    2013-04-30

    Illumination gradients in a synthetic aperture radar (SAR) image of a target can be mitigated by determining a correction for pixel values associated with the SAR image. This correction is determined based on information indicative of a beam pattern used by a SAR antenna apparatus to illuminate the target, and also based on the pixel values associated with the SAR image. The correction is applied to the pixel values associated with the SAR image to produce corrected pixel values that define a corrected SAR image.

  8. Developing a comprehensive system for content-based retrieval of image and text data from a national survey

    NASA Astrophysics Data System (ADS)

    Antani, Sameer K.; Natarajan, Mukil; Long, Jonathan L.; Long, L. Rodney; Thoma, George R.

    2005-04-01

    The article describes the status of our ongoing R&D at the U.S. National Library of Medicine (NLM) towards the development of an advanced multimedia database biomedical information system that supports content-based image retrieval (CBIR). NLM maintains a collection of 17,000 digitized spinal X-rays along with text survey data from the Second National Health and Nutritional Examination Survey (NHANES II). These data serve as a rich data source for epidemiologists and researchers of osteoarthritis and musculoskeletal diseases. It is currently possible to access these through text keyword queries using our Web-based Medical Information Retrieval System (WebMIRS). CBIR methods developed specifically for biomedical images could offer direct visual searching of these images by means of example image or user sketch. We are building a system which supports hybrid queries that have text and image-content components. R&D goals include developing algorithms for robust image segmentation for localizing and identifying relevant anatomy, labeling the segmented anatomy based on its pathology, developing suitable indexing and similarity matching methods for images and image features, and associating the survey text information for query and retrieval along with the image data. Some highlights of the system developed in MATLAB and Java are: use of a networked or local centralized database for text and image data; flexibility to incorporate new research work; provides a means to control access to system components under development; and use of XML for structured reporting. The article details the design, features, and algorithms in this third revision of this prototype system, CBIR3.

  9. Multimodal Medical Image Fusion by Adaptive Manifold Filter.

    PubMed

    Geng, Peng; Liu, Shuaiqi; Zhuang, Shanna

    2015-01-01

    Medical image fusion plays an important role in diagnosis and treatment of diseases such as image-guided radiotherapy and surgery. The modified local contrast information is proposed to fuse multimodal medical images. Firstly, the adaptive manifold filter is introduced into filtering source images as the low-frequency part in the modified local contrast. Secondly, the modified spatial frequency of the source images is adopted as the high-frequency part in the modified local contrast. Finally, the pixel with larger modified local contrast is selected into the fused image. The presented scheme outperforms the guided filter method in spatial domain, the dual-tree complex wavelet transform-based method, nonsubsampled contourlet transform-based method, and four classic fusion methods in terms of visual quality. Furthermore, the mutual information values by the presented method are averagely 55%, 41%, and 62% higher than the three methods and those values of edge based similarity measure by the presented method are averagely 13%, 33%, and 14% higher than the three methods for the six pairs of source images.

  10. A no-key-exchange secure image sharing scheme based on Shamir's three-pass cryptography protocol and the multiple-parameter fractional Fourier transform.

    PubMed

    Lang, Jun

    2012-01-30

    In this paper, we propose a novel secure image sharing scheme based on Shamir's three-pass protocol and the multiple-parameter fractional Fourier transform (MPFRFT), which can safely exchange information with no advance distribution of either secret keys or public keys between users. The image is encrypted directly by the MPFRFT spectrum without the use of phase keys, and information can be shared by transmitting the encrypted image (or message) three times between users. Numerical simulation results are given to verify the performance of the proposed algorithm.

  11. Information mining in remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Li, Jiang

    The volume of remotely sensed imagery continues to grow at an enormous rate due to the advances in sensor technology, and our capability for collecting and storing images has greatly outpaced our ability to analyze and retrieve information from the images. This motivates us to develop image information mining techniques, which is very much an interdisciplinary endeavor drawing upon expertise in image processing, databases, information retrieval, machine learning, and software design. This dissertation proposes and implements an extensive remote sensing image information mining (ReSIM) system prototype for mining useful information implicitly stored in remote sensing imagery. The system consists of three modules: image processing subsystem, database subsystem, and visualization and graphical user interface (GUI) subsystem. Land cover and land use (LCLU) information corresponding to spectral characteristics is identified by supervised classification based on support vector machines (SVM) with automatic model selection, while textural features that characterize spatial information are extracted using Gabor wavelet coefficients. Within LCLU categories, textural features are clustered using an optimized k-means clustering approach to acquire search efficient space. The clusters are stored in an object-oriented database (OODB) with associated images indexed in an image database (IDB). A k-nearest neighbor search is performed using a query-by-example (QBE) approach. Furthermore, an automatic parametric contour tracing algorithm and an O(n) time piecewise linear polygonal approximation (PLPA) algorithm are developed for shape information mining of interesting objects within the image. A fuzzy object-oriented database based on the fuzzy object-oriented data (FOOD) model is developed to handle the fuzziness and uncertainty. Three specific applications are presented: integrated land cover and texture pattern mining, shape information mining for change detection of lakes, and fuzzy normalized difference vegetation index (NDVI) pattern mining. The study results show the effectiveness of the proposed system prototype and the potentials for other applications in remote sensing.

  12. Digital document imaging systems: An overview and guide

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This is an aid to NASA managers in planning the selection of a Digital Document Imaging System (DDIS) as a possible solution for document information processing and storage. Intended to serve as a manager's guide, this document contains basic information on digital imaging systems, technology, equipment standards, issues of interoperability and interconnectivity, and issues related to selecting appropriate imaging equipment based upon well defined needs.

  13. Content based image retrieval for matching images of improvised explosive devices in which snake initialization is viewed as an inverse problem

    NASA Astrophysics Data System (ADS)

    Acton, Scott T.; Gilliam, Andrew D.; Li, Bing; Rossi, Adam

    2008-02-01

    Improvised explosive devices (IEDs) are common and lethal instruments of terrorism, and linking a terrorist entity to a specific device remains a difficult task. In the effort to identify persons associated with a given IED, we have implemented a specialized content based image retrieval system to search and classify IED imagery. The system makes two contributions to the art. First, we introduce a shape-based matching technique exploiting shape, color, and texture (wavelet) information, based on novel vector field convolution active contours and a novel active contour initialization method which treats coarse segmentation as an inverse problem. Second, we introduce a unique graph theoretic approach to match annotated printed circuit board images for which no schematic or connectivity information is available. The shape-based image retrieval method, in conjunction with the graph theoretic tool, provides an efficacious system for matching IED images. For circuit imagery, the basic retrieval mechanism has a precision of 82.1% and the graph based method has a precision of 98.1%. As of the fall of 2007, the working system has processed over 400,000 case images.

  14. Fizeau Fourier transform imaging spectroscopy: missing data reconstruction.

    PubMed

    Thurman, Samuel T; Fienup, James R

    2008-04-28

    Fizeau Fourier transform imaging spectroscopy yields both spatial and spectral information about an object. Spectral information, however, is not obtained for a finite area of low spatial frequencies. A nonlinear reconstruction algorithm based on a gray-world approximation is presented. Reconstruction results from simulated data agree well with ideal Michelson interferometer-based spectral imagery. This result implies that segmented-aperture telescopes and multiple telescope arrays designed for conventional imaging can be used to gather useful spectral data through Fizeau FTIS without the need for additional hardware.

  15. Polarization-multiplexing ghost imaging

    NASA Astrophysics Data System (ADS)

    Dongfeng, Shi; Jiamin, Zhang; Jian, Huang; Yingjian, Wang; Kee, Yuan; Kaifa, Cao; Chenbo, Xie; Dong, Liu; Wenyue, Zhu

    2018-03-01

    A novel technique for polarization-multiplexing ghost imaging is proposed to simultaneously obtain multiple polarimetric information by a single detector. Here, polarization-division multiplexing speckles are employed for object illumination. The light reflected from the objects is detected by a single-pixel detector. An iterative reconstruction method is used to restore the fused image containing the different polarimetric information by using the weighted sum of the multiplexed speckles based on the correlation coefficients obtained from the detected intensities. Next, clear images of the different polarimetric information are recovered by demultiplexing the fused image. The results clearly demonstrate that the proposed method is effective.

  16. Image Mining in Remote Sensing for Coastal Wetlands Mapping: from Pixel Based to Object Based Approach

    NASA Astrophysics Data System (ADS)

    Farda, N. M.; Danoedoro, P.; Hartono; Harjoko, A.

    2016-11-01

    Remote sensing image data are now abundantly available, and with such a large amount of data a "knowledge gap" arises in the extraction of selected information, especially for coastal wetlands. Coastal wetlands provide ecosystem services essential to people and the environment. The aim of this research is to extract coastal wetland information from satellite data using pixel-based and object-based image mining approaches. Landsat MSS, Landsat 5 TM, Landsat 7 ETM+, and Landsat 8 OLI images covering the Segara Anakan lagoon were selected to represent data from various multi-temporal acquisitions. The inputs for image mining are visible and near-infrared bands, PCA bands, inverse PCA bands, mean shift segmentation bands, bare soil index, vegetation index, wetness index, elevation from SRTM and ASTER GDEM, and GLCM (Haralick) or variability texture. Three methods were applied to extract coastal wetlands using image mining: pixel-based Decision Tree C4.5, pixel-based Back Propagation Neural Network, and object-based Mean Shift segmentation with Decision Tree C4.5. The results show that remote sensing image mining can be used to map coastal wetland ecosystems, with Decision Tree C4.5 producing the highest mapping accuracy (0.75 overall kappa). Remote sensing image mining for mapping coastal wetlands is very important to provide a better understanding of their spatiotemporal dynamics and distribution.
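
    A hedged sketch of the pixel-based decision-tree step is given below; scikit-learn's entropy-based tree is used as a stand-in for C4.5, and the feature matrix and labels are random placeholders for the Landsat-derived band, index, texture, and elevation inputs.

        # Pixel-based classification into wetland / non-wetland with a decision tree.
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.metrics import cohen_kappa_score

        rng = np.random.default_rng(0)
        # rows = pixels, columns = e.g. [NIR, red, NDVI, wetness index, elevation, GLCM texture]
        X_train = rng.random((500, 6))
        y_train = rng.integers(0, 2, 500)          # 1 = coastal wetland, 0 = other (placeholder labels)
        X_test = rng.random((200, 6))
        y_test = rng.integers(0, 2, 200)

        clf = DecisionTreeClassifier(criterion="entropy", max_depth=8)   # C4.5-like split criterion
        clf.fit(X_train, y_train)
        print("overall kappa:", cohen_kappa_score(y_test, clf.predict(X_test)))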

  17. A novel segmentation method for uneven lighting image with noise injection based on non-local spatial information and intuitionistic fuzzy entropy

    NASA Astrophysics Data System (ADS)

    Yu, Haiyan; Fan, Jiulun

    2017-12-01

    Local thresholding methods for uneven lighting image segmentation always have the limitations that they are very sensitive to noise injection and that the performance relies largely upon the choice of the initial window size. This paper proposes a novel algorithm for segmenting uneven lighting images with strong noise injection based on non-local spatial information and intuitionistic fuzzy theory. We regard an image as a gray wave in three-dimensional space, which is composed of many peaks and troughs, and these peaks and troughs can divide the image into many local sub-regions in different directions. Our algorithm computes the relative characteristic of each pixel located in the corresponding sub-region based on fuzzy membership function and uses it to replace its absolute characteristic (its gray level) to reduce the influence of uneven light on image segmentation. At the same time, the non-local adaptive spatial constraints of pixels are introduced to avoid noise interference with the search of local sub-regions and the computation of local characteristics. Moreover, edge information is also taken into account to avoid false peak and trough labeling. Finally, a global method based on intuitionistic fuzzy entropy is employed on the wave transformation image to obtain the segmented result. Experiments on several test images show that the proposed method has excellent capability of decreasing the influence of uneven illumination on images and noise injection and behaves more robustly than several classical global and local thresholding methods.

  18. MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, G; Pan, X; Stayman, J

    2014-06-15

    Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a "task-based imaging" approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical applications. Learning Objectives: Learn the general methodologies associated with model-based 3D image reconstruction. Learn the potential advantages in image quality and dose associated with model-based image reconstruction. Learn the challenges associated with computational load and image quality assessment for such reconstruction methods. Learn how imaging task can be incorporated as a means to drive optimal image acquisition and reconstruction techniques. Learn how model-based reconstruction methods can incorporate prior information to improve image quality, ease sampling requirements, and reduce dose.

  19. Dynamic PET Image reconstruction for parametric imaging using the HYPR kernel method

    NASA Astrophysics Data System (ADS)

    Spencer, Benjamin; Qi, Jinyi; Badawi, Ramsey D.; Wang, Guobao

    2017-03-01

    Dynamic PET image reconstruction is a challenging problem because of the ill-conditioned nature of PET and the low counting statistics resulting from short time frames in dynamic imaging. The kernel method for image reconstruction has been developed to improve image reconstruction of low-count PET data by incorporating prior information derived from high-count composite data. In contrast to most of the existing regularization-based methods, the kernel method embeds image prior information in the forward projection model and does not require an explicit regularization term in the reconstruction formula. Inspired by the existing highly constrained back-projection (HYPR) algorithm for dynamic PET image denoising, we propose in this work a new type of kernel that is simpler to implement and further improves the kernel-based dynamic PET image reconstruction. Our evaluation study using a physical phantom scan with synthetic FDG tracer kinetics has demonstrated that the new HYPR kernel-based reconstruction can achieve a better region-of-interest (ROI) bias versus standard deviation trade-off for dynamic PET parametric imaging than the post-reconstruction HYPR denoising method and the previously used nonlocal-means kernel.

  20. Benefits of Red-Edge Spectral Band and Texture Features for the Object-based Classification using RapidEye Satellite Image Data

    NASA Astrophysics Data System (ADS)

    Kim, H. O.; Yeom, J. M.

    2014-12-01

    Space-based remote sensing in agriculture is particularly relevant to issues such as global climate change, food security, and precision agriculture. Recent satellite missions have opened up new perspectives by offering high spatial resolution, various spectral properties, and fast revisit rates to the same regions. Here, we examine the utility of broadband red-edge spectral information in multispectral satellite image data for classifying paddy rice crops in South Korea. Additionally, we examine how object-based spectral features affect the classification of paddy rice growth stages. For the analysis, two seasons of RapidEye satellite image data were used. The results showed that the broadband red-edge information slightly improved the classification accuracy of the crop condition in heterogeneous paddy rice crop environments, particularly when single-season image data were used. This positive effect appeared to be offset by the multi-temporal image data. Additional texture information brought only a minor improvement or a slight decline, although it is well known to be advantageous for object-based classification in general. We conclude that broadband red-edge information derived from conventional multispectral satellite data has the potential to improve space-based crop monitoring. Because the positive or negative effects of texture features for object-based crop classification could barely be interpreted, the relationships between the textural properties and paddy rice crop parameters at the field scale should be further examined in depth.

  1. Automatic glaucoma diagnosis through medical imaging informatics.

    PubMed

    Liu, Jiang; Zhang, Zhuo; Wong, Damon Wing Kee; Xu, Yanwu; Yin, Fengshou; Cheng, Jun; Tan, Ngan Meng; Kwoh, Chee Keong; Xu, Dong; Tham, Yih Chung; Aung, Tin; Wong, Tien Yin

    2013-01-01

    Computer-aided diagnosis for screening utilizes computer-based analytical methodologies to process patient information. Glaucoma is the leading irreversible cause of blindness. Due to the lack of an effective and standard screening practice, more than 50% of the cases are undiagnosed, which prevents the early treatment of the disease. The objective was to design an automatic glaucoma diagnosis architecture, automatic glaucoma diagnosis through medical imaging informatics (AGLAIA-MII), that combines patient personal data, medical retinal fundus images, and patient genome information for screening. 2258 cases from a population study were used to evaluate the screening software. These cases were attributed with patient personal data, retinal images and quality controlled genome data. Utilizing a multiple kernel learning-based classifier, AGLAIA-MII combined patient personal data, major image features, and important genome single nucleotide polymorphism (SNP) features. Receiver operating characteristic curves were plotted to compare AGLAIA-MII's performance with classifiers using patient personal data, images, and genome SNPs separately. AGLAIA-MII was able to achieve an area under curve value of 0.866, better than 0.551, 0.722 and 0.810 by the individual personal data, image and genome information components, respectively. AGLAIA-MII also demonstrated a substantial improvement over the current glaucoma screening approach based on intraocular pressure. AGLAIA-MII demonstrates for the first time the capability of integrating patients' personal data, medical retinal images and genome information for automatic glaucoma diagnosis and screening in a large dataset from a population study. It paves the way for a holistic approach to automatic objective glaucoma diagnosis and screening.

  2. Medical image reconstruction algorithm based on the geometric information between sensor detector and ROI

    NASA Astrophysics Data System (ADS)

    Ham, Woonchul; Song, Chulgyu; Lee, Kangsan; Roh, Seungkuk

    2016-05-01

    In this paper, we propose a new image reconstruction algorithm that considers the geometric information of the acoustic sources and the sensor detector, and we review the previously proposed two-step reconstruction algorithm based on the geometrical information of the ROI (region of interest), which accounts for the finite size of the acoustic sensor element. In the new image reconstruction algorithm, not only is the mathematical analysis very simple, but its software implementation is also very easy because the FFT is not needed. We verify the effectiveness of the proposed reconstruction algorithm with simulation results obtained using the MATLAB k-Wave toolbox.

  3. GPS and GIS-Based Data Collection and Image Mapping in the Antarctic Peninsula

    USGS Publications Warehouse

    Sanchez, Richard D.

    1999-01-01

    High-resolution satellite images combined with the rapidly evolving global positioning system (GPS) and geographic information system (GIS) technology may offer a quick and effective way to gather information in Antarctica. GPS- and GIS-based data collection systems are used in this project to determine their applicability for gathering ground truthing data in the Antarctic Peninsula. These baseline data will be used in a later study to examine changes in penguin habitats resulting in part from regional climate warming. The research application in this study yields important information on the usefulness and limits of data capture and high-resolution images for mapping in the Antarctic Peninsula.

  4. A Query Expansion Framework in Image Retrieval Domain Based on Local and Global Analysis

    PubMed Central

    Rahman, M. M.; Antani, S. K.; Thoma, G. R.

    2011-01-01

    We present an image retrieval framework based on automatic query expansion in a concept feature space by generalizing the vector space model of information retrieval. In this framework, images are represented by vectors of weighted concepts similar to the keyword-based representation used in text retrieval. To generate the concept vocabularies, a statistical model is built by utilizing Support Vector Machine (SVM)-based classification techniques. The images are represented as “bag of concepts” that comprise perceptually and/or semantically distinguishable color and texture patches from local image regions in a multi-dimensional feature space. To explore the correlation between the concepts and overcome the assumption of feature independence in this model, we propose query expansion techniques in the image domain from a new perspective based on both local and global analysis. For the local analysis, the correlations between the concepts based on the co-occurrence pattern, and the metrical constraints based on the neighborhood proximity between the concepts in encoded images, are analyzed by considering local feedback information. We also analyze the concept similarities in the collection as a whole in the form of a similarity thesaurus and propose an efficient query expansion based on the global analysis. The experimental results on a photographic collection of natural scenes and a biomedical database of different imaging modalities demonstrate the effectiveness of the proposed framework in terms of precision and recall. PMID:21822350
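
    The flavor of the local (co-occurrence-based) expansion can be sketched as follows; the "bag of concepts" matrix, vocabulary size, and the simple add-top-k expansion rule are invented for illustration and do not reproduce the paper's local/global analysis or its similarity thesaurus.

        # Expand a concept-vector query with the concepts that co-occur most with it.
        import numpy as np

        def expand_query(q, doc_concept_matrix, top_k=3, alpha=0.5):
            co = doc_concept_matrix.T @ doc_concept_matrix       # concept-by-concept co-occurrence
            np.fill_diagonal(co, 0.0)
            scores = co @ q                                      # affinity of each concept to the query
            expansion = np.zeros_like(q)
            expansion[np.argsort(scores)[-top_k:]] = 1.0         # add the top-k co-occurring concepts
            return q + alpha * expansion                         # re-weighted query vector

        rng = np.random.default_rng(0)
        docs = rng.integers(0, 2, size=(100, 20)).astype(float)  # 100 images over 20 concepts (placeholder)
        query = np.zeros(20)
        query[[2, 7]] = 1.0                                      # query expressed with concepts 2 and 7
        print(expand_query(query, docs))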

  5. Knowledge-Based Vision Techniques for the Autonomous Land Vehicle Program

    DTIC Science & Technology

    1991-10-01

    Knowledge System: The CKS is an object-oriented knowledge database that was originally designed to serve as the central information manager for a ... "Representation Space: An Approach to the Integration of Visual Information," Proc. of DARPA Image Understanding Workshop, Palo Alto, CA, pp. 263-272, May 1989 ... Strat, "Information Management in a Sensor-Based Autonomous System," Proc. DARPA Image Understanding Workshop, University of Southern CA, Vol. 1, pp.

  6. Optical multiple-image authentication based on cascaded phase filtering structure

    NASA Astrophysics Data System (ADS)

    Wang, Q.; Alfalou, A.; Brosseau, C.

    2016-10-01

    In this study, we report on recent developments in optical image authentication algorithms. Compared with conventional optical encryption, optical image authentication achieves greater security because such methods do not need to fully recover the plaintext during decryption. Several recently proposed authentication systems are briefly introduced. We also propose a novel multiple-image authentication system, where multiple original images are encoded into a photon-limited encoded image by using a triple-plane based phase retrieval algorithm and the photon counting imaging (PCI) technique. One can only recover a noise-like image using correct keys. To check the authority of multiple images, a nonlinear fractional correlation is employed to recognize the original information hidden in the decrypted results. The proposal can be implemented optically using a cascaded phase filtering configuration. Computer simulation results are presented to evaluate the performance of this proposal and its effectiveness.

  7. An automated distinction of DICOM images for lung cancer CAD system

    NASA Astrophysics Data System (ADS)

    Suzuki, H.; Saita, S.; Kubo, M.; Kawata, Y.; Niki, N.; Nishitani, H.; Ohmatsu, H.; Eguchi, K.; Kaneko, M.; Moriyama, N.

    2009-02-01

    Automated distinction of medical images is an important preprocessing step in Computer-Aided Diagnosis (CAD) systems. CAD systems have been developed using medical image sets with specific scan conditions and body parts. However, varied examinations are performed at medical sites. The specification of the examination is contained in the DICOM textual meta information. Most DICOM textual meta information can be considered reliable; however, the body part information cannot always be considered reliable. In this paper, we describe an automated distinction of DICOM images as a preprocessing step for a lung cancer CAD system. Our approach uses DICOM textual meta information and low-cost image processing. Firstly, the textual meta information such as the scan conditions of the DICOM image is checked. Secondly, the body part shown in the DICOM image is identified by image processing. The identification of body parts is based on anatomical structure, which is represented by features of three regions: body tissue, bone, and air. The method is effective for the practical use of a lung cancer CAD system at medical sites.
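
    A hedged sketch of the two-step distinction is given below using pydicom; the tag names are standard DICOM attributes, but the specific acceptance rule and the crude air/bone Hounsfield thresholds are illustrative assumptions rather than the paper's criteria.

        # Step 1: filter on DICOM textual meta information; step 2: verify the body part
        # from the pixel data itself using simple tissue/air/bone fractions.
        import numpy as np
        import pydicom

        ds = pydicom.dcmread("slice.dcm")                                  # placeholder file name

        is_ct = getattr(ds, "Modality", "") == "CT"                        # scan-condition check

        hu = ds.pixel_array * float(getattr(ds, "RescaleSlope", 1)) \
             + float(getattr(ds, "RescaleIntercept", 0))                   # convert to Hounsfield units
        air_fraction = np.mean(hu < -500)                                  # lungs / air
        bone_fraction = np.mean(hu > 300)                                  # bone

        looks_like_chest = is_ct and air_fraction > 0.15 and bone_fraction > 0.01
        print("accept slice for lung cancer CAD:", looks_like_chest)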

  8. Model-based vision for space applications

    NASA Technical Reports Server (NTRS)

    Chaconas, Karen; Nashman, Marilyn; Lumia, Ronald

    1992-01-01

    This paper describes a method for tracking moving image features by combining spatial and temporal edge information with model based feature information. The algorithm updates the two-dimensional position of object features by correlating predicted model features with current image data. The results of the correlation process are used to compute an updated model. The algorithm makes use of a high temporal sampling rate with respect to spatial changes of the image features and operates in a real-time multiprocessing environment. Preliminary results demonstrate successful tracking for image feature velocities between 1.1 and 4.5 pixels every image frame. This work has applications for docking, assembly, retrieval of floating objects and a host of other space-related tasks.

  9. Multi-scale learning based segmentation of glands in digital colonrectal pathology images.

    PubMed

    Gao, Yi; Liu, William; Arjun, Shipra; Zhu, Liangjia; Ratner, Vadim; Kurc, Tahsin; Saltz, Joel; Tannenbaum, Allen

    2016-02-01

    Digital histopathological images provide detailed spatial information of the tissue at micrometer resolution. Among the available contents in the pathology images, meso-scale information, such as the gland morphology, texture, and distribution, are useful diagnostic features. In this work, focusing on the colon-rectal cancer tissue samples, we propose a multi-scale learning based segmentation scheme for the glands in the colon-rectal digital pathology slides. The algorithm learns the gland and non-gland textures from a set of training images in various scales through a sparse dictionary representation. After the learning step, the dictionaries are used collectively to perform the classification and segmentation for the new image.
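
    The dictionary-learning idea can be sketched as below, with scikit-learn's MiniBatchDictionaryLearning standing in for the paper's multi-scale sparse dictionaries; the patch arrays are random placeholders for real gland and non-gland texture patches, and the reconstruction-error classification rule, dictionary size, and sparsity level are assumptions.

        # Learn one texture dictionary per class and label patches by reconstruction error.
        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning

        rng = np.random.default_rng(0)
        gland_patches = rng.normal(size=(500, 64))     # placeholders for flattened 8x8 patches
        stroma_patches = rng.normal(size=(500, 64))
        test_patches = rng.normal(size=(100, 64))

        def learn_dictionary(patches, n_atoms=64):
            return MiniBatchDictionaryLearning(n_components=n_atoms,
                                               transform_algorithm="omp",
                                               transform_n_nonzero_coefs=5).fit(patches)

        def reconstruction_error(patches, dico):
            codes = dico.transform(patches)                        # sparse codes
            return np.linalg.norm(patches - codes @ dico.components_, axis=1)

        gland_dico = learn_dictionary(gland_patches)
        stroma_dico = learn_dictionary(stroma_patches)
        is_gland = reconstruction_error(test_patches, gland_dico) < reconstruction_error(test_patches, stroma_dico)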

  10. Multi-scale learning based segmentation of glands in digital colonrectal pathology images

    NASA Astrophysics Data System (ADS)

    Gao, Yi; Liu, William; Arjun, Shipra; Zhu, Liangjia; Ratner, Vadim; Kurc, Tahsin; Saltz, Joel; Tannenbaum, Allen

    2016-03-01

    Digital histopathological images provide detailed spatial information of the tissue at micrometer resolution. Among the available contents in the pathology images, meso-scale information, such as the gland morphology, texture, and distribution, are useful diagnostic features. In this work, focusing on the colon-rectal cancer tissue samples, we propose a multi-scale learning based segmentation scheme for the glands in the colon-rectal digital pathology slides. The algorithm learns the gland and non-gland textures from a set of training images in various scales through a sparse dictionary representation. After the learning step, the dictionaries are used collectively to perform the classification and segmentation for the new image.

  11. Global rotational motion and displacement estimation of digital image stabilization based on the oblique vectors matching algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Fei; Hui, Mei; Zhao, Yue-jin

    2009-08-01

    An image block matching algorithm based on motion vectors of correlated pixels in the oblique direction is presented for digital image stabilization. Digital image stabilization is a new generation of image stabilization technique that obtains the information of relative motion among frames of dynamic image sequences by means of digital image processing. In this method the matching parameters are calculated from the vectors projected in the oblique direction. The matching parameters based on these vectors contain the information of vectors in the transverse and vertical directions in the image blocks at the same time, so better matching information can be obtained after performing the correlation operation in the oblique direction. An iterative weighted least squares method is used to eliminate the error of block matching; the weights are related to the pixels' rotational angle. The center of rotation and the global motion estimate of the shaking image can be obtained by the weighted least squares from the estimates of the blocks chosen evenly from the image. Then, the shaking image can be stabilized using the center of rotation and the global motion estimate. The algorithm can also run in real time by applying a simulated annealing search to the block matching. An image processing system based on a DSP was used to test this algorithm. The core processor in the DSP system is the TMS320C6416 from TI, and a CCD camera with a resolution of 720×576 pixels was chosen as the input video source. Experimental results show that the algorithm can be performed on the real-time processing system and has an accurate matching precision.
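
    A simplified sketch of estimating global rotation and translation from per-block motion vectors with weighted least squares is shown below; the block coordinates, the single weighting pass, and the rigid (rotation-plus-translation) model are illustrative simplifications of the paper's iterative re-weighting scheme.

        # Fit dst = R(theta) * src + t, linear in (cos theta, sin theta, tx, ty).
        import numpy as np

        def global_motion_wls(src, dst, w):
            x, y = src[:, 0], src[:, 1]
            zeros, ones = np.zeros_like(x), np.ones_like(x)
            A = np.vstack([np.column_stack([x, -y, ones, zeros]),
                           np.column_stack([y,  x, zeros, ones])])
            b = np.concatenate([dst[:, 0], dst[:, 1]])
            sw = np.sqrt(np.concatenate([w, w]))                    # apply block weights
            c, s, tx, ty = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)[0]
            return np.arctan2(s, c), np.array([tx, ty])

        # block centres in the reference frame and their matched positions (synthetic example)
        src = np.array([[100.0, 100.0], [400.0, 120.0], [120.0, 380.0], [420.0, 400.0]])
        theta_true = np.deg2rad(2.0)
        R = np.array([[np.cos(theta_true), -np.sin(theta_true)],
                      [np.sin(theta_true),  np.cos(theta_true)]])
        dst = src @ R.T + np.array([3.0, -1.5])

        theta, t = global_motion_wls(src, dst, w=np.ones(len(src)))
        print(np.rad2deg(theta), t)                                  # ~2 degrees, ~(3.0, -1.5)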

  12. Fusion of LBP and SWLD using spatio-spectral information for hyperspectral face recognition

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Jiang, Peng; Zhang, Shuai; Xiong, Jinquan

    2018-01-01

    Hyperspectral imaging, which records intrinsic spectral information of the skin across different spectral bands, has become an important approach for robust face recognition. However, the main challenges for hyperspectral face recognition are high data dimensionality, low signal-to-noise ratio and inter-band misalignment. In this paper, hyperspectral face recognition based on LBP (Local Binary Pattern) and SWLD (Simplified Weber Local Descriptor) is proposed to extract discriminative local features from spatio-spectral fusion information. Firstly, a spatio-spectral fusion strategy based on statistical information is used to obtain discriminative features of hyperspectral face images. Secondly, LBP is applied to extract the orientation of the fused face edges. Thirdly, SWLD is proposed to encode the intensity information in hyperspectral images. Finally, we adopt a symmetric Kullback-Leibler distance to compare the encoded face images. The hyperspectral face recognition method is tested on the Hong Kong Polytechnic University Hyperspectral Face database (PolyUHSFD). Experimental results show that the proposed method has a higher recognition rate (92.8%) than state-of-the-art hyperspectral face recognition algorithms.
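
    The LBP half of the descriptor can be sketched with scikit-image as below; the file name, radius, number of sampling points, and histogram binning are assumed settings, and the SWLD encoding and block-wise concatenation are omitted.

        # Uniform LBP histogram of one fused face image (one block's worth of features).
        import numpy as np
        from skimage import io, img_as_ubyte
        from skimage.feature import local_binary_pattern

        face = img_as_ubyte(io.imread("fused_face.png", as_gray=True))    # placeholder input
        lbp = local_binary_pattern(face, P=8, R=1, method="uniform")      # codes 0..9 for P=8
        hist, _ = np.histogram(lbp, bins=np.arange(0, 11), density=True)
        print(hist)   # in practice, concatenate such histograms over image blocks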

  13. A Locality-Constrained and Label Embedding Dictionary Learning Algorithm for Image Classification.

    PubMed

    Zhengming Li; Zhihui Lai; Yong Xu; Jian Yang; Zhang, David

    2017-02-01

    Locality and label information of training samples play an important role in image classification. However, previous dictionary learning algorithms do not take the locality and label information of atoms into account together in the learning process, and thus their performance is limited. In this paper, a discriminative dictionary learning algorithm, called the locality-constrained and label embedding dictionary learning (LCLE-DL) algorithm, was proposed for image classification. First, the locality information was preserved using the graph Laplacian matrix of the learned dictionary instead of the conventional one derived from the training samples. Then, the label embedding term was constructed using the label information of atoms instead of the classification error term, which contained discriminating information of the learned dictionary. The optimal coding coefficients derived by the locality-based and label-based reconstruction were effective for image classification. Experimental results demonstrated that the LCLE-DL algorithm can achieve better performance than some state-of-the-art algorithms.

  14. Modal-Power-Based Haptic Motion Recognition

    NASA Astrophysics Data System (ADS)

    Kasahara, Yusuke; Shimono, Tomoyuki; Kuwahara, Hiroaki; Sato, Masataka; Ohnishi, Kouhei

    Motion recognition based on sensory information is important for providing assistance to humans using robots. Several studies have been carried out on motion recognition based on image information. However, human motion involving contact with an object cannot be evaluated precisely by image-based recognition, because force information is essential for describing contact motion. In this paper, modal-power-based haptic motion recognition is proposed; modal power is considered to reveal information on both position and force, and is regarded as one of the defining features of human motion. A motion recognition algorithm based on linear discriminant analysis is proposed to distinguish between similar motions. Haptic information is extracted using a bilateral master-slave system. Then, the observed motion is decomposed in terms of primitive functions in a modal space. The experimental results show the effectiveness of the proposed method.

  15. Image Understanding Architecture

    DTIC Science & Technology

    1991-09-01

    architecture to support real-time, knowledge-based image understanding, and develop the software support environment that will be needed to utilize ... Subject terms: Image Understanding Architecture, Knowledge-Based Vision, AI, Real-Time Computer Vision, Software Simulator, Parallel Processor ... information. In addition to sensory and knowledge-based processing it is useful to introduce a level of symbolic processing. Thus, vision researchers

  16. Interferometric and nonlinear-optical spectral-imaging techniques for outer space and live cells

    NASA Astrophysics Data System (ADS)

    Itoh, Kazuyoshi

    2015-12-01

    Multidimensional signals such as the spectral images allow us to have deeper insights into the natures of objects. In this paper the spectral imaging techniques that are based on optical interferometry and nonlinear optics are presented. The interferometric imaging technique is based on the unified theory of Van Cittert-Zernike and Wiener-Khintchine theorems and allows us to retrieve a spectral image of an object in the far zone from the 3D spatial coherence function. The retrieval principle is explained using a very simple object. The promising applications to space interferometers for astronomy that are currently in progress will also be briefly touched on. An interesting extension of interferometric spectral imaging is a 3D and spectral imaging technique that records 4D information of objects where the 3D and spectral information is retrieved from the cross-spectral density function of optical field. The 3D imaging is realized via the numerical inverse propagation of the cross-spectral density. A few techniques suggested recently are introduced. The nonlinear optical technique that utilizes stimulated Raman scattering (SRS) for spectral imaging of biomedical targets is presented lastly. The strong signals of SRS permit us to get vibrational information of molecules in the live cell or tissue in real time. The vibrational information of unstained or unlabeled molecules is crucial especially for medical applications. The 3D information due to the optical nonlinearity is also the attractive feature of SRS spectral microscopy.

  17. SU-F-J-96: Comparison of Frame-Based and Mutual Information Registration Techniques for CT and MR Image Sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popple, R; Bredel, M; Brezovich, I

    Purpose: To compare the accuracy of CT-MR registration using a mutual information method with registration using a frame-based localizer box. Methods: Ten patients who had the Leksell head frame and were scanned with a modality-specific localizer box were imported into the treatment planning system. The fiducial rods of the localizer box were contoured on both the MR and CT scans. The skull was contoured on the CT images. The MR and CT images were registered by two methods. The frame-based method used the transformation that minimized the mean square distance of the centroids of the contours of the fiducial rods from a mathematical model of the localizer. The mutual information method used automated image registration tools in the TPS and was restricted to a volume-of-interest defined by the skull contours with a 5 mm margin. For each case, the two registrations were adjusted by two evaluation teams, each comprised of an experienced radiation oncologist and neurosurgeon, to optimize alignment in the region of the brainstem. The teams were blinded to the registration method. Results: The mean adjustment was 0.4 mm (range 0 to 2 mm) and 0.2 mm (range 0 to 1 mm) for the frame and mutual information methods, respectively. The median difference between the frame and mutual information registrations was 0.3 mm, but was not statistically significant using the Wilcoxon signed rank test (p=0.37). Conclusion: The difference between frame and mutual information registration techniques was neither statistically significant nor, for most applications, clinically important. These results suggest that mutual information is equivalent to frame-based image registration for radiosurgery. Work is ongoing to add additional evaluators and to assess the differences between evaluators.
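
    The statistical comparison can be illustrated as below with SciPy's Wilcoxon signed-rank test; the per-case adjustment values are invented placeholders, not the study's data.

        # Paired comparison of per-case adjustment magnitudes (mm) for the two methods.
        from scipy.stats import wilcoxon

        frame_adjust = [0.0, 0.5, 1.0, 0.5, 0.0, 2.0, 0.5, 0.0, 0.5, 0.0]   # hypothetical values
        mi_adjust    = [0.0, 0.0, 0.5, 0.5, 0.0, 1.0, 0.0, 0.0, 0.5, 0.0]

        stat, p = wilcoxon(frame_adjust, mi_adjust, zero_method="wilcox")    # discard zero differences
        print(f"Wilcoxon statistic = {stat}, p = {p:.2f}")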

  18. Prospective regularization design in prior-image-based reconstruction

    NASA Astrophysics Data System (ADS)

    Dang, Hao; Siewerdsen, Jeffrey H.; Webster Stayman, J.

    2015-12-01

    Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in phantoms where the optimal parameters vary spatially by an order of magnitude or more. In a series of studies designed to explore potential unknowns associated with accurate PIBR, optimal prior image strength was found to vary with attenuation differences associated with anatomical change but exhibited only small variations as a function of the shape and size of the change. The results suggest that, given a target change attenuation, prospective patient-, change-, and data-specific customization of the prior image strength can be performed to ensure reliable reconstruction of specific anatomical changes.

  19. A hierarchical SVG image abstraction layer for medical imaging

    NASA Astrophysics Data System (ADS)

    Kim, Edward; Huang, Xiaolei; Tan, Gang; Long, L. Rodney; Antani, Sameer

    2010-03-01

    As medical imaging rapidly expands, there is an increasing need to structure and organize image data for efficient analysis, storage and retrieval. In response, a large fraction of research in the areas of content-based image retrieval (CBIR) and picture archiving and communication systems (PACS) has focused on structuring information to bridge the "semantic gap", a disparity between machine and human image understanding. An additional consideration in medical images is the organization and integration of clinical diagnostic information. As a step towards bridging the semantic gap, we design and implement a hierarchical image abstraction layer using an XML based language, Scalable Vector Graphics (SVG). Our method encodes features from the raw image and clinical information into an extensible "layer" that can be stored in a SVG document and efficiently searched. Any feature extracted from the raw image including, color, texture, orientation, size, neighbor information, etc., can be combined in our abstraction with high level descriptions or classifications. And our representation can natively characterize an image in a hierarchical tree structure to support multiple levels of segmentation. Furthermore, being a world wide web consortium (W3C) standard, SVG is able to be displayed by most web browsers, interacted with by ECMAScript (standardized scripting language, e.g. JavaScript, JScript), and indexed and retrieved by XML databases and XQuery. Using these open source technologies enables straightforward integration into existing systems. From our results, we show that the flexibility and extensibility of our abstraction facilitates effective storage and retrieval of medical images.

  20. World Wide Web Based Image Search Engine Using Text and Image Content Features

    NASA Astrophysics Data System (ADS)

    Luo, Bo; Wang, Xiaogang; Tang, Xiaoou

    2003-01-01

    Using both text and image content features, a hybrid image retrieval system for the World Wide Web is developed in this paper. We first use a text-based image meta-search engine to retrieve images from the Web based on the text information on the image host pages to provide an initial image set. Because of the high speed and low cost of the text-based approach, we can easily retrieve a broad coverage of images with a high recall rate and a relatively low precision. An image content based ordering is then performed on the initial image set. All the images are clustered into different folders based on the image content features. In addition, the images can be re-ranked by the content features according to user feedback. Such a design makes it truly practical to use both text and image content for image retrieval over the Internet. Experimental results confirm the efficiency of the system.

  1. Image Hashes as Templates for Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janik, Tadeusz; Jarman, Kenneth D.; Robinson, Sean M.

    2012-07-17

    Imaging systems can provide measurements that confidently assess characteristics of nuclear weapons and dismantled weapon components, and such assessment will be needed in future verification for arms control. Yet imaging is often viewed as too intrusive, raising concern about the ability to protect sensitive information. In particular, the prospect of using image-based templates for verifying the presence or absence of a warhead, or of the declared configuration of fissile material in storage, may be rejected out-of-hand as being too vulnerable to violation of information barrier (IB) principles. Development of a rigorous approach for generating and comparing reduced-information templates from images, and assessing the security, sensitivity, and robustness of verification using such templates, are needed to address these concerns. We discuss our efforts to develop such a rigorous approach based on a combination of image-feature extraction and encryption-utilizing hash functions to confirm proffered declarations, providing strong classified data security while maintaining high confidence for verification. The proposed work is focused on developing secure, robust, tamper-sensitive and automatic techniques that may enable the comparison of non-sensitive hashed image data outside an IB. It is rooted in research on so-called perceptual hash functions for image comparison, at the interface of signal/image processing, pattern recognition, cryptography, and information theory. Such perceptual or robust image hashing—which, strictly speaking, is not truly cryptographic hashing—has extensive application in content authentication and information retrieval, database search, and security assurance. Applying and extending the principles of perceptual hashing to imaging for arms control, we propose techniques that are sensitive to altering, forging and tampering of the imaged object yet robust and tolerant to content-preserving image distortions and noise. Ensuring that the information contained in the hashed image data (available out-of-IB) cannot be used to extract sensitive information about the imaged object is of primary concern. Thus the techniques are characterized by high unpredictability to guarantee security. We will present an assessment of the performance of our techniques with respect to security, sensitivity and robustness on the basis of a methodical and mathematically precise framework.
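
    To illustrate the general idea of perceptual hashing (not the authors' secure, tamper-sensitive scheme), the toy sketch below computes a 64-bit average hash for two images and compares them by Hamming distance; the file names are hypothetical.

        # Toy perceptual (average) hash: downsample, threshold at the mean, compare bits.
        import numpy as np
        from PIL import Image

        def average_hash(path, size=8):
            img = Image.open(path).convert("L").resize((size, size), Image.LANCZOS)
            pixels = np.asarray(img, dtype=np.float32)
            return (pixels > pixels.mean()).flatten()              # 64-bit template

        def hamming(h1, h2):
            return int(np.count_nonzero(h1 != h2))

        reference = average_hash("declared_configuration.png")      # hypothetical template image
        inspection = average_hash("inspection_image.png")
        print("bits differing:", hamming(reference, inspection))    # small distance -> consistent with declaration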

  2. Identification of cultivated land using remote sensing images based on object-oriented artificial bee colony algorithm

    NASA Astrophysics Data System (ADS)

    Li, Nan; Zhu, Xiufang

    2017-04-01

    Cultivated land resources are key to ensuring food security. Timely and accurate access to cultivated land information is conducive to scientific planning of food production and management policies. The GaoFen 1 (GF-1) images have high spatial resolution and abundant texture information and thus can be used to identify fragmented cultivated land. In this paper, an object-oriented artificial bee colony algorithm was proposed for extracting cultivated land from GF-1 images. Firstly, the GF-1 image was segmented by eCognition software and some samples from the segments were manually classified into two types (cultivated land and non-cultivated land). Secondly, the artificial bee colony (ABC) algorithm was used to search for classification rules based on the spectral and texture information extracted from the image objects. Finally, the extracted classification rules were used to identify the cultivated land area on the image. The experiment was carried out in the Hongze area, Jiangsu Province, using imagery from the wide field-of-view sensor on the GF-1 satellite. The total precision of the classification result was 94.95%, and the precision for cultivated land was 92.85%. The results show that the object-oriented ABC algorithm can overcome the defect of insufficient spectral information in GF-1 images and achieve high precision in cultivated land identification.

  3. Rotational-translational fourier imaging system

    NASA Technical Reports Server (NTRS)

    Campbell, Jonathan W. (Inventor)

    2004-01-01

    This invention has the ability to create Fourier-based images with only two grid pairs. The two grid pairs are manipulated in a manner that allows (1) a first grid pair to provide multiple real components of the Fourier-based image and (2) a second grid pair to provide multiple imaginary components of the Fourier-based image. The novelty of this invention resides in the use of only two grid pairs to provide the same imaging information that has been traditionally collected with multiple grid pairs.

  4. Automatic classification of minimally invasive instruments based on endoscopic image sequences

    NASA Astrophysics Data System (ADS)

    Speidel, Stefanie; Benzko, Julia; Krappe, Sebastian; Sudra, Gunther; Azad, Pedram; Müller-Stich, Beat Peter; Gutt, Carsten; Dillmann, Rüdiger

    2009-02-01

    Minimally invasive surgery is nowadays a frequently applied technique and can be regarded as a major breakthrough in surgery. The surgeon has to adopt special operation-techniques and deal with difficulties like the complex hand-eye coordination and restricted mobility. To alleviate these constraints we propose to enhance the surgeon's capabilities by providing a context-aware assistance using augmented reality techniques. To analyze the current situation for context-aware assistance, we need intraoperatively gained sensor data and a model of the intervention. A situation consists of information about the performed activity, the used instruments, the surgical objects, the anatomical structures and defines the state of an intervention for a given moment in time. The endoscopic images provide a rich source of information which can be used for an image-based analysis. Different visual cues are observed in order to perform an image-based analysis with the objective to gain as much information as possible about the current situation. An important visual cue is the automatic recognition of the instruments which appear in the scene. In this paper we present the classification of minimally invasive instruments using the endoscopic images. The instruments are not modified by markers. The system segments the instruments in the current image and recognizes the instrument type based on three-dimensional instrument models.

  5. Nationwide Hybrid Change Detection of Buildings

    NASA Astrophysics Data System (ADS)

    Hron, V.; Halounova, L.

    2016-06-01

    The Fundamental Base of Geographic Data of the Czech Republic (hereinafter FBGD) is a national 2D geodatabase at a 1:10,000 scale with more than 100 geographic objects. This paper describes the design of the permanent updating mechanism of buildings in FBGD. The proposed procedure belongs to the category of hybrid change detection (HCD) techniques which combine pixel-based and object-based evaluation. The main sources of information for HCD are cadastral information and bi-temporal vertical digital aerial photographs. These photographs have great information potential because they contain multispectral, position and also elevation information. Elevation information represents a digital surface model (DSM) which can be obtained using the image matching technique. Pixel-based evaluation of bi-temporal DSMs enables fast localization of places with potential building changes. These coarse results are subsequently classified through the object-based image analysis (OBIA) using spectral, textural and contextual features and GIS tools. The advantage of the two-stage evaluation is the pre-selection of locations where image segmentation (a computationally demanding part of OBIA) is performed. It is not necessary to apply image segmentation to the entire scene, but only to the surroundings of detected changes, which contributes to significantly faster processing and lower hardware requirements. The created technology is based on open-source software solutions that allow easy portability on multiple computers and parallelization of processing. This leads to significant savings of financial resources which can be expended on the further development of FBGD.
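    A hedged sketch of the pixel-based stage only, assuming SciPy: two co-registered digital surface models are differenced, cells whose height changed by more than a threshold are kept, and connected regions become candidate building changes for the later object-based analysis. The threshold and minimum region size are illustrative assumptions.

```python
# Sketch: localize candidate building changes from bi-temporal DSMs.
import numpy as np
from scipy import ndimage

def candidate_changes(dsm_t1, dsm_t2, dz_threshold=2.5, min_pixels=50):
    """Return a label image of candidate change regions."""
    dz = np.abs(dsm_t2 - dsm_t1)                 # height change in metres
    mask = dz > dz_threshold                     # coarse pixel-based change mask
    labels, n = ndimage.label(mask)              # connected components
    # discard tiny regions that are likely image-matching noise
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.flatnonzero(sizes >= min_pixels) + 1
    return np.where(np.isin(labels, keep), labels, 0)
```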

  6. Shape based segmentation of MRIs of the bones in the knee using phase and intensity information

    NASA Astrophysics Data System (ADS)

    Fripp, Jurgen; Bourgeat, Pierrick; Crozier, Stuart; Ourselin, Sébastien

    2007-03-01

    The segmentation of the bones from MR images is useful for performing subsequent segmentation and quantitative measurements of cartilage tissue. In this paper, we present a shape based segmentation scheme for the bones that uses texture features derived from the phase and intensity information in the complex MR image. The phase can provide additional information about the tissue interfaces, but due to the phase unwrapping problem, this information is usually discarded. By using a Gabor filter bank on the complex MR image, texture features (including phase) can be extracted without requiring phase unwrapping. These texture features are then analyzed using a support vector machine classifier to obtain probability tissue matches. The segmentation of the bone is fully automatic and performed using a 3D active shape model based approach driven using gradient and texture information. The 3D active shape model is automatically initialized using a robust affine registration. The approach is validated using a database of 18 FLASH MR images that are manually segmented, with an average segmentation overlap (Dice similarity coefficient) of 0.92 compared to 0.9 obtained using the classifier only.
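    A minimal sketch of the texture-classification idea, assuming scikit-image and scikit-learn: per-pixel responses from a small Gabor filter bank are fed to an SVM that outputs tissue probabilities. The bank parameters and the training labels are placeholders, and the sketch operates on a real-valued image rather than the complex MR data used in the paper.

```python
# Sketch: Gabor filter-bank features classified into tissue probabilities.
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def gabor_features(image, frequencies=(0.1, 0.2, 0.4),
                   thetas=(0.0, np.pi / 4, np.pi / 2)):
    """Stack filter-response magnitudes into an (n_pixels, n_features) matrix."""
    feats = []
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(image, frequency=f, theta=t)
            feats.append(np.hypot(real, imag).ravel())
    return np.stack(feats, axis=1)

def tissue_probabilities(image, train_pixels, train_labels):
    """train_pixels are flat indices into the image with known labels."""
    X = gabor_features(image)
    clf = SVC(probability=True).fit(X[train_pixels], train_labels)
    return clf.predict_proba(X)[:, 1].reshape(image.shape)
```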

  7. Image/text automatic indexing and retrieval system using context vector approach

    NASA Astrophysics Data System (ADS)

    Qing, Kent P.; Caid, William R.; Ren, Clara Z.; McCabe, Patrick

    1995-11-01

    Thousands of documents and images are generated daily, both online and offline, on the information superhighway and other media. Storage technology has improved rapidly to handle these data, but indexing this information is becoming very costly. HNC Software Inc. has developed a technology for automatic indexing and retrieval of free text and images. This technique is demonstrated and is based on the concept of 'context vectors', which encode a succinct representation of the associated text and the features of sub-images. In this paper, we will describe the Automated Librarian System, which was designed for free text indexing, and the Image Content Addressable Retrieval System (ICARS), which extends the technique from the text domain into the image domain. Both systems have the ability to automatically assign indices for a new document and/or image based on the content similarities in the database. ICARS also has the capability to retrieve images based on similarity of content using index terms, text description, and user-generated images as a query without performing segmentation or object recognition.

  8. Invariant Feature Matching for Image Registration Application Based on New Dissimilarity of Spatial Features

    PubMed Central

    Mousavi Kahaki, Seyed Mostafa; Nordin, Md Jan; Ashtari, Amir H.; J. Zahra, Sophia

    2016-01-01

    An invariant feature matching method is proposed as a spatially invariant feature matching approach. Deformation effects, such as affine and homography, change the local information within the image and can result in ambiguous local information pertaining to image points. A new method based on dissimilarity values, which measure the dissimilarity of the features along the path based on eigenvector properties, is proposed. Evidence shows that existing matching techniques using similarity metrics—such as normalized cross-correlation, squared sum of intensity differences and correlation coefficient—are insufficient for achieving adequate results under different image deformations. Thus, new descriptor similarity metrics based on normalized eigenvector correlation and signal directional differences, which are robust under local variation of the image information, are proposed to establish an efficient feature matching technique. The method proposed in this study measures the dissimilarity in the signal frequency along the path between two features. Moreover, these dissimilarity values are accumulated in a 2D dissimilarity space, allowing accurate corresponding features to be extracted based on the cumulative space using a voting strategy. This method can be used in image registration applications, as it overcomes the limitations of the existing approaches. The results demonstrate that the proposed technique outperforms the other methods when evaluated using a standard dataset, in terms of precision-recall and corner correspondence. PMID:26985996

  9. Image-based information, communication, and retrieval

    NASA Technical Reports Server (NTRS)

    Bryant, N. A.; Zobrist, A. L.

    1980-01-01

    IBIS/VICAR system combines video image processing and information management. Flexible programs require user to supply only parameters specific to particular application. Special-purpose input/output routines transfer image data with reduced memory requirements. New application programs are easily incorporated. Program is written in FORTRAN IV, Assembler, and OS JCL for batch execution and has been implemented on IBM 360.

  10. RGB-D SLAM Based on Extended Bundle Adjustment with 2D and 3D Information

    PubMed Central

    Di, Kaichang; Zhao, Qiang; Wan, Wenhui; Wang, Yexin; Gao, Yunjun

    2016-01-01

    In the study of SLAM problem using an RGB-D camera, depth information and visual information as two types of primary measurement data are rarely tightly coupled during refinement of camera pose estimation. In this paper, a new method of RGB-D camera SLAM is proposed based on extended bundle adjustment with integrated 2D and 3D information on the basis of a new projection model. First, the geometric relationship between the image plane coordinates and the depth values is constructed through RGB-D camera calibration. Then, 2D and 3D feature points are automatically extracted and matched between consecutive frames to build a continuous image network. Finally, extended bundle adjustment based on the new projection model, which takes both image and depth measurements into consideration, is applied to the image network for high-precision pose estimation. Field experiments show that the proposed method has a notably better performance than the traditional method, and the experimental results demonstrate the effectiveness of the proposed method in improving localization accuracy. PMID:27529256

  11. Complex-based OCT angiography algorithm recovers microvascular information better than amplitude- or phase-based algorithms in phase-stable systems

    NASA Astrophysics Data System (ADS)

    Xu, Jingjiang; Song, Shaozhen; Li, Yuandong; Wang, Ruikang K.

    2018-01-01

    Optical coherence tomography angiography (OCTA) is increasingly becoming a popular inspection tool for biomedical imaging applications. By exploring the amplitude, phase and complex information available in OCT signals, numerous algorithms have been proposed that contrast functional vessel networks within microcirculatory tissue beds. However, it is not clear which algorithm delivers optimal imaging performance. Here, we investigate systematically how amplitude and phase information have an impact on the OCTA imaging performance, to establish the relationship of amplitude and phase stability with OCT signal-to-noise ratio (SNR), time interval and particle dynamics. With either repeated A-scan or repeated B-scan imaging protocols, the amplitude noise increases with the increase of OCT SNR; however, the phase noise does the opposite, i.e. it increases with the decrease of OCT SNR. Coupled with experimental measurements, we utilize a simple Monte Carlo (MC) model to simulate the performance of amplitude-, phase- and complex-based algorithms for OCTA imaging, the results of which suggest that complex-based algorithms deliver the best performance when the phase noise is  <  ~40 mrad. We also conduct a series of in vivo vascular imaging in animal models and human retina to verify the findings from the MC model through assessing the OCTA performance metrics of vessel connectivity, image SNR and contrast-to-noise ratio. We show that for all the metrics assessed, the complex-based algorithm delivers better performance than either the amplitude- or phase-based algorithms for both the repeated A-scan and the B-scan imaging protocols, which agrees well with the conclusion drawn from the MC simulations.
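    A conceptual sketch of the three OCTA contrast families compared above, for a stack of repeated complex-valued B-scans (shape: repeats x depth x lateral). The formulas are the simplest representatives of each family and are assumptions, not the exact algorithms evaluated in the paper.

```python
# Sketch: amplitude-, phase- and complex-based OCTA contrast from repeats.
import numpy as np

def octa_amplitude(frames):
    """Mean absolute difference of successive amplitude frames."""
    amp = np.abs(frames)
    return np.mean(np.abs(np.diff(amp, axis=0)), axis=0)

def octa_phase(frames):
    """Mean absolute phase difference of successive frames."""
    dphi = np.angle(frames[1:] * np.conj(frames[:-1]))
    return np.mean(np.abs(dphi), axis=0)

def octa_complex(frames):
    """Mean magnitude of the complex difference (uses amplitude and phase)."""
    return np.mean(np.abs(np.diff(frames, axis=0)), axis=0)
```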

  12. Complex-based OCT angiography algorithm recovers microvascular information better than amplitude- or phase-based algorithms in phase-stable systems.

    PubMed

    Xu, Jingjiang; Song, Shaozhen; Li, Yuandong; Wang, Ruikang K

    2017-12-19

    Optical coherence tomography angiography (OCTA) is increasingly becoming a popular inspection tool for biomedical imaging applications. By exploring the amplitude, phase and complex information available in OCT signals, numerous algorithms have been proposed that contrast functional vessel networks within microcirculatory tissue beds. However, it is not clear which algorithm delivers optimal imaging performance. Here, we investigate systematically how amplitude and phase information have an impact on the OCTA imaging performance, to establish the relationship of amplitude and phase stability with OCT signal-to-noise ratio (SNR), time interval and particle dynamics. With either repeated A-scan or repeated B-scan imaging protocols, the amplitude noise increases with the increase of OCT SNR; however, the phase noise does the opposite, i.e. it increases with the decrease of OCT SNR. Coupled with experimental measurements, we utilize a simple Monte Carlo (MC) model to simulate the performance of amplitude-, phase- and complex-based algorithms for OCTA imaging, the results of which suggest that complex-based algorithms deliver the best performance when the phase noise is  <  ~40 mrad. We also conduct a series of in vivo vascular imaging in animal models and human retina to verify the findings from the MC model through assessing the OCTA performance metrics of vessel connectivity, image SNR and contrast-to-noise ratio. We show that for all the metrics assessed, the complex-based algorithm delivers better performance than either the amplitude- or phase-based algorithms for both the repeated A-scan and the B-scan imaging protocols, which agrees well with the conclusion drawn from the MC simulations.

  13. Groupwise registration of MR brain images with tumors.

    PubMed

    Tang, Zhenyu; Wu, Yihong; Fan, Yong

    2017-08-04

    A novel groupwise image registration framework is developed for registering MR brain images with tumors. Our method iteratively estimates a normal-appearance counterpart for each tumor image to be registered and constructs a directed graph (digraph) of normal-appearance images to guide the groupwise image registration. In particular, our method maps each tumor image to its normal-appearance counterpart by identifying and inpainting brain tumor regions with intensity information estimated using a low-rank plus sparse matrix decomposition based image representation technique. The estimated normal-appearance images are registered groupwise to a group center image, guided by a digraph of images so that the total length of 'image registration paths' is minimized, and then the original tumor images are warped to the group center image using the resulting deformation fields. We have evaluated our method based on both simulated and real MR brain tumor images. The registration results were evaluated with overlap measures of corresponding brain regions and average entropy of image intensity information, and Wilcoxon signed rank tests were adopted to compare different methods with respect to their regional overlap measures. Compared with a groupwise image registration method that is applied to normal-appearance images estimated using the traditional low-rank plus sparse matrix decomposition based image inpainting, our method achieved higher image registration accuracy with statistical significance (p = 7.02 × 10⁻⁹).

  14. B-spline based image tracking by detection

    NASA Astrophysics Data System (ADS)

    Balaji, Bhashyam; Sithiravel, Rajiv; Damini, Anthony; Kirubarajan, Thiagalingam; Rajan, Sreeraman

    2016-05-01

    Visual image tracking involves the estimation of the motion of any desired targets in a surveillance region using a sequence of images. A standard method of isolating moving targets in image tracking uses background subtraction. The standard background subtraction method is often impacted by irrelevant information in the images, which can lead to poor performance in image-based target tracking. In this paper, a B-Spline based image tracking is implemented. The novel method models the background and foreground using the B-Spline method followed by a tracking-by-detection algorithm. The effectiveness of the proposed algorithm is demonstrated.
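    A minimal tracking-by-detection sketch, assuming OpenCV 4.x: a standard Gaussian-mixture background subtractor (MOG2) stands in for the paper's B-spline background/foreground model, and detections are returned as bounding boxes per frame. Parameters are illustrative.

```python
# Sketch: background subtraction followed by per-frame detection.
import cv2

def detect_moving_targets(frames, min_area=100):
    """Yield a list of bounding boxes (x, y, w, h) for each frame."""
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    for frame in frames:
        mask = subtractor.apply(frame)                       # foreground mask
        mask = cv2.medianBlur(mask, 5)                       # suppress speckle
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        yield [cv2.boundingRect(c) for c in contours
               if cv2.contourArea(c) >= min_area]
```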

  15. Design and realization of retina-like three-dimensional imaging based on a MOEMS mirror

    NASA Astrophysics Data System (ADS)

    Cao, Jie; Hao, Qun; Xia, Wenze; Peng, Yuxin; Cheng, Yang; Mu, Jiaxing; Wang, Peng

    2016-07-01

    To balance the conflicting demands of high-resolution, large-field-of-view and real-time imaging, a retina-like imaging method based on time-of-flight (TOF) is proposed. Mathematical models of 3D imaging based on MOEMS are developed. Based on this method, we perform simulations of retina-like scanning properties, including compression of redundant information and rotation and scaling invariance. To validate the theory, we develop a prototype and conduct relevant experiments. The preliminary results agree well with the simulations.

  16. Infrared and visible image fusion based on robust principal component analysis and compressed sensing

    NASA Astrophysics Data System (ADS)

    Li, Jun; Song, Minghui; Peng, Yuanxi

    2018-03-01

    Current infrared and visible image fusion methods do not achieve adequate information extraction, i.e., they cannot extract the target information from infrared images while retaining the background information from visible images. Moreover, most of them have high complexity and are time-consuming. This paper proposes an efficient image fusion framework for infrared and visible images on the basis of robust principal component analysis (RPCA) and compressed sensing (CS). The novel framework consists of three phases. First, RPCA decomposition is applied to the infrared and visible images to obtain their sparse and low-rank components, which represent the salient features and background information of the images, respectively. Second, the sparse and low-rank coefficients are fused by different strategies. On the one hand, the measurements of the sparse coefficients are obtained by the random Gaussian matrix, and they are then fused by the standard deviation (SD) based fusion rule. Next, the fused sparse component is obtained by reconstructing the result of the fused measurement using the fast continuous linearized augmented Lagrangian algorithm (FCLALM). On the other hand, the low-rank coefficients are fused using the max-absolute rule. Subsequently, the fused image is superposed by the fused sparse and low-rank components. For comparison, several popular fusion algorithms are tested experimentally. By comparing the fused results subjectively and objectively, we find that the proposed framework can extract the infrared targets while retaining the background information in the visible images. Thus, it exhibits state-of-the-art performance in terms of both fusion effects and timeliness.
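    A hedged sketch of the two fusion rules named above, applied directly to already-decomposed components (the compressed-sensing measurement and reconstruction steps are omitted for brevity): max-absolute selection for the low-rank parts, and weighting by local standard deviation for the sparse parts. The window size is an assumption.

```python
# Sketch: fuse low-rank and sparse components of IR and visible images.
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_low_rank(lr_ir, lr_vis):
    """Max-absolute rule: keep the coefficient with larger magnitude."""
    return np.where(np.abs(lr_ir) >= np.abs(lr_vis), lr_ir, lr_vis)

def fuse_sparse(sp_ir, sp_vis, win=9):
    """SD-based rule: weight each source by its local standard deviation."""
    def local_sd(x):
        mean = uniform_filter(x, win)
        return np.sqrt(np.maximum(uniform_filter(x * x, win) - mean ** 2, 0))
    w_ir, w_vis = local_sd(sp_ir), local_sd(sp_vis)
    total = w_ir + w_vis + 1e-12
    return (w_ir * sp_ir + w_vis * sp_vis) / total

def fuse(lr_ir, sp_ir, lr_vis, sp_vis):
    """Superpose the fused low-rank and sparse components."""
    return fuse_low_rank(lr_ir, lr_vis) + fuse_sparse(sp_ir, sp_vis)
```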

  17. Some technical considerations on the evolution of the IBIS system. [Image Based Information System

    NASA Technical Reports Server (NTRS)

    Bryant, N. A.; Zobrist, A. L.

    1982-01-01

    In connection with work related to the use of earth-resources images, it became apparent by 1974, that certain system improvements are necessary for the efficient processing of digital data. To resolve this dilemma, Billingsley and Bryant (1975) proposed the use of image processing technology. Bryant and Zobrist (1976) reported the development of the Image Based Information System (IBIS) as a subset of an overall Video Image Communication and Retrieval (VICAR) image processing system. A description of IBIS is presented, and its employment in connection with advanced applications is discussed. It is concluded that several important lessons have been learned from the development of IBIS. The development of a flexible system such as IBIS is found to rest upon the prior development of a general purpose image processing system, such as VICAR.

  18. Application and evaluation of ISVR method in QuickBird image fusion

    NASA Astrophysics Data System (ADS)

    Cheng, Bo; Song, Xiaolu

    2014-05-01

    QuickBird satellite images are widely used in many fields, and applications have put forward high requirements for the integration of the spatial and spectral information of the imagery. A fusion method for high-resolution remote sensing images based on ISVR is presented in this study. The core principle of ISVR is to work in radiance rather than raw digital numbers, which removes the effect of the differing gains and errors of the satellites' sensors. After transformation from DN to radiance, the multispectral image's energy is used to simulate the panchromatic band. A linear regression analysis is carried out in the simulation process to find a new synthetic panchromatic image that is highly linearly correlated with the original panchromatic image. In order to evaluate, test and compare the algorithm results, this paper used ISVR and two other fusion methods in a comparative study of the spatial and spectral information, taking the average gradient and the correlation coefficient as indicators. Experiments showed that this method can significantly improve the quality of the fused image, especially in preserving the spectral information of the original multispectral images while maintaining abundant spatial information.
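    A hedged sketch of the regression step only: a linear combination of the multispectral radiance bands is fitted to reproduce the panchromatic band, yielding the synthetic panchromatic image used in ratio-based fusion. The assumption that no intercept is used, and that both inputs are already at a common grid, is illustrative.

```python
# Sketch: fit a synthetic panchromatic band from multispectral radiance.
import numpy as np

def simulate_panchromatic(ms_radiance, pan_radiance):
    """ms_radiance: (bands, H, W); pan_radiance: (H, W) on the same grid."""
    bands, h, w = ms_radiance.shape
    A = ms_radiance.reshape(bands, -1).T            # one row per pixel
    b = pan_radiance.ravel()
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares band weights
    synthetic = A @ coeffs
    return synthetic.reshape(h, w), coeffs
```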

  19. Multi-scale image segmentation method with visual saliency constraints and its application

    NASA Astrophysics Data System (ADS)

    Chen, Yan; Yu, Jie; Sun, Kaimin

    2018-03-01

    Object-based image analysis methods have many advantages over pixel-based methods, so they are one of the current research hotspots. It is very important to obtain image objects by multi-scale image segmentation in order to carry out object-based image analysis. The current popular image segmentation methods mainly share the bottom-up segmentation principle, which is simple to realize, and the object boundaries obtained are accurate. However, the macro statistical characteristics of the image areas are difficult to take into account, and fragmented segmentation (or over-segmentation) results are difficult to avoid. In addition, when it comes to information extraction, target recognition and other applications, image targets are not equally important, i.e., some specific targets or target groups with particular features are worth more attention than the others. To avoid the problem of over-segmentation and highlight the targets of interest, this paper proposes a multi-scale image segmentation method with visual saliency constraints. Visual saliency theory and a typical feature extraction method are adopted to obtain the visual saliency information, especially the macroscopic information to be analyzed. The visual saliency information is used as a distribution map of homogeneity weight, where each pixel is given a weight. This weight acts as one of the merging constraints in the multi-scale image segmentation. As a result, pixels that macroscopically belong to the same object but are locally different are more likely to be assigned to the same object. In addition, owing to the constraint of the visual saliency model, the influence of local versus macroscopic characteristics can be well controlled during the segmentation process for different objects. These controls improve the completeness of visually salient areas in the segmentation results while diluting the controlling effect for non-salient background areas. Experiments show that this method works better for texture image segmentation than traditional multi-scale image segmentation methods, and can enable us to give priority control to the salient objects of interest. This method has been used in image quality evaluation, scattered residential area extraction, sparse forest extraction and other applications to verify its validity. All applications showed good results.

  20. Effect of Reading Ability and Internet Experience on Keyword-Based Image Search

    ERIC Educational Resources Information Center

    Lei, Pei-Lan; Lin, Sunny S. J.; Sun, Chuen-Tsai

    2013-01-01

    Image searches are now crucial for obtaining information, constructing knowledge, and building successful educational outcomes. We investigated how reading ability and Internet experience influence keyword-based image search behaviors and performance. We categorized 58 junior-high-school students into four groups of high/low reading ability and…

  1. Fusion of spectral and panchromatic images using false color mapping and wavelet integrated approach

    NASA Astrophysics Data System (ADS)

    Zhao, Yongqiang; Pan, Quan; Zhang, Hongcai

    2006-01-01

    With the development of sensor technology, new image sensors have been introduced that provide a greater range of information to users. However, because of the limited radiation power available, there will always be some trade-off between spatial and spectral resolution in the images captured by specific sensors. Images with high spatial resolution can locate objects with high accuracy, whereas images with high spectral resolution can be used to identify materials. Many applications in remote sensing require fusing low-resolution imaging spectral images with panchromatic images to identify materials at high resolution in clutter. A pixel-based false color mapping and wavelet transform integrated fusion algorithm is presented in this paper; the resulting images have a higher information content than either of the original images and retain sensor-specific image information. The simulation results show that this algorithm can enhance the visibility of certain details and preserve the differences between different materials.
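    An illustrative sketch of the wavelet-integration step only (the false-colour mapping is omitted), assuming the PyWavelets library: a resampled spectral band is fused with the panchromatic image by averaging approximation coefficients and taking the max-absolute detail coefficients. The wavelet choice and fusion rules are assumptions, not the paper's.

```python
# Sketch: single-level wavelet fusion of a spectral band with a pan image.
import numpy as np
import pywt

def wavelet_fuse(band, pan, wavelet="haar"):
    cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(band, wavelet)
    cA_p, (cH_p, cV_p, cD_p) = pywt.dwt2(pan, wavelet)
    cA = 0.5 * (cA_b + cA_p)                 # average low-frequency content
    pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
    details = (pick(cH_b, cH_p), pick(cV_b, cV_p), pick(cD_b, cD_p))
    return pywt.idwt2((cA, details), wavelet)
```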

  2. Space based topographic mapping experiment using Seasat synthetic aperture radar and LANDSAT 3 return beam vidicon imagery

    NASA Technical Reports Server (NTRS)

    Mader, G. L.

    1981-01-01

    A technique for producing topographic information is described which is based on same-side/same-time viewing using a dissimilar combination of radar imagery and photographic images. Common geographic areas viewed from similar space reference locations produce scene elevation displacements in opposite directions, and proper use of this characteristic can yield the perspective information necessary for the determination of base-to-height ratios. These base-to-height ratios can in turn be used to produce a topographic map. A test area covering the Harrisburg, Pennsylvania region was observed by the synthetic aperture radar on the Seasat satellite and by the return beam vidicon on the LANDSAT-3 satellite. The techniques developed for the scaling, re-orientation and common registration of the two images are presented along with the topographic determination data. Topographic determination based exclusively on the image content is compared to the map information, which is used as a performance calibration base.

  3. Content-based image retrieval by matching hierarchical attributed region adjacency graphs

    NASA Astrophysics Data System (ADS)

    Fischer, Benedikt; Thies, Christian J.; Guld, Mark O.; Lehmann, Thomas M.

    2004-05-01

    Content-based image retrieval requires a formal description of visual information. In medical applications, all relevant biological objects have to be represented by this description. Although color as the primary feature has proven successful in publicly available retrieval systems of general purpose, this description is not applicable to most medical images. Additionally, it has been shown that global features characterizing the whole image do not lead to acceptable results in the medical context or that they are only suitable for specific applications. For a general purpose content-based comparison of medical images, local, i.e. regional features that are collected on multiple scales must be used. A hierarchical attributed region adjacency graph (HARAG) provides such a representation and transfers image comparison to graph matching. However, building a HARAG from an image requires a restriction in size to be computationally feasible while at the same time all visually plausible information must be preserved. For this purpose, mechanisms for the reduction of the graph size are presented. Even with a reduced graph, the problem of graph matching remains NP-complete. In this paper, the Similarity Flooding approach and Hopfield-style neural networks are adapted from the graph matching community to the needs of HARAG comparison. Based on synthetic image material build from simple geometric objects, all visually similar regions were matched accordingly showing the framework's general applicability to content-based image retrieval of medical images.

  4. Quick multitemporal approach to get cloudless improved multispectral imagery for large geographical areas

    NASA Astrophysics Data System (ADS)

    Colaninno, Nicola; Marambio Castillo, Alejandro; Roca Cladera, Josep

    2017-10-01

    The demand for remotely sensed data is growing steadily, due to the possibility of managing information about huge geographic areas, in digital format, at different time periods, and suitable for analysis in GIS platforms. However, primary satellite information is not as immediate as desirable. Besides geometric and atmospheric limitations, clouds, cloud shadows, and haze generally contaminate optical images. In terms of land cover, such contamination is treated as missing information and should be replaced. Generally, image reconstruction is classified according to three main approaches, i.e. in-painting-based, multispectral-based, and multitemporal-based methods. This work relies on a multitemporal-based approach to retrieve uncontaminated pixels for an image scene. We explore an automatic method for quickly getting daytime cloudless and shadow-free imagery at moderate spatial resolution for large geographical areas. The process involves two main steps: a multitemporal effect adjustment to avoid significant seasonal variations, and a data reconstruction phase, based on the automatic selection of uncontaminated pixels from an image stack. The result is a composite image based on the middle values of the stack over a year. The assumption is that, for specific purposes, land cover changes at a coarse scale are not significant over relatively short time periods. Because it is widely recognized that satellite imagery over tropical areas is generally strongly affected by clouds, the methodology is tested on the case study of the Dominican Republic for the year 2015, using Landsat 8 imagery.
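    A minimal sketch of the compositing idea: after per-scene adjustment and cloud/shadow masking, the per-pixel "middle value" of the yearly stack is taken so that contaminated observations are ignored. Implementing the middle value as a median, and the mask convention (True = contaminated), are assumptions made for this illustration.

```python
# Sketch: per-pixel yearly composite from a masked image stack.
import numpy as np

def yearly_composite(stack, masks):
    """stack: (scenes, H, W) reflectance; masks: same shape, True where contaminated."""
    data = np.where(masks, np.nan, stack.astype(np.float32))
    # Per-pixel median over the year; pixels with no clear observation stay NaN.
    return np.nanmedian(data, axis=0)
```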

  5. Comparing the quality of accessing medical literature using content-based visual and textual information retrieval

    NASA Astrophysics Data System (ADS)

    Müller, Henning; Kalpathy-Cramer, Jayashree; Kahn, Charles E., Jr.; Hersh, William

    2009-02-01

    Content-based visual information (or image) retrieval (CBIR) has been an extremely active research domain within medical imaging over the past ten years, with the goal of improving the management of visual medical information. Many technical solutions have been proposed, and application scenarios for image retrieval as well as image classification have been set up. However, in contrast to medical information retrieval using textual methods, visual retrieval has only rarely been applied in clinical practice. This is despite the large amount and variety of visual information produced in hospitals every day. This information overload imposes a significant burden upon clinicians, and CBIR technologies have the potential to help the situation. However, in order for CBIR to become an accepted clinical tool, it must demonstrate a higher level of technical maturity than it has to date. Since 2004, the ImageCLEF benchmark has included a task for the comparison of visual information retrieval algorithms for medical applications. In 2005, a task for medical image classification was introduced and both tasks have been run successfully for the past four years. These benchmarks allow an annual comparison of visual retrieval techniques based on the same data sets and the same query tasks, enabling the meaningful comparison of various retrieval techniques. The datasets used from 2004-2007 contained images and annotations from medical teaching files. In 2008, however, the dataset used was made up of 67,000 images (along with their associated figure captions and the full text of their corresponding articles) from two Radiological Society of North America (RSNA) scientific journals. This article describes the results of the medical image retrieval task of the ImageCLEF 2008 evaluation campaign. We compare the retrieval results of both visual and textual information retrieval systems from 15 research groups on the aforementioned data set. The results show clearly that, currently, visual retrieval alone does not achieve the performance necessary for real-world clinical applications. Most of the common visual retrieval techniques have a MAP (Mean Average Precision) of around 2-3%, which is much lower than that achieved using textual retrieval (MAP=29%). Advanced machine learning techniques, together with good training data, have been shown to improve the performance of visual retrieval systems in the past. Multimodal retrieval (basing retrieval on both visual and textual information) can achieve better results than purely visual, but only when carefully applied. In many cases, multimodal retrieval systems performed even worse than purely textual retrieval systems. On the other hand, some multimodal retrieval systems demonstrated significantly increased early precision, which has been shown to be a desirable behavior in real-world systems.

  6. High-fidelity video and still-image communication based on spectral information: natural vision system and its applications

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Masahiro; Haneishi, Hideaki; Fukuda, Hiroyuki; Kishimoto, Junko; Kanazawa, Hiroshi; Tsuchida, Masaru; Iwama, Ryo; Ohyama, Nagaaki

    2006-01-01

    In addition to the great advancement of high-resolution and large-screen imaging technology, the issue of color is now receiving considerable attention as another aspect than the image resolution. It is difficult to reproduce the original color of subject in conventional imaging systems, and that obstructs the applications of visual communication systems in telemedicine, electronic commerce, and digital museum. To breakthrough the limitation of conventional RGB 3-primary systems, "Natural Vision" project aims at an innovative video and still-image communication technology with high-fidelity color reproduction capability, based on spectral information. This paper summarizes the results of NV project including the development of multispectral and multiprimary imaging technologies and the experimental investigations on the applications to medicine, digital archives, electronic commerce, and computer graphics.

  7. Quality assessment of remote sensing image fusion using feature-based fourth-order correlation coefficient

    NASA Astrophysics Data System (ADS)

    Ma, Dan; Liu, Jun; Chen, Kai; Li, Huali; Liu, Ping; Chen, Huijuan; Qian, Jing

    2016-04-01

    In remote sensing fusion, the spatial details of a panchromatic (PAN) image and the spectral information of multispectral (MS) images are transferred into the fused image according to the characteristics of the human visual system. Thus, a remote sensing image fusion quality assessment called the feature-based fourth-order correlation coefficient (FFOCC) is proposed. FFOCC is based on the feature-based coefficient concept. Spatial features related to the spatial details of the PAN image and spectral features related to the spectral information of the MS images are first extracted from the fused image. Then, the fourth-order correlation coefficient between the spatial and spectral features is calculated and treated as the assessment result. FFOCC was then compared with existing widely used indices, such as the Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS) and the quality-with-no-reference index. Results of the fusion and distortion experiments indicate that FFOCC is consistent with subjective evaluation. FFOCC significantly outperforms the other indices in evaluating fusion images that are produced by different fusion methods and that are distorted in spatial and spectral features by blurring, adding noise, and changing intensity. All the findings indicate that the proposed method is an objective and effective quality assessment for remote sensing image fusion.

  8. Image object recognition based on the Zernike moment and neural networks

    NASA Astrophysics Data System (ADS)

    Wan, Jianwei; Wang, Ling; Huang, Fukan; Zhou, Liangzhu

    1998-03-01

    This paper first gives a comprehensive discussion of the concept of artificial neural networks, their research methods and their relation to information processing. On the basis of this discussion, we expound the mathematical similarity between artificial neural networks and information processing. Then, the paper presents a new method of image recognition based on invariant features and a neural network, using the image Zernike transform. The method not only has invariance to rotation, shift and scale of the image object, but also has good fault tolerance and robustness. It is also compared with a statistical classifier and an invariant-moments recognition method.
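    A hedged sketch of the pipeline, assuming the mahotas and scikit-learn libraries: rotation-invariant Zernike-moment magnitudes as features, fed to a small neural network classifier. The radius, moment degree and network size are illustrative assumptions.

```python
# Sketch: Zernike-moment features plus a neural-network classifier.
import numpy as np
import mahotas
from sklearn.neural_network import MLPClassifier

def zernike_features(binary_object, radius=32, degree=8):
    """Zernike moment magnitudes are invariant to rotation of the object."""
    return mahotas.features.zernike_moments(binary_object, radius, degree=degree)

def train_recognizer(objects, labels):
    """objects: list of 2D binary object images; labels: class labels."""
    X = np.stack([zernike_features(o) for o in objects])
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    return clf.fit(X, labels)
```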

  9. Unsupervised change detection of multispectral images based on spatial constraint chi-squared transform and Markov random field model

    NASA Astrophysics Data System (ADS)

    Shi, Aiye; Wang, Chao; Shen, Shaohong; Huang, Fengchen; Ma, Zhenli

    2016-10-01

    Chi-squared transform (CST), as a statistical method, can describe the difference degree between vectors. The CST-based methods operate directly on information stored in the difference image and are simple and effective methods for detecting changes in remotely sensed images that have been registered and aligned. However, the technique does not take spatial information into consideration, which leads to much noise in the result of change detection. An improved unsupervised change detection method is proposed based on spatial constraint CST (SCCST) in combination with a Markov random field (MRF) model. First, the mean and variance matrix of the difference image of bitemporal images are estimated by an iterative trimming method. In each iteration, spatial information is injected to reduce scattered changed points (also known as "salt and pepper" noise). To determine the key parameter confidence level in the SCCST method, a pseudotraining dataset is constructed to estimate the optimal value. Then, the result of SCCST, as an initial solution of change detection, is further improved by the MRF model. The experiments on simulated and real multitemporal and multispectral images indicate that the proposed method performs well in comprehensive indices compared with other methods.
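    A sketch of the plain chi-squared transform that SCCST builds on, assuming SciPy: the Mahalanobis distance of each difference-image pixel approximately follows a chi-square distribution with one degree of freedom per band, so a chosen confidence level gives a change threshold. The spatial constraint and the MRF refinement described above are not included here.

```python
# Sketch: CST-style change mask from a multispectral difference image.
import numpy as np
from scipy.stats import chi2

def chi_squared_change_mask(image_t1, image_t2, confidence=0.99):
    """Images have shape (bands, H, W); returns a boolean change mask."""
    bands = image_t1.shape[0]
    diff = (image_t2 - image_t1).reshape(bands, -1).T      # pixels x bands
    mean = diff.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(diff, rowvar=False))
    centred = diff - mean
    y = np.einsum("ij,jk,ik->i", centred, cov_inv, centred)  # squared Mahalanobis
    threshold = chi2.ppf(confidence, df=bands)
    return (y > threshold).reshape(image_t1.shape[1:])
```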

  10. Line fitting based feature extraction for object recognition

    NASA Astrophysics Data System (ADS)

    Li, Bing

    2014-06-01

    Image feature extraction plays a significant role in image based pattern applications. In this paper, we propose a new approach to generate hierarchical features. This new approach applies line fitting to adaptively divide regions based upon the amount of information and creates line fitting features for each subsequent region. It overcomes the feature wasting drawback of the wavelet based approach and demonstrates high performance in real applications. For gray scale images, we propose a diffusion equation approach to map information-rich pixels (pixels near edges and ridge pixels) into high values, and pixels in homogeneous regions into small values near zero that form energy map images. After the energy map images are generated, we propose a line fitting approach to divide regions recursively and create features for each region simultaneously. This new feature extraction approach is similar to wavelet based hierarchical feature extraction in which high layer features represent global characteristics and low layer features represent local characteristics. However, the new approach uses line fitting to adaptively focus on information-rich regions so that we avoid the feature waste problems of the wavelet approach in homogeneous regions. Finally, the experiments for handwriting word recognition show that the new method provides higher performance than the regular handwriting word recognition approach.

  11. BIRAM: a content-based image retrieval framework for medical images

    NASA Astrophysics Data System (ADS)

    Moreno, Ramon A.; Furuie, Sergio S.

    2006-03-01

    In the medical field, digital images are becoming more and more important for diagnostics and therapy of the patients. At the same time, the development of new technologies has increased the amount of image data produced in a hospital. This creates a demand for access methods that offer more than text-based queries for retrieval of the information. In this paper is proposed a framework for the retrieval of medical images that allows the use of different algorithms for the search of medical images by similarity. The framework also enables the search for textual information from an associated medical report and DICOM header information. The proposed system can be used for support of clinical decision making and is intended to be integrated with an open source picture, archiving and communication systems (PACS). The BIRAM has the following advantages: (i) Can receive several types of algorithms for image similarity search; (ii) Allows the codification of the report according to a medical dictionary, improving the indexing of the information and retrieval; (iii) The algorithms can be selectively applied to images with the appropriated characteristics, for instance, only in magnetic resonance images. The framework was implemented in Java language using a MS Access 97 database. The proposed framework can still be improved, by the use of regions of interest (ROI), indexing with slim-trees and integration with a PACS Server.

  12. A novel edge based embedding in medical images based on unique key generated using sudoku puzzle design.

    PubMed

    Santhi, B; Dheeptha, B

    2016-01-01

    The field of telemedicine has gained immense momentum, owing to the need for transmitting patients' information securely. This paper puts forth a unique method for embedding data in medical images. It is based on edge-based embedding and XOR coding. The algorithm proposes a novel key generation technique that utilizes the design of a sudoku puzzle to enhance the security of the transmitted message. The edge blocks of the cover image alone are utilized to embed the payloads. The least significant bits of the pixel values are changed by XOR coding depending on the data to be embedded and the key generated. Hence the distortion in the stego image is minimized and the information is retrieved accurately. Data is embedded in the RGB planes of the cover image, thus increasing its embedding capacity. Several measures, including peak signal-to-noise ratio (PSNR), mean square error (MSE), universal image quality index (UIQI) and correlation coefficient (R), have been used to analyze the quality of the stego image. It is evident from the results that the proposed technique outperforms the former methodologies.
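    A simplified sketch of edge-restricted LSB/XOR embedding on a single image plane. A pseudo-random keystream stands in for the sudoku-derived key, and a simple gradient test stands in for the paper's edge-block selection; both substitutions are assumptions made for illustration.

```python
# Sketch: embed payload bits into the LSBs of edge pixels, XORed with a key.
import numpy as np

def embed_bits(cover, payload_bits, key_seed=1234, edge_threshold=30):
    """cover: uint8 grayscale plane; payload_bits: sequence of 0/1 values."""
    stego = cover.copy()
    gy, gx = np.gradient(cover.astype(np.int16))
    edge_mask = np.hypot(gx, gy) > edge_threshold        # embed only near edges
    positions = np.flatnonzero(edge_mask)                # flat pixel indices
    rng = np.random.default_rng(key_seed)                # stand-in for sudoku key
    key_bits = rng.integers(0, 2, size=len(payload_bits))
    for pos, bit, k in zip(positions, payload_bits, key_bits):
        coded = int(bit) ^ int(k)                        # XOR with key stream
        stego.flat[pos] = (stego.flat[pos] & 0xFE) | coded   # replace the LSB
    return stego, edge_mask
```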

  13. Video Information Communication and Retrieval/Image Based Information System (VICAR/IBIS)

    NASA Technical Reports Server (NTRS)

    Wherry, D. B.

    1981-01-01

    The acquisition, operation, and planning stages of installing a VICAR/IBIS system are described. The system operates in an IBM mainframe environment, and provides image processing of raster data. System support problems with software and documentation are discussed.

  14. Automatic Matching of Large Scale Images and Terrestrial LIDAR Based on App Synergy of Mobile Phone

    NASA Astrophysics Data System (ADS)

    Xia, G.; Hu, C.

    2018-04-01

    The digitization of cultural heritage based on ground laser scanning technology has been widely applied. High-precision scanning and high-resolution photography of cultural relics are the main methods of data acquisition. Reconstruction with a complete point cloud and high-resolution images requires the matching of images and point clouds, the acquisition of corresponding feature points, data registration, etc. However, establishing the one-to-one correspondence between an image and its corresponding point cloud depends on inefficient manual search. The effective classification and management of a large number of images, and the matching of large images to their corresponding point clouds, are therefore the focus of this research. In this paper, we propose automatic matching of large-scale images and terrestrial LiDAR based on the app synergy of a mobile phone. Firstly, we develop an Android app to take pictures and record the related classification information. Secondly, all the images are automatically grouped using the recorded information. Thirdly, a matching algorithm is used to match the global and local images. According to the one-to-one correspondence between the global image and the point cloud reflection intensity image, the automatic matching of the image and its corresponding laser radar point cloud is realized. Finally, the mapping relationship between the global image, the local images and the intensity image is established according to corresponding feature points, so we can establish the data structure of the global image, the local images within the global image, and the point cloud corresponding to each local image, and carry out visual management and querying of the images.

  15. Development of a mobile emergency patient information and imaging communication system based on CDMA-1X EVDO

    NASA Astrophysics Data System (ADS)

    Yang, Keon Ho; Jung, Haijo; Kang, Won-Suk; Jang, Bong Mun; Kim, Joong Il; Han, Dong Hoon; Yoo, Sun-Kook; Yoo, Hyung-Sik; Kim, Hee-Joung

    2006-03-01

    The wireless mobile service with a high bit rate using CDMA-1X EVDO is now widely used in Korea. Mobile devices are also increasingly being used as the conventional communication mechanism. We have developed a web-based mobile system that communicates patient information and images, using CDMA-1X EVDO for emergency diagnosis. It is composed of a Mobile web application system using the Microsoft Windows 2003 server and an internet information service. Also, a mobile web PACS used for a database managing patient information and images was developed by using Microsoft access 2003. A wireless mobile emergency patient information and imaging communication system is developed by using Microsoft Visual Studio.NET, and JPEG 2000 ActiveX control for PDA phone was developed by using the Microsoft Embedded Visual C++. Also, the CDMA-1X EVDO is used for connections between mobile web servers and the PDA phone. This system allows fast access to the patient information database, storing both medical images and patient information anytime and anywhere. Especially, images were compressed into a JPEG2000 format and transmitted from a mobile web PACS inside the hospital to the radiologist using a PDA phone located outside the hospital. Also, this system shows radiological images as well as physiological signal data, including blood pressure, vital signs and so on, in the web browser of the PDA phone so radiologists can diagnose more effectively. Also, we acquired good results using an RW-6100 PDA phone used in the university hospital system of the Sinchon Severance Hospital in Korea.

  16. Medical Image Tamper Detection Based on Passive Image Authentication.

    PubMed

    Ulutas, Guzin; Ustubioglu, Arda; Ustubioglu, Beste; V Nabiyev, Vasif; Ulutas, Mustafa

    2017-12-01

    Telemedicine has gained popularity in recent years. Medical images can be transferred over the Internet to enable telediagnosis between medical staff and to make the patient's history accessible to medical staff from anywhere. Therefore, integrity protection of the medical image is a serious concern due to the broadcast nature of the Internet. Some watermarking techniques have been proposed to verify the integrity of medical images. However, they require embedding extra information (a watermark) into the image before transmission, which decreases the visual quality of the medical image and can cause false diagnosis. The proposed method uses a passive image authentication mechanism to detect tampered regions in medical images. Structural texture information is obtained from the medical image by using the rotation-invariant local binary pattern (LBPROT) to make the keypoint extraction techniques more successful. Keypoints on the texture image are obtained with the scale invariant feature transform (SIFT). The method detects tampered regions by matching the keypoints. The method improves keypoint-based passive image authentication mechanisms (which do not detect tampering when a smooth region is used to cover an object) by applying LBPROT before keypoint extraction, because smooth regions also carry texture information. Experimental results show that the method detects tampered regions in medical images even if the forged image has undergone attacks (Gaussian blurring/additive white Gaussian noise) or the forged regions were scaled/rotated before pasting.
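    A hedged sketch of the two feature stages named above, assuming scikit-image and OpenCV 4.4+ (where SIFT is available in the main package): rotation-invariant LBP exposes texture even in smooth regions, then SIFT keypoints are extracted from the texture image. The keypoint matching that actually localizes tampered regions is left out, and the parameters are illustrative.

```python
# Sketch: rotation-invariant LBP texture image followed by SIFT keypoints.
import numpy as np
import cv2
from skimage.feature import local_binary_pattern

def texture_keypoints(gray_image, P=8, R=1.0):
    """Return SIFT keypoints/descriptors computed on the LBP texture image."""
    lbp = local_binary_pattern(gray_image, P, R, method="ror")  # rotation invariant
    lbp_u8 = np.uint8(255 * lbp / lbp.max())
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(lbp_u8, None)
    return keypoints, descriptors
```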

  17. A graph-based approach to detect spatiotemporal dynamics in satellite image time series

    NASA Astrophysics Data System (ADS)

    Guttler, Fabio; Ienco, Dino; Nin, Jordi; Teisseire, Maguelonne; Poncelet, Pascal

    2017-08-01

    Enhancing the frequency of satellite acquisitions represents a key issue for Earth Observation community nowadays. Repeated observations are crucial for monitoring purposes, particularly when intra-annual process should be taken into account. Time series of images constitute a valuable source of information in these cases. The goal of this paper is to propose a new methodological framework to automatically detect and extract spatiotemporal information from satellite image time series (SITS). Existing methods dealing with such kind of data are usually classification-oriented and cannot provide information about evolutions and temporal behaviors. In this paper we propose a graph-based strategy that combines object-based image analysis (OBIA) with data mining techniques. Image objects computed at each individual timestamp are connected across the time series and generates a set of evolution graphs. Each evolution graph is associated to a particular area within the study site and stores information about its temporal evolution. Such information can be deeply explored at the evolution graph scale or used to compare the graphs and supply a general picture at the study site scale. We validated our framework on two study sites located in the South of France and involving different types of natural, semi-natural and agricultural areas. The results obtained from a Landsat SITS support the quality of the methodological approach and illustrate how the framework can be employed to extract and characterize spatiotemporal dynamics.

  18. Restoration of color in a remote sensing image and its quality evaluation

    NASA Astrophysics Data System (ADS)

    Zhang, Zuxun; Li, Zhijiang; Zhang, Jianqing; Wang, Zhihe

    2003-09-01

    This paper focuses on the restoration of color remote sensing images (including airborne photographs). A complete approach is recommended. It proposes that two main aspects should be addressed when restoring a remote sensing image: the restoration of spatial information and the restoration of photometric information. In this proposal, the restoration of spatial information is performed by using the modulation transfer function (MTF) as the degradation function, where the MTF is obtained by measuring the edge curve of the original image. The restoration of photometric information is performed by an improved local maximum entropy algorithm. Furthermore, a practical approach for processing color remote sensing images is recommended: the color image is split into three monochromatic images corresponding to the three visible light bands, and the three images are synthesized after being processed separately under psychological color vision constraints. Finally, three novel evaluation variables based on image restoration are introduced to evaluate the restoration quality in terms of both spatial and photometric restoration. An evaluation is provided at the end.

  19. Tissues segmentation based on multi spectral medical images

    NASA Astrophysics Data System (ADS)

    Li, Ya; Wang, Ying

    2017-11-01

    In multispectral medical images, each band image contains the most distinct features of a particular tissue, according to the optical characteristics of different tissues in different specific bands. In this paper, the tissues were segmented using their spectral information in each band of the multispectral medical images. Four Local Binary Pattern descriptors were constructed to extract blood vessels based on the gray-level difference between the blood vessels and their neighbors. The segmented tissues from each band image were merged into a single clear image.

  20. A data mining based approach to predict spatiotemporal changes in satellite images

    NASA Astrophysics Data System (ADS)

    Boulila, W.; Farah, I. R.; Ettabaa, K. Saheb; Solaiman, B.; Ghézala, H. Ben

    2011-06-01

    The interpretation of remotely sensed images in a spatiotemporal context is becoming a valuable research topic. However, the constant growth of data volume in remote sensing imaging makes reaching conclusions based on collected data a challenging task. Recently, data mining appears to be a promising research field leading to several interesting discoveries in various areas such as marketing, surveillance, fraud detection and scientific discovery. By integrating data mining and image interpretation techniques, accurate and relevant information (i.e. functional relation between observed parcels and a set of informational contents) can be automatically elicited. This study presents a new approach to predict spatiotemporal changes in satellite image databases. The proposed method exploits fuzzy sets and data mining concepts to build predictions and decisions for several remote sensing fields. It takes into account imperfections related to the spatiotemporal mining process in order to provide more accurate and reliable information about land cover changes in satellite images. The proposed approach is validated using SPOT images representing the Saint-Denis region, capital of Reunion Island. Results show good performances of the proposed framework in predicting change for the urban zone.

  1. Non-contact tissue perfusion and oxygenation imaging using a LED based multispectral and a thermal imaging system, first results of clinical intervention studies

    NASA Astrophysics Data System (ADS)

    Klaessens, John H. G. M.; Nelisse, Martin; Verdaasdonk, Rudolf M.; Noordmans, Herke Jan

    2013-03-01

    During clinical interventions, objective and quantitative information on tissue perfusion, oxygenation or temperature can inform the surgical strategy. Local (point) measurements give limited information and affected areas can easily be missed, so imaging of large areas is required. In this study an LED-based multispectral imaging system (MSI, 17 wavelengths between 370 nm and 880 nm) and a thermal camera were applied during clinical interventions: tissue flap transplantations (ENT), local anesthetic block, and open brain surgery (epileptic seizure). The images covered an area of 20x20 cm. Measurements in an operating room turned out to be more complicated than laboratory experiments because of light fluctuations, patient movement and a limited angle of view. By constantly measuring the background light and using a white reference, light fluctuations and movement were corrected. Oxygenation concentration images could be calculated and combined with the thermal images. The effectiveness of local anesthesia of a hand could be predicted at an early stage using the thermal camera, and the reperfusion of a transplanted skin flap could be imaged. During brain surgery, a temporarily hyper-perfused area was observed that was probably related to an epileptic seizure. An LED-based multispectral imaging system combined with thermal imaging provides complementary information on perfusion and oxygenation changes, and these are promising techniques for real-time diagnostics during clinical interventions.
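
    A minimal sketch, assuming the usual flat-field convention, of how a white reference and a continuously measured background (dark) frame can be used to cancel slow light fluctuations before oxygenation maps are computed; the clinical system's exact calibration chain is not described in the abstract.

    ```python
    import numpy as np

    def reflectance(raw, white, dark, eps=1e-6):
        # raw, white, dark: arrays of shape (wavelengths, H, W)
        # normalize each wavelength image so that illumination drift and the
        # camera offset cancel out
        return (raw - dark) / np.maximum(white - dark, eps)
    ```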

  2. Feature and Intensity Based Medical Image Registration Using Particle Swarm Optimization.

    PubMed

    Abdel-Basset, Mohamed; Fakhry, Ahmed E; El-Henawy, Ibrahim; Qiu, Tie; Sangaiah, Arun Kumar

    2017-11-03

    Image registration is an important aspect of medical image analysis and finds use in a variety of medical applications. Examples include diagnosis, pre/post surgery guidance, and comparing/merging/integrating images from multiple modalities such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT). Whether registering images across modalities for a single patient or registering across patients for a single modality, registration is an effective way to combine information from different images into a normalized frame of reference. Registered datasets can be used for providing information relating to the structure, function, and pathology of the organ or individual being imaged. In this paper a hybrid approach for medical image registration is developed. It employs a modified Mutual Information (MI) as a similarity metric and the Particle Swarm Optimization (PSO) method. Computation of mutual information is modified using a weighted linear combination of image intensity and image gradient vector flow (GVF) intensity. In this manner, statistical as well as spatial image information is included in the image registration process. Maximization of the modified mutual information is carried out using Particle Swarm Optimization, which is easy to implement and requires few tuning parameters. The developed approach has been tested and verified successfully on a number of medical image data sets that include images with missing parts, noise contamination, and/or of different modalities (CT, MRI). The registration results indicate that the proposed model is accurate and effective, and show the positive contribution of including both statistical and spatial image information in the developed approach.
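
    A hedged sketch of the kind of modification described above: mutual information is computed from a joint histogram, and each image is first replaced by a weighted linear combination of its intensity and a gradient-magnitude map (used here as a simple stand-in for the GVF intensity the authors employ).

    ```python
    import numpy as np

    def mutual_information(a, b, bins=64):
        # MI estimated from the joint histogram of two images
        h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        p = h / h.sum()
        px = p.sum(axis=1, keepdims=True)
        py = p.sum(axis=0, keepdims=True)
        nz = p > 0
        return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

    def combined_image(img, alpha=0.5):
        # weighted linear combination of intensity and gradient magnitude
        # (the gradient magnitude stands in for the GVF intensity)
        img = np.asarray(img, dtype=float)
        gy, gx = np.gradient(img)
        grad = np.hypot(gx, gy)
        norm = lambda x: (x - x.min()) / (x.max() - x.min() + 1e-9)
        return alpha * norm(img) + (1.0 - alpha) * norm(grad)

    def modified_mi(fixed, moving, alpha=0.5):
        # the similarity value a PSO particle would evaluate for one candidate
        # transform (the transform is assumed to be applied to `moving` first)
        return mutual_information(combined_image(fixed, alpha),
                                  combined_image(moving, alpha))
    ```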

  3. Validating a Geographical Image Retrieval System.

    ERIC Educational Resources Information Center

    Zhu, Bin; Chen, Hsinchun

    2000-01-01

    Summarizes a prototype geographical image retrieval system that demonstrates how to integrate image processing and information analysis techniques to support large-scale content-based image retrieval. Describes an experiment to validate the performance of this image retrieval system against that of human subjects by examining similarity analysis…

  4. Signal digitizing system and method based on amplitude-to-time optical mapping

    DOEpatents

    Chou, Jason; Bennett, Corey V; Hernandez, Vince

    2015-01-13

    A signal digitizing system and method based on amplitude-to-time optical mapping optically maps amplitude information of an analog signal of interest first into wavelength information, using an amplitude tunable filter (ATF) to impress spectral changes induced by the amplitude of the analog signal onto a carrier signal (a train of optical pulses), and then from wavelength information to temporal information, using a dispersive element, so that temporal information representing the amplitude information is encoded in the time domain in the carrier signal. Optical-to-electrical conversion of the optical pulses into voltage waveforms and subsequent digitization of the voltage waveforms into a digital image enable the temporal information to be resolved and quantized in the time domain. The digital image may then be digitally signal-processed to reconstruct the analog signal from the temporal information with high fidelity.

  5. Spatially weighted mutual information image registration for image guided radiation therapy.

    PubMed

    Park, Samuel B; Rhee, Frank C; Monroe, James I; Sohn, Jason W

    2010-09-01

    To develop a new metric for image registration that incorporates the (sub)pixelwise differential importance along spatial location and to demonstrate its application for image guided radiation therapy (IGRT). It is well known that rigid-body image registration with mutual information is dependent on the size and location of the image subset on which the alignment analysis is based [the designated region of interest (ROI)]. Therefore, careful review and manual adjustments of the resulting registration are frequently necessary. Although there were some investigations of weighted mutual information (WMI), these efforts could not apply the differential importance to a particular spatial location since WMI only applies the weight to the joint histogram space. The authors developed the spatially weighted mutual information (SWMI) metric by incorporating an adaptable weight function with spatial localization into mutual information. SWMI enables the user to apply the selected transform to medically "important" areas such as tumors and critical structures, so SWMI is neither dominated by, nor neglects the neighboring structures. Since SWMI can be utilized with any weight function form, the authors presented two examples of weight functions for IGRT application: A Gaussian-shaped weight function (GW) applied to a user-defined location and a structures-of-interest (SOI) based weight function. An image registration example using a synthesized 2D image is presented to illustrate the efficacy of SWMI. The convergence and feasibility of the registration method as applied to clinical imaging is illustrated by fusing a prostate treatment planning CT with a clinical cone beam CT (CBCT) image set acquired for patient alignment. Forty-one trials are run to test the speed of convergence. The authors also applied SWMI registration using two types of weight functions to two head and neck cases and a prostate case with clinically acquired CBCT/ MVCT image sets. The SWMI registration with a Gaussian weight function (SWMI-GW) was tested between two different imaging modalities: CT and MRI image sets. SWMI-GW converges 10% faster than registration using mutual information with an ROI. SWMI-GW as well as SWMI with SOI-based weight function (SWMI-SOI) shows better compensation of the target organ's deformation and neighboring critical organs' deformation. SWMI-GW was also used to successfully fuse MRI and CT images. Rigid-body image registration using our SWMI-GW and SWMI-SOI as cost functions can achieve better registration results in (a) designated image region(s) as well as faster convergence. With the theoretical foundation established, we believe SWMI could be extended to larger clinical testing.
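
    A minimal sketch of the core idea, assuming the implementation details in the paper differ: the joint histogram that feeds the mutual information estimate is accumulated with a per-voxel spatial weight (here the Gaussian-shaped example), so that medically important regions dominate the similarity value.

    ```python
    import numpy as np

    def swmi(fixed, moving, weight, bins=64):
        # each voxel pair contributes to the joint histogram in proportion to
        # its spatial weight instead of contributing equally
        h, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(),
                                 bins=bins, weights=weight.ravel())
        p = h / h.sum()
        px = p.sum(axis=1, keepdims=True)
        py = p.sum(axis=0, keepdims=True)
        nz = p > 0
        return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

    def gaussian_weight(shape, center, sigma):
        # Gaussian-shaped weight function (the GW example), centred on a
        # user-defined point such as the tumour location
        grids = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
        d2 = sum((g - c) ** 2 for g, c in zip(grids, center))
        return np.exp(-d2 / (2.0 * sigma ** 2))
    ```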

  6. In Vivo Small Animal Imaging using Micro-CT and Digital Subtraction Angiography

    PubMed Central

    Badea, C.T.; Drangova, M.; Holdsworth, D.W.; Johnson, G.A.

    2009-01-01

    Small animal imaging has a critical role in phenotyping, drug discovery, and in providing a basic understanding of mechanisms of disease. Translating imaging methods from humans to small animals is not an easy task. The purpose of this work is to review in vivo X-ray based small animal imaging, with a focus on in vivo micro-computed tomography (micro-CT) and digital subtraction angiography (DSA). We present the principles, technologies, image quality parameters and types of applications. We show that both methods can be used to provide not only morphological but also functional information, such as cardiac function estimation or perfusion. Compared to other modalities, X-ray based imaging is usually regarded as being able to provide higher throughput at lower cost and adequate resolution. The limitations are usually associated with the relatively poor contrast mechanisms and potential radiation damage due to ionizing radiation, although the use of contrast agents and careful design of studies can address these limitations. We hope that the information will effectively address how X-ray based imaging can be exploited for successful in vivo preclinical imaging. PMID:18758005

  7. Simultaneous multiplexing and encoding of multiple images based on a double random phase encryption system

    NASA Astrophysics Data System (ADS)

    Alfalou, Ayman; Mansour, Ali

    2009-09-01

    Nowadays, protecting information is a major issue in any transmission system, as shown by an increasing number of research papers related to this topic. Optical encoding methods, such as the Double Random Phase (DRP) encryption system, are widely used and cited in the literature. DRP systems have a very simple principle and are easily applicable to most images (black-and-white, gray-level or color). Moreover, some applications require an enhanced encoding level based on a multi-encryption scheme including biometric keys (such as digital fingerprints). The enhancement should be achieved without increasing the amount of transmitted or stored information. To reach that goal, a new approach for the simultaneous multiplexing and encoding of several target images is developed in this manuscript. By introducing two additional security levels, our approach enhances the security of a classic DRP system. The first security level consists of using several independent image-keys (random and structured) along with a new multiplexing algorithm; at this level, several target images are used (multi-encryption), which reduces the amount of information that must be encoded. At the second level, a standard DRP system is included. Finally, our approach can detect whether any tampering attempt has been made on the transmitted encrypted images.
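
    For reference, a numpy sketch of the baseline DRP scheme the paper builds on (not the authors' multiplexing extension): encode with two random phase masks around a Fourier transform, and decode with their conjugates.

    ```python
    import numpy as np

    def drp_encrypt(img, rng):
        # two statistically independent random phase masks act as the keys
        phase1 = np.exp(2j * np.pi * rng.random(img.shape))
        phase2 = np.exp(2j * np.pi * rng.random(img.shape))
        encrypted = np.fft.ifft2(np.fft.fft2(img * phase1) * phase2)
        return encrypted, (phase1, phase2)

    def drp_decrypt(encrypted, keys):
        phase1, phase2 = keys
        field = np.fft.ifft2(np.fft.fft2(encrypted) * np.conj(phase2))
        return np.abs(field * np.conj(phase1))

    # usage: rng = np.random.default_rng(42); cipher, keys = drp_encrypt(img, rng)
    ```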

  8. MRI (Magnetic Resonance Imaging)

    MedlinePlus

  9. Registration of PET and CT images based on multiresolution gradient of mutual information demons algorithm for positioning esophageal cancer patients.

    PubMed

    Jin, Shuo; Li, Dengwang; Wang, Hongjun; Yin, Yong

    2013-01-07

    Accurate registration of 18F-FDG PET (positron emission tomography) and CT (computed tomography) images has important clinical significance in radiation oncology. PET and CT images are acquired on the same 18F-FDG PET/CT scanner, but the two acquisition processes are separate and take a long time. As a result, there are global position errors and local deformable errors caused by respiratory movement or organ peristalsis. The purpose of this work was to implement and validate a deformable CT-to-PET image registration method in esophageal cancer to facilitate accurate positioning of the tumor target on CT and improve the accuracy of radiation therapy. Global registration was first used to correct position errors between the PET and CT images, aligning the two images as a whole. The demons algorithm, based on the optical flow field, is fast and accurate, and the gradient of mutual information-based demons (GMI demons) algorithm adds an external force based on the gradient of mutual information (GMI) between the two images, which makes it suitable for multimodality image registration. In this paper, the GMI demons algorithm was used for local deformable registration of PET and CT images, which effectively reduces errors between internal organs. In addition, to speed up registration, maintain robustness, and avoid local extrema, a multiresolution image pyramid was used before deformable registration. Quantitative and qualitative analysis of esophageal cancer cases shows that the proposed registration scheme improves registration accuracy and speed, which is helpful for precisely positioning the tumor target and developing radiation treatment plans in clinical radiation therapy.
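
    A sketch of one iteration of the classic (Thirion) demons update for 2-D images, to make the deformable step concrete; the GMI demons variant used in the paper adds a further force derived from the gradient of mutual information, which is omitted here, and the paper wraps the whole loop in a multiresolution pyramid.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, map_coordinates

    def demons_step(fixed, moving, disp, sigma=2.0):
        # disp: displacement field of shape (2, H, W); sigma controls the
        # Gaussian regularization of the update
        yy, xx = np.meshgrid(np.arange(fixed.shape[0]),
                             np.arange(fixed.shape[1]), indexing="ij")
        warped = map_coordinates(moving, [yy + disp[0], xx + disp[1]], order=1)
        diff = warped - fixed
        gy, gx = np.gradient(fixed.astype(float))
        denom = gx ** 2 + gy ** 2 + diff ** 2 + 1e-9
        # optical-flow-like demons force, smoothed to keep the field regular
        disp[0] -= gaussian_filter(diff * gy / denom, sigma)
        disp[1] -= gaussian_filter(diff * gx / denom, sigma)
        return disp
    ```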

  10. A minimum spanning forest based classification method for dedicated breast CT images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pike, Robert; Sechopoulos, Ioannis; Fei, Baowei, E-mail: bfei@emory.edu

    Purpose: To develop and test an automated algorithm to classify different types of tissue in dedicated breast CT images. Methods: Images of a single breast of five different patients were acquired with a dedicated breast CT clinical prototype. The breast CT images were processed by a multiscale bilateral filter to reduce noise while keeping edge information and were corrected to overcome cupping artifacts. As skin and glandular tissue have similar CT values on breast CT images, morphologic processing is used to identify the skin based on its position information. A support vector machine (SVM) is trained and the resulting model is used to create a pixelwise classification map of fat and glandular tissue. By combining the results of the skin mask with the SVM results, the breast tissue is classified as skin, fat, and glandular tissue. This map is then used to identify markers for a minimum spanning forest that is grown to segment the image using spatial and intensity information. To evaluate the authors’ classification method, they use DICE overlap ratios to compare the results of the automated classification to those obtained by manual segmentation on five patient images. Results: Comparison between the automatic and the manual segmentation shows that the minimum spanning forest based classification method was able to successfully classify dedicated breast CT images with average DICE ratios of 96.9%, 89.8%, and 89.5% for fat, glandular, and skin tissue, respectively. Conclusions: A 2D minimum spanning forest based classification method was proposed and evaluated for classifying the fat, skin, and glandular tissue in dedicated breast CT images. The classification method can be used for dense breast tissue quantification, radiation dose assessment, and other applications in breast imaging.

  11. Intensity-based segmentation and visualization of cells in 3D microscopic images using the GPU

    NASA Astrophysics Data System (ADS)

    Kang, Mi-Sun; Lee, Jeong-Eom; Jeon, Woong-ki; Choi, Heung-Kook; Kim, Myoung-Hee

    2013-02-01

    3D microscopy images contain an enormous amount of data, making 3D microscopy image processing time-consuming and laborious on a central processing unit (CPU). To work around this, many users crop a region of interest (ROI) from the input image to a small size. Although this reduces cost and time, there are drawbacks at the image processing level, e.g., the selected ROI strongly depends on the user and there is a loss of original image information. To mitigate these problems, we developed a 3D microscopy image processing tool on a graphics processing unit (GPU). Our tool provides various efficient automatic thresholding methods to achieve intensity-based segmentation of 3D microscopy images, and users can select the algorithm to be applied. Further, the image processing tool provides visualization of the segmented volume data and allows the scale, translation, etc. to be set using a keyboard and mouse. However, the rapidly visualized 3D objects still need to be analyzed to provide information for biologists, which requires quantitative data from the images. Therefore, we label the segmented 3D objects within all 3D microscopy images and obtain quantitative information on each labeled object; this information can be used as features for classification. A user can select the object to be analyzed, and our tool displays the selected object in a new window so that more details of the object can be observed. Finally, we validate the effectiveness of our tool by comparing CPU and GPU processing times under matched specifications and configurations.

  12. Supervised graph hashing for histopathology image retrieval and classification.

    PubMed

    Shi, Xiaoshuang; Xing, Fuyong; Xu, KaiDi; Xie, Yuanpu; Su, Hai; Yang, Lin

    2017-12-01

    In pathology image analysis, morphological characteristics of cells are critical to grading many diseases. With the development of cell detection and segmentation techniques, it is possible to extract cell-level information for further analysis in pathology images. However, it is challenging to conduct efficient analysis of cell-level information on a large-scale image dataset because each image usually contains hundreds or thousands of cells. In this paper, we propose a novel image retrieval based framework for large-scale pathology image analysis. For each image, we encode each cell into binary codes to generate an image representation using a novel graph based hashing model and then conduct image retrieval by applying a group-to-group matching method for similarity measurement. In order to improve computational efficiency and reduce memory requirements, we further introduce matrix factorization into the hashing model for scalable image retrieval. The proposed framework is extensively validated with thousands of lung cancer images, and it achieves 97.98% classification accuracy and 97.50% retrieval precision with all cells of each query image used. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. The Analysis of Image Segmentation Hierarchies with a Graph-based Knowledge Discovery System

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Cooke, Diane J.; Ketkar, Nikhil; Aksoy, Selim

    2008-01-01

    Currently available pixel-based analysis techniques do not effectively extract the information content from the increasingly available high spatial resolution remotely sensed imagery data. A general consensus is that object-based image analysis (OBIA) is required to effectively analyze this type of data. OBIA is usually a two-stage process: image segmentation followed by an analysis of the segmented objects. We are exploring an approach to OBIA in which hierarchical image segmentations provided by the Recursive Hierarchical Segmentation (RHSEG) software developed at NASA GSFC are analyzed by the Subdue graph-based knowledge discovery system developed by a team at Washington State University. In this paper we discuss our initial approach to representing the RHSEG-produced hierarchical image segmentations in a graphical form understandable by Subdue, and provide results on real and simulated data. We also discuss planned improvements designed to more effectively and completely convey the hierarchical segmentation information to Subdue and to improve processing efficiency.

  14. Arithmetic of five-part of leukocytes based on image process

    NASA Astrophysics Data System (ADS)

    Li, Yian; Wang, Guoyou; Liu, Jianguo

    2007-12-01

    This paper applies computer image processing and pattern recognition methods to the problem of automatic classification and counting of leukocytes (white blood cells) in peripheral blood. A new five-part leukocyte differential algorithm based on image processing and pattern recognition is presented, which realizes automatic classification of leukocytes. The first task is to detect the leukocytes; a major requirement of the whole system is then to classify them into five classes. The algorithm is based on the saliency mechanism of human vision: it processes the image sequentially, segments the leukocytes, and extracts features. Using prior knowledge of the cells and image shape information, it first segments the probable shape of each leukocyte with a new Chamfer-based method and then extracts detailed features, which greatly reduces both the misclassification rate and the amount of computation. The algorithm also has a learning function. The paper further presents a new measurement of nucleus shape that provides more accurate information. This algorithm has considerable application value in clinical blood testing.

  15. [Mobile phone-computer wireless interactive graphics transmission technology and its medical application].

    PubMed

    Huang, Shuo; Liu, Jing

    2010-05-01

    The application of clinical digital medical imaging has raised many difficult issues, such as data storage, management, and information sharing. Here we investigated a mobile phone-based medical image management system capable of personal medical imaging information storage, management and comprehensive health information analysis. The technologies related to the management system, spanning wireless transmission, the capabilities of the phone in mobile health care, and the management of a mobile medical database, are discussed. Taking the transmission of medical infrared images between phone and computer as an example, the working principle of the system is demonstrated.

  16. Information theoretic analysis of linear shift-invariant edge-detection operators

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Rahman, Zia-ur

    2012-06-01

    Generally, the designs of digital image processing algorithms and image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the influence of the image gathering process. However, experiments show that the image gathering process has a profound impact on the performance of digital image processing and the quality of the resulting images. Huck et al. proposed a definitive theoretical analysis of visual communication channels, in which the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. We perform an end-to-end, information-theoretic system analysis to assess linear shift-invariant edge-detection algorithms. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and the parameters, such as sampling and additive noise, that define the image gathering system. The edge-detection algorithm is regarded as having high performance only if the information rate from the scene to the edge image approaches the maximum possible. This goal can be achieved only by jointly optimizing all processes. Our information-theoretic assessment provides a new tool that allows us to compare different linear shift-invariant edge detectors in a common environment.

  17. Seeing is believing: on the use of image databases for visually exploring plant organelle dynamics.

    PubMed

    Mano, Shoji; Miwa, Tomoki; Nishikawa, Shuh-ichi; Mimura, Tetsuro; Nishimura, Mikio

    2009-12-01

    Organelle dynamics vary dramatically depending on cell type, developmental stage and environmental stimuli, so that various parameters, such as size, number and behavior, are required for the description of the dynamics of each organelle. Imaging techniques are superior to other techniques for describing organelle dynamics because these parameters are visually exhibited. Therefore, as the results can be seen immediately, investigators can more easily grasp organelle dynamics. At present, imaging techniques are emerging as fundamental tools in plant organelle research, and the development of new methodologies to visualize organelles and the improvement of analytical tools and equipment have allowed the large-scale generation of image and movie data. Accordingly, image databases that accumulate information on organelle dynamics are an increasingly indispensable part of modern plant organelle research. In addition, image databases are potentially rich data sources for computational analyses, as image and movie data reposited in the databases contain valuable and significant information, such as size, number, length and velocity. Computational analytical tools support image-based data mining, such as segmentation, quantification and statistical analyses, to extract biologically meaningful information from each database and combine them to construct models. In this review, we outline the image databases that are dedicated to plant organelle research and present their potential as resources for image-based computational analyses.

  18. Classification of the Gabon SAR Mosaic Using a Wavelet Based Rule Classifier

    NASA Technical Reports Server (NTRS)

    Simard, Marc; Saatchi, Sasan; DeGrandi, Gianfranco

    2000-01-01

    A method is developed for semi-automated classification of SAR images of the tropical forest. Information is extracted using the wavelet transform (WT). The transform allows for extraction of structural information in the image as a function of scale. In order to classify the SAR image, a Decision Tree Classifier is used. Pruning is used to optimize the classification rate versus tree size. The results give explicit insight into the type of information useful for a given class.
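
    A hedged sketch of the overall recipe using PyWavelets and scikit-learn: wavelet sub-band energies of image patches serve as scale-dependent texture features and are fed to a pruned decision tree. The function names and the choice of `db2` and cost-complexity pruning are illustrative assumptions, not the paper's exact classifier.

    ```python
    import numpy as np
    import pywt
    from sklearn.tree import DecisionTreeClassifier

    def wavelet_texture_features(patch, wavelet="db2", levels=3):
        # energy of each wavelet sub-band: structural information per scale
        coeffs = pywt.wavedec2(patch, wavelet, level=levels)
        feats = [np.mean(coeffs[0] ** 2)]
        for detail in coeffs[1:]:                 # (cH, cV, cD) at each level
            feats += [np.mean(d ** 2) for d in detail]
        return np.array(feats)

    def train_classifier(patches, labels):
        # patches: labelled 2-D image chips cut from the SAR mosaic (assumed)
        X = np.stack([wavelet_texture_features(p) for p in patches])
        clf = DecisionTreeClassifier(ccp_alpha=1e-3)  # prune: rate vs. tree size
        return clf.fit(X, labels)
    ```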

  19. MRI intensity nonuniformity correction using simultaneously spatial and gray-level histogram information.

    PubMed

    Milles, Julien; Zhu, Yue Min; Gimenez, Gérard; Guttmann, Charles R G; Magnin, Isabelle E

    2007-03-01

    A novel approach for correcting intensity nonuniformity in magnetic resonance imaging (MRI) is presented. This approach is based on the simultaneous use of spatial and gray-level histogram information. Spatial information about intensity nonuniformity is obtained using cubic B-spline smoothing. Gray-level histogram information of the image corrupted by intensity nonuniformity is exploited from a frequency-domain point of view. The proposed correction method is illustrated using both physical phantom and human brain images. The results are consistent with theoretical prediction and demonstrate a new way of dealing with intensity nonuniformity problems. They are all the more significant as the ground truth on intensity nonuniformity is unknown in clinical images.

  20. Detection of small bowel tumors in capsule endoscopy frames using texture analysis based on the discrete wavelet transform.

    PubMed

    Barbosa, Daniel J C; Ramos, Jaime; Lima, Carlos S

    2008-01-01

    Capsule endoscopy is an important tool to diagnose tumor lesions in the small bowel. Capsule endoscopic images carry vital information expressed by color and texture. This paper presents an approach based on the textural analysis of the different color channels, using the wavelet transform to select the bands with the most significant texture information. A new image is then synthesized from the selected wavelet bands through the inverse wavelet transform. The features of each image are based on second-order textural information, and they are used in a classification scheme using a multilayer perceptron neural network. The proposed methodology has been applied to real data taken from capsule endoscopic exams and reached 98.7% sensitivity and 96.6% specificity. These results support the feasibility of the proposed algorithm.

  1. Intelligent web image retrieval system

    NASA Astrophysics Data System (ADS)

    Hong, Sungyong; Lee, Chungwoo; Nah, Yunmook

    2001-07-01

    Recently, web sites such as e-business and shopping mall sites have come to handle large amounts of image information. To find a specific image from these sources, we usually use web search engines or image database engines that rely on keyword-only retrieval or color-based retrieval with limited search capabilities. This paper presents an intelligent web image retrieval system. We propose the system architecture, texture and color based image classification and indexing techniques, and representation schemes for user usage patterns. A query can be given by providing keywords, by selecting one or more sample texture patterns, by assigning color values within positional color blocks, or by combining some or all of these factors. The system keeps track of users' preferences by generating query logs and automatically adds more search information to subsequent queries. To show the usefulness of the proposed system, experimental results on recall and precision are also presented.

  2. Image standards in tissue-based diagnosis (diagnostic surgical pathology).

    PubMed

    Kayser, Klaus; Görtler, Jürgen; Goldmann, Torsten; Vollmer, Ekkehard; Hufnagl, Peter; Kayser, Gian

    2008-04-18

    Progress in automated image analysis, virtual microscopy, hospital information systems, and interdisciplinary data exchange require image standards to be applied in tissue-based diagnosis. To describe the theoretical background, practical experiences and comparable solutions in other medical fields to promote image standards applicable for diagnostic pathology. THEORY AND EXPERIENCES: Images used in tissue-based diagnosis present with pathology-specific characteristics. It seems appropriate to discuss their characteristics and potential standardization in relation to the levels of hierarchy in which they appear. All levels can be divided into legal, medical, and technological properties. Standards applied to the first level include regulations or aims to be fulfilled. In legal properties, they have to regulate features of privacy, image documentation, transmission, and presentation; in medical properties, features of disease-image combination, human-diagnostics, automated information extraction, archive retrieval and access; and in technological properties features of image acquisition, display, formats, transfer speed, safety, and system dynamics. The next lower second level has to implement the prescriptions of the upper one, i.e. describe how they are implemented. Legal aspects should demand secure encryption for privacy of all patient related data, image archives that include all images used for diagnostics for a period of 10 years at minimum, accurate annotations of dates and viewing, and precise hardware and software information. Medical aspects should demand standardized patients' files such as DICOM 3 or HL 7 including history and previous examinations, information of image display hardware and software, of image resolution and fields of view, of relation between sizes of biological objects and image sizes, and of access to archives and retrieval. Technological aspects should deal with image acquisition systems (resolution, colour temperature, focus, brightness, and quality evaluation procedures), display resolution data, implemented image formats, storage, cycle frequency, backup procedures, operation system, and external system accessibility. The lowest third level describes the permitted limits and threshold in detail. At present, an applicable standard including all mentioned features does not exist to our knowledge; some aspects can be taken from radiological standards (PACS, DICOM 3); others require specific solutions or are not covered yet. The progress in virtual microscopy and application of artificial intelligence (AI) in tissue-based diagnosis demands fast preparation and implementation of an internationally acceptable standard. The described hierarchic order as well as analytic investigation in all potentially necessary aspects and details offers an appropriate tool to specifically determine standardized requirements.

  3. Kingfisher: a system for remote sensing image database management

    NASA Astrophysics Data System (ADS)

    Bruzzo, Michele; Giordano, Ferdinando; Dellepiane, Silvana G.

    2003-04-01

    At present, retrieval methods in remote sensing image databases are mainly based on spatial-temporal information. The increasing amount of images collected by the ground stations of earth observing systems emphasizes the need for database management with intelligent data retrieval capabilities. The purpose of the proposed method is to realize a new content-based retrieval system for remote sensing image databases with an innovative search tool based on image similarity. This methodology is quite innovative for this application; at present many systems exist for photographic images, for example QBIC and IKONA, but they cannot properly extract and describe remote sensing image content. The target database is an archive of images originating from an X-SAR sensor (spaceborne mission, 1994). The best content descriptors, mainly texture parameters, guarantee high retrieval performance and can be extracted without losses independently of image resolution. The latter property allows the DBMS (Database Management System) to process a small amount of information, as in the case of quick-look images, improving time performance and memory access without reducing retrieval accuracy. The matching technique has been designed to enable image management (database population and retrieval) independently of dimensions (width and height). Local and global content descriptors are compared, during the retrieval phase, with those of the query image, and the results seem very encouraging.

  4. Fluid Registration of Diffusion Tensor Images Using Information Theory

    PubMed Central

    Chiang, Ming-Chang; Leow, Alex D.; Klunder, Andrea D.; Dutton, Rebecca A.; Barysheva, Marina; Rose, Stephen E.; McMahon, Katie L.; de Zubicaray, Greig I.; Toga, Arthur W.; Thompson, Paul M.

    2008-01-01

    We apply an information-theoretic cost metric, the symmetrized Kullback-Leibler (sKL) divergence, or J-divergence, to fluid registration of diffusion tensor images. The difference between diffusion tensors is quantified based on the sKL-divergence of their associated probability density functions (PDFs). Three-dimensional DTI data from 34 subjects were fluidly registered to an optimized target image. To allow large image deformations but preserve image topology, we regularized the flow with a large-deformation diffeomorphic mapping based on the kinematics of a Navier-Stokes fluid. A driving force was developed to minimize the J-divergence between the deforming source and target diffusion functions, while reorienting the flowing tensors to preserve fiber topography. In initial experiments, we showed that the sKL-divergence based on full diffusion PDFs is adaptable to higher-order diffusion models, such as high angular resolution diffusion imaging (HARDI). The sKL-divergence was sensitive to subtle differences between two diffusivity profiles, showing promise for nonlinear registration applications and multisubject statistical analysis of HARDI data. PMID:18390342
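
    For two diffusion tensors treated as covariances of zero-mean Gaussian displacement PDFs, the sKL (J-) divergence has a simple closed form; the sketch below uses one common convention (the sum of the two KL terms), which may differ from the paper's normalisation by a constant factor.

    ```python
    import numpy as np

    def skl_divergence(d1, d2):
        # d1, d2: 3x3 symmetric positive-definite diffusion tensors
        # J(d1, d2) = 0.5 * (tr(d1^-1 d2) + tr(d2^-1 d1)) - 3
        inv1 = np.linalg.inv(d1)
        inv2 = np.linalg.inv(d2)
        return 0.5 * (np.trace(inv1 @ d2) + np.trace(inv2 @ d1)) - d1.shape[0]
    ```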

  5. Method of passive ranging from infrared image sequence based on equivalent area

    NASA Astrophysics Data System (ADS)

    Yang, Weiping; Shen, Zhenkang

    2007-11-01

    Range information between a missile and its target is important not only to the missile control component but also to automatic target recognition, so the technique of passive ranging from infrared images has both theoretical and practical significance. Here we estimate the range between a guided missile and its target to support target identification or evasion. The distance between missile and target is currently a difficult and actively studied research topic. As is well known, infrared imaging detectors cannot measure range directly, which restricts the capabilities of guidance information processing systems based on infrared images. To overcome this limitation, we investigated the principles of infrared imaging and, after analyzing the imaging geometry between the guided missile and the target, proposed a passive ranging method based on equivalent area and provided analytical formulas. Validation experiments demonstrate that the method is effective, with a relative error as low as 10% in some circumstances.
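
    The geometric intuition can be written down with the generic pinhole relation (not necessarily the paper's exact "equivalent area" formulation): a planar target of known area A at range R projects to roughly A·f²/R² pixels, so the range follows from the measured image area.

    ```python
    import numpy as np

    def range_from_area(focal_px, target_area_m2, image_area_px):
        # pinhole model: image_area_px ~ target_area_m2 * focal_px**2 / R**2
        # => R ~ focal_px * sqrt(target_area_m2 / image_area_px)
        return focal_px * np.sqrt(target_area_m2 / image_area_px)
    ```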

  6. Can state-of-the-art HVS-based objective image quality criteria be used for image reconstruction techniques based on ROI analysis?

    NASA Astrophysics Data System (ADS)

    Dostal, P.; Krasula, L.; Klima, M.

    2012-06-01

    Various image processing techniques in multimedia technology are optimized using the visual attention feature of the human visual system. Because of spatial non-uniformity, different locations in an image are of different importance for the perception of the image. In other words, the perceived image quality depends mainly on the quality of important locations known as regions of interest. The performance of such techniques is measured by subjective evaluation or objective image quality criteria. Many state-of-the-art objective metrics are based on HVS properties: SSIM and MS-SSIM are based on image structural information, VIF on the information that the human brain can ideally gain from the reference image, and FSIM on low-level features that assign a different importance to each location in the image. Still, none of these objective metrics makes use of region-of-interest analysis. We address the question of whether these objective metrics can effectively evaluate images reconstructed by processing techniques based on ROI analysis using high-level features. In this paper the authors show that the state-of-the-art objective metrics do not correlate well with subjective evaluation when ROI-based demosaicing is used for reconstruction. The ROIs were computed from "ground truth" visual attention data. An algorithm combining two known demosaicing techniques on the basis of ROI location is proposed to reconstruct the ROI in fine quality while the rest of the image is reconstructed in low quality. The color image reconstructed by this ROI approach was compared with selected demosaicing techniques by objective criteria and subjective testing. The qualitative comparison of the objective and subjective results indicates that the state-of-the-art objective metrics are still not suitable for evaluating image processing techniques based on ROI analysis, and new criteria are needed.

  7. Colour flow and motion imaging.

    PubMed

    Evans, D H

    2010-01-01

    Colour flow imaging (CFI) is an ultrasound imaging technique whereby colour-coded maps of tissue velocity are superimposed on grey-scale pulse-echo images of tissue anatomy. The most widespread use of the method is to image the movement of blood through arteries and veins, but it may also be used to image the motion of solid tissue. The production of velocity information is technically more demanding than the production of the anatomical information, partly because the target of interest is often blood, which backscatters significantly less power than solid tissues, and partly because several transmit-receive cycles are necessary for each velocity estimate. This review first describes the various components of basic CFI systems necessary to generate the velocity information and to combine it with anatomical information. It then describes a number of variations on the basic autocorrelation technique, including cross-correlation-based techniques, power Doppler, Doppler tissue imaging, and three-dimensional (3D) Doppler imaging. Finally, a number of limitations of current techniques and some potential solutions are reviewed.

  8. Spot counting on fluorescence in situ hybridization in suspension images using Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Liu, Sijia; Sa, Ruhan; Maguire, Orla; Minderman, Hans; Chaudhary, Vipin

    2015-03-01

    Cytogenetic abnormalities are important diagnostic and prognostic criteria for acute myeloid leukemia (AML). A flow cytometry-based imaging approach for FISH in suspension (FISH-IS) was established that enables automated analysis of a several-orders-of-magnitude larger number of cells than microscopy-based approaches. Rotational positioning of cells can occur, leading to discordance in spot counts. To address counting errors caused by overlapping spots, a Gaussian Mixture Model (GMM) based classification method is proposed in this study. The Akaike information criterion (AIC) and Bayesian information criterion (BIC) of the GMM are used as global image features for this classification method. Using a Random Forest classifier, the results show that the proposed method can detect closely overlapping spots that cannot be separated by existing image segmentation-based spot detection methods. The experimental results show that the proposed method yields a significant improvement in spot counting accuracy.
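
    A small sketch of the model-selection features described above, using scikit-learn: Gaussian mixtures with 1..k components are fitted to the coordinates of bright pixels in a cell image, and the resulting AIC/BIC values are collected as global features for a downstream Random Forest (the preprocessing that produces the bright-pixel coordinates is assumed).

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def gmm_spot_features(points, max_spots=4):
        # points: (n, 2) array of bright-pixel coordinates from one cell image
        feats = []
        for k in range(1, max_spots + 1):
            gmm = GaussianMixture(n_components=k, covariance_type="full",
                                  random_state=0).fit(points)
            feats += [gmm.aic(points), gmm.bic(points)]
        return np.array(feats)   # fed to a Random Forest classifier downstream
    ```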

  9. A Remote Sensing Image Fusion Method based on adaptive dictionary learning

    NASA Astrophysics Data System (ADS)

    He, Tongdi; Che, Zongxi

    2018-01-01

    This paper discusses a remote sensing fusion method based on 'adaptive sparse representation (ASP)' to provide improved spectral information, reduce data redundancy and decrease system complexity. First, the training sample set is formed by taking random blocks from the images to be fused; the dictionary is then constructed using the training samples, and the remaining terms are clustered to obtain the complete dictionary by iterated processing at each step. Second, the self-adaptive weighted coefficient rule of regional energy is used to select the feature fusion coefficients and complete the reconstruction of the image blocks. Finally, the reconstructed image blocks are rearranged and averaged to obtain the final fused images. Experimental results show that the proposed method is superior to other traditional remote sensing image fusion methods in both spectral information preservation and spatial resolution.

  10. Enhanced facial recognition for thermal imagery using polarimetric imaging.

    PubMed

    Gurton, Kristan P; Yuffa, Alex J; Videen, Gorden W

    2014-07-01

    We present a series of long-wave-infrared (LWIR) polarimetric-based thermal images of facial profiles in which polarization-state information of the image-forming radiance is retained and displayed. The resultant polarimetric images show enhanced facial features, additional texture, and details that are not present in corresponding conventional thermal imagery. It has been generally thought that conventional thermal imagery (MidIR or LWIR) could not produce the detailed spatial information required for reliable human identification due to the so-called "ghosting" effect often seen in thermal imagery of human subjects. By using polarimetric information, we are able to extract subtle surface features of the human face, thus improving subject identification. Polarimetric image sets considered include the conventional thermal intensity image, S0, the two Stokes images, S1 and S2, and a Stokes image product called the degree-of-linear-polarization image.
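
    The polarimetric quantity usually displayed alongside the Stokes images is the degree of linear polarization; a short sketch of how such a DoLP image is formed from S0, S1 and S2:

    ```python
    import numpy as np

    def dolp_image(s0, s1, s2, eps=1e-6):
        # degree of linear polarization: sqrt(S1^2 + S2^2) / S0
        return np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, eps)
    ```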

  11. Enhanced EDX images by fusion of multimodal SEM images using pansharpening techniques.

    PubMed

    Franchi, G; Angulo, J; Moreaud, M; Sorbier, L

    2018-01-01

    The goal of this paper is to explore the potential interest of image fusion in the context of multimodal scanning electron microscope (SEM) imaging. In particular, we aim at merging the backscattered electron images that usually have a high spatial resolution but do not provide enough discriminative information to physically classify the nature of the sample, with energy-dispersive X-ray spectroscopy (EDX) images that have discriminative information but a lower spatial resolution. The produced images are named enhanced EDX. To achieve this goal, we have compared the results obtained with classical pansharpening techniques for image fusion with an original approach tailored for multimodal SEM fusion of information. Quantitative assessment is obtained by means of two SEM images and a simulated dataset produced by a software based on PENELOPE. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.

  12. Infrared moving small target detection based on saliency extraction and image sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaomin; Ren, Kan; Gao, Jin; Li, Chaowei; Gu, Guohua; Wan, Minjie

    2016-10-01

    Moving small target detection in infrared images is a crucial technique of infrared search and track systems. This paper presents a novel small target detection technique based on frequency-domain saliency extraction and image sparse representation. First, we exploit the Fourier spectrum image and the magnitude spectrum of the Fourier transform to roughly extract saliency regions, and use threshold segmentation to separate the salient regions from the background, which yields a binary image. Second, a new patch-image model and an over-complete dictionary are introduced to the detection system; infrared small target detection is then converted into an optimization problem of patch-image information reconstruction based on sparse representation. More specifically, the test image and the binary image are decomposed into image patches following certain rules. We select the potential target areas according to the binary patch-image, which contains salient region information, and then exploit the over-complete infrared small target dictionary to reconstruct the test image blocks that may contain targets. The coefficients of target image patches are sparse. Finally, for image sequences, the Euclidean distance is used to reduce the false alarm rate and increase the detection accuracy of moving small targets in infrared images, exploiting the correlation of target positions between frames.

  13. A method of non-contact reading code based on computer vision

    NASA Astrophysics Data System (ADS)

    Zhang, Chunsen; Zong, Xiaoyu; Guo, Bingxuan

    2018-03-01

    To guarantee secure computer information exchange between internal and external networks (trusted and untrusted networks), a non-contact code reading method based on machine vision is proposed, which differs from existing physical network isolation methods. Using computer monitors, a camera and other equipment, the information to be exchanged is processed through image coding, generation of a standard image, display and capture of the actual image, computation of the homography matrix, and image distortion correction and decoding after calibration. This achieves secure, non-contact, one-way transmission between the internal and external networks. The effectiveness of the proposed method is verified by experiments on real computer text data, achieving a data transfer rate of 24 kb/s. The experiments show that this algorithm offers high security, high speed and low information loss, can meet the daily needs of confidentiality departments to update data effectively and reliably, and solves the difficulty of exchanging computer information between secret and non-secret networks, with practical research value.
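
    A hedged OpenCV sketch of the homography and distortion-correction step in the pipeline above; corner detection, image coding and decoding are assumed to happen elsewhere, and the function and parameter names are illustrative.

    ```python
    import cv2
    import numpy as np

    def rectify_screen(photo, corners_src, out_size=(1024, 768)):
        # corners_src: the four detected corners of the displayed code image,
        # ordered top-left, top-right, bottom-right, bottom-left
        w, h = out_size
        corners_dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
        H, _ = cv2.findHomography(np.float32(corners_src), corners_dst)
        # undo the perspective distortion before decoding the code image
        return cv2.warpPerspective(photo, H, (w, h))
    ```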

  14. Knowledge guided information fusion for segmentation of multiple sclerosis lesions in MRI images

    NASA Astrophysics Data System (ADS)

    Zhu, Chaozhe; Jiang, Tianzi

    2003-05-01

    In this work, T1-, T2- and PD-weighted MR images of multiple sclerosis (MS) patients, which provide information on tissue properties from different aspects, are treated as three independent information sources for the detection and segmentation of MS lesions. Based on information fusion theory, a knowledge guided information fusion framework is proposed to accomplish 3-D segmentation of MS lesions. This framework consists of three parts: (1) information extraction, (2) information fusion, and (3) decision. Information provided by the different spectral images is extracted and modeled separately in each spectrum using fuzzy sets, aiming at managing the uncertainty and ambiguity in the images due to noise and the partial volume effect. In the second part, a possible fuzzy map of MS lesions in each spectral image is constructed from the extracted information under the guidance of experts' knowledge, and the final fuzzy map of MS lesions is then constructed through the fusion of the fuzzy maps obtained from the different spectra. Finally, 3-D segmentation of MS lesions is derived from the final fuzzy map. Experimental results show that this method is fast and accurate.

  15. Retrieval of land cover information under thin fog in Landsat TM image

    NASA Astrophysics Data System (ADS)

    Wei, Yuchun

    2008-04-01

    Thin fog, which often appears in remote sensing images of subtropical climate regions, lowers image quality and degrades image mapping. It is therefore necessary to develop an image processing method to retrieve land cover information under thin fog. In this paper, a Landsat TM image near Taihu Lake, in the subtropical climate zone of China, is used as an example, and a workflow and method for retrieving land cover information under thin fog are built based on ENVI software and a single TM image. The basic procedure has three parts: 1) isolating the thin fog area in the image according to spectral differences between bands; 2) retrieving the visible-band information of different land cover types under thin fog from the near-infrared bands, according to the relationships between near-infrared and visible bands of different land cover types in the fog-free area; 3) image post-processing. The results show that the method is simple and suitable, and can effectively improve the quality of TM image mapping.

  16. Development and Evaluation of Reference Standards for Image-based Telemedicine Diagnosis and Clinical Research Studies in Ophthalmology

    PubMed Central

    Ryan, Michael C.; Ostmo, Susan; Jonas, Karyn; Berrocal, Audina; Drenser, Kimberly; Horowitz, Jason; Lee, Thomas C.; Simmons, Charles; Martinez-Castellanos, Maria-Ana; Chan, R.V. Paul; Chiang, Michael F.

    2014-01-01

    Information systems managing image-based data for telemedicine or clinical research applications require a reference standard representing the correct diagnosis. Accurate reference standards are difficult to establish because of imperfect agreement among physicians, and discrepancies between clinical vs. image-based diagnosis. This study is designed to describe the development and evaluation of reference standards for image-based diagnosis, which combine diagnostic impressions of multiple image readers with the actual clinical diagnoses. We show that agreement between image reading and clinical examinations was imperfect (689 [32%] discrepancies in 2148 image readings), as was inter-reader agreement (kappa 0.490-0.652). This was improved by establishing an image-based reference standard defined as the majority diagnosis given by three readers (13% discrepancies with image readers). It was further improved by establishing an overall reference standard that incorporated the clinical diagnosis (10% discrepancies with image readers). These principles of establishing reference standards may be applied to improve robustness of real-world systems supporting image-based diagnosis. PMID:25954463

  17. Image Encryption Algorithm Based on Hyperchaotic Maps and Nucleotide Sequences Database

    PubMed Central

    2017-01-01

    Image encryption technology is one of the main means to ensure the safety of image information. Using the characteristics of chaos, such as randomness, regularity, ergodicity, and sensitivity to initial values, combined with the unique spatial conformation of DNA molecules and their unique information storage and processing ability, an efficient method for image encryption based on chaos theory and a DNA sequence database is proposed. In this paper, digital image encryption transforms the gray values of the image pixels by scrambling pixel locations with a chaotic sequence and by establishing a hyperchaotic mapping between quaternary sequences and DNA sequences, combined with the logic of transformations between DNA sequences. The bases are replaced according to displacement rules using DNA coding over a number of iterations driven by the enhanced quaternary hyperchaotic sequence, which is generated by the Chen chaotic system. The cipher feedback mode and chaos iteration are employed in the encryption process to enhance the confusion and diffusion properties of the algorithm. Theoretical analysis and experimental results show that the proposed scheme not only demonstrates excellent encryption but also effectively resists chosen-plaintext attack, statistical attack, and differential attack. PMID:28392799
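
    To make the scrambling idea concrete, here is a minimal sketch that permutes pixel positions with a key-dependent chaotic sequence; a plain logistic map is used purely for illustration, whereas the paper relies on a Chen hyperchaotic system together with DNA coding and cipher feedback.

    ```python
    import numpy as np

    def logistic_permutation(n, x0=0.3456, r=3.99):
        # iterate the logistic map and sort the trajectory to obtain a
        # key-dependent permutation of 0..n-1 (x0 acts as the key)
        x = np.empty(n)
        xi = x0
        for i in range(n):
            xi = r * xi * (1.0 - xi)
            x[i] = xi
        return np.argsort(x)

    def scramble(img, x0=0.3456):
        flat = img.ravel()
        perm = logistic_permutation(flat.size, x0)
        return flat[perm].reshape(img.shape), perm

    def unscramble(scrambled, perm):
        flat = np.empty_like(scrambled.ravel())
        flat[perm] = scrambled.ravel()
        return flat.reshape(scrambled.shape)
    ```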

  18. Computer vision-based analysis of foods: a non-destructive colour measurement tool to monitor quality and safety.

    PubMed

    Mogol, Burçe Ataç; Gökmen, Vural

    2014-05-01

    Computer vision-based image analysis has been widely used in food industry to monitor food quality. It allows low-cost and non-contact measurements of colour to be performed. In this paper, two computer vision-based image analysis approaches are discussed to extract mean colour or featured colour information from the digital images of foods. These types of information may be of particular importance as colour indicates certain chemical changes or physical properties in foods. As exemplified here, the mean CIE a* value or browning ratio determined by means of computer vision-based image analysis algorithms can be correlated with acrylamide content of potato chips or cookies. Or, porosity index as an important physical property of breadcrumb can be calculated easily. In this respect, computer vision-based image analysis provides a useful tool for automatic inspection of food products in a manufacturing line, and it can be actively involved in the decision-making process where rapid quality/safety evaluation is needed. © 2013 Society of Chemical Industry.

  19. A Novel Segmentation Approach Combining Region- and Edge-Based Information for Ultrasound Images

    PubMed Central

    Luo, Yaozhong; Liu, Longzhong; Li, Xuelong

    2017-01-01

    Ultrasound imaging has become one of the most popular medical imaging modalities with numerous diagnostic applications. However, ultrasound (US) image segmentation, which is the essential process for further analysis, is a challenging task due to the poor image quality. In this paper, we propose a new segmentation scheme to combine both region- and edge-based information into the robust graph-based (RGB) segmentation method. The only interaction required is to select two diagonal points to determine a region of interest (ROI) on the original image. The ROI image is smoothed by a bilateral filter and then contrast-enhanced by histogram equalization. Then, the enhanced image is filtered by pyramid mean shift to improve homogeneity. With the optimization of particle swarm optimization (PSO) algorithm, the RGB segmentation method is performed to segment the filtered image. The segmentation results of our method have been compared with the corresponding results obtained by three existing approaches, and four metrics have been used to measure the segmentation performance. The experimental results show that the method achieves the best overall performance and gets the lowest ARE (10.77%), the second highest TPVF (85.34%), and the second lowest FPVF (4.48%). PMID:28536703

  20. Quality optimized medical image information hiding algorithm that employs edge detection and data coding.

    PubMed

    Al-Dmour, Hayat; Al-Ani, Ahmed

    2016-04-01

    The present work has the goal of developing a secure medical imaging information system based on a combined steganography and cryptography technique. It attempts to securely embed a patient's confidential information into his/her medical images. The proposed information security scheme conceals coded Electronic Patient Records (EPRs) in medical images in order to protect the EPRs' confidentiality without affecting the image quality and particularly the Region of Interest (ROI), which is essential for diagnosis. The secret EPR data is converted into ciphertext using a private symmetric encryption method. Since the Human Visual System (HVS) is less sensitive to alterations in sharp regions compared to uniform regions, a simple edge detection method has been introduced to identify and embed in edge pixels, which leads to an improved stego image quality. In order to increase the embedding capacity, the algorithm embeds a variable number of bits (up to 3) in edge pixels based on the strength of the edges. Moreover, to increase the efficiency, two message coding mechanisms have been utilized to enhance the ±1 steganography. The first one, based on Hamming code, is simple and fast, while the other, known as the Syndrome Trellis Code (STC), is more sophisticated as it attempts to find a stego image that is close to the cover image by minimizing the embedding impact. The proposed steganography algorithm embeds the secret data bits into the Region of Non Interest (RONI); due to its importance, the ROI is preserved from modification. The experimental results demonstrate that the proposed method can embed a large amount of secret data without leaving noticeable distortion in the output image. The effectiveness of the proposed algorithm is also proven using one of the efficient steganalysis techniques. The proposed medical imaging information system proved to be capable of concealing EPR data and producing imperceptible stego images with minimal embedding distortions compared to other existing methods. To refrain from introducing any modifications to the ROI, the proposed system only utilizes the Region of Non Interest (RONI) in embedding the EPR data. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
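
    A hedged, simplified sketch of edge-guided embedding (one LSB per edge pixel rather than the adaptive 1-3 bits and Hamming/STC coding used in the paper); the edge thresholds and file name are assumptions.

        import cv2
        import numpy as np

        def embed_in_edges(cover, bits, low=100, high=200):
            # Replace the LSB of strong-edge pixels with message bits (illustrative scheme).
            edges = cv2.Canny(cover, low, high)
            stego = cover.copy()
            ys, xs = np.nonzero(edges)
            for (y, x), bit in zip(zip(ys, xs), bits):
                stego[y, x] = (stego[y, x] & 0xFE) | bit
            return stego

        cover = cv2.imread("mri_slice.png", cv2.IMREAD_GRAYSCALE)
        message_bits = [1, 0, 1, 1, 0, 0, 1, 0]   # encrypted EPR bits (illustrative)
        stego = embed_in_edges(cover, message_bits)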

  1. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters

    PubMed Central

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing thematic information depends on this extraction. On the basis of WorldView-2 high-resolution data and an optimal-segmentation-parameter method for object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed with the use of control variables and the combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme. PMID:27362762

  3. Gradient-based multiresolution image fusion.

    PubMed

    Petrović, Valdimir S; Xydeas, Costas S

    2004-02-01

    A novel approach to multiresolution signal-level image fusion is presented for accurately transferring visual information from any number of input image signals into a single fused image without loss of information or the introduction of distortion. The proposed system uses a "fuse-then-decompose" technique realized through a novel fusion/decomposition system architecture. In particular, information fusion is performed on a multiresolution gradient map representation domain of image signal information. At each resolution, input images are represented as gradient maps and combined to produce new, fused gradient maps. The fused gradient map signals are processed, using gradient filters derived from high-pass quadrature mirror filters, to yield a fused multiresolution pyramid representation. The fused output image is obtained by applying, on the fused pyramid, a reconstruction process that is analogous to that of the conventional discrete wavelet transform. This new gradient fusion significantly reduces the amount of distortion artefacts and the loss of contrast information usually observed in fused images obtained from conventional multiresolution fusion schemes. This is because fusion in the gradient map domain significantly improves the reliability of the feature selection and information fusion processes. Fusion performance is evaluated through informal visual inspection and subjective psychometric preference tests, as well as objective fusion performance measurements. Results clearly demonstrate the superiority of this new approach when compared to conventional fusion systems.
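
    A hedged, single-scale sketch of gradient-domain fusion in the spirit of the approach above (the paper operates on a full multiresolution pyramid with QMF-derived filters, which is omitted here); the maximum-magnitude selection rule is an assumption for illustration.

        import numpy as np

        def fuse_gradients(img_a, img_b):
            # Per pixel, keep the gradient with the larger magnitude from two registered images.
            gy_a, gx_a = np.gradient(img_a.astype(float))
            gy_b, gx_b = np.gradient(img_b.astype(float))
            take_a = np.hypot(gx_a, gy_a) >= np.hypot(gx_b, gy_b)
            gx = np.where(take_a, gx_a, gx_b)
            gy = np.where(take_a, gy_a, gy_b)
            # A reconstruction step (e.g., a Poisson solver) would turn (gx, gy) into the fused image.
            return gx, gy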

  4. Document image database indexing with pictorial dictionary

    NASA Astrophysics Data System (ADS)

    Akbari, Mohammad; Azimi, Reza

    2010-02-01

    In this paper we introduce a new approach for information retrieval from a Persian document image database without using Optical Character Recognition (OCR). First, an attribute called the subword upper contour label is defined; then, a pictorial dictionary of the subwords is constructed based on this attribute. With this approach we address two issues in document image retrieval: keyword spotting and retrieval according to document similarity. The proposed methods have been evaluated on a Persian document image database. The results have proved the ability of this approach in document image information retrieval.

  5. Vision communications based on LED array and imaging sensor

    NASA Astrophysics Data System (ADS)

    Yoo, Jong-Ho; Jung, Sung-Yoon

    2012-11-01

    In this paper, we propose a brand new communication concept, called "vision communication", based on an LED array and an image sensor. This system consists of an LED array as the transmitter and a digital device that includes an image sensor, such as a CCD or CMOS sensor, as the receiver. In order to transmit data, the proposed communication scheme simultaneously uses digital image processing and optical wireless communication techniques. Therefore, a cognitive communication scheme is possible with the help of recognition techniques used in vision systems. To increase the data rate, our scheme can use an LED array consisting of several multi-spectral LEDs. Because each LED in the array can emit a multi-spectral optical signal such as visible, infrared, and ultraviolet light, an increase in data rate is possible, similar to the WDM and MIMO techniques used in traditional optical and wireless communications. In addition, this multi-spectral capability also makes it possible to avoid optical noise in the communication environment. In our vision communication scheme, the data packet is composed of sync data and information data. The sync data is used to detect the transmitter area and calibrate the distorted image snapshots obtained by the image sensor. By matching the optical transmission rate of the LED array to the frame rate (frames per second) of the image sensor, we can decode the information data included in each image snapshot based on image processing and optical wireless communication techniques. Through experiments on a practical test bed system, we confirm the feasibility of the proposed vision communication based on an LED array and an image sensor.
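
    A hedged sketch of the receiver side: decoding one on/off bit per LED from a rectified snapshot by averaging each grid cell. The grid size and threshold are assumptions, and synchronization and image rectification are omitted.

        import numpy as np

        def decode_led_grid(snapshot, rows=4, cols=4, threshold=128):
            # Split a rectified grayscale snapshot into rows x cols cells and threshold each cell mean.
            h, w = snapshot.shape
            bits = np.zeros((rows, cols), dtype=np.uint8)
            for r in range(rows):
                for c in range(cols):
                    cell = snapshot[r * h // rows:(r + 1) * h // rows,
                                    c * w // cols:(c + 1) * w // cols]
                    bits[r, c] = 1 if cell.mean() > threshold else 0
            return bits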

  6. MetaSEEk: a content-based metasearch engine for images

    NASA Astrophysics Data System (ADS)

    Beigi, Mandis; Benitez, Ana B.; Chang, Shih-Fu

    1997-12-01

    Search engines are the most powerful resources for finding information on the rapidly expanding World Wide Web (WWW). Finding the desired search engines and learning how to use them, however, can be very time consuming. The integration of such search tools enables users to access information across the world in a transparent and efficient manner. These systems are called meta-search engines. The recent emergence of visual information retrieval (VIR) search engines on the web is leading to the same efficiency problem. This paper describes and evaluates MetaSEEk, a content-based meta-search engine used for finding images on the Web based on their visual information. MetaSEEk is designed to intelligently select and interface with multiple on-line image search engines by ranking their performance for different classes of user queries. User feedback is also integrated in the ranking refinement. We compare MetaSEEk with a baseline version of the meta-search engine, which does not use the past performance of the different search engines in recommending target search engines for future queries.

  7. Defect detection in slab surface: a novel dual Charge-coupled Device imaging-based fuzzy connectedness strategy.

    PubMed

    Zhao, Liming; Ouyang, Qi; Chen, Dengfu; Udupa, Jayaram K; Wang, Huiqian; Zeng, Yuebin

    2014-11-01

    To provide an accurate surface defect inspection system and bring robust, automated image segmentation to the routine production line, a general approach is presented for extracting and delineating surface defects on continuous casting slabs (CC-slabs). The applicability of the system is not tied to CC-slabs exclusively. We combined line-array CCD (Charge-coupled Device) traditional scanning imaging (LS-imaging) and area-array CCD laser three-dimensional (3D) scanning imaging (AL-imaging) strategies in designing the system, with the aim of compensating for the limitations of each imaging subsystem. In the system, the images acquired from the two CCD sensors are carefully aligned in space and in time by a maximum mutual information-based registration scheme. Subsequently, the image information from the two subsystems is fused, combining the complete 2D information from LS-imaging with the 3D depth information from AL-imaging. Finally, on the basis of the established dual scanning imaging system, region of interest (ROI) localization by seed specification was designed, and delineation of the ROI by the iterative relative fuzzy connectedness (IRFC) algorithm was utilized to obtain a precise inspection result. Our method takes into account the complementary advantages of the two common machine vision (MV) systems, and it performs competitively with the state of the art, as seen from the comparison of experimental results. For the first time, a joint imaging scanning strategy is proposed for CC-slab surface defect inspection that allows powerful ROI delineation strategies to be applied to the MV inspection field. Multi-ROI delineation using IRFC in this research field may further improve the results.

  8. Prediction of compression-induced image interpretability degradation

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Chen, Hua-Mei; Irvine, John M.; Wang, Zhonghai; Chen, Genshe; Nagy, James; Scott, Stephen

    2018-04-01

    Image compression is an important component of modern imaging systems as the volume of raw data collected is increasing. To reduce the data volume while retaining imagery useful for analysis, the appropriate image compression method must be chosen. Lossless compression preserves all the information, but it has limited reduction power. On the other hand, lossy compression, which may achieve very high compression ratios, suffers from information loss. We model the compression-induced information loss in terms of the National Imagery Interpretability Rating Scale, or NIIRS. NIIRS is a user-based quantification of image interpretability widely adopted by the Geographic Information System community. Specifically, we present the Compression Degradation Image Function Index (CoDIFI) framework that predicts the NIIRS degradation (i.e., a decrease in NIIRS level) for a given compression setting. The CoDIFI-NIIRS framework enables a user to broker the maximum compression setting while maintaining a specified NIIRS rating.

  9. Mobile cosmetics advisor: an imaging based mobile service

    NASA Astrophysics Data System (ADS)

    Bhatti, Nina; Baker, Harlyn; Chao, Hui; Clearwater, Scott; Harville, Mike; Jain, Jhilmil; Lyons, Nic; Marguier, Joanna; Schettino, John; Süsstrunk, Sabine

    2010-01-01

    Selecting cosmetics requires visual information and often benefits from the assessments of a cosmetics expert. In this paper we present a unique mobile imaging application that enables women to use their cell phones to get immediate expert advice when selecting personal cosmetic products. We derive the visual information from analysis of camera phone images, and provide the judgment of the cosmetics specialist through use of an expert system. The result is a new paradigm for mobile interactions: image-based information services exploiting the ubiquity of camera phones. The application is designed to work with any handset over any cellular carrier using commonly available MMS and SMS features. Targeted at the unsophisticated consumer, it must be quick and easy to use, not requiring download capabilities or preplanning. Thus, all application processing occurs in the back-end system and not on the handset itself. We present the imaging pipeline technology and a comparison of the service's accuracy with respect to human experts.

  10. MR Guided PET Image Reconstruction

    PubMed Central

    Bai, Bing; Li, Quanzheng; Leahy, Richard M.

    2013-01-01

    The resolution of PET images is limited by the physics of positron-electron annihilation and instrumentation for photon coincidence detection. Model based methods that incorporate accurate physical and statistical models have produced significant improvements in reconstructed image quality when compared to filtered backprojection reconstruction methods. However, it has often been suggested that by incorporating anatomical information, the resolution and noise properties of PET images could be improved, leading to better quantitation or lesion detection. With the recent development of combined MR-PET scanners, it is possible to collect intrinsically co-registered MR images. It is therefore now possible to routinely make use of anatomical information in PET reconstruction, provided appropriate methods are available. In this paper we review research efforts over the past 20 years to develop these methods. We discuss approaches based on the use of both Markov random field priors and joint information or entropy measures. The general framework for these methods is described and their performance and longer term potential and limitations discussed. PMID:23178087
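
    As a hedged illustration of how MR anatomical information typically enters penalized PET reconstruction (a generic form, not a formula taken from this review), an anatomically weighted quadratic prior can be written as

        \hat{x} = \arg\max_{x \ge 0} \; L(y \mid x) \;-\; \beta \sum_{j} \sum_{k \in N_j} w_{jk}\,(x_j - x_k)^2

    where L(y|x) is the Poisson log-likelihood of the PET data, N_j is the neighborhood of voxel j, and the weights w_{jk} are reduced across MR-derived anatomical boundaries so that smoothing is discouraged across edges; the specific weighting rule is an assumption.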

  11. The system analysis of light field information collection based on the light field imaging

    NASA Astrophysics Data System (ADS)

    Wang, Ye; Li, Wenhua; Hao, Chenyang

    2016-10-01

    Augmented reality (AR) technology is becoming a focus of study, and the AR effect of light field imaging makes research on light field cameras attractive. Micro-array structures have been adopted in most light field information acquisition systems (LFIAS) since the emergence of the light field camera, mainly micro lens array (MLA) and micro pinhole array (MPA) systems. This paper reviews the structures of the LFIAS commonly used in light field cameras in recent years. The LFIAS is analyzed based on the theory of geometrical optics. Meanwhile, this paper presents a novel LFIAS, a plane grating system, which we call a "micro aperture array" (MAA). The LFIAS is also analyzed based on information optics; the paper shows that there is little difference among the multiple images produced by the plane grating system, and that the plane grating system can collect and record the amplitude and phase information of the light field.

  12. A Local Fast Marching-Based Diffusion Tensor Image Registration Algorithm by Simultaneously Considering Spatial Deformation and Tensor Orientation

    PubMed Central

    Xue, Zhong; Li, Hai; Guo, Lei; Wong, Stephen T.C.

    2010-01-01

    Spatially aligning diffusion tensor images (DTI) is a key step in quantitatively comparing neural images obtained from different subjects or from the same subject at different timepoints. Unlike traditional scalar or multi-channel image registration methods, tensor orientation should be considered in DTI registration. Recently, several DTI registration methods have been proposed in the literature, but their deformation fields depend purely on extracted tensor features rather than on the whole tensor information. Other methods, such as piece-wise affine transformation and diffeomorphic non-linear registration algorithms, use analytical gradients of the registration objective functions and simultaneously consider the reorientation and deformation of tensors during registration. However, only relatively local tensor information, such as voxel-wise tensor similarity, is utilized. This paper proposes a new DTI registration algorithm, called local fast marching (FM)-based simultaneous registration. The algorithm not only considers the orientation of tensors during registration but also utilizes the neighborhood tensor information of each voxel to drive the deformation; this neighborhood tensor information is extracted by a local fast marching algorithm around the voxels of interest. These local fast marching-based tensor features efficiently reflect the diffusion patterns around each voxel within a spherical neighborhood and can capture relatively distinctive features of the anatomical structures. Using simulated and real human brain DTI data, the experimental results show that the proposed algorithm is more accurate than FA-based registration and more efficient than its counterpart, the neighborhood tensor similarity-based registration. PMID:20382233

  13. Information security using multiple reference-based optical joint transform correlation and orthogonal code

    NASA Astrophysics Data System (ADS)

    Nazrul Islam, Mohammed; Karim, Mohammad A.; Vijayan Asari, K.

    2013-09-01

    Protecting and processing confidential information, such as personal identification and biometrics, remains a challenging task for further research and development. A new methodology to ensure enhanced security of information in images through the use of encryption and multiplexing is proposed in this paper. We use an orthogonal encoding scheme to encode multiple pieces of information independently and then combine them to save storage space and transmission bandwidth. The encoded and multiplexed image is encrypted employing multiple reference-based joint transform correlation. The encryption key is fed into four channels which are relatively phase shifted by different amounts. The input image is introduced to all the channels and then Fourier transformed to obtain joint power spectra (JPS) signals. The resultant JPS signals are again phase-shifted and then combined to form a modified JPS signal, which yields the encrypted image after an inverse Fourier transformation. The proposed cryptographic system makes the confidential information inaccessible to any unauthorized intruder, while allowing the respective authorized recipient to retrieve the information without any distortion. The proposed technique is investigated through computer simulations under different practical conditions in order to verify its overall robustness.
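
    As a hedged sketch of the joint transform correlation step at the heart of such schemes (a generic single-channel form rather than the paper's four-channel arrangement), the joint power spectrum of a reference key r(x, y) and an input image f(x, y) placed side by side is

        \mathrm{JPS}(u,v) \;=\; \bigl|\, \mathcal{F}\{ r(x + x_0, y) \} \;+\; \mathcal{F}\{ f(x - x_0, y) \} \,\bigr|^{2}

    where 2x_0 is the separation between the two windows; in the proposed method four such spectra, formed with relatively phase-shifted keys, are combined before the inverse Fourier transform that yields the encrypted image.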

  14. Image denoising and deblurring using multispectral data

    NASA Astrophysics Data System (ADS)

    Semenishchev, E. A.; Voronin, V. V.; Marchuk, V. I.

    2017-05-01

    Decision-making systems are currently becoming widespread. These systems are based on the analysis of video sequences and additional data, such as volume, change in size, the behavior of a single object or a group of objects, temperature gradients, the presence of local areas with strong differences, and others. Security and control systems are the main areas of application. Noise in the images strongly influences the subsequent processing and decision making. This paper considers the problem of primary signal processing for solving the tasks of image denoising and deblurring of multispectral data. The additional information from multispectral channels can improve the efficiency of object classification. In this paper we use a method of combining information about the objects obtained by cameras in different frequency bands. We apply a method based on the simultaneous minimization of an L2 data term and the squared first-order differences of the sequence of estimates to denoise the image and restore blur at the edges. In the case of information loss, an approach is applied that is based on the interpolation of data taken from the analysis of objects located in other areas and on information obtained from the multispectral camera. The effectiveness of the proposed approach is shown on a set of test images.
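
    A hedged reading of the stated criterion, written for a single one-dimensional signal (the exact weighting and the coupling across spectral channels are assumptions):

        \hat{x} \;=\; \arg\min_{x} \; \sum_{i} (y_i - x_i)^2 \;+\; \lambda \sum_{i} (x_{i+1} - x_i)^2

    where y is the observed signal, \hat{x} the estimate, and \lambda balances data fidelity against smoothness of the sequence of estimates.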

  15. The use of atlas registration and graph cuts for prostate segmentation in magnetic resonance images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korsager, Anne Sofie, E-mail: asko@hst.aau.dk; Østergaard, Lasse Riis; Fortunati, Valerio

    2015-04-15

    Purpose: An automatic method for 3D prostate segmentation in magnetic resonance (MR) images is presented for planning image-guided radiotherapy treatment of prostate cancer. Methods: A spatial prior based on intersubject atlas registration is combined with organ-specific intensity information in a graph cut segmentation framework. The segmentation is tested on 67 axial T2-weighted MR images in a leave-one-out cross validation experiment and compared with both manual reference segmentations and with multiatlas-based segmentations using majority-voting atlas fusion. The impact of atlas selection is investigated in both the traditional atlas-based segmentation and the new graph cut method that combines atlas and intensity information in order to improve the segmentation accuracy. Best results were achieved using the method that combines intensity information, shape information, and atlas selection in the graph cut framework. Results: A mean Dice similarity coefficient (DSC) of 0.88 and a mean surface distance (MSD) of 1.45 mm with respect to the manual delineation were achieved. Conclusions: This approaches the interobserver DSC of 0.90 and interobserver MSD of 1.15 mm and is comparable to other studies performing prostate segmentation in MR.
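
    As a hedged illustration of how an atlas prior and intensity information are typically combined in a graph cut energy (a generic formulation, not necessarily the paper's exact one):

        E(\ell) \;=\; \sum_{p} \Bigl[ -\log P(I_p \mid \ell_p) \;-\; \alpha \log P_{\mathrm{atlas}}(\ell_p) \Bigr] \;+\; \beta \sum_{(p,q) \in \mathcal{N}} [\ell_p \neq \ell_q] \, e^{-(I_p - I_q)^2 / 2\sigma^2}

    where \ell_p labels voxel p as prostate or background, P(I_p | \ell_p) is the organ-specific intensity model, P_atlas is the registered atlas prior, and \alpha, \beta, \sigma are assumed trade-off parameters; minimizing E with a min-cut yields the segmentation.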

  16. Contextually guided very-high-resolution imagery classification with semantic segments

    NASA Astrophysics Data System (ADS)

    Zhao, Wenzhi; Du, Shihong; Wang, Qiao; Emery, William J.

    2017-10-01

    Contextual information, revealing relationships and dependencies between image objects, is one of the most important sources of information for the successful interpretation of very-high-resolution (VHR) remote sensing imagery. Over the last decade, the geographic object-based image analysis (GEOBIA) technique has been widely used to first divide images into homogeneous parts, and then to assign semantic labels according to the properties of image segments. However, due to the complexity and heterogeneity of VHR images, segments without semantic labels (i.e., semantic-free segments) generated with low-level features often fail to represent geographic entities (for example, building roofs are usually partitioned into chimney/antenna/shadow parts). As a result, it is hard to capture contextual information across geographic entities when using semantic-free segments. In contrast to low-level features, "deep" features can be used to build robust segments with accurate labels (i.e., semantic segments) in order to represent geographic entities at higher levels. Based on these semantic segments, semantic graphs can be constructed to capture contextual information in VHR images. In this paper, semantic segments were first explored with convolutional neural networks (CNN), and a conditional random field (CRF) model was then applied to model the contextual information between semantic segments. Experimental results on two challenging VHR datasets (i.e., the Vaihingen and Beijing scenes) indicate that the proposed method is an improvement over existing image classification techniques in classification performance (overall accuracy ranges from 82% to 96%).

  17. Extraction of Urban Trees from Integrated Airborne Based Digital Image and LIDAR Point Cloud Datasets - Initial Results

    NASA Astrophysics Data System (ADS)

    Dogon-yaro, M. A.; Kumar, P.; Rahman, A. Abdul; Buyuksalih, G.

    2016-10-01

    Timely and accurate acquisition of information on the condition and structural changes of urban trees serves as a tool for decision makers to better appreciate urban ecosystems and their numerous values, which are critical to building strategies for sustainable development. The conventional techniques used for extracting tree features include ground surveying and interpretation of aerial photography. However, these techniques are associated with constraints, such as labour-intensive field work, high cost, and the influence of weather conditions and topographic cover, which can be overcome by means of integrated airborne LiDAR and very-high-resolution digital image datasets. This study presents a semi-automated approach for extracting urban trees from integrated airborne LiDAR and multispectral digital image datasets over Istanbul, Turkey. The scheme includes detection and extraction of shadow-free vegetation features based on the spectral properties of the digital images, using shadow index and NDVI techniques, and automated extraction of 3D information about vegetation features from the integrated processing of the shadow-free vegetation image and the LiDAR point cloud datasets. The developed algorithms show promising results as an automated and cost-effective approach to estimating and delineating 3D information about urban trees. The research also proved that integrated datasets are a suitable technology and a viable source of information for city managers to use in urban tree management.
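
    A hedged sketch of the spectral screening step described above: computing NDVI from red and near-infrared bands and keeping bright, vegetated pixels. The band scaling, the brightness stand-in for a shadow index, and the thresholds are assumptions, and the LiDAR fusion step is omitted.

        import numpy as np

        def vegetation_mask(red, nir, brightness, ndvi_thresh=0.3, shadow_thresh=0.15):
            # Bands are float arrays scaled to [0, 1]; thresholds are illustrative assumptions.
            ndvi = (nir - red) / (nir + red + 1e-6)     # normalized difference vegetation index
            shadow_free = brightness > shadow_thresh    # crude shadow screen standing in for a shadow index
            return (ndvi > ndvi_thresh) & shadow_free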

  18. Change detection from remotely sensed images: From pixel-based to object-based approaches

    NASA Astrophysics Data System (ADS)

    Hussain, Masroor; Chen, Dongmei; Cheng, Angela; Wei, Hui; Stanley, David

    2013-06-01

    The appetite for up-to-date information about earth's surface is ever increasing, as such information provides a base for a large number of applications, including local, regional and global resources monitoring, land-cover and land-use change monitoring, and environmental studies. The data from remote sensing satellites provide opportunities to acquire information about land at varying resolutions and has been widely used for change detection studies. A large number of change detection methodologies and techniques, utilizing remotely sensed data, have been developed, and newer techniques are still emerging. This paper begins with a discussion of the traditionally pixel-based and (mostly) statistics-oriented change detection techniques which focus mainly on the spectral values and mostly ignore the spatial context. This is succeeded by a review of object-based change detection techniques. Finally there is a brief discussion of spatial data mining techniques in image processing and change detection from remote sensing data. The merits and issues of different techniques are compared. The importance of the exponential increase in the image data volume and multiple sensors and associated challenges on the development of change detection techniques are highlighted. With the wide use of very-high-resolution (VHR) remotely sensed images, object-based methods and data mining techniques may have more potential in change detection.

  19. A combined learning algorithm for prostate segmentation on 3D CT images.

    PubMed

    Ma, Ling; Guo, Rongrong; Zhang, Guoyi; Schuster, David M; Fei, Baowei

    2017-11-01

    Segmentation of the prostate on CT images has many applications in the diagnosis and treatment of prostate cancer. Because of the low soft-tissue contrast on CT images, prostate segmentation is a challenging task. A learning-based segmentation method is proposed for the prostate on three-dimensional (3D) CT images. We combine population-based and patient-based learning methods for segmenting the prostate on CT images. Population data can provide useful information to guide the segmentation processing. Because of inter-patient variations, patient-specific information is particularly useful for improving the segmentation accuracy for an individual patient. In this study, we combine a population learning method and a patient-specific learning method to improve the robustness of prostate segmentation on CT images. We train a population model based on the data from a group of prostate patients. We also train a patient-specific model based on the data of the individual patient and incorporate the information marked by user interaction into the segmentation processing. We calculate the similarity between the two models to obtain applicable population and patient-specific knowledge to compute the likelihood of a pixel belonging to prostate tissue. A new adaptive threshold method is developed to convert the likelihood image into a binary image of the prostate, and thus complete the segmentation of the gland on CT images. The proposed learning-based segmentation algorithm was validated using 3D CT volumes of 92 patients. All of the CT image volumes were manually segmented independently three times by two clinically experienced radiologists, and the manual segmentation results served as the gold standard for evaluation. The experimental results show that the segmentation method achieved a Dice similarity coefficient of 87.18 ± 2.99%, compared to the manual segmentation. By combining the population learning and patient-specific learning methods, the proposed method is effective for segmenting the prostate on 3D CT images. The prostate CT segmentation method can be used in various applications including volume measurement and treatment planning of the prostate. © 2017 American Association of Physicists in Medicine.
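
    A hedged reading of the model-combination step (the paper's similarity-based weighting may differ in detail): the prostate likelihood at pixel i can be written as a convex combination of the two learned models,

        P(\text{prostate} \mid \mathbf{f}_i) \;=\; w\,P_{\mathrm{pop}}(\text{prostate} \mid \mathbf{f}_i) \;+\; (1 - w)\,P_{\mathrm{pat}}(\text{prostate} \mid \mathbf{f}_i), \qquad 0 \le w \le 1

    where \mathbf{f}_i are the image features, P_pop and P_pat are the population and patient-specific models, and w is set from the similarity between the two models; an adaptive threshold on the resulting likelihood image then yields the binary prostate mask.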

  20. A knowledge-based framework for image enhancement in aviation security.

    PubMed

    Singh, Maneesha; Singh, Sameer; Partridge, Derek

    2004-12-01

    The main aim of this paper is to present a knowledge-based framework for automatically selecting the best image enhancement algorithm from several available on a per image basis in the context of X-ray images of airport luggage. The approach detailed involves a system that learns to map image features that represent its viewability to one or more chosen enhancement algorithms. Viewability measures have been developed to provide an automatic check on the quality of the enhanced image, i.e., is it really enhanced? The choice is based on ground-truth information generated by human X-ray screening experts. Such a system, for a new image, predicts the best-suited enhancement algorithm. Our research details the various characteristics of the knowledge-based system and shows extensive results on real images.

  1. An atlas-based multimodal registration method for 2D images with discrepancy structures.

    PubMed

    Lv, Wenchao; Chen, Houjin; Peng, Yahui; Li, Yanfeng; Li, Jupeng

    2018-06-04

    An atlas-based multimodal registration method for two-dimensional images with discrepancy structures is proposed in this paper. An atlas is utilized to complement the discrepancy structure information in multimodal medical images. The scheme includes three steps: floating image to atlas registration, atlas to reference image registration, and field-based deformation. To evaluate the performance, a frame model, a brain model, and clinical images were employed in registration experiments. We measured the registration performance by the sum of squared intensity differences. Results indicate that this method is robust and performs better than direct registration for multimodal images with discrepancy structures. We conclude that the proposed method is suitable for multimodal images with discrepancy structures. Graphical Abstract: An atlas-based multimodal registration method schematic diagram.

  2. Authenticity preservation with histogram-based reversible data hiding and quadtree concepts.

    PubMed

    Huang, Hsiang-Cheh; Fang, Wai-Chi

    2011-01-01

    With the widespread use of identification systems, establishing authenticity with sensors has become an important research issue. Among the schemes for making authenticity verification based on information security possible, reversible data hiding has attracted much attention during the past few years. With its characteristic of reversibility, the scheme is required to fulfill goals from two aspects. On the one hand, at the encoder, the secret information needs to be embedded into the original image by some algorithm, such that the output image will resemble the input one as much as possible. On the other hand, at the decoder, both the secret information and the original image must be correctly extracted and recovered, and they should be identical to their embedding counterparts. Under the requirement of reversibility, for evaluating the performance of a data hiding algorithm, the output image quality, named imperceptibility, and the number of bits for embedding, called capacity, are the two key factors used to assess the effectiveness of the algorithm. Besides, the size of the side information needed to make decoding possible should also be evaluated. Here we consider using the characteristics of original images to develop our method with better performance. In this paper, we propose an algorithm that has the ability to provide more capacity than conventional algorithms, with similar output image quality after embedding, and comparable side information produced. Simulation results demonstrate the applicability and better performance of our algorithm.
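
    A hedged sketch of classic histogram-shifting reversible embedding (the paper adds quadtree-based use of image characteristics, which is omitted here); the peak/zero bin selection is simplified and assumed.

        import numpy as np

        def histogram_shift_embed(img, bits):
            # Classic histogram-shifting embedding; assumes the peak bin is below gray level 255.
            hist = np.bincount(img.ravel(), minlength=256)
            peak = int(hist.argmax())                          # most frequent gray value
            zero = int(hist[peak + 1:].argmin()) + peak + 1    # emptiest bin above the peak
            out = img.astype(np.int32)
            out[(out > peak) & (out < zero)] += 1              # shift to vacate the bin next to the peak
            flat = out.ravel()
            carriers = np.flatnonzero(img.ravel() == peak)     # peak-valued pixels carry one bit each
            for idx, bit in zip(carriers, bits):
                flat[idx] = peak + bit
            return flat.reshape(img.shape).astype(np.uint8), peak, zero

    Decoding reads a 0 from pixels equal to peak and a 1 from pixels equal to peak + 1, then shifts the intermediate range back down, which restores the original image exactly; peak and zero act as the side information.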

  3. Landmark-based deep multi-instance learning for brain disease diagnosis.

    PubMed

    Liu, Mingxia; Zhang, Jun; Adeli, Ehsan; Shen, Dinggang

    2018-01-01

    In conventional Magnetic Resonance (MR) image based methods, two stages are often involved to capture brain structural information for disease diagnosis, i.e., 1) manually partitioning each MR image into a number of regions-of-interest (ROIs), and 2) extracting pre-defined features from each ROI for diagnosis with a certain classifier. However, these pre-defined features often limit the performance of the diagnosis, due to challenges in 1) defining the ROIs and 2) extracting effective disease-related features. In this paper, we propose a landmark-based deep multi-instance learning (LDMIL) framework for brain disease diagnosis. Specifically, we first adopt a data-driven learning approach to discover disease-related anatomical landmarks in the brain MR images, along with their nearby image patches. Then, our LDMIL framework learns an end-to-end MR image classifier for capturing both the local structural information conveyed by image patches located by landmarks and the global structural information derived from all detected landmarks. We have evaluated our proposed framework on 1526 subjects from three public datasets (i.e., ADNI-1, ADNI-2, and MIRIAD), and the experimental results show that our framework can achieve superior performance over state-of-the-art approaches. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. PACS-Based Computer-Aided Detection and Diagnosis

    NASA Astrophysics Data System (ADS)

    Huang, H. K. (Bernie); Liu, Brent J.; Le, Anh HongTu; Documet, Jorge

    The ultimate goal of Picture Archiving and Communication System (PACS)-based Computer-Aided Detection and Diagnosis (CAD) is to integrate CAD results into daily clinical practice so that it becomes a second reader to aid the radiologist's diagnosis. Integration of CAD and Hospital Information System (HIS), Radiology Information System (RIS) or PACS requires certain basic ingredients from Health Level 7 (HL7) standard for textual data, Digital Imaging and Communications in Medicine (DICOM) standard for images, and Integrating the Healthcare Enterprise (IHE) workflow profiles in order to comply with the Health Insurance Portability and Accountability Act (HIPAA) requirements to be a healthcare information system. Among the DICOM standards and IHE workflow profiles, DICOM Structured Reporting (DICOM-SR); and IHE Key Image Note (KIN), Simple Image and Numeric Report (SINR) and Post-processing Work Flow (PWF) are utilized in CAD-HIS/RIS/PACS integration. These topics with examples are presented in this chapter.

  5. Classification of endoscopic capsule images by using color wavelet features, higher order statistics and radial basis functions.

    PubMed

    Lima, C S; Barbosa, D; Ramos, J; Tavares, A; Monteiro, L; Carvalho, L

    2008-01-01

    This paper presents a system to support medical diagnosis and detection of abnormal lesions by processing capsule endoscopic images. Endoscopic images possess rich information expressed by texture. Texture information can be efficiently extracted from medium scales of the wavelet transform. The set of features proposed in this paper to encode textural information is named color wavelet covariance (CWC). CWC coefficients are based on the covariances of second-order textural measures, and an optimum subset of them is proposed. Third- and fourth-order moments are added to cope with distributions that tend to become non-Gaussian, especially in some pathological cases. The proposed approach is supported by a radial basis function classifier for the characterization of the image regions along the video frames. The whole methodology has been applied to real data containing 6 full endoscopic exams and reached 95% specificity and 93% sensitivity.

  6. Study on an agricultural environment monitoring server system using Wireless Sensor Networks.

    PubMed

    Hwang, Jeonghwan; Shin, Changsun; Yoe, Hyun

    2010-01-01

    This paper proposes an agricultural environment monitoring server system for monitoring information concerning an outdoors agricultural production environment utilizing Wireless Sensor Network (WSN) technology. The proposed agricultural environment monitoring server system collects environmental and soil information on the outdoors through WSN-based environmental and soil sensors, collects image information through CCTVs, and collects location information using GPS modules. This collected information is converted into a database through the agricultural environment monitoring server consisting of a sensor manager, which manages information collected from the WSN sensors, an image information manager, which manages image information collected from CCTVs, and a GPS manager, which processes location information of the agricultural environment monitoring server system, and provides it to producers. In addition, a solar cell-based power supply is implemented for the server system so that it could be used in agricultural environments with insufficient power infrastructure. This agricultural environment monitoring server system could even monitor the environmental information on the outdoors remotely, and it could be expected that the use of such a system could contribute to increasing crop yields and improving quality in the agricultural field by supporting the decision making of crop producers through analysis of the collected information.

  7. Standards to support information systems integration in anatomic pathology.

    PubMed

    Daniel, Christel; García Rojo, Marcial; Bourquard, Karima; Henin, Dominique; Schrader, Thomas; Della Mea, Vincenzo; Gilbertson, John; Beckwith, Bruce A

    2009-11-01

    Integrating anatomic pathology information (text and images) into electronic health care records is a key challenge for enhancing clinical information exchange between anatomic pathologists and clinicians. The aim of the Integrating the Healthcare Enterprise (IHE) international initiative is precisely to ensure interoperability of clinical information systems by using existing widespread industry standards such as Digital Imaging and Communication in Medicine (DICOM) and Health Level Seven (HL7). The objective was to define standards-based informatics transactions to integrate anatomic pathology information into the Healthcare Enterprise. We used the methodology of the IHE initiative. Working groups from IHE, HL7, and DICOM, with special interest in anatomic pathology, defined consensual technical solutions to provide end-users with improved access to consistent information across multiple information systems. The IHE anatomic pathology technical framework describes a first integration profile, "Anatomic Pathology Workflow," dedicated to the diagnostic process including basic image acquisition and reporting solutions. This integration profile relies on 10 transactions based on HL7 or DICOM standards. A common specimen model was defined to consistently identify and describe specimens in both HL7 and DICOM transactions. The IHE anatomic pathology working group has defined standards-based informatics transactions to support the basic diagnostic workflow in anatomic pathology laboratories. In further stages, the technical framework will be completed to manage whole-slide images and semantically rich structured reports in the diagnostic workflow and to integrate systems used for patient care and those used for research activities (such as tissue bank databases or tissue microarrayers).

  8. Fast image matching algorithm based on projection characteristics

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun

    2011-06-01

    Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a brand new fast matching algorithm based on projection. By projecting the grayscale image, the algorithm converts the two-dimensional information of the image into one dimension and then matches and identifies through one-dimensional correlation; because the projections are normalized, correct matching is still achieved when the image brightness or signal amplitude increases proportionally. Experimental results show that the projection-characteristics-based image registration method proposed in this article can greatly improve the matching speed while ensuring the matching accuracy.
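
    A hedged sketch of projection-based matching: row and column sums of the template are compared against sliding windows of the image projections using normalized correlation (a simplified one-dimensional search per axis, not necessarily the paper's exact procedure).

        import numpy as np

        def _normalized(v):
            v = v - v.mean()
            return v / (np.linalg.norm(v) + 1e-9)

        def match_by_projection(image, template):
            # Correlate normalized 1D projections (row/column sums) to find the template offset.
            col_img, col_tpl = image.sum(axis=0), template.sum(axis=0)
            row_img, row_tpl = image.sum(axis=1), template.sum(axis=1)

            def best_offset(signal, probe):
                scores = [float(_normalized(signal[i:i + probe.size]) @ _normalized(probe))
                          for i in range(signal.size - probe.size + 1)]
                return int(np.argmax(scores))

            x = best_offset(col_img, col_tpl)
            y = best_offset(row_img, row_tpl)
            return y, x   # estimated top-left corner of the best match

    Because both projections are mean-subtracted and unit-normalized before correlation, a proportional change in brightness leaves the matching scores unchanged, which mirrors the normalization property claimed above.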

  9. The informatics of a C57BL/6J mouse brain atlas.

    PubMed

    MacKenzie-Graham, Allan; Jones, Eagle S; Shattuck, David W; Dinov, Ivo D; Bota, Mihail; Toga, Arthur W

    2003-01-01

    The Mouse Atlas Project (MAP) aims to produce a framework for organizing and analyzing the large volumes of neuroscientific data produced by the proliferation of genetically modified animals. Atlases provide an invaluable aid in understanding the impact of genetic manipulations by providing a standard for comparison. We use a digital atlas as the hub of an informatics network, correlating imaging data, such as structural imaging and histology, with text-based data, such as nomenclature, connections, and references. We generated brain volumes using magnetic resonance microscopy (MRM), classical histology, and immunohistochemistry, and registered them into a common and defined coordinate system. Specially designed viewers were developed in order to visualize multiple datasets simultaneously and to coordinate between textual and image data. Researchers can navigate through the brain interchangeably, in either a text-based or image-based representation that automatically updates information as they move. The atlas also allows the independent entry of other types of data, the facile retrieval of information, and the straight-forward display of images. In conjunction with centralized servers, image and text data can be kept current and can decrease the burden on individual researchers' computers. A comprehensive framework that encompasses many forms of information in the context of anatomic imaging holds tremendous promise for producing new insights. The atlas and associated tools can be found at http://www.loni.ucla.edu/MAP.

  10. Status Report on Image Information Systems and Image Data Base Technology

    DTIC Science & Technology

    1989-12-01

    PowerHouse, StarGate, StarNet. Significant Recent Developments: Acceptance major teaching Universities (Australia), U.S.A.F. Major Corporations. Future...scenario, all computers must be VAX). STARBASE StarBase StarNet (Network server), StarBase StarGate (SQL gateway). SYBASE Sybase is an inherently

  11. Development of a digital-micromirror-device-based multishot snapshot spectral imaging system.

    PubMed

    Wu, Yuehao; Mirza, Iftekhar O; Arce, Gonzalo R; Prather, Dennis W

    2011-07-15

    We report on the development of a digital-micromirror-device (DMD)-based multishot snapshot spectral imaging (DMD-SSI) system as an alternative to current piezostage-based multishot coded aperture snapshot spectral imager (CASSI) systems. In this system, a DMD is used to implement compressive sensing (CS) measurement patterns for reconstructing the spatial/spectral information of an imaging scene. Based on the CS measurement results, we demonstrated the concurrent reconstruction of 24 spectral images. The DMD-SSI system is versatile in nature as it can be used to implement independent CS measurement patterns in addition to spatially shifted patterns that piezostage-based systems can offer. © 2011 Optical Society of America
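
    As a hedged sketch of the compressive sensing measurement model that underlies such multishot systems (generic CS notation, not the paper's specific formulation):

        \mathbf{y}_k \;=\; \boldsymbol{\Phi}_k \,\mathbf{x} \;+\; \mathbf{n}_k, \qquad k = 1, \dots, K

    where \mathbf{x} is the vectorized spatial-spectral data cube, \boldsymbol{\Phi}_k is the sensing matrix realized by the k-th DMD pattern, \mathbf{y}_k is the corresponding detector snapshot, and \mathbf{n}_k is noise; the spectral images are jointly reconstructed from the K measurements by a sparsity-promoting solver.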

  12. Exploration of mineral resource deposits based on analysis of aerial and satellite image data employing artificial intelligence methods

    NASA Astrophysics Data System (ADS)

    Osipov, Gennady

    2013-04-01

    We propose a solution to the problem of exploration of various mineral resource deposits and the determination of their forms / classification of their types (oil, gas, minerals, gold, etc.) with the help of satellite photography of the region of interest. Images received from the satellite are processed and analyzed to reveal the presence of specific signs of deposits of various minerals. Data processing and forecasting can be divided into several stages. Pre-processing of images: normalization of color and luminosity characteristics, determination of the necessary contrast level, and integration of a great number of separate photos into a single map of the region are performed. Construction of the semantic map image: recognition of the bitmapped image and allocation of objects and primitives known to the system are carried out. Intelligent analysis: at this stage the acquired information is analyzed with the help of a knowledge base, which contains the so-called "attention landscapes" of experts. The methods used for recognition and identification of images are: a) a combined method of image recognition, b) semantic analysis of posterized images, c) reconstruction of three-dimensional objects from bitmapped images, and d) cognitive technology for processing and interpretation of images. This stage is fundamentally new and distinguishes the suggested technology from all others. Automatic registration of the allocation of experts' attention - registration of the so-called "attention landscape" of experts - is the basis of the technology. Landscapes of attention are, essentially, highly effective filters that cut off unnecessary information and emphasize exactly the factors used by an expert for making a decision. The technology based on these principles involves the following stages, which are implemented in corresponding program agents: Training mode -> Creation of the base of ophthalmologic images (OI) -> Processing and making generalized OI (GOI) -> Mode of recognition and interpretation of unknown images. The training mode includes non-contact registration of eye motion, reconstruction of the "attention landscape" fixed by the expert, recording the comments of the expert who is a specialist in the field of image interpretation, and transfer of this information into the knowledge base. Creation of the base of ophthalmologic images (OI) includes making semantic contacts from a great number of OI based on analysis of the OI and the expert's comments. Processing of OI and making generalized OI (GOI) is realized by inductive logic algorithms and consists in the synthesis of structural invariants of OI. The mode of recognition and interpretation of unknown images consists of several stages, which include: comparison of an unknown image with the base of structural invariants of OI; revealing structural invariants in unknown images; and synthesis of an interpretive message from the structural invariants base and the OI base (the experts' comments stored in it). We want to emphasize that the training mode does not assume special involvement of experts to teach the system - it is realized in the process of regular experts' work on image interpretation and becomes possible after installation of a special apparatus for non-contact registration of experts' attention. Consequently, the technology whose principles are described here provides a fundamentally new and effective solution to the problem of exploration of mineral resource deposits based on computer analysis of aerial and satellite image data.

  13. [Medical imaging in tumor precision medicine: opportunities and challenges].

    PubMed

    Xu, Jingjing; Tan, Yanbin; Zhang, Minming

    2017-05-25

    Tumor precision medicine is an emerging approach for tumor diagnosis, treatment and prevention, which takes into account individual variability in environment, lifestyle and genetic information. Tumor precision medicine is built upon the medical imaging innovations developed during the past decades, including new hardware, new imaging agents, standardized protocols, image analysis and multimodal imaging fusion technology. The development of automated and reproducible analysis algorithms has also made it possible to extract large amounts of information from image-based features. With the continuous development and mining of tumor clinical and imaging databases, radiogenomics, radiomics and artificial intelligence have been flourishing. These new technological advances therefore bring new opportunities and challenges to the application of imaging in tumor precision medicine.

  14. [Application of computer-assisted 3D imaging simulation for surgery].

    PubMed

    Matsushita, S; Suzuki, N

    1994-03-01

    This article describes trends in the application of various imaging technologies in surgical planning, navigation, and computer-aided surgery. Imaging information is an essential factor for simulation in medicine. It includes three-dimensional (3D) image reconstruction, neurosurgical navigation, creating physical models based on 3D imaging data, and so on. These developments depend mostly on 3D imaging techniques, to which recent computer technology has contributed greatly. 3D imaging can offer new, intuitive information to physicians and surgeons, and this method is suitable for mechanical control. By utilizing simulated results, we can obtain more precise surgical orientation, estimation, and operation. For further advancement, automatic and high-speed recognition of medical images is being developed.

  15. Using fuzzy fractal features of digital images for the material surface analysis

    NASA Astrophysics Data System (ADS)

    Privezentsev, D. G.; Zhiznyakov, A. L.; Astafiev, A. V.; Pugin, E. V.

    2018-01-01

    Edge detection is an important task in image processing. There are many approaches in this area: the Sobel and Canny operators, among others. One of the promising techniques in image processing is the use of fuzzy logic and fuzzy set theory, which allow us to increase processing quality by representing information in its fuzzy form. Most of the existing fuzzy image processing methods switch to fuzzy sets at very late stages, which leads to the loss of some useful information. In this paper, a novel method of edge detection based on a fuzzy image representation and fuzzy pixels is proposed. With this approach, we convert the image to its fuzzy form in the first step. Different approaches to this conversion are described. Several membership functions for fuzzy pixel description, and requirements for their form, are given. A novel approach to edge detection based on the Sobel operator and the fuzzy image representation is proposed. Experimental testing of the developed method was performed on remote sensing images.
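
    A hedged sketch of the general idea (fuzzify first, then apply a Sobel-style operator to the membership values); the sigmoid membership function and its parameters are assumptions, not the authors' definition.

        import numpy as np
        from scipy import ndimage

        def fuzzy_membership(img, midpoint=128.0, slope=0.05):
            # Map gray levels to [0, 1] membership degrees with a sigmoid (illustrative choice).
            return 1.0 / (1.0 + np.exp(-slope * (img.astype(float) - midpoint)))

        def fuzzy_sobel_edges(img):
            # Apply the Sobel operator to the fuzzy representation rather than to raw gray values.
            mu = fuzzy_membership(img)
            gx = ndimage.sobel(mu, axis=1)
            gy = ndimage.sobel(mu, axis=0)
            return np.hypot(gx, gy)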

  16. Egocentric Direction and Position Perceptions are Dissociable Based on Only Static Lane Edge Information

    PubMed Central

    Nakashima, Ryoichi; Iwai, Ritsuko; Ueda, Sayako; Kumada, Takatsune

    2015-01-01

    When observers perceive several objects in a space, they should, at the same time, effectively perceive their own position as a viewpoint. However, little is known about observers' percepts of their own spatial location based on the visual scene information viewed from them. Previous studies indicate that two distinct visual spatial processes exist in locomotion situations: egocentric position perception and egocentric direction perception. Those studies examined such perceptions in information-rich visual environments where much dynamic and static visual information was available. This study examined these two perceptions in information-impoverished environments, including only static lane edge information (i.e., limited information). We investigated the visual factors associated with static lane edge information that may affect these perceptions. In particular, we examined the effects of two factors on egocentric direction and position perceptions. One is the "uprightness factor", namely that "far" visual information is seen at a higher location than "near" visual information. The other is the "central vision factor", namely that observers usually look at "far" visual information using central vision (i.e., foveal vision) whereas they look at "near" visual information using peripheral vision. Experiment 1 examined the effect of the uprightness factor using normal and inverted road images. Experiment 2 examined the effect of the central vision factor using normal and transposed road images where the upper half of the normal image was presented under the lower half. Experiment 3 aimed to replicate the results of Experiments 1 and 2. Results showed that egocentric direction perception is interfered with by image inversion or image transposition, whereas egocentric position perception is robust against these image transformations. That is, both the uprightness and central vision factors are important for egocentric direction perception, but not for egocentric position perception. Therefore, the two visual spatial perceptions of observers' own viewpoints are fundamentally dissociable. PMID:26648895

  17. Broadband Phase Retrieval for Image-Based Wavefront Sensing

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.

    2007-01-01

    A focus-diverse phase-retrieval algorithm has been shown to perform adequately for the purpose of image-based wavefront sensing when (1) broadband light (typically spanning the visible spectrum) is used in forming the images by use of an optical system under test and (2) the assumption of monochromaticity is applied to the broadband image data. Heretofore, it had been assumed that in order to obtain adequate performance, it was necessary to use narrowband or monochromatic light. Some background information, including definitions of terms and a brief description of pertinent aspects of image-based phase retrieval, is prerequisite to a meaningful summary of the present development. Phase retrieval is a general term used in optics to denote estimation of optical imperfections or aberrations of an optical system under test. The term image-based wavefront sensing refers to a general class of algorithms that recover optical phase information, and phase-retrieval algorithms constitute a subset of this class. In phase retrieval, one utilizes the measured response of the optical system under test to produce a phase estimate. The optical response of the system is defined as the image of a point-source object, which could be a star or a laboratory point source. The phase-retrieval problem is characterized as image-based in the sense that a charge-coupled-device camera, preferably of scientific imaging quality, is used to collect image data where the optical system would normally form an image. In a variant of phase retrieval, denoted phase-diverse phase retrieval [which can include focus-diverse phase retrieval (in which various defocus planes are used)], an additional known aberration (or an equivalent diversity function) is superimposed as an aid in estimating unknown aberrations by use of an image-based wavefront-sensing algorithm. Image-based phase retrieval differs from other wavefront-sensing methods such as interferometry, shearing interferometry, curvature wavefront sensing, and Shack-Hartmann sensing, all of which entail disadvantages in comparison with image-based methods. The main disadvantages of these non-image-based methods are complexity of test equipment and the need for a wavefront reference.
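
    As a much-simplified sketch of what a phase-retrieval iteration looks like, the code below runs a monochromatic, single-plane error-reduction (Gerchberg-Saxton-type) loop rather than the focus-diverse broadband algorithm described above; the circular pupil, the tilt aberration, and all parameter values are illustrative assumptions.

        import numpy as np

        def retrieve_phase(measured_amp, pupil_support, n_iter=200):
            """Error-reduction phase retrieval: estimate pupil phase from one focal-plane amplitude."""
            phase = np.zeros_like(measured_amp)
            for _ in range(n_iter):
                pupil = pupil_support * np.exp(1j * phase)            # impose the pupil support
                focal = np.fft.fft2(pupil)
                focal = measured_amp * np.exp(1j * np.angle(focal))   # impose the measured amplitude
                pupil = np.fft.ifft2(focal)
                phase = np.angle(pupil)
            return phase * pupil_support

        # Toy example: circular pupil with an unknown tilt aberration
        n = 128
        y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
        support = (x**2 + y**2 <= 0.5**2).astype(float)
        true_phase = 0.5 * x * support
        measured = np.abs(np.fft.fft2(support * np.exp(1j * true_phase)))
        estimate = retrieve_phase(measured, support)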

  18. Design and deployment of a large brain-image database for clinical and nonclinical research

    NASA Astrophysics Data System (ADS)

    Yang, Guo Liang; Lim, Choie Cheio Tchoyoson; Banukumar, Narayanaswami; Aziz, Aamer; Hui, Francis; Nowinski, Wieslaw L.

    2004-04-01

    An efficient database is an essential component of organizing diverse information on image metadata and patient information for research in medical imaging. This paper describes the design, development and deployment of a large database system serving as a brain image repository that can be used across different platforms in various medical research projects. It forms the infrastructure that links hospitals and institutions together and shares data among them. The database contains patient-, pathology-, image-, research- and management-specific data. The functionalities of the database system include image uploading, storage, indexing, downloading and sharing as well as database querying and management, with security and data anonymization concerns well taken care of. The structure of the database is a multi-tier client-server architecture with a Relational Database Management System, Security Layer, Application Layer and User Interface. An image source adapter has been developed to handle most popular image formats. The database has a user interface based on web browsers and is easy to handle. We used the Java programming language for its platform independence and vast function libraries. The brain image database can sort data according to clinically relevant information. This can be effectively used in research from the clinicians' points of view. The database is suitable for validation of algorithms on a large population of cases. Medical images for processing can be identified and organized based on information in image metadata. Clinical research in various pathologies can thus be performed with greater efficiency and large image repositories can be managed more effectively. The prototype of the system has been installed in a few hospitals and is working to the satisfaction of the clinicians.

  19. A wavelet domain adaptive image watermarking method based on chaotic encryption

    NASA Astrophysics Data System (ADS)

    Wei, Fang; Liu, Jian; Cao, Hanqiang; Yang, Jun

    2009-10-01

    Digital watermarking, a specific branch of steganography that can be used in various applications, provides a novel way to solve security problems for multimedia information. In this paper, we propose a wavelet-domain adaptive image watermarking method using chaotic stream encryption and properties of the human visual system. The secret information, which can be seen as a watermark, is hidden in a host image that can be publicly accessed, so the transport of the secret information does not attract the attention of an illegal receiver. The experimental results show that the embedded watermark is invisible and robust against some common image processing operations.
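
    A rough sketch of the two ingredients, a chaotic keystream and wavelet-domain embedding, is shown below; it uses a logistic-map keystream and PyWavelets (assumed available), and the specific embedding rule and parameters are illustrative rather than the authors' adaptive scheme.

        import numpy as np
        import pywt  # PyWavelets (assumed available)

        def logistic_keystream(n, x0=0.3141, r=3.99):
            """Chaotic keystream from the logistic map, quantised to bits."""
            x, bits = x0, np.empty(n, dtype=np.uint8)
            for i in range(n):
                x = r * x * (1.0 - x)
                bits[i] = 1 if x > 0.5 else 0
            return bits

        def embed_watermark(cover, wm_bits, alpha=2.0):
            """Encrypt watermark bits with the keystream, then add them to DWT detail coefficients."""
            enc = wm_bits ^ logistic_keystream(wm_bits.size)      # chaotic stream encryption
            cA, (cH, cV, cD) = pywt.dwt2(cover.astype(float), 'haar')
            flat = cH.ravel().copy()
            flat[:enc.size] += alpha * (2.0 * enc - 1.0)          # +/- alpha per encrypted bit
            cH = flat.reshape(cH.shape)
            return pywt.idwt2((cA, (cH, cV, cD)), 'haar')

        cover = np.random.randint(0, 256, (256, 256))
        marked = embed_watermark(cover, np.random.randint(0, 2, 512, dtype=np.uint8))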

  20. Simulations of multi-contrast x-ray imaging using near-field speckles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zdora, Marie-Christine; Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom and Department of Physics & Astronomy, University College London, London, WC1E 6BT; Thibault, Pierre

    2016-01-28

    X-ray dark-field and phase-contrast imaging using near-field speckles is a novel technique that overcomes limitations inherent in conventional absorption x-ray imaging, i.e. poor contrast for features with similar density. Speckle-based imaging yields a wealth of information with a simple setup tolerant to polychromatic and divergent beams, and simple data acquisition and analysis procedures. Here, we present simulation software used to model the image formation with the speckle-based technique, and we compare simulated results on a phantom sample with experimental synchrotron data. Thorough simulation of a speckle-based imaging experiment will help to better understand and optimise the technique itself.

  1. Drug related webpages classification using images and text information based on multi-kernel learning

    NASA Astrophysics Data System (ADS)

    Hu, Ruiguang; Xiao, Liping; Zheng, Wenjuan

    2015-12-01

    In this paper, multi-kernel learning (MKL) is used for drug-related webpage classification. First, body text and image-label text are extracted through HTML parsing, and valid images are chosen by the FOCARSS algorithm. Second, a text-based bag-of-words (BOW) model is used to generate the text representation, and an image-based BOW model is used to generate the image representation. Last, the text and image representations are fused with several methods. Experimental results demonstrate that the classification accuracy of MKL is higher than that of all other fusion methods at the decision level and feature level, and much higher than the accuracy of single-modal classification.
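
    As a minimal stand-in for MKL (not the authors' method), the sketch below builds one kernel per modality and combines them with fixed weights before feeding the precomputed kernel to an SVM with scikit-learn; a real MKL solver would learn the kernel weights, and the feature matrices here are random placeholders.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.metrics.pairwise import rbf_kernel, linear_kernel

        # Toy BOW features: one matrix per modality, rows are webpages.
        rng = np.random.default_rng(0)
        X_text, X_img = rng.random((100, 300)), rng.random((100, 64))
        y = rng.integers(0, 2, 100)
        train, test = np.arange(80), np.arange(80, 100)

        # One kernel per modality, then a fixed-weight combination (weights would be learned in true MKL).
        K_text = linear_kernel(X_text, X_text[train])
        K_img = rbf_kernel(X_img, X_img[train], gamma=0.5)
        K = 0.5 * K_text + 0.5 * K_img

        clf = SVC(kernel='precomputed').fit(K[train], y[train])
        print(clf.score(K[test], y[test]))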

  2. Developing stereo image based robot control system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suprijadi; Pambudi, I. R.; Woran, M.

    Applications of image processing have been developed in various fields and for various purposes. In the last decade, image-based systems have increased rapidly with the increase of hardware and microprocessor performance. Many fields of science and technology use these methods, especially medicine and instrumentation. New stereovision techniques that give a 3-dimensional image or movie are very interesting, but there are not many applications in control systems. A stereo image has pixel disparity information that does not exist in a single image. In this research, we propose a new method for a wheeled robot control system using stereovision. The result shows that the robot automatically moves based on stereovision captures.
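
    A minimal sketch of extracting pixel-disparity information from a rectified stereo pair with OpenCV block matching is given below; in practice the frames come from the robot's calibrated stereo camera, the synthetic images and the disparity threshold used for the control cue are illustrative assumptions.

        import cv2
        import numpy as np

        # Stand-ins for rectified left/right grayscale frames from the stereo camera.
        right = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
        left = np.roll(right, 4, axis=1)  # simulate a small horizontal disparity

        # Block-matching stereo: disparity encodes depth, which a single image does not provide.
        matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # StereoBM returns fixed-point values

        # A simple control cue: if the nearest obstacle (largest disparity) is too close, turn; else go forward.
        command = 'turn' if disparity.max() > 40 else 'forward'
        print(command)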

  3. Remote Sensing Image Change Detection Based on NSCT-HMT Model and Its Application.

    PubMed

    Chen, Pengyun; Zhang, Yichen; Jia, Zhenhong; Yang, Jie; Kasabov, Nikola

    2017-06-06

    Traditional image change detection based on a non-subsampled contourlet transform always ignores the neighborhood information's relationship to the non-subsampled contourlet coefficients, and the detection results are susceptible to noise interference. To address these disadvantages, we propose a denoising method based on the non-subsampled contourlet transform domain that uses the Hidden Markov Tree model (NSCT-HMT) for change detection of remote sensing images. First, the ENVI software is used to calibrate the original remote sensing images. After that, the mean-ratio operation is adopted to obtain the difference image that will be denoised by the NSCT-HMT model. Then, using the Fuzzy Local Information C-means (FLICM) algorithm, the difference image is divided into the changed area and the unchanged area. The proposed algorithm is applied to a real remote sensing data set. The application results show that the proposed algorithm can effectively suppress clutter noise, and retain more detailed information from the original images. The proposed algorithm has higher detection accuracy than the Markov Random Field-Fuzzy C-means (MRF-FCM), the non-subsampled contourlet transform-Fuzzy C-means clustering (NSCT-FCM), the pointwise approach and graph theory (PA-GT), and the Principal Component Analysis-Nonlocal Means (PCA-NLM) denoising algorithm. Finally, the five algorithms are used to detect the southern boundary of the Gurbantunggut Desert in Xinjiang Uygur Autonomous Region of China, and the results show that the proposed algorithm has the best effect on real remote sensing image change detection.
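
    A stripped-down version of the pipeline is sketched below: a mean-ratio difference image from two co-registered images, followed by ordinary two-class k-means in place of the NSCT-HMT denoising and FLICM steps; the window size and the substitution of plain k-means are assumptions made for brevity.

        import numpy as np
        from scipy.ndimage import uniform_filter
        from sklearn.cluster import KMeans

        def mean_ratio_image(img1, img2, size=3):
            """Mean-ratio operator: 1 - min(m1, m2) / max(m1, m2) on local means."""
            m1 = uniform_filter(img1.astype(float), size)
            m2 = uniform_filter(img2.astype(float), size)
            return 1.0 - np.minimum(m1, m2) / (np.maximum(m1, m2) + 1e-12)

        def detect_changes(img1, img2):
            """Cluster the difference image into changed / unchanged pixels (plain k-means here)."""
            diff = mean_ratio_image(img1, img2)
            labels = KMeans(n_clusters=2, n_init=10).fit_predict(diff.reshape(-1, 1))
            changed = labels.reshape(diff.shape)
            # Make label 1 correspond to the cluster with the larger mean difference value.
            if diff[changed == 0].mean() > diff[changed == 1].mean():
                changed = 1 - changed
            return changed

        change_map = detect_changes(np.random.rand(64, 64), np.random.rand(64, 64))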

  4. Harnessing the power of multimedia in offender-based law enforcement information systems

    NASA Astrophysics Data System (ADS)

    Zimmerman, Alan P.

    1997-02-01

    Criminal offenders are increasingly administratively processed by automated multimedia information systems. During this processing, case and offender biographical data, mugshot photos, fingerprints and other valuable information and media are collected by law enforcement officers. As part of their criminal investigations, law enforcement officers are routinely called to solve criminal cases based upon limited evidence . . . evidence increasingly comprised of human DNA, ballistic casings and projectiles, chemical residues, latent fingerprints, surveillance camera facial images and voices. As multimedia systems receive greater use in law enforcement, traditional approaches used to index text data are not appropriate for images and signal data which comprise a multimedia database. Multimedia systems with integrated advanced pattern matching tools will provide law enforcement the ability to effectively locate multimedia information based upon content, without reliance upon the accuracy or completeness of text-based indexing.

  5. Optimization-based methods for road image registration

    DOT National Transportation Integrated Search

    2008-02-01

    A number of transportation agencies are now relying on direct imaging for monitoring and cataloguing the state of their roadway systems. Images provide objective information to characterize the pavement as well as roadside hardware. The tasks of proc...

  6. Spatial Uncertainty Modeling of Fuzzy Information in Images for Pattern Classification

    PubMed Central

    Pham, Tuan D.

    2014-01-01

    The modeling of the spatial distribution of image properties is important for many pattern recognition problems in science and engineering. Mathematical methods are needed to quantify the variability of this spatial distribution based on which a decision of classification can be made in an optimal sense. However, image properties are often subject to uncertainty due to both incomplete and imprecise information. This paper presents an integrated approach for estimating the spatial uncertainty of vagueness in images using the theory of geostatistics and the calculus of probability measures of fuzzy events. Such a model for the quantification of spatial uncertainty is utilized as a new image feature extraction method, based on which classifiers can be trained to perform the task of pattern recognition. Applications of the proposed algorithm to the classification of various types of image data suggest the usefulness of the proposed uncertainty modeling technique for texture feature extraction. PMID:25157744

  7. Ultrathin Nonlinear Metasurface for Optical Image Encoding.

    PubMed

    Walter, Felicitas; Li, Guixin; Meier, Cedrik; Zhang, Shuang; Zentgraf, Thomas

    2017-05-10

    Security of optical information is of great importance in modern society. Many cryptography techniques based on classical and quantum optics have been widely explored in the linear optical regime. Nonlinear optical encryption in which encoding and decoding involve nonlinear frequency conversions represents a new strategy for securing optical information. Here, we demonstrate that an ultrathin nonlinear photonic metasurface, consisting of meta-atoms with 3-fold rotational symmetry, can be used to hide optical images under illumination with a fundamental wave. However, the hidden image can be read out from second harmonic generation (SHG) waves. This is achieved by controlling the destructive and constructive interferences of SHG waves from two neighboring meta-atoms. In addition, we apply this concept to obtain gray scale SHG imaging. Nonlinear metasurfaces based on space variant optical interference open new avenues for multilevel image encryption, anticounterfeiting, and background free image reconstruction.

  8. Threshold multi-secret sharing scheme based on phase-shifting interferometry

    NASA Astrophysics Data System (ADS)

    Deng, Xiaopeng; Wen, Wei; Shi, Zhengang

    2017-03-01

    A threshold multi-secret sharing scheme is proposed based on phase-shifting interferometry. The K secret images to be shared are first encoded using Fourier transformation. Then, these encoded images are shared into many shadow images based on the recording principle of phase-shifting interferometry. In the recovery stage, the secret images can be restored by combining any 2K+1 or more shadow images, while any 2K or fewer shadow images cannot yield any information about the secret images. As a result, a (2K+1, N) threshold multi-secret sharing scheme can be implemented. Simulation results are presented to demonstrate the feasibility of the proposed method.
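
    The sketch below is not the (2K+1, N) sharing scheme itself, but an illustration of the underlying phase-shifting principle it builds on: four interferograms recorded with phase shifts 0, pi/2, pi, 3pi/2 of a real reference wave allow exact recovery of the encoded complex field; the field and reference amplitude are toy values.

        import numpy as np

        rng = np.random.default_rng(1)
        secret = np.exp(1j * 2 * np.pi * rng.random((64, 64)))   # encoded (complex) secret field
        r = 1.0                                                   # real reference amplitude
        shifts = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]

        # Recording: one interferogram per phase shift of the reference wave.
        frames = [np.abs(secret + r * np.exp(1j * d)) ** 2 for d in shifts]

        # Recovery: standard four-step phase-shifting combination.
        recovered = ((frames[0] - frames[2]) + 1j * (frames[1] - frames[3])) / (4.0 * r)
        print(np.allclose(recovered, secret))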

  9. Diffusion Tensor Image Registration Using Hybrid Connectivity and Tensor Features

    PubMed Central

    Wang, Qian; Yap, Pew-Thian; Wu, Guorong; Shen, Dinggang

    2014-01-01

    Most existing diffusion tensor imaging (DTI) registration methods estimate structural correspondences based on voxelwise matching of tensors. The rich connectivity information that is given by DTI, however, is often neglected. In this article, we propose to integrate complementary information given by connectivity features and tensor features for improved registration accuracy. To utilize connectivity information, we place multiple anchors representing different brain anatomies in the image space, and define the connectivity features for each voxel as the geodesic distances from all anchors to the voxel under consideration. The geodesic distance, which is computed in relation to the tensor field, encapsulates information of brain connectivity. We also extract tensor features for every voxel to reflect the local statistics of tensors in its neighborhood. We then combine both connectivity features and tensor features for registration of tensor images. From the images, landmarks are selected automatically and their correspondences are determined based on their connectivity and tensor feature vectors. The deformation field that deforms one tensor image to the other is iteratively estimated and optimized according to the landmarks and their associated correspondences. Experimental results show that, by using connectivity features and tensor features simultaneously, registration accuracy is increased substantially compared with the cases using either type of features alone. PMID:24293159

  10. Remote sensing fusion based on guided image filtering

    NASA Astrophysics Data System (ADS)

    Zhao, Wenfei; Dai, Qinling; Wang, Leiguang

    2015-12-01

    In this paper, we propose a novel remote sensing fusion approach based on guided image filtering. The fused images preserve the spectral features of the original multispectral (MS) images well while enhancing the spatial detail information. Four quality assessment indexes are also introduced to evaluate the fusion effect in comparison with other fusion methods. Experiments were carried out on Gaofen-2, QuickBird, WorldView-2 and Landsat-8 images, and the results show the excellent performance of the proposed method.
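
    The core of a guided image filter (the standard box-filter formulation) is sketched in plain NumPy below; its use for pansharpening, injecting panchromatic detail into an MS band, is shown only schematically, and the radius, epsilon and fusion rule are illustrative assumptions rather than the authors' approach.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def guided_filter(guide, src, radius=8, eps=1e-3):
            """Edge-preserving guided filter: output is locally a linear transform of the guide image."""
            size = 2 * radius + 1
            mean = lambda a: uniform_filter(a, size)
            mean_I, mean_p = mean(guide), mean(src)
            cov_Ip = mean(guide * src) - mean_I * mean_p
            var_I = mean(guide * guide) - mean_I * mean_I
            a = cov_Ip / (var_I + eps)
            b = mean_p - a * mean_I
            return mean(a) * guide + mean(b)

        # Schematic pansharpening: filter an upsampled MS band with the PAN image as guide,
        # then add back the high-frequency residual of the PAN image.
        pan = np.random.rand(256, 256)
        ms_band = np.random.rand(256, 256)   # MS band already resampled to PAN resolution
        fused_band = guided_filter(pan, ms_band) + (pan - guided_filter(pan, pan))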

  11. A novel imaging method for photonic crystal fiber fusion splicer

    NASA Astrophysics Data System (ADS)

    Bi, Weihong; Fu, Guangwei; Guo, Xuan

    2007-01-01

    Because the structure of Photonic Crystal Fiber (PCF) is very complex, it is very difficult for a traditional fiber fusion splicer to obtain the optical axial information of a PCF. Therefore, a brand-new optical imaging method is needed to obtain cross-section information of Photonic Crystal Fiber. Based on the complex traits of PCF, a novel high-precision optical imaging system is presented in this article. The system uses a thinned electron-bombarded CCD (EBCCD), a kind of image sensor, as the imaging element; the thinned EBCCD can offer low-light-level performance superior to conventional image-intensifier-coupled CCD approaches, and this high-performance device can provide high contrast and high resolution in low-light-level surveillance imaging. In order to realize precise focusing of the image, we use an ultra-high-precision pace motor to adjust the position of the imaging lens. In this way, we can obtain legible cross-section information of the PCF. Further concrete analysis of the PCF cross-section information can be realized by digital image processing technology. Using this section information, one may distinguish different sorts of PCF, compute parameters such as the size of the PCF air holes and the cladding structure of the PCF, and provide the necessary analysis data for PCF fixation, adjustment, regulation, fusion and cutting systems.

  12. High-performance compression and double cryptography based on compressive ghost imaging with the fast Fourier transform

    NASA Astrophysics Data System (ADS)

    Leihong, Zhang; Zilan, Pan; Luying, Wu; Xiuhua, Ma

    2016-11-01

    To solve the problems that large images can hardly be retrieved under stringent hardware restrictions and that the security level is low, a method based on compressive ghost imaging (CGI) with the Fast Fourier Transform (FFT) is proposed, named FFT-CGI. Initially, the information is encrypted by the sender with the FFT, and the FFT-coded image is encrypted by the CGI system with a secret key. Then the receiver decrypts the image with the aid of compressive sensing (CS) and the FFT. Simulation results are given to verify the feasibility, security, and compression of the proposed encryption scheme. The experiments suggest that the method can improve the quality of large images compared with conventional ghost imaging and achieve imaging of large-sized images; furthermore, the amount of data transmitted is greatly reduced because of the combination of compressive sensing and the FFT, and the security level of ghost imaging is improved, as assessed against ciphertext-only attack (COA), chosen-plaintext attack (CPA), and noise attack. This technique can be immediately applied to encryption and data storage with the advantages of high security, fast transmission, and high quality of reconstructed information.

  13. Threshold secret sharing scheme based on phase-shifting interferometry.

    PubMed

    Deng, Xiaopeng; Shi, Zhengang; Wen, Wei

    2016-11-01

    We propose a new method for secret image sharing with the (3,N) threshold scheme based on phase-shifting interferometry. The secret image, which is multiplied with an encryption key in advance, is first encrypted by using Fourier transformation. Then, the encoded image is shared into N shadow images based on the recording principle of phase-shifting interferometry. Based on the reconstruction principle of phase-shifting interferometry, any three or more shadow images can retrieve the secret image, while any two or fewer shadow images cannot obtain any information of the secret image. Thus, a (3,N) threshold secret sharing scheme can be implemented. Compared with our previously reported method, the algorithm of this paper is suited for not only a binary image but also a gray-scale image. Moreover, the proposed algorithm can obtain a larger threshold value t. Simulation results are presented to demonstrate the feasibility of the proposed method.

  14. Investigations of image fusion

    NASA Astrophysics Data System (ADS)

    Zhang, Zhong

    1999-12-01

    The objective of image fusion is to combine information from multiple images of the same scene. The result of image fusion is a single image which is more suitable for the purpose of human visual perception or further image processing tasks. In this thesis, a region-based fusion algorithm using the wavelet transform is proposed. The identification of important features in each image, such as edges and regions of interest, is used to guide the fusion process. The idea of multiscale grouping is also introduced and a generic image fusion framework based on multiscale decomposition is studied. The framework includes all of the existing multiscale-decomposition-based fusion approaches we found in the literature which did not assume a statistical model for the source images. Comparisons indicate that our framework includes some new approaches which outperform the existing approaches for the cases we consider. Registration must precede our fusion algorithms, so we proposed a hybrid scheme which uses both feature-based and intensity-based methods. The idea of robust estimation of optical flow from time-varying images is employed with a coarse-to-fine multi-resolution approach and feature-based registration to overcome some of the limitations of the intensity-based schemes. Experiments show that this approach is robust and efficient. Assessing image fusion performance in a real application is a complicated issue. In this dissertation, a mixture probability density function model is used in conjunction with the Expectation-Maximization algorithm to model histograms of edge intensity. Some new techniques are proposed for estimating the quality of a noisy image of a natural scene. Such quality measures can be used to guide the fusion. Finally, we study fusion of images obtained from several copies of a new type of camera developed for video surveillance. Our techniques increase the capability and reliability of the surveillance system and provide an easy way to obtain 3-D information of objects in the space monitored by the system.
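
    A generic multiscale-decomposition fusion baseline, not the region-based algorithm of the thesis, is sketched below: approximation coefficients are averaged and detail coefficients are selected by a choose-max rule, using PyWavelets (assumed available); the wavelet and level count are illustrative.

        import numpy as np
        import pywt  # PyWavelets (assumed available)

        def fuse_wavelet(img_a, img_b, wavelet='db2', levels=3):
            """Fuse two registered images: average approximations, keep the larger-magnitude details."""
            ca = pywt.wavedec2(img_a.astype(float), wavelet, level=levels)
            cb = pywt.wavedec2(img_b.astype(float), wavelet, level=levels)
            fused = [(ca[0] + cb[0]) / 2.0]                       # approximation band
            for da, db in zip(ca[1:], cb[1:]):                    # detail bands, per level
                fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(da, db)))
            return pywt.waverec2(fused, wavelet)

        fused = fuse_wavelet(np.random.rand(256, 256), np.random.rand(256, 256))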

  15. Fuzzy connectedness and object definition

    NASA Astrophysics Data System (ADS)

    Udupa, Jayaram K.; Samarasekera, Supun

    1995-04-01

    Approaches to object information extraction from images should attempt to use the fact that images are fuzzy. In past image segmentation research, the notion of 'hanging togetherness' of image elements specified by their fuzzy connectedness has been lacking. We present a theory of fuzzy objects for n-dimensional digital spaces based on a notion of fuzzy connectedness of image elements. Although our definitions lead to problems of enormous combinatorial complexity, the theoretical results allow us to reduce this dramatically. We demonstrate the utility of the theory and algorithms in image segmentation based on several practical examples.

  16. Color enhancement and image defogging in HSI based on Retinex model

    NASA Astrophysics Data System (ADS)

    Gao, Han; Wei, Ping; Ke, Jun

    2015-08-01

    Retinex is a luminance perception algorithm based on color constancy and performs well in color enhancement. In some cases, however, the traditional Retinex algorithms, both Single-Scale Retinex (SSR) and Multi-Scale Retinex (MSR) in RGB color space, do not work well and cause color deviation. To solve this problem, we present improved SSR and MSR algorithms. Unlike other Retinex algorithms, we implement the Retinex algorithms in the HSI (Hue, Saturation, Intensity) color space and use a parameter α to improve the quality of the image. Moreover, the algorithms presented in this paper perform well in image defogging. In contrast to traditional Retinex algorithms, we use the intensity channel to obtain the reflection information of an image. The intensity channel is processed with a Gaussian center-surround filter to estimate the illumination, which should be removed from the intensity channel. After that, we subtract the illumination from the intensity channel to obtain the reflection image, which includes only the attributes of the objects in the image. Using the reflection image and a parameter α, an arbitrary scale factor set manually, we improve the intensity channel and complete the color enhancement. Our experiments show that this approach works well compared with existing methods for color enhancement. Besides better performance on the color deviation problem and in image defogging, a visible improvement in image quality for human contrast perception is also observed.
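
    A rough single-scale Retinex sketch on an intensity channel is given below; for simplicity the intensity is taken as the RGB mean rather than a full HSI conversion, and the Gaussian scale, the gain alpha and the re-application of chromatic ratios are illustrative assumptions, not the authors' exact method.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def retinex_intensity(rgb, sigma=40.0, alpha=1.2):
            """Single-scale Retinex on the intensity channel: reflection = log(I) - log(Gaussian * I)."""
            rgb = rgb.astype(float) + 1.0                     # avoid log(0)
            intensity = rgb.mean(axis=2)                      # stand-in for the HSI intensity channel
            illumination = gaussian_filter(intensity, sigma)  # center-surround estimate of the lighting
            reflection = np.log(intensity) - np.log(illumination + 1e-12)
            enhanced_i = alpha * reflection
            # Rescale to [0, 1] and re-apply the original chromatic ratios to keep hue/saturation.
            enhanced_i = (enhanced_i - enhanced_i.min()) / (enhanced_i.max() - enhanced_i.min() + 1e-12)
            return rgb / intensity[..., None] * enhanced_i[..., None]

        out = retinex_intensity(np.random.randint(0, 256, (128, 128, 3)))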

  17. Structural-functional lung imaging using a combined CT-EIT and a Discrete Cosine Transformation reconstruction method.

    PubMed

    Schullcke, Benjamin; Gong, Bo; Krueger-Ziolek, Sabine; Soleimani, Manuchehr; Mueller-Lisse, Ullrich; Moeller, Knut

    2016-05-16

    Lung EIT is a functional imaging method that utilizes electrical currents to reconstruct images of conductivity changes inside the thorax. This technique is radiation free and applicable at the bedside, but lacks spatial resolution compared to morphological imaging methods such as X-ray computed tomography (CT). In this article we describe an approach for EIT image reconstruction using morphologic information obtained from other structural imaging modalities. This leads to reconstructed images of lung ventilation that can easily be superimposed with structural CT or MRI images, which facilitates image interpretation. The approach is based on a Discrete Cosine Transformation (DCT) of an image of the considered transversal thorax slice. The use of DCT enables reduction of the dimensionality of the reconstruction and ensures that only conductivity changes of the lungs are reconstructed and displayed. The DCT based approach is well suited to fuse morphological image information with functional lung imaging at low computational costs. Results on simulated data indicate that this approach preserves the morphological structures of the lungs and avoids blurring of the solution. Images from patient measurements reveal the capabilities of the method and demonstrate benefits in possible applications.

  18. Structural-functional lung imaging using a combined CT-EIT and a Discrete Cosine Transformation reconstruction method

    PubMed Central

    Schullcke, Benjamin; Gong, Bo; Krueger-Ziolek, Sabine; Soleimani, Manuchehr; Mueller-Lisse, Ullrich; Moeller, Knut

    2016-01-01

    Lung EIT is a functional imaging method that utilizes electrical currents to reconstruct images of conductivity changes inside the thorax. This technique is radiation free and applicable at the bedside, but lacks spatial resolution compared to morphological imaging methods such as X-ray computed tomography (CT). In this article we describe an approach for EIT image reconstruction using morphologic information obtained from other structural imaging modalities. This leads to reconstructed images of lung ventilation that can easily be superimposed with structural CT or MRI images, which facilitates image interpretation. The approach is based on a Discrete Cosine Transformation (DCT) of an image of the considered transversal thorax slice. The use of DCT enables reduction of the dimensionality of the reconstruction and ensures that only conductivity changes of the lungs are reconstructed and displayed. The DCT based approach is well suited to fuse morphological image information with functional lung imaging at low computational costs. Results on simulated data indicate that this approach preserves the morphological structures of the lungs and avoids blurring of the solution. Images from patient measurements reveal the capabilities of the method and demonstrate benefits in possible applications. PMID:27181695

  19. Method for the reduction of image content redundancy in large image databases

    DOEpatents

    Tobin, Kenneth William; Karnowski, Thomas P.

    2010-03-02

    A method of increasing information content for content-based image retrieval (CBIR) systems includes the steps of providing a CBIR database, the database having an index for a plurality of stored digital images using a plurality of feature vectors, the feature vectors corresponding to distinct descriptive characteristics of the images. A visual similarity parameter value is calculated based on a degree of visual similarity between feature vectors of an incoming image being considered for entry into the database and feature vectors associated with the most similar of the stored images. Based on said visual similarity parameter value, it is determined whether to store, or how long to store, the feature vectors associated with the incoming image in the database.
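
    A toy illustration of the decision rule is sketched below: a similarity value between the incoming image's feature vector and its closest stored vector is computed, and the vector is stored only if it is not redundant; the cosine measure and the threshold are assumptions, not the patent's specific metric.

        import numpy as np

        def most_similar(db_features, query):
            """Cosine similarity between the query feature vector and each stored vector."""
            db = np.asarray(db_features, dtype=float)
            q = np.asarray(query, dtype=float)
            sims = db @ q / (np.linalg.norm(db, axis=1) * np.linalg.norm(q) + 1e-12)
            return sims.max()

        def should_store(db_features, query, threshold=0.98):
            """Store only if the incoming image is not visually redundant with the database."""
            return len(db_features) == 0 or most_similar(db_features, query) < threshold

        db = [np.random.rand(64) for _ in range(10)]
        print(should_store(db, np.random.rand(64)))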

  20. Methods in quantitative image analysis.

    PubMed

    Oberholzer, M; Ostreicher, M; Christen, H; Brühlmann, M

    1996-05-01

    The main steps of image analysis are image capturing, image storage (compression), correcting imaging defects (e.g. non-uniform illumination, electronic noise, glare effect), image enhancement, segmentation of objects in the image and image measurements. Digitisation is performed by a camera. The most modern types include a frame-grabber, which converts the analog signal into digital (numerical) information. The numerical information consists of the grey values describing the brightness of every point within the image, named a pixel. The information is stored in bits. Eight bits are summarised in one byte. Therefore, grey values can take one of 256 (2^8) different values. The human eye seems to be quite content with a display of 6-bit images (corresponding to 64 different grey values). In a digitised image, the pixel grey values can vary within regions that are uniform in the original scene: the image is noisy. The noise is mainly manifested in the background of the image. For an optimal discrimination between different objects or features in an image, uniformity of illumination in the whole image is required. These defects can be minimised by shading correction [subtraction of a background (white) image from the original image, pixel per pixel, or division of the original image by the background image]. The brightness of an image represented by its grey values can be analysed for every single pixel or for a group of pixels. The most frequently used pixel-based image descriptors are optical density, integrated optical density, the histogram of the grey values, mean grey value and entropy. The distribution of the grey values existing within an image is one of the most important characteristics of the image. However, the histogram gives no information about the texture of the image. The simplest way to improve the contrast of an image is to expand the brightness scale by spreading the histogram out to the full available range. Rules for transforming the grey value histogram of an existing image (input image) into a new grey value histogram (output image) are most quickly handled by a look-up table (LUT). The histogram of an image can be influenced by gain, offset and gamma of the camera. Gain defines the voltage range, offset defines the reference voltage and gamma the slope of the regression line between the light intensity and the voltage of the camera. A very important descriptor of neighbourhood relations in an image is the co-occurrence matrix. The distance between the pixels (original pixel and its neighbouring pixel) can influence the various parameters calculated from the co-occurrence matrix. The main goals of image enhancement are elimination of surface roughness in an image (smoothing), correction of defects (e.g. noise), extraction of edges, identification of points, strengthening texture elements and improving contrast. In enhancement, two types of operations can be distinguished: pixel-based (point operations) and neighbourhood-based (matrix operations). The most important pixel-based operations are linear stretching of grey values, application of pre-stored LUTs and histogram equalisation. The neighbourhood-based operations work with so-called filters. These are organising elements with an original or initial point in their centre. Filters can be used to accentuate or to suppress specific structures within the image. Filters can work either in the spatial or in the frequency domain. The method used for analysing alterations of grey value intensities in the frequency domain is the Hartley transform. Filter operations in the spatial domain can be based on averaging or ranking the grey values occurring in the organising element. The most important filters, which are usually applied, are the Gaussian filter and the Laplace filter (both averaging filters), and the median filter, the top hat filter and the range operator (all ranking filters). Segmentation of objects is traditionally based on threshold grey values.
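
    Two of the pixel-based operations mentioned above, linear contrast stretching of the grey-value histogram and application of a pre-stored look-up table (LUT), are illustrated in the small sketch below; the percentile limits and the gamma value are arbitrary example choices.

        import numpy as np

        def stretch_contrast(img, low_pct=1, high_pct=99):
            """Linear stretch: spread the grey-value histogram over the full 0-255 range."""
            lo, hi = np.percentile(img, [low_pct, high_pct])
            out = (img.astype(float) - lo) / (hi - lo + 1e-12)
            return np.clip(out * 255.0, 0, 255).astype(np.uint8)

        # A pre-stored LUT (here: gamma correction) applied to every pixel by simple indexing.
        gamma = 0.5
        lut = (255.0 * (np.arange(256) / 255.0) ** gamma).astype(np.uint8)
        img = np.random.randint(40, 180, (128, 128), dtype=np.uint8)
        out = lut[stretch_contrast(img)]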

  1. A similarity learning approach to content-based image retrieval: application to digital mammography.

    PubMed

    El-Naqa, Issam; Yang, Yongyi; Galatsanos, Nikolas P; Nishikawa, Robert M; Wernick, Miles N

    2004-10-01

    In this paper, we describe an approach to content-based retrieval of medical images from a database, and provide a preliminary demonstration of our approach as applied to retrieval of digital mammograms. Content-based image retrieval (CBIR) refers to the retrieval of images from a database using information derived from the images themselves, rather than solely from accompanying text indices. In the medical-imaging context, the ultimate aim of CBIR is to provide radiologists with a diagnostic aid in the form of a display of relevant past cases, along with proven pathology and other suitable information. CBIR may also be useful as a training tool for medical students and residents. The goal of information retrieval is to recall from a database information that is relevant to the user's query. The most challenging aspect of CBIR is the definition of relevance (similarity), which is used to guide the retrieval machine. In this paper, we pursue a new approach, in which similarity is learned from training examples provided by human observers. Specifically, we explore the use of neural networks and support vector machines to predict the user's notion of similarity. Within this framework we propose using a hierarchical learning approach, which consists of a cascade of a binary classifier and a regression module to optimize retrieval effectiveness and efficiency. We also explore how to incorporate online human interaction to achieve relevance feedback in this learning framework. Our experiments are based on a database consisting of 76 mammograms, all of which contain clustered microcalcifications (MCs). Our goal is to retrieve mammogram images containing MC clusters similar to those in a query. The performance of the retrieval system is evaluated using precision-recall curves computed using a cross-validation procedure. Our experimental results demonstrate that: 1) the learning framework can accurately predict the perceptual similarity reported by human observers, thereby serving as a basis for CBIR; 2) the learning-based framework can significantly outperform a simple distance-based similarity metric; 3) the use of the hierarchical two-stage network can improve retrieval performance; 4) relevance feedback can be effectively incorporated into this learning framework to achieve improvement in retrieval precision based on online interaction with users; and 5) the images retrieved by the network can have predictive value for the disease condition of the query.

  2. Quantum color image watermarking based on Arnold transformation and LSB steganography

    NASA Astrophysics Data System (ADS)

    Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping; Luo, Gaofeng

    In this paper, a quantum color image watermarking scheme is proposed through twice-scrambling with Arnold transformations and least significant bit (LSB) steganography. Both the carrier image and the watermark image are represented by the novel quantum representation of color digital images model (NCQI). The image sizes for carrier and watermark are assumed to be 2^n×2^n and 2^(n-1)×2^(n-1), respectively. At first, the watermark is scrambled into a disordered form through an image preprocessing technique that simultaneously exchanges the image pixel positions and alters the color information based on Arnold transforms. Then, the scrambled watermark, with image size 2^(n-1)×2^(n-1) and 24-qubit grayscale, is further expanded to an image with size 2^n×2^n and 6-qubit grayscale using the nearest-neighbor interpolation method. Finally, the scrambled and expanded watermark is embedded into the carrier by the LSB steganography scheme, and a key image of size 2^n×2^n carrying 3-qubit information is generated at the same time; only with this key image can the original watermark be retrieved. The extraction of the watermark is the reverse process of embedding, achieved by applying the sequence of operations in reverse order. Simulation-based experiments involving different carrier and watermark images (i.e. conventional or non-quantum) are run on a classical computer with MATLAB 2014b, which illustrates that the present method performs well in terms of three items: visual quality, robustness and steganography capacity.
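
    The classical Arnold cat-map scrambling of an N×N array, the pixel-position part of the preprocessing step, is sketched below; the quantum NCQI representation, the color scrambling and the LSB embedding are not modeled, and the iteration count is arbitrary.

        import numpy as np

        def arnold_scramble(img, iterations=1):
            """Arnold cat map on an N x N image: (x, y) -> ((x + y) mod N, (x + 2y) mod N)."""
            n = img.shape[0]
            assert img.shape[0] == img.shape[1], "Arnold transform needs a square image"
            out = img.copy()
            x, y = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
            for _ in range(iterations):
                scrambled = np.empty_like(out)
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]   # move each pixel to its new position
                out = scrambled
            return out

        watermark = np.random.randint(0, 256, (64, 64))
        scrambled = arnold_scramble(watermark, iterations=5)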

  3. Multi-image encryption based on synchronization of chaotic lasers and iris authentication

    NASA Astrophysics Data System (ADS)

    Banerjee, Santo; Mukhopadhyay, Sumona; Rondoni, Lamberto

    2012-07-01

    A new technique of transmitting encrypted combinations of gray scaled and chromatic images using chaotic lasers derived from Maxwell-Bloch's equations has been proposed. This novel scheme utilizes the general method of solution of a set of linear equations to transmit similar sized heterogeneous images which are a combination of monochrome and chromatic images. The chaos encrypted gray scaled images are concatenated along the three color planes resulting in color images. These are then transmitted over a secure channel along with a cover image which is an iris scan. The entire cryptology is augmented with an iris-based authentication scheme. The secret messages are retrieved once the authentication is successful. The objective of our work is briefly outlined as (a) the biometric information is the iris which is encrypted before transmission, (b) the iris is used for personal identification and verifying for message integrity, (c) the information is transmitted securely which are colored images resulting from a combination of gray images, (d) each of the images transmitted are encrypted through chaos based cryptography, (e) these encrypted multiple images are then coupled with the iris through linear combination of images before being communicated over the network. The several layers of encryption together with the ergodicity and randomness of chaos render enough confusion and diffusion properties which guarantee a fool-proof approach in achieving secure communication as demonstrated by exhaustive statistical methods. The result is vital from the perspective of opening a fundamental new dimension in multiplexing and simultaneous transmission of several monochromatic and chromatic images along with biometry based authentication and cryptography.

  4. Research of building information extraction and evaluation based on high-resolution remote-sensing imagery

    NASA Astrophysics Data System (ADS)

    Cao, Qiong; Gu, Lingjia; Ren, Ruizhi; Wang, Lang

    2016-09-01

    Building extraction is currently important in the application of high-resolution remote sensing imagery. At present, quite a few algorithms are available for detecting building information; however, most of them still have some obvious disadvantages, such as ignoring spectral information and the trade-off between extraction rate and extraction accuracy. The purpose of this research is to develop an effective method to detect building information from Chinese GF-1 data. Firstly, an image preprocessing technique is used to normalize the image and image enhancement is used to highlight the useful information in the image. Secondly, multi-spectral information is analyzed. Subsequently, an improved morphological building index (IMBI) based on remote sensing imagery is proposed to obtain the candidate building objects. Furthermore, in order to refine the building objects and remove false objects, post-processing (e.g., shape features, the vegetation index and the water index) is employed. To validate the effectiveness of the proposed algorithm, the omission errors (OE), commission errors (CE), the overall accuracy (OA) and Kappa are used. The proposed method can not only effectively use spectral information and other basic features, but also avoid extracting excessive interference details from high-resolution remote sensing images. Compared to the original MBI algorithm, the proposed method reduces the OE by 33.14%. At the same time, the Kappa increases by 16.09%. In the experiments, IMBI achieved satisfactory results and outperformed other algorithms in terms of both accuracy and visual inspection.

  5. Image alignment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dowell, Larry Jonathan

    Disclosed is a method and device for aligning at least two digital images. An embodiment may use frequency-domain transforms of small tiles created from each image to identify substantially similar, "distinguishing" features within each of the images, and then align the images together based on the location of the distinguishing features. To accomplish this, an embodiment may create equal-sized tile sub-images for each image. A "key" for each tile may be created by performing a frequency-domain transform calculation on each tile. An information-distance difference between each possible pair of tiles on each image may be calculated to identify distinguishing features. From analysis of the information-distance differences of the pairs of tiles, a subset of tiles with high discrimination metrics in relation to other tiles may be located for each image. The subset of distinguishing tiles for each image may then be compared to locate tiles with substantially similar keys and/or information-distance metrics to other tiles of other images. Once similar tiles are located for each image, the images may be aligned in relation to the identified similar tiles.
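
    A much-simplified sketch of the idea follows: each image is cut into tiles, the magnitude of each tile's 2-D FFT serves as its key, the tiles whose keys are farthest from the other tiles of the same image are treated as distinguishing features, and they are matched across images by key distance; Euclidean distance here stands in for the patent's information-distance metric, and the tile size is arbitrary.

        import numpy as np

        def tile_keys(img, tile=32):
            """Split an image into tiles and compute a frequency-domain key for each tile."""
            keys, positions = [], []
            h, w = img.shape
            for r in range(0, h - tile + 1, tile):
                for c in range(0, w - tile + 1, tile):
                    patch = img[r:r + tile, c:c + tile].astype(float)
                    keys.append(np.abs(np.fft.fft2(patch)).ravel())
                    positions.append((r, c))
            return np.array(keys), positions

        def distinguishing_tiles(keys, top=5):
            """Rank tiles by how far their key is from every other key in the same image."""
            d = np.linalg.norm(keys[:, None, :] - keys[None, :, :], axis=2)
            np.fill_diagonal(d, np.inf)
            score = d.min(axis=1)                     # distance to the most similar other tile
            return np.argsort(score)[::-1][:top]      # most distinctive tiles first

        keys_a, pos_a = tile_keys(np.random.rand(256, 256))
        keys_b, pos_b = tile_keys(np.random.rand(256, 256))
        best_a = distinguishing_tiles(keys_a)
        # Match each distinctive tile of image A to the tile of image B with the closest key.
        matches = [(pos_a[i], pos_b[np.argmin(np.linalg.norm(keys_b - keys_a[i], axis=1))]) for i in best_a]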

  6. Hippocampus segmentation using locally weighted prior based level set

    NASA Astrophysics Data System (ADS)

    Achuthan, Anusha; Rajeswari, Mandava

    2015-12-01

    Segmentation of the hippocampus in the brain is one of the major challenges in medical image segmentation due to its imaging characteristics: its intensity is almost similar to that of adjacent gray matter structures, such as the amygdala. The intensity similarity causes the hippocampus to have weak or fuzzy boundaries. With this main challenge demonstrated by the hippocampus, a segmentation method that relies on image information alone may not produce accurate segmentation results. Therefore, the assimilation of prior information, such as shape and spatial information, into an existing segmentation method is needed to produce the expected segmentation. Previous studies have widely integrated prior information into segmentation methods. However, the prior information has been integrated in a global manner, and this does not reflect the real scenario during clinical delineation. Therefore, in this paper, a local integration of prior information into a level set model is presented. This work utilizes a mean shape model to provide automatic initialization for the level set evolution, and the model has been integrated as prior information into the level set model. The local integration of edge-based information and prior information has been implemented through an edge weighting map that decides at the voxel level which information should be observed during the level set evolution. The edge weighting map indicates which voxels have sufficient edge information. Experiments show that the proposed local integration of prior information into a conventional edge-based level set model, known as the geodesic active contour, yields an improvement of 9% in the averaged Dice coefficient.

  7. Hybrid model based unified scheme for endoscopic Cerenkov and radio-luminescence tomography: Simulation demonstration

    NASA Astrophysics Data System (ADS)

    Wang, Lin; Cao, Xin; Ren, Qingyun; Chen, Xueli; He, Xiaowei

    2018-05-01

    Cerenkov luminescence imaging (CLI) is an imaging method that uses an optical imaging scheme to probe a radioactive tracer. Application of CLI with clinically approved radioactive tracers has opened an opportunity for translating optical imaging from preclinical to clinical applications. Such translation was further improved by developing an endoscopic CLI system. However, two-dimensional endoscopic imaging cannot identify accurate depth and obtain quantitative information. Here, we present an imaging scheme to retrieve the depth and quantitative information from endoscopic Cerenkov luminescence tomography, which can also be applied for endoscopic radio-luminescence tomography. In the scheme, we first constructed a physical model for image collection, and then a mathematical model for characterizing the luminescent light propagation from tracer to the endoscopic detector. The mathematical model is a hybrid light transport model combined with the 3rd order simplified spherical harmonics approximation, diffusion, and radiosity equations to warrant accuracy and speed. The mathematical model integrates finite element discretization, regularization, and primal-dual interior-point optimization to retrieve the depth and the quantitative information of the tracer. A heterogeneous-geometry-based numerical simulation was used to explore the feasibility of the unified scheme, which demonstrated that it can provide a satisfactory balance between imaging accuracy and computational burden.

  8. Airborne Infrared and Visible Image Fusion Combined with Region Segmentation

    PubMed Central

    Zuo, Yujia; Liu, Jinghong; Bai, Guanbing; Wang, Xuan; Sun, Mingchao

    2017-01-01

    This paper proposes an infrared (IR) and visible image fusion method introducing region segmentation into the dual-tree complex wavelet transform (DTCWT) domain. This method should effectively improve both the target indication and scene spectrum features of fused images, and the target identification and tracking reliability of the fusion system, on an airborne photoelectric platform. The method involves segmenting the IR image into regions by significance, identifying the target region and the background region, and then fusing the low-frequency components in the DTCWT domain according to the region segmentation result. For the high-frequency components, region weights are assigned according to the information richness of the region details, fusion is conducted based on both the weights and adaptive phases, and a shrinkage function is introduced to suppress noise. Finally, the fused low-frequency and high-frequency components are reconstructed to obtain the fused image. The experimental results show that the proposed method can fully extract complementary information from the source images to obtain a fused image with good target indication and rich information on scene details. They also show a fusion result superior to existing popular fusion methods, based on either subjective or objective evaluation. With good stability and high fusion accuracy, this method can meet the fusion requirements of IR-visible image fusion systems. PMID:28505137

  9. Airborne Infrared and Visible Image Fusion Combined with Region Segmentation.

    PubMed

    Zuo, Yujia; Liu, Jinghong; Bai, Guanbing; Wang, Xuan; Sun, Mingchao

    2017-05-15

    This paper proposes an infrared (IR) and visible image fusion method introducing region segmentation into the dual-tree complex wavelet transform (DTCWT) domain. This method should effectively improve both the target indication and scene spectrum features of fused images, and the target identification and tracking reliability of the fusion system, on an airborne photoelectric platform. The method involves segmenting the IR image into regions by significance, identifying the target region and the background region, and then fusing the low-frequency components in the DTCWT domain according to the region segmentation result. For the high-frequency components, region weights are assigned according to the information richness of the region details, fusion is conducted based on both the weights and adaptive phases, and a shrinkage function is introduced to suppress noise. Finally, the fused low-frequency and high-frequency components are reconstructed to obtain the fused image. The experimental results show that the proposed method can fully extract complementary information from the source images to obtain a fused image with good target indication and rich information on scene details. They also show a fusion result superior to existing popular fusion methods, based on either subjective or objective evaluation. With good stability and high fusion accuracy, this method can meet the fusion requirements of IR-visible image fusion systems.

  10. MRI brain tumor segmentation based on improved fuzzy c-means method

    NASA Astrophysics Data System (ADS)

    Deng, Wankai; Xiao, Wei; Pan, Chao; Liu, Jianguo

    2009-10-01

    This paper focuses on image segmentation, which is one of the key problems in medical image processing. A new medical image segmentation method is proposed based on the fuzzy c-means algorithm and spatial information. Firstly, we classify the image into the region of interest and the background using the fuzzy c-means algorithm. Then we use information about the tissue gradients and the intensity inhomogeneities of the regions to improve the quality of the segmentation. The sum of the mean variance within the region and the reciprocal of the mean gradient along the edge of the region is chosen as the objective function; the minimum of this sum gives the optimum result. The results show that the clustering segmentation algorithm is effective.
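
    A plain fuzzy c-means clustering of pixel intensities, corresponding to the first step described above, is sketched below; the gradient and intensity-inhomogeneity refinement is not included, and the fuzziness exponent and iteration count are standard default choices.

        import numpy as np

        def fuzzy_cmeans(values, n_clusters=2, m=2.0, n_iter=100, seed=0):
            """Standard FCM on a 1-D array of pixel intensities; returns memberships and centers."""
            rng = np.random.default_rng(seed)
            x = values.reshape(-1, 1).astype(float)
            u = rng.random((x.shape[0], n_clusters))
            u /= u.sum(axis=1, keepdims=True)
            for _ in range(n_iter):
                w = u ** m
                centers = (w.T @ x) / w.sum(axis=0)[:, None]       # weighted cluster centers
                dist = np.abs(x - centers.T) + 1e-12               # distance of each pixel to each center
                u = 1.0 / (dist ** (2.0 / (m - 1.0)))
                u /= u.sum(axis=1, keepdims=True)                  # normalised memberships
            return u, centers

        img = np.random.rand(64, 64)
        u, centers = fuzzy_cmeans(img.ravel())
        labels = u.argmax(axis=1).reshape(img.shape)               # region of interest vs background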

  11. Modes of Visual Recognition and Perceptually Relevant Sketch-based Coding for Images

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.

    1991-01-01

    A review of visual recognition studies is used to define two levels of information requirements. These two levels are related to two primary subdivisions of the spatial frequency domain of images and reflect two distinct different physical properties of arbitrary scenes. In particular, pathologies in recognition due to cerebral dysfunction point to a more complete split into two major types of processing: high spatial frequency edge based recognition vs. low spatial frequency lightness (and color) based recognition. The former is more central and general while the latter is more specific and is necessary for certain special tasks. The two modes of recognition can also be distinguished on the basis of physical scene properties: the highly localized edges associated with reflectance and sharp topographic transitions vs. smooth topographic undulation. The extreme case of heavily abstracted images is pursued to gain an understanding of the minimal information required to support both modes of recognition. Here the intention is to define the semantic core of transmission. This central core of processing can then be fleshed out with additional image information and coding and rendering techniques.

  12. Object-Based Change Detection Using High-Resolution Remotely Sensed Data and GIS

    NASA Astrophysics Data System (ADS)

    Sofina, N.; Ehlers, M.

    2012-08-01

    High resolution remotely sensed images provide current, detailed, and accurate information for large areas of the earth surface which can be used for change detection analyses. Conventional methods of image processing permit detection of changes by comparing remotely sensed multitemporal images. However, for performing a successful analysis it is desirable to take images from the same sensor which should be acquired at the same time of season, at the same time of a day, and - for electro-optical sensors - in cloudless conditions. Thus, a change detection analysis could be problematic especially for sudden catastrophic events. A promising alternative is the use of vector-based maps containing information about the original urban layout which can be related to a single image obtained after the catastrophe. The paper describes a methodology for an object-based search of destroyed buildings as a consequence of a natural or man-made catastrophe (e.g., earthquakes, flooding, civil war). The analysis is based on remotely sensed and vector GIS data. It includes three main steps: (i) generation of features describing the state of buildings; (ii) classification of building conditions; and (iii) data import into a GIS. One of the proposed features is a newly developed 'Detected Part of Contour' (DPC). Additionally, several features based on the analysis of textural information corresponding to the investigated vector objects are calculated. The method is applied to remotely sensed images of areas that have been subjected to an earthquake. The results show the high reliability of the DPC feature as an indicator for change.

  13. Superpixel-based spectral classification for the detection of head and neck cancer with hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Chung, Hyunkoo; Lu, Guolan; Tian, Zhiqiang; Wang, Dongsheng; Chen, Zhuo Georgia; Fei, Baowei

    2016-03-01

    Hyperspectral imaging (HSI) is an emerging imaging modality for medical applications. HSI acquires two dimensional images at various wavelengths. The combination of both spectral and spatial information provides quantitative information for cancer detection and diagnosis. This paper proposes using superpixels, principal component analysis (PCA), and support vector machine (SVM) to distinguish regions of tumor from healthy tissue. The classification method uses 2 principal components decomposed from hyperspectral images and obtains an average sensitivity of 93% and an average specificity of 85% for 11 mice. The hyperspectral imaging technology and classification method can have various applications in cancer research and management.
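
    A pixel-level stand-in for the pipeline, omitting the superpixel grouping, is sketched below: each spectrum is reduced to 2 principal components and classified with an SVM using scikit-learn; the synthetic hyperspectral cube, labels and split are placeholders.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split

        # Toy hyperspectral cube: 100 x 100 pixels, 60 spectral bands, binary labels (tumor / healthy).
        rng = np.random.default_rng(0)
        cube = rng.random((100, 100, 60))
        labels = rng.integers(0, 2, (100, 100))

        X = cube.reshape(-1, cube.shape[-1])
        y = labels.ravel()
        X2 = PCA(n_components=2).fit_transform(X)        # keep 2 principal components, as in the paper

        X_tr, X_te, y_tr, y_te = train_test_split(X2, y, test_size=0.3, random_state=0)
        clf = SVC(kernel='rbf').fit(X_tr, y_tr)
        print('accuracy:', clf.score(X_te, y_te))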

  14. Information and image integration: project spectrum

    NASA Astrophysics Data System (ADS)

    Blaine, G. James; Jost, R. Gilbert; Martin, Lori; Weiss, David A.; Lehmann, Ron; Fritz, Kevin

    1998-07-01

    The BJC Health System (BJC) and the Washington University School of Medicine (WUSM) formed a technology alliance with industry collaborators to develop and implement an integrated, advanced clinical information system. The industry collaborators include IBM, Kodak, SBC and Motorola. The activity, called Project Spectrum, provides an integrated clinical repository for the multiple hospital facilities of the BJC. The BJC System consists of 12 acute care hospitals serving over one million patients in Missouri and Illinois. An interface engine manages transactions from each of the hospital information systems, lab systems and radiology information systems. Data is normalized to provide a consistent view for the primary care physician. Access to the clinical repository is supported by web-based server/browser technology which delivers patient data to the physician's desktop. An HL7 based messaging system coordinates the acquisition and management of radiological image data and sends image keys to the clinical data repository. Access to the clinical chart browser currently provides radiology reports, laboratory data, vital signs and transcribed medical reports. A chart metaphor provides tabs for the selection of the clinical record for review. Activation of the radiology tab facilitates a standardized view of radiology reports and provides an icon used to initiate retrieval of available radiology images. The selection of the image icon spawns an image browser plug-in and utilizes the image key from the clinical repository to access the image server for the requested image data. The Spectrum system is collecting clinical data from five hospital systems and imaging data from two hospitals. Domain specific radiology imaging systems support the acquisition and primary interpretation of radiology exams. The spectrum clinical workstations are deployed to over 200 sites utilizing local area networks and ISDN connectivity.

  15. Attention to local and global levels of hierarchical Navon figures affects rapid scene categorization.

    PubMed

    Brand, John; Johnson, Aaron P

    2014-01-01

    In four experiments, we investigated how attention to local and global levels of hierarchical Navon figures affected the selection of diagnostic spatial scale information used in scene categorization. We explored this issue by asking observers to classify hybrid images (i.e., images that contain the low spatial frequency (LSF) content of one image and the high spatial frequency (HSF) content of a second image) immediately following global and local Navon tasks. Hybrid images can be classified according to either their LSF or HSF content, making them ideal for investigating diagnostic spatial scale preference. Although observers were sensitive to both spatial scales (Experiment 1), they overwhelmingly preferred to classify hybrids based on LSF content (Experiment 2). In Experiment 3, we demonstrated that LSF-based hybrid categorization was faster following global Navon tasks, suggesting that the LSF processing associated with global Navon tasks primed the selection of LSFs in hybrid images. Experiment 4 examined this hypothesis by replicating Experiment 3 while suppressing the LSF information in the Navon letters through contrast balancing of the stimuli. As in Experiment 3, observers preferred to classify hybrids based on LSF content; in contrast, however, LSF-based hybrid categorization was slower following global than local Navon tasks.

  16. Attention to local and global levels of hierarchical Navon figures affects rapid scene categorization

    PubMed Central

    Brand, John; Johnson, Aaron P.

    2014-01-01

    In four experiments, we investigated how attention to local and global levels of hierarchical Navon figures affected the selection of diagnostic spatial scale information used in scene categorization. We explored this issue by asking observers to classify hybrid images (i.e., images that contain the low spatial frequency (LSF) content of one image and the high spatial frequency (HSF) content of a second image) immediately following global and local Navon tasks. Hybrid images can be classified according to either their LSF or HSF content, making them ideal for investigating diagnostic spatial scale preference. Although observers were sensitive to both spatial scales (Experiment 1), they overwhelmingly preferred to classify hybrids based on LSF content (Experiment 2). In Experiment 3, we demonstrated that LSF-based hybrid categorization was faster following global Navon tasks, suggesting that the LSF processing associated with global Navon tasks primed the selection of LSFs in hybrid images. Experiment 4 examined this hypothesis by replicating Experiment 3 while suppressing the LSF information in the Navon letters through contrast balancing of the stimuli. As in Experiment 3, observers preferred to classify hybrids based on LSF content; in contrast, however, LSF-based hybrid categorization was slower following global than local Navon tasks. PMID:25520675

  17. Simultaneous acquisition of differing image types

    DOEpatents

    Demos, Stavros G

    2012-10-09

    A system in one embodiment includes an image forming device for forming an image from an area of interest containing different image components; an illumination device for illuminating the area of interest with light containing multiple components; at least one light source coupled to the illumination device, the at least one light source providing light to the illumination device containing different components, each component having distinct spectral characteristics and relative intensity; an image analyzer coupled to the image forming device, the image analyzer decomposing the image formed by the image forming device into multiple component parts based on type of imaging; and multiple image capture devices, each image capture device receiving one of the component parts of the image. A method in one embodiment includes receiving an image from an image forming device; decomposing the image formed by the image forming device into multiple component parts based on type of imaging; receiving the component parts of the image; and outputting image information based on the component parts of the image. Additional systems and methods are presented.

  18. Digital focusing of OCT images based on scalar diffraction theory and information entropy.

    PubMed

    Liu, Guozhong; Zhi, Zhongwei; Wang, Ruikang K

    2012-11-01

    This paper describes a digital method that is capable of automatically focusing optical coherence tomography (OCT) en face images without prior knowledge of the point spread function of the imaging system. The method utilizes a scalar diffraction model to simulate wave propagation from out-of-focus scatter to the focal plane, from which the propagation distance between the out-of-focus plane and the focal plane is determined automatically via an image-definition-evaluation criterion based on information entropy theory. By use of the proposed approach, we demonstrate that the lateral resolution close to that at the focal plane can be recovered from the imaging planes outside the depth of field region with minimal loss of resolution. Fresh onion tissues and mouse fat tissues are used in the experiments to show the performance of the proposed method.
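    A minimal numerical sketch of the idea under simplifying assumptions: an out-of-focus complex en face field is numerically propagated over a range of trial distances with the scalar angular-spectrum method, and the distance whose refocused image minimizes an information-entropy criterion is selected. The function names, wavelength, and pixel pitch below are illustrative, not the paper's.

    import numpy as np

    def angular_spectrum_propagate(field, distance, wavelength, pixel_pitch):
        """Propagate a 2D complex field by `distance` using scalar diffraction."""
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, d=pixel_pitch)
        fy = np.fft.fftfreq(ny, d=pixel_pitch)
        FX, FY = np.meshgrid(fx, fy)
        k = 2 * np.pi / wavelength
        kz_sq = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
        kz = np.sqrt(np.maximum(kz_sq, 0.0))    # evanescent components dropped
        H = np.exp(1j * kz * distance)
        return np.fft.ifft2(np.fft.fft2(field) * H)

    def image_entropy(intensity, n_bins=256):
        """Shannon entropy of the intensity histogram (image-definition metric)."""
        p, _ = np.histogram(intensity, bins=n_bins)
        p = p.astype(float) / p.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def autofocus(field, wavelength, pixel_pitch, distances):
        """Return the trial distance whose refocused image has minimum entropy.
        (For sparse scatterers, a sharply focused image typically has the
        lowest histogram entropy; other criteria are possible.)"""
        scores = []
        for z in distances:
            refocused = np.abs(angular_spectrum_propagate(
                field, z, wavelength, pixel_pitch))**2
            scores.append(image_entropy(refocused))
        return distances[int(np.argmin(scores))]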

  19. Classification Comparisons Between Compact Polarimetric and Quad-Pol SAR Imagery

    NASA Astrophysics Data System (ADS)

    Souissi, Boularbah; Doulgeris, Anthony P.; Eltoft, Torbjørn

    2015-04-01

    Recent interest in dual-pol SAR systems has led to a novel approach, the so-called compact polarimetric imaging mode (CP), which attempts to reconstruct fully polarimetric information based on a few simple assumptions. In this work, the CP image is simulated from the full quad-pol (QP) image. We present here an initial comparison of the polarimetric information content of the QP and CP imaging modes. The analysis of multi-look polarimetric covariance matrix data uses an automated statistical clustering method based upon the expectation maximization (EM) algorithm for finite mixture modeling, using the complex Wishart probability density function. Our results show that there are some differences in characteristics between the QP and CP modes. The classification is demonstrated using E-SAR and Radarsat-2 polarimetric SAR images acquired over DLR Oberpfaffenhofen in Germany and Algiers in Algeria, respectively.

  20. Patch-Based Super-Resolution of MR Spectroscopic Images: Application to Multiple Sclerosis

    PubMed Central

    Jain, Saurabh; Sima, Diana M.; Sanaei Nezhad, Faezeh; Hangel, Gilbert; Bogner, Wolfgang; Williams, Stephen; Van Huffel, Sabine; Maes, Frederik; Smeets, Dirk

    2017-01-01

    Purpose: Magnetic resonance spectroscopic imaging (MRSI) provides complementary information to conventional magnetic resonance imaging. Acquiring high resolution MRSI is time consuming and requires complex reconstruction techniques. Methods: In this paper, a patch-based super-resolution method is presented to increase the spatial resolution of metabolite maps computed from MRSI. The proposed method uses high resolution anatomical MR images (T1-weighted and Fluid-attenuated inversion recovery) to regularize the super-resolution process. The accuracy of the method is validated against conventional interpolation techniques using a phantom, as well as simulated and in vivo acquired human brain images of multiple sclerosis subjects. Results: The method preserves tissue contrast and structural information, and matches well with the trend of acquired high resolution MRSI. Conclusions: These results suggest that the method has potential for clinically relevant neuroimaging applications. PMID:28197066

  1. Score-Level Fusion of Phase-Based and Feature-Based Fingerprint Matching Algorithms

    NASA Astrophysics Data System (ADS)

    Ito, Koichi; Morita, Ayumi; Aoki, Takafumi; Nakajima, Hiroshi; Kobayashi, Koji; Higuchi, Tatsuo

    This paper proposes an efficient fingerprint recognition algorithm combining phase-based image matching and feature-based matching. In our previous work, we have already proposed an efficient fingerprint recognition algorithm using Phase-Only Correlation (POC), and developed commercial fingerprint verification units for access control applications. The use of Fourier phase information of fingerprint images makes it possible to achieve robust recognition for weakly impressed, low-quality fingerprint images. This paper presents an idea of improving the performance of POC-based fingerprint matching by combining it with feature-based matching, where feature-based matching is introduced in order to improve recognition efficiency for images with nonlinear distortion. Experimental evaluation using two different types of fingerprint image databases demonstrates efficient recognition performance of the combination of the POC-based algorithm and the feature-based algorithm.
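    A minimal sketch of Phase-Only Correlation (POC) between two fingerprint images: the normalized cross-power spectrum discards magnitude and keeps only the Fourier phase, and its inverse transform gives a sharp correlation peak whose height can serve as a matching score (and whose location gives the translation). Function and variable names are illustrative; the combination with feature-based matching is not shown.

    import numpy as np

    def phase_only_correlation(f, g, eps=1e-12):
        """f, g: 2D grayscale images of equal shape.
        Returns (poc_surface, peak_value)."""
        F = np.fft.fft2(f)
        G = np.fft.fft2(g)
        cross = F * np.conj(G)
        r = cross / (np.abs(cross) + eps)   # keep phase only
        poc = np.real(np.fft.ifft2(r))
        return poc, poc.max()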

  2. Characterizing the spatial structure of endangered species habitat using geostatistical analysis of IKONOS imagery

    USGS Publications Warehouse

    Wallace, C.S.A.; Marsh, S.E.

    2005-01-01

    Our study used geostatistics to extract measures that characterize the spatial structure of vegetated landscapes from satellite imagery for mapping endangered Sonoran pronghorn habitat. Fine spatial resolution IKONOS data provided information at the scale of individual trees or shrubs that permitted analysis of vegetation structure and pattern. We derived images of landscape structure by calculating local estimates of the nugget, sill, and range variogram parameters within 25 × 25-m image windows. These variogram parameters, which describe the spatial autocorrelation of the 1-m image pixels, are shown in previous studies to discriminate between different species-specific vegetation associations. We constructed two independent models of pronghorn landscape preference by coupling the derived measures with Sonoran pronghorn sighting data: a distribution-based model and a cluster-based model. The distribution-based model used the descriptive statistics for variogram measures at pronghorn sightings, whereas the cluster-based model used the distribution of pronghorn sightings within clusters of an unsupervised classification of derived images. Both models define similar landscapes, and validation results confirm they effectively predict the locations of an independent set of pronghorn sightings. Such information, although not a substitute for field-based knowledge of the landscape and associated ecological processes, can provide valuable reconnaissance information to guide natural resource management efforts. © 2005 Taylor & Francis Group Ltd.
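    A minimal sketch of the per-window geostatistical measures: the empirical semivariogram gamma(h) of pixel values inside a small moving window, from which nugget (gamma at the smallest lag), sill (the plateau), and range (the lag at which the plateau is reached) can be estimated. The estimator below is a simple method-of-moments version along rows and columns; the plateau threshold is an illustrative assumption, not the authors' fitting procedure.

    import numpy as np

    def empirical_semivariogram(window, max_lag=12):
        """window: 2D array of 1-m pixels (e.g., a 25 x 25 block).
        Returns lags (in pixels) and semivariance gamma(h)."""
        window = np.asarray(window, dtype=float)
        lags = np.arange(1, max_lag + 1)
        gammas = []
        for h in lags:
            dx = window[:, h:] - window[:, :-h]    # horizontal pairs at lag h
            dy = window[h:, :] - window[:-h, :]    # vertical pairs at lag h
            diffs = np.concatenate([dx.ravel(), dy.ravel()])
            gammas.append(0.5 * np.mean(diffs ** 2))
        return lags, np.array(gammas)

    def variogram_parameters(lags, gamma, plateau_frac=0.95):
        """Crude nugget/sill/range estimates from an empirical semivariogram."""
        nugget = gamma[0]
        sill = gamma.max()
        reached = np.nonzero(gamma >= plateau_frac * sill)[0]
        vrange = lags[reached[0]] if reached.size else lags[-1]
        return nugget, sill, vrange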

  3. Small scale photo probability sampling and vegetation classification in southeast Arizona as an ecological base for resource inventory. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Johnson, J. R. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. The broad scale vegetation classification was developed for a 3,200 sq mile area in southeastern Arizona. The 31 vegetation types were derived from association tables which contained information taken at about 500 ground sites. The classification provided an information base that was suitable for use with small scale photography. A procedure was developed and tested for objectively comparing photo images. The procedure consisted of two parts, image groupability testing and image complexity testing. The Apollo and ERTS photos were compared for relative suitability as first stage stratification bases in two stage proportional probability sampling. High altitude photography was used in common at the second stage.

  4. Synchrotron-based coherent scatter x-ray projection imaging using an array of monoenergetic pencil beams.

    PubMed

    Landheer, Karl; Johns, Paul C

    2012-09-01

    Traditional projection x-ray imaging utilizes only the information from the primary photons. Low-angle coherent scatter images can be acquired simultaneous to the primary images and provide additional information. In medical applications scatter imaging can improve x-ray contrast or reduce dose using information that is currently discarded in radiological images to augment the transmitted radiation information. Other applications include non-destructive testing and security. A system at the Canadian Light Source synchrotron was configured which utilizes multiple pencil beams (up to five) to create both primary and coherent scatter projection images, simultaneously. The sample was scanned through the beams using an automated step-and-shoot setup. Pixels were acquired in a hexagonal lattice to maximize packing efficiency. The typical pitch was between 1.0 and 1.6 mm. A Maximum Likelihood-Expectation Maximization-based iterative method was used to disentangle the overlapping information from the flat panel digital x-ray detector. The pixel value of the coherent scatter image was generated by integrating the radial profile (scatter intensity versus scattering angle) over an angular range. Different angular ranges maximize the contrast between different materials of interest. A five-beam primary and scatter image set (which had a pixel beam time of 990 ms and total scan time of 56 min) of a porcine phantom is included. For comparison a single-beam coherent scatter image of the same phantom is included. The muscle-fat contrast was 0.10 ± 0.01 and 1.16 ± 0.03 for the five-beam primary and scatter images, respectively. The air kerma was measured free in air using aluminum oxide optically stimulated luminescent dosimeters. The total area-averaged air kerma for the scan was measured to be 7.2 ± 0.4 cGy although due to difficulties in small-beam dosimetry this number could be inaccurate.

  5. Study on polarization image methods in turbid medium

    NASA Astrophysics Data System (ADS)

    Fu, Qiang; Mo, Chunhe; Liu, Boyu; Duan, Jin; Zhang, Su; Zhu, Yong

    2014-11-01

    In addition to conventional intensity information, polarization imaging provides multi-dimensional polarization information, which improves the probability of target detection and recognition. Research on fusing polarization images of targets in turbid media helps to obtain high-quality images. Using laser polarization imaging at visible wavelengths, linearly polarized intensity images were acquired by rotating the angle of a polarizer, and the polarization parameters of targets in turbid media with concentrations ranging from 5% to 10% were obtained. Image fusion techniques were then applied to the acquired polarization images: different polarization image fusion methods were compared, those that perform well for turbid media were identified, and the processing results and corresponding data tables are given. Pixel-level, feature-level, and decision-level fusion algorithms were applied to the degree-of-linear-polarization (DOLP) images. The results show that as the polarization angle increases, the polarization images become increasingly blurred and their quality degrades, while the contrast of the fused images is clearly improved over that of any single image. Finally, the reasons for the increase in image contrast are analyzed in relation to the polarized light.
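    A minimal sketch of computing the degree of linear polarization (DOLP) image from four intensity images acquired with the polarizer at 0, 45, 90, and 135 degrees, using the linear Stokes parameters I, Q, U. The function name and the four-angle acquisition are assumptions consistent with standard practice, not necessarily the authors' exact protocol.

    import numpy as np

    def dolp_image(i0, i45, i90, i135, eps=1e-9):
        """Inputs: co-registered 2D intensity images at polarizer angles
        0, 45, 90, 135 degrees. Returns the DOLP map in [0, 1]."""
        i0, i45, i90, i135 = (np.asarray(a, dtype=float)
                              for a in (i0, i45, i90, i135))
        I = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
        Q = i0 - i90                         # linear 0/90 component
        U = i45 - i135                       # linear 45/135 component
        return np.sqrt(Q**2 + U**2) / (I + eps)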

  6. Augmenting atlas-based liver segmentation for radiotherapy treatment planning by incorporating image features proximal to the atlas contours

    NASA Astrophysics Data System (ADS)

    Li, Dengwang; Liu, Li; Chen, Jinhu; Li, Hongsheng; Yin, Yong; Ibragimov, Bulat; Xing, Lei

    2017-01-01

    Atlas-based segmentation utilizes a library of previously delineated contours from similar cases to facilitate automatic segmentation. The problem, however, remains challenging because of the limited information carried by the contours in the library. In this study, we developed a narrow-shell strategy to enhance the information of each contour in the library and to improve the accuracy of the existing atlas-based approach. This study presents a new concept for atlas-based segmentation: instead of using the complete volume of the target organs, only information along the organ contours from the atlas images is used to guide segmentation of the new image. In setting up the atlas library, we included not only the coordinates of contour points but also the image features adjacent to the contour. In this work, 139 CT images with normal-appearing livers collected for radiotherapy treatment planning were used to construct the library. The CT images within the library were first registered to each other using affine registration. A nonlinear narrow shell was generated alongside the object contours of the registered images. Matching voxels were selected inside the common narrow-shell image features of a library case and a new case using a speeded-up robust features (SURF) strategy. A deformable registration was then performed using a thin-plate spline (TPS) technique. The contour associated with the library case was propagated automatically onto the new image by exploiting the deformation field vectors. The liver contour was finally obtained by employing level-set-based energy optimization within the narrow shell. The performance of the proposed method was evaluated by quantitatively comparing the auto-segmentation results with contours delineated by physicians. A novel atlas-based segmentation technique that includes neighborhood image features through the introduction of a narrow shell surrounding the target objects was established. Application of the technique to 30 liver cases suggested that it can reliably segment the liver from CT, 4D-CT, and CBCT images with little human interaction. The accuracy and speed of the proposed method were quantitatively validated by comparing the automatic segmentation results with the manual delineations. The Jaccard similarity metric between the automatically generated liver contours and the physician-delineated results is on average 90%-96% for planning images. Incorporation of image features into the library contours improves currently available atlas-based auto-contouring techniques and provides a clinically practical solution for auto-segmentation. The proposed narrow-shell atlas-based method achieves efficient automatic liver contour propagation for CT, 4D-CT, and CBCT images for subsequent treatment planning and should find widespread application in future treatment planning systems.
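    A minimal sketch of the thin-plate-spline (TPS) step, assuming matched control points have already been found (e.g., by SURF matching inside the narrow shell): the sparse displacements are interpolated into a dense deformation field, which is then used to propagate the library contour onto the new image. This uses SciPy's RBF interpolator with a thin-plate-spline kernel; it is not the authors' implementation, and it is shown in 2D for brevity.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def tps_deformation_field(src_pts, dst_pts, shape):
        """src_pts, dst_pts: (N, 2) matched (row, col) control points in the
        library image and the new image. Returns a dense (rows, cols, 2) field
        mapping each pixel of the new image back into the library image."""
        rows, cols = shape
        grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                    indexing="ij"), axis=-1).reshape(-1, 2)
        tps = RBFInterpolator(dst_pts, src_pts, kernel="thin_plate_spline")
        return tps(grid).reshape(rows, cols, 2)

    def propagate_contour(contour_pts, src_pts, dst_pts):
        """Map library contour points (M, 2) onto the new image using the
        forward TPS fitted on the same control points."""
        tps = RBFInterpolator(src_pts, dst_pts, kernel="thin_plate_spline")
        return tps(contour_pts)

    The propagated contour would then serve as the initialization for the level-set refinement described above.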

  7. Augmenting atlas-based liver segmentation for radiotherapy treatment planning by incorporating image features proximal to the atlas contours.

    PubMed

    Li, Dengwang; Liu, Li; Chen, Jinhu; Li, Hongsheng; Yin, Yong; Ibragimov, Bulat; Xing, Lei

    2017-01-07

    Atlas-based segmentation utilizes a library of previously delineated contours from similar cases to facilitate automatic segmentation. The problem, however, remains challenging because of the limited information carried by the contours in the library. In this study, we developed a narrow-shell strategy to enhance the information of each contour in the library and to improve the accuracy of the existing atlas-based approach. This study presents a new concept for atlas-based segmentation: instead of using the complete volume of the target organs, only information along the organ contours from the atlas images is used to guide segmentation of the new image. In setting up the atlas library, we included not only the coordinates of contour points but also the image features adjacent to the contour. In this work, 139 CT images with normal-appearing livers collected for radiotherapy treatment planning were used to construct the library. The CT images within the library were first registered to each other using affine registration. A nonlinear narrow shell was generated alongside the object contours of the registered images. Matching voxels were selected inside the common narrow-shell image features of a library case and a new case using a speeded-up robust features (SURF) strategy. A deformable registration was then performed using a thin-plate spline (TPS) technique. The contour associated with the library case was propagated automatically onto the new image by exploiting the deformation field vectors. The liver contour was finally obtained by employing level-set-based energy optimization within the narrow shell. The performance of the proposed method was evaluated by quantitatively comparing the auto-segmentation results with contours delineated by physicians. A novel atlas-based segmentation technique that includes neighborhood image features through the introduction of a narrow shell surrounding the target objects was established. Application of the technique to 30 liver cases suggested that it can reliably segment the liver from CT, 4D-CT, and CBCT images with little human interaction. The accuracy and speed of the proposed method were quantitatively validated by comparing the automatic segmentation results with the manual delineations. The Jaccard similarity metric between the automatically generated liver contours and the physician-delineated results is on average 90%-96% for planning images. Incorporation of image features into the library contours improves currently available atlas-based auto-contouring techniques and provides a clinically practical solution for auto-segmentation. The proposed narrow-shell atlas-based method achieves efficient automatic liver contour propagation for CT, 4D-CT, and CBCT images for subsequent treatment planning and should find widespread application in future treatment planning systems.

  8. The information security needs in radiological information systems-an insight on state hospitals of Iran, 2012.

    PubMed

    Farhadi, Akram; Ahmadi, Maryam

    2013-12-01

    Picture Archiving and Communication Systems (PACS) were originally developed for radiology services over 20 years ago to capture medical images electronically. Medical diagnosis relies on images such as clinical radiographs, ultrasounds, CT scans, MRIs, and other imaging modalities, and the information obtained from these images is correlated with patient information. Given the important role of PACS in hospitals, we aimed to evaluate PACS and survey the information security needed in the radiological information system. First, we surveyed the different aspects of PACS that should be present in any health organization based on Department of Health standards and prepared checklists for assessing PACS in different hospitals. Second, we surveyed the security controls that should be implemented in PACS. The reliability of the checklists was affirmed by professors of Tehran Science University. The final data were then entered into SPSS software and analyzed. The results indicate that the PACS in these hospitals can transfer patient demographic information but do not show the route of the information. These systems are not open source. They do not use an XML-based standard or the HL7 standard for exchanging data, and they do not use digital signatures (DS). They use passwords, and users can correct or change the medical information. PACS can detect alterations that have been made. The survey results demonstrate that the PACS in all hospitals have the same features. These systems hold patient demographic data but do not have suitable flexibility for interfacing with the network or producing reports. Regarding the privacy of PACS, in all hospitals there were passwords for users and the systems could show the changes that had been made, but there was no watermarking or digital signature for the users.

  9. Integration, acceptance testing, and clinical operation of the Medical Information, Communication and Archive System, phase II.

    PubMed

    Smith, E M; Wandtke, J; Robinson, A

    1999-05-01

    The Medical Information, Communication and Archive System (MICAS) is a multivendor incremental approach to a picture archiving and communications system (PACS). It is a multimodality integrated image management system that is seamlessly integrated with the radiology information system (RIS). Phase II enhancements of MICAS include a permanent archive, automated workflow, study caches, and Microsoft (Redmond, WA) Windows NT diagnostic workstations, with all components adhering to Digital Imaging and Communications in Medicine (DICOM) standards. MICAS is designed as an enterprise-wide PACS to provide images and reports throughout the Strong Health healthcare network. Phase II includes the addition of a Cemax-Icon (Fremont, CA) archive, a PACS broker (Mitra, Waterloo, Canada), an interface (IDX PACSlink, Burlington, VT) to the RIS (IDXrad), plus the conversion of the UNIX-based redundant array of inexpensive disks (RAID) 5 temporary archives in phase I to NT-based RAID 0 DICOM modality-specific study caches (ImageLabs, Bedford, MA). The phase I acquisition engines and workflow management software were uninstalled, and the Cemax archive manager (AM) assumed these functions. The existing ImageLabs UNIX-based viewing software was enhanced and converted to an NT-based DICOM viewer. Installation of phase II hardware and software and integration with existing components began in July 1998. Phase II of MICAS demonstrates that a multivendor open-system incremental approach to PACS is feasible, cost-effective, and has significant advantages over a single-vendor implementation.

  10. Information retrieval based on single-pixel optical imaging with quick-response code

    NASA Astrophysics Data System (ADS)

    Xiao, Yin; Chen, Wen

    2018-04-01

    The quick-response (QR) code technique is combined with ghost imaging (GI) to recover original information with high quality. An image is first transformed into a QR code. The QR code is then treated as the input image in the input plane of a ghost imaging setup. After measurements, the traditional correlation algorithm of ghost imaging is used to reconstruct an image (in QR-code form) with low quality. With this low-quality image as an initial guess, a Gerchberg-Saxton-like algorithm is used to improve its contrast, which is effectively a post-processing step. Taking advantage of the high error-correction capability of QR codes, the original information can be recovered with high quality. Compared to the previous method, our method can obtain a high-quality image with comparatively fewer measurements, which means that the time-consuming post-processing procedure can be avoided to some extent. In addition, for conventional ghost imaging, the larger the image size is, the more measurements are needed. In our method, however, images of different sizes can be converted into QR codes of the same small size by using a QR generator; hence, for larger images, the time required to recover the original information with high quality is dramatically reduced. Our method also makes it easy to recover a color image in a ghost imaging setup, because it is not necessary to divide the color image into three channels and recover them separately.
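    A minimal sketch of the traditional correlation reconstruction used in ghost imaging: the object (here, a QR-code pattern) is recovered as the covariance between the sequence of illumination patterns and the single-pixel (bucket) measurements. Variable names are illustrative; the Gerchberg-Saxton-like contrast enhancement and QR decoding steps are not shown.

    import numpy as np

    def ghost_image_reconstruction(patterns, bucket):
        """patterns: (M, rows, cols) illumination speckle patterns;
        bucket: (M,) bucket-detector measurements.
        Returns G(x, y) = <I*S> - <I><S>."""
        patterns = np.asarray(patterns, dtype=float)
        bucket = np.asarray(bucket, dtype=float)
        mean_pattern = patterns.mean(axis=0)
        mean_bucket = bucket.mean()
        corr = np.tensordot(bucket, patterns, axes=(0, 0)) / len(bucket)
        return corr - mean_bucket * mean_pattern

    The low-quality reconstruction produced this way would then be binarized and handed to a standard QR decoder, whose error correction tolerates the residual noise.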

  11. A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images

    PubMed Central

    Yang, Qiyao; Wang, Zhiguo; Zhang, Guoxu

    2017-01-01

    The PET and CT fusion image, combining anatomical and functional information, has important clinical meaning. An effective registration of PET and CT images is the basis of image fusion. This paper presents a multithreaded registration method based on contour point clouds for 3D whole-body PET and CT images. Firstly, a geometric feature-based segmentation (GFS) method and a dynamic threshold denoising (DTD) method are proposed to preprocess the CT and PET images, respectively. Next, a new automated trunk slice extraction method is presented for extracting feature point clouds. Finally, a multithreaded Iterative Closest Point algorithm is adopted to drive an affine transform. We compare our method with a multiresolution registration method based on Mattes Mutual Information on 13 pairs (246~286 slices per pair) of 3D whole-body PET and CT data. Experimental results demonstrate the registration effectiveness of our method, with lower negative normalized correlation (NC = −0.933) on feature images and smaller Euclidean distance error (ED = 2.826) on landmark points, outperforming the source data (NC = −0.496, ED = 25.847) and the compared method (NC = −0.614, ED = 16.085). Moreover, our method is about ten times faster than the compared one. PMID:28316979
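    A minimal, single-threaded sketch of Iterative Closest Point on two contour point clouds; for brevity this shows the classical rigid variant with SVD-based alignment, whereas the paper drives an affine transform and parallelizes the closest-point search. Names and tolerances are illustrative.

    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(src, dst):
        """Least-squares rotation R and translation t mapping src onto dst
        (both (N, 3) arrays of corresponding points)."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:             # avoid reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = dst_c - R @ src_c
        return R, t

    def icp(source, target, n_iter=50, tol=1e-6):
        """Align the source point cloud (e.g., PET contour points) to the
        target point cloud (e.g., CT contour points)."""
        tree = cKDTree(target)
        src = source.astype(float).copy()
        prev_err = np.inf
        for _ in range(n_iter):
            dist, idx = tree.query(src)      # closest-point correspondences
            R, t = best_rigid_transform(src, target[idx])
            src = src @ R.T + t
            err = dist.mean()
            if abs(prev_err - err) < tol:
                break
            prev_err = err
        return src, err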

  12. Radiotherapy supporting system based on the image database using IS&C magneto-optical disk

    NASA Astrophysics Data System (ADS)

    Ando, Yutaka; Tsukamoto, Nobuhiro; Kunieda, Etsuo; Kubo, Atsushi

    1994-05-01

    Since radiation oncologists make treatment plans on the basis of prior experience, information about previous cases is helpful in planning radiation treatment. We have developed a supporting system for radiation therapy. A case-based reasoning method was implemented in order to search the treatments and images of past cases. The system evaluates similarities between the current case and all stored cases (the case base). The portal images of similar cases can be retrieved as reference images, along with treatment records that show examples of the radiation treatment. With this system, radiotherapists can easily make suitable radiation therapy plans. The system is useful for preventing inaccurate planning due to preconceptions and/or lack of knowledge. Images are stored on magneto-optical disks, and the demographic data are recorded on the hard disk of the personal computer. Images can be displayed quickly on the radiotherapist's demand. The radiation oncologist can refer to past cases recorded in the case base and decide on the radiation treatment of the current case. The file and data format of the magneto-optical disk is the IS&C format. This format provides interchangeability and reproducibility of the medical information, which includes images and other demographic data.

  13. Brain connectivity study of joint attention using frequency-domain optical imaging technique

    NASA Astrophysics Data System (ADS)

    Chaudhary, Ujwal; Zhu, Banghe; Godavarty, Anuradha

    2010-02-01

    Autism is a socio-communication brain development disorder. It is marked by degeneration in the ability to respond to joint attention skill task, from as early as 12 to 18 months of age. This trait is used to distinguish autistic from nonautistic populations. In this study, diffuse optical imaging is being used to study brain connectivity for the first time in response to joint attention experience in normal adults. The prefrontal region of the brain was non-invasively imaged using a frequency-domain based optical imager. The imaging studies were performed on 11 normal right-handed adults and optical measurements were acquired in response to joint-attention based video clips. While the intensity-based optical data provides information about the hemodynamic response of the underlying neural process, the time-dependent phase-based optical data has the potential to explicate the directional information on the activation of the brain. Thus brain connectivity studies are performed by computing covariance/correlations between spatial units using this frequency-domain based optical measurements. The preliminary results indicate that the extent of synchrony and directional variation in the pattern of activation varies in the left and right frontal cortex. The results have significant implication for research in neural pathways associated with autism that can be mapped using diffuse optical imaging tools in the future.

  14. [Identification of green tea brand based on hyperspectra imaging technology].

    PubMed

    Zhang, Hai-Liang; Liu, Xiao-Li; Zhu, Feng-Le; He, Yong

    2014-05-01

    Hyperspectral imaging technology was developed to identify different brands of famous green tea based on the fusion of PCA information and image information. First, 512 spectral images of six brands of famous green tea in the 380~1023 nm wavelength range were collected, and principal component analysis (PCA) was performed with the goal of selecting two characteristic bands (545 and 611 nm) that could potentially be used for the classification system. Then, 12 gray-level co-occurrence matrix (GLCM) features (i.e., mean, covariance, homogeneity, energy, contrast, correlation, entropy, inverse gap, contrast, difference from the second-order, and autocorrelation) based on statistical moments were extracted from each characteristic band image. Finally, the 12 texture features and three PCA spectral characteristics of each green tea sample were combined as the input of an LS-SVM. Experimental results showed that the discrimination rate was 100% in the prediction set. The receiver operating characteristic (ROC) curve was used to evaluate the LS-SVM classification algorithm. The overall results sufficiently demonstrate that hyperspectral imaging technology can be used to classify green tea.
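    A minimal sketch of the texture step, assuming an 8-bit gray-level image of one characteristic band: a gray-level co-occurrence matrix (GLCM) is built and a few of its statistics are extracted with scikit-image (0.19+ function names). The paper's full 12-feature set and the LS-SVM training are not reproduced here.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(band_image, distances=(1,), angles=(0, np.pi / 2)):
        """band_image: 2D uint8 array (one characteristic band,
        e.g., 545 or 611 nm). Returns a dict of GLCM statistics."""
        glcm = graycomatrix(band_image, distances=distances, angles=angles,
                            levels=256, symmetric=True, normed=True)
        feats = {}
        for prop in ("contrast", "homogeneity", "energy", "correlation"):
            feats[prop] = graycoprops(glcm, prop).mean()
        return feats

    The resulting texture features would be concatenated with the PCA spectral scores to form the classifier input described above.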

  15. Information Security Scheme Based on Computational Temporal Ghost Imaging.

    PubMed

    Jiang, Shan; Wang, Yurong; Long, Tao; Meng, Xiangfeng; Yang, Xiulun; Shu, Rong; Sun, Baoqing

    2017-08-09

    An information security scheme based on computational temporal ghost imaging is proposed. A sequence of independent 2D random binary patterns is used as the encryption key and multiplied with the 1D data stream. The ciphertext is obtained by summing the weighted encryption key. The decryption process is realized by a correlation measurement between the encrypted information and the encryption key. Owing to the intrinsic high-level randomness of the key, the security of the method is well guaranteed. The feasibility of the method and its robustness against both occlusion and additional noise attacks are discussed with simulations.
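    A minimal numerical sketch of the scheme as described: each sample of a 1D data stream is multiplied by an independent 2D random binary pattern (the key), the weighted patterns are summed into the ciphertext, and each sample is recovered by correlating the ciphertext with the corresponding key pattern. Pattern sizes, names, and normalization are illustrative assumptions; the recovery is approximate, up to a scale factor, with cross-talk that shrinks as the patterns grow.

    import numpy as np

    rng = np.random.default_rng(0)

    def encrypt(data, pattern_shape=(64, 64)):
        """data: (M,) 1D stream. Returns (2D ciphertext, key of shape
        (M, *pattern_shape))."""
        M = len(data)
        key = rng.integers(0, 2, size=(M, *pattern_shape)).astype(float)
        ciphertext = np.tensordot(np.asarray(data, dtype=float), key,
                                  axes=(0, 0))   # weighted sum of patterns
        return ciphertext, key

    def decrypt(ciphertext, key):
        """Correlation measurement between the ciphertext and each key pattern."""
        key_zero_mean = key - key.mean(axis=(1, 2), keepdims=True)
        # Per-sample correlation; proportional to the original data stream.
        return (key_zero_mean * ciphertext).mean(axis=(1, 2))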

  16. Imaging nanoscale lattice variations by machine learning of x-ray diffraction microscopy data

    DOE PAGES

    Laanait, Nouamane; Zhang, Zhan; Schlepütz, Christian M.

    2016-08-09

    In this paper, we present a novel methodology based on machine learning to extract lattice variations in crystalline materials, at the nanoscale, from an x-ray Bragg diffraction-based imaging technique. By employing a full-field microscopy setup, we capture real space images of materials, with imaging contrast determined solely by the x-ray diffracted signal. The data sets that emanate from this imaging technique are a hybrid of real space information (image spatial support) and reciprocal lattice space information (image contrast), and are intrinsically multidimensional (5D). By a judicious application of established unsupervised machine learning techniques and multivariate analysis to this multidimensional data cube, we show how to extract features that can be ascribed physical interpretations in terms of common structural distortions, such as lattice tilts and dislocation arrays. Finally, we demonstrate this 'big data' approach to x-ray diffraction microscopy by identifying structural defects present in an epitaxial ferroelectric thin-film of lead zirconate titanate.

  17. High dynamic range algorithm based on HSI color space

    NASA Astrophysics Data System (ADS)

    Zhang, Jiancheng; Liu, Xiaohua; Dong, Liquan; Zhao, Yuejin; Liu, Ming

    2014-10-01

    This paper presents a high-dynamic-range algorithm based on the HSI color space. The first problem is to preserve the hue and saturation of the original image and match human visual perception; to this end, the input image is converted to the HSI color space, which includes an intensity dimension. The second problem is to increase the speed of the algorithm; an integral image is used to compute the average intensity of every pixel within a given scale as the local intensity component of the image, and the detail intensity component is then derived. The third problem is to adjust the overall image intensity; an S-shaped curve is obtained from the original image information and the local intensity component is adjusted according to this curve. The fourth problem is to enhance detail; the detail intensity component is adjusted according to a curve designed in advance. The weighted sum of the adjusted local intensity component and the adjusted detail intensity component gives the final intensity. Converting the synthesized intensity together with the other two dimensions back to the output color space yields the final processed image.
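    A minimal sketch of the speed-up described above: an integral image (summed-area table) gives the mean intensity of any window in constant time per pixel, which serves here as the local intensity component; the detail component is the remainder. Window radius and function names are illustrative assumptions.

    import numpy as np

    def local_and_detail_intensity(intensity, radius=15):
        """intensity: 2D float array (the I channel of the HSI image).
        Returns (local component, detail component)."""
        rows, cols = intensity.shape
        # Integral image with a zero row/column prepended for easy box sums.
        ii = np.zeros((rows + 1, cols + 1))
        ii[1:, 1:] = np.cumsum(np.cumsum(intensity, axis=0), axis=1)

        r0 = np.clip(np.arange(rows) - radius, 0, rows)
        r1 = np.clip(np.arange(rows) + radius + 1, 0, rows)
        c0 = np.clip(np.arange(cols) - radius, 0, cols)
        c1 = np.clip(np.arange(cols) + radius + 1, 0, cols)

        # Box sum via four integral-image lookups per pixel.
        box = ii[r1][:, c1] - ii[r0][:, c1] - ii[r1][:, c0] + ii[r0][:, c0]
        area = (r1 - r0)[:, None] * (c1 - c0)[None, :]
        local = box / area
        detail = intensity - local
        return local, detail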

  18. Imaging nanoscale lattice variations by machine learning of x-ray diffraction microscopy data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laanait, Nouamane; Zhang, Zhan; Schlepütz, Christian M.

    In this paper, we present a novel methodology based on machine learning to extract lattice variations in crystalline materials, at the nanoscale, from an x-ray Bragg diffraction-based imaging technique. By employing a full-field microscopy setup, we capture real space images of materials, with imaging contrast determined solely by the x-ray diffracted signal. The data sets that emanate from this imaging technique are a hybrid of real space information (image spatial support) and reciprocal lattice space information (image contrast), and are intrinsically multidimensional (5D). By a judicious application of established unsupervised machine learning techniques and multivariate analysis to this multidimensional data cube, we show how to extract features that can be ascribed physical interpretations in terms of common structural distortions, such as lattice tilts and dislocation arrays. Finally, we demonstrate this 'big data' approach to x-ray diffraction microscopy by identifying structural defects present in an epitaxial ferroelectric thin-film of lead zirconate titanate.

  19. Technical Review: Microscopy and Image Processing Tools to Analyze Plant Chromatin: Practical Considerations.

    PubMed

    Baroux, Célia; Schubert, Veit

    2018-01-01

    In situ nucleus and chromatin analyses rely on microscopy imaging that benefits from versatile, efficient fluorescent probes and proteins for static or live imaging. Yet the broad choice in imaging instruments offered to the user poses orientation problems. Which imaging instrument should be used for which purpose? What are the main caveats and what are the considerations to best exploit each instrument's ability to obtain informative and high-quality images? How to infer quantitative information on chromatin or nuclear organization from microscopy images? In this review, we present an overview of common, fluorescence-based microscopy systems and discuss recently developed super-resolution microscopy systems, which are able to bridge the resolution gap between common fluorescence microscopy and electron microscopy. We briefly present their basic principles and discuss their possible applications in the field, while providing experience-based recommendations to guide the user toward best-possible imaging. In addition to raw data acquisition methods, we discuss commercial and noncommercial processing tools required for optimal image presentation and signal evaluation in two and three dimensions.

  20. Image Fusion of CT and MR with Sparse Representation in NSST Domain

    PubMed Central

    Qiu, Chenhui; Wang, Yuanyuan; Zhang, Huan

    2017-01-01

    Multimodal image fusion techniques can integrate the information from different medical images to obtain an informative image that is more suitable for joint diagnosis, preoperative planning, intraoperative guidance, and interventional treatment. The fusion of CT images with images of different MR modalities is studied in this paper. Firstly, the CT and MR images are both transformed to the nonsubsampled shearlet transform (NSST) domain, yielding low-frequency and high-frequency components. Then the high-frequency components are merged using the absolute-maximum rule, while the low-frequency components are merged by a sparse representation- (SR-) based approach, and the dynamic group sparsity recovery (DGSR) algorithm is proposed to improve the performance of the SR-based approach. Finally, the fused image is obtained by performing the inverse NSST on the merged components. The proposed fusion method is tested on a number of clinical CT and MR images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed fusion method provides better fusion results in terms of subjective quality and objective evaluation. PMID:29250134

  1. Image Fusion of CT and MR with Sparse Representation in NSST Domain.

    PubMed

    Qiu, Chenhui; Wang, Yuanyuan; Zhang, Huan; Xia, Shunren

    2017-01-01

    Multimodal image fusion techniques can integrate the information from different medical images to obtain an informative image that is more suitable for joint diagnosis, preoperative planning, intraoperative guidance, and interventional treatment. The fusion of CT images with images of different MR modalities is studied in this paper. Firstly, the CT and MR images are both transformed to the nonsubsampled shearlet transform (NSST) domain, yielding low-frequency and high-frequency components. Then the high-frequency components are merged using the absolute-maximum rule, while the low-frequency components are merged by a sparse representation- (SR-) based approach, and the dynamic group sparsity recovery (DGSR) algorithm is proposed to improve the performance of the SR-based approach. Finally, the fused image is obtained by performing the inverse NSST on the merged components. The proposed fusion method is tested on a number of clinical CT and MR images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed fusion method provides better fusion results in terms of subjective quality and objective evaluation.

  2. Reconstruction of fluorescence molecular tomography with a cosinoidal level set method.

    PubMed

    Zhang, Xuanxuan; Cao, Xu; Zhu, Shouping

    2017-06-27

    The implicit shape-based reconstruction method in fluorescence molecular tomography (FMT) is capable of achieving higher image clarity than the image-based reconstruction method. However, the implicit shape method suffers from a low convergence speed and performs unstably because it relies on gradient-based optimization methods. Moreover, the implicit shape method requires a priori information about the number of targets. A shape-based reconstruction scheme for FMT with a cosinoidal level set method is proposed in this paper. The Heaviside function in the classical implicit shape method is replaced with a cosine function, and the reconstruction can then be accomplished with the Levenberg-Marquardt method rather than gradient-based methods. As a result, a priori information about the number of targets is no longer required and the choice of step length is avoided. Numerical simulations and phantom experiments were carried out to validate the proposed method. Results of the proposed method show higher contrast-to-noise ratios and Pearson correlations than the implicit shape method and the image-based reconstruction method. Moreover, the number of iterations required by the proposed method is much smaller than for the implicit shape method. The proposed method performs more stably, provides a faster convergence speed than the implicit shape method, and achieves higher image clarity than the image-based reconstruction method.

  3. [A new concept for integration of image databanks into a comprehensive patient documentation].

    PubMed

    Schöll, E; Holm, J; Eggli, S

    2001-05-01

    Image processing and archiving are of increasing importance in the practice of modern medicine. Particularly due to the introduction of computer-based investigation methods, physicians are dealing with a wide variety of analogue and digital picture archives. On the other hand, clinical information is stored in various text-based information systems without integration of image components. The link between such traditional medical databases and picture archives is a prerequisite for efficient data management as well as for continuous quality control and medical education. At the Department of Orthopedic Surgery, University of Berne, a software program was developed to create a complete multimedia electronic patient record. The client-server system contains all patients' data, questionnaire-based quality control, and a digital picture archive. Different interfaces guarantee the integration into the hospital's data network. This article describes our experiences in the development and introduction of a comprehensive image archiving system at a large orthopedic center.

  4. Image gathering and coding for digital restoration: Information efficiency and visual quality

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; John, Sarah; Mccormick, Judith A.; Narayanswamy, Ramkumar

    1989-01-01

    Image gathering and coding are commonly treated as tasks separate from each other and from the digital processing used to restore and enhance the images. The goal is to develop a method that allows us to assess quantitatively the combined performance of image gathering and coding for the digital restoration of images with high visual quality. Digital restoration is often interactive because visual quality depends on perceptual rather than mathematical considerations, and these considerations vary with the target, the application, and the observer. The approach is based on the theoretical treatment of image gathering as a communication channel (J. Opt. Soc. Am. A2, 1644(1985);5,285(1988). Initial results suggest that the practical upper limit of the information contained in the acquired image data range typically from approximately 2 to 4 binary information units (bifs) per sample, depending on the design of the image-gathering system. The associated information efficiency of the transmitted data (i.e., the ratio of information over data) ranges typically from approximately 0.3 to 0.5 bif per bit without coding to approximately 0.5 to 0.9 bif per bit with lossless predictive compression and Huffman coding. The visual quality that can be attained with interactive image restoration improves perceptibly as the available information increases to approximately 3 bifs per sample. However, the perceptual improvements that can be attained with further increases in information are very subtle and depend on the target and the desired enhancement.

  5. A Medical Image Backup Architecture Based on a NoSQL Database and Cloud Computing Services.

    PubMed

    Santos Simões de Almeida, Luan Henrique; Costa Oliveira, Marcelo

    2015-01-01

    The use of digital systems for storing medical images generates a huge volume of data. Digital images are commonly stored and managed on a Picture Archiving and Communication System (PACS), under the DICOM standard. However, PACS is limited because it is strongly dependent on the server's physical space. Alternatively, Cloud Computing arises as an extensive, low cost, and reconfigurable resource. However, medical images contain patient information that can not be made available in a public cloud. Therefore, a mechanism to anonymize these images is needed. This poster presents a solution for this issue by taking digital images from PACS, converting the information contained in each image file to a NoSQL database, and using cloud computing to store digital images.

  6. Providing image management and communication functionality as an integral part of an existing hospital information system

    NASA Astrophysics Data System (ADS)

    Dayhoff, Ruth E.; Maloney, Daniel L.

    1990-08-01

    The effective delivery of health care has become increasingly dependent on a wide range of medical data which includes a variety of images. Manual and computer-based medical records ordinarily do not contain image data, leaving the physician to deal with a fragmented patient record widely scattered throughout the hospital. The Department of Veterans Affairs (VA) is currently installing a prototype hospital information system (HIS) workstation network to demonstrate the feasibility of providing image management and communications (IMAC) functionality as an integral part of an existing hospital information system. The core of this system is a database management system adapted to handle images as a new data type. A general model for this integration is discussed and specifics of the hospital-wide network of image display workstations are given.

  7. Application development environment for advanced digital workstations

    NASA Astrophysics Data System (ADS)

    Valentino, Daniel J.; Harreld, Michael R.; Liu, Brent J.; Brown, Matthew S.; Huang, Lu J.

    1998-06-01

    One remaining barrier to the clinical acceptance of electronic imaging and information systems is the difficulty in providing intuitive access to the information needed for a specific clinical task (such as reaching a diagnosis or tracking clinical progress). The purpose of this research was to create a development environment that enables the design and implementation of advanced digital imaging workstations. We used formal data and process modeling to identify the diagnostic and quantitative data that radiologists use and the tasks that they typically perform to make clinical decisions. We studied a diverse range of radiology applications, including diagnostic neuroradiology in an academic medical center, pediatric radiology in a children's hospital, screening mammography in a breast cancer center, and thoracic radiology consultation for an oncology clinic. We used object- oriented analysis to develop software toolkits that enable a programmer to rapidly implement applications that closely match clinical tasks. The toolkits support browsing patient information, integrating patient images and reports, manipulating images, and making quantitative measurements on images. Collectively, we refer to these toolkits as the UCLA Digital ViewBox toolkit (ViewBox/Tk). We used the ViewBox/Tk to rapidly prototype and develop a number of diverse medical imaging applications. Our task-based toolkit approach enabled rapid and iterative prototyping of workstations that matched clinical tasks. The toolkit functionality and performance provided a 'hands-on' feeling for manipulating images, and for accessing textual information and reports. The toolkits directly support a new concept for protocol based-reading of diagnostic studies. The design supports the implementation of network-based application services (e.g., prefetching, workflow management, and post-processing) that will facilitate the development of future clinical applications.

  8. Parallelized Bayesian inversion for three-dimensional dental X-ray imaging.

    PubMed

    Kolehmainen, Ville; Vanne, Antti; Siltanen, Samuli; Järvenpää, Seppo; Kaipio, Jari P; Lassas, Matti; Kalke, Martti

    2006-02-01

    Diagnostic and operational tasks based on dental radiology often require three-dimensional (3-D) information that is not available in a single X-ray projection image. Comprehensive 3-D information about tissues can be obtained by computerized tomography (CT) imaging. However, in dental imaging a conventional CT scan may not be available or practical because of high radiation dose, low resolution, or the cost of the CT scanner equipment. In this paper, we consider a novel type of 3-D imaging modality for dental radiology. We consider situations in which projection images of the teeth are taken from a few sparsely distributed projection directions using the dentist's regular (digital) X-ray equipment and the 3-D X-ray attenuation function is reconstructed. A complication in these experiments is that the reconstruction of the 3-D structure based on a few projection images becomes an ill-posed inverse problem. Bayesian inversion is a well-suited framework for reconstruction from such incomplete data. In Bayesian inversion, the ill-posed reconstruction problem is formulated in a well-posed probabilistic form in which a priori information is used to compensate for the incomplete information of the projection data. In this paper we propose a Bayesian method for 3-D reconstruction in dental radiology. The method is partially based on Kolehmainen et al. 2003. The prior model for dental structures consists of a weighted l1 and total variation (TV) prior together with the positivity prior. The inverse problem is stated as finding the maximum a posteriori (MAP) estimate. To make the 3-D reconstruction computationally feasible, a parallelized version of an optimization algorithm is implemented for a Beowulf cluster computer. The method is tested with projection data from dental specimens and patient data. Tomosynthetic reconstructions are given as reference for the proposed method.
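    A schematic form of the MAP estimation problem described above, under a Gaussian noise model; the symbols and the exact weighting are illustrative and do not reproduce the paper's notation or parameter choices:

    \hat{x}_{\mathrm{MAP}}
      = \arg\max_{x \ge 0} \; \pi(x \mid m)
      = \arg\min_{x \ge 0}
        \Big\{ \tfrac{1}{2\sigma^{2}} \lVert A x - m \rVert_{2}^{2}
        \; + \; \alpha \sum_{j} w_{j} \lvert x_{j} \rvert
        \; + \; \beta \, \mathrm{TV}(x) \Big\},

    where x is the discretized 3-D attenuation map, m the few-view projection data, A the projection operator, TV(x) the total variation functional, and alpha, beta, sigma, and the weights w_j are regularization and noise parameters; the constraint x >= 0 expresses the positivity prior.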

  9. Integration of digital gross pathology images for enterprise-wide access.

    PubMed

    Amin, Milon; Sharma, Gaurav; Parwani, Anil V; Anderson, Ralph; Kolowitz, Brian J; Piccoli, Anthony; Shrestha, Rasu B; Lauro, Gonzalo Romero; Pantanowitz, Liron

    2012-01-01

    Sharing digital pathology images for enterprise- wide use into a picture archiving and communication system (PACS) is not yet widely adopted. We share our solution and 3-year experience of transmitting such images to an enterprise image server (EIS). Gross pathology images acquired by prosectors were integrated with clinical cases into the laboratory information system's image management module, and stored in JPEG2000 format on a networked image server. Automated daily searches for cases with gross images were used to compile an ASCII text file that was forwarded to a separate institutional Enterprise Digital Imaging and Communications in Medicine (DICOM) Wrapper (EDW) server. Concurrently, an HL7-based image order for these cases was generated, containing the locations of images and patient data, and forwarded to the EDW, which combined data in these locations to generate images with patient data, as required by DICOM standards. The image and data were then "wrapped" according to DICOM standards, transferred to the PACS servers, and made accessible on an institution-wide basis. In total, 26,966 gross images from 9,733 cases were transmitted over the 3-year period from the laboratory information system to the EIS. The average process time for cases with successful automatic uploads (n=9,688) to the EIS was 98 seconds. Only 45 cases (0.5%) failed requiring manual intervention. Uploaded images were immediately available to institution- wide PACS users. Since inception, user feedback has been positive. Enterprise- wide PACS- based sharing of pathology images is feasible, provides useful services to clinical staff, and utilizes existing information system and telecommunications infrastructure. PACS-shared pathology images, however, require a "DICOM wrapper" for multisystem compatibility.

  10. Integration of digital gross pathology images for enterprise-wide access

    PubMed Central

    Amin, Milon; Sharma, Gaurav; Parwani, Anil V.; Anderson, Ralph; Kolowitz, Brian J; Piccoli, Anthony; Shrestha, Rasu B.; Lauro, Gonzalo Romero; Pantanowitz, Liron

    2012-01-01

    Background: Sharing digital pathology images for enterprise- wide use into a picture archiving and communication system (PACS) is not yet widely adopted. We share our solution and 3-year experience of transmitting such images to an enterprise image server (EIS). Methods: Gross pathology images acquired by prosectors were integrated with clinical cases into the laboratory information system's image management module, and stored in JPEG2000 format on a networked image server. Automated daily searches for cases with gross images were used to compile an ASCII text file that was forwarded to a separate institutional Enterprise Digital Imaging and Communications in Medicine (DICOM) Wrapper (EDW) server. Concurrently, an HL7-based image order for these cases was generated, containing the locations of images and patient data, and forwarded to the EDW, which combined data in these locations to generate images with patient data, as required by DICOM standards. The image and data were then “wrapped” according to DICOM standards, transferred to the PACS servers, and made accessible on an institution-wide basis. Results: In total, 26,966 gross images from 9,733 cases were transmitted over the 3-year period from the laboratory information system to the EIS. The average process time for cases with successful automatic uploads (n=9,688) to the EIS was 98 seconds. Only 45 cases (0.5%) failed requiring manual intervention. Uploaded images were immediately available to institution- wide PACS users. Since inception, user feedback has been positive. Conclusions: Enterprise- wide PACS- based sharing of pathology images is feasible, provides useful services to clinical staff, and utilizes existing information system and telecommunications infrastructure. PACS-shared pathology images, however, require a “DICOM wrapper” for multisystem compatibility. PMID:22530178

  11. Diffusion processes in tumors: A nuclear medicine approach

    NASA Astrophysics Data System (ADS)

    Amaya, Helman

    2016-07-01

    The number of counts used in nuclear medicine imaging techniques only provides physical information about the disintegration of the nuclei present in the radiotracer molecules taken up in a particular anatomical region; it is not true metabolic information. For this reason a mathematical method was used to find a correlation between the number of counts and the 18F-FDG mass concentration. This correlation allows a better interpretation of the results obtained in the study of diffusive processes in an agar phantom, and based on it, an image from the PETCETIX DICOM sample image set of the OsiriX-viewer software was processed. PET-CT gradient magnitude and Laplacian images could show direct information on diffusive processes for radiopharmaceuticals that enter the cells by simple diffusion. In the case of the radiopharmaceutical 18F-FDG, it is necessary to include pharmacokinetic models to make a correct interpretation of the gradient magnitude and Laplacian count images.
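
    As a rough illustration of the post-processing step described above (not the author's implementation), the sketch below computes gradient magnitude and Laplacian images from a count (or concentration) volume with NumPy and SciPy; the function name and the uniform voxel spacing are assumptions.

      import numpy as np
      from scipy import ndimage

      def gradient_and_laplacian(counts, spacing=1.0):
          # counts: 2-D or 3-D array of PET counts, or of 18F-FDG concentration
          # after applying a counts-to-concentration correlation as in the text.
          img = counts.astype(float)
          grads = np.gradient(img, spacing)            # one array per axis
          grad_mag = np.sqrt(sum(g ** 2 for g in grads))
          laplacian = ndimage.laplace(img) / spacing ** 2
          return grad_mag, laplacian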

  12. [Medical image compression: a review].

    PubMed

    Noreña, Tatiana; Romero, Eduardo

    2013-01-01

    Modern medicine is an increasingly complex, evidence-based activity; it relies on information from multiple sources: medical record text, sound recordings, and images and videos generated by a large number of devices. Medical imaging is one of the most important of these sources, since it offers comprehensive support for medical procedures in diagnosis and follow-up. However, the amount of information generated by image-capturing devices quickly exceeds the storage available in radiology services, generating additional costs for devices with greater storage capacity. Furthermore, the current trend of developing applications in cloud computing has limitations: even though virtual storage is accessible from anywhere, connections are made over the Internet. In these scenarios, the optimal use of the information necessarily requires powerful compression algorithms adapted to the needs of medical practice. In this paper we present a review of the compression techniques used for image storage, and a critical analysis of them from the point of view of their use in clinical settings.

  13. Wavelet Transforms in Parallel Image Processing

    DTIC Science & Technology

    1994-01-27

    Keywords: object segmentation, texture segmentation, image compression, image halftoning, neural networks, parallel algorithms, 2D and 3D wavelet transforms; 137 pages. Topics include vector quantization of wavelet transform coefficients and adaptive image halftoning based on wavelets, in which the gray information at a pixel, including its gray value and gradient, is represented by ...

  14. Enterprise-scale image distribution with a Web PACS.

    PubMed

    Gropper, A; Doyle, S; Dreyer, K

    1998-08-01

    The integration of images with existing and new health care information systems poses a number of challenges in a multi-facility network: image distribution to clinicians; making DICOM image headers consistent across information systems; and integration of teleradiology into PACS. A novel, Web-based enterprise PACS architecture introduced at Massachusetts General Hospital provides a solution. Four AMICAS Web/Intranet Image Servers were installed as the default DICOM destination of 10 digital modalities. A fifth AMICAS receives teleradiology studies via the Internet. Each AMICAS includes: a Java-based interface to the IDXrad radiology information system (RIS), a DICOM autorouter to tape-library archives and to the Agfa PACS, a wavelet image compressor/decompressor that preserves compatibility with DICOM workstations, a Web server to distribute images throughout the enterprise, and an extensible interface which permits links between other HIS and AMICAS. Using wavelet compression and Internet standards as its native formats, AMICAS creates a bridge to the DICOM networks of remote imaging centers via the Internet. This teleradiology capability is integrated into the DICOM network and the PACS thereby eliminating the need for special teleradiology workstations. AMICAS has been installed at MGH since March of 1997. During that time, it has been a reliable component of the evolving digital image distribution system. As a result, the recently renovated neurosurgical ICU will be filmless and use only AMICAS workstations for mission-critical patient care.

  15. a Robust Descriptor Based on Spatial and Frequency Structural Information for Visible and Thermal Infrared Image Matching

    NASA Astrophysics Data System (ADS)

    Fu, Z.; Qin, Q.; Wu, C.; Chang, Y.; Luo, B.

    2017-09-01

    Due to the differences in imaging principles, image matching between visible and thermal infrared images still presents new challenges and difficulties. Inspired by the complementary spatial and frequency information of geometric structural features, a robust descriptor is proposed for matching visible and thermal infrared images. We first divide the region around each point of interest into two spatial regions, using the histogram of oriented magnitudes, which captures 2-D structural shape information, to describe the larger region and the edge-oriented histogram to describe the spatial distribution of the smaller region. The two vectors are then normalized and combined into a higher-dimensional feature vector. Finally, our proposed descriptor is obtained by applying principal component analysis (PCA) to reduce the dimensionality of the combined feature vector and make the descriptor more robust. Experimental results showed that the proposed method yields significant improvements in the number of correct matches and clear advantages by combining complementary spatial and frequency structural information.
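
    A minimal sketch of the general recipe (two regional histograms, normalisation, concatenation, PCA); the bin count, angular range and function names are illustrative assumptions rather than the authors' parameters.

      import numpy as np

      def region_descriptor(grad_mag, grad_ori, edge_ori, n_bins=8):
          # Histogram of oriented magnitudes for the larger region and an
          # edge-orientation histogram for the smaller region, each L2-normalised.
          h_outer, _ = np.histogram(grad_ori, bins=n_bins, range=(0, np.pi), weights=grad_mag)
          h_inner, _ = np.histogram(edge_ori, bins=n_bins, range=(0, np.pi))
          h_outer = h_outer / (np.linalg.norm(h_outer) + 1e-12)
          h_inner = h_inner / (np.linalg.norm(h_inner) + 1e-12)
          return np.concatenate([h_outer, h_inner])

      def pca_reduce(descriptors, n_components=12):
          # Project a stack of combined descriptors onto their first principal components.
          X = descriptors - descriptors.mean(axis=0)
          _, _, vt = np.linalg.svd(X, full_matrices=False)
          return X @ vt[:n_components].T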

  16. Tunable X-ray speckle-based phase-contrast and dark-field imaging using the unified modulated pattern analysis approach

    NASA Astrophysics Data System (ADS)

    Zdora, M.-C.; Thibault, P.; Deyhle, H.; Vila-Comamala, J.; Rau, C.; Zanette, I.

    2018-05-01

    X-ray phase-contrast and dark-field imaging provides valuable, complementary information about the specimen under study. Among the multimodal X-ray imaging methods, X-ray grating interferometry and speckle-based imaging have drawn particular attention, which, however, in their common implementations incur certain limitations that can restrict their range of applications. Recently, the unified modulated pattern analysis (UMPA) approach was proposed to overcome these limitations and combine grating- and speckle-based imaging in a single approach. Here, we demonstrate the multimodal imaging capabilities of UMPA and highlight its tunable character regarding spatial resolution, signal sensitivity and scan time by using different reconstruction parameters.

  17. Differential Binary Encoding Method for Calibrating Image Sensors Based on IOFBs

    PubMed Central

    Fernández, Pedro R.; Lázaro-Galilea, José Luis; Gardel, Alfredo; Espinosa, Felipe; Bravo, Ignacio; Cano, Ángel

    2012-01-01

    Image transmission using incoherent optical fiber bundles (IOFBs) requires prior calibration to obtain the spatial in-out fiber correspondence necessary to reconstruct the image captured by the pseudo-sensor. This information is recorded in a Look-Up Table called the Reconstruction Table (RT), used later for reordering the fiber positions and reconstructing the original image. This paper presents a very fast method based on image-scanning using spaces encoded by a weighted binary code to obtain the in-out correspondence. The results demonstrate that this technique yields a remarkable reduction in processing time and the image reconstruction quality is very good compared to previous techniques based on spot or line scanning, for example. PMID:22666023
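
    A toy sketch of how a weighted binary-coded scan can be decoded into an in-out correspondence map; the thresholding and the most-significant-bit-first frame order are simplifying assumptions, and the function name is hypothetical.

      import numpy as np

      def decode_binary_scan(frames, threshold=0.5):
          # frames: (n_bits, H, W) stack of output-end images, one per binary-coded
          # illumination pattern applied at the input end (MSB first).
          bits = (np.asarray(frames, dtype=float) > threshold).astype(np.uint32)
          n_bits = bits.shape[0]
          weights = (2 ** np.arange(n_bits - 1, -1, -1)).astype(np.uint32)
          # Each output pixel receives the integer index of the input-end region it
          # is connected to -- the raw content of the Reconstruction Table (RT).
          return np.tensordot(weights, bits, axes=1)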

  18. Beyond maximum entropy: Fractal pixon-based image reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, R. C.; Pina, R. K.

    1994-01-01

    We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other methods, including Goodness-of-Fit (e.g. Least-Squares and Lucy-Richardson) and Maximum Entropy (ME). Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME.

  19. Active vision and image/video understanding with decision structures based on the network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2003-08-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding that is an interpretation of visual information in terms of such knowledge models. The human brain has been found to emulate knowledge structures in the form of network-symbolic models, and this implies an important paradigm shift in our knowledge about the brain, from neural networks to "cortical software". Symbols, predicates and grammars naturally emerge in such active multilevel hierarchical networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type decision structure created via multilevel hierarchical compression of visual information. Mid-level vision processes such as clustering, perceptual grouping, and separation of figure from ground are special kinds of graph/network transformations. They convert low-level image structure into a set of more abstract structures, which represent objects and the visual scene, making them easier to analyze by higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. Composition of network-symbolic models works similarly to frames and agents, combining learning, classification and analogy together with higher-level model-based reasoning into a single framework. Such models do not require supercomputers. Based on such principles, and using methods of computational intelligence, an image understanding system can convert images into network-symbolic knowledge models and effectively resolve uncertainty and ambiguity, providing a unifying representation for perception and cognition. This allows the creation of new intelligent computer vision systems for the robotic and defense industries.

  20. A new CAD approach for improving efficacy of cancer screening

    NASA Astrophysics Data System (ADS)

    Zheng, Bin; Qian, Wei; Li, Lihua; Pu, Jiantao; Kang, Yan; Lure, Fleming; Tan, Maxine; Qiu, Yuchen

    2015-03-01

    Since the performance and clinical utility of current computer-aided detection (CAD) schemes for detecting and classifying soft tissue lesions (e.g., breast masses and lung nodules) are not satisfactory, many researchers in the CAD field call for new CAD research ideas and approaches. The purpose of presenting this opinion paper is to share our vision and stimulate more discussion of how to overcome or compensate for the limitations of current lesion-detection-based CAD schemes in the CAD research community. Based on our observation that analyzing global image information plays an important role in radiologists' decision making, we hypothesized that targeted quantitative image features computed from global images could also provide highly discriminatory power, supplementary to the lesion-based information. To test our hypothesis, we recently performed a number of independent studies. Based on our published preliminary study results, we demonstrated that global mammographic image features and background parenchymal enhancement of breast MR images carried useful information to (1) predict near-term breast cancer risk based on negative screening mammograms, (2) distinguish between true- and false-positive recalls in mammography screening examinations, and (3) classify between malignant and benign breast MR examinations. The global case-based CAD scheme only warns of the risk level of a case, without cueing a large number of false-positive lesions. It can also be applied to guide lesion-based CAD cueing to reduce false-positives while enhancing clinically relevant true-positive cueing. However, before such a new CAD approach is clinically acceptable, more work is needed to optimize not only the scheme's performance but also its integration with lesion-based CAD schemes in clinical practice.

  1. Image gathering, coding, and processing: End-to-end optimization for efficient and robust acquisition of visual information

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.

    1990-01-01

    Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.

  2. Image acquisitions, processing and analysis in the process of obtaining characteristics of horse navicular bone

    NASA Astrophysics Data System (ADS)

    Zaborowicz, M.; Włodarek, J.; Przybylak, A.; Przybył, K.; Wojcieszak, D.; Czekała, W.; Ludwiczak, A.; Boniecki, P.; Koszela, K.; Przybył, J.; Skwarcz, J.

    2015-07-01

    The aim of this study was to investigate the possibility of using methods of computer image analysis for the assessment and classification of morphological variability and the state of health of the horse navicular bone. The assumption was that classification could be based on information contained in two-dimensional digital images of the navicular bone together with information on the horse's health. The first step in the research was to define the classes of analyzed bones, and then to use methods of computer image analysis to obtain characteristics from these images. These characteristics were correlated with data concerning the animal, such as: side of hooves, grade of navicular syndrome (scale 0-3), type, sex, age, weight, information about lace, and information about heel. This paper is an introduction to the study of using neural image analysis in the diagnosis of navicular bone syndrome. The prepared method can provide a starting point for studying a non-invasive way to assess the condition of the horse navicular bone.

  3. The Research of Feature Extraction Method of Liver Pathological Image Based on Multispatial Mapping and Statistical Properties

    PubMed Central

    Liu, Huiling; Xia, Bingbing; Yi, Dehui

    2016-01-01

    We propose a new feature extraction method for liver pathological images based on multispatial mapping and statistical properties. For liver pathological images with Hematein Eosin staining, the R and B channel images reflect the sensitivity of liver pathological images better, while the entropy space and Local Binary Pattern (LBP) space reflect the texture features of the image better. To obtain more comprehensive information, we map liver pathological images to the entropy space, LBP space, R space, and B space. The traditional Higher Order Local Autocorrelation Coefficients (HLAC) cannot reflect the overall information of the image, so we propose an average-corrected HLAC feature. We calculate the statistical properties and the average gray value of the pathological image and then update each pixel value as the absolute value of the difference between the current pixel gray value and the average gray value, which is more sensitive to gray value changes in pathological images. Lastly, the HLAC template is used to calculate the features of the updated image. The experimental results show that the improved multispatial-mapping features have better classification performance for liver cancer. PMID:27022407
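
    A small NumPy sketch of the average-correction step plus a few low-order HLAC-style features; the mask set and the wrap-around border handling are simplifications, not the paper's exact templates.

      import numpy as np

      def average_corrected(image):
          # Replace each pixel by |pixel - mean(image)| before HLAC computation.
          img = image.astype(float)
          return np.abs(img - img.mean())

      def hlac_low_order(image):
          # Order-0 feature: sum of pixel values; order-1 features: sums of products
          # of each pixel with one shifted neighbour (wrap-around borders).
          feats = [image.sum()]
          for dy, dx in [(0, 1), (1, 0), (1, 1), (1, -1)]:
              shifted = np.roll(np.roll(image, dy, axis=0), dx, axis=1)
              feats.append((image * shifted).sum())
          return np.array(feats)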

  4. 3D fingerprint imaging system based on full-field fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili

    2014-01-01

    As a unique, unchangeable and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on 2D features of the fingerprint. However, the fingerprint is a 3D biological characteristic. The mapping from 3D to 2D loses one dimension of information and causes nonlinear distortion of the captured fingerprint. Therefore, it is becoming more and more important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system based on the fringe projection technique is presented to obtain 3D features and the corresponding color texture information. A series of color sinusoidal fringe patterns with optimum three-fringe numbers are projected onto a finger surface. Viewed from another direction, the fringe patterns are deformed by the finger surface and captured by a CCD camera. 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper studies the prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, hardware design of the 3D imaging system, 3D calibration of the system, and software development. Some experiments are carried out by acquiring several 3D fingerprint data sets. The experimental results demonstrate the feasibility of the proposed 3D fingerprint imaging system.

  5. A Depth Map Generation Algorithm Based on Saliency Detection for 2D to 3D Conversion

    NASA Astrophysics Data System (ADS)

    Yang, Yizhong; Hu, Xionglou; Wu, Nengju; Wang, Pengfei; Xu, Dong; Rong, Shen

    2017-09-01

    In recent years, 3D movies have attracted more and more attention because of their immersive stereoscopic experience. However, 3D content is still insufficient, so estimating depth information from a video for 2D-to-3D conversion is increasingly important. In this paper, we present a novel algorithm to estimate depth information from a video via a scene classification algorithm. In order to obtain perceptually reliable depth information for viewers, the algorithm first classifies scenes into three categories: landscape type, close-up type, and linear perspective type. For the landscape type, we employ a specific algorithm to divide the image into many blocks and assign depth values using the relative-height cue of the image. For the close-up type, a saliency-based method is adopted to enhance the foreground in the image, and the method combines it with a global depth gradient to generate the final depth map. For the linear perspective type, by vanishing line detection, the calculated vanishing point, which is regarded as the farthest point from the viewer, is assigned the deepest depth value. According to the distance between the other points and the vanishing point, the entire image is assigned corresponding depth values. Finally, depth-image-based rendering is employed to generate stereoscopic virtual views after bilateral filtering. Experiments show that the proposed algorithm can achieve realistic 3D effects and yield satisfactory results, with perception scores of anaglyph images lying between 6.8 and 7.8.
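
    The two simplest depth cues mentioned above can be sketched as follows (values in [0, 1], where 1 denotes the farthest point); the linear ramp and the distance-to-vanishing-point mapping are illustrative assumptions, not the paper's exact assignment.

      import numpy as np

      def depth_from_relative_height(h, w):
          # Relative-height cue: rows near the top of the frame are assumed farther away.
          rows = np.linspace(1.0, 0.0, h)
          return np.tile(rows[:, None], (1, w))

      def depth_from_vanishing_point(h, w, vp_row, vp_col):
          # Linear-perspective cue: the vanishing point is the farthest point and
          # depth decreases with distance from it.
          yy, xx = np.mgrid[0:h, 0:w]
          dist = np.hypot(yy - vp_row, xx - vp_col)
          return 1.0 - dist / dist.max()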

  6. Remote Sensing Image Classification Applied to the First National Geographical Information Census of China

    NASA Astrophysics Data System (ADS)

    Yu, Xin; Wen, Zongyong; Zhu, Zhaorong; Xia, Qiang; Shun, Lan

    2016-06-01

    Image classification still has a long way to go, although it has been studied for almost half a century. Researchers have obtained many results in the image classification domain, but there is still a long distance between theory and practice. However, some new methods from the artificial intelligence domain will be absorbed into the image classification domain, and drawing on the strength of each to offset the weakness of the other will open up new prospects. Networks usually play the role of a high-level language, as is seen in artificial intelligence and statistics, because networks are used to build complex models from simple components. In recent years, Bayesian networks, a type of probabilistic network, have become a powerful data mining technique for handling uncertainty in complex domains. In this paper, we apply Tree Augmented Naive Bayesian networks (TAN) to texture classification of high-resolution remote sensing images and put forward a new method to construct the network topology structure in terms of training accuracy based on the training samples. In 2013, the Chinese government started the first national geographical information census project, which mainly interprets geographical information based on high-resolution remote sensing images. Therefore, this paper applies Bayesian networks to remote sensing image classification, in order to improve image interpretation in the first national geographical information census project. In the experiment, we chose some remote sensing images of Beijing. Experimental results demonstrate that TAN outperforms the Naive Bayesian Classifier (NBC) and the Maximum Likelihood Classification method (MLC) in overall classification accuracy. In addition, the proposed method can reduce the workload of field workers and improve work efficiency. Although it is time consuming, it is an attractive and effective method for assisting office-based image interpretation.

  7. Raman Imaging of Plant Cell Walls in Sections of Cucumis sativus

    PubMed Central

    Zeise, Ingrid; Heiner, Zsuzsanna; Holz, Sabine; Joester, Maike; Büttner, Carmen

    2018-01-01

    Raman microspectra combine information on chemical composition of plant tissues with spatial information. The contributions from the building blocks of the cell walls in the Raman spectra of plant tissues can vary in the microscopic sub-structures of the tissue. Here, we discuss the analysis of 55 Raman maps of root, stem, and leaf tissues of Cucumis sativus, using different spectral contributions from cellulose and lignin in both univariate and multivariate imaging methods. Imaging based on hierarchical cluster analysis (HCA) and principal component analysis (PCA) indicates different substructures in the xylem cell walls of the different tissues. Using specific signals from the cell wall spectra, analysis of the whole set of different tissue sections based on the Raman images reveals differences in xylem tissue morphology. Due to the specifics of excitation of the Raman spectra in the visible wavelength range (532 nm), which is, e.g., in resonance with carotenoid species, effects of photobleaching and the possibility of exploiting depletion difference spectra for molecular characterization in Raman imaging of plants are discussed. The reported results provide both, specific information on the molecular composition of cucumber tissue Raman spectra, and general directions for future imaging studies in plant tissues. PMID:29370089

  8. Raman Imaging of Plant Cell Walls in Sections of Cucumis sativus.

    PubMed

    Zeise, Ingrid; Heiner, Zsuzsanna; Holz, Sabine; Joester, Maike; Büttner, Carmen; Kneipp, Janina

    2018-01-25

    Raman microspectra combine information on chemical composition of plant tissues with spatial information. The contributions from the building blocks of the cell walls in the Raman spectra of plant tissues can vary in the microscopic sub-structures of the tissue. Here, we discuss the analysis of 55 Raman maps of root, stem, and leaf tissues of Cucumis sativus , using different spectral contributions from cellulose and lignin in both univariate and multivariate imaging methods. Imaging based on hierarchical cluster analysis (HCA) and principal component analysis (PCA) indicates different substructures in the xylem cell walls of the different tissues. Using specific signals from the cell wall spectra, analysis of the whole set of different tissue sections based on the Raman images reveals differences in xylem tissue morphology. Due to the specifics of excitation of the Raman spectra in the visible wavelength range (532 nm), which is, e.g., in resonance with carotenoid species, effects of photobleaching and the possibility of exploiting depletion difference spectra for molecular characterization in Raman imaging of plants are discussed. The reported results provide both, specific information on the molecular composition of cucumber tissue Raman spectra, and general directions for future imaging studies in plant tissues.

  9. Detail-enhanced multimodality medical image fusion based on gradient minimization smoothing filter and shearing filter.

    PubMed

    Liu, Xingbin; Mei, Wenbo; Du, Huiqian

    2018-02-13

    In this paper, a detail-enhanced multimodality medical image fusion algorithm is proposed using a proposed multi-scale joint decomposition framework (MJDF) and a shearing filter (SF). The MJDF, constructed with a gradient minimization smoothing filter (GMSF) and a Gaussian low-pass filter (GLF), is used to decompose source images into low-pass layers, edge layers, and detail layers at multiple scales. In order to highlight the detail information in the fused image, the edge layer and the detail layer at each scale are weighted and combined into a detail-enhanced layer. As a directional filter is effective in capturing salient information, the SF is applied to the detail-enhanced layer to extract geometrical features and obtain directional coefficients. A visual-saliency-map-based fusion rule is designed for fusing the low-pass layers, and the sum of standard deviations is used as the activity level measurement for directional coefficient fusion. The final fusion result is obtained by synthesizing the fused low-pass layers and directional coefficients. Experimental results show that the proposed method, with shift-invariance, directional selectivity, and a detail-enhancing property, is efficient in preserving and enhancing the detail information of multimodality medical images.
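
    A rough sketch of a multi-scale low-pass/detail split of the kind used above. It uses only Gaussian filters, whereas the paper combines a gradient minimization smoothing filter (GMSF) with the Gaussian low-pass filter and further separates edge layers; the shearing filter and the fusion rules are not shown.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def multiscale_decompose(img, sigmas=(1.0, 2.0, 4.0)):
          # Returns detail layers at several scales plus the final low-pass layer.
          current = img.astype(float)
          detail_layers = []
          for s in sigmas:
              low = gaussian_filter(current, sigma=s)
              detail_layers.append(current - low)   # detail at this scale
              current = low
          return detail_layers, current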

  10. Edge-Based Image Compression with Homogeneous Diffusion

    NASA Astrophysics Data System (ADS)

    Mainberger, Markus; Weickert, Joachim

    It is well-known that edges contain semantically important image information. In this paper we present a lossy compression method for cartoon-like images that exploits information at image edges. These edges are extracted with the Marr-Hildreth operator followed by hysteresis thresholding. Their locations are stored in a lossless way using JBIG. Moreover, we encode the grey or colour values at both sides of each edge by applying quantisation, subsampling and PAQ coding. In the decoding step, information outside these encoded data is recovered by solving the Laplace equation, i.e. we inpaint with the steady state of a homogeneous diffusion process. Our experiments show that the suggested method outperforms the widely-used JPEG standard and can even beat the advanced JPEG2000 standard for cartoon-like images.
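
    The decoding (inpainting) step, i.e. the steady state of homogeneous diffusion with the encoded edge values held fixed, can be sketched with a plain Jacobi iteration; the wrap-around borders and fixed iteration count are simplifications.

      import numpy as np

      def diffusion_inpaint(values, known_mask, n_iter=2000):
          # values: image with reliable data only where known_mask is True
          # (e.g. the stored grey values on both sides of each edge).
          u = values.astype(float).copy()
          for _ in range(n_iter):
              avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                            np.roll(u, 1, 1) + np.roll(u, -1, 1))
              u = np.where(known_mask, values, avg)   # keep known pixels fixed
          return u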

  11. Digital Imaging

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Digital Imaging is the computer processed numerical representation of physical images. Enhancement of images results in easier interpretation. Quantitative digital image analysis by Perceptive Scientific Instruments, locates objects within an image and measures them to extract quantitative information. Applications are CAT scanners, radiography, microscopy in medicine as well as various industrial and manufacturing uses. The PSICOM 327 performs all digital image analysis functions. It is based on Jet Propulsion Laboratory technology, is accurate and cost efficient.

  12. Nighttime images fusion based on Laplacian pyramid

    NASA Astrophysics Data System (ADS)

    Wu, Cong; Zhan, Jinhao; Jin, Jicheng

    2018-02-01

    This paper describes average-weighted fusion, image-pyramid fusion, and wavelet-transform fusion, and applies these methods to the fusion of multiple-exposure nighttime images. By calculating the information entropy and cross entropy of the fused images, we can evaluate the effect of the different fusion methods. Experiments showed that the Laplacian pyramid fusion algorithm is suitable for nighttime image fusion; it can reduce halos while preserving image details.
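
    A compact sketch of Laplacian-pyramid fusion of two exposures using OpenCV; the max-absolute-coefficient selection rule and the averaged coarsest level are common choices assumed here, not necessarily the rule used in the paper. Inputs are assumed to be equally sized 8-bit images.

      import cv2
      import numpy as np

      def laplacian_pyramid(img, levels=4):
          gp = [img.astype(np.float32)]
          for _ in range(levels):
              gp.append(cv2.pyrDown(gp[-1]))
          lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
                for i in range(levels)]
          lp.append(gp[-1])                       # coarsest (Gaussian) level
          return lp

      def fuse_exposures(img_a, img_b, levels=4):
          la, lb = laplacian_pyramid(img_a, levels), laplacian_pyramid(img_b, levels)
          fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(la[:-1], lb[:-1])]
          fused.append(0.5 * (la[-1] + lb[-1]))   # average the coarsest level
          out = fused[-1]
          for lvl in reversed(fused[:-1]):        # collapse the pyramid
              out = cv2.pyrUp(out, dstsize=(lvl.shape[1], lvl.shape[0])) + lvl
          return np.clip(out, 0, 255).astype(np.uint8)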

  13. In vivo bioluminescence tomography based on multi-view projection and 3D surface reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Shuang; Wang, Kun; Leng, Chengcai; Deng, Kexin; Hu, Yifang; Tian, Jie

    2015-03-01

    Bioluminescence tomography (BLT) is a powerful optical molecular imaging modality which enables non-invasive, real-time in vivo imaging as well as 3D quantitative analysis in preclinical studies. In order to solve the inverse problem and reconstruct inner light sources accurately, prior structural information is commonly necessary and is obtained from computed tomography or magnetic resonance imaging. This strategy requires an expensive hybrid imaging system, a complicated operation protocol and the possible involvement of ionizing radiation. The overall robustness highly depends on the fusion accuracy between the optical and structural information. In this study we present a pure optical bioluminescence tomographic system (POBTS) and a novel BLT method based on multi-view projection acquisition and 3D surface reconstruction. The POBTS acquired a sparse set of white light surface images and bioluminescent images of a mouse. The white light images were then applied to an approximate surface model to generate a high-quality textured 3D surface reconstruction of the mouse. After that we integrated the multi-view luminescent images based on the previous reconstruction, and applied an algorithm to calibrate and quantify the surface luminescent flux in 3D. Finally, the internal bioluminescence source reconstruction was achieved with this prior information. A BALB/c mouse model bearing a breast tumor of 4T1-fLuc cells was used to evaluate the performance of the new system and technique. Compared with the conventional hybrid optical-CT approach using the same inverse reconstruction method, the reconstruction accuracy of this technique was improved. The distance error between the actual and reconstructed internal source was decreased by 0.184 mm.

  14. An incompressible fluid flow model with mutual information for MR image registration

    NASA Astrophysics Data System (ADS)

    Tsai, Leo; Chang, Herng-Hua

    2013-03-01

    Image registration is one of the fundamental and essential tasks within image processing. It is the process of determining the correspondence between structures in two images, called the template image and the reference image, respectively. The challenge of registration is to find an optimal geometric transformation between corresponding image data. This paper develops a new MR image registration algorithm that uses a closed incompressible viscous fluid model associated with mutual information. In our approach, we treat the image pixels as the fluid elements of a viscous fluid flow governed by the nonlinear Navier-Stokes partial differential equation (PDE). We replace the pressure term with the body force mainly used to guide the transformation, with a weighting coefficient expressed by the mutual information between the template and reference images. To solve this modified Navier-Stokes PDE, we adopted the fast numerical techniques proposed by Seibold [1]. The registration process of updating the body force, the velocity and the deformation fields is repeated until the mutual information weight reaches a prescribed threshold. We applied our approach to the BrainWeb and real MR images. Consistent with the theory of the proposed fluid model, we found that our method accurately transformed the template images into the reference images based on the intensity flow. Experimental results indicate that our method has potential in a wide variety of medical image registration applications.
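
    For reference, the mutual information weight can be estimated from a joint histogram of the two images; this is a generic estimator written in NumPy, not the registration algorithm itself, and the bin count is an assumption.

      import numpy as np

      def mutual_information(template, reference, bins=64):
          hist, _, _ = np.histogram2d(template.ravel(), reference.ravel(), bins=bins)
          pxy = hist / hist.sum()                     # joint probability
          px = pxy.sum(axis=1, keepdims=True)         # marginal of the template
          py = pxy.sum(axis=0, keepdims=True)         # marginal of the reference
          nz = pxy > 0
          return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))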

  15. Association between pathology and texture features of multi parametric MRI of the prostate

    NASA Astrophysics Data System (ADS)

    Kuess, Peter; Andrzejewski, Piotr; Nilsson, David; Georg, Petra; Knoth, Johannes; Susani, Martin; Trygg, Johan; Helbich, Thomas H.; Polanec, Stephan H.; Georg, Dietmar; Nyholm, Tufve

    2017-10-01

    The role of multi-parametric (mp)MRI in the diagnosis and treatment of prostate cancer has increased considerably. An alternative to visual inspection of mpMRI is the evaluation using histogram-based (first order statistics) parameters and textural features (second order statistics). The aims of the present work were to investigate the relationship between benign and malignant sub-volumes of the prostate and textures obtained from mpMR images. The performance of tumor prediction was investigated based on the combination of histogram-based and textural parameters. Subsequently, the relative importance of mpMR images was assessed and the benefit of additional imaging analyzed. Finally, sub-structures based on the PI-RADS classification were investigated as potential regions to automatically detect malignant lesions. Twenty-five patients who received mpMRI prior to radical prostatectomy were included in the study. The imaging protocol included T2, DWI, and DCE. Delineation of tumor regions was performed based on pathological information. First and second order statistics were derived from each structure and for all image modalities. The resulting data were processed with multivariate analysis, using PCA (principal component analysis) and OPLS-DA (orthogonal partial least squares discriminant analysis) for separation of malignant and healthy tissue. PCA showed a clear difference between tumor and healthy regions in the peripheral zone for all investigated images. The predictive ability of the OPLS-DA models increased for all image modalities when first and second order statistics were combined. The predictive value reached a plateau after adding ADC and T2, and did not increase further with the addition of other image information. The present study indicates a distinct difference in the signatures between malignant and benign prostate tissue. This is an absolute prerequisite for automatic tumor segmentation, but only the first step in that direction. For the specific identified signature, DCE did not add complementary information to T2 and ADC maps.

  16. Magnetic Resonance-based Motion Correction for Quantitative PET in Simultaneous PET-MR Imaging.

    PubMed

    Rakvongthai, Yothin; El Fakhri, Georges

    2017-07-01

    Motion degrades image quality and quantitation of PET images, and is an obstacle to quantitative PET imaging. Simultaneous PET-MR offers a tool that can be used for correcting the motion in PET images by using anatomic information from MR imaging acquired concurrently. Motion correction can be performed by transforming a set of reconstructed PET images into the same frame or by incorporating the transformation into the system model and reconstructing the motion-corrected image. Several phantom and patient studies have validated that MR-based motion correction strategies have great promise for quantitative PET imaging in simultaneous PET-MR.

  17. Flood extent and water level estimation from SAR using data-model integration

    NASA Astrophysics Data System (ADS)

    Ajadi, O. A.; Meyer, F. J.

    2017-12-01

    Synthetic Aperture Radar (SAR) images have long been recognized as a valuable data source for flood mapping. Compared to other sources, SAR's weather and illumination independence and large area coverage at high spatial resolution supports reliable, frequent, and detailed observations of developing flood events. Accordingly, SAR has the potential to greatly aid in the near real-time monitoring of natural hazards, such as flood detection, if combined with automated image processing. This research works towards increasing the reliability and temporal sampling of SAR-derived flood hazard information by integrating information from multiple SAR sensors and SAR modalities (images and Interferometric SAR (InSAR) coherence) and by combining SAR-derived change detection information with hydrologic and hydraulic flood forecast models. First, the combination of multi-temporal SAR intensity images and coherence information for generating flood extent maps is introduced. The application of least-squares estimation integrates flood information from multiple SAR sensors, thus increasing the temporal sampling. SAR-based flood extent information will be combined with a Digital Elevation Model (DEM) to reduce false alarms and to estimate water depth and flood volume. The SAR-based flood extent map is assimilated into the Hydrologic Engineering Center River Analysis System (Hec-RAS) model to aid in hydraulic model calibration. The developed technology is improving the accuracy of flood information by exploiting information from data and models. It also provides enhanced flood information to decision-makers supporting the response to flood extent and improving emergency relief efforts.

  18. Brain CT image similarity retrieval method based on uncertain location graph.

    PubMed

    Pan, Haiwei; Li, Pengyuan; Li, Qing; Han, Qilong; Feng, Xiaoning; Gao, Linlin

    2014-03-01

    A large number of brain computed tomography (CT) images containing valuable information are stored in hospitals and should be shared to support computer-aided diagnosis systems. Finding similar brain CT images in a brain CT image database can effectively help doctors make diagnoses based on earlier cases. However, similarity retrieval for brain CT images requires much higher accuracy than for general images. In this paper, a new model of uncertain location graph (ULG) is presented for brain CT image modeling and similarity retrieval. According to the characteristics of brain CT images, we propose a novel method to model a brain CT image as a ULG based on brain CT image texture. Then, a scheme for ULG similarity retrieval is introduced. Furthermore, an effective index structure is applied to reduce the search time. Experimental results reveal that our method performs well on brain CT image similarity retrieval with high accuracy and efficiency.

  19. Novel Algorithm for Classification of Medical Images

    NASA Astrophysics Data System (ADS)

    Bhushan, Bharat; Juneja, Monika

    2010-11-01

    Content-based image retrieval (CBIR) methods in medical image databases have been designed to support specific tasks, such as retrieval of medical images. These methods cannot be transferred to other medical applications since different imaging modalities require different types of processing. To enable content-based queries in diverse collections of medical images, the retrieval system must be familiar with the current image class prior to query processing. Further, almost all such systems deal only with the DICOM imaging format. In this paper, a novel algorithm for classifying medical images according to their modality, based on energy information obtained from the wavelet transform, is described. For this, two types of wavelets have been used, and it is shown that the energy obtained in either case is quite distinct for each body part. This technique can be successfully applied to different image formats. Results are shown for the JPEG imaging format.
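
    A short sketch of wavelet-energy features of the kind described above, using PyWavelets; the wavelet family, decomposition level and normalisation are assumptions, not the paper's settings.

      import numpy as np
      import pywt

      def wavelet_energy_features(image, wavelet="db2", level=3):
          # Per-subband energies of a 2-D wavelet decomposition, normalised to sum to 1.
          coeffs = pywt.wavedec2(image.astype(float), wavelet=wavelet, level=level)
          energies = [np.sum(coeffs[0] ** 2)]              # approximation subband
          for (cH, cV, cD) in coeffs[1:]:                  # detail subbands per level
              energies += [np.sum(cH ** 2), np.sum(cV ** 2), np.sum(cD ** 2)]
          energies = np.array(energies)
          return energies / energies.sum()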

  20. Digital Image Compression Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Serra-Ricart, M.; Garrido, L.; Gaitan, V.; Aloy, A.

    1993-01-01

    The problem of storing, transmitting, and manipulating digital images is considered. Because of the file sizes involved, large amounts of digitized image information are becoming common in modern projects. Our goal is to describe an image compression transform coder based on artificial neural network techniques (NNCTC). A comparison of the compression results obtained from digital astronomical images by the NNCTC and by the method used in the compression of the digitized sky survey from the Space Telescope Science Institute, based on the H-transform, is performed in order to assess the reliability of the NNCTC.

  1. Neural network-based feature point descriptors for registration of optical and SAR images

    NASA Astrophysics Data System (ADS)

    Abulkhanov, Dmitry; Konovalenko, Ivan; Nikolaev, Dmitry; Savchik, Alexey; Shvets, Evgeny; Sidorchuk, Dmitry

    2018-04-01

    Registration of images of different natures is an important technique used in image fusion, change detection, efficient information representation and other problems of computer vision. Solving this task with feature-based approaches is usually more complex than registering several optical images because traditional feature descriptors (SIFT, SURF, etc.) perform poorly when the images have different natures. In this paper we consider the problem of registration of SAR and optical images. We train a neural network to build feature point descriptors and use the RANSAC algorithm to align the found matches. Experimental results are presented that confirm the method's effectiveness.
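
    Once descriptors are available (from a neural network or any other extractor), the matching-plus-RANSAC stage can be sketched with OpenCV; the brute-force matcher and the homography motion model are illustrative assumptions.

      import cv2
      import numpy as np

      def align_with_ransac(kp_opt, desc_opt, kp_sar, desc_sar, reproj_thresh=3.0):
          # kp_*: lists of cv2.KeyPoint; desc_*: float32 descriptor arrays (N x D).
          matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
          matches = matcher.match(desc_opt, desc_sar)
          src = np.float32([kp_opt[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
          dst = np.float32([kp_sar[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
          # RANSAC rejects wrong correspondences and estimates the aligning transform.
          H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)
          return H, inlier_mask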

  2. Biometric image enhancement using decision rule based image fusion techniques

    NASA Astrophysics Data System (ADS)

    Sagayee, G. Mary Amirtha; Arumugam, S.

    2010-02-01

    Introducing biometrics into information systems may result in considerable benefits. Most researchers confirm that the fingerprint is more widely used than the iris or face, and moreover it is the primary choice for most privacy-concerned applications. For fingerprint applications, choosing the proper sensor is a challenge. The proposed work deals with how image quality can be improved by introducing an image fusion technique at the sensor level. The resulting images, after applying the decision-rule-based image fusion technique, are evaluated and analyzed with respect to their entropy levels and root mean square error.

  3. A Novel Bit-level Image Encryption Method Based on Chaotic Map and Dynamic Grouping

    NASA Astrophysics Data System (ADS)

    Zhang, Guo-Ji; Shen, Yan

    2012-10-01

    In this paper, a novel bit-level image encryption method based on dynamic grouping is proposed. In the proposed method, the plain-image is divided into several groups randomly, and then a permutation-diffusion process at the bit level is carried out. The keystream generated by the logistic map is related to the plain-image, which confuses the relationship between the plain-image and the cipher-image. The computer simulation results of statistical analysis, information entropy analysis and sensitivity analysis show that the proposed encryption method is secure and reliable enough to be used in communication applications.
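
    A toy sketch of logistic-map keystream generation and a simple XOR diffusion stage; the map parameters are arbitrary, and the plain-image-dependent keying, dynamic grouping and bit-level permutation of the actual scheme are omitted.

      import numpy as np

      def logistic_keystream(length, x0=0.31415926, r=3.99):
          # Byte keystream from the logistic map x_{n+1} = r * x_n * (1 - x_n).
          x, out = x0, np.empty(length, dtype=np.uint8)
          for i in range(length):
              x = r * x * (1.0 - x)
              out[i] = int(x * 256) % 256
          return out

      def xor_diffuse(image, key):
          # Diffusion stage: XOR the flattened 8-bit image with the keystream.
          flat = image.reshape(-1)
          return np.bitwise_xor(flat, key[:flat.size]).reshape(image.shape)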

  4. Mutual information based feature selection for medical image retrieval

    NASA Astrophysics Data System (ADS)

    Zhi, Lijia; Zhang, Shaomin; Li, Yan

    2018-04-01

    In this paper, the authors propose a mutual-information-based method for lung CT image retrieval. The method is designed to adapt to different datasets and different retrieval tasks. For practical applicability, the method avoids using a large amount of training data. Instead, with a well-designed training process and robust fundamental features and measurements, the method achieves promising performance while keeping the training computation economical. Experimental results show that the method has potential practical value for routine clinical application.

  5. Ground-based full-sky imaging polarimeter based on liquid crystal variable retarders.

    PubMed

    Zhang, Ying; Zhao, Huijie; Song, Ping; Shi, Shaoguang; Xu, Wujian; Liang, Xiao

    2014-04-07

    A ground-based full-sky imaging polarimeter based on liquid crystal variable retarders (LCVRs) is proposed in this paper. The proposed method can be used to realize rapid detection of skylight polarization information with a hemispherical field of view in the visible band. The characteristics of the incidence angle of light on the LCVR are investigated, based on electrically controlled birefringence. Then, the imaging polarimeter with a hemispherical field of view is designed. Furthermore, a polarization calibration method with field-of-view multiplexing and piecewise linear fitting is proposed, based on the rotational symmetry of the polarimeter. The polarization calibration of the polarimeter is implemented over the hemispherical field of view. The imaging polarimeter is evaluated in an experiment detecting the skylight image. The consistency between the experimentally obtained distribution of polarization angle and that predicted by the Rayleigh scattering model is 90%, which confirms the effectiveness of the proposed imaging polarimeter.

  6. Reducing Interpolation Artifacts for Mutual Information Based Image Registration

    PubMed Central

    Soleimani, H.; Khosravifard, M.A.

    2011-01-01

    Medical image registration methods that use mutual information as a similarity measure have been improved in recent decades. Mutual information is a basic concept of information theory which indicates the dependency of two random variables (or two images). In order to evaluate the mutual information of two images, their joint probability distribution is required. Several interpolation methods, such as Partial Volume (PV) and bilinear interpolation, are used to estimate the joint probability distribution. Both of these methods yield artifacts in the mutual information function. The Partial Volume-Hanning window (PVH) and Generalized Partial Volume (GPV) methods were introduced to remove such artifacts. In this paper we show that the acceptable performance of these methods is not due to their kernel function; it is due to the number of pixels incorporated in the interpolation. Since using more pixels requires a more complex and time-consuming interpolation process, we propose a new interpolation method which uses only four pixels (the same as PV and bilinear interpolation) and removes most of the artifacts. Experimental results on the registration of Computed Tomography (CT) images show the superiority of the proposed scheme. PMID:22606673

  7. Reversible Data Hiding Based on DNA Computing

    PubMed Central

    Xie, Yingjie

    2017-01-01

    Biocomputing, and especially DNA computing, has developed greatly. It is widely used in information security. In this paper, a novel algorithm for reversible data hiding based on DNA computing is proposed. Inspired by histogram modification, a classical algorithm for reversible data hiding, we combine it with DNA computing to realize the algorithm using biological technology. Compared with previous results, our experimental results significantly improve the embedding rate (ER). Furthermore, the peak signal-to-noise ratios (PSNR) of some test images are also improved. Experimental results show that the method is suitable for protecting the copyright of the cover image in DNA-based information security. PMID:28280504
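
    A minimal sketch of the underlying histogram-modification embedding (without the DNA-computing realization); it assumes an 8-bit grayscale image whose peak gray level is below 255, a (near-)empty bin above the peak, and enough peak pixels to carry the payload.

      import numpy as np

      def embed_histogram_shift(img, bits):
          # Classic histogram-shifting embedding; returns the marked image plus the
          # peak/zero values needed for extraction and exact recovery.
          hist = np.bincount(img.ravel(), minlength=256)
          peak = int(np.argmax(hist))                          # most frequent level
          zero = int(np.argmin(hist[peak + 1:])) + peak + 1    # empty bin above it
          out = img.astype(np.int32)
          out[(out > peak) & (out < zero)] += 1                # shift to make room
          flat = out.ravel()
          carriers = np.flatnonzero(flat == peak)[:len(bits)]
          flat[carriers] += np.asarray(bits, dtype=np.int32)   # 1 -> peak+1, 0 -> peak
          return flat.reshape(img.shape).astype(np.uint8), peak, zero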

  8. Image processing and analysis using neural networks for optometry area

    NASA Astrophysics Data System (ADS)

    Netto, Antonio V.; Ferreira de Oliveira, Maria C.

    2002-11-01

    In this work we describe the framework of a functional system for processing and analyzing images of the human eye acquired with the Hartmann-Shack (HS) technique, in order to extract information for formulating a diagnosis of eye refractive errors (astigmatism, hypermetropia and myopia). The analysis is to be carried out using an artificial intelligence system based on neural nets, fuzzy logic and classifier combination. The major goal is to establish the basis of a new technology to effectively measure ocular refractive errors that is based on methods alternative to those adopted in current patented systems. Moreover, analysis of images acquired with the Hartmann-Shack technique may enable the extraction of additional information on the health of an eye under exam from the same image used to detect refraction errors.

  9. Terrain clutter simulation using physics-based scattering model and digital terrain profile data

    NASA Astrophysics Data System (ADS)

    Park, James; Johnson, Joel T.; Ding, Kung-Hau; Kim, Kristopher; Tenbarge, Joseph

    2015-05-01

    Localization of a wireless capsule endoscope finds many clinical applications, from diagnostics to therapy. There are potentially two approaches to electromagnetic-wave-based localization: a) signal propagation model based localization using a priori information about the person's dielectric channels, and b) recently developed microwave imaging based localization without using any a priori information about the person's dielectric channels. In this paper, we study the second approach in terms of a variety of frequencies and signal-to-noise ratios for localization accuracy. To this end, we select a 2-D anatomically realistic numerical phantom for microwave imaging at different frequencies. The selected frequencies are 13.56 MHz, 431.5 MHz, 920 MHz, and 2380 MHz, which are typically considered for medical applications. Microwave imaging of a phantom will provide us with an electromagnetic model with electrical properties (relative permittivity and conductivity) of the internal parts of the body and can be useful as a foundation for localization of an in-body RF source. Low-frequency imaging at 13.56 MHz provides a low-resolution image with high contrast in the dielectric properties. However, at high frequencies, the imaging algorithm is able to image only the outer boundaries of the tissues due to low penetration depth, as higher frequency means higher attenuation. Furthermore, a recently developed localization method based on microwave imaging is used for estimating the localization accuracy at different frequencies and signal-to-noise ratios. Statistical evaluation of the localization error is performed using the cumulative distribution function (CDF). Based on our results, we conclude that the localization accuracy is minimally affected by the frequency or the noise. However, the choice of frequency will become critical if the purpose of the method is to image the internal parts of the body for tumor and/or cancer detection.

  10. Precise and Efficient Retrieval of Captioned Images: The MARIE Project.

    ERIC Educational Resources Information Center

    Rowe, Neil C.

    1999-01-01

    The MARIE project explores knowledge-based information retrieval of captioned images of the kind found in picture libraries and on the Internet. MARIE's five-part approach exploits the idea that images are easier to understand with context, especially descriptive text near them, but it also does image analysis. Experiments show MARIE prototypes…

  11. Convolution Operation of Optical Information via Quantum Storage

    NASA Astrophysics Data System (ADS)

    Li, Zhixiang; Liu, Jianji; Fan, Hongming; Zhang, Guoquan

    2017-06-01

    We propose a novel method to achieve optical convolution of two input images via quantum storage based on the electromagnetically induced transparency (EIT) effect. By placing an EIT medium in the confocal Fourier plane of the 4f-imaging system, the optical convolution of the two input images can be achieved in the image plane.

  12. ATM experiment S-056 image processing requirements definition

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A plan is presented for satisfying the image data processing needs of the S-056 Apollo Telescope Mount experiment. The report is based on information gathered from related technical publications, consultation with numerous image processing experts, and the experience gained in working on related image processing tasks over a two-year period.

  13. 3D automatic anatomy recognition based on iterative graph-cut-ASM

    NASA Astrophysics Data System (ADS)

    Chen, Xinjian; Udupa, Jayaram K.; Bagci, Ulas; Alavi, Abass; Torigian, Drew A.

    2010-02-01

    We call the computerized assistive process of recognizing, delineating, and quantifying organs and tissue regions in medical imaging, occurring automatically during clinical image interpretation, automatic anatomy recognition (AAR). The AAR system we are developing includes five main parts: model building, object recognition, object delineation, pathology detection, and organ system quantification. In this paper, we focus on the delineation part. For the modeling part, we employ the active shape model (ASM) strategy. For recognition and delineation, we integrate several hybrid strategies of combining purely image based methods with ASM. In this paper, an iterative Graph-Cut ASM (IGCASM) method is proposed for object delineation. An algorithm called GC-ASM was presented at this symposium last year for object delineation in 2D images which attempted to combine synergistically ASM and GC. Here, we extend this method to 3D medical image delineation. The IGCASM method effectively combines the rich statistical shape information embodied in ASM with the globally optimal delineation capability of the GC method. We propose a new GC cost function, which effectively integrates the specific image information with the ASM shape model information. The proposed methods are tested on a clinical abdominal CT data set. The preliminary results show that: (a) it is feasible to explicitly bring prior 3D statistical shape information into the GC framework; (b) the 3D IGCASM delineation method improves on ASM and GC and can provide practical operational time on clinical images.

  14. Interferometric inverse synthetic aperture radar imaging for space targets based on wideband direct sampling using two antennas

    NASA Astrophysics Data System (ADS)

    Tian, Biao; Liu, Yang; Xu, Shiyou; Chen, Zengping

    2014-01-01

    Interferometric inverse synthetic aperture radar (InISAR) imaging provides complementary information to monostatic inverse synthetic aperture radar (ISAR) imaging. This paper proposes a new InISAR imaging system for space targets based on wideband direct sampling using two antennas. The system is easy to realize in engineering since the motion trajectory of space targets can be known in advance, and it is simpler than a system with three receivers. In the preprocessing step, high-speed movement compensation is carried out by designing an adaptive matched filter containing the speed, which is obtained from the narrow-band information. Then, coherent processing and the keystone transform for ISAR imaging are adopted to preserve the phase history of each antenna. Through appropriate collocation of the system, image registration and phase unwrapping can be avoided. When this condition is not satisfied, the influence of baseline variation is analyzed and a compensation method is adopted. The corresponding target size can be obtained by interferometric processing of the two complex ISAR images. Experimental results prove the validity of the analysis and of the three-dimensional imaging algorithm.

  15. Decomposition-based transfer distance metric learning for image classification.

    PubMed

    Luo, Yong; Liu, Tongliang; Tao, Dacheng; Xu, Chao

    2014-09-01

    Distance metric learning (DML) is a critical factor for image analysis and pattern recognition. To learn a robust distance metric for a target task, we need abundant side information (i.e., the similarity/dissimilarity pairwise constraints over the labeled data), which is usually unavailable in practice due to the high labeling cost. This paper considers the transfer learning setting by exploiting the large quantity of side information from certain related, but different source tasks to help with target metric learning (with only a little side information). The state-of-the-art metric learning algorithms usually fail in this setting because the data distributions of the source task and target task are often quite different. We address this problem by assuming that the target distance metric lies in the space spanned by the eigenvectors of the source metrics (or other randomly generated bases). The target metric is represented as a combination of the base metrics, which are computed using the decomposed components of the source metrics (or simply a set of random bases); we call the proposed method decomposition-based transfer DML (DTDML). In particular, DTDML learns a sparse combination of the base metrics to construct the target metric by forcing the target metric to be close to an integration of the source metrics. The main advantage of the proposed method compared with existing transfer metric learning approaches is that we directly learn the base metric coefficients instead of the target metric. To this end, far fewer variables need to be learned. We therefore obtain more reliable solutions given the limited side information, and the optimization tends to be faster. Experiments on the popular handwritten image (digit, letter) classification and challenging natural image annotation tasks demonstrate the effectiveness of the proposed method.

  16. Evidential analysis of difference images for change detection of multitemporal remote sensing images

    NASA Astrophysics Data System (ADS)

    Chen, Yin; Peng, Lijuan; Cremers, Armin B.

    2018-03-01

    In this article, we develop two methods for unsupervised change detection in multitemporal remote sensing images based on Dempster-Shafer's theory of evidence (DST). In most unsupervised change detection methods, the probability distribution of the difference image is assumed to be characterized by mixture models, whose parameters are estimated by the expectation maximization (EM) method. However, the main drawback of the EM method is that it does not consider spatial contextual information, which may entail rather noisy detection results with numerous spurious alarms. To remedy this, we first develop an evidence theory based EM method (EEM) which incorporates spatial contextual information in EM by iteratively fusing the belief assignments of neighboring pixels to the central pixel. Second, an evidential labeling method in the sense of maximizing a posteriori probability (MAP) is proposed in order to further enhance the detection result. It first uses the parameters estimated by EEM to initialize the class labels of a difference image. Then it iteratively fuses class conditional information and spatial contextual information, and updates labels and class parameters. Finally it converges to a fixed state which gives the detection result. A simulated image set and two real remote sensing data sets are used to evaluate the two evidential change detection methods. Experimental results show that the new evidential methods are comparable to other prevalent methods in terms of total error rate.
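    The fusion step of EEM, combining the belief assignment of a neighbouring pixel with that of the central pixel, uses Dempster's rule of combination. A minimal two-class sketch (frame {change, no-change} plus the ignorance mass on the whole frame) follows; the neighbourhood weighting and discounting used in the paper are not reproduced.

```python
import numpy as np

def dempster_combine(m1, m2):
    """Combine two mass functions m = (m_change, m_nochange, m_unknown)
    over the frame {change, no-change} with Dempster's rule."""
    c1, n1, u1 = m1
    c2, n2, u2 = m2
    conflict = c1 * n2 + n1 * c2                 # mass assigned to the empty set
    k = 1.0 - conflict                           # normalization constant
    m_change = (c1 * c2 + c1 * u2 + u1 * c2) / k
    m_nochange = (n1 * n2 + n1 * u2 + u1 * n2) / k
    m_unknown = (u1 * u2) / k
    return np.array([m_change, m_nochange, m_unknown])

# Example: a confident "change" pixel fused with an uncertain neighbour
print(dempster_combine((0.7, 0.1, 0.2), (0.3, 0.2, 0.5)))
```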

  17. Novel methods for parameter-based analysis of myocardial tissue in MR images

    NASA Astrophysics Data System (ADS)

    Hennemuth, A.; Behrens, S.; Kuehnel, C.; Oeltze, S.; Konrad, O.; Peitgen, H.-O.

    2007-03-01

    The analysis of myocardial tissue with contrast-enhanced MR yields multiple parameters, which can be used to classify the examined tissue. Perfusion images are often distorted by motion, while late enhancement images are acquired with a different size and resolution. Therefore, it is common to reduce the analysis to a visual inspection, or to the examination of parameters related to the 17-segment-model proposed by the American Heart Association (AHA). As this simplification comes along with a considerable loss of information, our purpose is to provide methods for a more accurate analysis regarding topological and functional tissue features. In order to achieve this, we implemented registration methods for the motion correction of the perfusion sequence and the matching of the late enhancement information onto the perfusion image and vice versa. For the motion corrected perfusion sequence, vector images containing the voxel enhancement curves' semi-quantitative parameters are derived. The resulting vector images are combined with the late enhancement information and form the basis for the tissue examination. For the exploration of data we propose different modes: the inspection of the enhancement curves and parameter distribution in areas automatically segmented using the late enhancement information, the inspection of regions segmented in parameter space by user defined threshold intervals and the topological comparison of regions segmented with different settings. Results showed a more accurate detection of distorted regions in comparison to the AHA-model-based evaluation.

  18. Correlative super-resolution fluorescence microscopy combined with optical coherence microscopy

    NASA Astrophysics Data System (ADS)

    Kim, Sungho; Kim, Gyeong Tae; Jang, Soohyun; Shim, Sang-Hee; Bae, Sung Chul

    2015-03-01

    Recent development of super-resolution fluorescence imaging techniques such as stochastic optical reconstruction microscopy (STORM) and photoactivated localization microscopy (PALM) has brought us beyond the diffraction limit. This opens numerous opportunities in biology because a vast number of molecular structures formerly obscured by the lack of spatial resolution can now be directly observed. A drawback of fluorescence imaging, however, is that it lacks complete structural information. For this reason, we have developed a super-resolution multimodal imaging system based on STORM and full-field optical coherence microscopy (FF-OCM). FF-OCM is a type of interferometry system based on a broadband light source and a bulk Michelson interferometer, which provides label-free and non-invasive visualization of biological samples. The integration of the two systems is simple because both use a wide-field illumination scheme and a conventional microscope. This combined imaging system gives us both functional information at the molecular level (~20 nm) and structural information at the sub-cellular level (~1 μm). For thick samples such as tissue slices, while FF-OCM is readily capable of imaging the 3D architecture, STORM suffers from aberrations and high background fluorescence that substantially degrade the resolution. In order to correct the aberrations in thick tissues, we employed an adaptive optics system in the detection path of the STORM microscope. We used our multimodal system to obtain images of brain tissue samples with structural and functional information.

  19. Digital focusing of OCT images based on scalar diffraction theory and information entropy

    PubMed Central

    Liu, Guozhong; Zhi, Zhongwei; Wang, Ruikang K.

    2012-01-01

    This paper describes a digital method that is capable of automatically focusing optical coherence tomography (OCT) en face images without prior knowledge of the point spread function of the imaging system. The method utilizes a scalar diffraction model to simulate wave propagation from out-of-focus scatter to the focal plane, from which the propagation distance between the out-of-focus plane and the focal plane is determined automatically via an image-definition-evaluation criterion based on information entropy theory. By use of the proposed approach, we demonstrate that the lateral resolution close to that at the focal plane can be recovered from the imaging planes outside the depth of field region with minimal loss of resolution. Fresh onion tissues and mouse fat tissues are used in the experiments to show the performance of the proposed method. PMID:23162717
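    A hedged sketch of the two ingredients named above, scalar-diffraction (angular spectrum) propagation of a complex en face field and an entropy-based sharpness criterion, is given below. The wavelength, pixel pitch and search range are placeholders, and selecting the minimum-entropy plane is one common convention rather than necessarily the paper's exact image-definition criterion.

```python
import numpy as np

def angular_spectrum(field, dz, wavelength, dx):
    """Propagate a complex field by a distance dz with the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    k = 2 * np.pi / wavelength
    kz_sq = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))          # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

def image_entropy(intensity):
    """Shannon entropy of the normalized intensity image."""
    p = intensity / intensity.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def autofocus(field, wavelength=1.3e-6, dx=5e-6,
              dz_range=np.linspace(-2e-3, 2e-3, 81)):
    """Pick the propagation distance that optimizes the entropy criterion."""
    scores = [image_entropy(np.abs(angular_spectrum(field, dz, wavelength, dx))**2)
              for dz in dz_range]
    best = dz_range[int(np.argmin(scores))]       # assumed: lower entropy = sharper
    return best, angular_spectrum(field, best, wavelength, dx)
```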

  20. Multimode nonlinear optical imaging of the dermis in ex vivo human skin based on the combination of multichannel mode and Lambda mode.

    PubMed

    Zhuo, Shuangmu; Chen, Jianxin; Luo, Tianshu; Zou, Dingsong

    2006-08-21

    A multimode nonlinear optical imaging technique based on the combination of multichannel mode and Lambda mode is developed to investigate human dermis. Our findings show that this technique not only improves the image contrast of the structural proteins of the extracellular matrix (ECM) but also provides an image-guided spectral analysis method to identify both cellular and ECM intrinsic components including collagen, elastin, NAD(P)H and flavin. By the combined use of the multichannel mode and the Lambda mode in tandem, the obtained in-depth two-photon-excited fluorescence (TPEF) and second-harmonic generation (SHG) imaging, together with the depth-dependent decay of the TPEF/SHG signals, can offer a sensitive tool for obtaining quantitative tissue structural and biochemical information. These results suggest that the technique has the potential to provide more accurate information for determining tissue physiological and pathological states.

  1. Hybrid registration of PET/CT in thoracic region with pre-filtering PET sinogram

    NASA Astrophysics Data System (ADS)

    Mokri, S. S.; Saripan, M. I.; Marhaban, M. H.; Nordin, A. J.; Hashim, S.

    2015-11-01

    The integration of physiological (PET) and anatomical (CT) images in cancer delineation requires an accurate spatial registration technique. Although a hybrid PET/CT scanner is used to co-register these images, significant misregistrations exist due to patient and respiratory/cardiac motions. This paper proposes a hybrid feature-intensity based registration technique for the hybrid PET/CT scanner. First, the simulated PET sinogram was filtered with a 3D hybrid mean-median filter before reconstructing the image. Features were then derived from the segmented structures (lung, heart and tumor) in both images. Registration was performed based on a modified multi-modality demons registration with a multiresolution scheme. Apart from visually observed improvements, the proposed registration technique increased the normalized mutual information (NMI) index between the PET and CT images after registration. All nine tested datasets show larger improvements in the mutual information (MI) index than the free-form deformation (FFD) registration technique, with the highest MI increase being 25%.
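    The NMI index used above to quantify the registration improvement can be computed from a joint histogram of the two images; a minimal sketch follows (the bin count is an arbitrary choice).

```python
import numpy as np

def normalized_mutual_information(img_a, img_b, bins=64):
    """NMI = (H(A) + H(B)) / H(A, B) computed from a joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)                          # marginal of image A
    py = pxy.sum(axis=0)                          # marginal of image B

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```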

  2. Use of laser range finders and range image analysis in automated assembly tasks

    NASA Technical Reports Server (NTRS)

    Alvertos, Nicolas; Dcunha, Ivan

    1990-01-01

    A study is proposed of the effect of filtering processes on range images and of the performance of two different laser range mappers. Median filtering is utilized to remove noise from the range images. First- and second-order derivatives are then utilized to locate the similarities and dissimilarities between the processed and the original images. Range depth information is converted into spatial coordinates, and a set of coefficients which describe 3-D objects is generated using the algorithm developed in the second phase of this research. Range images of spheres and cylinders are used for experimental purposes. An algorithm is developed to compare the performance of the two laser range mappers based upon the range depth information of surfaces generated by each mapper. Furthermore, an approach based on 2-D analytic geometry is also proposed which serves as a basis for the recognition of regular 3-D geometric objects.

  3. A Nonlinear Diffusion Equation-Based Model for Ultrasound Speckle Noise Removal

    NASA Astrophysics Data System (ADS)

    Zhou, Zhenyu; Guo, Zhichang; Zhang, Dazhi; Wu, Boying

    2018-04-01

    Ultrasound images are contaminated by speckle noise, which brings difficulties in further image analysis and clinical diagnosis. In this paper, we address this problem from the viewpoint of nonlinear diffusion equation theory. We develop a nonlinear diffusion equation-based model by taking into account not only the gradient information of the image, but also the information of the gray levels of the image. By utilizing the region indicator as the variable exponent, we can adaptively control the diffusion type, which alternates between the Perona-Malik diffusion and the Charbonnier diffusion according to the image gray levels. Furthermore, we analyze the theoretical and numerical properties of the proposed model. Experiments show that the proposed method achieves much better speckle suppression and edge preservation than traditional despeckling methods, especially in low gray level and low-contrast regions.
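    One way to realize a diffusivity that alternates between Perona-Malik and Charbonnier behaviour is to make the exponent of a common diffusivity form depend on a gray-level region indicator (an exponent of 1 recovers Perona-Malik, 0.5 recovers Charbonnier). The explicit update below is only an illustrative discretization with assumed parameters and an assumed indicator mapping, not the authors' scheme.

```python
import numpy as np

def adaptive_diffusion_step(u, K=10.0, dt=0.1):
    """One explicit step of variable-exponent diffusion on a grayscale image u."""
    # Region indicator from gray level: bright regions -> exponent near 1
    # (Perona-Malik), dark regions -> near 0.5 (Charbonnier). Illustrative mapping.
    g = (u - u.min()) / (np.ptp(u) + 1e-12)
    alpha = 0.5 + 0.5 * g
    # Differences with the four neighbours
    dn = np.roll(u, -1, axis=0) - u
    ds = np.roll(u, 1, axis=0) - u
    de = np.roll(u, -1, axis=1) - u
    dw = np.roll(u, 1, axis=1) - u

    def c(d):                                   # variable-exponent diffusivity
        return (1.0 + (d / K) ** 2) ** (-alpha)

    return u + dt * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
```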

  4. Image-based tracking of the suturing needle during laparoscopic interventions

    NASA Astrophysics Data System (ADS)

    Speidel, S.; Kroehnert, A.; Bodenstedt, S.; Kenngott, H.; Müller-Stich, B.; Dillmann, R.

    2015-03-01

    One of the most complex and difficult tasks for surgeons during minimally invasive interventions is suturing. A prerequisite to assist the suturing process is the tracking of the needle. The endoscopic images provide a rich source of information which can be used for needle tracking. In this paper, we present an image-based method for markerless needle tracking. The method uses a color-based and geometry-based segmentation to detect the needle. Once an initial needle detection is obtained, a region of interest enclosing the extracted needle contour is passed on to a reduced segmentation. It is evaluated with in vivo images from da Vinci interventions.

  5. High-performance technology for indexing of high volumes of Earth remote sensing data

    NASA Astrophysics Data System (ADS)

    Strotov, Valery V.; Taganov, Alexander I.; Kolesenkov, Aleksandr N.; Kostrov, Boris V.

    2017-10-01

    The present paper suggests a technology for the search, indexing, cataloging and distribution of aerospace images on the basis of a geo-information approach combined with cluster and spectral analysis. It considers the information and algorithmic support of the system. A functional diagram of the system and the structure of the geographical database have been developed on the basis of the geographical online portal technology. Taking into account the heterogeneity of information obtained from various sources, it is reasonable to apply a geoinformation platform that allows analyzing the spatial location of objects and territories and performing complex processing of information. The geoinformation platform is based on cartographic fundamentals with a uniform coordinate system, the geographical database, and a set of algorithms and program modules for the execution of various tasks. A technology is also suggested that allows individual users and companies to add images, taken with professional and amateur devices and processed by various software tools, to the archive. Complex usage of visual and instrumental approaches allows significantly expanding the application area of Earth remote sensing data. Development and implementation of new algorithms based on the complex usage of new methods for processing structured and unstructured data of high volumes will increase the periodicity and rate of data updating. The paper shows that application of the original algorithms for search, indexing and cataloging of aerospace images will provide easy access to information spread across hundreds of suppliers and increase the access rate to aerospace images by up to 5 times in comparison with current analogues.

  6. Image Alignment for Multiple Camera High Dynamic Range Microscopy.

    PubMed

    Eastwood, Brian S; Childs, Elisabeth C

    2012-01-09

    This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera.

  7. Image Alignment for Multiple Camera High Dynamic Range Microscopy

    PubMed Central

    Eastwood, Brian S.; Childs, Elisabeth C.

    2012-01-01

    This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera. PMID:22545028
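    A minimal sketch of feature-descriptor based alignment of two differently exposed (or radiant-power) images, here with ORB features and a RANSAC homography from OpenCV, is shown below; it stands in for, rather than reproduces, the specific exposure-robust techniques evaluated in the two records above.

```python
import cv2
import numpy as np

def align_by_features(moving, reference):
    """Warp `moving` onto `reference` using ORB descriptors and a RANSAC homography."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(moving, None)
    kp2, des2 = orb.detectAndCompute(reference, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # robust to outlier matches
    h, w = reference.shape[:2]
    return cv2.warpPerspective(moving, H, (w, h))
```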

  8. Evaluation method based on the image correlation for laser jamming image

    NASA Astrophysics Data System (ADS)

    Che, Jinxi; Li, Zhongmin; Gao, Bo

    2013-09-01

    The evaluation of jamming effectiveness against infrared imaging systems is an important part of electro-optical countermeasures. Infrared imaging devices are widely used in the military for searching, tracking, guidance and many other tasks. At the same time, with the continuous development of laser technology, research on laser interference and damage effects has progressed, and lasers have been used to disturb infrared imaging devices. Therefore, evaluating the effect of laser jamming on an infrared imaging system has become a meaningful problem to be solved. The information that the infrared imaging system ultimately presents to the user is an image, so the jamming effect can be evaluated from the standpoint of image quality assessment. An image carries two aspects of information, light amplitude and light phase, so image correlation can accurately capture the difference between the original image and the disturbed image. In this paper, the evaluation method based on digital image correlation, the image quality assessment method based on the Fourier transform, the image quality estimation method based on error statistics and the evaluation method based on peak signal-to-noise ratio are analysed, along with their advantages and disadvantages. Moreover, the infrared jamming images obtained in experiments in which a thermal infrared imager was interfered with by a laser were analysed using these methods. The results show that these methods reflect the jamming effect of the laser on the infrared imaging system well, and the evaluation results are in good agreement with subjective visual evaluation. The methods also provide good repeatability and convenient quantitative analysis. The feasibility of the methods for evaluating the jamming effect was thus demonstrated, and they have reference value for the study and development of electro-optical countermeasure equipment and effectiveness evaluation.
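    Two of the metrics discussed, image correlation and peak signal-to-noise ratio between the original and the disturbed image, reduce to a few lines of numpy; the 8-bit peak value is an assumption.

```python
import numpy as np

def correlation_coefficient(original, disturbed):
    """Normalized cross-correlation between the original and jammed images."""
    a = original.astype(float) - original.mean()
    b = disturbed.astype(float) - disturbed.mean()
    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())

def psnr(original, disturbed, peak=255.0):
    """Peak signal-to-noise ratio in dB (assumes 8-bit imagery)."""
    mse = np.mean((original.astype(float) - disturbed.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```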

  9. Study on key techniques for camera-based hydrological record image digitization

    NASA Astrophysics Data System (ADS)

    Li, Shijin; Zhan, Di; Hu, Jinlong; Gao, Xiangtao; Bo, Ping

    2015-10-01

    With the development of information technology, the digitization of scientific or engineering drawings has received more and more attention. In hydrology, meteorology, medicine and the mining industry, grid drawing sheets are commonly used to record observations from sensors. However, these paper drawings may be destroyed or contaminated due to improper preservation or overuse. Furthermore, manually transcribing these data into the computer would be a heavy workload and prone to error. Hence, digitizing these drawings and establishing the corresponding database will ensure the integrity of the data and provide invaluable information for further research. This paper presents an automatic system for hydrological record image digitization, which consists of three key techniques, i.e., image segmentation, intersection point localization and distortion rectification. First, a novel approach to the binarization of the curves and grids in the water level sheet image is proposed, based on the adaptive fusion of gradient and color information. Second, a fast search strategy for intersection point localization is devised so that point-by-point processing is avoided, with the help of grid distribution information. Finally, we put forward a local rectification method that analyzes the central portions of the image and utilizes domain knowledge of hydrology. The processing speed is accelerated while the accuracy remains satisfactory. Experiments on several real water level records show that the proposed techniques are effective and capable of recovering the hydrological observations accurately.

  10. Joint sparse reconstruction of multi-contrast MRI images with graph based redundant wavelet transform.

    PubMed

    Lai, Zongying; Zhang, Xinlin; Guo, Di; Du, Xiaofeng; Yang, Yonggui; Guo, Gang; Chen, Zhong; Qu, Xiaobo

    2018-05-03

    Multi-contrast images in magnetic resonance imaging (MRI) provide abundant contrast information reflecting the characteristics of the internal tissues of human bodies, and thus have been widely utilized in clinical diagnosis. However, long acquisition time limits the application of multi-contrast MRI. One efficient way to accelerate data acquisition is to under-sample the k-space data and then reconstruct images with a sparsity constraint. However, image quality is compromised at high acceleration factors if the images are reconstructed individually. We aim to improve the images with a jointly sparse reconstruction and a graph-based redundant wavelet transform (GBRWT). First, a sparsifying transform, GBRWT, is trained to reflect the similarity of tissue structures in multi-contrast images. Second, joint multi-contrast image reconstruction is formulated as an ℓ2,1-norm optimization problem under GBRWT representations. Third, the optimization problem is numerically solved using a derived alternating direction method. Experimental results on synthetic and in vivo MRI data demonstrate that the proposed joint reconstruction method can achieve lower reconstruction errors and better preserve image structures than the compared joint reconstruction methods. Besides, the proposed method outperforms single image reconstruction with the joint sparsity constraint of multi-contrast images. The proposed method explores the joint sparsity of multi-contrast MRI images under the graph-based redundant wavelet transform and realizes joint sparse reconstruction of multi-contrast images. Experiments demonstrate that the proposed method outperforms the compared joint reconstruction methods as well as individual reconstructions. With this high quality image reconstruction method, it is possible to achieve high acceleration factors by exploiting the complementary information provided by multi-contrast MRI.
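    The ℓ2,1 penalty that couples the contrasts has a closed-form proximal operator (row-wise soft thresholding), which is the kind of building block an alternating-direction solver calls repeatedly; the sketch below illustrates only that operator, not the full GBRWT reconstruction.

```python
import numpy as np

def prox_l21(X, tau):
    """Row-wise soft thresholding: the prox of tau * sum_i ||X[i, :]||_2.
    Each row holds the transform coefficients of all contrasts at one location."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * X

# Rows whose joint energy falls below tau are zeroed across all contrasts at once,
# which is what enforces a shared (joint) sparsity pattern.
```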

  11. Coregistration of Magnetic Resonance and Single Photon Emission Computed Tomography Images for Noninvasive Localization of Stem Cells Grafted in the Infarcted Rat Myocardium

    PubMed Central

    Shen, Dinggang; Liu, Dengfeng; Cao, Zixiong; Acton, Paul D.; Zhou, Rong

    2008-01-01

    This paper demonstrates the application of mutual information based coregistration of radionuclide and magnetic resonance imaging (MRI) in an effort to use multimodality imaging for noninvasive localization of stem cells grafted in the infarcted myocardium in rats. Radionuclide imaging such as single photon emission computed tomography (SPECT) or positron emission tomography (PET) inherently has high sensitivity and is suitable for tracking of labeled stem cells, while high-resolution MRI is able to provide detailed anatomical and functional information of myocardium. Thus, coregistration of PET or SPECT images with MRI will map the location and distribution of stem cells on detailed myocardium structures. To validate this coregistration method, SPECT data were simulated by using a Monte Carlo-based projector that modeled the pinhole-imaging physics assuming nonzero diameter and photon penetration at the edge. Translational and rotational errors of the coregistration were examined with respect to various SPECT activities, and they are on average about 0.50 mm and 0.82°, respectively. Only the rotational error is dependent on activity of SPECT data. Stem cells were labeled with 111In-oxyquinoline and grafted in the ischemic myocardium of a rat model. Dual-tracer small-animal SPECT images were acquired, which allowed simultaneous detection of 111In-labeled stem cells and of [99mTc]sestamibi to assess myocardial perfusion deficit. The same animals were subjected to cardiac MRI. A mutual-information-based coregistration method was then applied to the SPECT and MRIs. By coregistration, the 111In signal from labeled cells was mapped into the akinetic region identified on cine MRIs; the regional perfusion deficit on the SPECT images also coincided with the akinetic region on the MR image. PMID:17053860

  12. Using spectral imaging for the analysis of abnormalities for colorectal cancer: When is it helpful?

    PubMed

    Awan, Ruqayya; Al-Maadeed, Somaya; Al-Saady, Rafif

    2018-01-01

    The spectral imaging technique has been shown to provide more discriminative information than the RGB images and has been proposed for a range of problems. There are many studies demonstrating its potential for the analysis of histopathology images for abnormality detection but there have been discrepancies among previous studies as well. Many multispectral based methods have been proposed for histopathology images but the significance of the use of whole multispectral cube versus a subset of bands or a single band is still arguable. We performed comprehensive analysis using individual bands and different subsets of bands to determine the effectiveness of spectral information for determining the anomaly in colorectal images. Our multispectral colorectal dataset consists of four classes, each represented by infra-red spectrum bands in addition to the visual spectrum bands. We performed our analysis of spectral imaging by stratifying the abnormalities using both spatial and spectral information. For our experiments, we used a combination of texture descriptors with an ensemble classification approach that performed best on our dataset. We applied our method to another dataset and got comparable results with those obtained using the state-of-the-art method and convolutional neural network based method. Moreover, we explored the relationship of the number of bands with the problem complexity and found that higher number of bands is required for a complex task to achieve improved performance. Our results demonstrate a synergy between infra-red and visual spectrum by improving the classification accuracy (by 6%) on incorporating the infra-red representation. We also highlight the importance of how the dataset should be divided into training and testing set for evaluating the histopathology image-based approaches, which has not been considered in previous studies on multispectral histopathology images.

  13. Using spectral imaging for the analysis of abnormalities for colorectal cancer: When is it helpful?

    PubMed Central

    Al-Maadeed, Somaya; Al-Saady, Rafif

    2018-01-01

    The spectral imaging technique has been shown to provide more discriminative information than the RGB images and has been proposed for a range of problems. There are many studies demonstrating its potential for the analysis of histopathology images for abnormality detection but there have been discrepancies among previous studies as well. Many multispectral based methods have been proposed for histopathology images but the significance of the use of whole multispectral cube versus a subset of bands or a single band is still arguable. We performed comprehensive analysis using individual bands and different subsets of bands to determine the effectiveness of spectral information for determining the anomaly in colorectal images. Our multispectral colorectal dataset consists of four classes, each represented by infra-red spectrum bands in addition to the visual spectrum bands. We performed our analysis of spectral imaging by stratifying the abnormalities using both spatial and spectral information. For our experiments, we used a combination of texture descriptors with an ensemble classification approach that performed best on our dataset. We applied our method to another dataset and got comparable results with those obtained using the state-of-the-art method and convolutional neural network based method. Moreover, we explored the relationship of the number of bands with the problem complexity and found that higher number of bands is required for a complex task to achieve improved performance. Our results demonstrate a synergy between infra-red and visual spectrum by improving the classification accuracy (by 6%) on incorporating the infra-red representation. We also highlight the importance of how the dataset should be divided into training and testing set for evaluating the histopathology image-based approaches, which has not been considered in previous studies on multispectral histopathology images. PMID:29874262

  14. Computer-aided prognosis on breast cancer with hematoxylin and eosin histopathology images: A review.

    PubMed

    Chen, Jia-Mei; Li, Yan; Xu, Jun; Gong, Lei; Wang, Lin-Wei; Liu, Wen-Lou; Liu, Juan

    2017-03-01

    With the advance of digital pathology, image analysis has begun to show its advantages in information analysis of hematoxylin and eosin histopathology images. Generally, histological features in hematoxylin and eosin images are measured to evaluate tumor grade and prognosis for breast cancer. This review summarized recent works in image analysis of hematoxylin and eosin histopathology images for breast cancer prognosis. First, prognostic factors for breast cancer based on hematoxylin and eosin histopathology images were summarized. Then, usual procedures of image analysis for breast cancer prognosis were systematically reviewed, including image acquisition, image preprocessing, image detection and segmentation, and feature extraction. Finally, the prognostic value of image features and image feature-based prognostic models was evaluated. Moreover, we discussed the issues of current analysis, and some directions for future research.

  15. A detail enhancement and dynamic range adjustment algorithm for high dynamic range images

    NASA Astrophysics Data System (ADS)

    Xu, Bo; Wang, Huachuang; Liang, Mingtao; Yu, Cong; Hu, Jinlong; Cheng, Hua

    2014-08-01

    Although high dynamic range (HDR) images contain large amounts of information, they have weak texture and low contrast. Moreover, these images are difficult to reproduce on low-dynamic-range display media. If more information is to be conveyed when these images are displayed on PCs, specific transforms are needed, such as compressing the dynamic range, enhancing the portions with little difference in original contrast, and highlighting texture details while preserving the parts of large contrast. To this end, a multi-scale guided filter enhancement algorithm, derived from the single-scale guided filter and based on the analysis of a non-physical model, is proposed in this paper. First, the algorithm decomposes the original HDR image into a base image and detail images of different scales; it then adaptively selects a transform function which acts on the enhanced detail images and the original images. By comparing the processing results for HDR images and low dynamic range (LDR) images with different scene features, it is shown that this algorithm, while maintaining the hierarchy and texture details of the images, not only improves the contrast and enhances the details but also adjusts the dynamic range well. Thus, it is well suited for human observation or analytical processing by machines.
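    A compact version of the decomposition such an algorithm builds on, a self-guided (edge-preserving) filter applied at several radii to split the luminance into a base layer and per-scale detail layers, is sketched below; the radii, epsilon and the use of scipy's box filter are placeholder choices, not the paper's configuration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius, eps):
    """Guided filter (He et al. formulation) with a box window."""
    size = 2 * radius + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    corr_Ip = uniform_filter(I * p, size)
    corr_II = uniform_filter(I * I, size)
    var_I = corr_II - mean_I ** 2
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)

def multiscale_decompose(img, radii=(2, 8, 32), eps=1e-3):
    """Split an HDR luminance image into one base layer and per-scale detail layers."""
    layers, current = [], img.astype(float)
    for r in radii:
        smooth = guided_filter(current, current, r, eps)   # self-guided smoothing
        layers.append(current - smooth)                    # detail at this scale
        current = smooth
    return current, layers                                  # base, [details...]
```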

  16. Establishment of Imaging Spectroscopy of Nuclear Gamma-Rays based on Geometrical Optics

    PubMed Central

    Tanimori, Toru; Mizumura, Yoshitaka; Takada, Atsushi; Miyamoto, Shohei; Takemura, Taito; Kishimoto, Tetsuro; Komura, Shotaro; Kubo, Hidetoshi; Kurosawa, Shunsuke; Matsuoka, Yoshihiro; Miuchi, Kentaro; Mizumoto, Tetsuya; Nakamasu, Yuma; Nakamura, Kiseki; Parker, Joseph D.; Sawano, Tatsuya; Sonoda, Shinya; Tomono, Dai; Yoshikawa, Kei

    2017-01-01

    Since the discovery of nuclear gamma-rays, their imaging has been limited to pseudo imaging, such as Compton Camera (CC) and coded mask. Pseudo imaging does not keep physical information (intensity, or brightness in Optics) along a ray, and thus is capable of no more than qualitative imaging of bright objects. To attain quantitative imaging, cameras that realize geometrical optics are essential, which, for nuclear MeV gammas, is only possible via complete reconstruction of the Compton process. Recently we have revealed that “Electron Tracking Compton Camera” (ETCC) provides a well-defined Point Spread Function (PSF). The information of an incoming gamma is kept along a ray with the PSF and that is equivalent to geometrical optics. Here we present an imaging-spectroscopic measurement with the ETCC. Our results highlight the intrinsic difficulty with CCs in performing accurate imaging, and show that the ETCC surmounts this problem. The imaging capability also helps the ETCC suppress the noise level dramatically by ~3 orders of magnitude without a shielding structure. Furthermore, full reconstruction of the Compton process with the ETCC provides spectra free of Compton edges. These results mark the first proper imaging of nuclear gammas based on the genuine geometrical optics. PMID:28155870

  17. Establishment of Imaging Spectroscopy of Nuclear Gamma-Rays based on Geometrical Optics.

    PubMed

    Tanimori, Toru; Mizumura, Yoshitaka; Takada, Atsushi; Miyamoto, Shohei; Takemura, Taito; Kishimoto, Tetsuro; Komura, Shotaro; Kubo, Hidetoshi; Kurosawa, Shunsuke; Matsuoka, Yoshihiro; Miuchi, Kentaro; Mizumoto, Tetsuya; Nakamasu, Yuma; Nakamura, Kiseki; Parker, Joseph D; Sawano, Tatsuya; Sonoda, Shinya; Tomono, Dai; Yoshikawa, Kei

    2017-02-03

    Since the discovery of nuclear gamma-rays, their imaging has been limited to pseudo imaging, such as Compton Camera (CC) and coded mask. Pseudo imaging does not keep physical information (intensity, or brightness in Optics) along a ray, and thus is capable of no more than qualitative imaging of bright objects. To attain quantitative imaging, cameras that realize geometrical optics are essential, which, for nuclear MeV gammas, is only possible via complete reconstruction of the Compton process. Recently we have revealed that "Electron Tracking Compton Camera" (ETCC) provides a well-defined Point Spread Function (PSF). The information of an incoming gamma is kept along a ray with the PSF and that is equivalent to geometrical optics. Here we present an imaging-spectroscopic measurement with the ETCC. Our results highlight the intrinsic difficulty with CCs in performing accurate imaging, and show that the ETCC surmounts this problem. The imaging capability also helps the ETCC suppress the noise level dramatically by ~3 orders of magnitude without a shielding structure. Furthermore, full reconstruction of the Compton process with the ETCC provides spectra free of Compton edges. These results mark the first proper imaging of nuclear gammas based on the genuine geometrical optics.

  18. Study on parallel and distributed management of RS data based on spatial data base

    NASA Astrophysics Data System (ADS)

    Chen, Yingbiao; Qian, Qinglan; Liu, Shijin

    2006-12-01

    With the rapid development of current earth-observing technology, RS image data storage, management and information publication have become a bottleneck for its application and popularization. There are two prominent problems in RS image data storage and management systems. First, the background server can hardly handle the heavy processing of the large volume of RS data stored at different nodes in a distributed environment, which places a heavy burden on the server. Second, there is no unique, standard and rational organization of multi-sensor RS data for its storage and management, and much information is lost or omitted at storage time. Facing these two problems, this paper puts forward a framework for parallel and distributed management and storage of RS image data. The system aims at an RS data information system based on a parallel background server and a distributed data management system. Toward these two goals, this paper studies the following key techniques and draws some instructive conclusions. The paper puts forward a solid index of "Pyramid, Block, Layer, Epoch" according to the properties of RS image data. With this solid index mechanism, a rational organization of multi-sensor RS image data of different resolutions, areas, bands and periods is achieved. In data storage, RS data are not divided into binary large objects stored in a current relational database system; instead, they are reorganized through the above solid index mechanism, and a logical image database for the RS image data files is constructed. In system architecture, this paper sets up a framework based on a parallel server composed of several commodity computers. Under this framework, the background process is divided into two parts, the common web process and the parallel process.

  19. Note: Sound recovery from video using SVD-based information extraction

    NASA Astrophysics Data System (ADS)

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Chang'an

    2016-08-01

    This note reports an efficient singular value decomposition (SVD)-based vibration extraction approach that recovers sound information from silent high-speed video. A high-speed camera with frame rates in the range of 2 kHz-10 kHz is used to film the vibrating objects. Sub-images cut from the video frames are transformed into column vectors and then assembled into a new matrix. The SVD of this matrix produces orthonormal image bases (OIBs), and image projections onto a specific OIB can be recovered as understandable acoustic signals. Standard frequencies of 256 Hz and 512 Hz tuning forks are extracted offline from their vibrating surfaces, and a 3.35 s speech signal is recovered online from a piece of paper that is stimulated by sound waves within 1 min.
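    The extraction pipeline described above, stacking vectorized sub-images into a matrix, taking its SVD and reading the projection onto one orthonormal image basis as the acoustic signal, can be sketched directly; picking the first basis vector and removing the static background are assumptions.

```python
import numpy as np

def recover_sound(frames, component=0):
    """frames: array of shape (T, h, w) of sub-images cut from high-speed video.
    Returns a 1-D signal sampled at the camera frame rate."""
    T = frames.shape[0]
    X = frames.reshape(T, -1).T.astype(float)      # columns are vectorized frames
    X -= X.mean(axis=1, keepdims=True)             # remove the static background
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Projection of each frame onto the chosen orthonormal image basis (column of U)
    signal = s[component] * Vt[component]
    return signal - signal.mean()
```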

  20. Spectral embedding-based registration (SERg) for multimodal fusion of prostate histology and MRI

    NASA Astrophysics Data System (ADS)

    Hwuang, Eileen; Rusu, Mirabela; Karthigeyan, Sudha; Agner, Shannon C.; Sparks, Rachel; Shih, Natalie; Tomaszewski, John E.; Rosen, Mark; Feldman, Michael; Madabhushi, Anant

    2014-03-01

    Multi-modal image registration is needed to align medical images collected from different protocols or imaging sources, thereby allowing the mapping of complementary information between images. One challenge of multimodal image registration is that typical similarity measures rely on statistical correlations between image intensities to determine anatomical alignment. The use of alternate image representations could allow for mapping of intensities into a space or representation such that the multimodal images appear more similar, thus facilitating their co-registration. In this work, we present a spectral embedding based registration (SERg) method that uses non-linearly embedded representations obtained from independent components of statistical texture maps of the original images to facilitate multimodal image registration. Our methodology comprises the following main steps: 1) image-derived textural representation of the original images, 2) dimensionality reduction using independent component analysis (ICA), 3) spectral embedding to generate the alternate representations, and 4) image registration. The rationale behind our approach is that SERg yields embedded representations that can allow for very different looking images to appear more similar, thereby facilitating improved co-registration. Statistical texture features are derived from the image intensities and then reduced to a smaller set by using independent component analysis to remove redundant information. Spectral embedding generates a new representation by eigendecomposition from which only the most important eigenvectors are selected. This helps to accentuate areas of salience based on modality-invariant structural information and therefore better identifies corresponding regions in both the template and target images. The spirit behind SERg is that image registration driven by these areas of salience and correspondence should improve alignment accuracy. In this work, SERg is implemented using Demons to allow the algorithm to more effectively register multimodal images. SERg is also tested within the free-form deformation framework driven by mutual information. Nine pairs of synthetic T1-weighted to T2-weighted brain MRI were registered under the following conditions: five levels of noise (0%, 1%, 3%, 5%, and 7%) and two levels of bias field (20% and 40%) each with and without noise. We demonstrate that across all of these conditions, SERg yields a mean squared error that is 81.51% lower than that of Demons driven by MRI intensity alone. We also spatially align twenty-six ex vivo histology sections and in vivo prostate MRI in order to map the spatial extent of prostate cancer onto corresponding radiologic imaging. SERg performs better than intensity registration by decreasing the root mean squared distance of annotated landmarks in the prostate gland via both Demons algorithm and mutual information-driven free-form deformation. In both synthetic and clinical experiments, the observed improvement in alignment of the template and target images suggest the utility of parametric eigenvector representations and hence SERg for multimodal image registration.

  1. Research on improved edge extraction algorithm of rectangular piece

    NASA Astrophysics Data System (ADS)

    He, Yi-Bin; Zeng, Ya-Jun; Chen, Han-Xin; Xiao, San-Xia; Wang, Yan-Wei; Huang, Si-Yu

    Traditional edge detection operators such as the Prewitt, LOG and Canny operators cannot meet the requirements of modern industrial measurement. This paper proposes an image edge detection algorithm based on an improved morphological gradient. It detects image edges using structuring elements, which deal with the characteristic information of the image directly. By choosing structuring elements of different shapes and sizes and using them together, the desired image edge information can be extracted. Experimental results show that the algorithm extracts image edges well in the presence of noise, producing clearer and more detailed edges than previous edge detection algorithms.
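    An improved morphological gradient of the kind described, combining gradients obtained under structuring elements of different shapes and sizes, might look like the OpenCV sketch below; the particular elements and the averaging rule are illustrative choices rather than the authors' exact combination.

```python
import cv2
import numpy as np

def multi_element_morph_gradient(gray):
    """Edge map from morphological gradients under several structuring elements."""
    elements = [
        cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)),
        cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)),
        cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3)),
    ]
    grads = [cv2.morphologyEx(gray, cv2.MORPH_GRADIENT, se).astype(float)
             for se in elements]
    combined = np.mean(grads, axis=0)              # fuse the per-element gradients
    return cv2.normalize(combined, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```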

  2. Example-based super-resolution for single-image analysis from the Chang'e-1 Mission

    NASA Astrophysics Data System (ADS)

    Wu, Fan-Lu; Wang, Xiang-Jun

    2016-11-01

    Due to the low spatial resolution of images taken from the Chang'e-1 (CE-1) orbiter, the details of the lunar surface are blurred or lost. Considering the limited spatial resolution of the image data obtained by the CCD camera on CE-1, an example-based super-resolution (SR) algorithm is employed to obtain high-resolution (HR) images. SR reconstruction is important for the application of such image data because it increases the resolution of the images. In this article, a novel example-based algorithm is proposed to implement SR reconstruction from single-image analysis, and the computational cost is reduced compared to other example-based SR methods. The results show that this method can enhance the resolution of images using SR and recover detailed information about the lunar surface. Thus it can be used for surveying HR terrain and geological features. Moreover, the algorithm is significant for the HR processing of remotely sensed images obtained by other imaging systems.

  3. Image Enhancement via Subimage Histogram Equalization Based on Mean and Variance

    PubMed Central

    2017-01-01

    This paper puts forward a novel image enhancement method via Mean and Variance based Subimage Histogram Equalization (MVSIHE), which effectively increases the contrast of the input image with brightness and details well preserved compared with some other methods based on histogram equalization (HE). Firstly, the histogram of input image is divided into four segments based on the mean and variance of luminance component, and the histogram bins of each segment are modified and equalized, respectively. Secondly, the result is obtained via the concatenation of the processed subhistograms. Lastly, the normalization method is deployed on intensity levels, and the integration of the processed image with the input image is performed. 100 benchmark images from a public image database named CVG-UGR-Database are used for comparison with other state-of-the-art methods. The experiment results show that the algorithm can not only enhance image information effectively but also well preserve brightness and details of the original image. PMID:29403529
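    The core of MVSIHE, partitioning the luminance histogram into four segments at thresholds derived from the mean and standard deviation and equalizing each segment within its own range before blending with the input, can be sketched as follows; the exact thresholds, bin count and blending weight are assumptions rather than the paper's settings.

```python
import numpy as np

def mvsihe(gray, blend=0.8):
    """Sub-image histogram equalization with mean/variance based segmentation
    (illustrative thresholds and blending weight)."""
    g = gray.astype(float)
    mu, sigma = g.mean(), g.std()
    bounds = [g.min(), mu - sigma, mu, mu + sigma, g.max() + 1.0]
    out = g.copy()
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        mask = (g >= lo) & (g < hi)
        if hi <= lo or not np.any(mask):
            continue
        vals = g[mask]
        hist, edges = np.histogram(vals, bins=64, range=(lo, hi))
        cdf = np.cumsum(hist).astype(float)
        cdf /= cdf[-1]
        idx = np.clip(np.digitize(vals, edges[1:-1]), 0, 63)
        out[mask] = lo + cdf[idx] * (hi - lo)      # equalize within the segment's range
    enhanced = blend * out + (1.0 - blend) * g     # integrate with the input image
    return np.clip(enhanced, 0, 255).astype(np.uint8)
```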

  4. Does use of a PACS increase the number of images per study? A case study in ultrasound.

    PubMed

    Horii, Steven; Nisenbaum, Harvey; Farn, James; Coleman, Beverly; Rowling, Susan; Langer, Jill; Jacobs, Jill; Arger, Peter; Pinheiro, Lisa; Klein, Wendy; Reber, Michele; Iyoob, Christopher

    2002-03-01

    The purpose of this study was to determine if the use of a picture archiving and communications system (PACS) in ultrasonography increased the number of images acquired per examination. The hypothesis that such an increase does occur was based on anecdotal information; this study sought to test the hypothesis. A random sample of all ultrasound examination types was drawn from the period 1998 through 1999. The ultrasound PACS in use (ACCESS; Kodak Health Information Systems, Dallas, TX) records the number of grayscale and color images saved as part of each study. Each examination in the sample was checked in the ultrasound PACS database, and the number of grayscale and color images was recorded. The comparison film-based sample was drawn from the period 1994 through 1995. The number of examinations of each type selected was based on the overall statistics of the section; that is, the sample was designed to represent the approximate frequency with which the various examination types are done. For film-based image counts, the jackets were retrieved, and the number of grayscale and color images was counted. The number of images obtained per examination (for most examinations) in ultrasound increased with PACS use. This result, however, has to be examined for possible systematic biases because ultrasound practice has changed over time since the authors stopped using film routinely. The use of PACS in ultrasonography was not associated with an increase in the number of images per examination based solely on the use of PACS, with the exception of neonatal head studies. Increases in the number of images per study were otherwise associated with examinations for which changes in protocols resulted in the increased image counts.

  5. Novel Image Encryption Scheme Based on Chebyshev Polynomial and Duffing Map

    PubMed Central

    2014-01-01

    We present a novel image encryption algorithm using a Chebyshev polynomial map for permutation and substitution and a Duffing map for substitution. Comprehensive security analysis has been performed on the designed scheme using key space analysis, visual testing, histogram analysis, information entropy calculation, correlation coefficient analysis, differential analysis, key sensitivity test, and speed test. The study demonstrates that the proposed image encryption algorithm offers a key space larger than 10^113 and a desirable level of security based on the good statistical results and theoretical arguments. PMID:25143970
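    A toy illustration of the two chaotic maps the scheme combines: a Chebyshev polynomial map used here to generate a permutation of pixel positions, and a Duffing map used to generate a substitution keystream. The map parameters, quantization rule and use of XOR are placeholder choices for an assumed 8-bit image; this is not the paper's algorithm.

```python
import numpy as np

def chebyshev_sequence(x0, k, n):
    """Chebyshev map x_{t+1} = cos(k * arccos(x_t)) on [-1, 1]."""
    xs, x = [], x0
    for _ in range(n):
        x = np.cos(k * np.arccos(x))
        xs.append(x)
    return np.array(xs)

def duffing_sequence(x0, y0, n, a=2.75, b=0.2):
    """Duffing map (x, y) -> (y, -b*x + a*y - y**3) in a chaotic regime."""
    xs, x, y = [], x0, y0
    for _ in range(n):
        x, y = y, -b * x + a * y - y ** 3
        xs.append(y)
    return np.array(xs)

def encrypt(img, key=(0.31, 4, 0.1, 0.1)):
    """Permute pixel positions with the Chebyshev sequence, then XOR with a
    keystream quantized from the Duffing sequence (assumes a uint8 image)."""
    x0, k, dx, dy = key
    flat = img.ravel()
    perm = np.argsort(chebyshev_sequence(x0, k, flat.size))       # permutation stage
    keystream = (np.abs(duffing_sequence(dx, dy, flat.size)) * 1e6).astype(np.uint64) % 256
    cipher = flat[perm] ^ keystream.astype(flat.dtype)            # substitution stage
    return cipher.reshape(img.shape), perm
```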

  6. Parts-based stereoscopic image assessment by learning binocular manifold color visual properties

    NASA Astrophysics Data System (ADS)

    Xu, Haiyong; Yu, Mei; Luo, Ting; Zhang, Yun; Jiang, Gangyi

    2016-11-01

    Existing stereoscopic image quality assessment (SIQA) methods are mostly based on luminance information, in which color information is not sufficiently considered. Actually, color is one of the important factors that affect human visual perception, and nonnegative matrix factorization (NMF) and manifold learning are in line with human visual perception. We propose an SIQA method based on learning binocular manifold color visual properties. To be more specific, in the training phase, a feature detector is created based on NMF with manifold regularization by considering color information, which not only allows parts-based manifold representation of an image, but also manifests localized color visual properties. In the quality estimation phase, visually important regions are selected by considering different human visual attention, and feature vectors are extracted by using the feature detector. Then the feature similarity index is calculated and the parts-based manifold color feature energy (PMCFE) for each view is defined based on the color feature vectors. The final quality score is obtained by considering a binocular combination based on PMCFE. The experimental results on the LIVE I and LIVE II 3-D IQA databases demonstrate that the proposed method can achieve much higher consistency with subjective evaluations than the state-of-the-art SIQA methods.

  7. Fusion of multi-tracer PET images for dose painting.

    PubMed

    Lelandais, Benoît; Ruan, Su; Denœux, Thierry; Vera, Pierre; Gardin, Isabelle

    2014-10-01

    PET imaging with FluoroDesoxyGlucose (FDG) tracer is clinically used for the definition of Biological Target Volumes (BTVs) for radiotherapy. Recently, new tracers, such as FLuoroThymidine (FLT) or FluoroMisonidazol (FMiso), have been proposed. They provide complementary information for the definition of BTVs. Our aim is to fuse multi-tracer PET images to obtain a good BTV definition and to help the radiation oncologist in dose painting. Due to the noise and the partial volume effect leading, respectively, to the presence of uncertainty and imprecision in PET images, the segmentation and the fusion of PET images are difficult. In this paper, a framework based on Belief Function Theory (BFT) is proposed for the segmentation of BTV from multi-tracer PET images. The first step is based on an extension of the Evidential C-Means (ECM) algorithm, taking advantage of neighboring voxels for dealing with uncertainty and imprecision in each mono-tracer PET image. Then, imprecision and uncertainty are, respectively, reduced using prior knowledge related to defects in the acquisition system and neighborhood information. Finally, a multi-tracer PET image fusion is performed. The results are represented by a set of parametric maps that provide important information for dose painting. The performances are evaluated on PET phantoms and patient data with lung cancer. Quantitative results show good performance of our method compared with other methods. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. OC ToGo: bed site image integration into OpenClinica with mobile devices

    NASA Astrophysics Data System (ADS)

    Haak, Daniel; Gehlen, Johan; Jonas, Stephan; Deserno, Thomas M.

    2014-03-01

    Imaging and image-based measurements nowadays play an essential role in controlled clinical trials, but electronic data capture (EDC) systems insufficiently support integration of captured images by mobile devices (e.g. smartphones and tablets). The web application OpenClinica has established as one of the world's leading EDC systems and is used to collect, manage and store data of clinical trials in electronic case report forms (eCRFs). In this paper, we present a mobile application for instantaneous integration of images into OpenClinica directly during examination on patient's bed site. The communication between the Android application and OpenClinica is based on the simple object access protocol (SOAP) and representational state transfer (REST) web services for metadata, and secure file transfer protocol (SFTP) for image transfer, respectively. OpenClinica's web services are used to query context information (e.g. existing studies, events and subjects) and to import data into the eCRF, as well as export of eCRF metadata and structural information. A stable image transfer is ensured and progress information (e.g. remaining time) visualized to the user. The workflow is demonstrated for a European multi-center registry, where patients with calciphylaxis disease are included. Our approach improves the EDC workflow, saves time, and reduces costs. Furthermore, data privacy is enhanced, since storage of private health data on the imaging devices becomes obsolete.

  9. A Novel Texture-Quantization-Based Reversible Multiple Watermarking Scheme Applied to Health Information System.

    PubMed

    Turuk, Mousami; Dhande, Ashwin

    2018-04-01

    The recent innovations in information and communication technologies have appreciably changed the panorama of health information system (HIS). These advances provide new means to process, handle, and share medical images and also augment the medical image security issues in terms of confidentiality, reliability, and integrity. Digital watermarking has emerged as new era that offers acceptable solutions to the security issues in HIS. Texture is a significant feature to detect the embedding sites in an image, which further leads to substantial improvement in the robustness. However, considering the perspective of digital watermarking, this feature has received meager attention in the reported literature. This paper exploits the texture property of an image and presents a novel hybrid texture-quantization-based approach for reversible multiple watermarking. The watermarked image quality has been accessed by peak signal to noise ratio (PSNR), structural similarity measure (SSIM), and universal image quality index (UIQI), and the obtained results are superior to the state-of-the-art methods. The algorithm has been evaluated on a variety of medical imaging modalities (CT, MRA, MRI, US) and robustness has been verified, considering various image processing attacks including JPEG compression. The proposed scheme offers additional security using repetitive embedding of BCH encoded watermarks and ADM encrypted ECG signal. Experimental results achieved a maximum of 22,616 bits hiding capacity with PSNR of 53.64 dB.

  10. Automatic single-image-based rain streaks removal via image decomposition.

    PubMed

    Kang, Li-Wei; Lin, Chia-Wen; Fu, Yu-Hsiang

    2012-04-01

    Rain removal from a video is a challenging problem and has been recently investigated extensively. Nevertheless, the problem of rain removal from a single image was rarely studied in the literature, where no temporal information among successive images can be exploited, making the problem very challenging. In this paper, we propose a single-image-based rain removal framework via properly formulating rain removal as an image decomposition problem based on morphological component analysis. Instead of directly applying a conventional image decomposition technique, the proposed method first decomposes an image into the low- and high-frequency (HF) parts using a bilateral filter. The HF part is then decomposed into a "rain component" and a "nonrain component" by performing dictionary learning and sparse coding. As a result, the rain component can be successfully removed from the image while preserving most original image details. Experimental results demonstrate the efficacy of the proposed algorithm.

  11. Image denoising based on noise detection

    NASA Astrophysics Data System (ADS)

    Jiang, Yuanxiang; Yuan, Rui; Sun, Yuqiu; Tian, Jinwen

    2018-03-01

    Because of noise points in an image, any denoising operation also alters the original information of the non-noise pixels. A noise detection algorithm based on fractional calculus is therefore proposed in this paper for denoising. First, the image is convolved to obtain directional gradient masks. Then, the mean gray level is calculated to obtain the gradient detection maps. Next, a logical product is applied to acquire the noise position image. Comparisons of the visual effect and evaluation parameters after processing show that the proposed noise-detection-based denoising algorithm outperforms traditional methods in both subjective and objective aspects.

  12. Development of Cad System for Diffuse Disease Based on Ultrasound Elasticity Images

    NASA Astrophysics Data System (ADS)

    Yamazaki, M.; Shiina, T.; Yamakawa, M.; Takizawa, H.; Tonomura, A.; Mitake, T.

    It is well known that as hepatic cirrhosis progresses, hepatocyte fibrosis spreads and nodules increase. However, it is not easy to diagnose its early stage from conventional B-mode images, because one has to read subtle changes of the speckle pattern, which is not sensitive to the stage of fibrosis. Ultrasonic tissue elasticity imaging can provide novel diagnostic information based on tissue hardness. We recently developed commercial-based equipment for tissue elasticity imaging. In this work, we investigated the development of a CAD system based on elasticity images for diagnosing diffuse-type diseases such as hepatic cirrhosis. The results of clinical data analysis indicate that the CAD system is promising as a means for the diagnosis of diffuse disease with a simple criterion.

  13. Single-pixel imaging based on compressive sensing with spectral-domain optical mixing

    NASA Astrophysics Data System (ADS)

    Zhu, Zhijing; Chi, Hao; Jin, Tao; Zheng, Shilie; Jin, Xiaofeng; Zhang, Xianmin

    2017-11-01

    In this letter a single-pixel imaging structure is proposed based on compressive sensing using a spatial light modulator (SLM)-based spectrum shaper. In the approach, an SLM-based spectrum shaper, the pattern of which is a predetermined pseudorandom bit sequence (PRBS), spectrally codes the optical pulse carrying image information. The energy of the spectrally mixed pulse is detected by a single-pixel photodiode and the measurement results are used to reconstruct the image via a sparse recovery algorithm. As the mixing of the image signal and the PRBS is performed in the spectral domain, optical pulse stretching, modulation, compression and synchronization in the time domain are avoided. Experiments are implemented to verify the feasibility of the approach.

  14. Liver segmentation from CT images using a sparse priori statistical shape model (SP-SSM).

    PubMed

    Wang, Xuehu; Zheng, Yongchang; Gan, Lan; Wang, Xuan; Sang, Xinting; Kong, Xiangfeng; Zhao, Jie

    2017-01-01

    This study proposes a new liver segmentation method based on a sparse a priori statistical shape model (SP-SSM). First, mark points are selected in the liver a priori model and the original image. Then, the a priori shape and its mark points are used to obtain a dictionary for the liver boundary information. Second, the sparse coefficient is calculated based on the correspondence between mark points in the original image and those in the a priori model, and then the sparse statistical model is established by combining the sparse coefficients and the dictionary. Finally, the intensity energy and boundary energy models are built based on the intensity information and the specific boundary information of the original image. Then, the sparse matching constraint model is established based on the sparse coding theory. These models jointly drive the iterative deformation of the sparse statistical model to approximate and accurately extract the liver boundaries. This method can solve the problems of deformation model initialization and a priori method accuracy using the sparse dictionary. The SP-SSM can achieve a mean overlap error of 4.8% and a mean volume difference of 1.8%, whereas the average symmetric surface distance and the root mean square symmetric surface distance can reach 0.8 mm and 1.4 mm, respectively.

  15. Improving the imaging of calcifications in CT by histogram-based selective deblurring

    NASA Astrophysics Data System (ADS)

    Rollano-Hijarrubia, Empar; van der Meer, Frits; van der Lugt, Aad; Weinans, Harrie; Vrooman, Henry; Vossepoel, Albert; Stokking, Rik

    2005-04-01

    Imaging of small high-density structures, such as calcifications, with computed tomography (CT) is limited by the spatial resolution of the system. Blur causes small calcifications to be imaged with lower contrast and overestimated volume, thereby hampering the analysis of vessels. The aim of this work is to reduce the blur of calcifications by applying three-dimensional (3D) deconvolution. Unfortunately, the high-frequency amplification of the deconvolution produces edge-related ring artifacts and enhances noise and original artifacts, which degrades the imaging of low-density structures. A method, referred to as Histogram-based Selective Deblurring (HiSD), was implemented to avoid these negative effects. HiSD uses the histogram information to generate a restored image in which the low-intensity voxel information of the observed image is combined with the high-intensity voxel information of the deconvolved image. To evaluate HiSD we scanned four in-vitro atherosclerotic plaques of carotid arteries with a multislice spiral CT and with a microfocus CT (μCT), used as reference. Restored images were generated from the observed images, and qualitatively and quantitatively compared with their corresponding μCT images. Transverse views and maximum-intensity projections of restored images show the decrease of blur of the calcifications in 3D. Measurements of the areas of 27 calcifications and total volumes of calcification of 4 plaques show that the overestimation of calcification was smaller for restored images (mean-error: 90% for area; 92% for volume) than for observed images (143%; 213%, respectively). The qualitative and quantitative analyses show that the imaging of calcifications in CT can be improved considerably by applying HiSD.
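
    The voxel-selection idea behind HiSD can be sketched as follows; this is an illustration under stated assumptions (the deconvolved volume is already available, and the intensity cut-off is a hypothetical value taken from the histogram), not the published implementation.

        # Sketch of the HiSD combination idea: keep low-intensity voxels from the
        # observed CT volume and take high-intensity (calcification) voxels from the
        # deconvolved volume, with the cut-off taken from the intensity histogram.
        import numpy as np

        def combine_hisd(observed, deconvolved, threshold):
            """Voxel-wise selection; `threshold` separates soft tissue from calcification."""
            restored = observed.copy()
            mask = observed >= threshold          # voxels dominated by calcification
            restored[mask] = deconvolved[mask]
            return restored

        # Synthetic example; 600 HU is a hypothetical calcification cut-off.
        obs = np.random.normal(50, 20, (64, 64, 64))
        dec = obs * 1.2                           # stand-in for the deconvolved volume
        restored = combine_hisd(obs, dec, threshold=600)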

  16. The infection algorithm: an artificial epidemic approach for dense stereo correspondence.

    PubMed

    Olague, Gustavo; Fernández, Francisco; Pérez, Cynthia B; Lutton, Evelyne

    2006-01-01

    We present a new bio-inspired approach applied to a problem of stereo image matching. This approach is based on an artificial epidemic process, which we call the infection algorithm. The problem at hand is a basic one in computer vision for 3D scene reconstruction. It has many complex aspects and is known as an extremely difficult one. The aim is to match the contents of two images in order to obtain 3D information that allows the generation of simulated projections from a viewpoint that is different from the ones of the initial photographs. This process is known as view synthesis. The algorithm we propose exploits the image contents in order to produce only the necessary 3D depth information, while saving computational time. It is based on a set of distributed rules, which propagate like an artificial epidemic over the images. Experiments on a pair of real images are presented, and realistic reprojected images have been generated.

  17. Optical colour image watermarking based on phase-truncated linear canonical transform and image decomposition

    NASA Astrophysics Data System (ADS)

    Su, Yonggang; Tang, Chen; Li, Biyuan; Lei, Zhenkun

    2018-05-01

    This paper presents a novel optical colour image watermarking scheme based on phase-truncated linear canonical transform (PT-LCT) and image decomposition (ID). In this proposed scheme, a PT-LCT-based asymmetric cryptography is designed to encode the colour watermark into a noise-like pattern, and an ID-based multilevel embedding method is constructed to embed the encoded colour watermark into a colour host image. The PT-LCT-based asymmetric cryptography, which can be optically implemented by double random phase encoding with a quadratic phase system, can provide a higher security to resist various common cryptographic attacks. And the ID-based multilevel embedding method, which can be digitally implemented by a computer, can make the information of the colour watermark disperse better in the colour host image. The proposed colour image watermarking scheme possesses high security and can achieve a higher robustness while preserving the watermark’s invisibility. The good performance of the proposed scheme has been demonstrated by extensive experiments and comparison with other relevant schemes.

  18. Deep Learning Nuclei Detection in Digitized Histology Images by Superpixels.

    PubMed

    Sornapudi, Sudhir; Stanley, Ronald Joe; Stoecker, William V; Almubarak, Haidar; Long, Rodney; Antani, Sameer; Thoma, George; Zuna, Rosemary; Frazier, Shelliane R

    2018-01-01

    Advances in image analysis and computational techniques have facilitated automatic detection of critical features in histopathology images. Detection of nuclei is critical for squamous epithelium cervical intraepithelial neoplasia (CIN) classification into normal, CIN1, CIN2, and CIN3 grades. In this study, a deep learning (DL)-based nuclei segmentation approach is investigated based on gathering localized information through the generation of superpixels using a simple linear iterative clustering algorithm and training with a convolutional neural network. The proposed approach was evaluated on a dataset of 133 digitized histology images and achieved an overall nuclei detection (object-based) accuracy of 95.97%, with demonstrated improvement over imaging-based and clustering-based benchmark techniques. The proposed DL-based nuclei segmentation method with superpixel analysis has shown improved segmentation results in comparison to state-of-the-art methods.
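
    The superpixel-generation step can be sketched with scikit-image's SLIC implementation, as below; the tile file name, segment count, and patch size are assumptions, and the CNN classification of each patch is only indicated.

        # Sketch of the superpixel step: SLIC over a histology tile, then extraction
        # of a small patch around each superpixel centroid that a CNN could classify
        # as nucleus / non-nucleus.
        import numpy as np
        from skimage.io import imread
        from skimage.segmentation import slic
        from skimage.measure import regionprops

        image = imread("histology_tile.png")                  # hypothetical RGB tile
        segments = slic(image, n_segments=1500, compactness=10, start_label=1)

        patches = []
        half = 32
        for region in regionprops(segments):
            r, c = map(int, region.centroid)
            patch = image[max(r - half, 0):r + half, max(c - half, 0):c + half]
            if patch.shape[:2] == (2 * half, 2 * half):
                patches.append(patch)                         # feed these to the CNN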

  19. Tracking moving radar targets with parallel, velocity-tuned filters

    DOEpatents

    Bickel, Douglas L.; Harmony, David W.; Bielek, Timothy P.; Hollowell, Jeff A.; Murray, Margaret S.; Martinez, Ana

    2013-04-30

    Radar data associated with radar illumination of a movable target is processed to monitor motion of the target. A plurality of filter operations are performed in parallel on the radar data so that each filter operation produces target image information. The filter operations are defined to have respectively corresponding velocity ranges that differ from one another. The target image information produced by one of the filter operations represents the target more accurately than the target image information produced by the remainder of the filter operations when a current velocity of the target is within the velocity range associated with the one filter operation. In response to the current velocity of the target being within the velocity range associated with the one filter operation, motion of the target is tracked based on the target image information produced by the one filter operation.

  20. Photoplus: auxiliary information for printed images based on distributed source coding

    NASA Astrophysics Data System (ADS)

    Samadani, Ramin; Mukherjee, Debargha

    2008-01-01

    A printed photograph is difficult to reuse because the digital information that generated the print may no longer be available. This paper describes a mechanism for approximating the original digital image by combining a scan of the printed photograph with small amounts of digital auxiliary information kept together with the print. The auxiliary information consists of a small amount of digital data to enable accurate registration and color-reproduction, followed by a larger amount of digital data to recover residual errors and lost frequencies by distributed Wyner-Ziv coding techniques. Approximating the original digital image enables many uses, including making good quality reprints from the original print, even when they are faded many years later. In essence, the print itself becomes the currency for archiving and repurposing digital images, without requiring computer infrastructure.

  1. Designing Image Operators for MRI-PET Image Fusion of the Brain

    NASA Astrophysics Data System (ADS)

    Márquez, Jorge; Gastélum, Alfonso; Padilla, Miguel A.

    2006-09-01

    Our goal is to obtain images that combine, in a useful and precise way, the information from 3D volumes of medical imaging sets. We address two modalities, combining anatomy (Magnetic Resonance Imaging, or MRI) and functional information (Positron Emission Tomography, or PET). Commercial imaging software offers image fusion tools based on fixed blending or color-channel combination of two modalities and color Look-Up Tables (LUTs), without considering the anatomical and functional character of the image features. We used a sensible approach for image fusion that takes advantage mainly of the HSL (Hue, Saturation and Luminosity) color space in order to enhance the fusion results. We further tested operators for gradient and contour extraction to enhance anatomical details, plus other spatial-domain filters for functional features corresponding to wide point-spread-function responses in PET images. A set of image-fusion operators was formulated and tested on PET and MRI acquisitions.
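
    One common HSL-style fusion can be sketched as below: anatomy drives luminosity and function drives hue, with a fixed saturation. This is a generic illustration, not the operators of the record; the inputs are assumed co-registered and normalised to [0, 1].

        # Sketch of an HSL-style MRI-PET fusion: MRI -> luminosity (anatomy),
        # PET -> hue (function), fixed saturation.
        import colorsys
        import numpy as np

        def fuse_hsl(mri, pet, saturation=0.8):
            fused = np.zeros(mri.shape + (3,))
            for i in range(mri.shape[0]):
                for j in range(mri.shape[1]):
                    hue = 0.66 * (1.0 - pet[i, j])      # blue (cold) -> red (hot)
                    fused[i, j] = colorsys.hls_to_rgb(hue, mri[i, j], saturation)
            return fused

        mri = np.random.rand(64, 64)    # placeholder anatomical slice
        pet = np.random.rand(64, 64)    # placeholder functional slice
        rgb = fuse_hsl(mri, pet)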

  2. Clustering-based spot segmentation of cDNA microarray images.

    PubMed

    Uslan, Volkan; Bucak, Ihsan Ömür

    2010-01-01

    Microarrays are widely used because they provide useful information about thousands of gene expressions simultaneously. In this study, the segmentation step of microarray image processing has been implemented. Clustering-based methods, fuzzy c-means and k-means, have been applied for the segmentation step that separates the spots from the background. The experiments show that fuzzy c-means segmented the spots of the microarray image more accurately than k-means.
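
    A minimal sketch of the clustering-based segmentation step is given below, using k-means from scikit-learn on a placeholder spot sub-image; fuzzy c-means would replace KMeans with a fuzzy-membership variant, and the cluster-to-spot assignment rule shown is an assumption.

        # Sketch of clustering-based spot segmentation: pixels inside a spot bounding
        # box are clustered into two groups (spot vs. background) by k-means.
        import numpy as np
        from sklearn.cluster import KMeans

        spot = np.random.rand(20, 20)                    # placeholder spot sub-image
        labels = KMeans(n_clusters=2, n_init=10, random_state=0) \
            .fit_predict(spot.reshape(-1, 1)) \
            .reshape(spot.shape)

        # Assume the brighter cluster corresponds to the cDNA spot signal.
        spot_cluster = int(spot[labels == 1].mean() > spot[labels == 0].mean())
        spot_mask = labels == spot_cluster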

  3. Visualization index for image-enabled medical records

    NASA Astrophysics Data System (ADS)

    Dong, Wenjie; Zheng, Weilin; Sun, Jianyong; Zhang, Jianguo

    2011-03-01

    With the widespread use of healthcare information technology in hospitals, patients' medical records are becoming more and more complex. To transform text- or image-based medical information into a form that is easy for humans to understand and accept, we designed and developed an innovative indexing method that assigns an anatomical 3D structure object to every patient, storing visual indexes of the patient's basic information, historical examination images, and RIS report information. When doctors want to review a patient's historical records, they can first load the anatomical structure object and then view its 3D index using a digital human model toolkit. This prototype system helps doctors easily and visually obtain the complete historical healthcare status of patients, including large amounts of medical data, and quickly locate detailed information, including both reports and images, from medical information systems. In this way, doctors can save time, which can be better used to understand the information, obtain a more comprehensive understanding of their patients' situations, and provide better healthcare services to patients.

  4. Synergetic computer and holonics - information dynamics of a semantic computer

    NASA Astrophysics Data System (ADS)

    Shimizu, H.; Yamaguchi, Y.

    1987-12-01

    The dynamics of semantic information in biosystems is studied based on holons, generators of mutual relations. Any biosystem has an internal world, a so-called "self", which has an intrinsic purpose rendering the system continuously alive and developed as much as possible against a fluctuating external world. External signals to the system through sensory organs are classified by the self into two basic categories, semantic information with some meaning and value for the purpose and inputs from background and noise sources. Due to this breaking of semantic symmetry, any input signals are transformed into a figure and background, respectively. As a typical example, the visual perception of vertebrates is studied. For such semantic transformation the external signal is first decomposed and converted into a number of elementary signs named "syntons" which are then transmitted into a sensory area of cortex corresponding to an image synthesizer. The synthesizer is a sort of autonomic parallel processor composed of autonomic units, "holons", which are characterized by many internal modes. Syntons are fed into the holons one by one. A set of the elementary meanings, the so-called "semons", provided to the synton are encoded in the internal modes of the holon; that is, each internal mode encodes a semon. A dynamic information theory for the transformation of external signals to semantic information is developed based on our model which we call holovision. Holovision is a dynamic model of visual perception that possesses an autonomic ability to self-organize visual images. Autonomous oscillators are utilized as the line processors to encode line elements with specific orientations in their phases as semons. An information space is defined according to the assembly of holons; the spatial plane on which holons are arranged is a syntactic subspace while the internal modes of the holons span a semantic subspace in the orthogonal direction. In this information space, the image of a figure is self-organized - as a sort of spatiotemporal pattern - by autonomic coordinations of the holons that select relevant internal modes, accompanied by compression of irrelevant syntons that correspond to the background. Holons coded by a synton are relevantly connected by means of coherent relations, i.e., dynamic connections with time-coherence, in order to represent the image that varies in time depending on the instantaneous state of the external object. These connections depend on the internal modes that are cooperatively selected by the holons. The image is regarded as a bridge between the external and internal world that has both external and internal consistency. The meaning of the image, i.e., transformed semantic information, is spontaneously transferred from semantic items that have a coherent relation with the image, and the external signal is perceived by the self through the image. We demonstrate that images are indeed self-organized in holovision in the previously described sense. Simulated processes of the creation of semantic information in holovision are shown to display typical features of the foregoing steps of information compression. Based on these results, we propose quantitative indices that represent the value of semantic information in the image processor as well as in the memory.

  5. Super-pixel extraction based on multi-channel pulse coupled neural network

    NASA Astrophysics Data System (ADS)

    Xu, GuangZhu; Hu, Song; Zhang, Liu; Zhao, JingJing; Fu, YunXia; Lei, BangJun

    2018-04-01

    Super-pixel extraction techniques group pixels to form over-segmented image blocks according to the similarity among pixels. Compared with traditional pixel-based methods, image description based on super-pixels requires less computation and is easier to perceive, and it has been widely used in image processing and computer vision applications. The pulse coupled neural network (PCNN) is a biologically inspired model that stems from the phenomenon of synchronous pulse release in the visual cortex of cats. Each PCNN neuron can correspond to a pixel of an input image, and the dynamic firing pattern of each neuron contains both the pixel feature information and its contextual spatial structure information. In this paper, a new color super-pixel extraction algorithm based on a multi-channel pulse coupled neural network (MPCNN) is proposed. The algorithm adopts the block-dividing idea of the SLIC algorithm: the image is first divided into blocks of the same size. Then, within each image block, the pixels adjacent to each seed and similar to it in color are grouped into a super-pixel. Finally, post-processing is applied to those pixels or pixel blocks that have not been grouped. Experiments show that the proposed method can adjust the number of super-pixels and the segmentation precision by setting parameters, and it has good potential for super-pixel extraction.

  6. Single image super-resolution using self-optimizing mask via fractional-order gradient interpolation and reconstruction.

    PubMed

    Yang, Qi; Zhang, Yanzhu; Zhao, Tiebiao; Chen, YangQuan

    2017-04-04

    Image super-resolution using self-optimizing mask via fractional-order gradient interpolation and reconstruction aims to recover detailed information from low-resolution images and reconstruct them into high-resolution images. Due to the limited amount of data and information retrieved from low-resolution images, it is difficult to restore clear, artifact-free images, while still preserving enough structure of the image such as the texture. This paper presents a new single image super-resolution method which is based on adaptive fractional-order gradient interpolation and reconstruction. The interpolated image gradient via optimal fractional-order gradient is first constructed according to the image similarity and afterwards the minimum energy function is employed to reconstruct the final high-resolution image. Fractional-order gradient based interpolation methods provide an additional degree of freedom which helps optimize the implementation quality due to the fact that an extra free parameter α-order is being used. The proposed method is able to produce a rich texture detail while still being able to maintain structural similarity even under large zoom conditions. Experimental results show that the proposed method performs better than current single image super-resolution techniques. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
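
    The extra free parameter α mentioned above can be illustrated with a generic Grünwald-Letnikov fractional difference, as sketched below; this is a textbook definition used for illustration, not the authors' exact interpolation scheme, and the truncation length is an assumption.

        # Illustration of a fractional-order (Grunwald-Letnikov) gradient along one
        # axis: the coefficients (-1)^k * C(alpha, k) generalise the integer-order
        # backward difference, and alpha is the extra free parameter. alpha = 1
        # recovers the ordinary backward difference (coefficients 1, -1, 0, ...).
        import numpy as np
        from scipy.special import binom

        def fractional_gradient(img, alpha=0.5, n_terms=5, axis=0):
            k = np.arange(n_terms)
            coeffs = (-1) ** k * binom(alpha, k)
            out = np.zeros_like(img, dtype=float)
            for kk, c in enumerate(coeffs):
                # f(x - k) along the chosen axis (borders wrap in this sketch)
                out += c * np.roll(img, shift=kk, axis=axis)
            return out

        img = np.random.rand(64, 64)
        g_half = fractional_gradient(img, alpha=0.5)   # between identity (0) and gradient (1)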

  7. Dual-modality brain PET-CT image segmentation based on adaptive use of functional and anatomical information.

    PubMed

    Xia, Yong; Eberl, Stefan; Wen, Lingfeng; Fulham, Michael; Feng, David Dagan

    2012-01-01

    Dual medical imaging modalities, such as PET-CT, are now a routine component of clinical practice. Medical image segmentation methods, however, have generally only been applied to single modality images. In this paper, we propose the dual-modality image segmentation model to segment brain PET-CT images into gray matter, white matter and cerebrospinal fluid. This model converts PET-CT image segmentation into an optimization process controlled simultaneously by PET and CT voxel values and spatial constraints. It is innovative in the creation and application of the modality discriminatory power (MDP) coefficient as a weighting scheme to adaptively combine the functional (PET) and anatomical (CT) information on a voxel-by-voxel basis. Our approach relies upon allowing the modality with higher discriminatory power to play a more important role in the segmentation process. We compared the proposed approach to three other image segmentation strategies, including PET-only based segmentation, combination of the results of independent PET image segmentation and CT image segmentation, and simultaneous segmentation of joint PET and CT images without an adaptive weighting scheme. Our results in 21 clinical studies showed that our approach provides the most accurate and reliable segmentation for brain PET-CT images. Copyright © 2011 Elsevier Ltd. All rights reserved.
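
    The voxel-wise adaptive weighting idea can be sketched as below; the MDP map itself is not reproduced (a placeholder is used), and the probability maps are synthetic, so this only illustrates how a per-voxel weight lets the more discriminative modality dominate.

        # Sketch of adaptive PET-CT combination: the modality with higher
        # discriminatory power (MDP) gets more influence, voxel by voxel.
        import numpy as np

        def combine_probabilities(p_pet, p_ct, mdp_pet):
            """p_pet, p_ct: (K, X, Y, Z) class-probability maps; mdp_pet in [0, 1]."""
            return mdp_pet * p_pet + (1.0 - mdp_pet) * p_ct

        K, X, Y, Z = 3, 16, 16, 16                      # GM / WM / CSF, toy volume
        p_pet = np.random.dirichlet(np.ones(K), size=(X, Y, Z)).transpose(3, 0, 1, 2)
        p_ct = np.random.dirichlet(np.ones(K), size=(X, Y, Z)).transpose(3, 0, 1, 2)
        mdp = np.random.rand(X, Y, Z)                   # placeholder MDP coefficient map

        p_combined = combine_probabilities(p_pet, p_ct, mdp)
        labels = p_combined.argmax(axis=0)              # per-voxel segmentation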

  8. Photogrammetric Processing of Planetary Linear Pushbroom Images Based on Approximate Orthophotos

    NASA Astrophysics Data System (ADS)

    Geng, X.; Xu, Q.; Xing, S.; Hou, Y. F.; Lan, C. Z.; Zhang, J. J.

    2018-04-01

    It is still a very challenging task to efficiently produce planetary mapping products from orbital remote sensing images. There are many disadvantages in photogrammetric processing of planetary stereo images, such as the lack of ground control information and of informative features; among these, image matching is the most difficult task in planetary photogrammetry. This paper designs a photogrammetric processing framework for planetary remote sensing images based on approximate orthophotos. Both tie-point extraction for bundle adjustment and dense image matching for generating a digital terrain model (DTM) are performed on approximate orthophotos. Since most planetary remote sensing images are acquired by linear scanner cameras, we mainly deal with linear pushbroom images. In order to improve the computational efficiency of orthophoto generation and coordinate transformation, a fast back-projection algorithm for linear pushbroom images is introduced. Moreover, an iteratively refined DTM and orthophoto scheme was adopted in the DTM generation process, which helps reduce the search space of image matching and improve the matching accuracy of conjugate points. With the advantages of approximate orthophotos, the matching results of planetary remote sensing images can be greatly improved. We tested the proposed approach with Mars Express (MEX) High Resolution Stereo Camera (HRSC) and Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) images. The preliminary experimental results demonstrate the feasibility of the proposed approach.

  9. Vision-based object detection and recognition system for intelligent vehicles

    NASA Astrophysics Data System (ADS)

    Ran, Bin; Liu, Henry X.; Martono, Wilfung

    1999-01-01

    Recently, a proactive crash mitigation system has been proposed to enhance the crash avoidance and survivability of Intelligent Vehicles. An accurate object detection and recognition system is a prerequisite for a proactive crash mitigation system, as system component deployment algorithms rely on accurate hazard detection, recognition, and tracking information. In this paper, we present a vision-based approach to detect and recognize vehicles and traffic signs, obtain their information, and track multiple objects by using a sequence of color images taken from a moving vehicle. The entire system consists of two sub-systems, the vehicle detection and recognition sub-system and the traffic sign detection and recognition sub-system. Both sub-systems consist of four models: an object detection model, an object recognition model, an object information model, and an object tracking model. In order to detect potential objects on the road, several features of the objects are investigated, including the symmetrical shape and aspect ratio of a vehicle and the color and shape information of the signs. A two-layer neural network is trained to recognize different types of vehicles, and a parameterized traffic sign model is established in the process of recognizing a sign. Tracking is accomplished by combining the analysis of single image frames with the analysis of consecutive image frames; the analysis of a single image frame is performed every ten full-size images. The information model obtains the information related to the object, such as time to collision for the object vehicle and relative distance from the traffic signs. Experimental results demonstrated a robust and accurate system for real-time object detection and recognition over thousands of image frames.

  10. Comparison of Unsupervised Vegetation Classification Methods from Vhr Images after Shadows Removal by Innovative Algorithms

    NASA Astrophysics Data System (ADS)

    Movia, A.; Beinat, A.; Crosilla, F.

    2015-04-01

    The recognition of vegetation by the analysis of very high resolution (VHR) aerial images provides meaningful information about environmental features; nevertheless, VHR images frequently contain shadows that generate significant problems for the classification of the image components and for the extraction of the needed information. The aim of this research is to classify, from VHR aerial images, vegetation involved in the balance process of the environmental biochemical cycle, and to discriminate it with respect to urban and agricultural features. Three classification algorithms have been experimented in order to better recognize vegetation, and compared to NDVI index; unfortunately all these methods are conditioned by the presence of shadows on the images. Literature presents several algorithms to detect and remove shadows in the scene: most of them are based on the RGB to HSI transformations. In this work some of them have been implemented and compared with one based on RGB bands. Successively, in order to remove shadows and restore brightness on the images, some innovative algorithms, based on Procrustes theory, have been implemented and applied. Among these, we evaluate the capability of the so called "not-centered oblique Procrustes" and "anisotropic Procrustes" methods to efficiently restore brightness with respect to a linear correlation correction based on the Cholesky decomposition. Some experimental results obtained by different classification methods after shadows removal carried out with the innovative algorithms are presented and discussed.

  11. A ganglion-cell-based primary image representation method and its contribution to object recognition

    NASA Astrophysics Data System (ADS)

    Wei, Hui; Dai, Zhi-Long; Zuo, Qing-Song

    2016-10-01

    A visual stimulus is represented by the biological visual system at several levels: in order from low to high, these are photoreceptor cells, ganglion cells (GCs), lateral geniculate nucleus cells, and visual cortical neurons. Retinal GCs at the early level need to represent raw data only once, yet they serve a wide range of diverse requests from different vision-based tasks. This means the information representation at this level is general and not task-specific. Neurobiological findings have attributed this universal adaptation to the GCs' receptive field (RF) mechanisms. For the purpose of developing a highly efficient image representation method that can facilitate information processing and interpretation at later stages, here we design a computational model to simulate the GC's non-classical RF. This new image representation method can extract major structural features from raw data and is consistent with other statistical measures of the image. Based on the new representation, the performance of other state-of-the-art algorithms in contour detection and segmentation can be improved remarkably. This work concludes that applying a sophisticated representation scheme at an early stage is an efficient and promising strategy in visual information processing.

  12. Novel Approaches to Improve Iris Recognition System Performance Based on Local Quality Evaluation and Feature Fusion

    PubMed Central

    2014-01-01

    For building a new iris template, this paper proposes a strategy to fuse different portions of iris based on machine learning method to evaluate local quality of iris. There are three novelties compared to previous work. Firstly, the normalized segmented iris is divided into multitracks and then each track is estimated individually to analyze the recognition accuracy rate (RAR). Secondly, six local quality evaluation parameters are adopted to analyze texture information of each track. Besides, particle swarm optimization (PSO) is employed to get the weights of these evaluation parameters and corresponding weighted coefficients of different tracks. Finally, all tracks' information is fused according to the weights of different tracks. The experimental results based on subsets of three public and one private iris image databases demonstrate three contributions of this paper. (1) Our experimental results prove that partial iris image cannot completely replace the entire iris image for iris recognition system in several ways. (2) The proposed quality evaluation algorithm is a self-adaptive algorithm, and it can automatically optimize the parameters according to iris image samples' own characteristics. (3) Our feature information fusion strategy can effectively improve the performance of iris recognition system. PMID:24693243

  13. Novel approaches to improve iris recognition system performance based on local quality evaluation and feature fusion.

    PubMed

    Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; Chen, Huiling; He, Fei; Pang, Yutong

    2014-01-01

    For building a new iris template, this paper proposes a strategy to fuse different portions of iris based on machine learning method to evaluate local quality of iris. There are three novelties compared to previous work. Firstly, the normalized segmented iris is divided into multitracks and then each track is estimated individually to analyze the recognition accuracy rate (RAR). Secondly, six local quality evaluation parameters are adopted to analyze texture information of each track. Besides, particle swarm optimization (PSO) is employed to get the weights of these evaluation parameters and corresponding weighted coefficients of different tracks. Finally, all tracks' information is fused according to the weights of different tracks. The experimental results based on subsets of three public and one private iris image databases demonstrate three contributions of this paper. (1) Our experimental results prove that partial iris image cannot completely replace the entire iris image for iris recognition system in several ways. (2) The proposed quality evaluation algorithm is a self-adaptive algorithm, and it can automatically optimize the parameters according to iris image samples' own characteristics. (3) Our feature information fusion strategy can effectively improve the performance of iris recognition system.
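
    The final fusion step described above reduces to a weighted combination of per-track scores; the sketch below illustrates that step only, with placeholder scores, weights, and threshold (the actual weights would come from the PSO-optimised quality evaluation).

        # Sketch of the track-fusion step: each iris track yields a matching score,
        # per-track weights reflect local quality, and the decision uses the
        # weighted sum of the scores.
        import numpy as np

        track_scores = np.array([0.71, 0.65, 0.80, 0.58])   # per-track match scores
        track_weights = np.array([0.35, 0.15, 0.40, 0.10])  # e.g. from PSO, sum to 1

        fused_score = float(np.dot(track_weights, track_scores))
        is_match = fused_score > 0.6                         # hypothetical threshold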

  14. What automated age estimation of hand and wrist MRI data tells us about skeletal maturation in male adolescents.

    PubMed

    Urschler, Martin; Grassegger, Sabine; Štern, Darko

    2015-01-01

    Age estimation of individuals is important in human biology and has various medical and forensic applications. Recent interest in MR-based methods aims to investigate alternatives for established methods involving ionising radiation. Automatic, software-based methods additionally promise improved estimation objectivity. To investigate how informative automatically selected image features are regarding their ability to discriminate age, by exploring a recently proposed software-based age estimation method for MR images of the left hand and wrist. One hundred and two MR datasets of left hand images are used to evaluate age estimation performance, consisting of bone and epiphyseal gap volume localisation, computation of one age regression model per bone mapping image features to age and fusion of individual bone age predictions to a final age estimate. Quantitative results of the software-based method show an age estimation performance with a mean absolute difference of 0.85 years (SD = 0.58 years) to chronological age, as determined by a cross-validation experiment. Qualitatively, it is demonstrated how feature selection works and which image features of skeletal maturation are automatically chosen to model the non-linear regression function. Feasibility of automatic age estimation based on MRI data is shown and selected image features are found to be informative for describing anatomical changes during physical maturation in male adolescents.

  15. Residual translation compensations in radar target narrowband imaging based on trajectory information

    NASA Astrophysics Data System (ADS)

    Yue, Wenjue; Peng, Bo; Wei, Xizhang; Li, Xiang; Liao, Dongping

    2018-05-01

    High-velocity translation results in defocused scattering centers in radar imaging. In this paper, we propose a Residual Translation Compensation (RTC) method based on target trajectory information to eliminate translation effects in radar imaging. In reality, translation cannot simply be regarded as uniformly accelerated motion, so prior knowledge of the target trajectory is introduced to enhance compensation precision. First, the two-body orbit model is used to compute the radial distance. Then, stepwise compensations are applied to eliminate the residual propagation delay based on a conjugate multiplication method. Finally, tomography is used to confirm the validity of the method. Compared with the translation parameter estimation method based on the spectral peak of the conjugate-multiplied signal, the proposed RTC method yields a better tomography result. When the signal-to-noise ratio (SNR) of the radar echo signal is 4 dB, the scattering centers can still be extracted clearly.

  16. Distributed Storage Algorithm for Geospatial Image Data Based on Data Access Patterns.

    PubMed

    Pan, Shaoming; Li, Yongkai; Xu, Zhengquan; Chong, Yanwen

    2015-01-01

    Declustering techniques are widely used in distributed environments to reduce query response time through parallel I/O by splitting large files into several small blocks and then distributing those blocks among multiple storage nodes. Unfortunately, however, many small geospatial image data files cannot be further split for distributed storage. In this paper, we propose a complete theoretical system for the distributed storage of small geospatial image data files based on mining the access patterns of geospatial image data using their historical access log information. First, an algorithm is developed to construct an access correlation matrix based on the analysis of the log information, which reveals the patterns of access to the geospatial image data. Then, a practical heuristic algorithm is developed to determine a reasonable solution based on the access correlation matrix. Finally, a number of comparative experiments are presented, demonstrating that our algorithm displays a higher total parallel access probability than those of other algorithms by approximately 10-15% and that the performance can be further improved by more than 20% by simultaneously applying a copy storage strategy. These experiments show that the algorithm can be applied in distributed environments to help realize parallel I/O and thereby improve system performance.
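
    The first step described above (building an access-correlation matrix from the historical log) can be sketched as below; the log format and tile names are hypothetical, and the co-access count within a session is used as a simple correlation measure.

        # Sketch of building an access-correlation structure from a historical
        # access log: count how often two image tiles are requested in the same
        # session. Highly correlated tiles should be placed on different storage
        # nodes so they can be fetched in parallel.
        from collections import defaultdict
        from itertools import combinations

        log_sessions = [                       # each entry: tiles accessed in one session
            ["tile_a", "tile_b", "tile_c"],
            ["tile_a", "tile_c"],
            ["tile_b", "tile_d"],
        ]

        correlation = defaultdict(int)
        for session in log_sessions:
            for t1, t2 in combinations(sorted(set(session)), 2):
                correlation[(t1, t2)] += 1

        print(correlation[("tile_a", "tile_c")])   # -> 2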

  17. Context-based automated defect classification system using multiple morphological masks

    DOEpatents

    Gleason, Shaun S.; Hunt, Martin A.; Sari-Sarraf, Hamed

    2002-01-01

    Automatic detection of defects during the fabrication of semiconductor wafers is largely automated, but the classification of those defects is still performed manually by technicians. This invention includes novel digital image analysis techniques that generate unique feature vector descriptions of semiconductor defects as well as classifiers that use these descriptions to automatically categorize the defects into one of a set of pre-defined classes. Feature extraction techniques based on multiple-focus images, multiple-defect mask images, and segmented semiconductor wafer images are used to create unique feature-based descriptions of the semiconductor defects. These feature-based defect descriptions are subsequently classified by a defect classifier into categories that depend on defect characteristics and defect contextual information, that is, the semiconductor process layer(s) with which the defect comes in contact. At the heart of the system is a knowledge database that stores and distributes historical semiconductor wafer and defect data to guide the feature extraction and classification processes. In summary, this invention takes as its input a set of images containing semiconductor defect information, and generates as its output a classification for the defect that describes not only the defect itself, but also the location of that defect with respect to the semiconductor process layers.

  18. Multiframe super resolution reconstruction method based on light field angular images

    NASA Astrophysics Data System (ADS)

    Zhou, Shubo; Yuan, Yan; Su, Lijuan; Ding, Xiaomin; Wang, Jichao

    2017-12-01

    The plenoptic camera can directly obtain 4-dimensional light field information from a 2-dimensional sensor. However, based on the sampling theorem, the spatial resolution is greatly limited by the microlenses. In this paper, we present a method of reconstructing high-resolution images from the angular images. First, the ray tracing method is used to model the telecentric-based light field imaging process. Then, we analyze the subpixel shifts between the angular images extracted from the defocused light field data and the blur in the angular images. According to the analysis above, we construct the observation model from the ideal high-resolution image to the angular images. Applying the regularized super resolution method, we can obtain the super resolution result with a magnification ratio of 8. The results demonstrate the effectiveness of the proposed observation model.

  19. Snapshot hyperspectral imaging probe with principal component analysis and confidence ellipse for classification

    NASA Astrophysics Data System (ADS)

    Lim, Hoong-Ta; Murukeshan, Vadakke Matham

    2017-06-01

    Hyperspectral imaging combines imaging and spectroscopy to provide detailed spectral information for each spatial point in the image. This gives a three-dimensional spatial-spatial-spectral datacube with hundreds of spectral images. Probe-based hyperspectral imaging systems have been developed so that they can be used in regions where conventional table-top platforms would find it difficult to access. A fiber bundle, which is made up of specially-arranged optical fibers, has recently been developed and integrated with a spectrograph-based hyperspectral imager. This forms a snapshot hyperspectral imaging probe, which is able to form a datacube using the information from each scan. Compared to the other configurations, which require sequential scanning to form a datacube, the snapshot configuration is preferred in real-time applications where motion artifacts and pixel misregistration can be minimized. Principal component analysis is a dimension-reducing technique that can be applied in hyperspectral imaging to convert the spectral information into uncorrelated variables known as principal components. A confidence ellipse can be used to define the region of each class in the principal component feature space and for classification. This paper demonstrates the use of the snapshot hyperspectral imaging probe to acquire data from samples of different colors. The spectral library of each sample was acquired and then analyzed using principal component analysis. Confidence ellipse was then applied to the principal components of each sample and used as the classification criteria. The results show that the applied analysis can be used to perform classification of the spectral data acquired using the snapshot hyperspectral imaging probe.
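
    A minimal sketch of the PCA-plus-confidence-ellipse classification idea is given below; the spectra and labels are synthetic placeholders, the 95% level is an assumption, and the ellipse test is implemented as a Mahalanobis-distance check against the chi-square boundary.

        # Sketch of PCA + confidence-ellipse classification: project spectra onto
        # the first two principal components, fit one ellipse per class from the
        # class mean and covariance, and assign new spectra to the first class
        # whose ellipse contains them.
        import numpy as np
        from scipy.stats import chi2
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        spectra = rng.random((200, 64))                     # placeholder spectral library
        labels = rng.integers(0, 3, size=200)               # three colour classes

        pca = PCA(n_components=2)
        scores = pca.fit_transform(spectra)

        threshold = chi2.ppf(0.95, df=2)                    # 95% confidence boundary
        ellipses = {}
        for c in np.unique(labels):
            pts = scores[labels == c]
            ellipses[c] = (pts.mean(axis=0), np.linalg.inv(np.cov(pts, rowvar=False)))

        def classify(spectrum):
            s = pca.transform(spectrum.reshape(1, -1))[0]
            for c, (mu, inv_cov) in ellipses.items():
                if (s - mu) @ inv_cov @ (s - mu) <= threshold:
                    return c
            return None                                     # outside all ellipses

        print(classify(spectra[0]))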

  20. Multisource image fusion method using support value transform.

    PubMed

    Zheng, Sheng; Shi, Wen-Zhong; Liu, Jian; Zhu, Guang-Xi; Tian, Jin-Wen

    2007-07-01

    With the development of numerous imaging sensors, many images can be captured simultaneously by various sensors. However, there are many scenarios in which no single sensor can give the complete picture. Image fusion is an important approach to solving this problem, producing a single image that preserves all relevant information from a set of different sensors. In this paper, we propose a new image fusion method using the support value transform, which uses support values to represent the salient features of an image. This is based on the fact that, in support vector machines (SVMs), data points with larger support values have a physical meaning in the sense that they contribute relatively more to the SVM model. The mapped least squares SVM (mapped LS-SVM) is used to efficiently compute the support values of an image. The support value analysis is developed using a series of multiscale support value filters, which are obtained by filling zeros into the basic support value filter deduced from the mapped LS-SVM so as to match the resolution of the desired level. Compared with widely used image fusion methods such as the Laplacian pyramid and discrete wavelet transform methods, the proposed method is an undecimated transform-based approach. The fusion experiments are undertaken on multisource images. The results demonstrate that the proposed approach is effective and superior to conventional image fusion methods in terms of the pertinent quantitative fusion evaluation indexes, such as the quality of visual information (Q(AB/F)), the mutual information, etc.
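
    The zero-filling construction of the multiscale filters can be sketched as below; this illustrates only the undecimated, à-trous-style expansion of a base filter, using a generic smoothing kernel as a stand-in for the mapped LS-SVM support value filter, so the filter choice and level count are assumptions.

        # Sketch of the multiscale filter construction: the basic filter is expanded
        # by inserting zeros between its taps so that every level keeps the full
        # image resolution (an undecimated scheme), and each level's detail plane
        # plays the role of the support-value plane used for fusion.
        import numpy as np
        from scipy.ndimage import convolve

        def dilate_filter(base, level):
            """Insert 2**level - 1 zeros between the taps of a 2-D base filter."""
            step = 2 ** level
            size = (np.array(base.shape) - 1) * step + 1
            dilated = np.zeros(size)
            dilated[::step, ::step] = base
            return dilated

        base = np.outer([1, 4, 6, 4, 1], [1, 4, 6, 4, 1]) / 256.0   # generic kernel
        image = np.random.rand(128, 128)

        details, approx = [], image
        for level in range(3):
            smooth = convolve(approx, dilate_filter(base, level), mode="nearest")
            details.append(approx - smooth)     # detail plane at this scale
            approx = smooth
        # Fusion would then pick, per pixel, the detail coefficients with larger
        # magnitude from each source image and add back the final approximation.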
