Image compression system and method having optimized quantization tables
NASA Technical Reports Server (NTRS)
Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)
1998-01-01
A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also provided are methods for generating a rate-distortion-optimal quantization table, for performing discrete cosine transform-based digital image compression, and for operating a discrete cosine transform-based digital image compression and decompression system.
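The patent abstract describes the gathering and computing stages only functionally. As a rough illustration, the following Python sketch gathers 8x8 block DCT statistics and computes per-coefficient distortion and rate arrays for a set of candidate quantization steps; the block size, the squared-error distortion measure, the entropy-based rate proxy, and all names are assumptions for illustration, not the patented method.

import numpy as np
from scipy.fftpack import dct

def block_dct_coeffs(image, bs=8):
    # Gathering mechanism: DCT coefficients of every bs x bs block.
    h, w = image.shape
    blocks = []
    for i in range(0, h - h % bs, bs):
        for j in range(0, w - w % bs, bs):
            b = image[i:i + bs, j:j + bs].astype(float) - 128.0
            blocks.append(dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho'))
    return np.array(blocks)                       # shape (n_blocks, bs, bs)

def rate_distortion_arrays(coeffs, q_values):
    # Computing mechanism: distortion (MSE) and rate (entropy estimate)
    # for every coefficient position and every candidate step size.
    n, bs, _ = coeffs.shape
    D = np.zeros((bs, bs, len(q_values)))
    R = np.zeros((bs, bs, len(q_values)))
    for qi, q in enumerate(q_values):
        quantized = np.round(coeffs / q)
        D[:, :, qi] = np.mean((coeffs - quantized * q) ** 2, axis=0)
        for u in range(bs):
            for v in range(bs):
                _, counts = np.unique(quantized[:, u, v], return_counts=True)
                p = counts / n
                R[u, v, qi] = -np.sum(p * np.log2(p))   # bits per coefficient
    return D, R

A dynamic-programming or Lagrangian search over the resulting D and R arrays could then select, for each coefficient position, the step size that minimizes rate subject to a distortion budget, which is the role the abstract assigns to the dynamic programming mechanism.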
Digital Image Compression Using Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Serra-Ricart, M.; Garrido, L.; Gaitan, V.; Aloy, A.
1993-01-01
The problem of storing, transmitting, and manipulating digital images is considered. Because of the file sizes involved, large amounts of digitized image information are becoming common in modern projects. Our goal is to describe an image compression transform coder based on artificial neural network techniques (NNCTC). A comparison of the compression results obtained from digital astronomical images by the NNCTC and the method used in the compression of the digitized sky survey from the Space Telescope Science Institute based on the H-transform is performed in order to assess the reliability of the NNCTC.
Multipurpose image watermarking algorithm based on multistage vector quantization.
Lu, Zhe-Ming; Xu, Dian-Guo; Sun, Sheng-He
2005-06-01
The rapid growth of digital multimedia and Internet technologies has made copyright protection, copy protection, and integrity verification three important issues in the digital world. To solve these problems, the digital watermarking technique has been presented and widely researched. Traditional watermarking algorithms are mostly based on discrete transform domains, such as the discrete cosine transform, discrete Fourier transform (DFT), and discrete wavelet transform (DWT). Most of these algorithms are good for only one purpose. Recently, some multipurpose digital watermarking methods have been presented, which can achieve the goal of content authentication and copyright protection simultaneously. However, they are based on DWT or DFT. Lately, several robust watermarking schemes based on vector quantization (VQ) have been presented, but they can only be used for copyright protection. In this paper, we present a novel multipurpose digital image watermarking method based on the multistage vector quantizer structure, which can be applied to image authentication and copyright protection. In the proposed method, the semi-fragile watermark and the robust watermark are embedded in different VQ stages using different techniques, and both of them can be extracted without the original image. Simulation results demonstrate the effectiveness of our algorithm in terms of robustness and fragility.
Programmable Remapper with Single Flow Architecture
NASA Technical Reports Server (NTRS)
Fisher, Timothy E. (Inventor)
1993-01-01
An apparatus for image processing comprising a camera for receiving an original visual image and transforming the original visual image into an analog image, a first converter for transforming the analog image of the camera to a digital image, a processor having a single flow architecture for receiving the digital image and producing, with a single algorithm, an output image, a second converter for transforming the digital image of the processor to an analog image, and a viewer for receiving the analog image, transforming the analog image into a transformed visual image for observing the transformations applied to the original visual image. The processor comprises one or more subprocessors for the parallel reception of a digital image for producing an output matrix of the transformed visual image. More particularly, the processor comprises a plurality of subprocessors for receiving in parallel and transforming the digital image for producing a matrix of the transformed visual image, and an output interface means for receiving the respective portions of the transformed visual image from the respective subprocessor for producing an output matrix of the transformed visual image.
Gray-scale transform and evaluation for digital x-ray chest images on CRT monitor
NASA Astrophysics Data System (ADS)
Furukawa, Isao; Suzuki, Junji; Ono, Sadayasu; Kitamura, Masayuki; Ando, Yutaka
1997-04-01
In this paper, an experimental evaluation of a super high definition (SHD) imaging system for digital x-ray chest images is presented. The SHD imaging system is proposed as a platform for integrating conventional image media. We are involved in the use of SHD images in the total digitizing of medical records that include chest x-rays and pathological microscopic images, both of which demand the highest level of quality among the various types of medical images. SHD images use progressive scanning and have a spatial resolution of 2000 by 2000 pixels or more and a temporal resolution (frame rate) of 60 frames/sec or more. For displaying medical x-ray images on a CRT, we derived gray scale transform characteristics based on radiologists' comments during the experiment, and elucidated the relationship between that gray scale transform and the linearization transform for maintaining the linear relationship with the luminance of film on a light box (luminance linear transform). We then carried out viewing experiments based on a five-stage evaluation. Nine radiologists participated in our experiment, and the ten cases evaluated included pulmonary fibrosis, lung cancer, and pneumonia. The experimental results indicated that conventional film images and those on super high definition CRT monitors have nearly the same quality. They also show that the gray scale transform for CRT images determined according to radiologists' comments agrees with the luminance linear transform in the high luminance region. In the low luminance region, it was found that the gray scale transform had the characteristics of level expansion to increase the number of levels that can be expressed.
Medical Image Authentication Using DPT Watermarking: A Preliminary Attempt
NASA Astrophysics Data System (ADS)
Wong, M. L. Dennis; Goh, Antionette W.-T.; Chua, Hong Siang
Secure authentication of digital medical image content provides great value to the e-Health community and medical insurance industries. Fragile watermarking has been proposed to provide the mechanism to authenticate digital medical images securely. Transform-domain watermarking is typically slower than spatial-domain watermarking owing to the overhead of calculating coefficients. In this paper, we propose a new Discrete Pascal Transform based watermarking technique. Preliminary experimental results show authentication capability. Possible improvements on the proposed scheme are also presented before the conclusions.
Momeni, Saba; Pourghassem, Hossein
2014-08-01
Recently, image fusion has taken a prominent role in medical image processing and is useful for diagnosing and treating many diseases. Digital subtraction angiography is one of the most widely used imaging techniques for diagnosing brain vascular diseases and for radiosurgery of the brain. This paper proposes an automatic fuzzy-based multi-temporal fusion algorithm for 2-D digital subtraction angiography images. In this algorithm, for blood vessel map extraction, the valuable frames of the brain angiography video are automatically determined to form the digital subtraction angiography images, based on a novel definition of the vessel dispersion generated by the injected contrast material. Our proposed fusion scheme contains different fusion methods for high and low frequency contents based on the coefficient characteristics of the wrapping-based second-generation curvelet transform and a novel content selection strategy. Our proposed content selection strategy is defined based on the sample correlation of the curvelet transform coefficients. In our proposed fuzzy-based fusion scheme, the selection of curvelet coefficients is optimized by applying weighted averaging and maximum selection rules for the high frequency coefficients. For low frequency coefficients, the maximum selection rule based on a local energy criterion is applied for better visual perception. Our proposed fusion algorithm is evaluated on a brain angiography image dataset consisting of one hundred 2-D internal carotid rotational angiography videos. The obtained results demonstrate the effectiveness and efficiency of our proposed fusion algorithm in comparison with common and basic fusion algorithms.
Lahmiri, Salim; Gargour, Christian S; Gabrea, Marcel
2014-10-01
An automated diagnosis system that uses the complex continuous wavelet transform (CWT) to process retina digital images and support vector machines (SVMs) for classification purposes is presented. In particular, each retina image is transformed into two one-dimensional signals by concatenating image rows and columns separately. The mathematical norm of the phase angles found in each one-dimensional signal at each level of CWT decomposition is relied on to characterise the texture of normal images against abnormal images affected by exudates, drusen and microaneurysms. The leave-one-out cross-validation method was adopted to conduct experiments, and the results from the SVM show that the proposed approach gives better results than those obtained by other methods based on the correct classification rate, sensitivity and specificity.
Discrete shearlet transform: faithful digitization concept and its applications
NASA Astrophysics Data System (ADS)
Lim, Wang-Q.
2011-09-01
Over the past years, various representation systems which sparsely approximate functions governed by anisotropic features, such as edges in images, have been proposed. Alongside the theoretical development of these systems, algorithmic realizations of the associated transforms were provided. However, one of the most common shortcomings of these frameworks is the lack of a unified treatment of the continuum and digital worlds, i.e., allowing a digital theory to be a natural digitization of the continuum theory. Shearlets were introduced as a means to sparsely encode anisotropic singularities of multivariate data while providing a unified treatment of the continuous and digital realms. In this paper, we introduce a discrete framework which allows a faithful digitization of the continuum domain shearlet transform based on compactly supported shearlets. Finally, we show numerical experiments demonstrating the potential of the discrete shearlet transform in several image processing applications.
The wavelet/scalar quantization compression standard for digital fingerprint images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, J.N.; Brislawn, C.M.
1994-04-01
A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.
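The WSQ specification fixes particular biorthogonal filters, a 64-subband decomposition, quantizer design and Huffman coding; none of that is reproduced here. The sketch below only illustrates the general idea the abstract names, a wavelet decomposition followed by uniform scalar quantization of each subband, using PyWavelets with an arbitrary wavelet and an illustrative step-size rule.

import numpy as np
import pywt

def wsq_like_quantize(image, wavelet='bior4.4', levels=4, base_step=8.0):
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    quantized = [np.round(coeffs[0] / base_step)]            # approximation band
    for lvl, bands in enumerate(coeffs[1:], start=1):
        step = base_step / (2 ** (levels - lvl))             # coarser steps at finer scales (illustrative rule)
        quantized.append(tuple(np.round(c / step) for c in bands))
    return quantized

def wsq_like_dequantize(quantized, wavelet='bior4.4', levels=4, base_step=8.0):
    coeffs = [quantized[0] * base_step]
    for lvl, bands in enumerate(quantized[1:], start=1):
        step = base_step / (2 ** (levels - lvl))
        coeffs.append(tuple(c * step for c in bands))
    return pywt.waverec2(coeffs, wavelet)

In a real codec the quantized indices would then be entropy coded (Huffman in WSQ); the ratio of the original to the encoded bit count gives compression ratios of the kind quoted in the abstract.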
Skin image retrieval using Gabor wavelet texture feature.
Ou, X; Pan, W; Zhang, X; Xiao, P
2016-12-01
Skin imaging plays a key role in many clinical studies. We have used many skin imaging techniques, including the recently developed capacitive contact skin imaging based on fingerprint sensors. The aim of this study was to develop an effective skin image retrieval technique using the Gabor wavelet transform, which can be used on different types of skin images, but with a special focus on skin capacitive contact images. Content-based image retrieval (CBIR) is a useful technology to retrieve stored images from a database by supplying query images. In a typical CBIR, images are retrieved based on colour, shape, texture, etc. In this study, texture features are used for retrieving skin images, and the Gabor wavelet transform is used for texture feature description and extraction. The results show that the Gabor wavelet texture features can work efficiently on different types of skin images. Although the Gabor wavelet transform is slower compared with other image retrieval techniques, such as principal component analysis (PCA) and the grey-level co-occurrence matrix (GLCM), the Gabor wavelet transform is the best for retrieving skin capacitive contact images and facial images with different orientations. The Gabor wavelet transform can also work well on facial images with different expressions and skin cancer/disease images. We have developed an effective skin image retrieval method based on the Gabor wavelet transform, which is useful for retrieving different types of images, namely digital colour face images, digital colour skin cancer and skin disease images, and particularly greyscale skin capacitive contact images. The Gabor wavelet transform can also be potentially useful for face recognition (with different orientations and expressions) and skin cancer/disease diagnosis. © 2016 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
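The study's own parameters are not given in the abstract; a rough sketch of Gabor-wavelet texture description for retrieval, using a small bank of orientations and frequencies and taking the mean and standard deviation of each filtered magnitude as the feature vector, might look like the following. All kernel parameters are illustrative assumptions.

import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(frequency, theta, sigma, size=31):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return envelope * np.exp(2j * np.pi * frequency * xr)    # complex Gabor carrier

def gabor_features(image, frequencies=(0.1, 0.2, 0.3), n_orient=4, sigma=4.0):
    feats = []
    for f in frequencies:
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            response = fftconvolve(image.astype(float),
                                   gabor_kernel(f, theta, sigma), mode='same')
            mag = np.abs(response)
            feats.extend([mag.mean(), mag.std()])             # texture descriptor entries
    return np.array(feats)

Retrieval then amounts to ranking database images by the distance (e.g. Euclidean) between their feature vectors and that of the query image.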
A study on multiresolution lossless video coding using inter/intra frame adaptive prediction
NASA Astrophysics Data System (ADS)
Nakachi, Takayuki; Sawabe, Tomoko; Fujii, Tetsuro
2003-06-01
Lossless video coding is required in the fields of archiving and editing digital cinema or digital broadcasting contents. This paper combines a discrete wavelet transform and adaptive inter/intra-frame prediction in the wavelet transform domain to create multiresolution lossless video coding. The multiresolution structure offered by the wavelet transform facilitates interchange among several video source formats such as Super High Definition (SHD) images, HDTV, SDTV, and mobile applications. Adaptive inter/intra-frame prediction is an extension of JPEG-LS, a state-of-the-art lossless still image compression standard. Based on the image statistics of the wavelet transform domains in successive frames, inter/intra frame adaptive prediction is applied to the appropriate wavelet transform domain. This adaptation offers superior compression performance. This is achieved with low computational cost and no increase in additional information. Experiments on digital cinema test sequences confirm the effectiveness of the proposed algorithm.
Geometric processing of digital images of the planets
NASA Technical Reports Server (NTRS)
Edwards, Kathleen
1987-01-01
New procedures and software have been developed for geometric transformation of images to support digital cartography of the planets. The procedures involve the correction of spacecraft camera orientation of each image with the use of ground control and the transformation of each image to a Sinusoidal Equal-Area map projection with an algorithm which allows the number of transformation calculations to vary as the distortion varies within the image. When the distortion is low in an area of an image, few transformation computations are required, and most pixels can be interpolated. When distortion is extreme, the location of each pixel is computed. Mosaics are made of these images and stored as digital databases. Completed Sinusoidal databases may be used for digital analysis and registration with other spatial data. They may also be reproduced as published image maps by digitally transforming them to appropriate map projections.
Digital image transformation and rectification of spacecraft and radar images
NASA Technical Reports Server (NTRS)
Wu, S. S. C.
1985-01-01
The application of digital processing techniques to spacecraft television pictures and radar images is discussed. The use of digital rectification to produce contour maps from spacecraft pictures is described; images with azimuth and elevation angles are converted into point-perspective frame pictures. The digital correction of the slant angle of radar images to ground scale is examined. The development of orthophoto and stereoscopic shaded relief maps from digital terrain and digital image data is analyzed. Digital image transformations and rectifications are utilized on Viking Orbiter and Lander pictures of Mars.
Digital watermarking algorithm research of color images based on quaternion Fourier transform
NASA Astrophysics Data System (ADS)
An, Mali; Wang, Weijiang; Zhao, Zhen
2013-10-01
A watermarking algorithm for color images based on the quaternion Fourier transform (QFFT) and an improved quantization index modulation (QIM) algorithm is proposed in this paper. The original image is transformed by the QFFT, the watermark image is processed by compression and quantization coding, and then the processed watermark image is embedded into the components of the transformed original image. The scheme achieves embedding and blind extraction of the watermark image. The experimental results show that the watermarking algorithm based on the improved QIM algorithm with distortion compensation achieves a good tradeoff between invisibility and robustness, and better robustness against Gaussian noise, salt-and-pepper noise, JPEG compression, cropping, filtering and image-enhancement attacks than the traditional QIM algorithm.
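The paper's improvement (distortion compensation in the quaternion Fourier domain) is its own contribution; the baseline dither-modulation QIM that it builds on can, however, be sketched in a few lines. The step size delta and the choice of host coefficients are assumptions.

import numpy as np

def qim_embed(coeffs, bits, delta=8.0):
    # Quantize each host coefficient onto one of two interleaved lattices,
    # selected by the watermark bit (0 -> offset 0, 1 -> offset delta/2).
    dither = np.where(bits == 0, 0.0, delta / 2.0)
    return np.round((coeffs - dither) / delta) * delta + dither

def qim_extract(coeffs, delta=8.0):
    # Blind extraction: pick the lattice closest to each received coefficient.
    d0 = np.abs(coeffs - np.round(coeffs / delta) * delta)
    d1 = np.abs(coeffs - (np.round((coeffs - delta / 2) / delta) * delta + delta / 2))
    return (d1 < d0).astype(int)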
A novel image watermarking method based on singular value decomposition and digital holography
NASA Astrophysics Data System (ADS)
Cai, Zhishan
2016-10-01
According to information optics theory, a novel watermarking method based on Fourier-transform digital holography and singular value decomposition (SVD) is proposed in this paper. First of all, a watermark image is converted to a digital hologram using the Fourier transform. After that, the original image is divided into many non-overlapping blocks. All the blocks and the hologram are decomposed using SVD. The singular value components of the hologram are then embedded into the singular value components of each block using an addition principle. Finally, inverse SVD is carried out on the blocks and the hologram to generate the watermarked image. During watermark extraction, the watermark information embedded in each block is extracted first; an averaging operation is then carried out on the extracted information to generate the final watermark. Finally, the algorithm is simulated and, to test the watermarked image's resistance to attacks, various attack tests are carried out. The results show that the proposed algorithm has very good robustness against noise interference, image cropping, compression, brightness stretching, etc. In particular, when the image is rotated by a large angle, the watermark information can still be extracted correctly.
Copy-move forgery detection utilizing Fourier-Mellin transform log-polar features
NASA Astrophysics Data System (ADS)
Dixit, Rahul; Naskar, Ruchira
2018-03-01
In this work, we address the problem of region duplication or copy-move forgery detection in digital images, along with detection of geometric transforms (rotation and rescaling) and postprocessing-based attacks (noise, blur, and brightness adjustment). Detection of region duplication, following conventional techniques, becomes more challenging when an intelligent adversary applies such additional transforms to the duplicated regions. In this work, we utilize the Fourier-Mellin transform with log-polar mapping and a color-based segmentation technique using K-means clustering, which help us to achieve invariance to all the above forms of attacks in copy-move forgery detection of digital images. Our experimental results prove the efficiency of the proposed method and its superiority to the current state of the art.
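As a rough illustration of the rotation- and scale-invariant block descriptor idea behind Fourier-Mellin matching (the FFT magnitude is translation invariant, the log-polar mapping turns rotation and scaling into shifts, and a second FFT magnitude discards those shifts), a sketch is given below. Block size, sampling resolution and interpolation order are assumptions, and the paper's K-means segmentation stage is omitted.

import numpy as np
from scipy.ndimage import map_coordinates

def log_polar(mag, n_angles=64, n_radii=64):
    h, w = mag.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    radii = np.exp(np.linspace(0, np.log(min(cy, cx)), n_radii))
    t, r = np.meshgrid(thetas, radii)
    rows = cy + r * np.sin(t)
    cols = cx + r * np.cos(t)
    return map_coordinates(mag, [rows, cols], order=1, mode='nearest')

def fourier_mellin_descriptor(block):
    mag = np.abs(np.fft.fftshift(np.fft.fft2(block.astype(float))))
    lp = log_polar(np.log1p(mag))
    return np.abs(np.fft.fft2(lp)).ravel()

Blocks whose descriptors are near-identical under a chosen similarity measure are candidate copy-move pairs, even if one copy has been rotated or rescaled.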
NASA Technical Reports Server (NTRS)
Jackson, Deborah J. (Inventor)
1998-01-01
An analog optical encryption system based on phase scrambling of two-dimensional optical images and holographic transformation for achieving large encryption keys and high encryption speed. An enciphering interface uses a spatial light modulator for converting a digital data stream into a two dimensional optical image. The optical image is further transformed into a hologram with a random phase distribution. The hologram is converted into digital form for transmission over a shared information channel. A respective deciphering interface at a receiver reverses the encrypting process by using a phase conjugate reconstruction of the phase scrambled hologram.
The comparison between SVD-DCT and SVD-DWT digital image watermarking
NASA Astrophysics Data System (ADS)
Wira Handito, Kurniawan; Fauzi, Zulfikar; Aminy Ma’ruf, Firda; Widyaningrum, Tanti; Muslim Lhaksmana, Kemas
2018-03-01
With the Internet, anyone can publish their creations as digital data simply and inexpensively, and the data can easily be accessed by everyone. However, a problem appears when someone else claims that the creation is their property or modifies some part of it. This makes copyright protection necessary; one example is the watermarking of digital images. Applying a watermarking technique to digital data, especially images, allows the watermark to remain invisible when inserted into a carrier image: the carrier image should not undergo any noticeable decrease in quality, and the inserted image should not be affected by attacks. In this paper, watermarking is implemented on digital images using Singular Value Decomposition based on the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT), with the expectation of good watermarking performance. In this case, a trade-off occurs between the invisibility and robustness of the image watermark. In the embedding process, the watermarked image has good quality for scaling factors < 0.1. The quality of the watermarked image at decomposition level 3 is better than at levels 2 and 1. Embedding the watermark in low frequencies is robust to Gaussian blur attack, rescaling, and JPEG compression, while embedding in high frequencies is robust to Gaussian noise.
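As a rough sketch of the SVD-DWT variant compared in the paper, one common non-blind formulation embeds the watermark's singular values into those of a DWT subband; the wavelet, level, subband (LL here), and scaling factor alpha are placeholder choices, and the watermark is assumed to be the same size as the LL subband.

import numpy as np
import pywt

def svd_dwt_embed(carrier, watermark, alpha=0.05, wavelet='haar', level=1):
    coeffs = pywt.wavedec2(carrier.astype(float), wavelet, level=level)
    U, S, Vt = np.linalg.svd(coeffs[0], full_matrices=False)
    Uw, Sw, Vwt = np.linalg.svd(watermark.astype(float), full_matrices=False)
    S_marked = S + alpha * Sw                              # embed into singular values
    coeffs[0] = (U * S_marked) @ Vt
    return pywt.waverec2(coeffs, wavelet), (S, Uw, Vwt)    # side info for non-blind extraction

def svd_dwt_extract(marked, side_info, alpha=0.05, wavelet='haar', level=1):
    S, Uw, Vwt = side_info
    coeffs = pywt.wavedec2(marked.astype(float), wavelet, level=level)
    _, S_marked, _ = np.linalg.svd(coeffs[0], full_matrices=False)
    return (Uw * ((S_marked - S) / alpha)) @ Vwt           # estimated watermark

The SVD-DCT variant differs only in that DCT coefficients (often of 8x8 blocks) take the place of the DWT subband.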
Multiresolution image registration in digital x-ray angiography with intensity variation modeling.
Nejati, Mansour; Pourghassem, Hossein
2014-02-01
Digital subtraction angiography (DSA) is a widely used technique for visualization of vessel anatomy in diagnosis and treatment. However, due to unavoidable patient motions, both external and internal, the subtracted angiography images often suffer from motion artifacts that adversely affect the quality of the medical diagnosis. To cope with this problem and improve the quality of DSA images, registration algorithms are often employed before subtraction. In this paper, a novel elastic registration algorithm for registration of digital X-ray angiography images, particularly for the coronary location, is proposed. This algorithm includes a multiresolution search strategy in which a global transformation is calculated iteratively based on local searches in coarse and fine sub-image blocks. The local searches are accomplished in a differential multiscale framework which allows us to capture both large and small scale transformations. The local registration transformation also explicitly accounts for local variations in the image intensities, which are incorporated into our model as changes of local contrast and brightness. These local transformations are then smoothly interpolated using a thin-plate spline interpolation function to obtain the global model. Experimental results with several clinical datasets demonstrate the effectiveness of our algorithm in motion artifact reduction.
Memmolo, P; Finizio, A; Paturzo, M; Ferraro, P; Javidi, B
2012-05-01
A method based on spatial transformations of multiwavelength digital holograms and the correlation matching of their numerical reconstructions is proposed, with the aim of improving the superimposition of the different color reconstructed images. This method is based on an adaptive affine transform of the hologram that permits management of the physical parameters of the numerical reconstruction. In addition, we present a procedure to synthesize a single digital hologram in which three different colors are multiplexed. The optical reconstruction of the synthetic hologram by a spatial light modulator at one wavelength allows us to display all color features of the object, avoiding loss of details.
GEOMETRIC PROCESSING OF DIGITAL IMAGES OF THE PLANETS.
Edwards, Kathleen
1987-01-01
New procedures and software have been developed for geometric transformations of images to support digital cartography of the planets. The procedures involve the correction of spacecraft camera orientation of each image with the use of ground control and the transformation of each image to a Sinusoidal Equal-Area map projection with an algorithm which allows the number of transformation calculations to vary as the distortion varies within the image. When the distortion is low in an area of an image, few transformation computations are required, and most pixels can be interpolated. When distortion is extreme, the location of each pixel is computed. Mosaics are made of these images and stored as digital databases.
Digital image transformation and rectification of spacecraft and radar images
Wu, S.S.C.
1985-01-01
Digital image transformation and rectification can be described in three categories: (1) digital rectification of spacecraft pictures on workable stereoplotters; (2) digital correction of radar image geometry; and (3) digital reconstruction of shaded relief maps and perspective views including stereograms. Digital rectification can make high-oblique pictures workable on stereoplotters that would otherwise not accommodate such extreme tilt angles. It also enables panoramic line-scan geometry to be used to compile contour maps with photogrammetric plotters. Rectifications were digitally processed on both Viking Orbiter and Lander pictures of Mars as well as radar images taken by various radar systems. By merging digital terrain data with image data, perspective and three-dimensional views of Olympus Mons and Tithonium Chasma, also of Mars, are reconstructed through digital image processing. © 1985.
Image Fusion Algorithms Using Human Visual System in Transform Domain
NASA Astrophysics Data System (ADS)
Vadhi, Radhika; Swamy Kilari, Veera; Samayamantula, Srinivas Kumar
2017-08-01
The aim of digital image fusion is to combine the important visual parts from various sources to improve the visual quality of the image. The fused image has higher visual quality than any of the source images. In this paper, Human Visual System (HVS) weights are used in the transform domain to select appropriate information from the various source images and then to obtain a fused image. Mainly two steps are involved in this process. First, the DWT is applied to the registered source images. Then, qualitative sub-bands are identified using the HVS weights. Hence, qualitative sub-bands are selected from the different sources to form a high-quality HVS-based fused image. The quality of the HVS-based fused image is evaluated with general fusion metrics. The results show its superiority over state-of-the-art multiresolution transforms (MRT) such as the Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), Contourlet Transform (CT), and Non-Subsampled Contourlet Transform (NSCT) using the maximum-selection fusion rule.
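The HVS-weighted sub-band selection is the paper's contribution and is not reproduced here; the baseline DWT fusion with a maximum-selection rule, against which it is compared, can be sketched as follows (the wavelet, level and averaging of the approximation band are assumed choices, and the two source images are assumed registered and equal-sized).

import numpy as np
import pywt

def dwt_max_fuse(img_a, img_b, wavelet='db2', level=3):
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                     # approximation band: average
    for bands_a, bands_b in zip(ca[1:], cb[1:]):
        # detail bands: keep the coefficient with the larger magnitude
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(bands_a, bands_b)))
    return pywt.waverec2(fused, wavelet)

An HVS-weighted scheme of the kind described above would replace the plain max-abs rule with a choice driven by perceptual weights computed per sub-band.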
An effective detection algorithm for region duplication forgery in digital images
NASA Astrophysics Data System (ADS)
Yavuz, Fatih; Bal, Abdullah; Cukur, Huseyin
2016-04-01
Powerful image editing tools are very common and easy to use these days. This situation may enable forgeries in which information is added to or removed from digital images. In order to detect these types of forgeries, such as region duplication, we present an effective algorithm based on fixed-size block computation and the discrete wavelet transform (DWT). In this approach, the original image is divided into fixed-size blocks, and then the wavelet transform is applied for dimension reduction. Each block is processed by the Fourier transform and represented by circle regions. Four features are extracted from each block. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks are detected according to the comparison metric results. The experimental results show that the proposed algorithm offers computational efficiency due to the fixed-size circle block architecture.
Topology-Preserving Rigid Transformation of 2D Digital Images.
Ngo, Phuc; Passat, Nicolas; Kenmochi, Yukiko; Talbot, Hugues
2014-02-01
We provide conditions under which 2D digital images preserve their topological properties under rigid transformations. We consider the two most common digital topology models, namely dual adjacency and well-composedness. This paper leads to the proposal of optimal preprocessing strategies that ensure the topological invariance of images under arbitrary rigid transformations. These results and methods are proved to be valid for various kinds of images (binary, gray-level, label), thus providing generic and efficient tools, which can be used in particular in the context of image registration and warping.
Matsushima, Kyoji
2008-07-01
Rotational transformation based on coordinate rotation in Fourier space is a useful technique for simulating wave field propagation between nonparallel planes. This technique is characterized by fast computation because the transformation only requires executing a fast Fourier transform twice and a single interpolation. It is proved that the formula of the rotational transformation mathematically satisfies the Helmholtz equation. Moreover, to verify the formulation and its usefulness in wave optics, it is also demonstrated that the transformation makes it possible to reconstruct an image on arbitrarily tilted planes from a wave field captured experimentally by using digital holography.
Digital SAR processing using a fast polynomial transform
NASA Technical Reports Server (NTRS)
Butman, S.; Lipes, R.; Rubin, A.; Truong, T. K.
1981-01-01
A new digital processing algorithm based on the fast polynomial transform is developed for producing images from Synthetic Aperture Radar data. This algorithm enables the computation of the two dimensional cyclic correlation of the raw echo data with the impulse response of a point target, thereby reducing distortions inherent in one dimensional transforms. This SAR processing technique was evaluated on a general-purpose computer and an actual Seasat SAR image was produced. However, regular production runs will require a dedicated facility. It is expected that such a new SAR processing algorithm could provide the basis for a real-time SAR correlator implementation in the Deep Space Network.
Orczyk, C; Rusinek, H; Rosenkrantz, A B; Mikheev, A; Deng, F-M; Melamed, J; Taneja, S S
2013-12-01
To assess a novel method of three-dimensional (3D) co-registration of prostate cancer digital histology and in-vivo multiparametric magnetic resonance imaging (mpMRI) image sets for clinical usefulness. A software platform was developed to achieve 3D co-registration. This software was prospectively applied to three patients who underwent radical prostatectomy. Data comprised in-vivo mpMRI [T2-weighted, dynamic contrast-enhanced weighted images (DCE); apparent diffusion coefficient (ADC)], ex-vivo T2-weighted imaging, the 3D-rebuilt pathological specimen, and digital histology. Internal landmarks from zonal anatomy served as reference points for assessing co-registration accuracy and precision. Applying a method of deformable transformation based on 22 internal landmarks, an accuracy of 1.6 mm was reached in aligning T2-weighted images and the 3D-rebuilt pathological specimen, an improvement of 32% over rigid transformation (p = 0.003). The 22 zonal anatomy landmarks were more accurately mapped using deformable transformation than rigid transformation (p = 0.0008). An automatic method based on mutual information enabled automation of the process and the inclusion of perfusion and diffusion MRI images. Evaluation of co-registration accuracy using the volume overlap index (Dice index) met clinically relevant requirements, ranging from 0.81 to 0.96 for the sequences tested. Ex-vivo images of the specimen did not significantly improve co-registration accuracy. This preliminary analysis suggests that deformable transformation based on zonal anatomy landmarks is accurate in the co-registration of mpMRI and histology. Including diffusion and perfusion sequences in the same 3D space as histology provides essential additional clinical information. The ability to localize cancer in 3D space may improve targeting for image-guided biopsy, focal therapy, and disease quantification in surveillance protocols. Copyright © 2013 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Kong, Gang; Dai, Dao-Qing; Zou, Lu-Min
2008-07-01
In order to remove the artifacts of peripheral digital subtraction angiography (DSA), an affine transformation-based automatic image registration algorithm is introduced here. The whole process is described as follows: First, rectangular feature templates are constructed, centered at the Harris corners extracted from the mask image, and the motion vectors of the central feature points are estimated using template matching technology with maximum histogram energy as the similarity measure. Then the optimal parameters of the affine transformation are calculated with the matrix singular value decomposition (SVD) method. Finally, bilinear intensity interpolation is applied to the mask according to the specific affine transformation. More than 30 peripheral DSA registrations were performed with the presented algorithm; as a result, motion artifacts of the images were removed with sub-pixel precision, and the time consumption is low enough to satisfy clinical requirements. Experimental results show the efficiency and robustness of the algorithm.
Novel image encryption algorithm based on multiple-parameter discrete fractional random transform
NASA Astrophysics Data System (ADS)
Zhou, Nanrun; Dong, Taiji; Wu, Jianhua
2010-08-01
A new method of digital image encryption is presented by utilizing a new multiple-parameter discrete fractional random transform. Image encryption and decryption are performed based on the index additivity and multiple parameters of the multiple-parameter fractional random transform. The plaintext and ciphertext are respectively in the spatial domain and in the fractional domain determined by the encryption keys. The proposed algorithm can resist statistic analyses effectively. The computer simulation results show that the proposed encryption algorithm is sensitive to the multiple keys, and that it has considerable robustness, noise immunity and security.
Improved Remapping Processor For Digital Imagery
NASA Technical Reports Server (NTRS)
Fisher, Timothy E.
1991-01-01
Proposed digital image processor is an improved version of the Programmable Remapper, which performs geometric and radiometric transformations on digital images. Features include overlapping and variably sized preimages. Overcomes some of the limitations of image-warping circuit boards that implement only those geometric transformations expressible in terms of polynomials of limited order. Also overcomes limitations of the existing Programmable Remapper and is made to perform transformations at video rate.
Discrete Walsh Hadamard transform based visible watermarking technique for digital color images
NASA Astrophysics Data System (ADS)
Santhi, V.; Thangavelu, Arunkumar
2011-10-01
As the size of the Internet grows enormously, illegal manipulation of digital multimedia data has become very easy with the advancement of technology tools. In order to protect such multimedia data from unauthorized access, digital watermarking systems are used. In this paper a new Discrete Walsh Hadamard Transform based visible watermarking system is proposed. As the watermark is embedded in the transform domain, the system is robust to many signal processing attacks. Moreover, in the proposed method the watermark is embedded in a tiled manner across the full range of frequencies to make it robust to compression and cropping attacks. The robustness of the algorithm is tested against noise addition, cropping, compression, histogram equalization and resizing attacks. The experimental results show that the algorithm is robust to common signal processing attacks, and the observed peak signal-to-noise ratio (PSNR) of the watermarked image varies from 20 to 30 dB depending on the size of the watermark.
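The tiling strategy and embedding strengths are the paper's own; a minimal sketch of visible embedding in the Walsh-Hadamard domain of a single N x N block (N a power of two) is given below, with alpha and beta as placeholder weights.

import numpy as np
from scipy.linalg import hadamard

def wht2(block):
    n = block.shape[0]
    H = hadamard(n).astype(float)
    return H @ block @ H / n          # 2-D Walsh-Hadamard transform; self-inverse with this scaling

def embed_visible(block, mark_block, alpha=0.9, beta=0.1):
    # Weighted combination of host and watermark in the transform domain,
    # followed by the inverse transform (wht2 is its own inverse here).
    marked = alpha * wht2(block.astype(float)) + beta * wht2(mark_block.astype(float))
    return wht2(marked)

Tiling the watermark, as the abstract describes, repeats this per block over the whole image so that cropping any region still leaves visible marks.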
Local Subspace Classifier with Transform-Invariance for Image Classification
NASA Astrophysics Data System (ADS)
Hotta, Seiji
A family of linear subspace classifiers called the local subspace classifier (LSC) outperforms the k-nearest neighbor rule (kNN) and conventional subspace classifiers in handwritten digit classification. However, LSC suffers from very high sensitivity to image transformations because it uses projection and Euclidean distances for classification. In this paper, I present a combination of a local subspace classifier (LSC) and a tangent distance (TD) for improving the accuracy of handwritten digit recognition. In this classification rule, we can deal with transform-invariance easily because we are able to use tangent vectors to approximate transformations. However, we cannot use tangent vectors for other types of images, such as color images. Hence, kernel LSC (KLSC) is proposed for incorporating transform-invariance into LSC via kernel mapping. The performance of the proposed methods is verified with experiments on handwritten digit and color image classification.
Digital SAR processing using a fast polynomial transform
NASA Technical Reports Server (NTRS)
Truong, T. K.; Lipes, R. G.; Butman, S. A.; Reed, I. S.; Rubin, A. L.
1984-01-01
A new digital processing algorithm based on the fast polynomial transform is developed for producing images from Synthetic Aperture Radar data. This algorithm enables the computation of the two dimensional cyclic correlation of the raw echo data with the impulse response of a point target, thereby reducing distortions inherent in one dimensional transforms. This SAR processing technique was evaluated on a general-purpose computer and an actual Seasat SAR image was produced. However, regular production runs will require a dedicated facility. It is expected that such a new SAR processing algorithm could provide the basis for a real-time SAR correlator implementation in the Deep Space Network. Previously announced in STAR as N82-11295
The FBI compression standard for digitized fingerprint images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, C.M.; Bradley, J.N.; Onyshczak, R.J.
1996-10-01
The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
FBI compression standard for digitized fingerprint images
NASA Astrophysics Data System (ADS)
Brislawn, Christopher M.; Bradley, Jonathan N.; Onyshczak, Remigius J.; Hopper, Thomas
1996-11-01
The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
A new Watermarking System based on Discrete Cosine Transform (DCT) in color biometric images.
Dogan, Sengul; Tuncer, Turker; Avci, Engin; Gulten, Arif
2012-08-01
This paper recommends a biometric color image hiding approach, a watermarking system based on the Discrete Cosine Transform (DCT), which is used to protect the security and integrity of transmitted biometric color images. Watermarking is a very important information hiding technique for digital objects (audio, video, color images, gray images) and has been commonly used alongside developing technology in the last few years. One of the common methods used for hiding information in image files is the DCT method, which operates in the frequency domain. In this study, DCT methods are used to embed watermark data into face images without corrupting their features.
A new approach to pre-processing digital image for wavelet-based watermark
NASA Astrophysics Data System (ADS)
Agreste, Santa; Andaloro, Guido
2008-11-01
The growth of the Internet has increased the phenomenon of digital piracy of multimedia objects such as software, images, video, audio and text. It is therefore strategic to identify and develop methods and numerical algorithms, stable and of low computational cost, that provide a solution to these problems. We describe a digital watermarking algorithm for color image protection and authenticity: robust, non-blind, and wavelet-based. The use of the Discrete Wavelet Transform is motivated by its good time-frequency features and good match with Human Visual System directives. These two combined elements are important for building an invisible and robust watermark. Moreover, our algorithm can work with any image, thanks to a pre-processing step that includes resizing techniques which adapt the original image's size to the wavelet transform. The watermark signal is calculated in correlation with the image features and statistical properties. In the detection step we apply a re-synchronization between the original and watermarked images according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the method to be resistant against geometric, filtering, and StirMark attacks with a low false-alarm rate.
A Double-function Digital Watermarking Algorithm Based on Chaotic System and LWT
NASA Astrophysics Data System (ADS)
Yuxia, Zhao; Jingbo, Fan
A double-function digital watermarking technology is studied and a double-function digital watermarking algorithm for color images is presented based on a chaotic system and the lifting wavelet transform (LWT). The algorithm realizes the double aims of copyright protection and integrity authentication of image content. Making use of features of the human visual system (HVS), the watermark image is embedded into the color image's low-frequency component and middle-frequency components by different means. The algorithm achieves strong security by using two kinds of chaotic mappings and the Arnold transform to scramble the watermark image at the same time. The algorithm is efficient owing to the use of the LWT. The simulation experiments indicate that the algorithm has high efficiency and security, and that the concealment effect is good.
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.
2001-12-01
A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce significant blocking effects at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and compression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this, the third paper in the series, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST. Performance analysis criteria include time and space complexity and quality of the decompressed image. The latter is determined by rate-distortion data obtained from a database of realistic test images. Discussion also includes issues such as robustness of the compressed format to channel noise. EBLAST has been shown to perform superiorly to JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.
Video multiple watermarking technique based on image interlacing using DWT.
Ibrahim, Mohamed M; Abdel Kader, Neamat S; Zorkany, M
2014-01-01
Digital watermarking is one of the important techniques for securing digital media files in the domains of data authentication and copyright protection. In non-blind watermarking systems, the need for the original host file in the watermark recovery operation imposes an overhead on system resources, doubling the required memory capacity and communications bandwidth. In this paper, a robust video multiple-watermarking technique is proposed to solve this problem. This technique is based on image interlacing. In this technique, a three-level discrete wavelet transform (DWT) is used as the watermark embedding/extracting domain, the Arnold transform is used as the watermark encryption/decryption method, and different types of media (gray image, color image, and video) are used as watermarks. The robustness of this technique is tested by applying different types of attacks, such as geometric, noising, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth.
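The three-level DWT embedding and the interlacing scheme are specific to the paper, but the Arnold transform it uses for watermark encryption/decryption is a standard map and can be sketched directly; the iteration count acts as a simple key.

import numpy as np

def arnold_scramble(img, iterations=10):
    n = img.shape[0]                    # assumes a square n x n watermark image
    r, c = np.indices((n, n))
    out = img.copy()
    for _ in range(iterations):
        out = out[(r + c) % n, (r + 2 * c) % n]      # apply the cat map once
    return out

def arnold_unscramble(img, iterations=10):
    n = img.shape[0]
    r, c = np.indices((n, n))
    out = img.copy()
    for _ in range(iterations):
        out = out[(2 * r - c) % n, (c - r) % n]      # apply the inverse map once
    return out

The scrambled watermark is what gets embedded in the DWT coefficients, so that even if the coefficients are recovered by an attacker the watermark remains unintelligible without the iteration count.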
Digital holographic image fusion for a larger size object using compressive sensing
NASA Astrophysics Data System (ADS)
Tian, Qiuhong; Yan, Liping; Chen, Benyong; Yao, Jiabao; Zhang, Shihua
2017-05-01
Digital holographic image fusion for a larger size object using compressive sensing is proposed. In this method, the high-frequency component of the digital hologram under the discrete wavelet transform is represented sparsely by using compressive sensing so that the data redundancy of digital holographic recording can be resolved effectively, the low-frequency component is retained in full to ensure the image quality, and multiple reconstructed images with different clear parts, each corresponding to a laser spot size, are fused to realize a high-quality reconstructed image of a larger size object. In addition, a filter combining high-pass and low-pass filters is designed to remove the zero-order term from the digital hologram effectively. A digital holographic experimental setup based on off-axis Fresnel digital holography was constructed, and feasibility and comparative experiments were carried out. The fused image was evaluated using the Tamura texture features. The experimental results demonstrate that the proposed method can improve the processing efficiency and visual characteristics of the fused image and effectively enlarge the size of the measured object.
ERIC Educational Resources Information Center
Greenberg, Richard
1998-01-01
Describes the Image Processing for Teaching (IPT) project which provides digital image processing to excite students about science and mathematics as they use research-quality software on microcomputers. Provides information on IPT whose components of this dissemination project have been widespread teacher education, curriculum-based materials…
Digital differential confocal microscopy based on spatial shift transformation.
Liu, J; Wang, Y; Liu, C; Wilson, T; Wang, H; Tan, J
2014-11-01
Differential confocal microscopy is a particularly powerful surface profilometry technique in industrial metrology due to its high axial sensitivity and insensitivity to noise. However, the practical implementation of the technique requires the accurate positioning of point detectors in three dimensions. We describe a simple alternative based on spatial transformation of a through-focus series of images obtained from a homemade beam scanning confocal microscope. This digital differential confocal microscopy approach is described and compared with the traditional differential confocal microscopy approach. The ease of use of the digital differential confocal microscopy system is illustrated by performing measurements on a 3D standard specimen. © 2014 The Authors Journal of Microscopy © 2014 Royal Microscopical Society.
The precision-processing subsystem for the Earth Resources Technology Satellite.
NASA Technical Reports Server (NTRS)
Chapelle, W. E.; Bybee, J. E.; Bedross, G. M.
1972-01-01
Description of the precision processor, a subsystem in the image-processing system for the Earth Resources Technology Satellite (ERTS). This processor is a special-purpose image-measurement and printing system, designed to process user-selected bulk images to produce 1:1,000,000-scale film outputs and digital image data, presented in a Universal-Transverse-Mercator (UTM) projection. The system will remove geometric and radiometric errors introduced by the ERTS multispectral sensors and by the bulk-processor electron-beam recorder. The geometric transformations required for each input scene are determined by resection computations based on reseau measurements and image comparisons with a special ground-control base contained within the system; the images are then printed and digitized by electronic image-transfer techniques.
Bhattacharyya, Parthasarathi; Mondal, Ashok; Dey, Rana; Saha, Dipanjan; Saha, Goutam
2015-05-01
Auscultation is an important part of the clinical examination for different lung diseases. Objective analysis of lung sounds based on underlying characteristics, and its subsequent automatic interpretation, may help clinical practice. We collected breath sounds from 8 normal subjects and 20 diffuse parenchymal lung disease (DPLD) patients using a newly developed instrument and then filtered off the heart sounds using a novel technology. The collected sounds were thereafter analysed digitally with respect to several characteristics, such as dynamical complexity, texture information and regularity index, to find and define unique digital signatures differentiating normality from abnormality. For convenience of testing, these characteristic signatures of normal and DPLD lung sounds were transformed into coloured visual representations. The predictive power of these images has been validated by six independent observers, including three physicians. The proposed method gives a classification accuracy of 100% for composite features, for both normal lung sound signals and those from DPLD patients. When tested by independent observers on the visually transformed images, the positive predictive value for diagnosing normality and DPLD remained 100%. The lung sounds from the normal and DPLD subjects could be differentiated and expressed according to their digital signatures. On visual transformation to coloured images, they retain 100% predictive power. This technique may assist physicians in diagnosing DPLD from visual images bearing the digital signature of the condition. © 2015 Asian Pacific Society of Respirology.
Chen, Chenglong; Ni, Jiangqun; Shen, Zhaoyi; Shi, Yun Qing
2017-06-01
Geometric transformations, such as resizing and rotation, are almost always needed when two or more images are spliced together to create convincing image forgeries. In recent years, researchers have developed many digital forensic techniques to identify these operations. Most previous works in this area focus on the analysis of images that have undergone single geometric transformations, e.g., resizing or rotation. In several recent works, researchers have addressed yet another practical and realistic situation: successive geometric transformations, e.g., repeated resizing, resizing-rotation, rotation-resizing, and repeated rotation. We will also concentrate on this topic in this paper. Specifically, we present an in-depth analysis in the frequency domain of the second-order statistics of the geometrically transformed images. We give an exact formulation of how the parameters of the first and second geometric transformations influence the appearance of periodic artifacts. The expected positions of characteristic resampling peaks are analytically derived. The theory developed here helps to address the gap left by previous works on this topic and is useful for image security and authentication, in particular, the forensics of geometric transformations in digital images. As an application of the developed theory, we present an effective method that allows one to distinguish between the aforementioned four different processing chains. The proposed method can further estimate all the geometric transformation parameters. This may provide useful clues for image forgery detection.
Secure Image Transmission over DFT-precoded OFDM-VLC systems based on Chebyshev Chaos scrambling
NASA Astrophysics Data System (ADS)
Wang, Zhongpeng; Qiu, Weiwei
2017-08-01
This paper proposes a physical-layer secure image transmission scheme for discrete Fourier transform (DFT) precoded OFDM-based visible light communication systems using Chebyshev chaos maps. In the proposed scheme, 256 subcarriers and QPSK modulation are employed. The transmitted digital signal of the image is encrypted with a Chebyshev chaos sequence. The encrypted signal is then transformed by a DFT precoding matrix to reduce the PAPR of the OFDM signal. After that, the encrypted and DFT-precoded OFDM signal is transmitted over a VLC channel. The simulation results show that the proposed image security transmission scheme can not only protect the DFT-precoded OFDM-based VLC from eavesdroppers but also improve BER performance.
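The DFT precoding and OFDM modulation stages are beyond a short snippet, but the Chebyshev chaotic scrambling step can be illustrated. The map order k, the initial value x0 (together acting as the key), and the XOR-on-bytes scrambling rule below are assumptions, not necessarily the paper's exact construction.

import numpy as np

def chebyshev_sequence(length, k=4, x0=0.3):
    # Chebyshev map: x_{n+1} = cos(k * arccos(x_n)), with x in [-1, 1].
    seq = np.empty(length)
    x = x0
    for i in range(length):
        x = np.cos(k * np.arccos(x))
        seq[i] = x
    return seq

def chaos_scramble(data_bytes, k=4, x0=0.3):
    # Quantize the chaotic sequence to a byte keystream and XOR it with the data;
    # descrambling is the same operation with the same (k, x0) key.
    keystream = np.floor((chebyshev_sequence(len(data_bytes), k, x0) + 1.0) * 127.5).astype(np.uint8)
    return np.frombuffer(bytes(data_bytes), dtype=np.uint8) ^ keystream

In the transmission chain described above, the scrambled symbols would then pass through the DFT precoder and the OFDM modulator before reaching the VLC channel.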
Consequences of "going digital" for pathology professionals - entering the cloud.
Laurinavicius, Arvydas; Raslavicus, Paul
2012-01-01
New opportunities and the adoption of digital technologies will transform the way pathology professionals and services work. Many areas of our daily life, as well as other medical professions, have already experienced this change, which has resulted in a paradigm shift in many activities. Pathology is an image-based discipline; therefore, the arrival of digital imaging in this domain promises a major shift in our work and in the mentality it requires. Recognizing the physical and digital duality of the pathology workflow, we can prepare for the imminent increase of the digital component, synergize, and enjoy its benefits. The development of a new generation of laboratory information systems, along with seamless integration of digital imaging, decision support, and knowledge databases, will enable pathologists to work in a distributed environment. The paradigm of "cloud pathology" is proposed as an ultimate vision of digital pathology workstations plugged into integrated multidisciplinary patient care systems.
Improved grid-noise removal in single-frame digital moiré 3D shape measurement
NASA Astrophysics Data System (ADS)
Mohammadi, Fatemeh; Kofman, Jonathan
2016-11-01
A single-frame grid-noise removal technique was developed for application in single-frame digital-moiré 3D shape measurement. The ability of the stationary wavelet transform (SWT) to prevent oscillation artifacts near discontinuities, and the ability of the fast Fourier transform (FFT) applied to wavelet coefficients to separate grid-noise from useful image information, were combined in a new technique, SWT-FFT, to remove grid-noise from moiré-pattern images generated by digital moiré. In comparison to previous grid-noise removal techniques in moiré, SWT-FFT avoids the requirement for mechanical translation of optical components and capture of multiple frames, enabling single-frame moiré-based measurement. Experiments using FFT, discrete wavelet transform (DWT), DWT-FFT, and SWT-FFT were performed on moiré-pattern images containing grid noise, generated by digital moiré, for several test objects. SWT-FFT had the best performance in removing high-frequency grid-noise, both straight and curved lines, while minimizing artifacts and preserving the moiré pattern without blurring or degradation. SWT-FFT also had the lowest noise amplitude in the reconstructed height and the lowest roughness index for all test objects, indicating the best grid-noise removal in comparison to the other techniques.
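A minimal sketch of the frequency-domain step only: suppressing periodic grid noise by zeroing isolated spectral peaks. The full SWT-FFT method applies this kind of filtering to stationary-wavelet detail coefficients; the peak-detection rule and its parameters below are illustrative assumptions.

```python
import numpy as np

def suppress_grid_noise(img, peak_factor=8.0, guard=4):
    """Zero isolated spectral peaks that periodic grid noise produces."""
    spec = np.fft.fftshift(np.fft.fft2(img.astype(np.float64)))
    mag = np.abs(spec)
    cy, cx = np.array(mag.shape) // 2
    median = np.median(mag)
    mask = mag > peak_factor * median                # candidate grid-noise peaks
    mask[cy - guard:cy + guard, cx - guard:cx + guard] = False  # keep DC / low freq.
    spec[mask] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec)))
```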
Evidence of tampering in watermark identification
NASA Astrophysics Data System (ADS)
McLauchlan, Lifford; Mehrübeoglu, Mehrübe
2009-08-01
In this work, watermarks are embedded in digital images in the discrete wavelet transform (DWT) domain. Principal component analysis (PCA) is performed on the DWT coefficients. Next, higher-order statistics based on the principal components and the eigenvalues are determined for different sets of images. Feature sets are analyzed for different types of attacks in m-dimensional space. The results demonstrate the separability of the features for the tampered digital copies. Different feature sets are studied to determine more effective tamper-evident feature sets. In digital forensics, the probable manipulation(s) or modification(s) performed on the digital information can be identified using the described technique.
Phase in Optical Image Processing
NASA Astrophysics Data System (ADS)
Naughton, Thomas J.
2010-04-01
The use of phase has a long-standing history in optical image processing, with early milestones being in the field of pattern recognition, such as VanderLugt's practical construction technique for matched filters, and (implicitly) Goodman's joint Fourier transform correlator. In recent years, the flexibility afforded by phase-only spatial light modulators and digital holography, for example, has enabled many processing techniques based on the explicit encoding and decoding of phase. One application area concerns efficient numerical computations. Pushing phase measurement to its physical limits, designs employing the physical properties of phase have ranged from the sensible to the wonderful, in some cases making computationally easy problems easier to solve and in other cases addressing mathematics' most challenging computationally hard problems. Another application area is optical image encryption, in which, typically, a phase mask modulates the fractional Fourier transformed coefficients of a perturbed input image, and the phase of the inverse transform is then sensed as the encrypted image. The inherent linearity that makes the system so elegant militates against its use as an effective encryption technique, but we show how a combination of optical and digital techniques can restore confidence in that security. We conclude with the concept of digital hologram image processing, and applications of same that are uniquely suited to optical implementation, where the processing, recognition, or encryption step operates on full field information, such as that emanating from a coherently illuminated real-world three-dimensional object.
Bautista, Pinky A; Yagi, Yukako
2012-05-01
Hematoxylin and eosin (H&E) stain is currently the most popular for routine histopathology staining. Special and/or immuno-histochemical (IHC) staining is often requested to further corroborate the initial diagnosis on H&E stained tissue sections. Digital simulation of staining (or digital staining) can be a very valuable tool to produce the desired stained images from the H&E stained tissue sections instantaneously. We present an approach to digital staining of histopathology multispectral images by combining the effects of spectral enhancement and spectral transformation. Spectral enhancement is accomplished by shifting the N-band original spectrum of the multispectral pixel with the weighted difference between the pixel's original and estimated spectrum; the spectrum is estimated using M < N principal component (PC) vectors. The pixel's enhanced spectrum is transformed to the spectral configuration associated to its reaction to a specific stain by utilizing an N × N transformation matrix, which is derived through application of least mean squares method to the enhanced and target spectral transmittance samples of the different tissue components found in the image. Results of our experiments on the digital conversion of an H&E stained multispectral image to its Masson's trichrome stained equivalent show the viability of the method.
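A minimal sketch of the two stages described above: principal-component-based spectral enhancement followed by a least-squares N x N spectral transformation. Array shapes, the weight w, and the number of PCs are illustrative assumptions.

```python
import numpy as np

def enhance_spectra(T, background, m=3, w=0.5):
    """T: (pixels, N) transmittance spectra; background: samples used to fit the PCs."""
    mean = background.mean(axis=0)
    _, _, Vt = np.linalg.svd(background - mean, full_matrices=False)
    V = Vt[:m].T                               # N x m dominant PC vectors
    estimate = mean + (T - mean) @ V @ V.T     # spectrum estimated from m PCs
    return T + w * (T - estimate)              # shift by the weighted residual

def fit_stain_transform(enhanced_train, target_train):
    """Least-squares N x N matrix mapping enhanced spectra to target-stain spectra."""
    A, *_ = np.linalg.lstsq(enhanced_train, target_train, rcond=None)
    return A                                   # apply as: enhanced @ A
```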
NASA Astrophysics Data System (ADS)
Bekkouche, Toufik; Bouguezel, Saad
2018-03-01
We propose a real-to-real image encryption method. It is a double random amplitude encryption method based on the parametric discrete Fourier transform coupled with chaotic maps to perform the scrambling. The main idea behind this method is the introduction of a complex-to-real conversion by exploiting the inherent symmetry property of the transform in the case of real-valued sequences. This conversion allows the encrypted image to be real-valued instead of being a complex-valued image as in all existing double random phase encryption methods. The advantage is to store or transmit only one image instead of two images (real and imaginary parts). Computer simulation results and comparisons with the existing double random amplitude encryption methods are provided for peak signal-to-noise ratio, correlation coefficient, histogram analysis, and key sensitivity.
High-speed real-time image compression based on all-optical discrete cosine transformation
NASA Astrophysics Data System (ADS)
Guo, Qiang; Chen, Hongwei; Wang, Yuxi; Chen, Minghua; Yang, Sigang; Xie, Shizhong
2017-02-01
In this paper, we present a high-speed single-pixel imaging (SPI) system based on all-optical discrete cosine transform (DCT) and demonstrate its capability to enable noninvasive imaging of flowing cells in a microfluidic channel. Through spectral shaping based on photonic time stretch (PTS) and wavelength-to-space conversion, structured illumination patterns are generated at a rate (tens of MHz) that is three orders of magnitude higher than the switching rate of the digital micromirror device (DMD) used in a conventional single-pixel camera. Using this pattern projector, high-speed image compression based on the DCT can be achieved in the optical domain. In our proposed system, a high compression ratio (approximately 10:1) and a fast image reconstruction procedure are both achieved, which suggests broad applications in industrial quality control and biomedical imaging.
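A minimal sketch of the digital analogue of the compression step: a 2-D DCT of the image followed by keeping only the largest coefficients (roughly the 10:1 ratio quoted above) before inverse transformation. The optical system performs the DCT projection in hardware; this is only the numerical counterpart, and the keep-ratio is an assumption.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_compress(img, keep_ratio=0.1):
    """Keep the largest `keep_ratio` fraction of DCT coefficients, zero the rest."""
    coeffs = dctn(img.astype(np.float64), norm='ortho')
    flat = np.abs(coeffs).ravel()
    threshold = np.sort(flat)[int((1.0 - keep_ratio) * flat.size)]
    compressed = np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)
    return idctn(compressed, norm='ortho')
```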
Zhang, Wuhong; Chen, Lixiang
2016-06-15
Digital spiral imaging has been demonstrated as an effective optical tool to encode optical information and retrieve topographic information of an object. Here we develop a conceptually new and concise scheme for optical image encoding and decoding toward free-space digital spiral imaging. We experimentally demonstrate that optical lattices with ℓ=±50 orbital angular momentum superpositions and a clover image with nearly 200 Laguerre-Gaussian (LG) modes can be well encoded and successfully decoded. It is found that an image encoded/decoded with a two-index LG spectrum (considering both the azimuthal and radial indices, ℓ and p) possesses much higher fidelity than one with a one-index LG spectrum (considering only the ℓ index). Our work provides an alternative image encoding/decoding scheme for free-space optical communications.
The Artist, the Color Copier, and Digital Imaging.
ERIC Educational Resources Information Center
Witte, Mary Stieglitz
The impact that color-copying technology and digital imaging have had on art, photography, and design is explored. Color copiers have provided new opportunities for direct and spontaneous image making and the potential for new transformations in art. The current generation of digital color copiers permits new directions in imaging, but the…
Wavelength-encoded tomography based on optical temporal Fourier transform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Chi; Wong, Kenneth K. Y., E-mail: kywong@eee.hku.hk
We propose and demonstrate a technique called wavelength-encoded tomography (WET) for non-invasive optical cross-sectional imaging, particularly beneficial in biological systems. WET utilizes a time lens to perform the optical Fourier transform, and the time-to-wavelength conversion generates a wavelength-encoded image of optical scattering from internal microstructures, analogous to interferometry-based imaging such as optical coherence tomography. The optical Fourier transform, in principle, provides twice the axial resolution of the electrical Fourier transform and greatly simplifies the digital signal processing after data acquisition. As a proof-of-principle demonstration, a 150-μm (ideally 36 μm) resolution is achieved based on a 7.5-nm bandwidth swept pump, using a conventional optical spectrum analyzer. This approach can potentially achieve a 100-MHz or even higher frame rate with proven ultrafast spectrum analyzers. We believe that this technique is innovative towards next-generation ultrafast optical tomographic imaging applications.
Image authentication by means of fragile CGH watermarking
NASA Astrophysics Data System (ADS)
Schirripa Spagnolo, Giuseppe; Simonetti, Carla; Cozzella, Lorenzo
2005-09-01
In this paper we propose a fragile marking system based on Computer Generated Hologram coding techniques, which is able to detect malicious tampering while tolerating some incidental distortions. A fragile watermark is a mark that is readily altered or destroyed when the host image is modified through a linear or nonlinear transformation. A fragile watermark monitors the integrity of the content of the image but not its numerical representation. Therefore the watermark is designed so that the integrity is proven if the content of the image has not been tampered with. Since digital images can be altered or manipulated with ease, the ability to detect changes to digital images is very important for many applications such as news reporting, medical archiving, or legal usage. The proposed technique can be applied to color images as well as to gray-scale ones. Using Computer Generated Hologram watermarking, the embedded mark can be easily recovered by means of a Fourier transform. Due to this fact, the host image could be tampered with and watermarked again with the same holographic pattern. To avoid this possibility we have introduced an encryption method using asymmetric cryptography. The proposed schema is based on the knowledge of the original mark from the Authentication
Exploring of PST-TBPM in Monitoring Bridge Dynamic Deflection in Vibration
NASA Astrophysics Data System (ADS)
Zhang, Guojian; Liu, Shengzhen; Zhao, Tonglong; Yu, Chengxin
2018-01-01
This study adopts digital photography to monitor bridge dynamic deflection in vibration. The digital photography used in this study is based on PST-TBPM (photographing scale transformation-time baseline parallax method). Firstly, a digital camera is used to image the bridge in a static state as a zero image. Then, the digital camera is used to image the bridge in vibration every three seconds as the successive images. Based on the reference system, PST-TBPM is used to process the images to obtain the bridge dynamic deflection in vibration. Results show that the average measurement accuracies are 0.615 pixels and 0.79 pixels in the X and Z directions. The maximal deflection of the bridge is 7.14 pixels. PST-TBPM is valid in solving the problem that the photographing direction is not perpendicular to the bridge. The digital photography used in this study can assess bridge health through monitoring the bridge dynamic deflection in vibration. The deformation trend curves depicted over time can also warn of possible dangers.
Spatial-Heterodyne Interferometry for Reflection and Transmission (SHIRT) Measurements
Hanson, Gregory R [Clinton, TN; Bingham, Philip R [Knoxville, TN; Tobin, Ken W [Harriman, TN
2006-02-14
Systems and methods are described for spatial-heterodyne interferometry for reflection and transmission (SHIRT) measurements. A method includes digitally recording a first spatially-heterodyned hologram using a first reference beam and a first object beam; digitally recording a second spatially-heterodyned hologram using a second reference beam and a second object beam; Fourier analyzing the digitally recorded first spatially-heterodyned hologram to define a first analyzed image; Fourier analyzing the digitally recorded second spatially-heterodyned hologram to define a second analyzed image; digitally filtering the first analyzed image to define a first result; and digitally filtering the second analyzed image to define a second result; performing a first inverse Fourier transform on the first result, and performing a second inverse Fourier transform on the second result. The first object beam is transmitted through an object that is at least partially translucent, and the second object beam is reflected from the object.
FPGA design of correlation-based pattern recognition
NASA Astrophysics Data System (ADS)
Jridi, Maher; Alfalou, Ayman
2017-05-01
Optical/digital pattern recognition and tracking based on optical/digital correlation are well-known techniques to detect, identify, and localize a target object in a scene. Despite the limited number of operations required by the correlation scheme, computational time and resources are relatively high. The most computationally intensive operation required by the correlation is the transformation from the spatial to the spectral domain and then from the spectral back to the spatial domain. Furthermore, these transformations are used in optical/digital encryption schemes such as double random phase encryption (DRPE). In this paper, we present a VLSI architecture for the correlation scheme based on the fast Fourier transform (FFT). One interesting feature of the proposed scheme is its ability to stream image processing in order to perform correlation for video sequences. A trade-off between hardware consumption and the robustness of the correlation can be made in order to understand the limitations of the correlation implementation in reconfigurable and portable platforms. Experimental results obtained from HDL simulations and an FPGA prototype have demonstrated the advantages of the proposed scheme.
Applications of digital image processing techniques to problems of data registration and correlation
NASA Technical Reports Server (NTRS)
Green, W. B.
1978-01-01
An overview is presented of the evolution of the computer configuration at JPL's Image Processing Laboratory (IPL). The development of techniques for the geometric transformation of digital imagery is discussed and consideration is given to automated and semiautomated image registration, and the registration of imaging and nonimaging data. The increasing complexity of image processing tasks at IPL is illustrated with examples of various applications from the planetary program and earth resources activities. It is noted that the registration of existing geocoded data bases with Landsat imagery will continue to be important if the Landsat data is to be of genuine use to the user community.
Robust watermark technique using masking and Hermite transform.
Coronel, Sandra L Gomez; Ramírez, Boris Escalante; Mosqueda, Marco A Acevedo
2016-01-01
This paper evaluates a watermarking algorithm designed for digital images that uses a perceptive mask and a normalization process, thus preventing detection by the human eye as well as ensuring robustness against common processing and geometric attacks. The Hermite transform is employed because it allows a perfect reconstruction of the image while incorporating human visual system properties; moreover, it is based on the derivatives of Gaussian functions. The embedded watermark represents information about the proprietor of the digital image. The extraction process is blind, because it does not require the original image. The following techniques were utilized in the evaluation of the algorithm: peak signal-to-noise ratio, the structural similarity index average, the normalized cross-correlation, and bit error rate. Several watermark extraction tests were performed against geometric and common processing attacks. This allowed us to identify how many bits of the watermark can be modified while still permitting adequate extraction.
Computer applications in diagnostic imaging.
Horii, S C
1991-03-01
This article has introduced the nature, generation, use, and future of digital imaging. As digital technology has transformed other aspects of our lives--has the reader tried to buy a conventional record album recently? Almost all music store stock is now compact disks--it is sure to continue to transform medicine as well. Whether that transformation will be to our liking as physicians or a source of frustration and disappointment depends on understanding the issues involved.
An Automatic Procedure for Combining Digital Images and Laser Scanner Data
NASA Astrophysics Data System (ADS)
Moussa, W.; Abdel-Wahab, M.; Fritsch, D.
2012-07-01
Besides improving both the geometry and the visual quality of the model, the integration of close-range photogrammetry and terrestrial laser scanning techniques aims at filling gaps in laser scanner point clouds to avoid modeling errors, reconstructing more details in higher resolution and recovering simple structures with less geometric detail. Thus, within this paper a flexible approach for the automatic combination of digital images and laser scanner data is presented. Our approach comprises two methods for data fusion. The first method starts with a marker-free registration of digital images based on a point-based environment model (PEM) of a scene, which stores the 3D laser scanner point clouds associated with intensity and RGB values. The PEM allows the extraction of accurate control information for the direct computation of absolute camera orientations with redundant information by means of accurate space resection methods. In order to use the computed relations between the digital images and the laser scanner data, an extended Helmert (seven-parameter) transformation is introduced and its parameters are estimated. Prior to that, in the second method, the local relative orientation parameters of the camera images are calculated by means of an optimized Structure and Motion (SaM) reconstruction method. Then, using the determined transformation parameters results in absolutely oriented images in relation to the laser scanner data. With the resulting absolute orientations we have employed robust dense image reconstruction algorithms to create oriented dense image point clouds, which are automatically combined with the laser scanner data to form a complete detailed representation of a scene. Examples of different data sets are shown and experimental results demonstrate the effectiveness of the presented procedures.
[Development of a digital chest phantom for studies on energy subtraction techniques].
Hayashi, Norio; Taniguchi, Anna; Noto, Kimiya; Shimosegawa, Masayuki; Ogura, Toshihiro; Doi, Kunio
2014-03-01
Digital chest phantoms continue to play a significant role in optimizing imaging parameters for chest X-ray examinations. The purpose of this study was to develop a digital chest phantom for studies on energy subtraction techniques under ideal conditions without image noise. Computed tomography (CT) images from the LIDC (Lung Image Database Consortium) were employed to develop a digital chest phantom. The method consisted of the following four steps: 1) segmentation of the lung and bone regions on CT images; 2) creation of simulated nodules; 3) transformation to attenuation coefficient maps from the segmented images; and 4) projection from attenuation coefficient maps. To evaluate the usefulness of digital chest phantoms, we determined the contrast of the simulated nodules in projection images of the digital chest phantom using high and low X-ray energies, soft tissue images obtained by energy subtraction, and "gold standard" images of the soft tissues. Using our method, the lung and bone regions were segmented on the original CT images. The contrast of simulated nodules in soft tissue images obtained by energy subtraction closely matched that obtained using the gold standard images. We thus conclude that it is possible to carry out simulation studies based on energy subtraction techniques using the created digital chest phantoms. Our method is potentially useful for performing simulation studies for optimizing the imaging parameters in chest X-ray examinations.
Focus measure method based on the modulus of the gradient of the color planes for digital microscopy
NASA Astrophysics Data System (ADS)
Hurtado-Pérez, Román; Toxqui-Quitl, Carina; Padilla-Vivanco, Alfonso; Aguilar-Valdez, J. Félix; Ortega-Mendoza, Gabriel
2018-02-01
The modulus of the gradient of the color planes (MGC) is implemented to transform multichannel information to a grayscale image. This digital technique is used in two applications: (a) focus measurements during autofocusing (AF) process and (b) extending the depth of field (EDoF) by means of multifocus image fusion. In the first case, the MGC procedure is based on an edge detection technique and is implemented in over 15 focus metrics that are typically handled in digital microscopy. The MGC approach is tested on color images of histological sections for the selection of in-focus images. An appealing attribute of all the AF metrics working in the MGC space is their monotonic behavior even up to a magnification of 100×. An advantage of the MGC method is its computational simplicity and inherent parallelism. In the second application, a multifocus image fusion algorithm based on the MGC approach has been implemented on graphics processing units (GPUs). The resulting fused images are evaluated using a nonreference image quality metric. The proposed fusion method reveals a high-quality image independently of faulty illumination during the image acquisition. Finally, the three-dimensional visualization of the in-focus image is shown.
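A minimal sketch of the MGC idea: fuse the colour planes through the modulus of their gradients, then reduce the result to a scalar focus score. The reduction (summing the MGC image) is an assumption; the paper plugs the MGC image into many different focus metrics.

```python
import numpy as np

def mgc_image(rgb):
    """rgb: (H, W, 3) array -> grayscale MGC image."""
    acc = np.zeros(rgb.shape[:2])
    for c in range(rgb.shape[2]):
        gy, gx = np.gradient(rgb[..., c].astype(np.float64))
        acc += gx ** 2 + gy ** 2
    return np.sqrt(acc)

def focus_score(rgb):
    # Larger when edges are sharper, i.e. when the slice is in focus.
    return mgc_image(rgb).sum()
```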
The impact of digital imaging in the field of cytopathology.
Pantanowitz, Liron; Hornish, Maryanne; Goulart, Robert A
2009-03-06
With the introduction of digital imaging, pathology is undergoing a digital transformation. In the field of cytology, digital images are being used for telecytology, automated screening of Pap test slides, training and education (e.g. online digital atlases), and proficiency testing. To date, there has been no systematic review on the impact of digital imaging on the practice of cytopathology. This article critically addresses the emerging role of computer-assisted screening and the application of digital imaging to the field of cytology, including telecytology, virtual microscopy, and the impact of online cytology resources. The role of novel diagnostic techniques like image cytometry is also reviewed.
Makowski, Piotr L; Zaperty, Weronika; Kozacki, Tomasz
2018-01-01
A new framework for in-plane transformations of digital holograms (DHs) is proposed, which provides improved control over basic geometrical features of holographic images reconstructed optically in full color. The method is based on a Fourier hologram equivalent of the adaptive affine transformation technique [Opt. Express 18, 8806 (2010)]. The solution includes four elementary geometrical transformations that can be performed independently on a full-color 3D image reconstructed from an RGB hologram: (i) transverse magnification; (ii) axial translation with minimized distortion; (iii) transverse translation; and (iv) viewing angle rotation. The independent character of transformations (i) and (ii) constitutes the main result of the work and plays a double role: (1) it simplifies synchronization of color components of the RGB image in the presence of mismatch between capture and display parameters; (2) it provides improved control over position and size of the projected image, particularly the axial position, which opens new possibilities for efficient animation of holographic content. The approximate character of the operations (i) and (ii) is examined both analytically and experimentally using an RGB circular holographic display system. Additionally, a complex animation built from a single wide-aperture RGB Fourier hologram is presented to demonstrate full capabilities of the developed toolset.
Optical image encryption using multilevel Arnold transform and noninterferometric imaging
NASA Astrophysics Data System (ADS)
Chen, Wen; Chen, Xudong
2011-11-01
Information security has attracted much current attention due to the rapid development of modern technologies, such as computers and the internet. We propose a novel method for optical image encryption using a multilevel Arnold transform and rotatable-phase-mask noninterferometric imaging. An optical image encryption scheme is developed in the gyrator transform domain, and one phase-only mask (i.e., phase grating) is rotated and updated during image encryption. For the decryption, an iterative retrieval algorithm is proposed to extract high-quality plaintexts. Conventional encoding methods (such as digital holography) have been proven vulnerable to attacks, and the proposed optical encoding scheme can effectively eliminate this security deficiency and significantly enhance cryptosystem security. The proposed strategy based on the rotatable phase-only mask can provide a new alternative for data/image encryption in noninterferometric imaging.
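A minimal sketch of the Arnold scrambling step: the classic cat map applied k times to a square image. The iteration count and the plain 2x2 map are illustrative assumptions; the paper's multilevel variant and the optical gyrator-domain stages are not reproduced here.

```python
import numpy as np

def arnold_scramble(img, k=1):
    """Apply the Arnold cat map (x, y) -> (x + y, x + 2y) mod N, k times."""
    n = img.shape[0]                      # assumes a square N x N image
    out = img.copy()
    y, x = np.indices((n, n))
    for _ in range(k):
        x_new = (x + y) % n
        y_new = (x + 2 * y) % n
        scrambled = np.empty_like(out)
        scrambled[y_new, x_new] = out[y, x]
        out = scrambled
    return out
```

Because the map is a bijection on the pixel grid, descrambling simply applies the inverse map (or iterates the forward map until the image's Arnold period is reached).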
Digital tool for detecting diabetic retinopathy in retinography image using gabor transform
NASA Astrophysics Data System (ADS)
Morales, Y.; Nuñez, R.; Suarez, J.; Torres, C.
2017-01-01
Diabetic retinopathy is a chronic disease and the leading cause of blindness in the population. The fundamental problem is that diabetic retinopathy is usually asymptomatic in its early stage and becomes incurable in advanced stages, hence the importance of early detection. To detect diabetic retinopathy, the ophthalmologist examines the fundus by ophthalmoscopy and then sends the patient for a retinography. Sometimes these retinographies are not of good quality. This paper shows the implementation of a digital tool that helps the ophthalmologist provide a better diagnosis for patients suffering from diabetic retinopathy, indicating the type of retinopathy present and its degree of severity. The tool implements an algorithm in Matlab based on the Gabor transform and the application of digital filters to provide retinographies of better and higher quality. The performance of the algorithm has been compared with conventional methods, and the resulting filtered images show better contrast and higher quality.
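A minimal sketch of the Gabor-filtering step, written in Python rather than the paper's Matlab: build Gabor kernels with numpy and convolve them with the fundus image to emphasise oriented, vessel-like structures. Kernel parameters are illustrative assumptions, and a real detector would combine responses over more orientations and scales.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size=21, wavelength=8.0, theta=0.0, sigma=4.0, gamma=0.5):
    """Real (cosine) Gabor kernel with orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def gabor_response(gray, thetas=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Strongest oriented Gabor response per pixel."""
    responses = [fftconvolve(gray.astype(np.float64), gabor_kernel(theta=t), mode='same')
                 for t in thetas]
    return np.max(np.abs(responses), axis=0)
```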
ERIC Educational Resources Information Center
Niess, Margaret L.; Gillow-Wiles, Henry
2014-01-01
This qualitative cross-case study explores the influence of a designed learning trajectory on transforming teachers' technological pedagogical content knowledge (TPACK) for teaching with digital image and video technologies. The TPACK Learning Trajectory embeds tasks with specific instructional strategies within a social metacognitive…
An improved three-dimensional non-scanning laser imaging system based on digital micromirror device
NASA Astrophysics Data System (ADS)
Xia, Wenze; Han, Shaokun; Lei, Jieyu; Zhai, Yu; Timofeev, Alexander N.
2018-01-01
Nowadays, there are two main methods to realize three-dimensional non-scanning laser imaging detection: detection based on an APD (avalanche photodiode) and detection based on a streak tube. However, the APD-based method has disadvantages such as a small number of pixels, a large pixel interval, and complex supporting circuitry. The streak-tube-based method has disadvantages such as large volume, poor reliability, and high cost. To resolve these issues, this paper proposes an improved three-dimensional non-scanning laser imaging system based on a digital micromirror device. In this imaging system, accurate control of the laser beams and a compact imaging structure are realized by several quarter-wave plates and a polarizing beam splitter. Remapping fiber optics are used to sample the image plane of the receiving optical lens and transform the image into a line light source, which realizes the non-scanning imaging principle. The digital micromirror device is used to convert laser pulses from the temporal domain to the spatial domain. A CCD with high sensitivity is used to detect the final reflected laser pulses. We also present an algorithm used to simulate this improved laser imaging system. Finally, the simulated imaging experiment demonstrates that the improved system can realize three-dimensional non-scanning laser imaging detection.
Bautista, Pinky A; Yagi, Yukako
2011-01-01
In this paper we introduced a digital staining method for histopathology images captured with an n-band multispectral camera. The method consisted of two major processes: enhancement of the original spectral transmittance and transformation of the enhanced transmittance to its target spectral configuration. Enhancement is accomplished by shifting the original transmittance with the scaled difference between the original transmittance and the transmittance estimated with m dominant principal component (PC) vectors; the m PC vectors were determined from the transmittance samples of the background image. Transformation of the enhanced transmittance to the target spectral configuration was done using an n×n transformation matrix, which was derived by applying a least-squares method to the enhanced and target spectral training data samples of the different tissue components. Experimental results on the digital conversion of a hematoxylin and eosin (H&E) stained multispectral image to its Masson's trichrome (MT) stained equivalent show the viability of the method.
NASA Astrophysics Data System (ADS)
Su, Yonggang; Tang, Chen; Li, Biyuan; Lei, Zhenkun
2018-05-01
This paper presents a novel optical colour image watermarking scheme based on phase-truncated linear canonical transform (PT-LCT) and image decomposition (ID). In this proposed scheme, a PT-LCT-based asymmetric cryptography is designed to encode the colour watermark into a noise-like pattern, and an ID-based multilevel embedding method is constructed to embed the encoded colour watermark into a colour host image. The PT-LCT-based asymmetric cryptography, which can be optically implemented by double random phase encoding with a quadratic phase system, can provide a higher security to resist various common cryptographic attacks. And the ID-based multilevel embedding method, which can be digitally implemented by a computer, can make the information of the colour watermark disperse better in the colour host image. The proposed colour image watermarking scheme possesses high security and can achieve a higher robustness while preserving the watermark’s invisibility. The good performance of the proposed scheme has been demonstrated by extensive experiments and comparison with other relevant schemes.
A robust color image watermarking algorithm against rotation attacks
NASA Astrophysics Data System (ADS)
Han, Shao-cheng; Yang, Jin-feng; Wang, Rui; Jia, Gui-min
2018-01-01
A robust digital watermarking algorithm is proposed based on quaternion wavelet transform (QWT) and discrete cosine transform (DCT) for copyright protection of color images. The luminance component Y of a host color image in YIQ space is decomposed by QWT, and then the coefficients of four low-frequency subbands are transformed by DCT. An original binary watermark scrambled by Arnold map and iterated sine chaotic system is embedded into the mid-frequency DCT coefficients of the subbands. In order to improve the performance of the proposed algorithm against rotation attacks, a rotation detection scheme is implemented before watermark extracting. The experimental results demonstrate that the proposed watermarking scheme shows strong robustness not only against common image processing attacks but also against arbitrary rotation attacks.
[Glossary of terms used by radiologists in image processing].
Rolland, Y; Collorec, R; Bruno, A; Ramée, A; Morcet, N; Haigron, P
1995-01-01
We give the definition of 166 words used in image processing. Adaptivity, aliasing, analog-digital converter, analysis, approximation, arc, artifact, artificial intelligence, attribute, autocorrelation, bandwidth, boundary, brightness, calibration, class, classification, classify, centre, cluster, coding, color, compression, contrast, connectivity, convolution, correlation, data base, decision, decomposition, deconvolution, deduction, descriptor, detection, digitization, dilation, discontinuity, discretization, discrimination, disparity, display, distance, distortion, distribution, dynamic, edge, energy, enhancement, entropy, erosion, estimation, event, extrapolation, feature, file, filter, filter floaters, fitting, Fourier transform, frequency, fusion, fuzzy, Gaussian, gradient, graph, gray level, group, growing, histogram, Hough transform, Hounsfield, image, impulse response, inertia, intensity, interpolation, interpretation, invariance, isotropy, iterative, JPEG, knowledge base, label, Laplacian, learning, least squares, likelihood, matching, Markov field, mask, matching, mathematical morphology, merge (to), MIP, median, minimization, model, moiré, moment, MPEG, neural network, neuron, node, noise, norm, normal, operator, optical system, optimization, orthogonal, parametric, pattern recognition, periodicity, photometry, pixel, polygon, polynomial, prediction, pulsation, pyramidal, quantization, raster, reconstruction, recursive, region, rendering, representation space, resolution, restoration, robustness, ROC, thinning, transform, sampling, saturation, scene analysis, segmentation, separable function, sequential, smoothing, spline, split (to), shape, threshold, tree, signal, speckle, spectrum, spline, stationarity, statistical, stochastic, structuring element, support, syntactic, synthesis, texture, truncation, variance, vision, voxel, windowing.
NASA Astrophysics Data System (ADS)
Liu, Zhengjun; Chen, Hang; Blondel, Walter; Shen, Zhenmin; Liu, Shutian
2018-06-01
A novel image encryption method is proposed by using the expanded fractional Fourier transform, which is implemented with a pair of lenses. Here the centers of the two lenses are laterally separated in the plane transverse to the optical axis of the system. The encryption system is modeled with Fresnel diffraction and phase modulation for the calculation of information transmission. An iterative process with the transform unit is utilized for hiding the secret image. The structural parameters of the battery of lenses can be used as additional keys. The performance of the encryption method is analyzed theoretically and digitally. The results show that the security of this algorithm is markedly enhanced by the added keys.
Wavelet/scalar quantization compression standard for fingerprint images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, C.M.
1996-06-12
The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (the wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
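A minimal sketch of the wavelet/scalar-quantization idea: a wavelet decomposition whose subband coefficients are uniformly quantized and then reconstructed. The actual FBI WSQ standard uses a specific 64-subband filter bank, per-subband bin widths, and Huffman coding; the wavelet, level, and single step size below are illustrative assumptions only.

```python
import numpy as np
import pywt

def wsq_like_roundtrip(img, step=8.0, wavelet='bior4.4', level=4):
    """Uniform scalar quantization of wavelet subbands, then reconstruction."""
    coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=level)
    quantized = [np.round(coeffs[0] / step) * step]            # approximation band
    for (ch, cv, cd) in coeffs[1:]:                             # detail bands
        quantized.append(tuple(np.round(c / step) * step for c in (ch, cv, cd)))
    return pywt.waverec2(quantized, wavelet)
```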
NASA Astrophysics Data System (ADS)
Zhang, Xiaolei; Zhang, Xiangchao; Yuan, He; Zhang, Hao; Xu, Min
2018-02-01
Digital holography is a promising measurement method in the fields of biomedicine and microelectronics. However, the captured images in digital holography are severely polluted by speckle noise because of optical scattering and diffraction. By analyzing the properties of Fresnel diffraction and the topographies of micro-structures, a novel reconstruction method based on the dual-tree complex wavelet transform (DT-CWT) is proposed. This algorithm is shift-invariant and capable of obtaining sparse representations of the diffracted signals of salient features; thus it is well suited for multiresolution processing of the interferometric holograms of directional morphologies. An explicit representation of orthogonal Fresnel DT-CWT bases and a specific filtering method are developed. This method can effectively remove speckle noise without destroying the salient features. Finally, the proposed reconstruction method is compared with the conventional Fresnel diffraction integration and Fresnel wavelet transform with compressive sensing methods to validate its remarkable superiority in topography reconstruction and speckle removal.
Monitoring Bridge Dynamic Deformation in Vibration by Digital Photography
NASA Astrophysics Data System (ADS)
Yu, Chengxin; Zhang, Guojian; Liu, Xiaodong; Fan, Li; Hai, Hua
2018-01-01
This study adopts digital photography to monitor bridge dynamic deformation in vibration. Digital photography in this study is based on PST-TBPM (photographing scale transformation-time baseline parallax method). Firstly, we image the bridge in a static state as a zero image. Then, we continuously image the bridge in vibration as the successive images. Based on the reference points on each image, PST-TBPM is used to process the images to obtain the dynamic deformation values of the deformation points. Results show that the average measurement accuracies are 0.685 pixels (0.51 mm) and 0.635 pixels (0.47 mm) in the X and Z directions, respectively. The maximal deformations in the X and Z directions of the bridge are 4.53 pixels and 5.21 pixels, respectively. PST-TBPM is valid in solving the problem that the photographing direction is not perpendicular to the bridge. Digital photography in this study can be used to assess bridge health through monitoring the dynamic deformation of a bridge in vibration. The deformation trend curves can also warn of possible dangers over time.
The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, J.N.; Brislawn, C.M.; Hopper, T.
1993-05-01
The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.
Liu, Changgeng; Thapa, Damber; Yao, Xincheng
2017-01-01
Guidestar hologram based digital adaptive optics (DAO) is a recently emerging active imaging modality. It records each complex distorted line field reflected or scattered from the sample by an off-axis digital hologram, measures the optical aberration from a separate off-axis digital guidestar hologram, and removes the optical aberration from the distorted line fields by numerical processing. In previously demonstrated DAO systems, the optical aberration was directly retrieved from the guidestar hologram by taking its Fourier transform and extracting the phase term. For this direct retrieval method (DRM), when the sample is not coincident with the guidestar focal plane, the accuracy of the retrieved optical aberration undergoes a fast decay, leading to quality deterioration of the corrected images. To tackle this problem, we explore here an image-metrics-based iterative method (MIM) to retrieve the optical aberration from the guidestar hologram. Using an aberrated objective lens and scattering samples, we demonstrate that MIM can improve the accuracy of the retrieved aberrations from both focused and defocused guidestar holograms, compared to DRM, and thereby improve the robustness of the DAO. PMID:28380937
Digital cleaning and "dirt" layer visualization of an oil painting.
Palomero, Cherry May T; Soriano, Maricor N
2011-10-10
We demonstrate a new digital cleaning technique which uses a neural network that is trained to learn the transformation from dirty to clean segments of a painting image. The inputs and outputs of the network are pixels belonging to dirty and clean segments found in Fernando Amorsolo's Malacañang by the River. After digital cleaning we visualize the painting's discoloration by assuming it to be a transmission filter superimposed on the clean painting. Using an RGB color-to-spectrum transformation to obtain the point-per-point spectra of the clean and dirty painting images, we calculate this "dirt" filter and render it for the whole image.
NASA Technical Reports Server (NTRS)
Harwit, M.; Swift, R.; Wattson, R.; Decker, J.; Paganetti, R.
1976-01-01
A spectrometric imager and a thermal imager, which achieve multiplexing by the use of binary optical encoding masks, were developed. The masks are based on orthogonal, pseudorandom digital codes derived from Hadamard matrices. Spatial and/or spectral data is obtained in the form of a Hadamard transform of the spatial and/or spectral scene; computer algorithms are then used to decode the data and reconstruct images of the original scene. The hardware, algorithms and processing/display facility are described. A number of spatial and spatial/spectral images are presented. The achievement of a signal-to-noise improvement due to the signal multiplexing was also demonstrated. An analysis of the results indicates both the situations for which the multiplex advantage may be gained, and the limitations of the technique. A number of potential applications of the spectrometric imager are discussed.
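A minimal sketch of the linear-algebra core of Hadamard-multiplexed measurement and decoding: the scene is measured through rows of a Hadamard matrix (idealised +/-1 codes) and recovered by the inverse transform. Real instruments use 0/1 mask patterns derived from these codes; the scene vector here is a placeholder.

```python
import numpy as np
from scipy.linalg import hadamard

n = 64                                   # number of spatial/spectral elements
H = hadamard(n)                          # n x n, entries +1/-1, H @ H.T = n * I
scene = np.random.default_rng(0).random(n)

measurements = H @ scene                 # multiplexed detector readings
recovered = (H.T @ measurements) / n     # decoding (inverse Hadamard transform)

assert np.allclose(recovered, scene)
```

The multiplex (Fellgett) advantage comes from each detector reading combining about half of the scene elements, which raises the signal level relative to detector noise.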
Distributed Two-Dimensional Fourier Transforms on DSPs with an Application for Phase Retrieval
NASA Technical Reports Server (NTRS)
Smith, Jeffrey Scott
2006-01-01
Many applications of two-dimensional Fourier transforms require fixed timing as defined by system specifications. One example is image-based wavefront sensing. The image-based approach has many benefits, yet it is a computationally intensive solution for adaptive optic correction, where optical adjustments are made in real time to correct for external (atmospheric turbulence) and internal (stability) aberrations, which cause image degradation. For phase retrieval, a type of image-based wavefront sensing, numerous two-dimensional fast Fourier transforms (FFTs) are used. To meet the required real-time specifications, a distributed system is needed, and thus the 2-D FFT necessitates an all-to-all communication among the computational nodes. The 1-D floating point FFT is very efficient on a digital signal processor (DSP). For this study, several architectures, and analyses of them, are presented which address the all-to-all communication with DSPs. Emphasis of this research is on a 64-node cluster of Analog Devices TigerSharc TS-101 DSPs.
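A minimal sketch of the decomposition behind a distributed 2-D FFT: 1-D FFTs along rows, a transpose (the all-to-all exchange when row blocks live on different DSP nodes), then 1-D FFTs along the other dimension. The "nodes" here are simulated within a single process purely to show the data flow.

```python
import numpy as np

def distributed_fft2(image, nodes=4):
    """2-D FFT computed as row FFTs, a global transpose, then column FFTs."""
    rows = np.array_split(image, nodes, axis=0)           # each node owns a row block
    row_ffts = [np.fft.fft(block, axis=1) for block in rows]
    stage1 = np.vstack(row_ffts)
    transposed = stage1.T                                  # the all-to-all communication step
    cols = np.array_split(transposed, nodes, axis=0)
    col_ffts = [np.fft.fft(block, axis=1) for block in cols]
    return np.vstack(col_ffts).T

img = np.random.default_rng(1).random((128, 128))
assert np.allclose(distributed_fft2(img), np.fft.fft2(img))
```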
NASA Technical Reports Server (NTRS)
Juday, Richard D. (Editor)
1988-01-01
The present conference discusses topics in pattern-recognition correlator architectures, digital stereo systems, geometric image transformations and their applications, topics in pattern recognition, filter algorithms, object detection and classification, shape representation techniques, and model-based object recognition methods. Attention is given to edge-enhancement preprocessing using liquid crystal TVs, massively-parallel optical data base management, three-dimensional sensing with polar exponential sensor arrays, the optical processing of imaging spectrometer data, hybrid associative memories and metric data models, the representation of shape primitives in neural networks, and the Monte Carlo estimation of moment invariants for pattern recognition.
NASA Astrophysics Data System (ADS)
Yamada, Masayoshi; Fukuzawa, Masayuki; Kitsunezuka, Yoshiki; Kishida, Jun; Nakamori, Nobuyuki; Kanamori, Hitoshi; Sakurai, Takashi; Kodama, Souichi
1995-05-01
In order to detect pulsation from a series of noisy ultrasound-echo moving images of a newborn baby's head for pediatric diagnosis, a digital image processing system capable of recording at the video rate and processing the recorded series of images was constructed. The time-sequence variations of each pixel value in a series of moving images were analyzed and then an algorithm based on Fourier transform was developed for the pulsation detection, noting that the pulsation associated with blood flow was periodically changed by heartbeat. Pulsation detection for pediatric diagnosis was successfully made from a series of noisy ultrasound-echo moving images of newborn baby's head by using the image processing system and the pulsation detection algorithm developed here.
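A minimal sketch of the per-pixel temporal analysis: take the FFT of every pixel's time series across the recorded frames and map the spectral power near the expected heart-rate frequency. The frame rate and frequency band are illustrative assumptions, not the system's actual parameters.

```python
import numpy as np

def pulsation_map(frames, fps=30.0, band=(1.5, 3.0)):
    """frames: (T, H, W) stack of echo images; returns an (H, W) pulsation map."""
    stack = frames.astype(np.float64)
    stack -= stack.mean(axis=0)                        # remove the static component
    spectrum = np.abs(np.fft.rfft(stack, axis=0))
    freqs = np.fft.rfftfreq(stack.shape[0], d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])  # ~90-180 beats per minute
    return spectrum[in_band].sum(axis=0)
```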
Global detection of large lunar craters based on the CE-1 digital elevation model
NASA Astrophysics Data System (ADS)
Luo, Lei; Mu, Lingli; Wang, Xinyuan; Li, Chao; Ji, Wei; Zhao, Jinjin; Cai, Heng
2013-12-01
Craters, one of the most significant features of the lunar surface, have been widely researched because they offer us the relative age of the surface unit as well as crucial geological information. Research on crater detection algorithms (CDAs) for the Moon and other planetary bodies has concentrated on detecting craters from imagery data, but the computational cost of detecting large craters using images makes these CDAs impractical. This paper presents a new approach to crater detection that utilizes a digital elevation model instead of images; this enables fully automatic global detection of large craters. Craters were delineated by terrain attributes, thresholded maps of the terrain attributes were then used to transform the topographic data into a binary image, and finally craters were detected from the binary image by using the Hough transform. Using the proposed algorithm, we produced a catalog of all craters ⩾10 km in diameter on the lunar surface and analyzed their distribution and population characteristics.
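A minimal sketch of the final detection stage: threshold a terrain-attribute map derived from the DEM into a binary image, then search it for circular rims with a Hough circle transform (OpenCV here). The threshold value, radius range, and Hough parameters are illustrative assumptions; the paper's attribute maps and settings differ.

```python
import cv2
import numpy as np

def detect_craters(attribute_map, threshold, min_r=10, max_r=80):
    """Return detected circles as rows of (x, y, radius) in pixel units."""
    binary = (attribute_map > threshold).astype(np.uint8) * 255
    circles = cv2.HoughCircles(binary, cv2.HOUGH_GRADIENT, dp=1, minDist=2 * min_r,
                               param1=100, param2=20,
                               minRadius=min_r, maxRadius=max_r)
    return [] if circles is None else circles[0]
```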
An image adaptive, wavelet-based watermarking of digital images
NASA Astrophysics Data System (ADS)
Agreste, Santa; Andaloro, Guido; Prestipino, Daniela; Puccio, Luigia
2007-12-01
In digital management, multimedia content and data can easily be used in an illegal way--being copied, modified and distributed again. Copyright protection, intellectual and material rights protection for authors, owners, buyers, and distributors, and the authenticity of content are crucial factors in solving an urgent and real problem. In such a scenario, digital watermark techniques are emerging as a valid solution. In this paper, we describe an algorithm--called WM2.0--for an invisible watermark: private, strong, wavelet-based, and developed for digital image protection and authenticity. The use of the discrete wavelet transform (DWT) is motivated by its good time-frequency features and good match with human visual system directives. These two combined elements are important in building an invisible and robust watermark. WM2.0 works on a dual scheme: watermark embedding and watermark detection. The watermark is embedded into high-frequency DWT components of a specific sub-image, and it is calculated in correlation with the image features and statistical properties. Watermark detection applies a re-synchronization between the original and the watermarked image. The correlation between the watermarked DWT coefficients and the watermark signal is calculated according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the algorithm to be resistant against geometric, filtering, and StirMark attacks with a low rate of false alarm.
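A minimal sketch of the general idea only, not the WM2.0 algorithm itself: embed a pseudo-random watermark into high-frequency DWT coefficients and detect it by correlating the suspect coefficients with the watermark, comparing the statistic against a threshold in Neyman-Pearson fashion. The wavelet, strength, and threshold are illustrative assumptions.

```python
import numpy as np
import pywt

def embed(img, key=42, alpha=0.05):
    """Embed a key-dependent +/-1 watermark into the diagonal detail band."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(np.float64), 'haar')
    w = np.sign(np.random.default_rng(key).standard_normal(cD.shape))
    cD_marked = cD + alpha * np.abs(cD) * w
    return pywt.idwt2((cA, (cH, cV, cD_marked)), 'haar'), w

def detect(img, w, threshold):
    """Correlation detector; threshold set for a target false-alarm rate."""
    _, (_, _, cD) = pywt.dwt2(img.astype(np.float64), 'haar')
    statistic = float(np.mean(cD * w))
    return statistic > threshold
```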
Kusakawa, Shinji; Yasuda, Satoshi; Kuroda, Takuya; Kawamata, Shin; Sato, Yoji
2015-12-08
Contamination with tumorigenic cellular impurities is one of the most pressing concerns for human cell-processed therapeutic products (hCTPs). The soft agar colony formation (SACF) assay, which is a well-known in vitro assay for the detection of malignant transformed cells, is applicable for the quality assessment of hCTPs. Here we established an image-based screening system for the SACF assay using a high-content cell analyzer termed the digital SACF assay. Dual fluorescence staining of formed colonies and the dissolution of soft agar led to accurate detection of transformed cells with the imaging cytometer. Partitioning a cell sample into multiple wells of culture plates enabled digital readout of the presence of colonies and elevated the sensitivity for their detection. In practice, the digital SACF assay detected impurity levels as low as 0.00001% of the hCTPs, i.e. only one HeLa cell contained in 10,000,000 human mesenchymal stem cells, within 30 days. The digital SACF assay saves time, is more sensitive than in vivo tumorigenicity tests, and would be useful for the quality control of hCTPs in the manufacturing process.
Noncoherent parallel optical processor for discrete two-dimensional linear transformations.
Glaser, I
1980-10-01
We describe a parallel optical processor, based on a lenslet array, that provides general linear two-dimensional transformations using noncoherent light. Such a processor could become useful in image- and signal-processing applications in which the throughput requirements cannot be adequately satisfied by state-of-the-art digital processors. Experimental results that illustrate the feasibility of the processor by demonstrating its use in parallel optical computation of the two-dimensional Walsh-Hadamard transformation are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poling, Whitney A.; Savic, Vesna; Hector, Louis G.
2016-04-05
The strain-induced, diffusionless shear transformation of retained austenite to martensite during straining of transformation induced plasticity (TRIP) assisted steels increases strain hardening and delays necking and fracture, leading to exceptional ductility and strength, which are attractive for automotive applications. A novel technique that provides the retained austenite volume fraction variation with strain in TRIP-assisted steels with improved precision is presented. Digital images of the gauge section of tensile specimens were first recorded up to selected plastic strains with a stereo digital image correlation (DIC) system. The austenite volume fraction was measured by synchrotron X-ray diffraction from small squares cut from the gauge section. Strain fields in the squares were then computed by localizing the strain measurement to the corresponding region of a given square during DIC post-processing of the images recorded during tensile testing. Results obtained for a QP980 steel are used to study the influence of the initial volume fraction of austenite and the austenite transformation with strain on tensile mechanical behavior.
NASA Astrophysics Data System (ADS)
Battisti, F.; Carli, M.; Neri, A.
2011-03-01
The increasing use of digital image-based applications is resulting in huge databases that are often difficult to use and prone to misuse and privacy concerns. These issues are especially crucial in medical applications. The most commonly adopted solution is the encryption of both the image and the patient data in separate files that are then linked. This practice is inefficient since, in order to retrieve patient data or analysis details, it is necessary to decrypt both files. In this contribution, an alternative solution for secure medical image annotation is presented. The proposed framework is based on the joint use of a key-dependent wavelet transform (the Integer Fibonacci-Haar transform), a secure cryptographic scheme, and a reversible watermarking scheme. The system allows: i) the insertion of the patient data into the encrypted image without requiring knowledge of the original image; ii) the encryption of annotated images without causing loss of the embedded information; and iii) thanks to the complete reversibility of the process, recovery of the original image after mark removal. Experimental results show the effectiveness of the proposed scheme.
NASA Astrophysics Data System (ADS)
Singh, Hukum
2016-06-01
An asymmetric scheme has been proposed for optical double-image encryption in the gyrator wavelet transform (GWT) domain. Grayscale and binary images are encrypted separately using double random phase encoding (DRPE) in the GWT domain. Phase masks based on devil's vortex Fresnel lenses (DVFLs) and random phase masks (RPMs) are jointly used in the spatial as well as the Fourier plane. The images to be encrypted are first gyrator transformed and then single-level discrete wavelet transformed (DWT) to decompose the LL, HL, LH and HH matrices of approximation, horizontal, vertical and diagonal coefficients. The resulting coefficients from the DWT are multiplied by other RPMs and the results are applied to the inverse discrete wavelet transform (IDWT) to obtain the encrypted images. The images are recovered from their corresponding encrypted images by using the correct parameters of the GWT and DVFL; the digital implementation has been performed using MATLAB 7.6.0 (R2008a). The mother wavelet family, the DVFL, and the gyrator transform orders associated with the GWT are extra keys that make an attack more difficult. Thus, the scheme is more secure as compared to conventional techniques. The efficacy of the proposed scheme is verified by computing the mean-squared error (MSE) between the recovered and original images. The sensitivity of the proposed scheme is verified with respect to encryption parameters and noise attacks.
Omniview motionless camera orientation system
NASA Technical Reports Server (NTRS)
Martin, H. Lee (Inventor); Kuban, Daniel P. (Inventor); Zimmermann, Steven D. (Inventor); Busko, Nicholas (Inventor)
2010-01-01
An apparatus and method is provided for converting digital images for use in an imaging system. The apparatus includes a data memory which stores digital data representing an image having a circular or spherical field of view such as an image captured by a fish-eye lens, a control input for receiving a signal for selecting a portion of the image, and a converter responsive to the control input for converting digital data corresponding to the selected portion into digital data representing a planar image for subsequent display. Various methods include the steps of storing digital data representing an image having a circular or spherical field of view, selecting a portion of the image, and converting the stored digital data corresponding to the selected portion into digital data representing a planar image for subsequent display. In various embodiments, the data converter and data conversion step may use an orthogonal set of transformation algorithms.
Cartographic services contract...for everything geographic
,
2003-01-01
The U.S. Geological Survey's (USGS) Cartographic Services Contract (CSC) is used to award work for photogrammetric and mapping services under the umbrella of Architect-Engineer (A&E) contracting. The A&E contract is broad in scope and can accommodate any activity related to standard, nonstandard, graphic, and digital cartographic products. Services provided may include, but are not limited to, photogrammetric mapping and aerotriangulation; orthophotography; thematic mapping (for example, land characterization); analog and digital imagery applications; geographic information systems development; surveying and control acquisition, including ground-based and airborne Global Positioning System; analog and digital image manipulation, analysis, and interpretation; raster and vector map digitizing; data manipulations (for example, transformations, conversions, generalization, integration, and conflation); primary and ancillary data acquisition (for example, aerial photography, satellite imagery, multispectral, multitemporal, and hyperspectral data); image scanning and processing; metadata production, revision, and creation; and production or revision of standard USGS products defined by formal and informal specification and standards, such as those for digital line graphs, digital elevation models, digital orthophoto quadrangles, and digital raster graphics.
Cui, Xinchun; Niu, Yuying; Zheng, Xiangwei; Han, Yingshuai
2018-01-01
In this paper, a new color watermarking algorithm based on differential evolution is proposed. A color host image is first converted from RGB space to YIQ space, which is more suitable for the human visual system. Three-level discrete wavelet transformation is then applied to the luminance component Y to generate four different frequency sub-bands, after which singular value decomposition is performed on these sub-bands. In the watermark embedding process, discrete wavelet transformation is applied to the watermark image after scrambling encryption. The new algorithm uses a differential evolution algorithm with adaptive optimization to choose suitable scaling factors. Experimental results show that the proposed algorithm has better performance in terms of invisibility and robustness.
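As a rough illustration of the embedding stage described above, the sketch below applies a single-level Haar DWT to the luminance channel and adds a watermark to the singular values of the LL sub-band. The fixed factor ALPHA, the Haar wavelet, and the single-level, LL-only simplification are assumptions made for brevity; the algorithm above uses a three-level decomposition, works on all four sub-bands, and tunes the scaling factors by differential evolution.

```python
# Sketch of DWT + SVD watermark embedding in a luminance channel (not the authors' code).
import numpy as np
import pywt

ALPHA = 0.05  # hypothetical fixed scaling factor; the paper optimizes this adaptively

def rgb_to_y(img_rgb):
    """Luminance (Y of YIQ) from an RGB image with values in [0, 1]."""
    r, g, b = img_rgb[..., 0], img_rgb[..., 1], img_rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def embed(y, watermark):
    """Embed a watermark into the singular values of the LL sub-band."""
    ll, (lh, hl, hh) = pywt.dwt2(y, 'haar')
    u, s, vt = np.linalg.svd(ll, full_matrices=False)
    wm = np.resize(watermark, s.shape)        # crude size match, for the sketch only
    s_marked = s + ALPHA * wm                 # additive modification of singular values
    ll_marked = (u * s_marked) @ vt
    return pywt.idwt2((ll_marked, (lh, hl, hh)), 'haar')

y = rgb_to_y(np.random.rand(256, 256, 3))
watermark = np.random.randint(0, 2, 128 * 128).astype(float)
y_marked = embed(y, watermark)
```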
NASA Technical Reports Server (NTRS)
Cornell, Stephen R.; Leser, William P.; Hochhalter, Jacob D.; Newman, John A.; Hartl, Darren J.
2014-01-01
A method for detecting fatigue cracks has been explored at NASA Langley Research Center. Microscopic NiTi shape memory alloy (sensory) particles were embedded in a 7050 aluminum alloy matrix to detect the presence of fatigue cracks. Cracks exhibit an elevated stress field near their tip inducing a martensitic phase transformation in nearby sensory particles. Detectable levels of acoustic energy are emitted upon particle phase transformation such that the existence and location of fatigue cracks can be detected. To test this concept, a fatigue crack was grown in a mode-I single-edge notch fatigue crack growth specimen containing sensory particles. As the crack approached the sensory particles, measurements of particle strain, matrix-particle debonding, and phase transformation behavior of the sensory particles were performed. Full-field deformation measurements were performed using a novel multi-scale optical 3D digital image correlation (DIC) system. This information will be used in a finite element-based study to determine optimal sensory material behavior and density.
NASA Technical Reports Server (NTRS)
Rickard, D. A.; Bodenheimer, R. E.
1976-01-01
Digital computer components which perform two dimensional array logic operations (Tse logic) on binary data arrays are described. The properties of Golay transforms which make them useful in image processing are reviewed, and several architectures for Golay transform processors are presented with emphasis on the skeletonizing algorithm. Conventional logic control units developed for the Golay transform processors are described. One is a unique microprogrammable control unit that uses a microprocessor to control the Tse computer. The remaining control units are based on programmable logic arrays. Performance criteria are established and utilized to compare the various Golay transform machines developed. A critique of Tse logic is presented, and recommendations for additional research are included.
A novel blinding digital watermark algorithm based on lab color space
NASA Astrophysics Data System (ADS)
Dong, Bing-feng; Qiu, Yun-jie; Lu, Hong-tao
2010-02-01
A blind digital image watermarking algorithm must extract the watermark information without any extra information beyond the watermarked image itself. However, most current blind watermarking algorithms share the same disadvantage: besides the watermarked image, they also need the size and other information about the original image when extracting the watermark. This paper presents an innovative blind color image watermarking algorithm based on the Lab color space that does not have this disadvantage. The algorithm first marks the watermark region size and position by embedding regular blocks called anchor points in the image spatial domain, and then embeds the watermark into the image. In doing so, the watermark information can be easily extracted even after the image has been cropped or scaled. Experimental results show that the algorithm is particularly robust against color adjustment and geometric transformations. The algorithm has already been used in a copyright protection project and works very well.
A natural-color mapping for single-band night-time image based on FPGA
NASA Astrophysics Data System (ADS)
Wang, Yilun; Qian, Yunsheng
2018-01-01
An FPGA-based natural-color mapping method for single-band night-time images transfers the color of a reference image to the single-band night-time image, producing results consistent with human visual habits and helping observers identify targets. This paper introduces the processing flow of the FPGA-based natural-color mapping algorithm. First, the image is transformed by histogram equalization, and the intensity and standard-deviation features of the reference image are stored in SRAM. Then, the intensity and standard-deviation features of the real-time digital images are calculated by the FPGA. Finally, the FPGA completes the color mapping by matching pixels between images using the features in the luminance channel.
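A minimal sketch of the statistics-based mapping idea, assuming global matching of mean and standard deviation in the luminance channel and a simple luminance-to-color lookup built from the reference image; the FPGA design above stores per-image features in SRAM and performs the pixel matching in hardware, which is not reproduced here.

```python
# Sketch of statistics-based natural-color mapping for a single-band image (illustrative only).
import numpy as np

def colorize(night, ref_rgb):
    """night: 2-D float array in [0,1]; ref_rgb: H x W x 3 reference image in [0,1]."""
    ref_lum = ref_rgb.mean(axis=2)
    # 1) Match global mean and standard deviation of the luminance channel.
    matched = (night - night.mean()) / (night.std() + 1e-8)
    matched = np.clip(matched * ref_lum.std() + ref_lum.mean(), 0.0, 1.0)
    # 2) Build a luminance -> average-color lookup table from the reference image.
    bins = np.clip((ref_lum * 255).astype(int), 0, 255)
    counts = np.bincount(bins.ravel(), minlength=256)[:, None]
    lut = np.zeros((256, 3))
    for c in range(3):
        lut[:, c] = np.bincount(bins.ravel(), weights=ref_rgb[..., c].ravel(),
                                minlength=256)
    lut = lut / np.maximum(counts, 1)
    # 3) Map each matched luminance value to the average reference color at that level.
    idx = np.clip((matched * 255).astype(int), 0, 255)
    return lut[idx]

out = colorize(np.random.rand(128, 128), np.random.rand(128, 128, 3))
```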
Fully automated corneal endothelial morphometry of images captured by clinical specular microscopy
NASA Astrophysics Data System (ADS)
Bucht, Curry; Söderberg, Per; Manneberg, Göran
2009-02-01
The corneal endothelium serves as the posterior barrier of the cornea. Factors such as clarity and refractive properties of the cornea are in direct relationship to the quality of the endothelium. The endothelial cell density is considered the most important morphological factor. Morphometry of the corneal endothelium is presently done by semi-automated analysis of pictures captured by a Clinical Specular Microscope (CSM). Because of the occasional need of operator involvement, this process can be tedious, having a negative impact on sampling size. This study was dedicated to the development of fully automated analysis of images of the corneal endothelium, captured by CSM, using Fourier analysis. Software was developed in the mathematical programming language Matlab. Pictures of the corneal endothelium, captured by CSM, were read into the analysis software. The software automatically performed digital enhancement of the images. The digitally enhanced images of the corneal endothelium were transformed, using the fast Fourier transform (FFT). Tools were developed and applied for identification and analysis of relevant characteristics of the Fourier transformed images. The data obtained from each Fourier transformed image was used to calculate the mean cell density of its corresponding corneal endothelium. The calculation was based on well known diffraction theory. Results in form of estimated cell density of the corneal endothelium were obtained, using fully automated analysis software on images captured by CSM. The cell density obtained by the fully automated analysis was compared to the cell density obtained from classical, semi-automated analysis and a relatively large correlation was found.
Watermarking scheme based on singular value decomposition and homomorphic transform
NASA Astrophysics Data System (ADS)
Verma, Deval; Aggarwal, A. K.; Agarwal, Himanshu
2017-10-01
A semi-blind watermarking scheme based on singular value decomposition (SVD) and the homomorphic transform is proposed. This scheme ensures the digital security of an eight-bit grayscale image by inserting an invisible eight-bit grayscale watermark into it. The key approach of the scheme is to apply the homomorphic transform on the host image to obtain its reflectance component. The watermark is embedded into the singular values that are obtained by applying the singular value decomposition on the reflectance component. Peak signal-to-noise ratio (PSNR), normalized correlation coefficient (NCC) and mean structural similarity index measure (MSSIM) are used to evaluate the performance of the scheme. Invisibility of the watermark is ensured by visual inspection and the high PSNR values of watermarked images. Presence of the watermark is ensured by visual inspection and high values of NCC and MSSIM of extracted watermarks. Robustness of the scheme is verified by high values of NCC and MSSIM for attacked watermarked images.
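A minimal sketch of embedding in the reflectance component obtained by a homomorphic transform, assuming the illumination is approximated by a Gaussian-smoothed log image and a fixed embedding strength ALPHA; both parameters and the additive rule on the singular values are illustrative choices, not the scheme's actual settings.

```python
# Sketch of SVD embedding in the reflectance component of a homomorphic decomposition.
import numpy as np
from scipy.ndimage import gaussian_filter

ALPHA = 0.02   # hypothetical embedding strength
SIGMA = 15.0   # hypothetical smoothing scale for the illumination estimate

def embed_reflectance(host, watermark):
    """host, watermark: 2-D float arrays, host strictly positive (e.g. in [1, 256])."""
    log_img = np.log(host)
    illumination = gaussian_filter(log_img, SIGMA)    # slowly varying component
    reflectance = log_img - illumination              # homomorphic separation
    u, s, vt = np.linalg.svd(reflectance, full_matrices=False)
    s_marked = s + ALPHA * np.resize(watermark, s.shape)
    reflectance_marked = (u * s_marked) @ vt
    return np.exp(illumination + reflectance_marked)  # back to the spatial domain

host = np.random.randint(1, 257, (256, 256)).astype(float)
wm = np.random.rand(256, 256)
marked = embed_reflectance(host, wm)
```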
A difference tracking algorithm based on discrete sine transform
NASA Astrophysics Data System (ADS)
Liu, HaoPeng; Yao, Yong; Lei, HeBing; Wu, HaoKun
2018-04-01
Target tracking is an important field of computer vision. Template-matching tracking algorithms based on sum of squared differences (SSD) matching and normalized correlation coefficient (NCC) matching are very sensitive to changes in image gray level. When the brightness or gray level changes, the tracking algorithm is affected by high-frequency information, tracking accuracy is reduced, and the target may be lost. In this paper, a difference tracking algorithm based on the discrete sine transform is proposed to reduce the influence of changes in image gray level or brightness. The algorithm, which combines the discrete sine transform with a difference operation, maps the target image into a digital sequence. A Kalman filter predicts the target position, and the Hamming distance determines the degree of similarity between the target and the template; the window closest to the template is taken as the target to be tracked, and this target then updates the template. Target tracking is achieved on this basis. The algorithm is tested in this paper: compared with SSD and NCC template-matching algorithms, it tracks the target stably when the image gray level or brightness changes, and the tracking speed can meet real-time requirements.
Image Transformations-Montserrat
NASA Technical Reports Server (NTRS)
2002-01-01
A slightly oblique digital photograph of Montserrat taken from the International Space Station was posted to Earth Observatory in December 2001. An Earth Observatory reader used widely available software to correct the oblique perspective and adjust the color. The story of how he modified the image includes step-by-step instructions that can be applied to other photographs. Photographs of Earth taken by astronauts have shaped our view of the Earth and are part of our popular culture because NASA makes them easily accessible to the public. Read the Transformations Story for more information. The original image was digital photograph number ISS002-E-9309, taken on July 9, 2001, from the International Space Station and was provided by the Earth Sciences and Image Analysis Laboratory at Johnson Space Center. Additional images taken by astronauts and cosmonauts can be viewed at the NASA-JSC Gateway to Astronaut Photography of Earth. Bill Innanen provided the transformed image and the story of how he did it.
Vadnjal, Ana Laura; Etchepareborda, Pablo; Federico, Alejandro; Kaufmann, Guillermo H
2013-03-20
We present a method to determine micro and nano in-plane displacements based on the phase singularities generated by applying directional wavelet transforms to speckle pattern images. The spatial distribution of the phase singularities obtained by the wavelet transform configures a network, which is characterized by two quasi-orthogonal directions. The displacement value is determined by identifying the intersection points of the network before and after the displacement produced by the tested object. The performance of this method is evaluated using simulated speckle patterns and experimental data. The proposed approach is compared with the optical vortex metrology and digital image correlation methods in terms of performance and noise robustness, and the advantages and limitations associated with each method are also discussed.
Development of Software to Digitize Historic Hardcopy Seismograms from Nuclear Explosions
2010-09-01
portion. As will be discussed below, this complicates the preparation of the image for subsequent digitization because background threshold values are...is the output image and −1 < β ≤ 0 is a user selectable parameter. Global contrast enhancement uses a whitening transform to make a given image
Digital holographic 3D imaging spectrometry (a review)
NASA Astrophysics Data System (ADS)
Yoshimori, Kyu
2017-09-01
This paper reviews recent progress in digital holographic 3D imaging spectrometry. The principle of this method is a marriage of incoherent holography and Fourier transform spectroscopy. The review covers the principle, the signal-processing procedure, and experimental results for obtaining a multispectral set of 3D images of spatially incoherent, polychromatic objects.
Digital tissue and what it may reveal about the brain.
Morgan, Josh L; Lichtman, Jeff W
2017-10-30
Imaging as a means of scientific data storage has evolved rapidly over the past century from hand drawings, to photography, to digital images. Only recently can sufficiently large datasets be acquired, stored, and processed such that tissue digitization can actually reveal more than direct observation of tissue. One field where this transformation is occurring is connectomics: the mapping of neural connections in large volumes of digitized brain tissue.
NASA Astrophysics Data System (ADS)
Qin, Yi; Wang, Zhipeng; Wang, Hongjuan; Gong, Qiong
2018-07-01
We propose a binary image encryption method for the joint transform correlator (JTC) with the aid of run-length encoding (RLE) and the Quick Response (QR) code, which enables lossless retrieval of the primary image. The binary image is encoded with RLE to obtain highly compressed data, and the compressed binary image is further scrambled using a chaos-based method. The compressed and scrambled binary image is then transformed into one QR code that is finally encrypted in the JTC. To the best of our knowledge, the proposed method is the first to encode a binary image into a QR code of identical size, and it may therefore open a new way to extend the application of QR codes in optical security. Moreover, the preprocessing operations, including RLE, chaos scrambling and the QR code translation, append an additional security level to the JTC. We present digital results that confirm our approach.
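A minimal sketch of the run-length encoding step for a binary image, shown before any chaos scrambling or QR-code packing; the (first value, run lengths) layout is an illustrative format, not necessarily the representation used in the method above.

```python
# Sketch of run-length encoding/decoding of a binary image.
import numpy as np

def rle_encode(binary_img):
    bits = binary_img.astype(np.uint8).ravel()
    change = np.flatnonzero(np.diff(bits)) + 1           # positions where the value flips
    boundaries = np.concatenate(([0], change, [bits.size]))
    runs = np.diff(boundaries)                           # length of each constant run
    return int(bits[0]), runs

def rle_decode(first_value, runs, shape):
    values = (first_value + np.arange(runs.size)) % 2    # runs alternate 0/1
    return np.repeat(values, runs).reshape(shape).astype(np.uint8)

img = (np.random.rand(64, 64) > 0.9).astype(np.uint8)   # sparse binary image
v0, runs = rle_encode(img)
assert np.array_equal(rle_decode(v0, runs, img.shape), img)
```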
A half-blind color image hiding and encryption method in fractional Fourier domains
NASA Astrophysics Data System (ADS)
Ge, Fan; Chen, Linfei; Zhao, Daomu
2008-09-01
We have proposed a new technique for digital image encryption and hiding based on fractional Fourier transforms with double random phases. The original hidden image is encrypted twice, and the number of keys is increased to strengthen information protection. Color image hiding and encryption with wavelength multiplexing is achieved by embedding and encrypting the R, G and B channels separately. The robustness against occlusion attacks and noise attacks is analyzed, and computer simulations are presented with the corresponding results.
The Engineer Topographic Laboratories /ETL/ hybrid optical/digital image processor
NASA Astrophysics Data System (ADS)
Benton, J. R.; Corbett, F.; Tuft, R.
1980-01-01
An optical-digital processor for generalized image enhancement and filtering is described. The optical subsystem is a two-PROM Fourier filter processor. Input imagery is isolated, scaled, and imaged onto the first PROM; this input plane acts like a liquid gate and serves as an incoherent-to-coherent converter. The image is transformed onto a second PROM which also serves as a filter medium; filters are written onto the second PROM with a laser scanner in real time. A solid state CCTV camera records the filtered image, which is then digitized and stored in a digital image processor. The operator can then manipulate the filtered image using the gray scale and color remapping capabilities of the video processor as well as the digital processing capabilities of the minicomputer.
Time efficient Gabor fused master slave optical coherence tomography
NASA Astrophysics Data System (ADS)
Cernat, Ramona; Bradu, Adrian; Rivet, Sylvain; Podoleanu, Adrian
2018-02-01
In this paper the benefits, in terms of operation time, that the Master/Slave (MS) implementation of optical coherence tomography (OCT) can bring in comparison to Gabor fusion (GF) employing conventional fast-Fourier-transform-based OCT are presented. The Gabor fusion/Master Slave optical coherence tomography architecture proposed here does not need any data stitching. Instead, a subset of en-face images is produced for each focus position inside the sample to be imaged, using a reduced number of theoretically inferred Master masks. These en-face images are then assembled into a final volume. When the channelled spectra are digitized into 1024 sampling points, and more than 4 focus positions are required to produce the final volume, the Master Slave implementation of the instrument is faster than the conventional fast-Fourier-transform-based procedure.
Design and evaluation of sparse quantization index modulation watermarking schemes
NASA Astrophysics Data System (ADS)
Cornelis, Bruno; Barbarien, Joeri; Dooms, Ann; Munteanu, Adrian; Cornelis, Jan; Schelkens, Peter
2008-08-01
In the past decade the use of digital data has increased significantly. The advantages of digital data are, amongst others, easy editing, fast, cheap and cross-platform distribution and compact storage. The most crucial disadvantages are the unauthorized copying and copyright issues, by which authors and license holders can suffer considerable financial losses. Many inexpensive methods are readily available for editing digital data and, unlike analog information, the reproduction in the digital case is simple and robust. Hence, there is great interest in developing technology that helps to protect the integrity of a digital work and the copyrights of its owners. Watermarking, which is the embedding of a signal (known as the watermark) into the original digital data, is one method that has been proposed for the protection of digital media elements such as audio, video and images. In this article, we examine watermarking schemes for still images, based on selective quantization of the coefficients of a wavelet transformed image, i.e. sparse quantization-index modulation (QIM) watermarking. Different grouping schemes for the wavelet coefficients are evaluated and experimentally verified for robustness against several attacks. Wavelet tree-based grouping schemes yield a slightly improved performance over block-based grouping schemes. Additionally, the impact of the deployment of error correction codes on the most promising configurations is examined. The utilization of BCH-codes (Bose, Ray-Chaudhuri, Hocquenghem) results in an improved robustness as long as the capacity of the error codes is not exceeded (cliff-effect).
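A minimal sketch of the basic quantization-index-modulation idea that underlies the schemes discussed above, using plain scalar dither modulation with an assumed step size DELTA; the sparse, wavelet-tree-grouped variants and the BCH error-correction layer examined in the article are not reproduced here.

```python
# Sketch of scalar QIM (dither modulation) embedding and detection on coefficients.
import numpy as np

DELTA = 8.0  # hypothetical quantization step

def qim_embed(coeffs, bits):
    """Embed one bit per coefficient by quantizing onto one of two offset lattices."""
    dither = np.where(bits == 0, 0.0, DELTA / 2.0)
    return np.round((coeffs - dither) / DELTA) * DELTA + dither

def qim_detect(coeffs):
    """Recover bits by finding the nearer of the two lattices."""
    err0 = np.abs(coeffs - np.round(coeffs / DELTA) * DELTA)
    shifted = coeffs - DELTA / 2.0
    err1 = np.abs(shifted - np.round(shifted / DELTA) * DELTA)
    return (err1 < err0).astype(np.uint8)

c = np.random.randn(1000) * 40.0
b = np.random.randint(0, 2, 1000)
assert np.array_equal(qim_detect(qim_embed(c, b)), b)
```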
(abstract) Topographic Signatures in Geology
NASA Technical Reports Server (NTRS)
Farr, Tom G.; Evans, Diane L.
1996-01-01
Topographic information is required for many Earth Science investigations. For example, topography is an important element in regional and global geomorphic studies because it reflects the interplay between the climate-driven processes of erosion and the tectonic processes of uplift. A number of techniques have been developed to analyze digital topographic data, including Fourier texture analysis. A Fourier transform of the topography of an area allows the spatial frequency content of the topography to be analyzed. Band-pass filtering of the transform produces images representing the amplitude of different spatial wavelengths. These are then used in a multi-band classification to map units based on their spatial frequency content. The results using a radar image instead of digital topography showed good correspondence to a geologic map; however, brightness variations in the image unrelated to topography caused errors. An additional benefit of using Fourier band-pass images for the classification is that the textural signatures of the units are quantitative measures of the spatial characteristics of the units that may be used to map similar units in similar environments.
NASA Astrophysics Data System (ADS)
Trusiak, Maciej; Micó, Vicente; Patorski, Krzysztof; García-Monreal, Javier; Sluzewski, Lukasz; Ferreira, Carlos
2016-08-01
In this contribution we propose two Hilbert-Huang transform based algorithms for fast and accurate single-shot and two-shot quantitative phase imaging applicable in both on-axis and off-axis configurations. In the first scheme a single fringe pattern containing information about the biological phase sample under study is adaptively pre-filtered using an empirical mode decomposition based approach. It is then phase demodulated by the Hilbert spiral transform, aided by principal component analysis for local fringe orientation estimation. The orientation calculation enables efficient analysis of closed fringes; it can be avoided by using an arbitrary phase-shifted two-shot Gram-Schmidt orthonormalization scheme aided by Hilbert-Huang transform pre-filtering. This two-shot approach is a trade-off between single-frame and temporal phase shifting demodulation. Robustness of the proposed techniques is corroborated using experimental digital holographic microscopy studies of polystyrene micro-beads and red blood cells. Both algorithms compare favorably with the temporal phase shifting scheme which is used as a reference method.
Wehde, M. E.
1995-01-01
The common method of digital image comparison by subtraction imposes various constraints on the image contents. Precise registration of images is required to assure proper evaluation of surface locations. The attribute being measured and the calibration and scaling of the sensor are also important to the validity and interpretability of the subtraction result. Influences of sensor gains and offsets complicate the subtraction process. The presence of any uniform systematic transformation component in one of two images to be compared distorts the subtraction results and requires analyst intervention to interpret or remove it. A new technique has been developed to overcome these constraints. Images to be compared are first transformed using the cumulative relative frequency as a transfer function. The transformed images represent the contextual relationship of each surface location with respect to all others within the image. Differencing the transformed images results in a percentile rank-ordered difference. This process produces consistent terrain-change information even when the above requirements necessary for subtraction are relaxed. This technique may be valuable to an appropriately designed hierarchical terrain-monitoring methodology because it does not require human participation in the process.
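A minimal sketch of the cumulative-relative-frequency transform and the percentile rank-ordered difference it enables; the 8-bit value range and the synthetic two-date images are assumptions for illustration only.

```python
# Sketch of the cumulative-relative-frequency (percentile rank) transform and differencing.
import numpy as np

def percentile_rank(img):
    """Map each pixel to the cumulative relative frequency of its value (0..1)."""
    values = img.astype(np.int64).ravel()
    hist = np.bincount(values, minlength=256)
    cdf = np.cumsum(hist) / values.size
    return cdf[img.astype(np.int64)]

date1 = np.random.randint(0, 256, (512, 512))
date2 = np.clip(date1 + np.random.randint(-20, 21, (512, 512)), 0, 255)

# Differencing the rank-transformed images yields a change map that is insensitive
# to uniform gain/offset differences between the two acquisitions.
change = percentile_rank(date2) - percentile_rank(date1)
```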
Kafieh, Rahele; Shahamoradi, Mahdi; Hekmatian, Ehsan; Foroohandeh, Mehrdad; Emamidoost, Mostafa
2012-10-01
To carry out an in vivo and in vitro comparative pilot study to evaluate the preciseness of a newly proposed digital dental radiography setup. This setup was based on markers placed on an external frame to eliminate the measurement errors due to incorrect geometry in the relative positioning of cone, teeth and sensor. Five patients with previous panoramic images were selected to undergo the proposed periapical digital imaging for the in vivo phase. For the in vitro phase, 40 extracted teeth were replanted in dry mandibular sockets and periapical digital images were prepared. The standard reference for the real scale of the teeth was obtained through extracted-teeth measurements for the in vitro application and was calculated through panoramic imaging for the in vivo phase. The proposed image processing technique was applied on the periapical digital images to distinguish the incorrect geometry. The recognized error was inversely applied on the image and the modified images were compared to the correct values. The measurement findings after the distortion removal were compared to our gold standards (results of panoramic imaging or measurements from extracted teeth) and showed an accuracy of 96.45% in the in vivo examinations and 96.0% in the in vitro tests. The proposed distortion removal method is able to identify possible inaccurate geometry during image acquisition and is capable of applying the inverse transform to the distorted radiograph to obtain a correctly modified image. This can be very helpful in applications such as root canal therapy, implant surgical procedures and digital subtraction radiography, which depend essentially on precise measurements.
Fast reconstruction of off-axis digital holograms based on digital spatial multiplexing.
Sha, Bei; Liu, Xuan; Ge, Xiao-Lu; Guo, Cheng-Shan
2014-09-22
A method for fast reconstruction of off-axis digital holograms based on a digital multiplexing algorithm is proposed. Instead of the existing angular multiplexing (AM), the new method utilizes a spatial multiplexing (SM) algorithm, in which four off-axis holograms recorded in sequence are synthesized into one SM function by multiplying each hologram with a tilted plane wave and then adding them up. In comparison with conventional methods, the SM algorithm simplifies the two-dimensional (2-D) Fourier transforms (FTs) of four N×N arrays into a 1.25-D FT of one N×N array. Experimental results demonstrate that, using the SM algorithm, the computational efficiency can be improved while the reconstructed wavefronts keep the same quality as those retrieved with the existing AM method. This algorithm may be useful in the design of a fast preview system for dynamic wavefront imaging in digital holography.
NASA Astrophysics Data System (ADS)
Singh, Hukum
2016-12-01
A cryptosystem for securing image encryption is considered by using double random phase encoding in the Fresnel wavelet transform (FWT) domain. Random phase masks (RPMs) and structured phase masks (SPMs) based on a devil's vortex toroidal lens (DVTL) are used in the spatial as well as in the Fourier plane. The images to be encrypted are first Fresnel transformed and then a single-level discrete wavelet transform (DWT) is applied to decompose them into the LL, HL, LH and HH matrices. The resulting matrices from the DWT are multiplied by additional RPMs and the resultants are subjected to an inverse DWT to obtain the encrypted images. The scheme is more secure because of the many parameters used in the construction of the SPM. The original images are recovered by using the correct parameters of the FWT and SPM. The phase mask based on the DVTL increases security by enlarging the key space for encryption and decryption. The proposed encryption scheme is a lens-less optical system and its digital implementation has been performed using MATLAB 7.6.0 (R2008a). The computed value of the mean-squared error between the retrieved and the input images shows the efficacy of the scheme. The sensitivity to encryption parameters and the robustness against occlusion, entropy and multiplicative Gaussian noise attacks have been analysed.
2017-03-01
It does so by using an optical lens to perform an inverse spatial Fourier Transform on the up-converted RF signals, thereby rendering a real-time... simultaneous beams or other engineered beam patterns. There are two general approaches to array-based beam forming: digital and analog. In digital beam...of significantly limiting the number of beams that can be formed simultaneously and narrowing the operational bandwidth. An alternate approach that
Color image lossy compression based on blind evaluation and prediction of noise characteristics
NASA Astrophysics Data System (ADS)
Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena
2011-03-01
The paper deals with JPEG adaptive lossy compression of color images formed by digital cameras. Adaptation to noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner, and the characteristics of this dominant factor are then estimated. Finally, a scaling factor that determines the quantization steps for the default JPEG table is adaptively selected. Within this general framework, two possible strategies are considered. The first presumes blind estimation for an image after all operations in the digital image processing chain, just before compressing a given raster image. The second strategy is based on prediction of noise and blur parameters from analysis of the RAW image under quite general assumptions concerning the parameters of the transformations an image will be subject to at further processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to the super-high quality (SHQ) mode; however, it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on many real-life color images acquired by digital cameras and shown to provide more than a twofold increase of average CR compared to the SHQ mode without introducing visible distortions with respect to SHQ-compressed images.
NASA Astrophysics Data System (ADS)
Yao, Guang-tao; Zhang, Xiao-hui; Ge, Wei-long
2012-01-01
The underwater laser imaging detection is an effective method of detecting short-distance targets underwater and is an important complement to sonar detection. With the development of underwater laser imaging technology and underwater vehicle technology, underwater automatic target identification has received more and more attention and is a research difficulty in the area of underwater optical imaging information processing. Today, underwater automatic target identification based on optical imaging is usually realized with digital circuits and software programming, whose algorithm realization and control are very flexible. However, optical imaging information consists of 2D or even 3D images, and the amount of information to process is large, so purely digital electronic hardware needs a long identification time and can hardly meet the demands of real-time identification. If computer parallel processing is adopted, the identification speed can be improved, but complexity, size and power consumption increase. This paper attempts to apply optical correlation identification technology to realize underwater automatic target identification. Optical correlation identification utilizes the Fourier-transform property of a Fourier lens, which can accomplish the Fourier transform of image information at the nanosecond level; optical space interconnection calculation offers parallelism, high speed, large capacity and high resolution, and it is combined with the computational and control flexibility of digital circuits to realize an optoelectronic hybrid identification mode. We derive the theoretical formulation of correlation identification, analyze the principle of optical correlation identification, and write a MATLAB simulation program. We use single-frame images obtained by underwater range-gated laser imaging for identification; by identifying and locating the target at different positions, the speed and orientation efficiency of target identification are improved effectively, and the feasibility of the method is preliminarily validated.
Optical recognition of statistical patterns
NASA Astrophysics Data System (ADS)
Lee, S. H.
1981-12-01
Optical implementation of the Fukunaga-Koontz transform (FKT) and the Least-Squares Linear Mapping Technique (LSLMT) is described. The FKT is a linear transformation which performs image feature extraction for a two-class image classification problem. The LSLMT performs a transform from large dimensional feature space to small dimensional decision space for separating multiple image classes by maximizing the interclass differences while minimizing the intraclass variations. The FKT and the LSLMT were optically implemented by utilizing a coded phase optical processor. The transform was used for classifying birds and fish. After the F-K basis functions were calculated, those most useful for classification were incorporated into a computer generated hologram. The output of the optical processor, consisting of the squared magnitude of the F-K coefficients, was detected by a T.V. camera, digitized, and fed into a micro-computer for classification. A simple linear classifier based on only two F-K coefficients was able to separate the images into two classes, indicating that the F-K transform had chosen good features. Two advantages of optically implementing the FKT and LSLMT are parallel and real time processing.
Wavelet-based reversible watermarking for authentication
NASA Astrophysics Data System (ADS)
Tian, Jun
2002-04-01
In the digital information age, digital content (audio, image, and video) can be easily copied, manipulated, and distributed. Copyright protection and content authentication of digital content have become an urgent problem for content owners and distributors. Digital watermarking has provided a valuable solution to this problem. Based on its application scenario, most digital watermarking methods can be divided into two categories: robust watermarking and fragile watermarking. As a special subset of fragile watermarks, a reversible watermark (also called a lossless, invertible, or erasable watermark) enables recovery of the original, unwatermarked content after the watermarked content has been detected to be authentic. Such reversibility to get back the unwatermarked content is highly desired in sensitive imagery, such as military data and medical data. In this paper we present a reversible watermarking method based on an integer wavelet transform. We look into the binary representation of each wavelet coefficient and embed an extra bit into each expandable wavelet coefficient. The location map of all expanded coefficients is coded by JBIG2 compression and these coefficient values are losslessly compressed by arithmetic coding. Besides these two compressed bit streams, an SHA-256 hash of the original image is also embedded for authentication purposes.
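A minimal sketch of the expandable-coefficient idea on the integer Haar transform of a single pixel pair: the detail coefficient is expanded to carry one extra bit, and the operation is exactly invertible. The location map, its JBIG2 and arithmetic-coding compression, and the SHA-256 payload described above are omitted.

```python
# Sketch of reversible (difference-expansion) bit embedding on an integer Haar pixel pair.

def embed_pair(a, b, bit):
    """Embed one bit into an 8-bit pixel pair; returns None if the pair is not expandable."""
    l = (a + b) // 2                       # integer Haar approximation coefficient
    h = a - b                              # integer Haar detail coefficient
    h2 = 2 * h + bit                       # expand the detail coefficient
    a2 = l + (h2 + 1) // 2
    b2 = a2 - h2
    if 0 <= a2 <= 255 and 0 <= b2 <= 255:  # expandable: no overflow after embedding
        return a2, b2
    return None

def extract_pair(a2, b2):
    """Recover the embedded bit and the original pixel pair."""
    l = (a2 + b2) // 2
    h2 = a2 - b2
    bit = h2 % 2
    h = h2 // 2                            # undo the expansion
    a = l + (h + 1) // 2
    return bit, a, a - h

marked = embed_pair(120, 118, 1)
assert marked is not None
assert extract_pair(*marked) == (1, 120, 118)
```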
Topography of hidden objects using THz digital holography with multi-beam interferences.
Valzania, Lorenzo; Zolliker, Peter; Hack, Erwin
2017-05-15
We present a method for the separation of the signal scattered from an object hidden behind a THz-transparent sample in the framework of THz digital holography in reflection. It combines three images of different interference patterns to retrieve the amplitude and phase distribution of the object beam. Comparison of simulated with experimental images obtained from a metallic resolution target behind a Teflon plate demonstrates that the interference patterns can be described in the simple form of three-beam interference. Holographic reconstructions after the application of the method show a considerable improvement compared to standard reconstructions exclusively based on Fourier transform phase retrieval.
Superfast 3D shape measurement of a flapping flight process with motion based segmentation
NASA Astrophysics Data System (ADS)
Li, Beiwen
2018-02-01
Flapping flight has drawn interest from different fields including biology, aerodynamics and robotics. For such research, digital fringe projection using defocused binary image projection with a digital micromirror device offers superfast (e.g., several kHz) measurement capabilities, yet its measurement quality is still affected by the motion of flapping flight. This research proposes a novel computational framework for dynamic 3D shape measurement of a flapping flight process. The fast and slow motion parts are separately reconstructed with Fourier transform and phase shifting. Experiments demonstrate its success by measuring a flapping wing robot (image acquisition rate: 5000 Hz; flapping speed: 25 cycles/second).
A method to measure the presampling MTF in digital radiography using Wiener deconvolution
NASA Astrophysics Data System (ADS)
Zhou, Zhongxing; Zhu, Qingzhen; Gao, Feng; Zhao, Huijuan; Zhang, Lixin; Li, Guohui
2013-03-01
We developed a novel method for determining the presampling modulation transfer function (MTF) of digital radiography systems from slanted-edge images based on Wiener deconvolution. The degraded supersampled edge spread function (ESF) was obtained from simulated slanted-edge images with known MTF in the presence of Poisson noise, and its corresponding ideal ESF without degradation was constructed according to its central edge position. To meet the absolute-integrability condition of the Fourier transform, the original ESFs were mirrored to construct a symmetric pattern of ESFs. Then, based on the Wiener deconvolution technique, the supersampled line spread function (LSF) could be acquired from the symmetric pattern of degraded supersampled ESFs given the ideal symmetric ESFs and the system noise. The MTF is then the normalized magnitude of the Fourier transform of the LSF. The determined MTF showed strong agreement with the theoretical true MTF when an appropriate Wiener parameter was chosen. The effects of the Wiener parameter value and of the width of the square-like wave peak in the symmetric ESFs are illustrated and discussed. In conclusion, an accurate and simple method to measure the presampling MTF was established using the Wiener deconvolution technique applied to slanted-edge images.
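A minimal sketch of the Wiener-deconvolution step, assuming a synthetic Gaussian blur, a fixed Wiener parameter K, and no ESF mirroring; it only illustrates how the LSF (and hence the MTF) can be recovered by deconvolving a degraded edge with an ideal step edge.

```python
# Sketch of LSF/MTF estimation by Wiener deconvolution of a degraded edge spread function.
import numpy as np

K = 1e-2                                  # hypothetical Wiener regularization parameter
n = 512
x = np.arange(n) - n / 2

esf_ideal = (x >= 0).astype(float)        # ideal (undegraded) edge
lsf_true = np.exp(-0.5 * (x / 4.0) ** 2)  # stand-in system blur
lsf_true /= lsf_true.sum()
esf_degraded = np.real(np.fft.ifft(np.fft.fft(esf_ideal) * np.fft.fft(lsf_true)))
esf_degraded += np.random.normal(0, 1e-3, n)   # measurement noise

# Wiener deconvolution: ESF_degraded = ESF_ideal (*) LSF, solve for LSF.
H = np.fft.fft(esf_ideal)
G = np.fft.fft(esf_degraded)
lsf_est = np.real(np.fft.ifft(G * np.conj(H) / (np.abs(H) ** 2 + K)))

mtf = np.abs(np.fft.fft(np.fft.fftshift(lsf_est)))
mtf /= mtf[0]                             # normalize so MTF(0) = 1
```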
Sun, X; Chen, K J; Berg, E P; Newman, D J; Schwartz, C A; Keller, W L; Maddock Carlin, K R
2014-02-01
The objective was to use digital color image texture features to predict troponin-T degradation in beef. Image texture features, including 88 gray level co-occurrence texture features, 81 two-dimensional fast Fourier transform texture features, and 48 Gabor wavelet filter texture features, were extracted from color images of beef strip steaks (longissimus dorsi, n = 102) aged for 10 d, obtained using a digital camera and additional lighting. Steaks were designated degraded or not degraded based on troponin-T degradation determined on d 3 and d 10 postmortem by immunoblotting. Statistical analysis (STEPWISE regression model) and artificial neural network (support vector machine model, SVM) methods were designed to classify protein degradation. The d 3 and d 10 STEPWISE models were 94% and 86% accurate, respectively, while the d 3 and d 10 SVM models were 63% and 71% accurate, respectively, in predicting protein degradation in aged meat. STEPWISE and SVM models based on image texture features show potential to predict troponin-T degradation in meat. © 2013.
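A minimal sketch of extracting a few gray-level co-occurrence (GLCM) texture features from an image region, assuming a recent scikit-image where the functions are named graycomatrix and graycoprops; the distances, angles, and feature list are illustrative and are not the 88 GLCM features used in the study.

```python
# Sketch of GLCM texture-feature extraction from a grayscale region of interest.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_img):
    """gray_img: 2-D uint8 array (e.g. one channel of a color steak image)."""
    glcm = graycomatrix(gray_img, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    feats = {}
    for prop in ('contrast', 'homogeneity', 'energy', 'correlation'):
        feats[prop] = graycoprops(glcm, prop).ravel()   # one value per (distance, angle)
    return feats

roi = (np.random.rand(128, 128) * 255).astype(np.uint8)
features = glcm_features(roi)
```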
A hardware implementation of the discrete Pascal transform for image processing
NASA Astrophysics Data System (ADS)
Goodman, Thomas J.; Aburdene, Maurice F.
2006-02-01
The discrete Pascal transform is a polynomial transform with applications in pattern recognition, digital filtering, and digital image processing. It already has been shown that the Pascal transform matrix can be decomposed into a product of binary matrices. Such a factorization leads to a fast and efficient hardware implementation without the use of multipliers, which consume large amounts of hardware. We recently developed a field-programmable gate array (FPGA) implementation to compute the Pascal transform. Our goal was to demonstrate the computational efficiency of the transform while keeping hardware requirements at a minimum. Images are uploaded into memory from a remote computer prior to processing, and the transform coefficients can be offloaded from the FPGA board for analysis. Design techniques like as-soon-as-possible scheduling and adder sharing allowed us to develop a fast and efficient system. An eight-point, one-dimensional transform completes in 13 clock cycles and requires only four adders. An 8x8 two-dimensional transform completes in 240 cycles and requires only a top-level controller in addition to the one-dimensional transform hardware. Finally, through minor modifications to the controller, the transform operations can be pipelined to achieve 100% utilization of the four adders, allowing one eight-point transform to complete every seven clock cycles.
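A minimal sketch of the discrete Pascal transform using the usual signed-binomial matrix, applied to an eight-point signal and an 8x8 block; the binary-matrix factorization that makes the FPGA design multiplier-free is not reproduced here.

```python
# Sketch of the discrete Pascal transform in 1-D and (separably) in 2-D.
import numpy as np
from math import comb

def pascal_matrix(n):
    """P[m, k] = (-1)^k * C(m, k) for k <= m, else 0."""
    P = np.zeros((n, n), dtype=np.int64)
    for m in range(n):
        for k in range(m + 1):
            P[m, k] = (-1) ** k * comb(m, k)
    return P

P8 = pascal_matrix(8)
x = np.arange(8, dtype=np.int64)
X = P8 @ x                    # eight-point 1-D transform
block = np.random.randint(0, 256, (8, 8)).astype(np.int64)
B = P8 @ block @ P8.T         # separable 8x8 2-D transform

# The transform matrix is an involution: applying it twice returns the original data.
assert np.array_equal(P8 @ X, x)
assert np.array_equal(P8 @ B @ P8.T, block)
```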
A real-time error-free color-correction facility for digital consumers
NASA Astrophysics Data System (ADS)
Shaw, Rodney
2008-01-01
It has been well known since the earliest days of color photography that color balance in general, and facial reproduction (flesh tones) in particular, are of dominant interest to the consumer, and significant research resources have been expended in satisfying this need. The general problem is a difficult one, spanning the factors that govern perception and personal preference, the physics and chemistry of color reproduction, and the wide field of color measurement, specification, and analysis. However, with the advent of digital photography and its widespread acceptance in the consumer market, and with the possibility of a much greater degree of individual control over color reproduction, the field is taking on a new consumer-driven impetus, and the provision of user facilities for preferred color choice now constitutes an intense field of research. In addition, owing to the conveniences of digital technology, the collection of large databases and statistics relating to individual color preferences has become a relatively straightforward operation. Using a consumer-preference approach of this type, we have developed a user-friendly facility whereby unskilled consumers may manipulate the color of their personal digital images according to their preferred choice. By virtue of its ease of operation and the real-time nature of the color-correction transforms, this facility can readily be inserted anywhere a consumer interacts with a digital image, from camera, printer, or scanner, to web or photo-kiosk. Here the underlying scientific principles are explored in detail and related to the practical color-preference outcomes. Examples are given of the application to the correction of images with unsatisfactory color balance, especially flesh tones and faces, and the nature of the consumer controls and their corresponding image transformations is explored.
Image re-sampling detection through a novel interpolation kernel.
Hilal, Alaa
2018-06-01
Image re-sampling, involved in re-size and rotation transformations, is an essential element of typical digital image alterations. Fortunately, traces left from such processes are detectable, proving that the image has undergone a re-sampling transformation. Within this context, we present in this paper two original contributions. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. We then demonstrate its capacity to imitate the behavior of the most frequent interpolation kernels used in digital image re-sampling applications. Second, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. The involved process includes the minimization of an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm in order to assess images that have undergone complex transformations. The obtained results demonstrate better performance and reduced processing time when compared to a reference method, validating the suitability of the proposed approaches. Copyright © 2018 Elsevier B.V. All rights reserved.
Display nonlinearity in digital image processing for visual communications
NASA Astrophysics Data System (ADS)
Peli, Eli
1992-11-01
The luminance emitted from a cathode ray tube (CRT) display is a nonlinear function (the gamma function) of the input video signal voltage. In most analog video systems, compensation for this nonlinear transfer function is implemented in the camera amplifiers. When CRT displays are used to present psychophysical stimuli in vision research, the specific display nonlinearity usually is measured and accounted for to ensure that the luminance of each pixel in the synthetic image properly represents the intended value. However, when using digital image processing, the linear analog-to-digital converters store a digital image that is nonlinearly related to the displayed or recorded image. The effect of this nonlinear transformation on a variety of image-processing applications used in visual communications is described.
Display nonlinearity in digital image processing for visual communications
NASA Astrophysics Data System (ADS)
Peli, Eli
1991-11-01
The luminance emitted from a cathode ray tube, (CRT) display is a nonlinear function (the gamma function) of the input video signal voltage. In most analog video systems, compensation for this nonlinear transfer function is implemented in the camera amplifiers. When CRT displays are used to present psychophysical stimuli in vision research, the specific display nonlinearity usually is measured and accounted for to ensure that the luminance of each pixel in the synthetic image properly represents the intended value. However, when using digital image processing, the linear analog-to-digital converters store a digital image that is nonlinearly related to the displayed or recorded image. This paper describes the effect of this nonlinear transformation on a variety of image-processing applications used in visual communications.
Toward a perceptual video-quality metric
NASA Astrophysics Data System (ADS)
Watson, Andrew B.
1998-07-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating the visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics, and the economic need to reduce bit-rate to the lowest level that yields acceptable quality. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. Here I describe a new video quality metric that is an extension of these still image metrics into the time domain. Like the still image metrics, it is based on the Discrete Cosine Transform. An effort has been made to minimize the amount of memory and computation required by the metric, in order that it might be applied in the widest range of applications. To calibrate the basic sensitivity of this metric to spatial and temporal signals we have made measurements of visual thresholds for temporally varying samples of DCT quantization noise.
Cheremkhin, Pavel A; Kurbatova, Ekaterina A
2018-01-01
Compression of digital holograms can significantly help with the storage, transmission, and reconstruction of objects and data in 2D and 3D form. Compression of standard images by wavelet-based methods allows high compression ratios (up to 20-50 times) with minimal loss of quality. In the case of digital holograms, direct application of wavelets does not allow high compression values to be obtained. However, additional preprocessing and postprocessing can afford significant compression of holograms with acceptable quality of the reconstructed images. In this paper, the application of wavelet transforms for compression of off-axis digital holograms is considered. The combined technique based on zero- and twin-order elimination, wavelet compression of the amplitude and phase components of the obtained Fourier spectrum, and further additional compression of the wavelet coefficients by thresholding and quantization is considered. Numerical experiments on reconstruction of images from the compressed holograms are performed. A comparative analysis of the applicability of various wavelets and methods of additional compression of wavelet coefficients is performed, and optimum compression parameters can be estimated. The size of the holographic information was decreased by up to 190 times.
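A minimal sketch of the thresholding-plus-quantization stage applied to a wavelet decomposition of one real-valued hologram component; the wavelet, level count, threshold, and bit depth are illustrative choices, not the optimum parameters estimated above.

```python
# Sketch of wavelet compression of a hologram component by thresholding and quantization.
import numpy as np
import pywt

WAVELET, LEVELS, THRESH, BITS = 'db4', 3, 0.05, 6   # illustrative parameters

def compress(component):
    coeffs = pywt.wavedec2(component, WAVELET, level=LEVELS)
    arr, slices = pywt.coeffs_to_array(coeffs)
    arr[np.abs(arr) < THRESH * np.abs(arr).max()] = 0.0   # thresholding of small coefficients
    scale = np.abs(arr).max() / (2 ** (BITS - 1) - 1)
    q = np.round(arr / scale).astype(np.int8)             # uniform quantization
    return q, scale, slices

def decompress(q, scale, slices):
    coeffs = pywt.array_to_coeffs(q.astype(float) * scale, slices,
                                  output_format='wavedec2')
    return pywt.waverec2(coeffs, WAVELET)

amp = np.random.rand(256, 256)        # stand-in for the amplitude of a Fourier spectrum
q, scale, slices = compress(amp)
rec = decompress(q, scale, slices)
```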
PC software package to confront multimodality images and a stereotactic atlas in neurosurgery
NASA Astrophysics Data System (ADS)
Barillot, Christian; Lemoine, Didier; Gibaud, Bernard; Toulemont, P. J.; Scarabin, Jean-Marie
1990-07-01
The aim of this application is to interactively transfer information between CT, MRI or DSA data and a 3D stereotactic atlas digitized on a C. Based on a 3D organization of data, this system is devoted to assisting a neurosurgeon in surgical planning by numerically cross-assigning information between heterogeneous data (in-vivo or atlas). All these images can be retrieved in digital form from the PACS central archive (SIRENE PACS system). The basic feature of this confrontation is the Talairach proportional squaring, which consists in dividing the 3D cerebral space into independently deformable sub-parts. This 3D model is based on anatomical structures such as the AC-PC line and its two associated vertical lines VAC and VPC. Based on this proportional squaring, the atlas has been digitized in order to get atlas plates along the three orthogonal directions of this geometrical reference (axial, coronal, sagittal). The registration of in-vivo data to the proportional squaring is done by extracting either external framework landmarks or anatomical reference structures (i.e. the AC and PC structures on the MRI sagittal mid-plane image). Geometrical transformations and scaling are then recorded for each modality or acquisition according to the proportional squaring. These transformations make possible, for instance, the transfer of a 3D point of an MRI examination to its 3D location within the proportional squaring and furthermore to its 3D location within another data set (in-vivo or atlas). From that stage, the application gives the neurosurgeon the choice to select any confrontation between input data (in-vivo images or atlas) and output data (id).
Detection of small surface defects using DCT based enhancement approach in machine vision systems
NASA Astrophysics Data System (ADS)
He, Fuqiang; Wang, Wen; Chen, Zichen
2005-12-01
Utilizing a DCT-based enhancement approach, an improved small-defect detection algorithm for real-time leather surface inspection was developed. A two-stage decomposition procedure is proposed to extract an odd-odd frequency matrix after a digital image has been transformed to the DCT domain. Then, a reverse cumulative sum algorithm is proposed to detect the transition points of the gentle curves plotted from the odd-odd frequency matrix. The best radius of the cutting sector is computed in terms of the transition points, and the high-pass filtering operation is implemented. The filtered image is then inverse transformed back to the spatial domain. Finally, the restored image is segmented by an entropy method and some defect features are calculated. Experimental results show that the proposed small-defect detection method can reach a detection rate of 94%.
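A minimal sketch of DCT-domain high-pass filtering for defect enhancement, assuming a fixed cutting-sector radius R; in the method above the best radius is derived from the transition points of the odd-odd frequency curves, and the entropy-based segmentation is omitted.

```python
# Sketch of DCT-domain high-pass filtering for surface-defect enhancement.
import numpy as np
from scipy.fft import dctn, idctn

R = 16  # hypothetical radius of the suppressed low-frequency sector

def highpass_dct(img):
    coeffs = dctn(img, norm='ortho')
    u, v = np.meshgrid(np.arange(img.shape[1]), np.arange(img.shape[0]))
    coeffs[np.sqrt(u ** 2 + v ** 2) < R] = 0.0     # remove the low-frequency sector
    return idctn(coeffs, norm='ortho')

# Smooth texture background with a small bright defect: filtering suppresses the slowly
# varying background and keeps the local anomaly for later segmentation.
img = np.random.rand(128, 128) * 0.1
img[60:64, 60:64] += 1.0
restored = highpass_dct(img)
```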
Kahle, A.B.; Rowan, L.C.
1980-01-01
Six channels of multispectral middle infrared (8 to 14 micrometres) aircraft scanner data were acquired over the East Tintic mining district, Utah. The digital image data were computer processed to create a color-composite image based on principal component transformations. When combined with a visible and near-infrared color-composite image from a previous flight, and with limited field checking, it is possible to discriminate quartzite, carbonate rocks, quartz latitic and quartz monzonitic rocks, latitic and monzonitic rocks, silicified altered rocks, argillized altered rocks, and vegetation. -from Authors
Barisoni, Laura; Troost, Jonathan P; Nast, Cynthia; Bagnasco, Serena; Avila-Casado, Carmen; Hodgin, Jeffrey; Palmer, Matthew; Rosenberg, Avi; Gasim, Adil; Liensziewski, Chrysta; Merlino, Lino; Chien, Hui-Ping; Chang, Anthony; Meehan, Shane M; Gaut, Joseph; Song, Peter; Holzman, Lawrence; Gibson, Debbie; Kretzler, Matthias; Gillespie, Brenda W; Hewitt, Stephen M
2016-07-01
The multicenter Nephrotic Syndrome Study Network (NEPTUNE) digital pathology scoring system employs a novel and comprehensive methodology to document pathologic features from whole-slide images, immunofluorescence and ultrastructural digital images. To estimate inter- and intra-reader concordance of this descriptor-based approach, data from 12 pathologists (eight NEPTUNE and four non-NEPTUNE) with experience from training to 30 years were collected. A descriptor reference manual was generated and a webinar-based protocol for consensus/cross-training implemented. Intra-reader concordance for 51 glomerular descriptors was evaluated on jpeg images by seven NEPTUNE pathologists scoring 131 glomeruli three times (Tests I, II, and III), each test following a consensus webinar review. Inter-reader concordance of glomerular descriptors was evaluated in 315 glomeruli by all pathologists; interstitial fibrosis and tubular atrophy (244 cases, whole-slide images) and four ultrastructural podocyte descriptors (178 cases, jpeg images) were evaluated once by six and five pathologists, respectively. Cohen's kappa for inter-reader concordance for 48/51 glomerular descriptors with sufficient observations was moderate (0.40
Quantum color image watermarking based on Arnold transformation and LSB steganography
NASA Astrophysics Data System (ADS)
Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping; Luo, Gaofeng
In this paper, a quantum color image watermarking scheme is proposed based on twice-scrambling with Arnold transformations and least-significant-bit (LSB) steganography. Both the carrier image and the watermark image are represented by the novel quantum representation of color digital images model (NCQI). The image sizes of the carrier and the watermark are assumed to be 2^n×2^n and 2^(n-1)×2^(n-1), respectively. First, the watermark is scrambled into a disordered form through an image preprocessing technique that simultaneously exchanges the pixel positions and alters the color information based on Arnold transforms. Then, the scrambled watermark of size 2^(n-1)×2^(n-1) with 24-qubit grayscale is expanded to an image of size 2^n×2^n with 6-qubit grayscale using the nearest-neighbor interpolation method. Finally, the scrambled and expanded watermark is embedded into the carrier by the LSB steganography scheme, and a key image of size 2^n×2^n carrying 3-qubit information is generated at the same time; only with this key image can the original watermark be retrieved. The extraction of the watermark is the reverse of embedding, achieved by applying the sequence of operations in reverse order. Experimental results for different carrier and watermark images are simulated on a classical computer with MATLAB 2014b, and illustrate that the present method performs well in terms of visual quality, robustness, and steganography capacity.
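The sketch below is a purely classical NumPy analogue of the scrambling and LSB steps (no quantum circuits): Arnold cat-map scrambling of a binary watermark followed by LSB embedding and exact recovery. The image size, iteration count, and equal carrier/watermark dimensions are illustrative assumptions rather than the paper's NCQI construction.

```python
import numpy as np

def arnold(img, iterations=5):
    """Scramble a square image with the Arnold cat map."""
    n = img.shape[0]
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img.copy()
    for _ in range(iterations):
        out = out[(x + y) % n, (x + 2 * y) % n]
    return out

def inverse_arnold(img, iterations=5):
    """Undo arnold() by applying the inverse map the same number of times."""
    n = img.shape[0]
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img.copy()
    for _ in range(iterations):
        out = out[(2 * x - y) % n, (y - x) % n]
    return out

def embed_lsb(carrier, bits):
    """Write one watermark bit into the least significant bit of each pixel."""
    return (carrier & np.uint8(0xFE)) | (bits & np.uint8(1))

# toy usage: 8-bit grayscale carrier and a binary watermark of the same size
carrier = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
watermark = np.random.randint(0, 2, (256, 256), dtype=np.uint8)
stego = embed_lsb(carrier, arnold(watermark))
recovered = inverse_arnold(stego & np.uint8(1))
assert np.array_equal(recovered, watermark)
```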
Image Encryption Algorithm Based on Hyperchaotic Maps and Nucleotide Sequences Database
2017-01-01
Image encryption technology is one of the main means to ensure the safety of image information. Using the characteristics of chaos, such as randomness, regularity, ergodicity, and sensitivity to initial values, combined with the unique spatial conformation of DNA molecules and their unique information storage and processing ability, an efficient method for image encryption based on chaos theory and a DNA sequence database is proposed. In this paper, digital image encryption transforms the image pixel gray values by scrambling the pixel locations with a chaotic sequence and establishing a hyperchaotic mapping between quaternary sequences and DNA sequences, combined with the logic of transformations between DNA sequences. The bases are replaced under displacement rules using DNA coding over a number of iterations driven by the enhanced quaternary hyperchaotic sequence, which is generated by the Chen chaotic system. The cipher feedback mode and chaos iteration are employed in the encryption process to enhance the confusion and diffusion properties of the algorithm. Theoretical analysis and experimental results show that the proposed scheme not only demonstrates excellent encryption performance but also effectively resists chosen-plaintext, statistical, and differential attacks. PMID:28392799
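As a hedged illustration of the scramble-and-diffuse structure only (not the paper's DNA coding or Chen hyperchaotic system), the following sketch uses a single logistic map both to permute pixel positions and to drive a cipher-feedback XOR keystream; the key values x0 and r are placeholders.

```python
import numpy as np

def logistic_sequence(length, x0=0.3567, r=3.99):
    """Generate a chaotic sequence with the logistic map x <- r*x*(1-x)."""
    seq = np.empty(length)
    x = x0
    for i in range(length):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def encrypt(image, x0=0.3567, r=3.99):
    flat = image.astype(np.uint8).ravel()
    chaos = logistic_sequence(flat.size, x0, r)
    perm = np.argsort(chaos)                      # scramble pixel positions
    keystream = (chaos * 255).astype(np.uint8)    # chaotic keystream bytes
    cipher = np.empty_like(flat)
    prev = np.uint8(0)                            # cipher feedback (diffusion)
    for i, p in enumerate(perm):
        cipher[i] = flat[p] ^ keystream[i] ^ prev
        prev = cipher[i]
    return cipher.reshape(image.shape), perm
```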
[Web-ring of sites for pathologists in the internet: a computer-mediated communication environment].
Khramtsov, A I; Isianov, N N; Khorzhevskiĭ, V A
2009-01-01
The recently developed Web-ring of pathology-related Web-sites has transformed computer-mediated communication for Russian-speaking pathologists. Though the pathologists may be geographically dispersed, the network provides a complex of asynchronous and synchronous conferences for the purposes of diagnosis, consultation, education, communication, and collaboration in the field of pathology. This paper describes approaches to be used by participants of the pathology-related Web-ring. The approaches are analogous to the tools employed in telepathology and digital microscopy. One of the novel methodologies is the use of Web-based conferencing systems, in which whole-slide digital images of tissue microarrays were jointly reviewed online by pathologists at distant locations. By using ImageScope (Aperio Technologies) and WebEx Connect desktop management technology, they shared presentations and images and communicated in real time. In this manner, Web-based forums and conferences will be a powerful addition to telepathology.
Method for inserting noise in digital mammography to simulate reduction in radiation dose
NASA Astrophysics Data System (ADS)
Borges, Lucas R.; de Oliveira, Helder C. R.; Nunes, Polyana F.; Vieira, Marcelo A. C.
2015-03-01
The quality of clinical x-ray images is closely related to the radiation dose used in the imaging study. The general principle for selecting the radiation dose is ALARA ("as low as reasonably achievable"). The practical optimization, however, remains challenging. It is well known that reducing the radiation dose increases the quantum noise, which could compromise the image quality. In order to conduct studies about dose reduction in mammography, it would be necessary to acquire repeated clinical images from the same patient with different dose levels. However, such practice would be unethical due to radiation-related risks. One solution is to simulate the effects of dose reduction in clinical images. This work proposes a new method, based on the Anscombe transformation, which simulates dose reduction in digital mammography by inserting quantum noise into clinical mammograms acquired with the standard radiation dose. Thus, it is possible to simulate different levels of radiation dose without exposing the patient to additional radiation. Results showed that the quality of simulated images generated with our method is the same as with other methods found in the literature, with the novelty of using the Anscombe transformation for converting signal-independent Gaussian noise into signal-dependent quantum noise.
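A simplified sketch of Anscombe-domain noise injection is given below, assuming a linear detector with unit gain, negligible electronic noise, and purely Poisson (quantum) noise; under those assumptions the extra noise of a dose reduced by dose_factor can be injected as white Gaussian noise of variance (1 - dose_factor) / dose_factor in the Anscombe domain. These assumptions and the algebraic inverse are illustrative, not the paper's exact pipeline.

```python
import numpy as np

def anscombe(x):
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(a):
    return (a / 2.0) ** 2 - 3.0 / 8.0

def simulate_dose_reduction(image, dose_factor, rng=None):
    """Inject signal-dependent quantum noise so a standard-dose mammogram looks
    like one acquired at dose_factor (0 < dose_factor < 1) of the original dose."""
    rng = np.random.default_rng() if rng is None else rng
    extra_std = np.sqrt((1.0 - dose_factor) / dose_factor)
    a = anscombe(image.astype(float))
    a_noisy = a + rng.normal(0.0, extra_std, size=image.shape)
    return inverse_anscombe(a_noisy)

# toy usage: simulate a half-dose acquisition of a flat 1000-count field
low_dose = simulate_dose_reduction(np.full((64, 64), 1000.0), dose_factor=0.5)
```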
Design of efficient circularly symmetric two-dimensional variable digital FIR filters.
Bindima, Thayyil; Elias, Elizabeth
2016-05-01
Circularly symmetric two-dimensional (2D) finite impulse response (FIR) filters find extensive use in image and medical applications, especially for isotropic filtering. Moreover, the design and implementation of 2D digital filters with variable fractional delay and variable magnitude responses, without redesigning the filter, has become a crucial topic of interest due to its significance in low-cost applications. Recently, design using fixed word length coefficients has gained importance because multipliers can be replaced by shifters and adders, which reduces the hardware complexity. Among the various approaches to 2D design, transforming a one-dimensional (1D) filter to 2D is reported to be an efficient technique. In this paper, 1D variable digital filters (VDFs) with tunable cut-off frequencies are designed using a Farrow-structure-based interpolation approach, and the sub-filter coefficients in the Farrow structure are made multiplier-less using canonic signed digit (CSD) representation. The resulting performance degradation in the filters is overcome by using artificial bee colony (ABC) optimization. Finally, the optimized 1D VDFs are mapped to 2D using the generalized McClellan transformation, resulting in low-complexity, circularly symmetric 2D VDFs with real-time tunability. PMID:27222739
Image gathering and digital restoration for fidelity and visual quality
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur
1991-01-01
The fidelity and resolution of the traditional Wiener restorations given in the prevalent digital processing literature can be significantly improved when the transformations between the continuous and discrete representations in image gathering and display are accounted for. However, the visual quality of these improved restorations is also more sensitive to the defects caused by aliasing artifacts, colored noise, and ringing near sharp edges. In this paper, these visual defects are characterized, and methods for suppressing them are presented. It is demonstrated how the visual quality of fidelity-maximized images can be improved when (1) the image-gathering system is specifically designed to enhance the performance of the image-restoration algorithm, and (2) the Wiener filter is combined with interactive Gaussian smoothing, synthetic-high edge enhancement, and nonlinear tone-scale transformation. The nonlinear transformation is used primarily to enhance the spatial details that are often obscured when the normally wide dynamic range of natural radiance fields is compressed into the relatively narrow dynamic range of film and other displays.
NASA Astrophysics Data System (ADS)
A. AL-Salhi, Yahya E.; Lu, Songfeng
2016-08-01
Quantum steganography can solve some problems that are considered inefficient in image information concealing. Research on quantum image information concealing has expanded widely in recent years. Quantum image information concealing can be categorized into quantum image digital blocking, quantum image steganography, anonymity, and other branches. Least significant bit (LSB) information concealing plays a vital role in the classical world because many image information concealing algorithms are designed based on it. First, based on the novel enhanced quantum representation (NEQR) and the clustering of uniform image blocks, a least significant Qu-block (LSQB) information concealing algorithm for quantum image steganography is presented. Second, a clustering algorithm is proposed to optimize the concealment of important data. Third, the Con-Steg algorithm is used to conceal the clustered image blocks. Since information concealing in the Fourier domain of an image can improve the security of the image information, we further discuss a Fourier-domain LSQu-block information concealing algorithm for quantum images based on quantum Fourier transforms. In our algorithms, the corresponding unitary transformations are designed to conceal the secret information in the least significant Qu-block representing the color of the quantum cover image. Finally, the procedures for extracting the secret information are illustrated. The quantum image LSQu-block information concealing algorithm can be applied in many fields according to different needs.
Human Motion Capture Data Tailored Transform Coding.
Junhui Hou; Lap-Pui Chau; Magnenat-Thalmann, Nadia; Ying He
2015-07-01
Human motion capture (mocap) is a widely used technique for digitizing human movements. With growing usage, compressing mocap data has received increasing attention, since compact data size enables efficient storage and transmission. Our analysis shows that mocap data have some unique characteristics that distinguish them from images and videos. Therefore, directly borrowing image or video compression techniques, such as the discrete cosine transform, does not work well. In this paper, we propose a novel mocap-tailored transform coding algorithm that takes advantage of these features. Our algorithm segments the input mocap sequences into clips, which are represented as 2D matrices. Then it computes a set of data-dependent orthogonal bases to transform the matrices to the frequency domain, in which the transform coefficients have significantly less dependency. Finally, the compression is obtained by entropy coding of the quantized coefficients and the bases. Our method has low computational cost and can be easily extended to compress mocap databases. It also requires neither training nor complicated parameter setting. Experimental results demonstrate that the proposed scheme significantly outperforms state-of-the-art algorithms in terms of compression performance and speed.
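The data-dependent orthogonal bases can be illustrated with a truncated SVD on a single clip matrix, as in the hedged NumPy sketch below; the clip length, rank, and quantization step are arbitrary choices, and the entropy-coding stage is omitted.

```python
import numpy as np

def encode_clip(clip, rank=8, step=0.01):
    """clip: (frames x channels) matrix of joint coordinates for one segment.
    Returns quantized transform coefficients plus the data-dependent bases."""
    mean = clip.mean(axis=0)
    U, s, Vt = np.linalg.svd(clip - mean, full_matrices=False)
    coeffs = U[:, :rank] * s[:rank]                  # projection onto top-rank bases
    q_coeffs = np.round(coeffs / step).astype(np.int16)
    return q_coeffs, Vt[:rank], mean, step

def decode_clip(q_coeffs, basis, mean, step):
    return q_coeffs.astype(float) * step @ basis + mean

clip = np.cumsum(np.random.randn(120, 60), axis=0)  # toy smooth mocap-like signal
decoded = decode_clip(*encode_clip(clip))
print("max reconstruction error:", np.abs(decoded - clip).max())
```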
Image Transform Based on the Distribution of Representative Colors for Color Deficient
NASA Astrophysics Data System (ADS)
Ohata, Fukashi; Kudo, Hiroaki; Matsumoto, Tetsuya; Takeuchi, Yoshinori; Ohnishi, Noboru
This paper proposes a method to convert digital images that contain sets of colors that are difficult to distinguish into images with high visibility. We set up four criteria: automatic processing by a computer, retaining continuity in color space, not lowering the visibility for people with normal color vision, and not lowering the visibility of images that did not originally contain hard-to-distinguish color sets. We conducted a psychological experiment and obtained the result that the visibility of the converted images was improved for 60% of the 40 test images, and we confirmed that the main criterion, continuity in color space, was kept.
Learning to Read in the Digital Age
ERIC Educational Resources Information Center
Rose, David; Dalton, Bridget
2009-01-01
The digital age offers transformative opportunities for individualization of learning. First, modern imaging technologies have changed our understanding of learning and the sources and ranges of its diversity. Second, digital technologies make it possible to design learning environments that are responsive to individual differences. We draw on…
Digital processing of the Mariner 10 images of Venus and Mercury
NASA Technical Reports Server (NTRS)
Soha, J. M.; Lynn, D. J.; Mosher, J. A.; Elliot, D. A.
1977-01-01
An extensive effort was devoted to the digital processing of the Mariner 10 images of Venus and Mercury at the Image Processing Laboratory of the Jet Propulsion Laboratory. This effort was designed to optimize the display of the considerable quantity of information contained in the images. Several image restoration, enhancement, and transformation procedures were applied; examples of these techniques are included. A particular task was the construction of large mosaics which characterize the surface of Mercury and the atmospheric structure of Venus.
Content Preserving Watermarking for Medical Images Using Shearlet Transform and SVD
NASA Astrophysics Data System (ADS)
Favorskaya, M. N.; Savchina, E. I.
2017-05-01
Medical Image Watermarking (MIW) is a special field of watermarking due to the requirements of the Digital Imaging and COmmunications in Medicine (DICOM) standard since 1993. All 20 parts of the DICOM standard are revised periodically. The main idea of MIW is to embed various types of information, including the doctor's digital signature, a fragile watermark, the electronic patient record, and a main watermark in the form of a region of interest for the doctor, into the host medical image. These four types of information are represented in different forms; some of them are encrypted according to the DICOM requirements. However, all types of information have to be combined into a generalized binary stream for embedding, and this stream may have a huge volume. Therefore, not all watermarking methods can be applied successfully. Recently, the digital shearlet transform has been introduced as a rigorous mathematical framework for the geometric representation of multi-dimensional data. Some modifications of the shearlet transform, particularly the non-subsampled shearlet transform, can be associated with a multi-resolution analysis that provides a fully shift-invariant, multi-scale, and multi-directional expansion. During experiments, the quality of the extracted watermarks under JPEG compression and typical internet attacks was estimated using several metrics, including the peak signal-to-noise ratio, structural similarity index measure, and bit error rate.
Cartography of irregularly shaped satellites
NASA Technical Reports Server (NTRS)
Batson, R. M.; Edwards, Kathleen
1987-01-01
Irregularly shaped satellites, such as Phobos and Amalthea, do not lend themselves to mapping by conventional methods because mathematical projections of their surfaces fail to convey an accurate visual impression of the landforms, and because large and irregular scale changes make their features difficult to measure on maps. A digital mapping technique has therefore been developed by which maps are compiled from digital topographic and spacecraft image files. The digital file is geometrically transformed as desired for human viewing, either on video screens or on hard copy. Digital files of this kind consist of digital images superimposed on another digital file representing the three-dimensional form of a body.
Fully automated corneal endothelial morphometry of images captured by clinical specular microscopy
NASA Astrophysics Data System (ADS)
Bucht, Curry; Söderberg, Per; Manneberg, Göran
2010-02-01
The corneal endothelium serves as the posterior barrier of the cornea. Factors such as clarity and refractive properties of the cornea are directly related to the quality of the endothelium. The endothelial cell density is considered the most important morphological factor of the corneal endothelium. Pathological conditions and physical trauma may threaten the endothelial cell density to such an extent that the optical properties of the cornea, and thus clear eyesight, are threatened. Diagnosis of the corneal endothelium through morphometry is an important part of several clinical applications. Morphometry of the corneal endothelium is presently carried out by semi-automated analysis of pictures captured by a Clinical Specular Microscope (CSM). Because of the occasional need for operator involvement, this process can be tedious, having a negative impact on sampling size. This study was dedicated to the development and use of fully automated analysis, based on Fourier analysis, of a very large range of images of the corneal endothelium captured by CSM. Software was developed in the mathematical programming language Matlab. Pictures of the corneal endothelium captured by CSM were read into the analysis software, which automatically performed digital enhancement of the images, normalizing light and contrast. The digitally enhanced images of the corneal endothelium were Fourier transformed using the fast Fourier transform (FFT) and stored as new images. Tools were developed and applied for identification and analysis of relevant characteristics of the Fourier transformed images. The data obtained from each Fourier transformed image were used to calculate the mean cell density of its corresponding corneal endothelium, based on well-known diffraction theory. Results in the form of estimated cell densities of the corneal endothelium were obtained using the fully automated analysis software on 292 images captured by CSM. The cell density obtained by the fully automated analysis was compared to the cell density obtained from classical, semi-automated analysis, and a relatively large correlation was found.
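A compact version of the Fourier-domain density estimate is sketched below: the dominant ring in the radially averaged power spectrum gives the mean cell spacing, which is converted to cells/mm^2 assuming roughly hexagonal packing, a square image, and a known pixel pitch. The constants and packing assumption are illustrative, not the study's calibration.

```python
import numpy as np

def cell_density_from_fft(image, pixel_size_um):
    """Estimate endothelial cell density (cells/mm^2) from the dominant ring in
    the power spectrum of a specular-microscopy image (assumed square)."""
    img = image.astype(float)
    img -= img.mean()
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2

    n = img.shape[0]
    cy, cx = power.shape[0] // 2, power.shape[1] // 2
    y, x = np.indices(power.shape)
    radius = np.hypot(y - cy, x - cx).astype(int)

    # radial average of the power spectrum; ignore the DC region
    radial = np.bincount(radius.ravel(), power.ravel()) / np.bincount(radius.ravel())
    peak = np.argmax(radial[3: n // 2]) + 3          # dominant ring, cycles per image

    spacing_um = n * pixel_size_um / peak            # mean center-to-center spacing
    cell_area_mm2 = (np.sqrt(3) / 2) * (spacing_um / 1000.0) ** 2   # hexagonal packing
    return 1.0 / cell_area_mm2
```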
Simultaneous measurement of translation and tilt using digital speckle photography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhaduri, Basanta; Quan, Chenggen; Tay, Cho Jui
2010-06-20
A Michelson-type digital speckle photographic system is proposed in which one light beam produces a Fourier transform and another beam produces an image at a recording plane, without the two interfering with each other. Because the optical Fourier transform is insensitive to translation and the imaging technique is insensitive to tilt, the proposed system is able to determine both surface tilt and translation simultaneously and independently from two separate recordings, one before and one after the surface motion, without the need to solve simultaneous equations. Experimental results are presented to verify the theoretical analysis.
Optimal focal-plane restoration
NASA Technical Reports Server (NTRS)
Reichenbach, Stephen E.; Park, Stephen K.
1989-01-01
Image restoration can be implemented efficiently by calculating the convolution of the digital image and a small kernel during image acquisition. Processing the image in the focal-plane in this way requires less computation than traditional Fourier-transform-based techniques such as the Wiener filter and constrained least-squares filter. Here, the values of the convolution kernel that yield the restoration with minimum expected mean-square error are determined using a frequency analysis of the end-to-end imaging system. This development accounts for constraints on the size and shape of the spatial kernel and all the components of the imaging system. Simulation results indicate the technique is effective and efficient.
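One way to picture the focal-plane approach is to truncate a Fourier-domain Wiener filter to a small spatial kernel, as in the sketch below; the Gaussian blur model, noise-to-signal ratio, and 5x5 kernel size are assumed for illustration and are not the paper's derivation of the minimum mean-square-error kernel.

```python
import numpy as np
from scipy.ndimage import convolve

def small_wiener_kernel(shape=(256, 256), blur_sigma=1.2, nsr=0.01, ksize=5):
    """Truncate a Wiener restoration filter to a small ksize x ksize kernel."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    H = np.exp(-2 * (np.pi ** 2) * (blur_sigma ** 2) * (fx ** 2 + fy ** 2))  # Gaussian MTF
    W = H / (H ** 2 + nsr)                           # Wiener frequency response
    psf = np.fft.fftshift(np.real(np.fft.ifft2(W)))  # spatial restoration filter
    c0, c1 = shape[0] // 2, shape[1] // 2
    half = ksize // 2
    kernel = psf[c0 - half:c0 + half + 1, c1 - half:c1 + half + 1]
    return kernel / kernel.sum()

# restoration during acquisition reduces to one small convolution per pixel
restored = convolve(np.random.rand(256, 256), small_wiener_kernel(), mode="reflect")
```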
NASA Astrophysics Data System (ADS)
Hama, Hiromitsu; Yamashita, Kazumi
1991-11-01
A new method for video signal processing is described in this paper. The goal is real-time image transformation with low-cost, low-power, small-size hardware, which is impossible without special hardware. Here a generalized digital differential analyzer (DDA) and control memory (CM) play a very important role. The processing causes indentation, called jaggies, on the boundary between background and foreground. Jaggies do not occur inside the transformed image because linear interpolation is adopted, but they inherently occur on the boundary between the background and the transformed image; this deteriorates image quality and must be avoided. There are two well-known ways to improve image quality: blurring and supersampling. The former has little effect, and the latter has a much higher computing cost. To settle this problem, a method is proposed that searches for positions where jaggies may arise and smooths those points. Computer simulations based on real data from a VTR (one scene of a movie) are presented to demonstrate the proposed scheme using the DDA and CMs and to confirm its effectiveness on various transformations.
ERIC Educational Resources Information Center
Jacobsen, Michele; Clifford, Pat; Friesen, Sharon
The Galileo Educational Network is an innovative educational reform initiative that brings learning to learners. Expert teachers work alongside teachers and students in schools to create new images of engaged learning, technology integration and professional development. This case study is based on the nine schools involved with Galileo in…
Viking image processing. [digital stereo imagery and computer mosaicking
NASA Technical Reports Server (NTRS)
Green, W. B.
1977-01-01
The paper discusses the camera systems capable of recording black and white and color imagery developed for the Viking Lander imaging experiment. Each Viking Lander image consisted of a matrix of numbers with 512 rows and an arbitrary number of columns up to a maximum of about 9,000. Various techniques were used in the processing of the Viking Lander images, including: (1) digital geometric transformation, (2) the processing of stereo imagery to produce three-dimensional terrain maps, and (3) computer mosaicking of distinct processed images. A series of Viking Lander images is included.
Implementation of a direct-imaging and FX correlator for the BEST-2 array
NASA Astrophysics Data System (ADS)
Foster, G.; Hickish, J.; Magro, A.; Price, D.; Zarb Adami, K.
2014-04-01
A new digital backend has been developed for the Basic Element for SKA Training II (BEST-2) array at Radiotelescopi di Medicina, INAF-IRA, Italy, which allows concurrent operation of an FX correlator, and a direct-imaging correlator and beamformer. This backend serves as a platform for testing some of the spatial Fourier transform concepts which have been proposed for use in computing correlations on regularly gridded arrays. While spatial Fourier transform-based beamformers have been implemented previously, this is, to our knowledge, the first time a direct-imaging correlator has been deployed on a radio astronomy array. Concurrent observations with the FX and direct-imaging correlator allow for direct comparison between the two architectures. Additionally, we show the potential of the direct-imaging correlator for time-domain astronomy, by passing a subset of beams though a pulsar and transient detection pipeline. These results provide a timely verification for spatial Fourier transform-based instruments that are currently in commissioning. These instruments aim to detect highly redshifted hydrogen from the epoch of reionization and/or to perform wide-field surveys for time-domain studies of the radio sky. We experimentally show the direct-imaging correlator architecture to be a viable solution for correlation and beamforming.
Effect of data compression on diagnostic accuracy in digital hand and chest radiography
NASA Astrophysics Data System (ADS)
Sayre, James W.; Aberle, Denise R.; Boechat, Maria I.; Hall, Theodore R.; Huang, H. K.; Ho, Bruce K. T.; Kashfian, Payam; Rahbar, Guita
1992-05-01
Image compression is essential to handle the large volume of digital images, including CT, MR, CR, and digitized films, in a digital radiology operation. The full-frame bit allocation using the cosine transform technique developed during the last few years has been proven to be an excellent irreversible image compression method. This paper describes the effect of using the hardware compression module on diagnostic accuracy in hand radiographs with subperiosteal resorption and chest radiographs with interstitial disease. Receiver operating characteristic analysis using 71 hand radiographs and 52 chest radiographs with five observers each demonstrates that there is no statistically significant difference in diagnostic accuracy between the original films and the compressed images with a compression ratio as high as 20:1.
Several considerations with respect to the future of digital photography and photographic printing
NASA Astrophysics Data System (ADS)
Tuijn, Chris; Mahy, Marc F.
2000-12-01
Digital cameras are no longer exotic gadgets used by a privileged group of early adopters. More and more people realize that there are obvious advantages to the digital solution over the conventional film-based workflow. Claiming that prints on paper are no longer necessary in the digital workflow, however, would be similar to reviving the myth of the paperless office. Often, people still like to share their memories on paper, and this for a variety of reasons. There are still some hurdles to be taken in order to make the digital dream come true. In this paper, we will give a survey of the different workflows in digital photography. The local, semi-local, and Internet solutions will be discussed, as well as the preferred output systems for each of these solutions. When discussing output systems, we immediately think of appropriate color management solutions. In the second part of this paper, we will discuss the major color management issues appearing in digital photography. A clear separation between the image acquisition and the image rendering phases will be made. After a quick survey of the different image restoration and enhancement techniques, we will make some reflections on the ideal color exchange space; the enhanced image should be delivered in this exchange space and, from there, the standard color management transformations can be applied to transfer the image from this exchange space to the native color space of the output device. We will also discuss some color gamut characteristics and color management problems of different types of photographic printers that can occur during this conversion process.
Direct-to-digital holography reduction of reference hologram noise and fourier space smearing
Voelkl, Edgar
2006-06-27
Systems and methods are described for reduction of reference hologram noise and reduction of Fourier space smearing, especially in the context of direct-to-digital holography (off-axis interferometry). A method of reducing reference hologram noise includes: recording a plurality of reference holograms; processing the plurality of reference holograms into a corresponding plurality of reference image waves; and transforming the corresponding plurality of reference image waves into a reduced noise reference image wave. A method of reducing smearing in Fourier space includes: recording a plurality of reference holograms; processing the plurality of reference holograms into a corresponding plurality of reference complex image waves; transforming the corresponding plurality of reference image waves into a reduced noise reference complex image wave; recording a hologram of an object; processing the hologram of the object into an object complex image wave; and dividing the complex image wave of the object by the reduced noise reference complex image wave to obtain a reduced smearing object complex image wave.
Computer-based desktop system for surgical videotape editing.
Vincent-Hamelin, E; Sarmiento, J M; de la Puente, J M; Vicente, M
1997-05-01
The educational role of surgical video presentations should be optimized by linking surgical images to graphic evaluation of indications, techniques, and results. We describe a PC-based video production system for personal editing of surgical tapes, according to the objectives of each presentation. The hardware requirement is a personal computer (100 MHz processor, 1-Gb hard disk, 16 Mb RAM) with a PC-to-TV/video transfer card plugged into a slot. Computer-generated numerical data, texts, and graphics are transformed into analog signals displayed on TV/video. A Genlock interface (a special interface card) synchronizes digital and analog signals, to overlay surgical images to electronic illustrations. The presentation is stored as digital information or recorded on a tape. The proliferation of multimedia tools is leading us to adapt presentations to the objectives of lectures and to integrate conceptual analyses with dynamic image-based information. We describe a system that handles both digital and analog signals, production being recorded on a tape. Movies may be managed in a digital environment, with either an "on-line" or "off-line" approach. System requirements are high, but handling a single device optimizes editing without incurring such complexity that management becomes impractical to surgeons. Our experience suggests that computerized editing allows linking surgical scientific and didactic messages on a single communication medium, either a videotape or a CD-ROM.
A method to perform a fast fourier transform with primitive image transformations.
Sheridan, Phil
2007-05-01
The Fourier transform is one of the most important transformations in image processing. A major component of this influence comes from the ability to implement it efficiently on a digital computer. This paper describes a new methodology to perform a fast Fourier transform (FFT). This methodology emerges from considerations of the natural physical constraints imposed by image capture devices (camera/eye). The novel aspects of the specific FFT method described include: 1) a bit-wise reversal re-grouping operation of the conventional FFT is replaced by the use of lossless image rotation and scaling and 2) the usual arithmetic operations of complex multiplication are replaced with integer addition. The significance of the FFT presented in this paper is introduced by extending a discrete and finite image algebra, named Spiral Honeycomb Image Algebra (SHIA), to a continuous version, named SHIAC.
Astronomical image data compression by morphological skeleton transformation
NASA Astrophysics Data System (ADS)
Huang, L.; Bijaoui, A.
A compression method adapted for exact restoration of the detected objects and based on the morphological skeleton transformation is presented. The morphological skeleton provides a complete and compact description of an object and gives an efficient compression rate. The flexibility of choosing a structuring element adapted to different images and the simplicity of the implementation are considered to be advantages of the method. The experiment was carried out on three typical astronomical images. The first two images were obtained by digitizing a Palomar Schmidt photographic plate of a Coma field with the PDS microdensitometer at Nice Observatory. The third image was obtained with a CCD camera at the Pic du Midi Observatory. Each pixel was coded on 16 bits and stored on a computer system (VAX 785) in STII format. Each image contains 256 x 256 pixels. The first image represents a stellar field, the second a set of galaxies in Coma, and the third an elliptical galaxy.
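For reference, a binary morphological skeleton can be computed with Lantuéjoul's formula as sketched below (SciPy); storing each subset S_k together with its index k allows exact reconstruction of the thresholded objects. The 3x3 structuring element is just one possible choice, and the astronomical pre-thresholding step is omitted.

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_opening, binary_dilation

def morphological_skeleton(mask, structure=None):
    """Return the skeleton subsets S_k(X) = erode(X, kB) minus open(erode(X, kB), B)."""
    if structure is None:
        structure = np.ones((3, 3), dtype=bool)
    subsets = []
    eroded = mask.astype(bool)
    while eroded.any():
        opened = binary_opening(eroded, structure)
        subsets.append(eroded & ~opened)          # points removed by the opening
        eroded = binary_erosion(eroded, structure)
    return subsets

def reconstruct(subsets, structure=None):
    """Exact reconstruction: dilate each S_k back k times and take the union."""
    if structure is None:
        structure = np.ones((3, 3), dtype=bool)
    out = np.zeros_like(subsets[0])
    for k, s in enumerate(subsets):
        out |= s if k == 0 else binary_dilation(s, structure, iterations=k)
    return out
```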
Image segmentation-based robust feature extraction for color image watermarking
NASA Astrophysics Data System (ADS)
Li, Mianjie; Deng, Zeyu; Yuan, Xiaochen
2018-04-01
This paper proposes a local digital image watermarking method based on Robust Feature Extraction. The segmentation is achieved by Simple Linear Iterative Clustering (SLIC) based on which an Image Segmentation-based Robust Feature Extraction (ISRFE) method is proposed for feature extraction. Our method can adaptively extract feature regions from the blocks segmented by SLIC. This novel method can extract the most robust feature region in every segmented image. Each feature region is decomposed into low-frequency domain and high-frequency domain by Discrete Cosine Transform (DCT). Watermark images are then embedded into the coefficients in the low-frequency domain. The Distortion-Compensated Dither Modulation (DC-DM) algorithm is chosen as the quantization method for embedding. The experimental results indicate that the method has good performance under various attacks. Furthermore, the proposed method can obtain a trade-off between high robustness and good image quality.
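The quantization step can be illustrated with plain dither modulation (the core of DC-DM without the distortion-compensation term) applied to a few low-frequency DCT coefficients of a feature block, as in the hedged sketch below; the step size and coefficient positions are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dm_embed(block, bits, step=12.0):
    """Embed one bit per selected low-frequency DCT coefficient by quantizing it
    onto one of two interleaved lattices (dither = 0 or step/2)."""
    C = dctn(block.astype(float), norm="ortho")
    coords = [(0, 1), (1, 0), (1, 1), (0, 2), (2, 0)][: len(bits)]
    for (u, v), b in zip(coords, bits):
        d = 0.0 if b == 0 else step / 2.0
        C[u, v] = np.round((C[u, v] - d) / step) * step + d
    return idctn(C, norm="ortho")

def dm_extract(block, n_bits, step=12.0):
    C = dctn(block.astype(float), norm="ortho")
    coords = [(0, 1), (1, 0), (1, 1), (0, 2), (2, 0)][:n_bits]
    bits = []
    for u, v in coords:
        # decode by choosing the dither whose lattice lies closer to the coefficient
        e0 = abs(C[u, v] - np.round(C[u, v] / step) * step)
        e1 = abs(C[u, v] - (np.round((C[u, v] - step / 2) / step) * step + step / 2))
        bits.append(0 if e0 <= e1 else 1)
    return bits
```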
SU-E-J-237: Image Feature Based DRR and Portal Image Registration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, X; Chang, J
Purpose: Two-dimensional (2D) matching of the kV X-ray and digitally reconstructed radiography (DRR) images is an important setup technique for image-guided radiotherapy (IGRT). In our clinics, mutual-information-based methods are used for this purpose on commercial linear accelerators, but they often need manual corrections. This work proves the feasibility of using feature-based image transforms to register kV and DRR images. Methods: The scale invariant feature transform (SIFT) method was implemented to detect matching image details (or key points) between the kV and DRR images. These key points represent high image intensity gradients and thus scale-invariant features. Because of the poor image contrast of our kV images, direct application of the SIFT method yielded many detection errors. To assist the finding of key points, the center coordinates of the kV and DRR images were read from the DICOM header, and the two groups of key points with similar relative positions to their corresponding centers were paired up. Using these points, a rigid transform (with scaling, horizontal and vertical shifts) was estimated. We also artificially introduced vertical and horizontal shifts to test the accuracy of our registration method on anterior-posterior (AP) and lateral pelvic images. Results: The results provided a satisfactory overlay of the transformed kV onto the DRR image. The introduced vs. detected shifts were fit with a linear regression. In the AP image experiments, linear regression analysis showed a slope of 1.15 and 0.98 with an R² of 0.89 and 0.99 for the horizontal and vertical shifts, respectively. The corresponding results are 1.2 and 1.3 with R² of 0.72 and 0.82 for the lateral image shifts. Conclusion: This work provides an alternative technique for kV-to-DRR alignment. Further improvements in estimation accuracy and image contrast tolerance are underway.
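A generic OpenCV version of the SIFT matching and rigid-fit idea is sketched below; it uses Lowe's ratio test and RANSAC rather than the center-coordinate pairing described in the abstract, and it assumes 8-bit grayscale inputs and OpenCV >= 4.4 (where SIFT is available in the main package).

```python
import cv2
import numpy as np

def register_kv_to_drr(kv_img, drr_img):
    """Estimate a similarity transform (scale + rotation + shift) aligning a
    kV image to a DRR image from matched SIFT key points."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(kv_img, None)
    kp2, des2 = sift.detectAndCompute(drr_img, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC rejects the remaining mismatched key points
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return M   # 2x3 matrix: [[s*cos, -s*sin, tx], [s*sin, s*cos, ty]]
```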
Fourier transform digital holographic adaptive optics imaging system
Liu, Changgeng; Yu, Xiao; Kim, Myung K.
2013-01-01
A Fourier transform digital holographic adaptive optics imaging system and its basic principles are proposed. The CCD is put at the exact Fourier transform plane of the pupil of the eye lens. The spherical curvature introduced by the optics except the eye lens itself is eliminated. The CCD is also at image plane of the target. The point-spread function of the system is directly recorded, making it easier to determine the correct guide-star hologram. Also, the light signal will be stronger at the CCD, especially for phase-aberration sensing. Numerical propagation is avoided. The sensor aperture has nothing to do with the resolution and the possibility of using low coherence or incoherent illumination is opened. The system becomes more efficient and flexible. Although it is intended for ophthalmic use, it also shows potential application in microscopy. The robustness and feasibility of this compact system are demonstrated by simulations and experiments using scattering objects. PMID:23262541
Goal-oriented rectification of camera-based document images.
Stamatopoulos, Nikolaos; Gatos, Basilis; Pratikakis, Ioannis; Perantonis, Stavros J
2011-04-01
Document digitization with either flatbed scanners or camera-based systems results in document images which often suffer from warping and perspective distortions that deteriorate the performance of current OCR approaches. In this paper, we present a goal-oriented rectification methodology to compensate for undesirable document image distortions, aiming to improve the OCR result. Our approach relies upon a coarse-to-fine strategy. First, a coarse rectification is accomplished with the aid of a computationally low-cost transformation which addresses the projection of a curved surface to a 2-D rectangular area. The projection of the curved surface onto the plane is guided only by the appearance of the textual content in the document image, while incorporating a transformation which does not depend on specific model primitives or camera setup parameters. Second, pose normalization is applied at the word level, aiming to restore all the local distortions of the document image. Experimental results on various document images with a variety of distortions demonstrate the robustness and effectiveness of the proposed rectification methodology, using a consistent evaluation methodology that takes into account OCR accuracy along with a newly introduced measure based on a semi-automatic procedure.
Transmission of digital images within the NTSC analog format
Nickel, George H.
2004-06-15
HDTV and NTSC compatible image communication is done in a single NTSC channel bandwidth. Luminance and chrominance image data of a scene to be transmitted is obtained. The image data is quantized and digitally encoded to form digital image data in HDTV transmission format having low-resolution terms and high-resolution terms. The low-resolution digital image data terms are transformed to a voltage signal corresponding to NTSC color subcarrier modulation with retrace blanking and color bursts to form a NTSC video signal. The NTSC video signal and the high-resolution digital image data terms are then transmitted in a composite NTSC video transmission. In a NTSC receiver, the NTSC video signal is processed directly to display the scene. In a HDTV receiver, the NTSC video signal is processed to invert the color subcarrier modulation to recover the low-resolution terms, where the recovered low-resolution terms are combined with the high-resolution terms to reconstruct the scene in a high definition format.
NASA Astrophysics Data System (ADS)
Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.
2016-01-01
Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.
Proposed U.S. Geological Survey standard for digital orthophotos
Hooper, David; Caruso, Vincent
1991-01-01
The U.S. Geological Survey has added the new category of digital orthophotos to the National Digital Cartographic Data Base. This differentially rectified digital image product enables users to take advantage of the properties of current photoimagery as a source of geographic information. The product and accompanying standard were implemented in spring 1991. The digital orthophotos will be quadrangle based and cast on the Universal Transverse Mercator projection and will extend beyond the 3.75-minute or 7.5-minute quadrangle area at least 300 meters to form a rectangle. The overedge may be used for mosaicking with adjacent digital orthophotos. To provide maximum information content and utility to the user, metadata (header) records exist at the beginning of the digital orthophoto file. Header information includes the photographic source type, date, instrumentation used to create the digital orthophoto, and information relating to the DEM that was used in the rectification process. Additional header information is included on transformation constants from the 1927 and 1983 North American Datums to the orthophoto internal file coordinates to enable the user to register overlays on either datum. The quadrangle corners in both datums are also imprinted on the image. Flexibility has been built into the digital orthophoto format for future enhancements, such as the provision to include the corresponding digital elevation model elevations used to rectify the orthophoto. The digital orthophoto conforms to National Map Accuracy Standards and provides valuable mapping data that can be used as a tool for timely revision of standard map products, for land use and land cover studies, and as a digital layer in a geographic information system.
van der Laak, Jeroen A W M; Dijkman, Henry B P M; Pahlplatz, Martin M M
2006-03-01
The magnification factor in transmission electron microscopy is not very precise, hampering for instance quantitative analysis of specimens. Calibration of the magnification is usually performed interactively using replica specimens, containing line or grating patterns with known spacing. In the present study, a procedure is described for automated magnification calibration using digital images of a line replica. This procedure is based on analysis of the power spectrum of Fourier transformed replica images, and is compared to interactive measurement in the same images. Images were used with magnification ranging from 1,000 x to 200,000 x. The automated procedure deviated on average 0.10% from interactive measurements. Especially for catalase replicas, the coefficient of variation of automated measurement was considerably smaller (average 0.28%) compared to that of interactive measurement (average 3.5%). In conclusion, calibration of the magnification in digital images from transmission electron microscopy may be performed automatically, using the procedure presented here, with high precision and accuracy.
NASA Astrophysics Data System (ADS)
Grycewicz, Thomas J.; Florio, Christopher J.; Franz, Geoffrey A.; Robinson, Ross E.
2007-09-01
When using Fourier plane digital algorithms or an optical correlator to measure the correlation between digital images, interpolation by center-of-mass or quadratic estimation techniques can be used to estimate image displacement to the sub-pixel level. However, this can lead to a bias in the correlation measurement. This bias shifts the sub-pixel output measurement to be closer to the nearest pixel center than the actual location. The paper investigates the bias in the outputs of both digital and optical correlators, and proposes methods to minimize this effect. We use digital studies and optical implementations of the joint transform correlator to demonstrate optical registration with accuracies better than 0.1 pixels. We use both simulations of image shift and movies of a moving target as inputs. We demonstrate bias error for both center-of-mass and quadratic interpolation, and discuss the reasons that this bias is present. Finally, we suggest measures to reduce or eliminate the bias effects. We show that when sub-pixel bias is present, it can be eliminated by modifying the interpolation method. By removing the bias error, we improve registration accuracy by thirty percent.
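The uncorrected estimator whose bias is being analyzed can be reproduced with FFT cross-correlation followed by separable parabolic (quadratic) interpolation of the peak, as in the sketch below; this is the standard sub-pixel method, not the authors' bias-reduction measures.

```python
import numpy as np

def subpixel_shift(ref, img):
    """Estimate the (dy, dx) displacement of img relative to ref to sub-pixel accuracy."""
    corr = np.real(np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)))
    p0, p1 = np.unravel_index(np.argmax(corr), corr.shape)
    n0, n1 = corr.shape

    def parabolic(c_m, c_0, c_p):
        # vertex of the parabola through three neighbouring correlation samples
        denom = c_m - 2.0 * c_0 + c_p
        return 0.0 if denom == 0 else 0.5 * (c_m - c_p) / denom

    dy = p0 + parabolic(corr[p0 - 1, p1], corr[p0, p1], corr[(p0 + 1) % n0, p1])
    dx = p1 + parabolic(corr[p0, p1 - 1], corr[p0, p1], corr[p0, (p1 + 1) % n1])
    if dy > n0 / 2:   # wrap circular shifts into the signed range
        dy -= n0
    if dx > n1 / 2:
        dx -= n1
    return dy, dx

ref = np.random.rand(128, 128)
print(subpixel_shift(ref, np.roll(ref, (3, -5), axis=(0, 1))))  # approx (3.0, -5.0)
```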
Recording multiple spatially-heterodyned direct to digital holograms in one digital image
Hanson, Gregory R [Clinton, TN; Bingham, Philip R [Knoxville, TN
2008-03-25
Systems and methods are described for recording multiple spatially-heterodyned direct to digital holograms in one digital image. A method includes digitally recording, at a first reference beam-object beam angle, a first spatially-heterodyned hologram including spatial heterodyne fringes for Fourier analysis; Fourier analyzing the recorded first spatially-heterodyned hologram by shifting a first original origin of the recorded first spatially-heterodyned hologram to sit on top of a first spatial-heterodyne carrier frequency defined by the first reference beam-object beam angle; digitally recording, at a second reference beam-object beam angle, a second spatially-heterodyned hologram including spatial heterodyne fringes for Fourier analysis; Fourier analyzing the recorded second spatially-heterodyned hologram by shifting a second original origin of the recorded second spatially-heterodyned hologram to sit on top of a second spatial-heterodyne carrier frequency defined by the second reference beam-object beam angle; applying a first digital filter to cut off signals around the first original origin and define a first result; performing a first inverse Fourier transform on the first result; applying a second digital filter to cut off signals around the second original origin and define a second result; and performing a second inverse Fourier transform on the second result, wherein the first reference beam-object beam angle is not equal to the second reference beam-object beam angle and a single digital image includes both the first spatially-heterodyned hologram and the second spatially-heterodyned hologram.
THz holography in reflection using a high resolution microbolometer array.
Zolliker, Peter; Hack, Erwin
2015-05-04
We demonstrate a digital holographic setup for Terahertz imaging of surfaces in reflection. The set-up is based on a high-power continuous wave (CW) THz laser and a high-resolution (640 × 480 pixel) bolometer detector array. Wave propagation to non-parallel planes is used to reconstruct the object surface that is rotated relative to the detector plane. In addition we implement synthetic aperture methods for resolution enhancement and compare Fourier transform phase retrieval to phase stepping methods. A lateral resolution of 200 μm and a relative phase sensitivity of about 0.4 rad corresponding to a depth resolution of 6 μm are estimated from reconstructed images of two specially prepared test targets, respectively. We highlight the use of digital THz holography for surface profilometry as well as its potential for video-rate imaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dowell, Larry Jonathan
Disclosed is a method and device for aligning at least two digital images. An embodiment may use frequency-domain transforms of small tiles created from each image to identify substantially similar, "distinguishing" features within each of the images, and then align the images together based on the location of the distinguishing features. To accomplish this, an embodiment may create equal-sized tile sub-images for each image. A "key" for each tile may be created by performing a frequency-domain transform calculation on each tile. An information-distance difference between each possible pair of tiles on each image may be calculated to identify distinguishing features. From analysis of the information-distance differences of the pairs of tiles, a subset of tiles with high discrimination metrics in relation to other tiles may be located for each image. The subset of distinguishing tiles for each image may then be compared to locate tiles with substantially similar keys and/or information-distance metrics to other tiles of other images. Once similar tiles are located for each image, the images may be aligned in relation to the identified similar tiles.
A text zero-watermarking method based on keyword dense interval
NASA Astrophysics Data System (ADS)
Yang, Fan; Zhu, Yuesheng; Jiang, Yifeng; Qing, Yin
2017-07-01
Digital watermarking has been recognized as a useful technology for the copyright protection and authentication of digital information. However, former methods rarely focused on the key content of the digital carrier. The idea of protecting the key content is more targeted and can be applied to different kinds of digital information, including text, image, and video. In this paper, we use text as the research object, and a text zero-watermarking method which uses keyword dense intervals (KDI) as the key content is proposed. First, we construct the zero-watermarking model by introducing the concept of KDI and giving the method of KDI extraction. Second, we design the detection model, which includes secondary generation of the zero-watermark and the similarity computing method for the keyword distribution. In addition, experiments are carried out, and the results show that the proposed method gives better performance than other available methods, especially against attacks of sentence transformation and synonym substitution.
Watermarking and copyright labeling of printed images
NASA Astrophysics Data System (ADS)
Hel-Or, Hagit Z.
2001-07-01
Digital watermarking is a labeling technique for digital images which embeds a code into the digital data so the data are marked. Watermarking techniques previously developed deal with on-line digital data. These techniques have been developed to withstand digital attacks such as image processing, image compression, and geometric transformations. However, one must also consider the readily available attack of printing and scanning; the available watermarking techniques are not reliable under printing and scanning. In fact, one must consider the availability of watermarks for printed images as well as for digital images. An important issue is to intercept and prevent forgery in printed material such as currency notes, bank checks, etc., and to track and validate sensitive and secret printed material. Watermarking in such printed material can be used not only for verification of ownership but as an indicator of date and type of transaction or date and source of the printed data. In this work we propose a method of embedding watermarks in printed images by inherently taking advantage of the printing process. The method is visually unobtrusive in the printed image, and the watermark is easily extracted and is robust under reconstruction errors. The decoding algorithm is automatic, given the watermarked image.
Methods for performing fast discrete curvelet transforms of data
Candes, Emmanuel; Donoho, David; Demanet, Laurent
2010-11-23
Fast digital implementations of the second generation curvelet transform for use in data processing are disclosed. One such digital transformation is based on unequally-spaced fast Fourier transforms (USFFT) while another is based on the wrapping of specially selected Fourier samples. Both digital transformations return a table of digital curvelet coefficients indexed by a scale parameter, an orientation parameter, and a spatial location parameter. Both implementations are fast in the sense that they run in about O(n^2 log n) flops for n by n Cartesian arrays or about O(N log N) flops for Cartesian arrays of size N = n^3; in addition, they are also invertible, with rapid inversion algorithms of about the same complexity.
Carnegie Mellon University bioimaging day 2014: Challenges and opportunities in digital pathology
Rohde, Gustavo K.; Ozolek, John A.; Parwani, Anil V.; Pantanowitz, Liron
2014-01-01
Recent advances in digital imaging are impacting the practice of pathology. One of the key enabling technologies leading the way towards this transformation is whole slide imaging (WSI), which allows glass slides to be converted into large image files that can be shared, stored, and analyzed rapidly. Many applications around this novel technology have evolved in the last decade, including education, research, and clinical applications. This publication highlights a collection of abstracts, each corresponding to a talk given at Carnegie Mellon University's (CMU) Bioimaging Day 2014, co-sponsored by the Biomedical Engineering and Lane Center for Computational Biology Departments at CMU. Topics related specifically to digital pathology are presented in this collection of abstracts, including digital workflow implementation, imaging and artifacts, storage demands, and automated image analysis algorithms. PMID:25250190
Video rate morphological processor based on a redundant number representation
NASA Astrophysics Data System (ADS)
Kuczborski, Wojciech; Attikiouzel, Yianni; Crebbin, Gregory A.
1992-03-01
This paper presents a video rate morphological processor for automated visual inspection of printed circuit boards, integrated circuit masks, and other complex objects. Inspection algorithms are based on gray-scale mathematical morphology. The hardware complexity of the known methods of real-time implementation of gray-scale morphology--the umbra transform and the threshold decomposition--has prompted us to propose a novel technique which applies an arithmetic system without carry propagation. After considering several arithmetic systems, a redundant number representation has been selected for implementation. Two options are analyzed here. The first is a pure signed digit number representation (SDNR) with base 4. The second option is a combination of the base-2 SDNR (to represent gray levels of images) and the conventional twos complement code (to represent gray levels of structuring elements). The operating principle of the morphological processor is based on the concept of the digit level systolic array. Individual processing units and small memory elements create a pipeline. The memory elements store current image windows (kernels). All operation primitives of the processing units apply a unified direction of digit processing: most significant digit first (MSDF). The implementation technology is based on field programmable gate arrays by Xilinx. This paper justifies the rationale for a new approach to logic design: the decomposition of Boolean functions instead of Boolean minimization.
View subspaces for indexing and retrieval of 3D models
NASA Astrophysics Data System (ADS)
Dutagaci, Helin; Godil, Afzal; Sankur, Bülent; Yemez, Yücel
2010-02-01
View-based indexing schemes for 3D object retrieval are gaining popularity since they provide good retrieval results. These schemes are coherent with the theory that humans recognize objects based on their 2D appearances. The view-based techniques also allow users to search with various queries such as binary images, range images and even 2D sketches. The previous view-based techniques use classical 2D shape descriptors such as Fourier invariants, Zernike moments, Scale Invariant Feature Transform-based local features and 2D Digital Fourier Transform coefficients. These methods describe each object independently of the others. In this work, we explore data-driven subspace models, such as Principal Component Analysis, Independent Component Analysis and Nonnegative Matrix Factorization, to describe the shape information of the views. We treat the depth images obtained from various points of the view sphere as 2D intensity images and train a subspace to extract the inherent structure of the views within a database. We also show the benefit of categorizing shapes according to their eigenvalue spread. Both the shape categorization and data-driven feature set conjectures are tested on the PSB database and compared with competitor view-based 3D shape retrieval algorithms.
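As a rough illustration of the data-driven subspace idea, vectorized depth views can be used to learn a PCA basis and a query view can then be described by its low-dimensional projection. The image size, number of views, and component count below are placeholders, not the authors' setup.

```python
import numpy as np
from sklearn.decomposition import PCA

# Illustrative sketch: learn a PCA subspace from depth images rendered around the
# view sphere (random placeholders here), then describe a query view by its projection.
rng = np.random.default_rng(0)
depth_views = rng.random((500, 64 * 64))   # 500 views, each a 64x64 depth image, flattened

pca = PCA(n_components=32)
pca.fit(depth_views)

query_view = rng.random((1, 64 * 64))
descriptor = pca.transform(query_view)      # 32-D view descriptor used for retrieval
print(descriptor.shape)                      # (1, 32)
```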
Image-adaptive and robust digital wavelet-domain watermarking for images
NASA Astrophysics Data System (ADS)
Zhao, Yi; Zhang, Liping
2018-03-01
We propose a new frequency-domain, wavelet-based watermarking technique. The key idea of our scheme is twofold: a multi-tier representation of the image and odd-even quantization for embedding/extracting the watermark. Because many complementary watermarks need to be hidden, the watermark image designed is image-adaptive. The meaningful and complementary watermark images were embedded into the original image (host image) by odd-even quantization of coefficients selected from the detail wavelet coefficients of the original image whose magnitudes are larger than their corresponding Just Noticeable Difference (JND) thresholds. The tests show good robustness against best-known attacks such as noise addition, image compression, median filtering, clipping as well as geometric transforms. Further research may improve the performance by refining the JND thresholds.
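A minimal sketch of odd-even quantization embedding on a vector of detail coefficients, assuming a fixed quantization step; the JND-based coefficient selection and the multi-tier representation described above are omitted, and the step size is an arbitrary example value.

```python
import numpy as np

def embed_odd_even(coeffs, bits, step=8.0):
    """Quantize each coefficient to an even multiple of `step` for bit 0 and an
    odd multiple for bit 1 (a common odd-even quantization rule)."""
    out = coeffs.copy()
    for i, b in enumerate(bits):
        q = np.round(coeffs[i] / step)
        if int(q) % 2 != b:                         # force parity of the quantization index
            q += 1 if coeffs[i] >= q * step else -1
        out[i] = q * step
    return out

def extract_odd_even(coeffs, step=8.0):
    return (np.round(coeffs / step).astype(int) % 2).tolist()

c = np.array([13.2, -40.7, 95.1, 7.9])
w = [1, 0, 1, 1]
marked = embed_odd_even(c, w, step=8.0)
print(extract_odd_even(marked))   # recovers [1, 0, 1, 1]
```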
3D Surface Reconstruction for Lower Limb Prosthetic Model using Radon Transform
NASA Astrophysics Data System (ADS)
Sobani, S. S. Mohd; Mahmood, N. H.; Zakaria, N. A.; Razak, M. A. Abdul
2018-03-01
This paper describes an approach to realizing three-dimensional surfaces of objects with cylinder-based shapes, covering the techniques adopted and the strategy developed for non-rigid three-dimensional surface reconstruction of an object from uncalibrated two-dimensional image sequences using a multiple-view digital camera and turntable setup. The surface of an object is reconstructed based on the concept of tomography, with the aid of several digital image processing algorithms applied to the two-dimensional images captured by a digital camera at thirty-six different projections, and the three-dimensional structure of the surface is analysed. Four different objects are used as experimental models in the reconstructions, and each object is placed on a manually rotated turntable. The results show that the proposed method successfully and practicably reconstructs the three-dimensional surface of the objects. The shape and size of the reconstructed three-dimensional objects are recognizable and distinguishable. The reconstructions of the test objects are supported by an error analysis, where the maximum percent error obtained is approximately 1.4% for the height and 4.0%, 4.79% and 4.7% for the diameters at three specific heights of the objects.
NASA Astrophysics Data System (ADS)
Fang, Jingyu; Xu, Haisong; Wang, Zhehong; Wu, Xiaomin
2016-05-01
With colorimetric characterization, digital cameras can be used as image-based tristimulus colorimeters for color communication. In order to overcome the restriction of fixed capture settings adopted in conventional colorimetric characterization procedures, a novel method that takes capture settings into account was proposed. The method for calculating the colorimetric values of the measured image contains five main steps, including conversion from RGB values to equivalent values under the training settings, using factors based on an imaging-system model, so as to build a bridge between different settings, and scaling factors used in the preparation steps of the transformation mapping, to avoid errors resulting from the nonlinearity of the polynomial mapping over different ranges of illumination level. The experimental results indicate that the prediction error of the proposed method, measured with the CIELAB color difference formula, is less than 2 CIELAB units under different illumination levels and different correlated color temperatures. This prediction accuracy for different capture settings remains at the same level as that of the conventional method for a particular lighting condition.
Development of Michelson interferometer based spatial phase-shift digital shearography
NASA Astrophysics Data System (ADS)
Xie, Xin
Digital shearography is a non-contact, full-field optical measurement method which has the capability of directly measuring the gradient of deformation. For high measurement sensitivity, a phase evaluation method has to be introduced into digital shearography through a phase-shift technique. Categorized by the phase-shift method, digital phase-shift shearography can be divided into Temporal Phase-Shift Digital Shearography (TPS-DS) and Spatial Phase-Shift Digital Shearography (SPS-DS). TPS-DS is the most widely used phase-shift shearography system due to its simple algorithm, easy operation and good phase-map quality. However, the application of TPS-DS is limited to static/step-by-step loading measurement situations because of its multi-step shifting process. In order to measure the strain under dynamic/continuous loading, an SPS-DS system has to be developed. This dissertation aims to develop a series of Michelson-interferometer-based SPS-DS measurement methods that achieve strain measurement using only a single pair of speckle pattern images. The Michelson-interferometer-based SPS-DS systems utilize specially designed optical setups to introduce an extra carrier frequency into the laser wavefront. The phase information corresponding to the strain field can be separated in the Fourier domain using a Fourier transform and can further be evaluated with a windowed inverse Fourier transform. With different optical setups and carrier frequency arrangements, the Michelson-interferometer-based SPS-DS method is capable of achieving a variety of measurement tasks using only a single pair of speckle pattern images. Categorized by the target measurand, these measurement tasks can be divided into five categories: 1) measurement of the out-of-plane strain field with a small shearing amount; 2) measurement of the relative out-of-plane deformation field with a big shearing amount; 3) simultaneous measurement of the relative out-of-plane deformation field and the deformation gradient field by using multiple carrier frequencies; 4) simultaneous measurement of a two-directional strain field using dual measurement channels; 5) measurement of pure in-plane strain and pure out-of-plane strain with multiple carrier frequencies. The basic theory, optical path analysis, preliminary studies, results analysis and research plan are shown in detail in this dissertation.
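The core of the spatial phase-shift evaluation described above, isolating the carrier-modulated spectral lobe and taking a windowed inverse transform, can be illustrated roughly as follows. The carrier position and window size are placeholders, and random arrays stand in for real speckle patterns.

```python
import numpy as np

def carrier_phase(speckle, carrier=(0, 60), half_width=25):
    """Extract the wrapped phase of a spatial-carrier interferogram:
    FFT, window one spectral lobe around the carrier, inverse FFT, take the angle."""
    F = np.fft.fftshift(np.fft.fft2(speckle))
    cy, cx = np.array(F.shape) // 2 + np.array(carrier)
    mask = np.zeros_like(F)
    mask[cy - half_width:cy + half_width, cx - half_width:cx + half_width] = 1.0
    analytic = np.fft.ifft2(np.fft.ifftshift(F * mask))
    return np.angle(analytic)          # wrapped phase map in (-pi, pi]

# Phase difference between a reference and a loaded state gives the shearographic fringe phase.
rng = np.random.default_rng(1)
ref = rng.random((256, 256))
loaded = rng.random((256, 256))
delta_phase = np.angle(np.exp(1j * (carrier_phase(loaded) - carrier_phase(ref))))
```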
Zhang, Zhi-Feng; Gao, Zhan; Liu, Yuan-Yuan; Jiang, Feng-Chun; Yang, Yan-Li; Ren, Yu-Fen; Yang, Hong-Jun; Yang, Kun; Zhang, Xiao-Dong
2012-01-01
Train wheel sets must be periodically inspected for possible or actual premature failures, and it is important to record the wear history over the full service life of the wheel sets. This means that an online measuring system could be of great benefit to overall process control. An online non-contact method for measuring a wheel set's geometric parameters based on the opto-electronic measuring technique is presented in this paper. A charge coupled device (CCD) camera with a selected optical lens and a frame grabber was used to capture the image of the light profile of the wheel set illuminated by a linear laser. The analogue signals of the image were transformed into corresponding digital grey level values. The 'mapping function method' is used to transform an image pixel coordinate to a space coordinate. The images of wheel sets were captured when the train passed through the measuring system. The rim inside thickness and flange thickness were measured and analyzed. The spatial resolution of the whole image capturing system is about 0.33 mm. Theoretical and experimental results show that the online measurement system based on computer vision can meet wheel set measurement requirements.
An effective approach for iris recognition using phase-based image matching.
Miyazawa, Kazuyuki; Ito, Koichi; Aoki, Takafumi; Kobayashi, Koji; Nakajima, Hiroshi
2008-10-01
This paper presents an efficient algorithm for iris recognition using phase-based image matching--an image matching technique using phase components in 2D Discrete Fourier Transforms (DFTs) of given images. Experimental evaluation using the CASIA iris image databases (versions 1.0 and 2.0) and the Iris Challenge Evaluation (ICE) 2005 database clearly demonstrates that the use of phase components of iris images makes it possible to achieve highly accurate iris recognition with a simple matching algorithm. This paper also discusses major implementation issues of our algorithm. In order to reduce the size of iris data and to prevent the visibility of iris images, we introduce the idea of the 2D Fourier Phase Code (FPC) for representing iris information. The 2D FPC is particularly useful for implementing compact iris recognition devices using state-of-the-art Digital Signal Processing (DSP) technology.
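Phase-based matching of this kind is commonly realized as phase-only correlation: keep only the phase of the cross spectrum and inverse-transform it. A minimal sketch on synthetic images (not the authors' iris normalization or band-limited variant):

```python
import numpy as np

def phase_only_correlation(f, g):
    """Phase-only correlation (POC) surface between two equally sized images:
    the normalized cross-phase spectrum, inverse-transformed to the spatial domain."""
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    cross = F * np.conj(G)
    cross /= np.abs(cross) + 1e-12       # keep phase, discard magnitude
    return np.real(np.fft.ifft2(cross))

rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = np.roll(a, (3, 5), axis=(0, 1))      # shifted copy
poc = phase_only_correlation(a, b)
print(np.unravel_index(np.argmax(poc), poc.shape))   # peak location reveals the (3, 5) shift
```

The height of the correlation peak serves as the match score; a sharp peak indicates the two images come from the same iris.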
Time-of-flight camera via a single-pixel correlation image sensor
NASA Astrophysics Data System (ADS)
Mao, Tianyi; Chen, Qian; He, Weiji; Dai, Huidong; Ye, Ling; Gu, Guohua
2018-04-01
A time-of-flight imager based on single-pixel correlation image sensors is proposed for noise-free depth map acquisition in the presence of ambient light. A digital micro-mirror device and a time-modulated IR laser provide spatial and temporal illumination of the unknown object. Compressed sensing and the 'four bucket principle' method are combined to reconstruct the depth map from a sequence of measurements at a low sampling rate. A second-order correlation transform is also introduced to reduce the noise from the detector itself and from direct ambient light. Computer simulations are presented to validate the computational models and the improvement of the reconstructions.
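One common form of the "four bucket" rule for continuous-wave time-of-flight is sketched below for a single pixel; sign conventions and the exact correlation model vary between sensors, and the modulation frequency here is an illustrative value only.

```python
import numpy as np

C = 3e8           # speed of light, m/s
F_MOD = 20e6      # modulation frequency, Hz (illustrative value)

def four_bucket_depth(q0, q1, q2, q3):
    """Four correlation samples taken 90 degrees apart give the phase delay, then the depth."""
    phase = np.arctan2(q1 - q3, q0 - q2)          # wrapped phase, assuming q_i ~ cos(phi - i*pi/2)
    phase = np.mod(phase, 2 * np.pi)              # map to [0, 2*pi)
    return C * phase / (4 * np.pi * F_MOD)        # depth within the unambiguous range

# Example: a target at 2.0 m produces these (noise-free) correlation samples.
true_depth = 2.0
phi = 4 * np.pi * F_MOD * true_depth / C
q = [np.cos(phi - i * np.pi / 2) for i in range(4)]
print(four_bucket_depth(*q))                      # ~2.0
```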
Makeyev, Oleksandr; Sazonov, Edward; Schuckers, Stephanie; Lopez-Meyer, Paulo; Melanson, Ed; Neuman, Michael
2007-01-01
In this paper we propose a sound recognition technique based on the limited receptive area (LIRA) neural classifier and the continuous wavelet transform (CWT). The LIRA neural classifier was developed as a multipurpose image recognition system. Previous tests of LIRA demonstrated good results in different image recognition tasks, including handwritten digit recognition, face recognition, metal surface texture recognition, and micro workpiece shape recognition. We propose a sound recognition technique where scalograms of sound instances serve as inputs to the LIRA neural classifier. The methodology was tested in recognition of swallowing sounds. Swallowing sound recognition may be employed in systems for automated swallowing assessment and diagnosis of swallowing disorders. The experimental results suggest high efficiency and reliability of the proposed approach.
Wide-field computational imaging of pathology slides using lens-free on-chip microscopy.
Greenbaum, Alon; Zhang, Yibo; Feizi, Alborz; Chung, Ping-Luen; Luo, Wei; Kandukuri, Shivani R; Ozcan, Aydogan
2014-12-17
Optical examination of microscale features in pathology slides is one of the gold standards to diagnose disease. However, the use of conventional light microscopes is partially limited owing to their relatively high cost, bulkiness of lens-based optics, small field of view (FOV), and requirements for lateral scanning and three-dimensional (3D) focus adjustment. We illustrate the performance of a computational lens-free, holographic on-chip microscope that uses the transport-of-intensity equation, multi-height iterative phase retrieval, and rotational field transformations to perform wide-FOV imaging of pathology samples with comparable image quality to a traditional transmission lens-based microscope. The holographically reconstructed image can be digitally focused at any depth within the object FOV (after image capture) without the need for mechanical focus adjustment and is also digitally corrected for artifacts arising from uncontrolled tilting and height variations between the sample and sensor planes. Using this lens-free on-chip microscope, we successfully imaged invasive carcinoma cells within human breast sections, Papanicolaou smears revealing a high-grade squamous intraepithelial lesion, and sickle cell anemia blood smears over a FOV of 20.5 mm(2). The resulting wide-field lens-free images had sufficient image resolution and contrast for clinical evaluation, as demonstrated by a pathologist's blinded diagnosis of breast cancer tissue samples, achieving an overall accuracy of ~99%. By providing high-resolution images of large-area pathology samples with 3D digital focus adjustment, lens-free on-chip microscopy can be useful in resource-limited and point-of-care settings.
Reconstruction of noisy and blurred images using blur kernel
NASA Astrophysics Data System (ADS)
Ellappan, Vijayan; Chopra, Vishal
2017-11-01
Blur is common in many digital images and can be caused by motion of the camera or of objects in the scene. In this work we propose a new method for deblurring images. The work uses sparse representation to identify the blur kernel. By analyzing the image coordinates in a coarse-to-fine manner, we estimate the kernel-based image coordinates and, from that observation, obtain the motion angle of the shaken or blurred image. We then calculate the length of the motion kernel using the Radon transform and Fourier analysis, and apply the Lucy-Richardson algorithm, a non-blind (NBID) deconvolution algorithm, to obtain a cleaner and less noisy output image. All of these operations are performed in MATLAB.
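The non-blind deconvolution stage can be illustrated with a plain Richardson-Lucy iteration once a motion-blur kernel is known. The horizontal 9-pixel kernel below is an assumed example; in the method above the kernel length and angle are estimated from the Radon transform of the image, and the original work uses MATLAB rather than Python.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iterations=30):
    """Plain Richardson-Lucy (non-blind) deconvolution with a known PSF."""
    estimate = np.full_like(blurred, 0.5)
    psf_flipped = psf[::-1, ::-1]
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode='same')
        ratio = blurred / (reblurred + 1e-12)
        estimate *= fftconvolve(ratio, psf_flipped, mode='same')
    return estimate

# Assumed horizontal motion-blur PSF of length 9.
psf = np.zeros((9, 9)); psf[4, :] = 1.0 / 9.0
rng = np.random.default_rng(0)
sharp = rng.random((128, 128))
blurred = fftconvolve(sharp, psf, mode='same')
restored = richardson_lucy(blurred, psf, iterations=30)
```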
Evaluation of solar angle variation over digital processing of LANDSAT imagery. [Brazil
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Novo, E. M. L. M.
1984-01-01
The effects of the seasonal variation of illumination on digital processing of LANDSAT images are evaluated. Original images are transformed by means of digital filtering to enhance their spatial features. The resulting images are used to obtain an unsupervised classification of relief units. After defining relief classes, which are supposed to be spectrally different, topographic variables (declivity, altitude, relief range and slope length) are used to identify the true relief units existing on the ground. The samples are also clustered by means of an unsupervised classification option. The results obtained for each LANDSAT overpass are compared. Digital processing is highly affected by illumination geometry. There is no correspondence between relief units as defined by spectral features and those resulting from topographic features.
Multiplexed wavelet transform technique for detection of microcalcification in digitized mammograms.
Mini, M G; Devassia, V P; Thomas, Tessamma
2004-12-01
Wavelet transform (WT) is a potential tool for the detection of microcalcifications, an early sign of breast cancer. This article describes the implementation and evaluates the performance of two novel WT-based schemes for the automatic detection of clustered microcalcifications in digitized mammograms. Employing a one-dimensional WT technique that utilizes the pseudo-periodicity property of image sequences, the proposed algorithms achieve high detection efficiency and low processing memory requirements. The detection is achieved from the parent-child relationship between the zero-crossings [Marr-Hildreth (M-H) detector] /local extrema (Canny detector) of the WT coefficients at different levels of decomposition. The detected pixels are weighted before the inverse transform is computed, and they are segmented by simple global gray level thresholding. Both detectors produce 95% detection sensitivity, even though there are more false positives for the M-H detector. The M-H detector preserves the shape information and provides better detection sensitivity for mammograms containing widely distributed calcifications.
NASA Astrophysics Data System (ADS)
Zhao, Yun-wei; Zhu, Zi-qiang; Lu, Guang-yin; Han, Bo
2018-03-01
The sine and cosine transforms implemented with digital filters have been used in transient electromagnetic methods for a few decades. Kong (2007) proposed a method of obtaining filter coefficients, which are computed in the sample domain by a Hankel transform pair. However, the curve shape of the Hankel transform pair changes with a parameter, which is usually set to 1 or 3 in the process of obtaining the digital filter coefficients of the sine and cosine transforms. First, this study investigates the influence of this parameter on the digital filter algorithm of the sine and cosine transforms, based on the digital filter algorithm of the Hankel transform and the relationship between the sine and cosine functions and the ±1/2-order Bessel functions of the first kind. The results show that the selection of the parameter highly influences the precision of the digital filter algorithm. Second, given the optimal selection of the parameter, it is found that an optimal sampling interval also exists that achieves the best precision of the digital filter algorithm. Finally, this study proposes four groups of sine and cosine transform digital filter coefficients of different lengths, which may help to develop the digital filter algorithm of sine and cosine transforms and promote its application.
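For orientation, digital-filter evaluation of a sine transform usually follows the standard fast-filter calling convention sketched below; the abscissas and weights in the sketch are placeholders only, since the paper's contribution is precisely the optimized coefficient tables.

```python
import numpy as np

def sine_transform_filtered(f, t, base, w_sin):
    """Approximate F(t) = integral_0^inf f(lam) * sin(lam * t) dlam
    with the standard fast-filter form F(t) ~ (1/t) * sum_i f(base_i / t) * w_i."""
    lam = base / t
    return np.dot(f(lam), w_sin) / t

# Placeholder 3-point "filter" (not a usable table; shown only to fix the calling convention).
base = np.array([0.5, 1.0, 2.0])
w_sin = np.array([0.1, 0.8, 0.1])
print(sine_transform_filtered(lambda lam: np.exp(-lam), t=1.5, base=base, w_sin=w_sin))
```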
NASA Astrophysics Data System (ADS)
Weidlich, O.; Bernecker, M.
2004-04-01
Measurements of laminations from marine and limnic sediments are commonly a time-consuming procedure. However, the resulting quantitative proxies are important for the interpretation of both climate changes and paleo-seismic activities. Digital image analysis accelerates the generation and interpretation of large data sets from laminated sediments based on the contrasting grey values of dark and light laminae. Statistical transformation and correlation of the grey value signals reflect high-frequency cycles due to changing mean laminae thicknesses, and thus provide data for monitoring climate change. Perturbations (e.g., slumping structures, seismites, and tsunamites) of the commonly continuous laminae record seismic activities and provide proxies for paleo-earthquake frequency. Using outcrop data from (i) the Pleistocene Lisan Formation of Jordan (Dead Sea Basin) and (ii) the Carboniferous-Permian Copacabana Formation of Bolivia (Lake Titicaca), we present a two-step approach to gain high-resolution time series based on field data for both purposes from unconsolidated and lithified outcrops. Step 1 concerns the construction of a continuous digital phototransect and step 2 covers the creation of a grey density curve based on digital photos along a line transect using image analysis. The applied automated image analysis technique provides a continuous digital record of the studied sections and, therefore, serves as a useful tool for the evaluation of further proxy data. Analysing the obtained grey signal of the light and dark laminae of varves using phototransects, we discuss the potential and limitations of the proposed technique.
Wavelet domain textual coding of Ottoman script images
NASA Astrophysics Data System (ADS)
Gerek, Oemer N.; Cetin, Enis A.; Tewfik, Ahmed H.
1996-02-01
Image coding using the wavelet transform, DCT, and similar transform techniques is well established. On the other hand, these coding methods neither take into account the special characteristics of the images in a database nor are they suitable for fast database search. In this paper, the digital archiving of Ottoman printings is considered. Ottoman documents are printed in Arabic letters. Witten et al. describe a scheme based on finding the characters in binary document images and encoding the positions of the repeated characters. This method efficiently compresses document images and is suitable for database research, but it cannot be applied to Ottoman or Arabic documents, as the concept of character is different in Ottoman or Arabic. Typically, one has to deal with compound structures consisting of a group of letters. Therefore, the matching criterion will be according to those compound structures. Furthermore, the text images are gray tone or color images for Ottoman scripts, for the reasons that are described in the paper. In our method the compound structure matching is carried out in the wavelet domain, which reduces the search space and increases the compression ratio. In addition to the wavelet transformation, which corresponds to the linear subband decomposition, we also used nonlinear subband decomposition. The filters in the nonlinear subband decomposition have the property of preserving edges in the low resolution subband image.
Automatic Generation of Caricatures with Multiple Expressions Using Transformative Approach
NASA Astrophysics Data System (ADS)
Liao, Wen-Hung; Lai, Chien-An
The proliferation of digital cameras has changed the way we create and share photos. Novel forms of photo composition and reproduction have surfaced in recent years. In this paper, we present an automatic caricature generation system using transformative approaches. By combining facial feature detection, image segmentation and image warping/morphing techniques, the system is able to generate a stylized caricature using only one reference image. When more than one reference sample is available, the system can either choose the best fit based on shape matching, or synthesize a composite style using a polymorph technique. The system can also produce multiple expressions by controlling a subset of MPEG-4 facial animation parameters (FAP). Finally, to enable flexible manipulation of the synthetic caricature, we also investigate issues such as color quantization and raster-to-vector conversion. A major strength of our method is that the synthesized caricature bears a higher degree of resemblance to the real person than traditional component-based approaches.
Accurate B-spline-based 3-D interpolation scheme for digital volume correlation
NASA Astrophysics Data System (ADS)
Ren, Maodong; Liang, Jin; Wei, Bin
2016-12-01
An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and the Fourier transform technique, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the influence factors of the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth filter) in the Fourier domain. A law is found that the positional error of a filter can be expressed as a function of fractional position and wave number. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms and a least squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. Besides, given that each volumetric image contains different wave number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain wave number ranges based on Fourier spectrum analysis. Finally, novel software is developed and a series of validation experiments was carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.
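For context, cubic B-spline interpolation with a recursive prefilter, the kind of sub-voxel sampling such a scheme refines, is readily available in SciPy; the volume and queried positions below are placeholders and this sketch does not include the paper's optimized filter or Gaussian weighting.

```python
import numpy as np
from scipy import ndimage

# Sub-voxel sampling of a volume with a cubic B-spline recursive filter.
# prefilter=True applies the B-spline prefilter so the spline interpolates,
# rather than smooths, the voxel data.
rng = np.random.default_rng(0)
volume = rng.random((32, 32, 32))

# Sub-voxel coordinates (z, y, x) at which a deformed subset is sampled.
coords = np.array([[10.25, 10.5, 11.75],
                   [15.1,  16.4, 17.9]]).T          # shape (3, N)

values = ndimage.map_coordinates(volume, coords, order=3, prefilter=True, mode='nearest')
print(values)
```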
Method for simulating dose reduction in digital mammography using the Anscombe transformation.
Borges, Lucas R; Oliveira, Helder C R de; Nunes, Polyana F; Bakic, Predrag R; Maidment, Andrew D A; Vieira, Marcelo A C
2016-06-01
This work proposes an accurate method for simulating dose reduction in digital mammography starting from a clinical image acquired with a standard dose. The method developed in this work consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for specific issues relevant in digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the detective quantum efficiency. The scaling process takes into account the linearity of the system and the offset of the detector elements. The inserted noise is obtained by acquiring images of a flat-field phantom at the standard radiation dose and at the simulated dose. Using the Anscombe transformation, a relationship is created between the calculated noise mask and the scaled image, resulting in a clinical mammogram with the same noise and gray level characteristics as an image acquired at the lower-radiation dose. The performance of the proposed algorithm was validated using real images acquired with an anthropomorphic breast phantom at four different doses, with five exposures for each dose and 256 nonoverlapping ROIs extracted from each image and with uniform images. The authors simulated lower-dose images and compared these with the real images. The authors evaluated the similarity between the normalized noise power spectrum (NNPS) and power spectrum (PS) of simulated images and real images acquired with the same dose. The maximum relative error was less than 2.5% for every ROI. The added noise was also evaluated by measuring the local variance in the real and simulated images. The relative average error for the local variance was smaller than 1%. A new method is proposed for simulating dose reduction in clinical mammograms. In this method, the dependency between image noise and image signal is addressed using a novel application of the Anscombe transformation. NNPS, PS, and local noise metrics confirm that this method is capable of precisely simulating various dose reductions.
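The Anscombe transformation underlying the noise-injection step maps Poisson-like signal-dependent noise to approximately unit-variance Gaussian noise. A minimal sketch of the idea follows; it ignores the detector offset, gain map, and DQE modeling described above and assumes pure quantum noise, so the numbers are illustrative only.

```python
import numpy as np

def anscombe(x):
    """Variance-stabilizing Anscombe transform for Poisson-distributed data."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (the exact unbiased inverse differs slightly)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

# Toy illustration: scale a "standard dose" Poisson image to half dose and add the
# extra quantum noise in the Anscombe domain, where the noise is ~N(0, 1).
rng = np.random.default_rng(0)
standard_dose = rng.poisson(lam=400.0, size=(64, 64)).astype(float)
dose_factor = 0.5

scaled = standard_dose * dose_factor
extra_noise_std = np.sqrt(1.0 - dose_factor)          # assumes pure quantum (Poisson) noise
simulated = inverse_anscombe(anscombe(scaled) + extra_noise_std * rng.standard_normal(scaled.shape))
```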
3D reconstruction based on light field images
NASA Astrophysics Data System (ADS)
Zhu, Dong; Wu, Chunhong; Liu, Yunluo; Fu, Dongmei
2018-04-01
This paper proposes a method of reconstructing a three-dimensional (3D) scene from two light field images captured by a Lytro Illum camera. The work was carried out by first extracting the sub-aperture images from the light field images and using the scale-invariant feature transform (SIFT) for feature registration on the selected sub-aperture images. A structure-from-motion (SFM) algorithm is then used on the registered sub-aperture images to reconstruct the three-dimensional scene. A sparse 3D point cloud is obtained in the end. The method shows that 3D reconstruction can be implemented with only two light field camera captures, rather than the dozen or more captures required by traditional cameras. This effectively addresses the time-consuming and laborious aspects of 3D reconstruction based on traditional digital cameras, achieving a more rapid, convenient and accurate reconstruction.
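A rough sketch of the SIFT registration step between two sub-aperture images is given below. The file names are placeholders, SIFT is available in OpenCV 4.4+ (earlier versions need the contrib build), and the ratio-test threshold is a conventional choice rather than the authors' setting.

```python
import cv2
import numpy as np

# Sketch of SIFT feature registration between two sub-aperture images.
img1 = cv2.imread('subaperture_left.png', cv2.IMREAD_GRAYSCALE)   # placeholder file names
img2 = cv2.imread('subaperture_right.png', cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # Lowe ratio test

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
# pts1/pts2 correspondences then feed the structure-from-motion stage.
```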
NASA Astrophysics Data System (ADS)
Pape, Dennis R.
1990-09-01
The present conference discusses topics in optical image processing, optical signal processing, acoustooptic spectrum analyzer systems and components, and optical computing. Attention is given to tradeoffs in nonlinearly recorded matched filters, miniature spatial light modulators, detection and classification using higher-order statistics of optical matched filters, rapid traversal of an image data base using binary synthetic discriminant filters, wideband signal processing for emitter location, an acoustooptic processor for autonomous SAR guidance, and sampling of Fresnel transforms. Also discussed are an acoustooptic RF signal-acquisition system, scanning acoustooptic spectrum analyzers, the effects of aberrations on acoustooptic systems, fast optical digital arithmetic processors, information utilization in analog and digital processing, optical processors for smart structures, and a self-organizing neural network for unsupervised learning.
Automated reconstruction of standing posture panoramas from multi-sector long limb x-ray images
NASA Astrophysics Data System (ADS)
Miller, Linzey; Trier, Caroline; Ben-Zikri, Yehuda K.; Linte, Cristian A.
2016-03-01
Due to the digital X-ray imaging system's limited field of view, several individual sector images are required to capture the posture of an individual in a standing position. These images are then "stitched together" to reconstruct the standing posture. We have created an image processing application that automates the stitching, thereby minimizing user input, optimizing workflow, and reducing human error. The application begins with pre-processing the input images by removing artifacts, filtering out isolated noisy regions, and amplifying a seamless bone edge. The resulting binary images are then registered together using a rigid-body intensity-based registration algorithm. The identified registration transformations are then used to map the original sector images into the panorama image. Our method focuses primarily on the use of the anatomical content of the images to generate the panoramas, as opposed to using external markers employed to aid with the alignment process. Currently, results show robust edge detection prior to registration, and we have tested our approach by comparing the resulting automatically-stitched panoramas to the manually stitched panoramas in terms of registration parameters, target registration error of homologous markers, and the homogeneity of the digitally subtracted automatically- and manually-stitched images using 26 patient datasets.
Skeletonization of gray-scale images by gray weighted distance transform
NASA Astrophysics Data System (ADS)
Qian, Kai; Cao, Siqi; Bhattacharya, Prabir
1997-07-01
In pattern recognition, thinning algorithms are often a useful tool to represent a digital pattern by means of a skeletonized image, consisting of a set of one-pixel-width lines that highlight its significant features. There is interest in applying thinning directly to gray-scale images, motivated by the desire to process images characterized by meaningful information distributed over different levels of gray intensity. In this paper, a new algorithm is presented which can skeletonize both black-and-white and gray pictures. This algorithm is based on the gray weighted distance transformation, can be used to process non-uniformly distributed gray-scale pictures, and preserves the topology of the original picture. The process includes a preliminary phase of investigation of the 'hollows' in the gray-scale image; these hollows may or may not be treated as topological constraints for the skeleton structure, depending on whether their depth is statistically significant. This algorithm can also be executed on a parallel machine, as all the operations are local. Some examples are discussed to illustrate the algorithm.
Image-adapted visually weighted quantization matrices for digital image compression
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1994-01-01
A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques and by an error pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
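The quantization step that such a matrix controls can be sketched as follows; the flat matrix below is a placeholder, whereas a visually weighted matrix would assign larger steps to coefficients that are less visible under the target viewing conditions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def quantize_block(block, qmatrix):
    """Forward 2-D DCT of an 8x8 block, divide by the quantization matrix, and round."""
    coeffs = dctn(block - 128.0, norm='ortho')       # level shift as in JPEG-style coding
    return np.round(coeffs / qmatrix)

def dequantize_block(qcoeffs, qmatrix):
    return idctn(qcoeffs * qmatrix, norm='ortho') + 128.0

# Placeholder flat quantization matrix with step 16 for every coefficient.
qmatrix = np.full((8, 8), 16.0)
block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
print(dequantize_block(quantize_block(block, qmatrix), qmatrix).round(1))
```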
Portable real-time color night vision
NASA Astrophysics Data System (ADS)
Toet, Alexander; Hogervorst, Maarten A.
2008-03-01
We developed a simple and fast lookup-table based method to derive and apply natural daylight colors to multi-band night-time images. The method deploys an optimal color transformation derived from a set of samples taken from a daytime color reference image. The colors in the resulting colorized multiband night-time images closely resemble the colors in the daytime color reference image. Also, object colors remain invariant under panning operations and are independent of the scene content. Here we describe the implementation of this method in two prototype portable dual band realtime night vision systems. One system provides co-aligned visual and near-infrared bands of two image intensifiers, the other provides co-aligned images from a digital image intensifier and an uncooled longwave infrared microbolometer. The co-aligned images from both systems are further processed by a notebook computer. The color mapping is implemented as a realtime lookup table transform. The resulting colorised video streams can be displayed in realtime on head mounted displays and stored on the hard disk of the notebook computer. Preliminary field trials demonstrate the potential of these systems for applications like surveillance, navigation and target detection.
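A minimal sketch of a lookup-table color transform for a two-band image follows. The bit depth, image size, and the placeholder table contents are assumptions; in the systems described above the table is derived from samples of a daytime reference image.

```python
import numpy as np

# Lookup-table false-coloring of a co-aligned two-band night-vision image:
# every (band1, band2) pair, quantized to 6 bits per band, indexes a precomputed RGB triple.
BITS = 6
SIZE = 1 << BITS
lut = np.zeros((SIZE, SIZE, 3), dtype=np.uint8)
for i in range(SIZE):
    for j in range(SIZE):
        lut[i, j] = (4 * i, i + j, 4 * j)            # placeholder colors, not a trained mapping

def colorize(band1, band2):
    """Apply the lookup table to co-aligned 8-bit band images (real-time capable)."""
    idx1 = band1 >> (8 - BITS)
    idx2 = band2 >> (8 - BITS)
    return lut[idx1, idx2]

rng = np.random.default_rng(0)
vis = rng.integers(0, 256, (240, 320), dtype=np.uint8)
lwir = rng.integers(0, 256, (240, 320), dtype=np.uint8)
rgb = colorize(vis, lwir)      # (240, 320, 3) uint8 false-color image
```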
Maheshwari, Shishir; Pachori, Ram Bilas; Acharya, U Rajendra
2017-05-01
Glaucoma is an ocular disorder caused by increased fluid pressure in the optic nerve. It damages the optic nerve and subsequently causes loss of vision. The available scanning methods are Heidelberg retinal tomography, scanning laser polarimetry, and optical coherence tomography. These methods are expensive and require experienced clinicians to use them. So, there is a need to diagnose glaucoma accurately at low cost. Hence, in this paper, we present a new methodology for the automated diagnosis of glaucoma using digital fundus images based on the empirical wavelet transform (EWT). The EWT is used to decompose the image, and correntropy features are obtained from the decomposed EWT components. These extracted features are ranked based on the t-value feature selection algorithm. Then, these features are used for the classification of normal and glaucoma images using a least-squares support vector machine (LS-SVM) classifier. The LS-SVM is employed for classification with radial basis function, Morlet wavelet, and Mexican-hat wavelet kernels. The classification accuracy of the proposed method is 98.33% and 96.67% using threefold and tenfold cross validation, respectively.
Zhang, Juwei; Tan, Xiaojiang; Zheng, Pengbo
2017-01-01
Electromagnetic methods are commonly employed to detect wire rope discontinuities. However, determining the residual strength of wire rope based on the quantitative recognition of discontinuities remains problematic. We have designed a prototype device based on the residual magnetic field (RMF) of ferromagnetic materials, which overcomes the disadvantages associated with in-service inspections, such as large volume, inconvenient operation, low precision, and poor portability by providing a relatively small and lightweight device with improved detection precision. A novel filtering system consisting of the Hilbert-Huang transform and compressed sensing wavelet filtering is presented. Digital image processing was applied to achieve the localization and segmentation of defect RMF images. The statistical texture and invariant moment characteristics of the defect images were extracted as the input of a radial basis function neural network. Experimental results show that the RMF device can detect defects in various types of wire rope and prolong the service life of test equipment by reducing the friction between the detection device and the wire rope by accommodating a high lift-off distance. PMID:28300790
Chung, Kuo-Liang; Hsu, Tsu-Chun; Huang, Chi-Chao
2017-10-01
In this paper, we propose a novel and effective hybrid method, which joins the conventional chroma subsampling and the distortion-minimization-based luma modification together, to improve the quality of the reconstructed RGB full-color image. Assume the input RGB full-color image has been transformed to a YUV image, prior to compression. For each 2×2 UV block, one 4:2:0 subsampling is applied to determine the one subsampled U and V components, U s and V s . Based on U s , V s , and the corresponding 2×2 original RGB block, a main theorem is provided to determine the ideally modified 2×2 luma block in constant time such that the color peak signal-to-noise ratio (CPSNR) quality distortion between the original 2×2 RGB block and the reconstructed 2×2 RGB block can be minimized in a globally optimal sense. Furthermore, the proposed hybrid method and the delivered theorem are adjusted to tackle the digital time delay integration images and the Bayer mosaic images whose Bayer CFA structure has been widely used in modern commercial digital cameras. Based on the IMAX, Kodak, and screen content test image sets, the experimental results demonstrate that in high efficiency video coding, the proposed hybrid method has substantial quality improvement, in terms of the CPSNR quality, visual effect, CPSNR-bitrate trade-off, and Bjøntegaard delta PSNR performance, of the reconstructed RGB images when compared with existing chroma subsampling schemes.
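One conventional 4:2:0 subsampling choice, averaging each 2x2 chroma block, is sketched below for orientation; the hybrid method above then modifies the luma block to compensate for the distortion this step introduces, which the sketch does not include.

```python
import numpy as np

def subsample_420_average(u_plane):
    """Replace each 2x2 chroma block by its average (one common 4:2:0 subsampling rule)."""
    h, w = u_plane.shape
    blocks = u_plane.reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

u = np.arange(16, dtype=float).reshape(4, 4)
print(subsample_420_average(u))    # 2x2 array of block averages
```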
Bimodal Biometric Verification Using the Fusion of Palmprint and Infrared Palm-Dorsum Vein Images
Lin, Chih-Lung; Wang, Shih-Hung; Cheng, Hsu-Yung; Fan, Kuo-Chin; Hsu, Wei-Lieh; Lai, Chin-Rong
2015-01-01
In this paper, we present a reliable and robust biometric verification method based on bimodal physiological characteristics of palms, including the palmprint and palm-dorsum vein patterns. The proposed method consists of five steps: (1) automatically aligning and cropping the same region of interest from different palm or palm-dorsum images; (2) applying the digital wavelet transform and inverse wavelet transform to fuse palmprint and vein pattern images; (3) extracting the line-like features (LLFs) from the fused image; (4) obtaining multiresolution representations of the LLFs by using a multiresolution filter; and (5) using a support vector machine to verify the multiresolution representations of the LLFs. The proposed method possesses four advantages: first, both modal images are captured in peg-free scenarios to improve the user-friendliness of the verification device. Second, palmprint and vein pattern images are captured using a low-resolution digital scanner and infrared (IR) camera. The use of low-resolution images results in a smaller database. In addition, the vein pattern images are captured through the invisible IR spectrum, which improves antispoofing. Third, since the physiological characteristics of palmprint and vein pattern images are different, a hybrid fusing rule can be introduced to fuse the decomposition coefficients of different bands. The proposed method fuses decomposition coefficients at different decomposed levels, with different image sizes, captured from different sensor devices. Finally, the proposed method operates automatically and hence no parameters need to be set manually. Three thousand palmprint images and 3000 vein pattern images were collected from 100 volunteers to verify the validity of the proposed method. The results show a false rejection rate of 1.20% and a false acceptance rate of 1.56%. This demonstrates the validity and excellent performance of our proposed method compared with other methods. PMID:26703596
Two-stage Keypoint Detection Scheme for Region Duplication Forgery Detection in Digital Images.
Emam, Mahmoud; Han, Qi; Zhang, Hongli
2018-01-01
In digital image forensics, copy-move or region duplication forgery detection has become a vital research topic recently. Most of the existing keypoint-based forgery detection methods fail to detect forgery in smooth regions and remain sensitive to geometric changes. To solve these problems and detect keypoints that cover all regions, we propose a two-step keypoint detection scheme. First, we employ the scale-invariant feature operator to detect spatially distributed keypoints in the textured regions. Second, keypoints in the missing regions are detected using the Harris corner detector with nonmaximal suppression to evenly distribute the detected keypoints. To improve the matching performance, local feature points are described using the Multi-support Region Order-based Gradient Histogram descriptor. Based on precision-recall rates and a commonly tested dataset, a comprehensive performance evaluation is performed. The results demonstrate that the proposed scheme has better detection and robustness against some geometric transformation attacks compared with state-of-the-art methods.
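The second detection stage, Harris corners with non-maximal suppression, can be sketched as follows; the response parameters and minimum peak distance are illustrative choices, not the authors' settings.

```python
import numpy as np
from skimage.feature import corner_harris, corner_peaks

# Harris corner response with non-maximal suppression to obtain evenly distributed
# keypoints in regions where a scale-invariant detector found none.
rng = np.random.default_rng(0)
image = rng.random((256, 256))

response = corner_harris(image, k=0.05, sigma=1.0)
keypoints = corner_peaks(response, min_distance=10, threshold_rel=0.01)
print(keypoints.shape)    # (N, 2) array of (row, col) corner positions
```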
A Hybrid Soft-computing Method for Image Analysis of Digital Plantar Scanners.
Razjouyan, Javad; Khayat, Omid; Siahi, Mehdi; Mansouri, Ali Alizadeh
2013-01-01
Digital foot scanners have been developed in recent years to provide anthropometrists with a digital image of the insole together with the pressure distribution and anthropometric information. In this paper, a hybrid algorithm combining the gray level spatial correlation (GLSC) histogram and Shanbag entropy is presented for the analysis of scanned foot images. An evolutionary algorithm is also employed to find the optimum parameters of the GLSC and the transform function of the membership values. The resulting binary images, as the thresholded images, then undergo anthropometric measurements, taking into account the scale factor from pixel size to metric scale. The proposed method is finally applied to plantar images obtained by scanning the feet of randomly selected subjects with a foot scanner system, the experimental setup described in the paper. Running computation time and the effects of the GLSC parameters are investigated in the simulation results.
Digital techniques for processing Landsat imagery
NASA Technical Reports Server (NTRS)
Green, W. B.
1978-01-01
An overview is presented of the basic techniques used to process Landsat images with a digital computer and of the VICAR image processing software developed at JPL and available to users through the NASA-sponsored COSMIC computer program distribution center. Examples are given of subjective processing performed to improve the information display for the human observer, such as contrast enhancement, pseudocolor display and band ratioing, and of quantitative processing using mathematical models, such as classification based on multispectral signatures of different areas within a given scene and geometric transformation of imagery into standard mapping projections. Examples are illustrated by Landsat scenes of the Andes mountains and the Altyn-Tagh fault zone in China before and after contrast enhancement, and by classification of land use in Portland, Oregon. The VICAR image processing software system, which consists of a language translator that simplifies execution of image processing programs and provides a general purpose format so that imagery from a variety of sources can be processed by the same basic set of general applications programs, is described.
USDA-ARS?s Scientific Manuscript database
A structured-illumination reflectance imaging (SIRI) system was recently developed in our laboratory for enhanced quality evaluation of horticultural products. It was implemented using a digital camera to acquire reflectance images from food products subjected to sinusoidal patterns (or other simila...
NASA Astrophysics Data System (ADS)
Lai, Chia-Lin; Lee, Jhih-Shian; Chen, Jyh-Cheng
2015-02-01
Energy-mapping, the conversion of linear attenuation coefficients (μ) calculated at the effective computed tomography (CT) energy to those corresponding to 511 keV, is an important step in CT-based attenuation correction (CTAC) for positron emission tomography (PET) quantification. The aim of this study was to implement the energy-mapping step by using the curve-fitting ability of an artificial neural network (ANN). Eleven digital phantoms simulated by the Geant4 application for tomographic emission (GATE) and 12 physical phantoms composed of various volume concentrations of iodine contrast were used in this study to generate energy-mapping curves, by acquiring the average CT values and the linear attenuation coefficients at 511 keV of these phantoms. The curves were built with the ANN toolbox in MATLAB. To evaluate the effectiveness of the proposed method, another two digital phantoms (liver and spine-bone) and three physical phantoms (volume concentrations of 3%, 10% and 20%) were used to compare the energy-mapping curves built by the ANN and by the bilinear transformation, and a semi-quantitative analysis was performed by injecting 0.5 mCi FDG into an SD rat for micro-PET scanning. The results showed that the percentage relative difference (PRD) values of the digital liver and spine-bone phantoms are 5.46% and 1.28% based on the ANN, and 19.21% and 1.87% based on the bilinear transformation. For the 3%, 10% and 20% physical phantoms, the PRD values of the ANN curve are 0.91%, 0.70% and 3.70%, and the PRD values of the bilinear transformation are 3.80%, 1.44% and 4.30%, respectively. Both digital and physical phantoms indicated that the ANN curve can achieve better performance than the bilinear transformation. The semi-quantitative analysis of the rat PET images showed that the ANN curve can reduce the inaccuracy caused by the attenuation effect from 13.75% to 4.43% in brain tissue, and from 23.26% to 9.41% in heart tissue. On the other hand, the inaccuracy remained 6.47% and 11.51% in brain and heart tissue when the bilinear transformation was used. Overall, it can be concluded that the bilinear transformation method resulted in considerable bias, and that the newly proposed calibration curve built by the ANN achieves better results with acceptable accuracy.
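A rough analogue of the ANN curve-fitting step is sketched below with scikit-learn rather than the MATLAB ANN toolbox used in the study; the training pairs are synthetic placeholders, not the phantom measurements, and the network size is arbitrary.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Learn CT value -> mu(511 keV) from (synthetic placeholder) training pairs,
# then apply the learned curve to a CT image to obtain a PET attenuation map.
ct_values = np.linspace(-1000, 2000, 121).reshape(-1, 1)               # HU-like CT numbers
mu_511 = (0.096 * (1.0 + ct_values.ravel() / 1000.0)).clip(min=0.0)    # toy ground truth, not real data

model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
model.fit(ct_values, mu_511)

ct_image = np.array([[0.0, 50.0], [400.0, 1200.0]])
mu_map = model.predict(ct_image.reshape(-1, 1)).reshape(ct_image.shape)
print(mu_map)    # attenuation coefficients at 511 keV for CTAC
```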
Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera.
Nguyen, Thuy Tuong; Slaughter, David C; Hanson, Bradley D; Barber, Andrew; Freitas, Amy; Robles, Daniel; Whelan, Erin
2015-07-28
This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a three-step image analysis process based upon: (1) an orthographic plant projection method integrating a perspective transform with automatic parameter estimation; (2) a plant counting method based on projection histograms; and (3) a double-counting avoidance method based on a homography transform. Experimental results demonstrate the ability to count large numbers of plants automatically with no human effort. Results show that, for tree seedlings having a height up to 40 cm and a within-row tree spacing of approximately 10 cm, the algorithms successfully estimated the number of plants with an average accuracy of 95.2% for trees within a single image and 98% for counting of the whole plant population in a large sequence of images.
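A rough sketch of steps (1) and (2) follows: warp the ground plane to a top-down view and count plants from the column-wise projection histogram of a binary vegetation mask. The corner points, image sizes, and peak parameters are placeholders, not the calibration that the system estimates automatically.

```python
import numpy as np
import cv2
from scipy.signal import find_peaks

# Orthographic (top-down) projection of the ground plane followed by projection-histogram counting.
src = np.float32([[100, 400], [540, 400], [620, 470], [20, 470]])   # assumed ground quad in the photo
dst = np.float32([[0, 0], [600, 0], [600, 100], [0, 100]])           # top-down rectangle
H = cv2.getPerspectiveTransform(src, dst)

mask = np.zeros((480, 640), dtype=np.uint8)            # placeholder binary plant mask
ortho = cv2.warpPerspective(mask, H, (600, 100))

profile = ortho.sum(axis=0).astype(float)               # projection histogram along the row
peaks, _ = find_peaks(profile, distance=20, height=1)   # each peak ~ one seedling
print(len(peaks), "plants counted in this image")
```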
Understanding the Physical Optics Phenomena by Using a Digital Application for Light Propagation
NASA Astrophysics Data System (ADS)
Sierra-Sosa, Daniel-Esteban; Ángel-Toro, Luciano
2011-01-01
Understanding light propagation on the basis of the Huygens-Fresnel principle is a fundamental factor for deeper comprehension of different physical-optics-related phenomena like diffraction, self-imaging, image formation, Fourier analysis and spatial filtering. This constitutes the physical approach of Fourier optics, whose principles and applications have been developed since the 1950s. Both for analytical and digital application purposes, light propagation can be formulated in terms of the Fresnel integral transform. In this work, a digital optics application based on the implementation of the Discrete Fresnel Transform (DFT), and designed to serve as a tool for applications in the didactics of optics, is presented. This tool allows, at a basic and intermediate learning level, exercising with the identification of basic phenomena and observing changes associated with modifications of physical parameters. This is achieved by using a friendly graphic user interface (GUI). It also assists users in developing their capacity for abstracting and predicting the characteristics of more complicated phenomena. At an upper level of learning, the application can be used to favor a deeper comprehension of the physics and models involved, and to experiment with new models and configurations. To achieve this, two characteristics of the didactic tool were taken into account when designing it. First, all physical operations, ranging from simple diffraction experiments to digital holography and interferometry, were developed on the basis of the more fundamental concept of light propagation. Second, the algorithm was conceived to be easily upgradable due to its modular architecture based on the MATLAB® software environment. Typical results are presented and briefly discussed in connection with the didactics of optics.
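A minimal sketch of the kind of discrete Fresnel propagation such a didactic tool implements (the transfer-function form, under the paraxial approximation) is given below in Python for illustration; the tool itself is MATLAB-based, and the aperture, wavelength, and propagation distance are arbitrary example values.

```python
import numpy as np

def fresnel_propagate(u0, wavelength, z, dx):
    """Propagate a sampled field u0 a distance z using the Fresnel transfer function
    H(fx, fy) = exp(i*k*z) * exp(-i*pi*lambda*z*(fx^2 + fy^2)) applied in the Fourier domain."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(1j * 2 * np.pi / wavelength * z) * np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# Example: diffraction of a square aperture illuminated by a plane wave.
n, dx, lam, z = 512, 10e-6, 633e-9, 0.05              # 10 um sampling, HeNe wavelength, 5 cm
aperture = np.zeros((n, n)); aperture[n//2-25:n//2+25, n//2-25:n//2+25] = 1.0
intensity = np.abs(fresnel_propagate(aperture, lam, z, dx))**2
```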
Correction of projective distortion in long-image-sequence mosaics without prior information
NASA Astrophysics Data System (ADS)
Yang, Chenhui; Mao, Hongwei; Abousleman, Glen; Si, Jennie
2010-04-01
Image mosaicking is the process of piecing together multiple video frames or still images from a moving camera to form a wide-area or panoramic view of the scene being imaged. Mosaics have widespread applications in many areas such as security surveillance, remote sensing, geographical exploration, agricultural field surveillance, virtual reality, digital video, and medical image analysis, among others. When mosaicking a large number of still images or video frames, the quality of the resulting mosaic is compromised by projective distortion. That is, during the mosaicking process, the image frames that are transformed and pasted to the mosaic become significantly scaled down and appear out of proportion with respect to the mosaic. As more frames continue to be transformed, important target information in the frames can be lost since the transformed frames become too small, which eventually leads to the inability to continue further. Some projective distortion correction techniques make use of prior information such as GPS information embedded within the image, or camera internal and external parameters. Alternatively, this paper proposes a new algorithm to reduce the projective distortion without using any prior information whatsoever. Based on the analysis of the projective distortion, we approximate the projective matrix that describes the transformation between image frames using an affine model. Using singular value decomposition, we can deduce the affine model scaling factor that is usually very close to 1. By resetting the image scale of the affine model to 1, the transformed image size remains unchanged. Even though the proposed correction introduces some error in the image matching, this error is typically acceptable and more importantly, the final mosaic preserves the original image size after transformation. We demonstrate the effectiveness of this new correction algorithm on two real-world unmanned air vehicle (UAV) sequences. The proposed method is shown to be effective and suitable for real-time implementation.
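One way to read the scale-reset idea is sketched below: take the affine approximation of the frame-to-mosaic transform, factor out the overall scale via the singular values of its 2x2 linear part, and set that scale back to 1 so pasted frames keep their size. The matrix is a placeholder, not an estimate from the UAV sequences.

```python
import numpy as np

def reset_affine_scale(affine):
    """Divide a 2x3 affine transform by the overall scale of its 2x2 linear part
    (geometric mean of the singular values) so the pasted frame keeps its original size."""
    A = affine[:, :2]
    s = np.linalg.svd(A, compute_uv=False)
    scale = np.sqrt(s[0] * s[1])                     # overall scaling factor, usually close to 1
    corrected = affine.copy()
    corrected[:, :2] = A / scale
    return corrected, scale

affine = np.array([[0.97, 0.05, 12.0],
                   [-0.04, 0.96, -7.5]])             # placeholder frame-to-mosaic estimate
corrected, scale = reset_affine_scale(affine)
print(scale, "\n", corrected)
```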
Liu, Da; Xu, Ming; Niu, Dongxiao; Wang, Shoukai; Liang, Sai
2016-01-01
Traditional forecasting models fit a function approximation from independent variables to dependent variables. However, they usually get into trouble when data are presented in various formats, such as text, voice and image. This study proposes a novel image-encoded forecasting method in which the input and output are binary digital two-dimensional (2D) images transformed from decimal data. Omitting any data analysis or cleansing steps for simplicity, all raw variables were selected and converted to binary digital images as the input of a deep learning model, a convolutional neural network (CNN). Using shared weights, pooling and multiple-layer back-propagation techniques, the CNN was adopted to locate the nexus among variations in local binary digital images. Due to the computing capability that was originally developed for binary digital bitmap manipulation, this model has significant potential for forecasting with vast volumes of data. The model was validated on a power-load forecasting dataset from the Global Energy Forecasting Competition 2012.
Animal-Free Chemical Safety Assessment
Loizou, George D.
2016-01-01
The exponential growth of the Internet of Things and the global popularity and remarkable decline in cost of the mobile phone is driving the digital transformation of medical practice. The rapidly maturing digital, non-medical world of mobile (wireless) devices, cloud computing and social networking is coalescing with the emerging digital medical world of omics data, biosensors and advanced imaging which offers the increasingly realistic prospect of personalized medicine. Described as a potential “seismic” shift from the current “healthcare” model to a “wellness” paradigm that is predictive, preventative, personalized and participatory, this change is based on the development of increasingly sophisticated biosensors which can track and measure key biochemical variables in people. Additional key drivers in this shift are metabolomic and proteomic signatures, which are increasingly being reported as pre-symptomatic, diagnostic and prognostic of toxicity and disease. These advancements also have profound implications for toxicological evaluation and safety assessment of pharmaceuticals and environmental chemicals. An approach based primarily on human in vivo and high-throughput in vitro human cell-line data is a distinct possibility. This would transform current chemical safety assessment practice which operates in a human “data poor” to a human “data rich” environment. This could also lead to a seismic shift from the current animal-based to an animal-free chemical safety assessment paradigm. PMID:27493630
Artifacts in Radar Imaging of Moving Targets
2012-09-01
Transforming Dermatologic Imaging for the Digital Era: Metadata and Standards.
Caffery, Liam J; Clunie, David; Curiel-Lewandrowski, Clara; Malvehy, Josep; Soyer, H Peter; Halpern, Allan C
2018-01-17
Imaging is increasingly being used in dermatology for documentation, diagnosis, and management of cutaneous disease. The lack of standards for dermatologic imaging is an impediment to clinical uptake. Standardization can occur in image acquisition, terminology, interoperability, and metadata. This paper presents the International Skin Imaging Collaboration position on standardization of metadata for dermatologic imaging. Metadata is essential to ensure that dermatologic images are properly managed and interpreted. There are two standards-based approaches to recording and storing metadata in dermatologic imaging. The first uses standard consumer image file formats, and the second is the file format and metadata model developed for the Digital Imaging and Communication in Medicine (DICOM) standard. DICOM would appear to provide an advantage over consumer image file formats for metadata, as it includes all the patient, study, and technical metadata necessary to use images clinically, whereas consumer image file formats include only technical metadata and need to be used in conjunction with another actor (for example, an electronic medical record) to supply the patient and study metadata. The use of DICOM may have some ancillary benefits in dermatologic imaging, including leveraging DICOM network and workflow services, interoperability of images and metadata, leveraging existing enterprise imaging infrastructure, greater patient safety, and better compliance with legislative requirements for image retention.
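As a small illustration of the difference discussed above, the sketch below reads patient, study, and technical metadata from a DICOM object with the pydicom library and contrasts it with a consumer format opened through Pillow, which carries only technical metadata. File names are hypothetical and the attribute selection is merely indicative.

```python
import pydicom
from PIL import Image

# DICOM: patient, study and technical metadata travel inside the file itself.
ds = pydicom.dcmread("dermoscopy_example.dcm")           # hypothetical file name
print(ds.PatientID, ds.StudyDate, ds.Modality)           # patient/study metadata
print(ds.Rows, ds.Columns, ds.PhotometricInterpretation) # technical metadata

# Consumer format: only technical metadata; patient and study context must
# come from another actor such as an electronic medical record.
img = Image.open("dermoscopy_example.jpg")               # hypothetical file name
print(img.size, img.mode)
```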
A 2D Fourier tool for the analysis of photo-elastic effect in large granular assemblies
NASA Astrophysics Data System (ADS)
Leśniewska, Danuta
2017-06-01
Fourier transforms are the basic tool in constructing different types of image filters, mainly those reducing optical noise. Some DIC or PIV software also uses frequency space to obtain displacement fields from a series of digital images of a deforming body. The paper presents a series of 2D Fourier transforms of photo-elastic transmission images, representing a large pseudo-2D granular assembly deforming under varying boundary conditions. The images, related to different scales, were acquired using the same image resolution but taken at different distances from the sample. Fourier transforms of images representing different stages of deformation reveal characteristic features at the three ("macro-", "meso-" and "micro-") scales, which can serve as data for studying the internal order-disorder transition within granular materials.
Personal Fabrication Systems: From Bits to Atoms
ERIC Educational Resources Information Center
Bull, Glen; Garofalo, Joe
2009-01-01
Media--text, images, audio, and video--underwent a transformation from analog to digital formats during the transition from the 20th to the 21st century. Digital media can easily be replicated, downloaded, revised, edited, and reposted, and the implications of this are affecting education, government, entertainment, culture, and society. The…
NASA Astrophysics Data System (ADS)
Avbelj, Janja; Iwaszczuk, Dorota; Müller, Rupert; Reinartz, Peter; Stilla, Uwe
2015-02-01
For image fusion in remote sensing applications, the georeferencing accuracy obtained from position, attitude, and camera calibration measurements can be insufficient. Thus, image processing techniques should be employed for precise coregistration of images. In this article, a method for multimodal object-based image coregistration refinement between hyperspectral images (HSI) and digital surface models (DSM) is presented. The method is divided into three parts: object outline detection in HSI and DSM, matching, and determination of transformation parameters. The novelty of our proposed coregistration refinement method is the use of material properties and height information of urban objects from HSI and DSM, respectively. We refer to urban objects as objects which are typical in urban environments and focus on buildings by describing them with 2D outlines. Furthermore, the geometric accuracy of these detected building outlines is taken into account in the matching step and for the determination of the transformation parameters. Hence, a stochastic model is introduced to compute optimal transformation parameters. The feasibility of the method is shown by testing it on two aerial HSI of different spatial and spectral resolution, and two DSM of different spatial resolution. The evaluation is carried out by comparing the accuracies of the transformation parameters to the reference parameters, determined by considering object outlines at much higher resolution, and also by computing the correctness and the quality rate of the extracted outlines before and after coregistration refinement. Results indicate that using outlines of objects instead of only line segments is advantageous for coregistration of HSI and DSM. The extraction of building outlines, in comparison to line cue extraction, provides a larger number of assigned lines between the images and is more robust to outliers, i.e. false matches.
Coherent time-stretch transformation for real-time capture of wideband signals.
Buckley, Brandon W; Madni, Asad M; Jalali, Bahram
2013-09-09
Time stretch transformation of wideband waveforms boosts the performance of analog-to-digital converters and digital signal processors by slowing down analog electrical signals before digitization. The transform is based on dispersive Fourier transformation implemented in the optical domain. A coherent receiver would be ideal for capturing the time-stretched optical signal. Coherent receivers offer improved sensitivity, allow for digital cancellation of dispersion-induced impairments and optical nonlinearities, and enable decoding of phase-modulated optical data formats. Because time-stretch uses a chirped broadband (>1 THz) optical carrier, a new coherent detection technique is required. In this paper, we introduce and demonstrate coherent time stretch transformation; a technique that combines dispersive Fourier transform with optically broadband coherent detection.
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Novo, E. M. L. M.
1983-01-01
The effects of the seasonal variation of illumination on digital processing of LANDSAT images are evaluated. Two sets of LANDSAT data referring to orbit 150 and row 28 were selected, with illumination parameters varying from 43 deg to 64 deg for azimuth and from 30 deg to 36 deg for solar elevation, respectively. The IMAGE-100 system permitted the digital processing of the LANDSAT data. The original images were transformed by means of digital filtering so as to enhance their spatial features. The resulting images were used to obtain an unsupervised classification of relief units. Topographic variables (declivity, altitude, relief range and slope length) were used to identify the true relief units existing on the ground. The LANDSAT overpass data show that digital processing is highly affected by illumination geometry, and there is no correspondence between relief units as defined by spectral features and those resulting from topographic features.
Exploring of PST-TBPM in Monitoring Dynamic Deformation of Steel Structure in Vibration
NASA Astrophysics Data System (ADS)
Chen, Mingzhi; Zhao, Yongqian; Hai, Hua; Yu, Chengxin; Zhang, Guojian
2018-01-01
In order to monitor the dynamic deformation of a steel structure in real time, digital photography is used in this paper. Firstly, the grid method is used to correct the distortion of the digital camera. Then the digital cameras are used to capture the initial and experimental images of the steel structure to obtain its relative deformation. PST-TBPM (photographing scale transformation-time baseline parallax method) is used to eliminate the parallax error and convert the pixel change value of deformation points into the actual displacement value. In order to visualize the deformation trend of the steel structure, deformation curves are drawn based on the deformation values of the deformation points. Results show that the average absolute accuracy and relative accuracy of PST-TBPM are 0.28 mm and 1.1‰, respectively. Digital photography used in this study can meet the accuracy requirements of steel structure deformation monitoring. It can also provide safety warnings for the steel structure and data support for managers' on-site safety decisions based on the deformation curves.
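The scale-transformation step can be illustrated with a minimal sketch: a reference target of known length fixes the photographing scale (mm per pixel), which converts the tracked pixel shift of a deformation point into a displacement. The full PST-TBPM additionally corrects the time-baseline parallax, which is not shown here; all numbers are illustrative.

```python
import numpy as np

def pixel_to_displacement(p_ref, p_cur, mm_per_pixel):
    """Convert the pixel shift of a tracked deformation point into millimetres.

    mm_per_pixel is the photographing scale, e.g. obtained from a reference
    target of known physical length imaged on the structure.
    """
    shift_px = np.asarray(p_cur, float) - np.asarray(p_ref, float)
    return shift_px * mm_per_pixel

# Illustrative numbers: a 120 mm target spanning 800 pixels fixes the scale.
scale = 120.0 / 800.0                           # mm per pixel
d = pixel_to_displacement((412.0, 230.0), (412.0, 233.5), scale)
```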
Solution for the nonuniformity correction of infrared focal plane arrays.
Zhou, Huixin; Liu, Shangqian; Lai, Rui; Wang, Dabao; Cheng, Yubao
2005-05-20
Based on the S-curve model of the detector response of infrared focal plane arrays (IRFPAs), an improved two-point correction algorithm is presented. The algorithm first transforms the nonlinear image data into linear data and then uses the normal two-point algorithm to correct the linear data. The algorithm can effectively overcome the influence of the nonlinearity of the detector's response, and it improves the correction precision and enlarges the dynamic range of the response. A real-time imaging-signal-processing system for IRFPAs based on a digital signal processor and field-programmable gate arrays is also presented. The nonuniformity correction capability of the presented solution is validated by experimental imaging procedures with a 128 x 128 pixel IRFPA camera prototype.
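The standard two-point correction that follows the linearisation step can be sketched as below: per-pixel gain and offset are derived from frames captured at two uniform reference levels. The S-curve linearising transform that is the paper's actual contribution is not reproduced here; the synthetic data and reference temperatures are purely illustrative.

```python
import numpy as np

def two_point_nuc(frames_low, frames_high, t_low, t_high):
    """Per-pixel gain/offset from two uniform blackbody references.

    frames_low/high: stacks of raw frames at the low/high reference, shape (k, H, W).
    Returns gain and offset so that corrected = gain * raw + offset.
    """
    r_low = frames_low.mean(axis=0)
    r_high = frames_high.mean(axis=0)
    gain = (t_high - t_low) / (r_high - r_low)
    offset = t_low - gain * r_low
    return gain, offset

def correct(raw, gain, offset):
    return gain * raw + offset

# Illustrative synthetic 128x128 IRFPA reference data
rng = np.random.default_rng(0)
lo = 1000 + 50 * rng.standard_normal((10, 128, 128))
hi = 3000 + 50 * rng.standard_normal((10, 128, 128))
gain, offset = two_point_nuc(lo, hi, t_low=20.0, t_high=40.0)
```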
Nelson, Danielle; Ziv, Amitai; Bandali, Karim S
2012-10-01
The recent technological advance of digital high resolution imaging has allowed the field of pathology and medical laboratory science to undergo a dramatic transformation with the incorporation of virtual microscopy as a simulation-based educational and diagnostic tool. This transformation has correlated with an overall increase in the use of simulation in medicine in an effort to address dwindling clinical resource availability and patient safety issues currently facing the modern healthcare system. Virtual microscopy represents one such simulation-based technology that has the potential to enhance student learning and readiness to practice while revolutionising the ability to clinically diagnose pathology collaboratively across the world. While understanding that a substantial amount of literature already exists on virtual microscopy, much more research is still required to elucidate the full capabilities of this technology. This review explores the use of virtual microscopy in medical education and disease diagnosis with a unique focus on key requirements needed to take this technology to the next level in its use in medical education and clinical practice.
Quantitative phase imaging of living cells with a swept laser source
NASA Astrophysics Data System (ADS)
Chen, Shichao; Zhu, Yizheng
2016-03-01
Digital holographic phase microscopy is a well-established quantitative phase imaging technique. However, interference artifacts from inside the system, typically induced by elements whose optical thickness is within the source coherence length, limit the imaging quality as well as the sensitivity. In this paper, a swept-laser-source-based technique is presented. Spectra acquired at a number of wavelengths can, after Fourier transformation, be used to identify the sources of the interference artifacts. With proper tuning of the optical pathlength difference between the sample and reference arms, it is possible to avoid these artifacts and achieve sensitivity below 0.3 nm. The performance of the proposed technique is examined in live cell imaging.
NASA Astrophysics Data System (ADS)
Fernández, Ariel; Ferrari, José A.
2017-05-01
Pattern recognition and feature extraction are image processing applications of great interest in defect inspection and robot vision, among others. In comparison to purely digital methods, the attractiveness of optical processors for pattern recognition lies in their highly parallel operation and real-time processing capability. This work presents an optical implementation of the generalized Hough transform (GHT), a well-established technique for the recognition of geometrical features in binary images. Detection of a geometric feature under the GHT is accomplished by mapping the original image to an accumulator space; the large computational requirements of this mapping make the optical implementation an attractive alternative to digital-only methods. We explore an optical setup where the transformation is obtained and the size and orientation parameters can be controlled, allowing for dynamic scale- and orientation-variant pattern recognition. A compact system for the above purposes results from the use of an electrically tunable lens for scale control and a pupil mask implemented on a high-contrast spatial light modulator for orientation/shape variation of the template. Real-time operation can also be achieved. In addition, by thresholding the GHT and optically inverse transforming, the previously detected features of interest can be extracted.
Urani, C; Corvi, R; Callegaro, G; Stefanini, F M
2013-09-01
In vitro cell transformation assays (CTAs) have been shown to model important stages of in vivo carcinogenesis and have the potential to predict carcinogenicity in humans. Advantages of CTAs are their ability to reveal both genotoxic and non-genotoxic carcinogens while reducing both experimental costs and the number of animals used. The endpoint of the CTA is foci formation, which requires classification under light microscopy based on morphology. Thus, current limitations to the wide adoption of the assay partially depend on a fair degree of subjectivity in foci scoring. An objective evaluation may be obtained by separating foci from the background monolayer in the digital image and quantifying values of statistical descriptors selected to capture eye-scored morphological features. The aim of this study was to develop statistical descriptors to be applied to transformed foci of BALB/c 3T3 cells, covering foci size, multilayering and invasive cell growth into the background monolayer. The proposed descriptors were applied to a database of 407 foci images to explore their numerical features, and to illustrate open problems and potential solutions. Copyright © 2013 Elsevier Ltd. All rights reserved.
Techniques of noninvasive optical tomographic imaging
NASA Astrophysics Data System (ADS)
Rosen, Joseph; Abookasis, David; Gokhler, Mark
2006-01-01
Recently invented methods of optical tomographic imaging through scattering and absorbing media are presented. In one method, the three-dimensional structure of an object hidden between two biological tissues is recovered from many noisy speckle pictures obtained at the output of a multi-channel optical imaging system. Objects are recovered from many speckled images observed by a digital camera through two stereoscopic microlens arrays. Each microlens in each array generates a speckle image of the object buried between the layers. In the computer, each image is Fourier transformed jointly with an image of the speckled point-like source captured under the same conditions. A set of the squared magnitudes of the Fourier-transformed pictures is accumulated to form a single average picture. This final picture is again Fourier transformed, resulting in the three-dimensional reconstruction of the hidden object. In the other method, the effect of spatial longitudinal coherence is used for imaging through an absorbing layer with varying thickness, or varying index of refraction, along the layer. The technique is based on the synthesis of a multiple-peak spatial degree of coherence. This degree of coherence enables us to scan simultaneously different sample points at different altitudes and thus decreases the acquisition time. The same multi-peak degree of coherence is also used for imaging through the absorbing layer. All our experiments are performed with a quasi-monochromatic light source; therefore, problems of dispersion and inhomogeneous absorption are avoided.
Entropy based quantification of Ki-67 positive cell images and its evaluation by a reader study
NASA Astrophysics Data System (ADS)
Niazi, M. Khalid Khan; Pennell, Michael; Elkins, Camille; Hemminger, Jessica; Jin, Ming; Kirby, Sean; Kurt, Habibe; Miller, Barrie; Plocharczyk, Elizabeth; Roth, Rachel; Ziegler, Rebecca; Shana'ah, Arwa; Racke, Fred; Lozanski, Gerard; Gurcan, Metin N.
2013-03-01
Presence of Ki-67, a nuclear protein, is typically used to measure cell proliferation. The quantification of the Ki-67 proliferation index is performed visually by the pathologist; however, this is subject to inter- and intra-reader variability. Automated techniques utilizing digital image analysis by computers have emerged. The large variations in specimen preparation, staining, and imaging, as well as the true biological heterogeneity of tumor tissue, often result in variable intensities in Ki-67 stained images. These variations affect the performance of currently developed methods. To optimize the segmentation of Ki-67 stained cells, one should define a data-dependent transformation that accounts for these color variations instead of a fixed linear transformation to separate different hues. To address these issues in images of tissue stained with Ki-67, we propose a methodology that exploits the intrinsic properties of the CIE L*a*b* color space to translate this complex problem into an automatic entropy-based thresholding problem. The developed method was evaluated through two reader studies with pathology residents and expert hematopathologists. Agreement between the proposed method and the expert pathologists was good (CCC = 0.80).
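A generic version of the thresholding idea can be sketched as follows: compute a Kapur-style maximum-entropy threshold on a single CIELAB channel (for example the a* channel obtained from skimage.color.rgb2lab), so the cut adapts to the intensity distribution of each image rather than being fixed. This is only an illustrative stand-in for the paper's method; the synthetic bimodal data below replaces a real stained image.

```python
import numpy as np

def max_entropy_threshold(channel, bins=256):
    """Kapur-style maximum-entropy threshold on a single channel."""
    hist, edges = np.histogram(channel, bins=bins)
    p = hist.astype(float) / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, bins - 1):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        h0 = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0]))   # background entropy
        h1 = -np.sum(q1[q1 > 0] * np.log(q1[q1 > 0]))   # foreground entropy
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return edges[best_t]

# Synthetic bimodal channel standing in for the a* channel of a stained image,
# e.g. skimage.color.rgb2lab(rgb)[..., 1].
rng = np.random.default_rng(1)
a_star = np.concatenate([rng.normal(5, 2, 5000), rng.normal(25, 3, 2000)])
mask = a_star > max_entropy_threshold(a_star)
```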
CAD - CAM Procedures Used for Rapid Prototyping of Prosthetic Hip Joint Bone
NASA Astrophysics Data System (ADS)
Popa, Luminita I.; Popa, Vasile N.
2016-11-01
The article addresses rapid prototyping CAD/CAM procedures, based on CT imaging, for custom implants dedicated to hip arthroplasty, and the correlation study between the femoral canal shape, evaluated by modern imaging methods, and the prosthesis form. A set of CT images is transformed into a digital model using one of several software packages available for conversion. The purpose of the research is to obtain a prosthesis with characteristics as close as possible to the physiological ones, with an optimal fit of the prosthesis to the bone in which it is implanted, allowing the physical, mental and social recovery of the patient.
Callegaro, Giulia; Malkoc, Kasja; Corvi, Raffaella; Urani, Chiara; Stefanini, Federico M
2017-12-01
The identification of the carcinogenic risk of chemicals is currently mainly based on animal studies. The in vitro Cell Transformation Assays (CTAs) are a promising alternative to be considered in an integrated approach. CTAs measure the induction of foci of transformed cells. CTAs model key stages of the in vivo neoplastic process and are able to detect both genotoxic and some non-genotoxic compounds, being the only in vitro method able to deal with the latter. Despite their favorable features, CTAs can be further improved, especially reducing the possible subjectivity arising from the last phase of the protocol, namely visual scoring of foci using coded morphological features. By taking advantage of digital image analysis, the aim of our work is to translate morphological features into statistical descriptors of foci images, and to use them to mimic the classification performances of the visual scorer to discriminate between transformed and non-transformed foci. Here we present a classifier based on five descriptors trained on a dataset of 1364 foci, obtained with different compounds and concentrations. Our classifier showed accuracy, sensitivity and specificity equal to 0.77 and an area under the curve (AUC) of 0.84. The presented classifier outperforms a previously published model. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Real-Time System for Lane Detection Based on FPGA and DSP
NASA Astrophysics Data System (ADS)
Xiao, Jing; Li, Shutao; Sun, Bin
2016-12-01
This paper presents a real-time lane detection system, including an edge detection and improved Hough Transform based lane detection algorithm and its hardware implementation with a field programmable gate array (FPGA) and a digital signal processor (DSP). Firstly, gradient amplitude and direction information are combined to extract lane edge information. Then, this information is used to determine the region of interest. Finally, the lanes are extracted using the improved Hough Transform. The image processing module of the system consists of the FPGA and the DSP. In particular, the algorithms implemented in the FPGA are pipelined and process data in parallel so that the system can run in real time. In addition, the DSP realizes lane line extraction and display functions with the improved Hough Transform. The experimental results show that the proposed system is able to detect lanes under different road conditions efficiently and effectively.
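A software analogue of the processing chain (edge map, region of interest, Hough-based line extraction) can be sketched with OpenCV as below. The paper's FPGA/DSP pipeline and its improved Hough transform are not reproduced; thresholds, the ROI choice, and the test image name are assumptions.

```python
import cv2
import numpy as np

def detect_lanes(frame_bgr):
    """Edge detection + probabilistic Hough transform on a road-region ROI."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                 # gradient-based edge map
    h, w = edges.shape
    roi = np.zeros_like(edges)
    roi[h // 2:, :] = 255                            # keep the lower half of the image
    edges = cv2.bitwise_and(edges, roi)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=40, minLineLength=40, maxLineGap=20)
    return [] if lines is None else lines[:, 0, :]   # (x1, y1, x2, y2) per line

# frame = cv2.imread("road_frame.png")               # hypothetical test frame
# for x1, y1, x2, y2 in detect_lanes(frame):
#     cv2.line(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
```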
Low photon count based digital holography for quadratic phase cryptography.
Muniraj, Inbarasan; Guo, Changliang; Malallah, Ra'ed; Ryle, James P; Healy, John J; Lee, Byung-Geun; Sheridan, John T
2017-07-15
Recently, the vulnerability of the linear canonical transform-based double random phase encryption system to attack has been demonstrated. To alleviate this, we present for the first time, to the best of our knowledge, a method for securing a two-dimensional scene using a quadratic phase encoding system operating in the photon-counted imaging (PCI) regime. Position-phase-shifting digital holography is applied to record the photon-limited encrypted complex samples. The reconstruction of the complex wavefront involves four sparse (undersampled) dataset intensity measurements (interferograms) at two different positions. Computer simulations validate that the photon-limited sparse-encrypted data has adequate information to authenticate the original data set. Finally, security analysis, employing iterative phase retrieval attacks, has been performed.
Virtual slides in peer reviewed, open access medical publication.
Kayser, Klaus; Borkenfeld, Stephan; Goldmann, Torsten; Kayser, Gian
2011-12-19
The application of virtual slides (VS), the digitization of complete glass slides, is still in its infancy, both in routine diagnostic surgical pathology and in issues related to tissue-based diagnosis, such as education and scientific publication. Electronic publication offers new features of scientific communication in pathology that cannot be obtained with conventional paper-based journals. Most of these features are based upon completely open or partly directed interaction between the reader and the system that distributes the article. One of these interactions can be applied to microscopic images, allowing the reader to navigate and magnify the presented images. VS and interactive Virtual Microscopy (VM) are a tool to increase the scientific value of microscopic images. The open access journal Diagnostic Pathology http://www.diagnosticpathology.org has existed for about five years. It is a peer reviewed journal that publishes all types of scientific contributions, including original scientific work, case reports and review articles. In addition to digitized still images, the authors of appropriate articles are requested to submit the underlying glass slides to an institution (DiagnomX.eu and Leica.com) for digitization and documentation. The images are stored in a separate image data bank which is adequately linked to the article. The normal review process is not involved. Both processes (peer review and VS acquisition) are performed contemporaneously in order to minimize a potential publication delay. VS are not provided with a DOI (digital object identifier). The first articles that include VS were published in March 2011. Several logistic constraints had to be overcome until the first articles including VS could be published. Step by step, an automated acquisition and distribution system linked to the corresponding article had to be implemented. The acceptance of VS is high among both readers and authors. Of specific value are the increased confidence in and reputation of the authors, as well as the information presented to the reader. Additional associated functions such as access to author-owned related image collections, reader-controlled automated image measurements and image transformations are in preparation. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1232133347629819.
NASA Astrophysics Data System (ADS)
Wang, Zhongpeng; Chen, Fangni; Qiu, Weiwei; Chen, Shoufa; Ren, Dongxiao
2018-03-01
In this paper, a two-layer image encryption scheme for a discrete cosine transform (DCT) precoded orthogonal frequency division multiplexing (OFDM) visible light communication (VLC) system is proposed. Firstly, in the proposed scheme the transmitted image is encrypted by a chaos scrambling sequence, which is generated from a hybrid 4-D hyper-chaotic and Arnold map in the upper layer. After that, the encrypted image is converted into a digital QAM modulation signal, which is re-encrypted by a chaos scrambling sequence based on the Arnold map in the physical layer to further enhance the security of the transmitted image. Moreover, DCT precoding is employed to improve the BER performance of the proposed system and reduce the PAPR of the OFDM signal. The BER and PAPR performances of the proposed system are evaluated by simulation experiments. The experimental results show that the proposed two-layer chaos scrambling scheme achieves secure image transmission for image-based OFDM VLC. Furthermore, DCT precoding can reduce the PAPR and improve the BER performance of OFDM-based VLC.
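The Arnold-map permutation layer used for scrambling can be sketched in a few lines; the 4-D hyper-chaotic sequence, QAM mapping, and DCT-precoded OFDM parts of the proposed system are not shown. This is an illustrative NumPy sketch of the standard Arnold cat map on a square image, not the authors' implementation.

```python
import numpy as np

def arnold_scramble(img, iterations=1):
    """Scramble a square image with the Arnold cat map:
    (x, y) -> (x + y, x + 2y) mod N applied to pixel coordinates."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold map needs a square image"
    out = img.copy()
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        out = out[(x + y) % n, (x + 2 * y) % n]
    return out

def arnold_unscramble(img, iterations=1):
    """Inverse map: (x, y) -> (2x - y, -x + y) mod N."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        out = out[(2 * x - y) % n, (y - x) % n]
    return out

# Round-trip check on a random 64x64 image
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64))
assert np.array_equal(arnold_unscramble(arnold_scramble(img, 3), 3), img)
```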
Aquino, Arturo; Gegundez-Arias, Manuel Emilio; Marin, Diego
2010-11-01
Optic disc (OD) detection is an important step in developing systems for automated diagnosis of various serious ophthalmic pathologies. This paper presents a new template-based methodology for segmenting the OD from digital retinal images. This methodology uses morphological and edge detection techniques followed by the Circular Hough Transform to obtain a circular OD boundary approximation. It requires a pixel located within the OD as initial information. For this purpose, a location methodology based on a voting-type algorithm is also proposed. The algorithms were evaluated on the 1200 images of the publicly available MESSIDOR database. The location procedure succeeded in 99% of cases, taking an average computational time of 1.67 s with a standard deviation of 0.14 s. On the other hand, the segmentation algorithm rendered an average common overlapping area between automated segmentations and true OD regions of 86%. The average computational time was 5.69 s with a standard deviation of 0.54 s. Moreover, a discussion of the advantages and disadvantages of the models more generally used for OD segmentation is also presented in this paper.
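The Circular Hough step can be illustrated with OpenCV's cv2.HoughCircles, as sketched below; the morphological preprocessing and voting-based OD location of the paper are omitted, and all parameter values (radii, thresholds, file name) are assumptions rather than the values used on MESSIDOR.

```python
import cv2
import numpy as np

def locate_od_candidates(fundus_bgr):
    """Circular Hough transform on a smoothed fundus image; returns (x, y, r) triples."""
    gray = cv2.cvtColor(fundus_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)                  # suppress vessel noise
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=2,
                               minDist=200, param1=80, param2=40,
                               minRadius=40, maxRadius=120)
    return [] if circles is None else np.round(circles[0]).astype(int)

# img = cv2.imread("fundus_sample.png")             # hypothetical retinal image
# for x, y, r in locate_od_candidates(img):
#     cv2.circle(img, (x, y), r, (0, 255, 0), 2)
```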
Forensic hash for multimedia information
NASA Astrophysics Data System (ADS)
Lu, Wenjun; Varna, Avinash L.; Wu, Min
2010-01-01
Digital multimedia such as images and videos are prevalent on today's internet and cause significant social impact, which can be evidenced by the proliferation of social networking sites with user generated contents. Due to the ease of generating and modifying images and videos, it is critical to establish trustworthiness for online multimedia information. In this paper, we propose novel approaches to perform multimedia forensics using compact side information to reconstruct the processing history of a document. We refer to this as FASHION, standing for Forensic hASH for informatION assurance. Based on the Radon transform and scale space theory, the proposed forensic hash is compact and can effectively estimate the parameters of geometric transforms and detect local tampering that an image may have undergone. Forensic hash is designed to answer a broader range of questions regarding the processing history of multimedia data than the simple binary decision from traditional robust image hashing, and also offers more efficient and accurate forensic analysis than multimedia forensic techniques that do not use any side information.
Image recognition of clipped stigma traces in rice seeds
NASA Astrophysics Data System (ADS)
Cheng, F.; Ying, YB
2005-11-01
The objective of this research is to develop an algorithm to recognize clipped stigma traces in rice seeds using image processing. At first, the micro-configuration of clipped stigma traces was observed with a scanning electron microscope. Then images of rice seeds were acquired with a color machine vision system. A digital image-processing algorithm based on morphological operations and the Hough transform was developed to inspect the occurrence of clipped stigma traces. Five varieties (Jinyou402, Shanyou10, Zhongyou207, Jiayou and you3207) were evaluated. The algorithm was implemented with all image sets using a Matlab 6.5 procedure. The results showed that the algorithm achieved an average accuracy of 96%. The algorithm proved to be insensitive to the different rice seed varieties.
Segmentation of the glottal space from laryngeal images using the watershed transform.
Osma-Ruiz, Víctor; Godino-Llorente, Juan I; Sáenz-Lechón, Nicolás; Fraile, Rubén
2008-04-01
The present work describes a new method for the automatic detection of the glottal space from laryngeal images obtained either with high speed or with conventional video cameras attached to a laryngoscope. The detection is based on the combination of several relevant techniques in the field of digital image processing. The image is segmented with a watershed transform followed by a region merging, while the final decision is taken using a simple linear predictor. This scheme has successfully segmented the glottal space in all the test images used. The method presented can be considered a generalist approach for the segmentation of the glottal space because, in contrast with other methods found in literature, this approach does not need either initialization or finding strict environmental conditions extracted from the images to be processed. Therefore, the main advantage is that the user does not have to outline the region of interest with a mouse click. In any case, some a priori knowledge about the glottal space is needed, but this a priori knowledge can be considered weak compared to the environmental conditions fixed in former works.
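A simplified marker-based watershed of the kind described can be sketched with scikit-image: a gradient image serves as the elevation map and crude dark/bright seeds replace the paper's region-merging and linear-predictor stages. The synthetic "glottal" image and the percentile thresholds are illustrative assumptions.

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

def segment_dark_region(gray):
    """Watershed on the gradient image, seeded by crude dark/bright markers."""
    gradient = sobel(gray)                          # elevation map
    markers = np.zeros_like(gray, dtype=int)
    markers[gray < np.percentile(gray, 10)] = 1     # dark seeds (candidate glottis)
    markers[gray > np.percentile(gray, 60)] = 2     # bright seeds (surrounding tissue)
    labels = watershed(gradient, markers)
    return labels == 1

# Synthetic example: a dark elliptical "glottal space" on a brighter background.
yy, xx = np.mgrid[0:200, 0:200]
gray = 0.8 - 0.6 * np.exp(-(((xx - 100) / 15.0) ** 2 + ((yy - 100) / 45.0) ** 2))
mask = segment_dark_region(gray)
```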
No-reference multiscale blur detection tool for content based image retrieval
NASA Astrophysics Data System (ADS)
Ezekiel, Soundararajan; Stocker, Russell; Harrity, Kyle; Alford, Mark; Ferris, David; Blasch, Erik; Gorniak, Mark
2014-06-01
In recent years, digital cameras have been widely used for image capturing. These devices are equipped in cell phones, laptops, tablets, webcams, etc. Image quality is an important component of digital image analysis. To assess image quality for these mobile products, a standard image is normally required as a reference image; in that case, Root Mean Square Error and Peak Signal to Noise Ratio can be used to measure the quality of the images. However, these methods are not possible if there is no reference image. In our approach, a discrete wavelet transformation is applied to the blurred image, which decomposes it into the approximation image and three detail sub-images, namely the horizontal, vertical, and diagonal images. We then measure noise in the detail images and blur in the approximation image to assess image quality: we compute a noise mean and noise ratio from the detail images, and a blur mean and blur ratio from the approximation image. The Multi-scale Blur Detection (MBD) metric provides both an assessment of the noise and of the blur content. These values are weighted based on a linear regression against full-reference quality values. From these statistics, image quality can be compared to typical useful image statistics without needing a reference image. We then test the validity of the obtained weights by R2 analysis as well as by using them to estimate the quality of an image with a known quality measure. The results show that our method provides acceptable results for images containing low to mid noise levels and blur content.
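A minimal sketch of the wavelet decomposition and the kind of no-reference statistics described is given below using PyWavelets; the precise definitions of the noise/blur means and ratios and the regression weights are not given in the abstract, so the formulas here are illustrative stand-ins.

```python
import numpy as np
import pywt

def dwt_quality_stats(gray):
    """Single-level 2D DWT; noise statistics from the detail sub-bands and a
    blur statistic from the approximation sub-band (no reference image needed)."""
    cA, (cH, cV, cD) = pywt.dwt2(gray, "db2")
    details = np.concatenate([np.abs(cH).ravel(),
                              np.abs(cV).ravel(),
                              np.abs(cD).ravel()])
    noise_mean = details.mean()
    noise_ratio = (details > 2 * noise_mean).mean()   # fraction of strong detail coefficients
    grad = np.hypot(*np.gradient(cA))
    blur_mean = grad.mean()
    blur_ratio = (grad < 0.5 * blur_mean).mean()      # fraction of flat (blurred) regions
    return noise_mean, noise_ratio, blur_mean, blur_ratio

rng = np.random.default_rng(2)
stats = dwt_quality_stats(rng.random((256, 256)))
```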
Color reproduction software for a digital still camera
NASA Astrophysics Data System (ADS)
Lee, Bong S.; Park, Du-Sik; Nam, Byung D.
1998-04-01
We have developed color reproduction software for a digital still camera. The image taken by the camera was colorimetrically reproduced on the monitor after characterizing the camera and the monitor and performing color matching between the two devices. The reproduction was performed at three levels: level processing, gamma correction, and color transformation. The image contrast was increased after the level processing, which adjusts the levels of the dark and bright portions of the image. The relationship between the level-processed digital values and the measured luminance values of test gray samples was calculated, and the gamma of the camera was obtained. A method for obtaining the unknown monitor gamma was also proposed. As a result, the level-processed values were adjusted by the look-up table created from the camera and monitor gamma corrections. For the color transformation matrix of the camera, a 3-by-3 or 3-by-4 matrix was used, calculated by regression between the gamma-corrected values and the measured tristimulus values of each test color sample. The various reproduced images, generated according to four illuminants for the camera and three color temperatures for the monitor, were displayed in a dialogue box implemented in our software. A user can easily choose the best reproduced image by comparing them.
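The colour-transformation step can be sketched as a least-squares fit of a 3-by-3 matrix from gamma-corrected camera RGB patch values to measured tristimulus values; a 3-by-4 variant would simply append a column of ones to include an offset term. The patch data below are synthetic and the sketch illustrates only the regression, not the authors' software.

```python
import numpy as np

def fit_color_matrix(camera_rgb, target_xyz):
    """Least-squares 3x3 matrix mapping gamma-corrected camera RGB to target
    tristimulus values. camera_rgb and target_xyz are (N, 3) patch arrays."""
    M, _, _, _ = np.linalg.lstsq(camera_rgb, target_xyz, rcond=None)
    return M.T                                   # so that xyz ≈ M @ rgb per patch

# Illustrative patch data (e.g. from a colour chart); values are made up.
rng = np.random.default_rng(3)
rgb = rng.random((24, 3))
true_M = np.array([[0.9, 0.1, 0.0], [0.05, 0.85, 0.1], [0.0, 0.1, 0.9]])
xyz = rgb @ true_M.T + 0.01 * rng.standard_normal((24, 3))
M = fit_color_matrix(rgb, xyz)
```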
Optimized satellite image compression and reconstruction via evolution strategies
NASA Astrophysics Data System (ADS)
Babb, Brendan; Moore, Frank; Peterson, Michael
2009-05-01
This paper describes the automatic discovery, via an Evolution Strategy with Covariance Matrix Adaptation (CMA-ES), of vectors of real-valued coefficients representing matched forward and inverse transforms that outperform the 9/7 Cohen-Daubechies-Feauveau (CDF) discrete wavelet transform (DWT) for satellite image compression and reconstruction under conditions subject to quantization error. The best transform evolved during this study reduces the mean squared error (MSE) present in reconstructed satellite images by an average of 33.78% (1.79 dB), while maintaining the average information entropy (IE) of compressed images at 99.57% in comparison to the wavelet. In addition, this evolved transform achieves 49.88% (3.00 dB) average MSE reduction when tested on 80 images from the FBI fingerprint test set, and 42.35% (2.39 dB) average MSE reduction when tested on a set of 18 digital photographs, while achieving average IE of 104.36% and 100.08%, respectively. These results indicate that our evolved transform greatly improves the quality of reconstructed images without substantial loss of compression capability over a broad range of image classes.
Meng, Xianjing; Yin, Yilong; Yang, Gongping; Xi, Xiaoming
2013-07-18
Retinal identification, based on the retinal vasculature, provides the most secure and accurate means of authentication among biometrics and has primarily been used in combination with access control systems at high-security facilities. Recently, there has been much interest in retina identification. As digital retina images always suffer from deformations, the Scale Invariant Feature Transform (SIFT), which is known for its distinctiveness and invariance to scale and rotation, has been introduced to retina-based identification. However, some shortcomings, such as the difficulty of feature extraction and mismatching, exist in SIFT-based identification. To solve these problems, a novel preprocessing method based on the Improved Circular Gabor Transform (ICGF) is proposed. After further processing by the iterated spatial anisotropic smoothing method, the number of uninformative SIFT keypoints is decreased dramatically. Tested on the VARIA database and eight simulated retina databases combining rotation and scaling, the developed method presents promising results and shows robustness to rotations and scale changes.
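The SIFT matching stage can be sketched with OpenCV as below (keypoint detection, descriptor matching, ratio test); the ICGF preprocessing and iterated anisotropic smoothing that the paper adds to reduce uninformative keypoints are not shown, and the file names are hypothetical.

```python
import cv2

def match_retina(img1_gray, img2_gray, ratio=0.75):
    """SIFT keypoints + descriptor matching with a ratio test."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1_gray, None)
    kp2, des2 = sift.detectAndCompute(img2_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    return kp1, kp2, good

# a = cv2.imread("retina_enrolled.png", cv2.IMREAD_GRAYSCALE)   # hypothetical images
# b = cv2.imread("retina_probe.png", cv2.IMREAD_GRAYSCALE)
# _, _, good = match_retina(a, b)
# score = len(good)                                             # simple similarity score
```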
Digital interface of electronic transformers based on embedded system
NASA Astrophysics Data System (ADS)
Shang, Qiufeng; Qi, Yincheng
2008-10-01
With a digital interface for electronic transformers, information sharing and system integration in the substation can be realized. An embedded system-based digital output scheme for electronic transformers is proposed. The digital interface is designed with the S3C44B0X 32-bit RISC microprocessor as the hardware platform. The μClinux operating system (OS) is ported to the ARM7 (S3C44B0X). Applying Ethernet technology as the communication mode in the substation automation system is a new trend. The network interface chip RTL8019AS is adopted. Data transmission is realized through the built-in TCP/IP protocol stack of the μClinux embedded OS. The application results and characteristic analysis show that the design meets the real-time and reliability requirements of the IEC 60044-7/8 electronic voltage/current instrument transformer standards.
ERIC Educational Resources Information Center
McCullagh, John; Greenwood, Julian
2011-01-01
In this digital age, is primary science being left behind? Computer microscopes provide opportunities to transform science lessons into highly exciting learning experiences and to shift enquiry and discovery back into the hands of the children. A class of 5- and 6-year-olds was just one group of children involved in the Digitally Resourced…
Influence of video compression on the measurement error of the television system
NASA Astrophysics Data System (ADS)
Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.
2015-05-01
Video data require a very large memory capacity. Finding the optimal quality/volume ratio of a video encoding method is one of the most pressing problems due to the need to transfer large amounts of video over various networks. Digital TV signal compression technology reduces the amount of data used to represent the video stream, effectively reducing the bit stream required for transmission and storage. It is important to take into account the uncertainties caused by compression of the video signal when television measuring systems are used. There are many digital compression methods. The aim of the proposed work is to research the influence of video compression on the measurement error in television systems. The measurement error of an object parameter is the main characteristic of television measuring systems: accuracy characterizes the difference between the measured value and the actual parameter value. Errors introduced by the optical system are one source of error in television system measurements; the method of processing the received video signal is another. With compression at a constant data stream rate, the presence of errors leads to large distortions; with compression at constant quality, it increases the amount of data required to transmit or record an image frame. The purpose of intra-coding is to reduce the spatial redundancy within a frame (or field) of a television image. This redundancy is caused by the strong correlation between the elements of the image. If a suitable orthogonal transformation can be found, an array of image samples can be converted into a matrix of coefficients that are not correlated with each other. Entropy coding can then be applied to these uncorrelated coefficients to reduce the digital stream. One can select a transformation such that, for typical images, most of the matrix coefficients are almost zero; excluding these zero coefficients further reduces the digital stream. The discrete cosine transform is the most widely used of the possible orthogonal transformations. The errors of television measuring systems and data compression protocols are analyzed in this paper. The main characteristics of measuring systems are given and the sources of their errors are identified. The most effective methods of video compression are determined, and the influence of video compression error on television measuring systems is researched; the obtained results will increase the accuracy of measuring systems. In a television measuring system, image quality is reduced both by distortions identical to those in analog systems and by specific distortions resulting from encoding/decoding the digital video signal and from errors in the transmission channel. The distortions associated with encoding/decoding the signal include quantization noise, reduced resolution, the mosaic effect, the "mosquito" effect, edging on sharp brightness transitions, color blur, false patterns, the "dirty window" effect and other defects. The video compression algorithms used in television measuring systems are based on image encoding with intra- and inter-prediction of individual fragments. The process of encoding/decoding an image is non-linear in space and time, because the playback quality at the receiver depends randomly on the pre- and post-history, i.e. on the preceding and succeeding frames, which can lead to inadequate reproduction of the sub-picture and of the corresponding measuring signal.
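The decorrelation argument above can be made concrete with a small sketch: apply a 2-D DCT to an 8x8 block, discard all but the largest coefficients, and invert; for smooth blocks the reconstruction error stays small because most coefficients are nearly zero. This SciPy sketch illustrates the principle only, not any particular codec.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_block_codec(block, keep=10):
    """Forward 8x8 DCT, keep only the largest-magnitude coefficients, inverse DCT.
    Illustrates how decorrelating the block lets most coefficients be dropped."""
    coeffs = dctn(block, norm="ortho")
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < thresh] = 0.0            # crude "quantisation" to zero
    return idctn(coeffs, norm="ortho"), int((coeffs != 0).sum())

# A smooth synthetic 8x8 block: almost all energy lands in a few coefficients.
x = np.linspace(0, 1, 8)
block = 100 + 40 * np.outer(x, x)
rec, kept = dct_block_codec(block, keep=6)
err = np.abs(rec - block).max()
```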
Vector quantizer based on brightness maps for image compression with the polynomial transform
NASA Astrophysics Data System (ADS)
Escalante-Ramirez, Boris; Moreno-Gutierrez, Mauricio; Silvan-Cardenas, Jose L.
2002-11-01
We present a vector quantization scheme acting on brightness fields, based on distance/distortion criteria consistent with psycho-visual aspects. These criteria quantify the sensorial distortion between vectors that represent either portions of a digital image or, alternatively, coefficients of a transform-based coding system. In the latter case, we use an image representation model, namely the Hermite transform, that is based on some of the main perceptual characteristics of the human visual system (HVS) and its response to light stimuli. Energy coding in the brightness domain, determination of local structure, codebook training and local orientation analysis are all obtained by means of the Hermite transform. This paper, for thematic reasons, is divided into six sections. The first briefly highlights the importance of having newer and better compression algorithms; it also explains the most relevant characteristics of the HVS and the advantages and disadvantages related to the behavior of our vision in response to visual stimuli. The second section gives a quick review of vector quantization techniques, focusing on their performance in image processing, as a preview for the image vector quantizer compressor constructed in the fifth section. The third section concentrates the most important data gathered on brightness models; the construction of these so-called brightness maps (a quantification of human perception of the reflectance of visible objects) in a bi-dimensional model is addressed there. The Hermite transform, a special case of polynomial transforms, and its usefulness are treated, in an applicable discrete form, in the fourth section. As we have learned from previous work [1], the Hermite transform has proven to be a useful and practical solution for efficiently coding the energy within an image block and deciding which kind of quantization (scalar or vector) is to be applied to it. It is also a unique tool for structurally classifying the image block within a given lattice; this particular operation is intended to be one of the main contributions of this work. The fifth section fuses the proposals derived from the study of the three main topics addressed in the previous sections in order to propose an image compression model that takes advantage of vector quantizers inside the brightness-transformed domain to determine the most important structures, finding the energy distribution inside the Hermite domain. The sixth and last section shows some results obtained while testing the coding-decoding model; the criteria used to evaluate image compression performance were the compression ratio, SNR and psycho-visual quality. Some conclusions derived from the research, as well as possible unexplored paths, are also presented in this section.
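As a generic illustration of block vector quantization (without the Hermite-domain brightness modelling that is the paper's actual subject), the sketch below trains a codebook on 4x4 image blocks with k-means and encodes the image as codebook indices; the block size, codebook size and synthetic image are assumptions.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq

def train_vq_codebook(image, block=4, codebook_size=64):
    """Train a vector-quantisation codebook on non-overlapping image blocks
    and encode the image as codebook indices."""
    h, w = image.shape
    h, w = h - h % block, w - w % block
    vectors = (image[:h, :w]
               .reshape(h // block, block, w // block, block)
               .swapaxes(1, 2)
               .reshape(-1, block * block)).astype(float)
    codebook, _ = kmeans2(vectors, codebook_size, minit="++")
    indices, _ = vq(vectors, codebook)
    return codebook, indices

rng = np.random.default_rng(4)
img = rng.random((64, 64))
codebook, idx = train_vq_codebook(img)
```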
NASA Astrophysics Data System (ADS)
van Eycke, Yves-Rémi; Allard, Justine; Salmon, Isabelle; Debeir, Olivier; Decaestecker, Christine
2017-02-01
Immunohistochemistry (IHC) is a widely used technique in pathology to evidence protein expression in tissue samples. However, this staining technique is known for presenting inter-batch variations. Whole slide imaging in digital pathology offers a possibility to overcome this problem by means of image normalisation techniques. In the present paper we propose a methodology to objectively evaluate the need of image normalisation and to identify the best way to perform it. This methodology uses tissue microarray (TMA) materials and statistical analyses to evidence the possible variations occurring at colour and intensity levels as well as to evaluate the efficiency of image normalisation methods in correcting them. We applied our methodology to test different methods of image normalisation based on blind colour deconvolution that we adapted for IHC staining. These tests were carried out for different IHC experiments on different tissue types and targeting different proteins with different subcellular localisations. Our methodology enabled us to establish and to validate inter-batch normalization transforms which correct the non-relevant IHC staining variations. The normalised image series were then processed to extract coherent quantitative features characterising the IHC staining patterns.
Feasibility study for automatic reduction of phase change imagery
NASA Technical Reports Server (NTRS)
Nossaman, G. O.
1971-01-01
The feasibility of automatically reducing a form of pictorial aerodynamic heating data is discussed. The imagery, depicting the melting history of a thin coat of fusible temperature indicator painted on an aerodynamically heated model, was previously reduced by manual methods. Careful examination of various lighting theories and approaches led to an experimentally verified illumination concept capable of yielding high-quality imagery. Both digital and video image processing techniques were applied to reduction of the data, and it was demonstrated that either method can be used to develop superimposed contours. Mathematical techniques were developed to find the model-to-image and the inverse image-to-model transformation using six conjugate points, and methods were developed using these transformations to determine heating rates on the model surface. A video system was designed which is able to reduce the imagery rapidly, economically and accurately. Costs for this system were estimated. A study plan was outlined whereby the mathematical transformation techniques developed to produce model coordinate heating data could be applied to operational software, and methods were discussed and costs estimated for obtaining the digital information necessary for this software.
NASA Technical Reports Server (NTRS)
Cecil, R. W.; White, R. A.; Szczur, M. R.
1972-01-01
The IDAMS Processor is a package of task routines and support software that performs convolution filtering, image expansion, fast Fourier transformation, and other operations on a digital image tape. A unique task control card for that program, together with any necessary parameter cards, selects each processing technique to be applied to the input image. A variable number of tasks can be selected for execution by including the proper task and parameter cards in the input deck. An executive maintains control of the run; it initiates execution of each task in turn and handles any necessary error processing.
Smooth affine shear tight frames: digitization and applications
NASA Astrophysics Data System (ADS)
Zhuang, Xiaosheng
2015-08-01
In this paper, we mainly discuss one of the recently developed directional multiscale representation systems: smooth affine shear tight frames. A directional wavelet tight frame is generated by isotropic dilations and translations of directional wavelet generators, while an affine shear tight frame is generated by anisotropic dilations, shears, and translations of shearlet generators. These two tight frames are in fact connected, in the sense that the affine shear tight frame can be obtained from a directional wavelet tight frame through subsampling. Consequently, an affine shear tight frame indeed has an underlying filter bank from the MRA structure of its associated directional wavelet tight frame. We call such filter banks affine shear filter banks, which can be designed completely in the frequency domain. We discuss the digitization of affine shear filter banks and their implementations: the forward and backward digital affine shear transforms. The redundancy rate and computational complexity of digital affine shear transforms are also investigated in this paper. Numerical experiments and comparisons in image/video processing show the advantages of digital affine shear transforms over many other state-of-the-art directional multiscale representation systems.
Image microarrays (IMA): Digital pathology's missing tool
Hipp, Jason; Cheng, Jerome; Pantanowitz, Liron; Hewitt, Stephen; Yagi, Yukako; Monaco, James; Madabhushi, Anant; Rodriguez-canales, Jaime; Hanson, Jeffrey; Roy-Chowdhuri, Sinchita; Filie, Armando C.; Feldman, Michael D.; Tomaszewski, John E.; Shih, Natalie NC.; Brodsky, Victor; Giaccone, Giuseppe; Emmert-Buck, Michael R.; Balis, Ulysses J.
2011-01-01
Introduction: The increasing availability of whole slide imaging (WSI) data sets (digital slides) from glass slides offers new opportunities for the development of computer-aided diagnostic (CAD) algorithms. With the all-digital pathology workflow that these data sets will enable in the near future, literally millions of digital slides will be generated and stored. Consequently, the field in general, and pathologists specifically, will need tools to help extract actionable information from this new and vast collective repository. Methods: To address this limitation, we designed and implemented a tool (dCORE) to enable the systematic capture of image tiles with constrained size and resolution that contain desired histopathologic features. Results: In this communication, we describe a user-friendly tool that will enable pathologists to mine digital slide archives to create image microarrays (IMAs). IMAs are to digital slides as tissue microarrays (TMAs) are to cell blocks. Thus, a single digital slide could be transformed into an array of hundreds to thousands of high quality digital images, with each containing key diagnostic morphologies and appropriate controls. Current manual digital image cut-and-paste methods that allow for the creation of a grid of images (such as an IMA) of matching resolutions are tedious. Conclusion: The ability to create IMAs representing hundreds to thousands of vetted morphologic features has numerous applications in education, proficiency testing, consensus case review, and research. Lastly, in a manner analogous to the way conventional TMA technology has significantly accelerated in situ studies of tissue specimens, the use of IMAs has similar potential to significantly accelerate CAD algorithm development. PMID:22200030
A fast and automatic fusion algorithm for unregistered multi-exposure image sequence
NASA Astrophysics Data System (ADS)
Liu, Yan; Yu, Feihong
2014-09-01
The human visual system (HVS) can visualize all the brightness levels of a scene through visual adaptation. However, the dynamic range of most commercial digital cameras and display devices is smaller than the dynamic range of the human eye. This implies that low dynamic range (LDR) images captured by a normal digital camera may lose image details. We propose an efficient approach to high dynamic range (HDR) image fusion that copes with image displacement and image blur degradation in a computationally efficient manner, which is suitable for implementation on mobile devices. The various image registration algorithms proposed in the previous literature are unable to meet the efficiency and performance requirements of mobile-device applications. In this paper, we selected the Oriented FAST and Rotated BRIEF (ORB) detector to extract local image structures. The descriptor selected for a multi-exposure image fusion algorithm has to be fast and robust to illumination variations and geometric deformations, and the ORB descriptor is the best candidate for our algorithm. Further, we perform an improved RANdom SAmple Consensus (RANSAC) algorithm to reject incorrect matches. For the fusion of images, a new approach based on the Stationary Wavelet Transform (SWT) is used. The experimental results demonstrate that the proposed algorithm generates high quality images at low computational cost. Comparisons with a number of other feature matching methods show that our method achieves better performance.
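A minimal sketch of the registration idea described above, using OpenCV's ORB detector and RANSAC-based homography estimation; the matcher settings and reprojection threshold are assumptions, and the SWT fusion stage is not shown.

```python
# Hedged sketch: ORB keypoint matching with RANSAC outlier rejection, in the spirit of the
# registration step described in the abstract (not the authors' exact pipeline).
import cv2
import numpy as np

def register_pair(ref_gray, mov_gray, max_features=1000):
    orb = cv2.ORB_create(nfeatures=max_features)
    k1, d1 = orb.detectAndCompute(ref_gray, None)
    k2, d2 = orb.detectAndCompute(mov_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(dst, src, cv2.RANSAC, 3.0)   # reject incorrect matches
    return cv2.warpPerspective(mov_gray, H, ref_gray.shape[::-1])
```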
NASA Astrophysics Data System (ADS)
Bini, Jason; Spain, James; Nehal, Kishwer; Hazelwood, Vikki; Dimarzio, Charles; Rajadhyaksha, Milind
2011-07-01
Confocal mosaicing microscopy enables rapid imaging of large areas of fresh tissue, without the processing that is necessary for conventional histology. Mosaicing may offer a means to perform rapid histology at the bedside. A possible barrier toward clinical acceptance is that the mosaics are based on a single mode of grayscale contrast and appear black and white, whereas histology is based on two stains (hematoxylin for nuclei, eosin for cellular cytoplasm and dermis) and appears purple and pink. Toward addressing this barrier, we report advances in digital staining: fluorescence mosaics that show only nuclei, are digitally stained purple and overlaid on reflectance mosaics, which show only cellular cytoplasm and dermis, and are digitally stained pink. With digital staining, the appearance of confocal mosaics mimics the appearance of histology. Using multispectral analysis and color matching functions, red, green, and blue (RGB) components of hematoxylin and eosin stains in tissue were determined. The resulting RGB components were then applied in a linear algorithm to transform fluorescence and reflectance contrast in confocal mosaics to the absorbance contrast seen in pathology. Optimization of staining with acridine orange showed improved quality of digitally stained mosaics, with good correlation to the corresponding histology.
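The sketch below illustrates the general idea of digital staining as a linear mapping from fluorescence and reflectance mosaics to H&E-like colours; the per-channel absorbance vectors are hypothetical placeholders, not the multispectrally derived values used in the paper.

```python
# Minimal sketch of digital H&E staining: the fluorescence mosaic (nuclei) and reflectance
# mosaic (cytoplasm/dermis) are mapped to hematoxylin-purple and eosin-pink through a simple
# linear absorbance model. The RGB absorbance vectors are assumed values for illustration.
import numpy as np

H_RGB = np.array([0.30, 0.20, 0.55])   # assumed hematoxylin-like absorbance per channel
E_RGB = np.array([0.10, 0.55, 0.30])   # assumed eosin-like absorbance per channel

def digitally_stain(fluorescence, reflectance):
    """Both inputs are grayscale mosaics scaled to [0, 1]; returns an RGB image."""
    absorbance = fluorescence[..., None] * H_RGB + reflectance[..., None] * E_RGB
    return np.clip(1.0 - absorbance, 0.0, 1.0)   # white background minus stain absorbance
```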
Pre-Hardware Optimization and Implementation Of Fast Optics Closed Control Loop Algorithms
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Lyon, Richard G.; Herman, Jay R.; Abuhassan, Nader
2004-01-01
One of the main heritage tools used in scientific and engineering data spectrum analysis is the Fourier Integral Transform and its high performance digital equivalent - the Fast Fourier Transform (FFT). The FFT is particularly useful in two-dimensional (2-D) image processing (FFT2) within optical systems control. However, timing constraints of a fast optics closed control loop would require a supercomputer to run the software implementation of the FFT2 and its inverse, as well as other representative image processing algorithms, such as numerical image folding and fringe feature extraction. A laboratory supercomputer is not always available even for ground operations and is not feasible for a flight project. However, the computationally intensive algorithms still warrant alternative implementation using reconfigurable computing technologies (RC) such as Digital Signal Processors (DSP) and Field Programmable Gate Arrays (FPGA), which provide low cost compact super-computing capabilities. We present a new RC hardware implementation and utilization architecture that significantly reduces the computational complexity of a few basic image-processing algorithms, such as FFT2, image folding and phase diversity, for the NASA Solar Viewing Interferometer Prototype (SVIP) using a cluster of DSPs and FPGAs. The DSP cluster utilization architecture also assures avoidance of a single point of failure, while using commercially available hardware. This, combined with the control algorithms' pre-hardware optimization, for the first time allows construction of image-based 800 Hertz (Hz) optics closed control loops on-board a spacecraft, based on the SVIP ground instrument. That spacecraft is the proposed Earth Atmosphere Solar Occultation Imager (EASI) to study greenhouse gases CO2, C2H, H2O, O3, O2, N2O from the Lagrange-2 point in space. This paper provides an advanced insight into a new type of science capabilities for future space exploration missions based on on-board image processing for control and for robotics missions using vision sensors. It presents a top-level description of technologies required for the design and construction of SVIP and EASI and for advancing spatial-spectral imaging and large-scale space interferometry science and engineering.
A model for a PC-based, universal-format, multimedia digitization system: moving beyond the scanner.
McEachen, James C; Cusack, Thomas J; McEachen, John C
2003-08-01
Digitizing images for use in case presentations based on hardcopy films, slides, photographs, negatives, books, and videos can present a challenging task. Scanners and digital cameras have become standard tools of the trade. Unfortunately, use of these devices to digitize multiple images in many different media formats can be a time-consuming and in some cases unachievable process. The authors' goal was to create a PC-based solution for digitizing multiple media formats in a timely fashion while maintaining adequate image presentation quality. The authors' PC-based solution makes use of off-the-shelf hardware applications to include a digital document camera (DDC), VHS video player, and video-editing kit. With the assistance of five staff radiologists, the authors examined the quality of multiple image types digitized with this equipment. The authors also quantified the speed of digitization of various types of media using the DDC and video-editing kit. With regard to image quality, the five staff radiologists rated the digitized angiography, CT, and MR images as adequate to excellent for use in teaching files and case presentations. With regard to digitized plain films, the average rating was adequate. As for performance, the authors recognized a 68% improvement in the time required to digitize hardcopy films using the DDC instead of a professional quality scanner. The PC-based solution provides a means for digitizing multiple images from many different types of media in a timely fashion while maintaining adequate image presentation quality.
Method for simulating dose reduction in digital mammography using the Anscombe transformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borges, Lucas R., E-mail: lucas.rodrigues.borges@usp.br; Oliveira, Helder C. R. de; Nunes, Polyana F.
2016-06-15
Purpose: This work proposes an accurate method for simulating dose reduction in digital mammography starting from a clinical image acquired with a standard dose. Methods: The method developed in this work consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for specific issues relevant in digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the detective quantum efficiency. The scaling process takes into account the linearity of the system and the offset of the detector elements. The inserted noise is obtained by acquiring images of a flat-field phantom at the standard radiation dose and at the simulated dose. Using the Anscombe transformation, a relationship is created between the calculated noise mask and the scaled image, resulting in a clinical mammogram with the same noise and gray level characteristics as an image acquired at the lower-radiation dose. Results: The performance of the proposed algorithm was validated using real images acquired with an anthropomorphic breast phantom at four different doses, with five exposures for each dose and 256 nonoverlapping ROIs extracted from each image and with uniform images. The authors simulated lower-dose images and compared these with the real images. The authors evaluated the similarity between the normalized noise power spectrum (NNPS) and power spectrum (PS) of simulated images and real images acquired with the same dose. The maximum relative error was less than 2.5% for every ROI. The added noise was also evaluated by measuring the local variance in the real and simulated images. The relative average error for the local variance was smaller than 1%. Conclusions: A new method is proposed for simulating dose reduction in clinical mammograms. In this method, the dependency between image noise and image signal is addressed using a novel application of the Anscombe transformation. NNPS, PS, and local noise metrics confirm that this method is capable of precisely simulating various dose reductions.
Method for simulating dose reduction in digital mammography using the Anscombe transformation
Borges, Lucas R.; de Oliveira, Helder C. R.; Nunes, Polyana F.; Bakic, Predrag R.; Maidment, Andrew D. A.; Vieira, Marcelo A. C.
2016-01-01
Purpose: This work proposes an accurate method for simulating dose reduction in digital mammography starting from a clinical image acquired with a standard dose. Methods: The method developed in this work consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for specific issues relevant in digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the detective quantum efficiency. The scaling process takes into account the linearity of the system and the offset of the detector elements. The inserted noise is obtained by acquiring images of a flat-field phantom at the standard radiation dose and at the simulated dose. Using the Anscombe transformation, a relationship is created between the calculated noise mask and the scaled image, resulting in a clinical mammogram with the same noise and gray level characteristics as an image acquired at the lower-radiation dose. Results: The performance of the proposed algorithm was validated using real images acquired with an anthropomorphic breast phantom at four different doses, with five exposures for each dose and 256 nonoverlapping ROIs extracted from each image and with uniform images. The authors simulated lower-dose images and compared these with the real images. The authors evaluated the similarity between the normalized noise power spectrum (NNPS) and power spectrum (PS) of simulated images and real images acquired with the same dose. The maximum relative error was less than 2.5% for every ROI. The added noise was also evaluated by measuring the local variance in the real and simulated images. The relative average error for the local variance was smaller than 1%. Conclusions: A new method is proposed for simulating dose reduction in clinical mammograms. In this method, the dependency between image noise and image signal is addressed using a novel application of the Anscombe transformation. NNPS, PS, and local noise metrics confirm that this method is capable of precisely simulating various dose reductions. PMID:27277017
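The following sketch shows the Anscombe transformation itself and how it can be used to inject signal-dependent noise when simulating a reduced dose; the flat-field noise masks, detector offsets and gain maps of the published method are not reproduced, and the amount of added noise is left as a user-supplied assumption.

```python
# Hedged sketch of the Anscombe transformation used to handle signal-dependent (Poisson-like)
# noise. The scale-and-add-noise routine only illustrates the idea of simulating a lower dose;
# the paper's flat-field noise masks and detector model are not reproduced here.
import numpy as np

def anscombe(x):
    return 2.0 * np.sqrt(x + 3.0 / 8.0)        # variance-stabilising forward transform

def inverse_anscombe(y):
    return (y / 2.0) ** 2 - 3.0 / 8.0          # simple algebraic inverse

def simulate_lower_dose(image, dose_fraction, extra_noise_std, rng=None):
    """extra_noise_std is the additional noise (in the Anscombe domain) needed to reach the
    target dose; in the paper it comes from flat-field acquisitions, here it is an assumption."""
    rng = np.random.default_rng() if rng is None else rng
    stabilised = anscombe(image * dose_fraction)          # fewer quanta at the reduced dose
    noisy = stabilised + rng.normal(0.0, extra_noise_std, image.shape)
    return inverse_anscombe(noisy)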
Multiscale Characterization of Nickel Titanium Shape Memory Alloys
NASA Astrophysics Data System (ADS)
Gall, Keith
Shape memory alloys were characterized by a variety of methods to investigate the relationship between microstructural phase transformation, macroscale deformation due to mechanical loading, material geometry, and initial material state. The major portion of the work is application of digital image correlation at several length scales to SMAs under mechanical loading. In addition, the connection between electrical resistance, stress, and strain was studied in NiTi wires. Finally, a new processing method was investigated to develop porous NiTi samples, which can be examined under DIC in future work. The phase transformation temperatures of a Nickel-Titanium based shape memory alloy (SMA) were initially evaluated under stress-free conditions by the differential scanning calorimetric (DSC) technique. Results show that the phase transformation temperature is significantly higher for transition from de-twinned martensite to austenite than from twinned martensite or R phase to austenite. To further examine transformation temperatures as a function of initial state a tensile test apparatus with in-situ electrical resistance (ER) measurements was used to evaluate the transformation properties of SMAs at a variety of stress levels and initial compositions. The results show that stress has a significant influence on the transformation of detwinned martensite, but a small influence on R phase and twinned martensite transformations. Electrical resistance changes linearly with strain during the transformations from both kinds of martensite to austenite. The linearity between ER and strain during the transformation from de-twinned martensite to austenite is not affected by the stress, facilitating application to control algorithms. A revised phase diagram is drawn to express these results. To better understand the nature of the local and global strain fields that accompany phase transformation in shape memory alloys (SMAs), here we use high resolution imaging together with image correlation processing at several length scales. The Digital Image Correlation (DIC) method uses digital images captured during material deformation to generate displacement and strain field maps of the specimen surface. Both 5x optical magnification and low magnification provide details of localized strain behavior during the stress induced phase transformation in polycrystalline Nickel-Titanium SMA samples. Tension bars with (and without) machined geometric defects are tested with (and without) paint speckle pattern to investigate the response near pore-like defects. Results from the standard tensile bars (no defect) show a recoverable transformation propagate across the sample (from both ends towards center) that is observed as localization in the DIC calculated strain field. Biaxial strain measurements from the DIC method also provide data to calculate a Poisson Ratio as a function of transformation progress. Specimens with a circular (0.5 mm dia) defect exhibit similar strain-localization behaviors, but the stress concentration causes early material transformation near the defect. Analysis of the magnified images illustrates strain field localization due to the underlying polycrystalline microstructure of the NiTi specimen. Last, a study presents the development of new processing techniques for porous SMA materials. Porous SMAs are potential candidates in a variety of applications where micro-macrochannels might improve thermal response of mechanical actuators or promote bone ingrowth for biomedical implant devices. 
Recent methods in powder metallurgy have shown that adding small amounts of Niobium improves densification of sintered NiTi alloys. New results here show how porous NiTiNb microstructures are processed using temporary steel wire space holder. The wires (or layered 2-D meshes) are electrochemically dissolved to leave a complex network of pores throughout a dense NiTiNb alloy. The processing method presented here allows better control of pore geometry and arrangement when compared to existing techniques in NiTiNb powder metallurgy.
[Design and development of the DSA digital subtraction workstation].
Peng, Wen-Xian; Peng, Tian-Zhou; Xia, Shun-Ren; Jin, Guang-Bo
2008-05-01
According to the patient examination criteria and the demands of all related departments, a DSA digital subtraction workstation has been successfully designed and is introduced in this paper, based on an analysis of the characteristics of the video source of a DSA unit manufactured by GE that has no DICOM standard interface. The workstation includes an image-capturing gateway and post-processing software. With the developed workstation, all images from this early DSA equipment are transformed into DICOM format and can then be shared across different machines.
Extracting contours of oval-shaped objects by Hough transform and minimal path algorithms
NASA Astrophysics Data System (ADS)
Tleis, Mohamed; Verbeek, Fons J.
2014-04-01
Circular and oval-like objects are very common in cell biology and microbiology. These objects need to be analyzed, and to that end, digitized images from the microscope are used so as to arrive at an automated analysis pipeline. It is essential to detect all the objects in an image as well as to extract the exact contour of each individual object. In this manner it becomes possible to perform measurements on these objects, i.e. shape and texture features. Our measurement objective is achieved by probing contour detection through dynamic programming. In this paper we describe a method that uses the Hough transform and two minimal path algorithms to detect contours of (ovoid-like) objects. These algorithms are based on an existing grey-weighted distance transform and a new algorithm to extract the circular shortest path in an image. The methods are tested on an artificial dataset of 1000 images, with an F1-score of 0.972. In a case study with yeast cells, contours from our methods were compared with another solution using Pratt's figure of merit. Results indicate that our methods were more precise based on a comparison with a ground-truth dataset. As far as yeast cells are concerned, the segmentation and measurement results enable, in future work, retrieval of information from different developmental stages of the cell using complex features.
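A minimal sketch of the circle-detection step via a Hough accumulator is given below; the radius range, angular sampling and peak selection are assumptions, and the grey-weighted distance transform and circular shortest-path stages of the paper are omitted.

```python
# Minimal sketch of a circular Hough transform accumulator for locating roughly circular
# object centres from an edge map. Radii and angular sampling are illustrative assumptions.
import numpy as np

def hough_circles(edge_mask, radii):
    """edge_mask: boolean (H, W) edge image; radii: iterable of candidate radii in pixels."""
    h, w = edge_mask.shape
    acc = np.zeros((len(radii), h, w), dtype=np.int32)
    ys, xs = np.nonzero(edge_mask)
    thetas = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    for ri, r in enumerate(radii):
        cy = np.rint(ys[:, None] - r * np.sin(thetas)).astype(int)   # candidate centres
        cx = np.rint(xs[:, None] - r * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc[ri], (cy[ok], cx[ok]), 1)                       # cast votes
    return acc   # peaks in acc[ri] are likely centres of circles with radius radii[ri]
```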
NASA Astrophysics Data System (ADS)
Zhou, Yuping; Zhang, Qi
2018-04-01
In the information environment, digitising and processing Li brocade patterns is an important means of presenting the Li ethnic style and passing on the national culture. In this paper, Adobe Illustrator CS3 and the Java language were used to apply "variation" processing to Li brocade patterns and to generate "Li brocade pattern mutant genes". The generation of pattern mutant genes includes color mutation, shape mutation, adding and deleting transforms, twisting transforms, etc. The research shows that Li brocade pattern mutant genes can be generated by using Adobe Illustrator CS3 together with image processing tools written in Java.
Chromatic aberration correction: an enhancement to the calibration of low-cost digital dermoscopes.
Wighton, Paul; Lee, Tim K; Lui, Harvey; McLean, David; Atkins, M Stella
2011-08-01
We present a method for calibrating low-cost digital dermoscopes that corrects for color and inconsistent lighting and also corrects for chromatic aberration. Chromatic aberration is a form of radial distortion that often occurs in inexpensive digital dermoscopes and creates red and blue halo-like effects on edges. Being radial in nature, distortions due to chromatic aberration are not constant across the image, but rather vary in both magnitude and direction. As a result, distortions are not only visually distracting but could also mislead automated characterization techniques. Two low-cost dermoscopes, based on different consumer-grade cameras, were tested. Color is corrected by imaging a reference and applying singular value decomposition to determine the transformation required to ensure accurate color reproduction. Lighting is corrected by imaging a uniform surface and creating lighting correction maps. Chromatic aberration is corrected using a second-order radial distortion model. Our results for color and lighting calibration are consistent with previously published results, while distortions due to chromatic aberration can be reduced by 42-47% in the two systems considered. The disadvantages of inexpensive dermoscopy can be quickly and substantially mitigated with a suitable calibration procedure. © 2011 John Wiley & Sons A/S.
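The sketch below illustrates a second-order radial resampling of the red and blue channels of the kind described above; the distortion coefficients are hypothetical values that would normally be estimated from a calibration target.

```python
# Hedged sketch of a second-order radial correction applied independently to the red and blue
# channels so they re-align with green; k_r and k_b are hypothetical coefficients, normally
# estimated during calibration rather than fixed as below.
import numpy as np
from scipy import ndimage

def correct_channel(channel, k, centre=None):
    h, w = channel.shape
    cy, cx = centre if centre is not None else ((h - 1) / 2.0, (w - 1) / 2.0)
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    r2 = (yy - cy) ** 2 + (xx - cx) ** 2
    scale = 1.0 + k * r2                       # second-order radial distortion model
    src_y = cy + (yy - cy) * scale             # where each output pixel samples from
    src_x = cx + (xx - cx) * scale
    return ndimage.map_coordinates(channel, [src_y, src_x], order=1, mode='nearest')

def correct_chromatic_aberration(rgb, k_r=-1e-7, k_b=1e-7):
    out = rgb.astype(float).copy()
    out[..., 0] = correct_channel(out[..., 0], k_r)   # red channel
    out[..., 2] = correct_channel(out[..., 2], k_b)   # blue channel
    return out
```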
Reconstruction of color images via Haar wavelet based on digital micromirror device
NASA Astrophysics Data System (ADS)
Liu, Xingjiong; He, Weiji; Gu, Guohua
2015-10-01
A digital micromirror device (DMD) is introduced to form the Haar wavelet basis, which is projected onto the color target image as structured illumination in red, green and blue light. The light intensity signals reflected from the target image are received synchronously by a bucket detector with no spatial resolution, converted into voltage signals and then transferred to a PC [1]. To achieve synchronization, several synchronization steps are added during data acquisition. In the data collection process, according to the wavelet tree structure, the locations of significant coefficients at the finer scale are predicted by comparing the coefficients sampled at the coarsest scale with a threshold. Monochrome grayscale images are obtained under red, green and blue structured illumination, respectively, by using the inverse Haar wavelet transform algorithm. A color fusion algorithm is applied to the three monochrome grayscale images to obtain the final color image. According to the imaging principle, an experimental demonstration device was assembled. The letter "K" and the X-Rite ColorChecker Passport were projected and reconstructed as target images, and the final reconstructed color images are of good quality. This article makes use of Haar wavelet reconstruction, reducing the sampling rate considerably. It provides color information without compromising the resolution of the final image.
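As a sketch of the reconstruction step, the code below inverts a Haar wavelet decomposition with PyWavelets; the DMD pattern projection, bucket-detector acquisition and colour fusion stages are not modelled.

```python
# Hedged sketch: given Haar wavelet coefficients measured pattern by pattern, the monochrome
# frame is recovered with an inverse Haar transform. A synthetic round trip stands in for the
# real acquisition.
import numpy as np
import pywt

def reconstruct_from_haar(coeffs):
    """coeffs: list in pywt.wavedec2 layout, e.g. [cA_n, (cH_n, cV_n, cD_n), ...]."""
    return pywt.waverec2(coeffs, 'haar')

# Round-trip example on a synthetic 64x64 monochrome frame.
frame = np.random.rand(64, 64)
coeffs = pywt.wavedec2(frame, 'haar', level=3)
recovered = reconstruct_from_haar(coeffs)
assert np.allclose(recovered, frame)
```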
Employing wavelet-based texture features in ammunition classification
NASA Astrophysics Data System (ADS)
Borzino, Ángelo M. C. R.; Maher, Robert C.; Apolinário, José A.; de Campos, Marcello L. R.
2017-05-01
Pattern recognition, a branch of machine learning, involves classification of information in images, sounds, and other digital representations. This paper uses pattern recognition to identify which kind of ammunition was used when a bullet was fired, based on a carefully constructed set of gunshot sound recordings. To do this, we show that texture features obtained from the wavelet transform of a component of the gunshot signal, treated as an image and quantized in gray levels, are good ammunition discriminators. We test the technique with eight different calibers and achieve a classification rate better than 95%. We also compare the performance of the proposed method with results obtained by standard temporal and spectrographic techniques.
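A hedged sketch of wavelet-based texture features follows: the quantised signal image is decomposed and simple energy and entropy statistics are computed per subband. The wavelet, level and feature set are assumptions, and the classifier itself is omitted.

```python
# Minimal sketch of wavelet-based texture features for a signal treated as an image.
# The wavelet, decomposition level, and grey-level quantisation are illustrative assumptions.
import numpy as np
import pywt

def wavelet_texture_features(image, wavelet='db2', level=2):
    feats = []
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    for band in [coeffs[0]] + [b for detail in coeffs[1:] for b in detail]:
        band = np.abs(band).ravel()
        energy = float(np.mean(band ** 2))                 # subband energy
        p = band / (band.sum() + 1e-12)
        entropy = float(-np.sum(p * np.log2(p + 1e-12)))   # subband entropy
        feats.extend([energy, entropy])
    return np.array(feats)
```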
Computer image processing - The Viking experience. [digital enhancement techniques
NASA Technical Reports Server (NTRS)
Green, W. B.
1977-01-01
Computer processing of digital imagery from the Viking mission to Mars is discussed, with attention given to subjective enhancement and quantitative processing. Contrast stretching and high-pass filtering techniques of subjective enhancement are described; algorithms developed to determine optimal stretch and filtering parameters are also mentioned. In addition, geometric transformations to rectify the distortion of shapes in the field of view and to alter the apparent viewpoint of the image are considered. Perhaps the most difficult problem in quantitative processing of Viking imagery was the production of accurate color representations of Orbiter and Lander camera images.
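Two of the enhancement operations mentioned above, a linear contrast stretch and a high-pass (unsharp-mask) style filter, can be sketched as follows; the percentiles and kernel size are illustrative rather than the mission's optimised parameters.

```python
# Hedged sketch of contrast stretching and high-pass filtering for subjective enhancement.
import numpy as np
from scipy import ndimage

def contrast_stretch(img, lo_pct=1.0, hi_pct=99.0):
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    return np.clip((img - lo) / max(hi - lo, 1e-6), 0.0, 1.0)

def high_pass(img, size=9, amount=1.0):
    blurred = ndimage.uniform_filter(img.astype(float), size=size)
    return img + amount * (img - blurred)      # boost detail relative to the local mean
```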
NASA Technical Reports Server (NTRS)
Kruse, F. A.; Knepper, D. H., Jr.; Clark, R. N.
1986-01-01
Techniques using Munsell color transformations were developed for reducing 128 channels (or less) of Airborne Imaging Spectrometer (AIS) data to a single color-composite-image suitable for both visual interpretation and digital analysis. Using AIS data acquired in 1984 and 1985, limestone and dolomite roof pendants and sericite-illite and other clay minerals related to alteration were mapped in a quartz monzonite stock in the northern Grapevine Mountains of California and Nevada. Field studies and laboratory spectral measurements verify the mineralogical distributions mapped from the AIS data.
Researches on Position Detection for Vacuum Switch Electrode
NASA Astrophysics Data System (ADS)
Dong, Huajun; Guo, Yingjie; Li, Jie; Kong, Yihan
2018-03-01
The form and evolution of the vacuum arc are important factors influencing vacuum switch performance, and the dynamic separation of the electrodes is the chief factor affecting how the arc form evolves. Consequently, detecting the electrode positions in arc images in order to calculate the separation is of great significance. However, the gray level distribution of vacuum arc images is uneven: the gray level of the burning arc is high while that of the electrodes is low, and the form of the arc changes sharply; these problems limit precise electrode position detection. In this paper, an algorithm for detecting the electrode position from vacuum arc images is proposed. Digital image processing is used to analyse the arc images: the upper and lower electrode edges are detected separately, and linear fitting is then applied to the edge detection results; the fitted lines give the electrode positions, so accurate electrode position detection is realized. The experimental results show that the algorithm described in this paper successfully detects the upper and lower edges of the arc and obtains the electrode position through calculation.
NASA Astrophysics Data System (ADS)
Wei, Minsong; Xing, Fei; You, Zheng
2017-01-01
The advancing growth of micro- and nano-satellites requires miniaturized sun sensors that can be conveniently applied in the attitude determination subsystem. In this work, a highly accurate wireless digital sun sensor based on profile-detection technology was proposed, which transforms a two-dimensional image into two linear profile outputs so that it can achieve a high update rate at very low power consumption. A multiple-spot recovery approach with an asymmetric mask pattern design principle was introduced to fit the multiplexing image detector method and improve the accuracy of the sun sensor within a large Field of View (FOV). A FOV determination principle based on the concept of FOV regions was also proposed to facilitate both sub-FOV analysis and determination of the whole FOV. An RF MCU, together with solar cells, was utilized to achieve the wireless and self-powered functionality. The prototype sun sensor is approximately 10 times smaller in size and weight than a conventional digital sun sensor (DSS). Test results indicated that the accuracy of the prototype was 0.01° within a cone FOV of 100°. Such an autonomous DSS could be equipped flexibly on a micro- or nano-satellite, especially for highly accurate remote sensing applications.
Multispectral Wavefronts Retrieval in Digital Holographic Three-Dimensional Imaging Spectrometry
NASA Astrophysics Data System (ADS)
Yoshimori, Kyu
2010-04-01
This paper deals with a recently developed passive interferometric technique for retrieving a set of spectral components of wavefronts that are propagating from a spatially incoherent, polychromatic object. The technique is based on measurement of a 5-D spatial coherence function using a suitably designed interferometer. By applying signal processing, including aperture synthesis and spectral decomposition, one may obtain a set of wavefronts in different spectral bands. Since each wavefront is equivalent to the complex Fresnel hologram of the polychromatic object at a particular spectral component, application of the conventional Fresnel transform yields a 3-D image for each spectral band. Thus, this technique of multispectral wavefront retrieval provides a new type of 3-D imaging spectrometry based on fully passive interferometry. Experimental results are also shown to demonstrate the validity of the method.
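A minimal sketch of the final focusing step, a single-FFT discrete Fresnel transform applied to a retrieved complex wavefront, is given below; wavelength, pixel pitch and propagation distance are illustrative, and output-plane phase factors are ignored since only intensity images are formed.

```python
# Hedged sketch of a single-FFT discrete Fresnel transform for focusing a complex hologram
# into an image at distance z. Sampling and output-plane rescaling details are omitted; only
# the intensity of the result is meaningful in this simplified form.
import numpy as np

def fresnel_propagate(field, wavelength, pitch, z):
    """field: complex (N, N) wavefront sampled at 'pitch' metres; returns the propagated field."""
    n = field.shape[0]
    k = 2.0 * np.pi / wavelength
    coords = (np.arange(n) - n / 2.0) * pitch
    xx, yy = np.meshgrid(coords, coords)
    chirp = np.exp(1j * k * (xx ** 2 + yy ** 2) / (2.0 * z))     # quadratic phase factor
    spectrum = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field * chirp)))
    return spectrum * np.exp(1j * k * z) / (1j * wavelength * z)

# Example with illustrative parameters (assumed values).
hologram = np.exp(1j * np.random.rand(256, 256))
image = np.abs(fresnel_propagate(hologram, wavelength=550e-9, pitch=5e-6, z=0.1)) ** 2
```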
Spatial-heterodyne interferometry for transmission (SHIFT) measurements
Bingham, Philip R.; Hanson, Gregory R.; Tobin, Ken W.
2006-10-10
Systems and methods are described for spatial-heterodyne interferometry for transmission (SHIFT) measurements. A method includes digitally recording a spatially-heterodyned hologram including spatial heterodyne fringes for Fourier analysis using a reference beam, and an object beam that is transmitted through an object that is at least partially translucent; Fourier analyzing the digitally recorded spatially-heterodyned hologram, by shifting an original origin of the digitally recorded spatially-heterodyned hologram to sit on top of a spatial-heterodyne carrier frequency defined by an angle between the reference beam and the object beam, to define an analyzed image; digitally filtering the analyzed image to cut off signals around the original origin to define a result; and performing an inverse Fourier transform on the result.
Digital Sound Encryption with Logistic Map and Number Theoretic Transform
NASA Astrophysics Data System (ADS)
Satria, Yudi; Gabe Rizky, P. H.; Suryadi, MT
2018-03-01
Digital sound security has limits when encryption is performed in the frequency domain. A Number Theoretic Transform over the field GF(2^521 - 1) improves on and solves that problem. The algorithm for this sound encryption is based on a combination of a chaos function and the Number Theoretic Transform. The chaos function used in this paper is the Logistic Map. The trials and simulations were conducted using 5 different digital sound test files in the WAVE file format, each simulated at least 100 times. The resulting key stream is random, as verified by 15 NIST randomness tests. The key space formed is very large, more than 10^469. The processing speed of the encryption algorithm is only slightly affected by the Number Theoretic Transform.
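A sketch of the chaotic keystream part of such a scheme is shown below: a logistic-map sequence is quantised to bytes and XOR-ed with the sound samples. The seed, control parameter and quantisation are assumptions, and the Number Theoretic Transform stage of the published algorithm is not reproduced.

```python
# Minimal sketch of a logistic-map keystream XOR-ed with audio sample bytes.
# x0, r, burn-in and the byte quantisation are illustrative assumptions.
import numpy as np

def logistic_keystream(length, x0=0.654321, r=3.99, burn_in=1000):
    x = x0
    for _ in range(burn_in):                    # discard transient iterations
        x = r * x * (1.0 - x)
    stream = np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1.0 - x)
        stream[i] = int(x * 256) & 0xFF         # quantise chaotic state to one byte
    return stream

def encrypt(samples_uint8, key_x0):
    ks = logistic_keystream(len(samples_uint8), x0=key_x0)
    return samples_uint8 ^ ks                    # XOR is its own inverse: decrypt identically
```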
A Comparison of Optical versus Hardware Fourier Transforms.
1983-10-31
AD-A136 223: A Comparison of Optical versus Hardware Fourier Transforms (U). Virginia Polytechnic Institute and State University, Blacksburg, Dept. of Physics. ...transform and its inverse filtered Fourier transform obtained with the Digital Image Processing (DIP) hardware system located at the School of Aerospace... transparencies, and provided to us by Dr. Ralph G. Allen, Director of the Laser Effects Branch (Division of Radiation Sciences). The DIP system consisted of: an...
Image-based modeling of tumor shrinkage in head and neck radiation therapy
Chao, Ming; Xie, Yaoqin; Moros, Eduardo G.; Le, Quynh-Thu; Xing, Lei
2010-01-01
Purpose: Understanding the kinetics of tumor growth/shrinkage represents a critical step in quantitative assessment of therapeutics and realization of adaptive radiation therapy. This article presents a novel framework for image-based modeling of tumor change and demonstrates its performance with synthetic images and clinical cases. Methods: Due to significant tumor tissue content changes, similarity-based models are not suitable for describing the process of tumor volume changes. Under the hypothesis that tissue features in a tumor volume or at the boundary region are partially preserved, the kinetic change was modeled in two steps: (1) Autodetection of homologous tissue features shared by two input images using the scale invariant feature transform (SIFT) method; and (2) establishment of a voxel-to-voxel correspondence between the images for the remaining spatial points by interpolation. The correctness of the tissue feature correspondence was assured by a bidirectional association procedure, where SIFT features were mapped from template to target images and reversely. A series of digital phantom experiments and five head and neck clinical cases were used to assess the performance of the proposed technique. Results: The proposed technique can faithfully identify the known changes introduced when constructing the digital phantoms. The subsequent feature-guided thin plate spline calculation reproduced the “ground truth” with accuracy better than 1.5 mm. For the clinical cases, the new algorithm worked reliably for a volume change as large as 30%. Conclusions: An image-based tumor kinetic algorithm was developed to model the tumor response to radiation therapy. The technique provides a practical framework for future application in adaptive radiation therapy. PMID:20527569
Image-guided ex-vivo targeting accuracy using a laparoscopic tissue localization system
NASA Astrophysics Data System (ADS)
Bieszczad, Jerry; Friets, Eric; Knaus, Darin; Rauth, Thomas; Herline, Alan; Miga, Michael; Galloway, Robert; Kynor, David
2007-03-01
In image-guided surgery, discrete fiducials are used to determine a spatial registration between the location of surgical tools in the operating theater and the location of targeted subsurface lesions and critical anatomic features depicted in preoperative tomographic image data. However, the lack of readily localized anatomic landmarks has greatly hindered the use of image-guided surgery in minimally invasive abdominal procedures. To address these needs, we have previously described a laser-based system for localization of internal surface anatomy using conventional laparoscopes. During a procedure, this system generates a digitized, three-dimensional representation of visible anatomic surfaces in the abdominal cavity. This paper presents the results of an experiment utilizing an ex-vivo bovine liver to assess subsurface targeting accuracy achieved using our system. During the experiment, several radiopaque targets were inserted into the liver parenchyma. The location of each target was recorded using an optically-tracked insertion probe. The liver surface was digitized using our system, and registered with the liver surface extracted from post-procedure CT images. This surface-based registration was then used to transform the position of the inserted targets into the CT image volume. The target registration error (TRE) achieved using our surface-based registration (given a suitable registration algorithm initialization) was 2.4 mm +/- 1.0 mm. A comparable TRE (2.6 mm +/- 1.7 mm) was obtained using a registration based on traditional fiducial markers placed on the surface of the same liver. These results indicate the potential of fiducial-free, surface-to-surface registration for image-guided lesion targeting in minimally invasive abdominal surgery.
A robust and hierarchical approach for the automatic co-registration of intensity and visible images
NASA Astrophysics Data System (ADS)
González-Aguilera, Diego; Rodríguez-Gonzálvez, Pablo; Hernández-López, David; Luis Lerma, José
2012-09-01
This paper presents a new robust approach to integrate intensity and visible images which have been acquired with a terrestrial laser scanner and a calibrated digital camera, respectively. In particular, an automatic and hierarchical method for the co-registration of both sensors is developed. The approach integrates several existing solutions to improve the performance of the co-registration between range-based and visible images: the Affine Scale-Invariant Feature Transform (A-SIFT), the epipolar geometry, the collinearity equations, the Groebner basis solution and the RANdom SAmple Consensus (RANSAC), integrating a voting scheme. The approach presented herein improves the existing co-registration approaches in automation, robustness, reliability and accuracy.
Processing techniques for software based SAR processors
NASA Technical Reports Server (NTRS)
Leung, K.; Wu, C.
1983-01-01
Software SAR processing techniques defined to treat Shuttle Imaging Radar-B (SIR-B) data are reviewed. The algorithms are devised for the data processing procedure selection, SAR correlation function implementation, multiple array processors utilization, cornerturning, variable reference length azimuth processing, and range migration handling. The Interim Digital Processor (IDP) originally implemented for handling Seasat SAR data has been adapted for the SIR-B, and offers a resolution of 100 km using a processing procedure based on the Fast Fourier Transformation fast correlation approach. Peculiarities of the Seasat SAR data processing requirements are reviewed, along with modifications introduced for the SIR-B. An Advanced Digital SAR Processor (ADSP) is under development for use with the SIR-B in the 1986 time frame as an upgrade for the IDP, which will be in service in 1984-5.
NASA Technical Reports Server (NTRS)
Hsu, Ken-Yuh (Editor); Liu, Hua-Kuang (Editor)
1992-01-01
The present conference discusses optical neural networks, photorefractive nonlinear optics, optical pattern recognition, digital and analog processors, and holography and its applications. Attention is given to bifurcating optical information processing, neural structures in digital halftoning, an exemplar-based optical neural net classifier for color pattern recognition, volume storage in photorefractive disks, and microlaser-based compact optical neuroprocessors. Also treated are the optical implementation of a feature-enhanced optical interpattern-associative neural network model and its optical implementation, an optical pattern binary dual-rail logic gate module, a theoretical analysis for holographic associative memories, joint transform correlators, image addition and subtraction via the Talbot effect, and optical wavelet-matched filters. (No individual items are abstracted in this volume)
NASA Astrophysics Data System (ADS)
Hsu, Ken-Yuh; Liu, Hua-Kuang
The present conference discusses optical neural networks, photorefractive nonlinear optics, optical pattern recognition, digital and analog processors, and holography and its applications. Attention is given to bifurcating optical information processing, neural structures in digital halftoning, an exemplar-based optical neural net classifier for color pattern recognition, volume storage in photorefractive disks, and microlaser-based compact optical neuroprocessors. Also treated are the optical implementation of a feature-enhanced optical interpattern-associative neural network model and its optical implementation, an optical pattern binary dual-rail logic gate module, a theoretical analysis for holographic associative memories, joint transform correlators, image addition and subtraction via the Talbot effect, and optical wavelet-matched filters. (No individual items are abstracted in this volume)
Image recombination transform algorithm for superresolution structured illumination microscopy
Zhou, Xing; Lei, Ming; Dan, Dan; Yao, Baoli; Yang, Yanlong; Qian, Jia; Chen, Guangde; Bianco, Piero R.
2016-01-01
Abstract. Structured illumination microscopy (SIM) is an attractive choice for fast superresolution imaging. The generation of structured illumination patterns made by interference of laser beams is broadly employed to obtain high modulation depth of patterns, while the polarizations of the laser beams must be elaborately controlled to guarantee the high contrast of interference intensity, which brings a more complex configuration for the polarization control. The emerging pattern projection strategy is much more compact, but the modulation depth of patterns is deteriorated by the optical transfer function of the optical system, especially in high spatial frequency near the diffraction limit. Therefore, the traditional superresolution reconstruction algorithm for interference-based SIM will suffer from many artifacts in the case of projection-based SIM that possesses a low modulation depth. Here, we propose an alternative reconstruction algorithm based on image recombination transform, which provides an alternative solution to address this problem even in a weak modulation depth. We demonstrated the effectiveness of this algorithm in the multicolor superresolution imaging of bovine pulmonary arterial endothelial cells in our developed projection-based SIM system, which applies a computer controlled digital micromirror device for fast fringe generation and multicolor light-emitting diodes for illumination. The merit of the system incorporated with the proposed algorithm allows for a low excitation intensity fluorescence imaging even less than 1 W/cm2, which is beneficial for the long-term, in vivo superresolved imaging of live cells and tissues. PMID:27653935
CMOS Image Sensor with a Built-in Lane Detector.
Hsiao, Pei-Yung; Cheng, Hsien-Chein; Huang, Shih-Shinh; Fu, Li-Chen
2009-01-01
This work develops a new current-mode mixed signal Complementary Metal-Oxide-Semiconductor (CMOS) imager, which can capture images and simultaneously produce vehicle lane maps. The adopted lane detection algorithm, which was modified to be compatible with hardware requirements, can achieve a high recognition rate of up to approximately 96% under various weather conditions. Instead of a Personal Computer (PC) based system or embedded platform system equipped with expensive high performance chip of Reduced Instruction Set Computer (RISC) or Digital Signal Processor (DSP), the proposed imager, without extra Analog to Digital Converter (ADC) circuits to transform signals, is a compact, lower cost key-component chip. It is also an innovative component device that can be integrated into intelligent automotive lane departure systems. The chip size is 2,191.4 × 2,389.8 μm, and the package uses 40 pin Dual-In-Package (DIP). The pixel cell size is 18.45 × 21.8 μm and the core size of photodiode is 12.45 × 9.6 μm; the resulting fill factor is 29.7%.
NASA Astrophysics Data System (ADS)
Darazi, R.; Gouze, A.; Macq, B.
2009-01-01
Reproducing the natural, real scenes we see in the everyday world is becoming more and more popular, and stereoscopic and multi-view techniques are used to this end. However, because more information must be displayed, supporting technologies such as digital compression are required to ensure storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the existing correlation between the two images. This is done by the design of an efficient transform that reduces the existing redundancy in the stereo image pair. This approach was inspired by the Lifting Scheme (LS). The novelty in our work is that the prediction step is replaced by a hybrid step consisting of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for lossless and for lossy coding. Experimental results show improvement in terms of performance and complexity compared to recently proposed methods.
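The lifting-style predict/update idea can be sketched as follows, with a single global disparity standing in for full disparity compensation and no luminance correction; the step is exactly invertible, which is what makes lossless coding possible.

```python
# Hedged sketch of a lifting-style stereo step: predict the right view from a disparity-
# compensated left view, keep the residual (detail) and an averaged band (approximation).
# The single global disparity and the absence of luminance correction are simplifications.
import numpy as np

def stereo_lifting_step(left, right, disparity):
    predicted = np.roll(left, -disparity, axis=1)   # crude disparity compensation
    detail = right.astype(float) - predicted        # prediction (high-pass) band
    approx = left.astype(float) + 0.5 * detail      # update (low-pass) band
    return approx, detail

def stereo_lifting_inverse(approx, detail, disparity):
    left = approx - 0.5 * detail
    right = np.roll(left, -disparity, axis=1) + detail
    return left, right                              # exact reconstruction of the pair
```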
[A digital micromirror device-based Hadamard transform near infrared spectrometer].
Liu, Jia; Chen, Fen-Fei; Liao, Cheng-Sheng; Xu, Qian; Zeng, Li-Bo; Wu, Qiong-Shui
2011-10-01
A Hadamard transform near infrared spectrometer based on a digital micromirror device was constructed. The optical signal is collected by an optical fiber, a grating is used for light diffraction, a digital micromirror device (DMD) is applied in place of traditional mechanical Hadamard masks for optical modulation, and an InGaAs near infrared detector is used as the optical sensor. The original spectrum is recovered by fast Hadamard transform algorithms. The advantages of the spectrometer, such as high resolution, signal-to-noise ratio, stability, sensitivity and response speed, were demonstrated by experiments, which indicated that it is very suitable for oil and food-safety applications.
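A sketch of the Hadamard demultiplexing step is given below: with +/-1 Hadamard masks, the spectrum is recovered by applying the Hadamard matrix to the measurement vector and rescaling. The mask encoding (0/1 versus +/-1) and detector calibration are simplified assumptions.

```python
# Hedged sketch of recovering a spectrum from Hadamard-multiplexed measurements.
# Mask generation on the DMD and detector calibration are not modelled.
import numpy as np
from scipy.linalg import hadamard

def recover_spectrum(measurements, order):
    """measurements: length-'order' detector readings taken with +/-1 Hadamard masks."""
    H = hadamard(order)                 # order must be a power of two
    return H @ measurements / order     # H @ H = order * I, so this inverts the multiplexing

# Round-trip example with a synthetic 128-channel spectrum.
n = 128
true_spectrum = np.random.rand(n)
readings = hadamard(n) @ true_spectrum          # simulated multiplexed measurements
assert np.allclose(recover_spectrum(readings, n), true_spectrum)
```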
Zarella, Mark D; Breen, David E; Plagov, Andrei; Garcia, Fernando U
2015-01-01
Hematoxylin and eosin (H&E) staining is ubiquitous in pathology practice and research. As digital pathology has evolved, the reliance of quantitative methods that make use of H&E images has similarly expanded. For example, cell counting and nuclear morphometry rely on the accurate demarcation of nuclei from other structures and each other. One of the major obstacles to quantitative analysis of H&E images is the high degree of variability observed between different samples and different laboratories. In an effort to characterize this variability, as well as to provide a substrate that can potentially mitigate this factor in quantitative image analysis, we developed a technique to project H&E images into an optimized space more appropriate for many image analysis procedures. We used a decision tree-based support vector machine learning algorithm to classify 44 H&E stained whole slide images of resected breast tumors according to the histological structures that are present. This procedure takes an H&E image as an input and produces a classification map of the image that predicts the likelihood of a pixel belonging to any one of a set of user-defined structures (e.g., cytoplasm, stroma). By reducing these maps into their constituent pixels in color space, an optimal reference vector is obtained for each structure, which identifies the color attributes that maximally distinguish one structure from other elements in the image. We show that tissue structures can be identified using this semi-automated technique. By comparing structure centroids across different images, we obtained a quantitative depiction of H&E variability for each structure. This measurement can potentially be utilized in the laboratory to help calibrate daily staining or identify troublesome slides. Moreover, by aligning reference vectors derived from this technique, images can be transformed in a way that standardizes their color properties and makes them more amenable to image processing.
Biomedical imaging and sensing using flatbed scanners.
Göröcs, Zoltán; Ozcan, Aydogan
2014-09-07
In this Review, we provide an overview of flatbed scanner based biomedical imaging and sensing techniques. The extremely large imaging field-of-view (e.g., ~600-700 cm(2)) of these devices coupled with their cost-effectiveness provide unique opportunities for digital imaging of samples that are too large for regular optical microscopes, and for collection of large amounts of statistical data in various automated imaging or sensing tasks. Here we give a short introduction to the basic features of flatbed scanners also highlighting the key parameters for designing scientific experiments using these devices, followed by a discussion of some of the significant examples, where scanner-based systems were constructed to conduct various biomedical imaging and/or sensing experiments. Along with mobile phones and other emerging consumer electronics devices, flatbed scanners and their use in advanced imaging and sensing experiments might help us transform current practices of medicine, engineering and sciences through democratization of measurement science and empowerment of citizen scientists, science educators and researchers in resource limited settings.
Biomedical Imaging and Sensing using Flatbed Scanners
Göröcs, Zoltán; Ozcan, Aydogan
2014-01-01
In this Review, we provide an overview of flatbed scanner based biomedical imaging and sensing techniques. The extremely large imaging field-of-view (e.g., ~600–700 cm2) of these devices coupled with their cost-effectiveness provide unique opportunities for digital imaging of samples that are too large for regular optical microscopes, and for collection of large amounts of statistical data in various automated imaging or sensing tasks. Here we give a short introduction to the basic features of flatbed scanners also highlighting the key parameters for designing scientific experiments using these devices, followed by a discussion of some of the significant examples, where scanner-based systems were constructed to conduct various biomedical imaging and/or sensing experiments. Along with mobile phones and other emerging consumer electronics devices, flatbed scanners and their use in advanced imaging and sensing experiments might help us transform current practices of medicine, engineering and sciences through democratization of measurement science and empowerment of citizen scientists, science educators and researchers in resource limited settings. PMID:24965011
Image detection and compression for memory efficient system analysis
NASA Astrophysics Data System (ADS)
Bayraktar, Mustafa
2015-02-01
Advances in digital signal processing have been progressing towards efficient use of memory and computation. Both of these factors can be exploited by using feasible image storage techniques that compute the minimum information of an image, which speeds up later processing. The Scale Invariant Feature Transform (SIFT) can be utilized to estimate and retrieve an image. In computer vision, SIFT can be implemented to recognize an image by comparing its key features against saved SIFT keypoint descriptors. The main advantage of SIFT is that it not only removes redundant information from an image but also reduces the key points by matching their orientation and adding them together in different windows of the image [1]. Another key property of this approach is that it works more efficiently on highly contrasted images, because its design is based on collecting key points from the contrast shades of the image.
Trelease, Robert B
2016-11-01
Until the late-twentieth century, primary anatomical sciences education was relatively unenhanced by advanced technology and dependent on the mainstays of printed textbooks, chalkboard- and photographic projection-based classroom lectures, and cadaver dissection laboratories. But over the past three decades, diffusion of innovations in computer technology transformed the practices of anatomical education and research, along with other aspects of work and daily life. Increasing adoption of first-generation personal computers (PCs) in the 1980s paved the way for the first practical educational applications, and visionary anatomists foresaw the usefulness of computers for teaching. While early computers lacked high-resolution graphics capabilities and interactive user interfaces, applications with video discs demonstrated the practicality of programming digital multimedia linking descriptive text with anatomical imaging. Desktop publishing established that computers could be used for producing enhanced lecture notes, and commercial presentation software made it possible to give lectures using anatomical and medical imaging, as well as animations. Concurrently, computer processing supported the deployment of medical imaging modalities, including computed tomography, magnetic resonance imaging, and ultrasound, that were subsequently integrated into anatomy instruction. Following its public birth in the mid-1990s, the World Wide Web became the ubiquitous multimedia networking technology underlying the conduct of contemporary education and research. Digital video, structural simulations, and mobile devices have been more recently applied to education. Progressive implementation of computer-based learning methods interacted with waves of ongoing curricular change, and such technologies have been deemed crucial for continuing medical education reforms, providing new challenges and opportunities for anatomical sciences educators. Anat Sci Educ 9: 583-602. © 2016 American Association of Anatomists.
Code of Federal Regulations, 2012 CFR
2012-01-01
... recognized as complete words or numbers. (iv) The system must preserve the initial image (including both... the system. (3) Requirements applicable to a system based on digital images. For systems based on the storage of digital images, the system must provide accessibility to any digital image in the system. The...
Code of Federal Regulations, 2011 CFR
2011-01-01
... recognized as complete words or numbers. (iv) The system must preserve the initial image (including both... the system. (3) Requirements applicable to a system based on digital images. For systems based on the storage of digital images, the system must provide accessibility to any digital image in the system. The...
Code of Federal Regulations, 2010 CFR
2010-01-01
... recognized as complete words or numbers. (iv) The system must preserve the initial image (including both... the system. (3) Requirements applicable to a system based on digital images. For systems based on the storage of digital images, the system must provide accessibility to any digital image in the system. The...
Code of Federal Regulations, 2014 CFR
2014-01-01
... recognized as complete words or numbers. (iv) The system must preserve the initial image (including both... the system. (3) Requirements applicable to a system based on digital images. For systems based on the storage of digital images, the system must provide accessibility to any digital image in the system. The...
Code of Federal Regulations, 2013 CFR
2013-01-01
... recognized as complete words or numbers. (iv) The system must preserve the initial image (including both... the system. (3) Requirements applicable to a system based on digital images. For systems based on the storage of digital images, the system must provide accessibility to any digital image in the system. The...
Hierarchical brain mapping via a generalized Dirichlet solution for mapping brain manifolds
NASA Astrophysics Data System (ADS)
Joshi, Sarang C.; Miller, Michael I.; Christensen, Gary E.; Banerjee, Ayan; Coogan, Tom; Grenander, Ulf
1995-08-01
In this paper we present a coarse-to-fine approach for the transformation of digital anatomical textbooks from the ideal to the individual that unifies the work on landmark deformations and volume-based transformation. The hierarchical approach is linked to the biological problem itself, arising from the various kinds of information provided by anatomists. This information is in the form of points, lines, surfaces and sub-volumes, corresponding to 0-, 1-, 2-, and 3-dimensional sub-manifolds respectively. The algorithm is driven by these sub-manifolds. We follow the approach that the highest dimensional transformation results from the solution of a sequence of lower dimensional problems driven by successive refinements or partitions of the images into various biologically meaningful sub-structures.
Color correction pipeline optimization for digital cameras
NASA Astrophysics Data System (ADS)
Bianco, Simone; Bruna, Arcangelo R.; Naccari, Filippo; Schettini, Raimondo
2013-04-01
The processing pipeline of a digital camera converts the RAW image acquired by the sensor to a representation of the original scene that should be as faithful as possible. There are mainly two modules responsible for the color-rendering accuracy of a digital camera: the former is the illuminant estimation and correction module, and the latter is the color matrix transformation aimed to adapt the color response of the sensor to a standard color space. These two modules together form what may be called the color correction pipeline. We design and test new color correction pipelines that exploit different illuminant estimation and correction algorithms that are tuned and automatically selected on the basis of the image content. Since the illuminant estimation is an ill-posed problem, illuminant correction is not error-free. An adaptive color matrix transformation module is optimized, taking into account the behavior of the first module in order to alleviate the amplification of color errors. The proposed pipelines are tested on a publicly available dataset of RAW images. Experimental results show that exploiting the cross-talks between the modules of the pipeline can lead to a higher color-rendition accuracy.
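A minimal sketch of the two-module color correction pipeline described above, assuming a simple gray-world estimator as a stand-in for the paper's content-adaptive illuminant estimation and an illustrative, non-camera-specific 3×3 matrix as a stand-in for the optimized color matrix transformation.

```python
import numpy as np

def gray_world_gains(raw):
    """Estimate per-channel gains assuming the average scene color is gray."""
    means = raw.reshape(-1, 3).mean(axis=0)
    return means.mean() / means  # gains that equalize the channel averages

def color_correct(raw, matrix):
    """Apply illuminant correction, then a 3x3 color matrix (sensor -> output space)."""
    balanced = raw * gray_world_gains(raw)   # white balance
    corrected = balanced @ matrix.T          # color space transformation
    return np.clip(corrected, 0.0, 1.0)

# Illustrative placeholder matrix, not a calibrated camera matrix.
M = np.array([[ 1.6, -0.4, -0.2],
              [-0.3,  1.5, -0.2],
              [-0.1, -0.6,  1.7]])

raw = np.random.rand(480, 640, 3)  # stand-in for a demosaiced RAW image in [0, 1]
out = color_correct(raw, M)
```

In the paper the two modules are tuned jointly so that the matrix compensates for illuminant-estimation errors; here they are applied independently only to show where each module acts.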
An object-based approach for tree species extraction from digital orthophoto maps
NASA Astrophysics Data System (ADS)
Jamil, Akhtar; Bayram, Bulent
2018-05-01
Tree segmentation is an active and ongoing research area in the field of photogrammetry and remote sensing. It is especially challenging because of both intra-class and inter-class similarities among various tree species. In this study, we exploited various statistical features for the extraction of hazelnut trees from 1 : 5000 scaled digital orthophoto maps. Initially, the non-vegetation areas were eliminated using the traditional normalized difference vegetation index (NDVI), followed by the application of mean shift segmentation to transform the pixels into meaningful homogeneous objects. In order to eliminate false positives, morphological opening and closing were employed on candidate objects. A number of heuristics were also derived to eliminate unwanted effects, such as shadows and bounding box aspect ratios, before passing the objects to the classification stage. Finally, a knowledge-based decision tree was constructed to distinguish the hazelnut trees from the rest of the objects, which include man-made objects and other types of vegetation. We evaluated the proposed methodology on 10 sample orthophoto maps obtained from Giresun province in Turkey. The manually digitized hazelnut tree boundaries were taken as reference data for accuracy assessment. Both manually digitized and segmented tree borders were converted into binary images and the differences were calculated. According to the obtained results, the proposed methodology achieved an overall accuracy of more than 85 % for all sample images.
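A minimal sketch of the vegetation masking and morphological cleanup steps, assuming the red and near-infrared bands of the orthophoto are available as 2-D arrays; the NDVI threshold and structuring-element radius below are illustrative, not the values used in the study, and the mean shift segmentation and decision-tree stages are omitted.

```python
import numpy as np
from skimage.morphology import binary_opening, binary_closing, disk

def ndvi(nir, red, eps=1e-6):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red + eps)

def vegetation_mask(nir, red, threshold=0.2, radius=3):
    """Threshold NDVI to drop non-vegetation, then clean the mask morphologically."""
    mask = ndvi(nir, red) > threshold
    mask = binary_opening(mask, disk(radius))   # remove small false positives
    mask = binary_closing(mask, disk(radius))   # fill small holes inside tree crowns
    return mask
```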
3CCD image segmentation and edge detection based on MATLAB
NASA Astrophysics Data System (ADS)
He, Yong; Pan, Jiazhi; Zhang, Yun
2006-09-01
This research aimed to identify weeds among crops at an early stage of field operation using image-processing technology. 3CCD images offer a greater value difference between weed and crop regions than ordinary digital images taken by common cameras: the camera has three channels (green, red, infrared) that capture the same area, and the three images can be composed into one, which facilitates the segmentation of different areas. With the image-processing toolkit in MATLAB, the different areas in the image can be segmented clearly. As edge detection is the first and a very important step in image processing, the results of different processing methods were compared. In particular, using the wavelet packet transform toolkit in MATLAB, an image was preprocessed and then its edge extracted, yielding a more clearly cut edge image. The segmentation methods include operations such as erosion, dilation and other algorithms to preprocess the images. It is of great importance to segment different areas in digital images in the field in real time for application in precision farming, to save energy, herbicide and many other materials. At present, large-scale software such as MATLAB on a PC was used, but the computation can be reduced and integrated into a small embedded system, which means that the application of this technique in agricultural engineering is feasible and of great economic value.
Automatic localization of the nipple in mammograms using Gabor filters and the Radon transform
NASA Astrophysics Data System (ADS)
Chakraborty, Jayasree; Mukhopadhyay, Sudipta; Rangayyan, Rangaraj M.; Sadhu, Anup; Azevedo-Marques, P. M.
2013-02-01
The nipple is an important landmark in mammograms. Detection of the nipple is useful for alignment and registration of mammograms in computer-aided diagnosis of breast cancer. In this paper, a novel approach is proposed for automatic detection of the nipple based on the oriented patterns of the breast tissues present in mammograms. The Radon transform is applied to the oriented patterns obtained by a bank of Gabor filters to detect the linear structures related to the tissue patterns. The detected linear structures are then used to locate the nipple position using the characteristics of convergence of the tissue patterns towards the nipple. The performance of the method was evaluated with 200 scanned-film images from the mini-MIAS database and 150 digital radiography (DR) images from a local database. Average errors of 5.84 mm and 6.36 mm were obtained with respect to the reference nipple location marked by a radiologist for the mini-MIAS and the DR images, respectively.
Image transfer protocol in progressively increasing resolution
NASA Technical Reports Server (NTRS)
Percival, Jeffrey W. (Inventor); White, Richard L. (Inventor)
1999-01-01
A method of transferring digital image data over a communication link transforms and orders the data so that, as data is received by a receiving station, a low detail version of the image is immediately generated with later transmissions of data providing progressively greater detail in this image. User instructions are accepted, limiting the ultimate resolution of the image or suspending enhancement of the image except in certain user defined regions. When a low detail image is requested followed by a request for a high detailed version of the same image, the originally transmitted data of the low resolution image is not discarded or retransmitted but used with later data to improve the originally transmitted image. Only a single copy of the transformed image need be retained by the transmitting device in order to satisfy requests for different amounts of image detail.
Enhancing security of fingerprints through contextual biometric watermarking.
Noore, Afzel; Singh, Richa; Vatsa, Mayank; Houck, Max M
2007-07-04
This paper presents a novel digital watermarking technique using face and demographic text data as multiple watermarks for verifying the chain of custody and protecting the integrity of a fingerprint image. The watermarks are embedded in selected texture regions of a fingerprint image using discrete wavelet transform. Experimental results show that modifications in these locations are visually imperceptible and maintain the minutiae details. The integrity of the fingerprint image is verified through the high matching scores obtained from an automatic fingerprint identification system. There is also a high degree of visual correlation between the embedded images, and the extracted images from the watermarked fingerprint. The degree of similarity is computed using pixel-based metrics and human visual system metrics. The results also show that the proposed watermarked fingerprint and the extracted images are resilient to common attacks such as compression, filtering, and noise.
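A minimal sketch of wavelet-domain watermark embedding, assuming additive embedding of a binary watermark in one detail subband of a single-level Haar DWT; the paper embeds face and demographic text watermarks in selected texture regions of the fingerprint, which this simplified example does not reproduce.

```python
import numpy as np
import pywt

def embed_watermark(image, watermark_bits, alpha=2.0):
    """Embed a binary watermark additively in the diagonal detail band (HH)
    of a single-level 2-D Haar DWT, then reconstruct the watermarked image."""
    LL, (LH, HL, HH) = pywt.dwt2(image.astype(float), 'haar')
    flat = HH.ravel()
    bits = np.resize(np.asarray(watermark_bits), flat.size)  # repeat/trim bits to fit
    flat += alpha * (2.0 * bits - 1.0)                        # +/- alpha per bit
    HH = flat.reshape(HH.shape)
    return pywt.idwt2((LL, (LH, HL, HH)), 'haar')
```

Detection in such additive schemes typically correlates the same subband of a test image against the bit pattern; the embedding strength alpha trades imperceptibility against robustness.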
NASA Astrophysics Data System (ADS)
Lechartier, Audrey; Martin, Guilhem; Comby, Solène; Roussel-Dherbey, Francine; Deschamps, Alexis; Mantel, Marc; Meyer, Nicolas; Verdier, Marc; Veron, Muriel
2017-01-01
The influence of the martensitic transformation on microscale plastic strain heterogeneity of a duplex stainless steel has been investigated. Microscale strain heterogeneities were measured by digital image correlation during an in situ tensile test within the SEM. The martensitic transformation was monitored in situ during tensile testing by high-energy synchrotron X-ray diffraction. A clear correlation is shown between the plasticity-induced transformation of austenite to martensite and the development of plastic strain heterogeneities at the phase level.
Kernel-aligned multi-view canonical correlation analysis for image recognition
NASA Astrophysics Data System (ADS)
Su, Shuzhi; Ge, Hongwei; Yuan, Yun-Hao
2016-09-01
Existing kernel-based correlation analysis methods mainly adopt a single kernel in each view. However, only a single kernel is usually insufficient to characterize nonlinear distribution information of a view. To solve the problem, we transform each original feature vector into a 2-dimensional feature matrix by means of kernel alignment, and then propose a novel kernel-aligned multi-view canonical correlation analysis (KAMCCA) method on the basis of the feature matrices. Our proposed method can simultaneously employ multiple kernels to better capture the nonlinear distribution information of each view, so that correlation features learned by KAMCCA can have well discriminating power in real-world image recognition. Extensive experiments are designed on five real-world image datasets, including NIR face images, thermal face images, visible face images, handwritten digit images, and object images. Promising experimental results on the datasets have manifested the effectiveness of our proposed method.
Lakshmi, C; Thenmozhi, K; Rayappan, John Bosco Balaguru; Amirtharajan, Rengarajan
2018-06-01
Digital Imaging and Communications in Medicine (DICOM) is one among the significant formats used worldwide for the representation of medical images. Undoubtedly, medical-image security plays a crucial role in telemedicine applications. Merging encryption and watermarking in medical-image protection paves the way for enhancing the authentication and safer transmission over open channels. In this context, the present work on DICOM image encryption has employed a fuzzy chaotic map for encryption and the Discrete Wavelet Transform (DWT) for watermarking. The proposed approach overcomes the limitation of the Arnold transform, one of the most utilised confusion mechanisms in image ciphering. Various metrics have substantiated the effectiveness of the proposed medical-image encryption algorithm. Copyright © 2018 Elsevier B.V. All rights reserved.
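For reference, a minimal sketch of the Arnold cat map confusion step whose limitations the paper addresses: it merely permutes the pixels of a square image, and repeated application eventually restores the original, a periodicity weakness that chaotic-map confusion avoids. The iteration count is illustrative.

```python
import numpy as np

def arnold_scramble(img, iterations=1):
    """Arnold cat map on a square N x N image: (x, y) -> (x + y, x + 2y) mod N.
    Each iteration permutes the pixel positions; the map is periodic in N."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold map needs a square image"
    out = img.copy()
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    for _ in range(iterations):
        out = out[(x + y) % n, (x + 2 * y) % n]
    return out
```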
Corner-point criterion for assessing nonlinear image processing imagers
NASA Astrophysics Data System (ADS)
Landeau, Stéphane; Pigois, Laurent; Foing, Jean-Paul; Deshors, Gilles; Swiathy, Greggory
2017-10-01
Range performance modeling of optronics imagers attempts to characterize the ability to resolve details in the image. Today, digital image processing is systematically used in conjunction with the optoelectronic system to correct its defects or to exploit tiny detection signals to increase performance. In order to characterize such processing, which has adaptive and non-linear properties, it becomes necessary to stimulate the imagers with test patterns whose properties are similar to those of actual scene images, in terms of dynamic range, contours, texture and singular points. This paper presents an approach based on a Corner-Point (CP) resolution criterion, derived from the Probability of Correct Resolution (PCR) of binary fractal patterns. The fundamental principle lies in the correct perception of the direction of a single minority-value pixel among the majority value of a 2×2 pixel block. The evaluation procedure considers the actual image as its multi-resolution CP transformation, taking the role of Ground Truth (GT). After a spatial registration between the degraded image and the original one, the degradation is statistically measured by comparing the GT with the CP transformation of the degraded image, in terms of localized PCR at the region of interest. The paper defines this CP criterion and presents the developed evaluation techniques, such as the measurement of the number of CPs resolved on the target, and the CP transformation and its inverse, which make it possible to reconstruct an image of the perceived CPs. Then, this criterion is compared with the standard Johnson criterion in the case of linear blur and noise degradation. The evaluation of an imaging system integrating an image display and visual perception is considered, by proposing an analysis scheme combining two methods: a CP measurement for the highly non-linear part (imaging) with a real-signature test target, and conventional methods for the more linear part (displaying). The application to color imaging is proposed, with a discussion about the choice of the working color space depending on the type of image enhancement processing used.
Scott, Ian A; Sullivan, Clair; Staib, Andrew
2018-05-24
Objective In an era of rapid digitisation of Australian hospitals, practical guidance is needed in how to successfully implement electronic medical records (EMRs) as both a technical innovation and a major transformative change in clinical care. The aim of the present study was to develop a checklist that clearly and comprehensively defines the steps that best prepare hospitals for EMR implementation and digital transformation. Methods The checklist was developed using a formal methodological framework comprised of: literature reviews of relevant issues; an interactive workshop involving a multidisciplinary group of digital leads from Queensland hospitals; a draft document based on literature and workshop proceedings; and a review and feedback from senior clinical leads. Results The final checklist comprised 19 questions, 13 related to EMR implementation and six to digital transformation. Questions related to the former included organisational considerations (leadership, governance, change leaders, implementation plan), technical considerations (vendor choice, information technology and project management teams, system and hardware alignment with clinician workflows, interoperability with legacy systems) and training (user training, post-go-live contingency plans, roll-out sequence, staff support at point of care). Questions related to digital transformation included cultural considerations (clinically focused vision statement and communication strategy, readiness for change surveys), management of digital disruption syndromes and plans for further improvement in patient care (post-go-live optimisation of digital system, quality and benefit evaluation, ongoing digital innovation). Conclusion This evidence-based, field-tested checklist provides guidance to hospitals planning EMR implementation and separates readiness for EMR from readiness for digital transformation. What is known about the topic? Many hospitals throughout Australia have implemented, or are planning to implement, hospital wide electronic medical records (EMRs) with varying degrees of functionality. Few hospitals have implemented a complete end-to-end digital system with the ability to bring about major transformation in clinical care. Although the many challenges in implementing EMRs have been well documented, they have not been incorporated into an evidence-based, field-tested checklist that can practically assist hospitals in preparing for EMR implementation as both a technical innovation and a vehicle for major digital transformation of care. What does this paper add? This paper outlines a 19-question checklist that was developed using a formal methodological framework comprising literature review of relevant issues, proceedings from an interactive workshop involving a multidisciplinary group of digital leads from hospitals throughout Queensland, including three hospitals undertaking EMR implementation and one hospital with complete end-to-end EMR, and review of a draft checklist by senior clinical leads within a statewide digital healthcare improvement network. The checklist distinguishes between issues pertaining to EMR as a technical innovation and EMR as a vehicle for digital transformation of patient care. What are the implications for practitioners? Successful implementation of a hospital-wide EMR requires senior managers, clinical leads, information technology teams and project management teams to fully address key operational and strategic issues. 
Using an issues checklist may help prevent any one issue being inadvertently overlooked or underemphasised in the planning and implementation stages, and ensure the EMR is fully adopted and optimally used by clinician users in an ongoing digital transformation of care.
The use of immunohistochemistry for biomarker assessment--can it compete with other technologies?
Dunstan, Robert W; Wharton, Keith A; Quigley, Catherine; Lowe, Amanda
2011-10-01
A morphology-based assay such as immunohistochemistry (IHC) should be a highly effective means to define the expression of a target molecule of interest, especially if the target is a protein. However, over the past decade, IHC as a platform for biomarkers has been challenged by more quantitative molecular assays with reference standards but that lack morphologic context. For IHC to be considered a "top-tier" biomarker assay, it must provide truly quantitative data on par with non-morphologic assays, which means it needs to be run with reference standards. However, creating such standards for IHC will require optimizing all aspects of tissue collection, fixation, section thickness, morphologic criteria for assessment, staining processes, digitization of images, and image analysis. This will also require anatomic pathology to evolve from a discipline that is descriptive to one that is quantitative. A major step in this transformation will be replacing traditional ocular microscopes with computer monitors and whole slide images, for without digitization, there can be no accurate quantitation; without quantitation, there can be no standardization; and without standardization, the value of morphology-based IHC assays will not be realized.
Chaotic CDMA watermarking algorithm for digital image in FRFT domain
NASA Astrophysics Data System (ADS)
Liu, Weizhong; Yang, Wentao; Feng, Zhuoming; Zou, Xuecheng
2007-11-01
A digital image-watermarking algorithm based on the fractional Fourier transform (FRFT) domain is presented by utilizing a chaotic CDMA technique in this paper. As a popular and typical transmission technique, CDMA has many advantages such as privacy, anti-jamming and low power spectral density, which can provide robustness against image distortions and malicious attempts to remove or tamper with the watermark. A super-hybrid chaotic map, with good auto-correlation and cross-correlation characteristics, is adopted to produce many quasi-orthogonal codes (QOC) that can replace the periodic PN-code used in a traditional CDMA system. The watermark data are divided into many segments, each corresponding to a different chaotic QOC; these are modulated into CDMA watermark data and embedded into the low-frequency amplitude coefficients of the FRFT domain of the cover image. During watermark detection, each chaotic QOC extracts its corresponding watermark segment by calculating correlation coefficients between the chaotic QOC and the watermarked data of the detected image. The CDMA technique not only enhances the robustness of the watermark but also compresses the data of the modulated watermark. Experimental results show that the watermarking algorithm has good performance in three aspects: imperceptibility, anti-attack robustness and security.
Joint Transform Correlation for face tracking: elderly fall detection application
NASA Astrophysics Data System (ADS)
Katz, Philippe; Aron, Michael; Alfalou, Ayman
2013-03-01
In this paper, an iterative tracking algorithm based on a non-linear JTC (Joint Transform Correlator) architecture and enhanced by a digital image processing method is proposed and validated. This algorithm is based on the computation of a correlation plane where the reference image is updated at each frame. For that purpose, we use the JTC technique in real time to track a patient (target image) in a room fitted with a video camera. The correlation plane is used to localize the target image in the current video frame (frame i). Then, the reference image to be exploited in the next frame (frame i+1) is updated according to the previous one (frame i). In an effort to validate our algorithm, our work is divided into two parts: (i) a large study based on different sequences with several situations and different JTC parameters is carried out in order to quantify their effects on the tracking performance (decimation, non-linearity coefficient, size of the correlation plane, size of the region of interest...); (ii) the tracking algorithm is integrated into an application of elderly fall detection. The first reference image is a face detected by means of Haar descriptors, and then localized in the new video image thanks to our tracking method. In order to avoid a bad update of the reference frame, a method based on a comparison of image intensity histograms is proposed and integrated in our algorithm, as sketched below. This step ensures robust tracking of the reference frame. This article focuses on the optimisation and evaluation of the face-tracking step. A supplementary step of fall detection, based on vertical acceleration and position, will be added and studied in further work.
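A minimal sketch of the histogram-comparison safeguard for the reference update, assuming 8-bit grayscale patches; the function names and the correlation threshold are hypothetical choices for illustration, not values taken from the paper.

```python
import numpy as np

def histogram_correlation(patch_a, patch_b, bins=64):
    """Correlation coefficient between the intensity histograms of two patches."""
    ha, _ = np.histogram(patch_a, bins=bins, range=(0, 255), density=True)
    hb, _ = np.histogram(patch_b, bins=bins, range=(0, 255), density=True)
    return np.corrcoef(ha, hb)[0, 1]

def update_reference(reference, candidate, threshold=0.8):
    """Accept the new reference patch only if its histogram stays close to the
    previous one; otherwise keep the old reference (guards against drift)."""
    if histogram_correlation(reference, candidate) >= threshold:
        return candidate
    return reference
```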
A new stationary gridline artifact suppression method based on the 2D discrete wavelet transform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Hui, E-mail: corinna@seu.edu.cn; Key Laboratory of Computer Network and Information Integration; Centre de Recherche en Information Biomédicale sino-français, Laboratoire International Associé, Inserm, Université de Rennes 1, Rennes 35000
2015-04-15
Purpose: In digital x-ray radiography, an antiscatter grid is inserted between the patient and the image receptor to reduce scattered radiation. If the antiscatter grid is used in a stationary way, gridline artifacts will appear in the final image. In most of the gridline removal image processing methods, the useful information with spatial frequencies close to that of the gridline is usually lost or degraded. In this study, a new stationary gridline suppression method is designed to preserve more of the useful information. Methods: The method is as follows. The input image is first recursively decomposed into several smaller subimages using a multiscale 2D discrete wavelet transform. The decomposition process stops when the gridline signal is found to be greater than a threshold in one or several of these subimages using a gridline detection module. An automatic Gaussian band-stop filter is then applied to the detected subimages to remove the gridline signal. Finally, the restored image is achieved using the corresponding 2D inverse discrete wavelet transform. Results: The processed images show that the proposed method can remove the gridline signal efficiently while maintaining the image details. The spectra of a 1D Fourier transform of the processed images demonstrate that, compared with some existing gridline removal methods, the proposed method has better information preservation after the removal of the gridline artifacts. Additionally, the performance speed is relatively high. Conclusions: The experimental results demonstrate the efficiency of the proposed method. Compared with some existing gridline removal methods, the proposed method can preserve more information within an acceptable execution time.
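A minimal sketch of the decompose / band-stop / reconstruct idea for a single wavelet level, assuming horizontal gridlines and a gridline frequency and notch width that are already known; the paper detects both automatically and recurses over several decomposition levels, which this sketch omits.

```python
import numpy as np
import pywt

def gaussian_band_stop(sub, center_freq, width):
    """Attenuate a narrow vertical-frequency band (the gridline) in one wavelet
    subimage. center_freq and width are in cycles/pixel; here they are assumed
    known, whereas the paper estimates them automatically."""
    F = np.fft.fftshift(np.fft.fft2(sub))
    rows = sub.shape[0]
    fy = np.fft.fftshift(np.fft.fftfreq(rows))[:, None]    # vertical frequency axis
    notch = 1.0 - np.exp(-((np.abs(fy) - center_freq) ** 2) / (2 * width ** 2))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * notch)))

def remove_gridline(image, center_freq=0.35, width=0.02):
    """Single-level DWT, band-stop the horizontal-detail band, reconstruct."""
    LL, (LH, HL, HH) = pywt.dwt2(image.astype(float), 'db4')
    LH = gaussian_band_stop(LH, center_freq, width)
    return pywt.idwt2((LL, (LH, HL, HH)), 'db4')
```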
Computer-aided diagnosis of breast microcalcifications based on dual-tree complex wavelet transform.
Jian, Wushuai; Sun, Xueyan; Luo, Shuqian
2012-12-19
Digital mammography is the most reliable imaging modality for breast carcinoma diagnosis, and breast micro-calcifications are regarded as one of the most important signs in imaging diagnosis. In this paper, a computer-aided diagnosis (CAD) system is presented for breast micro-calcifications based on the dual-tree complex wavelet transform (DT-CWT) to assist radiologists in a manner similar to double reading. Firstly, 25 abnormal ROIs were extracted manually according to the center and diameter of the lesions and 25 normal ROIs were selected randomly. Then micro-calcifications were segmented by combining space and frequency domain techniques. We extracted three texture features based on wavelet (Haar, DB4, DT-CWT) transforms. In total, 14 descriptors were introduced to define the characteristics of the suspicious micro-calcifications. Principal Component Analysis (PCA) was used to transform these descriptors into a compact and efficient vector expression. A Support Vector Machine (SVM) classifier was used to classify potential micro-calcifications. Finally, we used the receiver operating characteristic (ROC) curve and the free-response operating characteristic (FROC) curve to evaluate the performance of the CAD system. The results of SVM classification based on different wavelets show that DT-CWT has better performance. Compared with other results, the DT-CWT method achieved accuracies of 96% and 100% for the classification of normal and abnormal ROIs, and for the classification of benign and malignant micro-calcifications, respectively. In FROC analysis, our CAD system achieved a sensitivity of 83.5% at 1.85 false positives per image on the clinical dataset. Compared with general wavelets, DT-CWT could describe the features more effectively, and our CAD system had a competitive performance.
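A minimal sketch of the feature-extraction and classification stages, assuming ordinary separable wavelets via PyWavelets as a stand-in for the dual-tree complex wavelet transform and random arrays as stand-ins for the ROIs; the specific 14-descriptor set, the PCA step, and the ROC/FROC evaluation from the paper are omitted.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_texture_features(roi, wavelet='db4', level=2):
    """Mean absolute value, standard deviation and energy of each detail subband;
    a simplified stand-in for the DT-CWT texture descriptors used in the paper."""
    coeffs = pywt.wavedec2(roi.astype(float), wavelet, level=level)
    feats = []
    for detail_level in coeffs[1:]:
        for band in detail_level:
            feats += [np.mean(np.abs(band)), np.std(band), np.sum(band ** 2)]
    return np.array(feats)

# Placeholder data: 50 random 64x64 "ROIs", half labeled normal, half abnormal.
rois = [np.random.rand(64, 64) for _ in range(50)]
labels = np.array([0] * 25 + [1] * 25)
X = np.vstack([wavelet_texture_features(r) for r in rois])
clf = SVC(kernel='rbf').fit(X, labels)
```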
Computer-aided diagnosis of breast microcalcifications based on dual-tree complex wavelet transform
2012-01-01
Background Digital mammography is the most reliable imaging modality for breast carcinoma diagnosis, and breast micro-calcifications are regarded as one of the most important signs in imaging diagnosis. In this paper, a computer-aided diagnosis (CAD) system is presented for breast micro-calcifications based on the dual-tree complex wavelet transform (DT-CWT) to assist radiologists in a manner similar to double reading. Methods Firstly, 25 abnormal ROIs were extracted manually according to the center and diameter of the lesions and 25 normal ROIs were selected randomly. Then micro-calcifications were segmented by combining space and frequency domain techniques. We extracted three texture features based on wavelet (Haar, DB4, DT-CWT) transforms. In total, 14 descriptors were introduced to define the characteristics of the suspicious micro-calcifications. Principal Component Analysis (PCA) was used to transform these descriptors into a compact and efficient vector expression. A Support Vector Machine (SVM) classifier was used to classify potential micro-calcifications. Finally, we used the receiver operating characteristic (ROC) curve and the free-response operating characteristic (FROC) curve to evaluate the performance of the CAD system. Results The results of SVM classification based on different wavelets show that DT-CWT has better performance. Compared with other results, the DT-CWT method achieved accuracies of 96% and 100% for the classification of normal and abnormal ROIs, and for the classification of benign and malignant micro-calcifications, respectively. In FROC analysis, our CAD system achieved a sensitivity of 83.5% at 1.85 false positives per image on the clinical dataset. Conclusions Compared with general wavelets, DT-CWT could describe the features more effectively, and our CAD system had a competitive performance. PMID:23253202
An application of the MPP to the interactive manipulation of stereo images of digital terrain models
NASA Technical Reports Server (NTRS)
Pol, Sanjay; Mcallister, David; Davis, Edward
1987-01-01
Massively Parallel Processor algorithms were developed for the interactive manipulation of flat shaded digital terrain models defined over grids. The emphasis is on real time manipulation of stereo images. Standard graphics transformations are applied to a 128 x 128 grid of elevations followed by shading and a perspective projection to produce the right eye image. The surface is then rendered using a simple painter's algorithm for hidden surface removal. The left eye image is produced by rotating the surface 6 degs about the viewer's y axis followed by a perspective projection and rendering of the image as described above. The left and right eye images are then presented on a graphics device using standard stereo technology. Performance evaluations and comparisons are presented.
Virtual slides in peer reviewed, open access medical publication
2011-01-01
Background Application of virtual slides (VS), the digitization of complete glass slides, is in its infancy, both in routine diagnostic surgical pathology and in related tissue-based tasks such as education and scientific publication. Approach Electronic publication in pathology offers new features of scientific communication that cannot be obtained by conventional paper-based journals. Most of these features are based upon completely open or partly directed interaction between the reader and the system that distributes the article. One of these interactions can be applied to microscopic images, allowing the reader to navigate and magnify the presented images. VS and interactive Virtual Microscopy (VM) are tools to increase the scientific value of microscopic images. Technology and Performance The open access journal Diagnostic Pathology http://www.diagnosticpathology.org has existed for about five years. It is a peer reviewed journal that publishes all types of scientific contributions, including original scientific work, case reports and review articles. In addition to digitized still images, the authors of appropriate articles are requested to submit the underlying glass slides to an institution (DiagnomX.eu, and Leica.com) for digitization and documentation. The images are stored in a separate image data bank which is adequately linked to the article. The normal review process is not involved. Both processes (peer review and VS acquisition) are performed contemporaneously in order to minimize a potential publication delay. VS are not provided with a DOI index (digital object identifier). The first articles that include VS were published in March 2011. Results and Perspectives Several logistic constraints had to be overcome before the first articles including VS could be published. Step by step, an automated acquisition and distribution system had to be implemented and linked to the corresponding article. The acceptance of VS is high among readers as well as authors. Of specific value are the increased confidence in and reputation of the authors, as well as the information presented to the reader. Additional associated functions such as access to author-owned related image collections, reader-controlled automated image measurements and image transformations are in preparation. Virtual Slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1232133347629819. PMID:22182763
Registration methods for nonblind watermark detection in digital cinema applications
NASA Astrophysics Data System (ADS)
Nguyen, Philippe; Balter, Raphaele; Montfort, Nicolas; Baudry, Severine
2003-06-01
Digital watermarking may be used to enforce copyright protection of digital cinema by embedding in each projected movie a unique identifier (fingerprint). By identifying the source of illegal copies, watermarking will thus incite movie theatre managers to enforce copyright protection, in particular by preventing people from coming in with a handycam. We propose here a non-blind watermarking method to improve watermark detection on very impaired sequences. We first present a study of the picture impairments caused by projection on a screen, followed by acquisition with a handycam. We show that the images undergo geometric deformations, which are fully described by a projective geometry model. The sequence also undergoes spatial and temporal luminance variation. Based on this study and on the impairment models which follow, we propose a method to match the retrieved sequence to the original one. First, temporal registration is performed by comparing the average luminance variation on both sequences. To compensate for geometric transformations, we use paired points from both sequences, obtained by applying a feature point detector. The matching of the feature points then enables retrieval of the geometric transform parameters. Tests show that watermark retrieval on rectified sequences is greatly improved.
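A minimal sketch of the geometric-registration step under the projective model, assuming 8-bit grayscale frames and using ORB features with RANSAC homography fitting as modern stand-ins for the unspecified feature-point detector in the paper; temporal registration and luminance compensation are not shown.

```python
import cv2
import numpy as np

def rectify_captured_frame(original_frame, captured_frame):
    """Match feature points between the original and the camcorder-captured frame,
    fit a projective transform (homography) with RANSAC, and warp the captured
    frame back onto the original geometry before watermark detection."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(original_frame, None)
    k2, d2 = orb.detectAndCompute(captured_frame, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(dst, src, cv2.RANSAC, 3.0)   # captured -> original
    h, w = original_frame.shape[:2]
    return cv2.warpPerspective(captured_frame, H, (w, h))
```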
Range and Panoramic Image Fusion Into a Textured Range Image for Culture Heritage Documentation
NASA Astrophysics Data System (ADS)
Bila, Z.; Reznicek, J.; Pavelka, K.
2013-07-01
This paper deals with the fusion of range and panoramic images, where the range image is acquired by a 3D laser scanner and the panoramic image is acquired with a digital still camera mounted on a panoramic head and tripod. The fused resulting dataset, called a "textured range image", provides more reliable information about the investigated object for conservators and historians than using both datasets separately. A simple example of the fusion of range and panoramic images, both obtained in St. Francis Xavier Church in the town of Opařany, is given here. Firstly, we describe the process of data acquisition, then the processing of both datasets into a proper format for the subsequent fusion, and finally the process of fusion itself. The process of fusion can be divided into two main parts: transformation and remapping. In the first, transformation, part, both images are related by matching similar features detected on both images with a proper detector, which results in a transformation matrix enabling transformation of the range image onto the panoramic image. Then, the range data are remapped from the range image space into the panoramic image space and stored as an additional "range" channel. The process of image fusion is validated by comparing similar features extracted from both datasets.
Visualization Support for an Army Reconnaissance Mission
1994-02-01
transform an aerial photographic image into an orthophoto image. In this process, the horizontal coordinates and elevation of a point on the ground are... to the corresponding horizontal position on the orthophoto. The result is a new digital image without relief displacement. This orthophoto image will... process, the orthophotos were generated. The generation of one orthophoto for every other photo was sufficient to ensure complete coverage of the test...
Three-dimensional Imaging and Scanning: Current and Future Applications for Pathology
Farahani, Navid; Braun, Alex; Jutt, Dylan; Huffman, Todd; Reder, Nick; Liu, Zheng; Yagi, Yukako; Pantanowitz, Liron
2017-01-01
Imaging is vital for the assessment of physiologic and phenotypic details. In the past, biomedical imaging was heavily reliant on analog, low-throughput methods, which would produce two-dimensional images. However, newer, digital, and high-throughput three-dimensional (3D) imaging methods, which rely on computer vision and computer graphics, are transforming the way biomedical professionals practice. 3D imaging has been useful in diagnostic, prognostic, and therapeutic decision-making for the medical and biomedical professions. Herein, we summarize current imaging methods that enable optimal 3D histopathologic reconstruction: Scanning, 3D scanning, and whole slide imaging. Briefly mentioned are emerging platforms, which combine robotics, sectioning, and imaging in their pursuit to digitize and automate the entire microscopy workflow. Finally, both current and emerging 3D imaging methods are discussed in relation to current and future applications within the context of pathology. PMID:28966836
Bini, Jason; Spain, James; Nehal, Kishwer; Hazelwood, Vikki; DiMarzio, Charles; Rajadhyaksha, Milind
2011-01-01
Confocal mosaicing microscopy enables rapid imaging of large areas of fresh tissue, without the processing that is necessary for conventional histology. Mosaicing may offer a means to perform rapid histology at the bedside. A possible barrier toward clinical acceptance is that the mosaics are based on a single mode of grayscale contrast and appear black and white, whereas histology is based on two stains (hematoxylin for nuclei, eosin for cellular cytoplasm and dermis) and appears purple and pink. Toward addressing this barrier, we report advances in digital staining: fluorescence mosaics that show only nuclei, are digitally stained purple and overlaid on reflectance mosaics, which show only cellular cytoplasm and dermis, and are digitally stained pink. With digital staining, the appearance of confocal mosaics mimics the appearance of histology. Using multispectral analysis and color matching functions, red, green, and blue (RGB) components of hematoxylin and eosin stains in tissue were determined. The resulting RGB components were then applied in a linear algorithm to transform fluorescence and reflectance contrast in confocal mosaics to the absorbance contrast seen in pathology. Optimization of staining with acridine orange showed improved quality of digitally stained mosaics, with good correlation to the corresponding histology. PMID:21806269
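A minimal sketch of the digital-staining idea: fluorescence (nuclei) and reflectance (cytoplasm/dermis) mosaics are combined into an H&E-like color image with a Beer-Lambert style mixing model. The RGB absorbance vectors and weighting constants below are illustrative placeholders, not the values the authors derived from multispectral analysis and color matching functions; inputs are assumed normalized to [0, 1].

```python
import numpy as np

# Illustrative RGB absorbances for hematoxylin (purple) and eosin (pink).
HEMATOXYLIN_RGB = np.array([0.65, 0.70, 0.29])
EOSIN_RGB       = np.array([0.07, 0.99, 0.11])

def digital_hne(fluorescence, reflectance, k_h=1.0, k_e=0.6):
    """Map the fluorescence (nuclei) mosaic to a hematoxylin-like purple and the
    reflectance (cytoplasm/dermis) mosaic to an eosin-like pink, combining them
    multiplicatively as two absorbing stains on a white background."""
    f = fluorescence[..., None]   # add a color axis for broadcasting
    r = reflectance[..., None]
    rgb = np.exp(-k_h * f * HEMATOXYLIN_RGB) * np.exp(-k_e * r * EOSIN_RGB)
    return np.clip(rgb, 0.0, 1.0)
```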
A simplification of the fractional Hartley transform applied to image security system in phase
NASA Astrophysics Data System (ADS)
Jimenez, Carlos J.; Vilardy, Juan M.; Perez, Ronal
2017-01-01
In this work we develop a new encryption system for encoded image in phase using the fractional Hartley transform (FrHT), truncation operations and random phase masks (RPMs). We introduce a simplification of the FrHT with the purpose of computing this transform in an efficient and fast way. The security of the encryption system is increased by using nonlinear operations, such as the phase encoding and the truncation operations. The image to encrypt (original image) is encoded in phase and the truncation operations applied in the encryption-decryption system are the amplitude and phase truncations. The encrypted image is protected by six keys, which are the two fractional orders of the FrHTs, the two RPMs and the two pseudorandom code images generated by the amplitude and phase truncation operations. All these keys have to be correct for a proper recovery of the original image in the decryption system. We present digital results that confirm our approach.
Images in Language: Metaphors and Metamorphoses. Visual Learning. Volume 1
ERIC Educational Resources Information Center
Benedek, Andras, Ed.; Nyiri, Kristof, Ed.
2011-01-01
Learning and teaching are faced with radically new challenges in today's rapidly changing world and its deeply transformed communicational environment. We are living in an era of images. Contemporary visual technology--film, video, interactive digital media--is promoting but also demanding a new approach to education: the age of visual learning…
Sieroń-Stołtny, Karolina; Kwiatek, Sebastian; Latos, Wojciech; Kawczyk-Krupka, Aleksandra; Cieślar, Grzegorz; Stanek, Agata; Ziaja, Damian; Bugaj, Andrzej M; Sieroń, Aleksander
2012-03-01
Oesophageal papilloma and Barrett's oesophagus are benign lesions known as risk factors for carcinoma in the oesophagus. Therefore, it is important to diagnose these early changes before neoplastic transformation. Autofluorescence endoscopy is a fast and non-invasive method of imaging tissues based on the natural fluorescence of endogenous fluorophores. The aim of this study was to demonstrate the diagnostic utility of autofluorescence endoscopy with digital image processing in the histological diagnosis of endoscopic findings in the upper digestive tract, primarily in the imaging of oesophageal papilloma. During the retrospective analysis of about 200 endoscopic procedures in the upper digestive tract, 67 cases of benign, precancerous or cancerous changes were found. The white light endoscopy (WLE) image, single-channel (red or green) autofluorescence images, as well as green and red fluorescence intensities in the two-modal fluorescence image and the red-to-green (R/G) ratio (Numerical Colour Value, NCV) were correlated with histopathologic results. The NCV analysis in autofluorescence imaging (AFI) showed an increased R/G ratio in cancerous changes in 96% of cases vs. 85% in WLE. Simultaneous analysis with digital image processing allowed us to diagnose suspicious tissue as cancerous in all cases. Barrett's metaplasia was confirmed in 90% vs. 79% (AFI vs. WLE), and in 98% with digital image processing. In benign lesions, WLE allowed us to exclude tissue as malignant in 85% of cases. Using autofluorescence endoscopy, the R/G ratio was increased in only 10% of benign changes, causing the picture to be interpreted as suspicious, but when both methods were used together, 97.5% of cases were excluded as malignancies. Mean R/G ratios were estimated to be 2.5 in cancers, 1.25 in Barrett's metaplasia and 0.75 in benign changes, and the differences were statistically significant (p=0.04). Autofluorescence imaging is a sensitive method for diagnosing precancerous and cancerous early stages of diseases located in the oesophagus. Especially in two-modal imaging including white light endoscopy, autofluorescence imaging with digital image processing seems to be a useful modality for early diagnostics. Also, in the observation of papilloma changes, it facilitates differentiation between neoplastic and benign lesions and more accurate estimation of the risk of potential malignancy. Copyright © 2011 Elsevier B.V. All rights reserved.
A software to digital image processing to be used in the voxel phantom development.
Vieira, J W; Lima, F R A
2009-11-15
Anthropomorphic models used in computational dosimetry, also denominated phantoms, are based on digital images recorded from scanning of real people by Computed Tomography (CT) or Magnetic Resonance Imaging (MRI). Voxel phantom construction requires computational processing for transformations of image formats, stacking of two-dimensional (2-D) images into three-dimensional (3-D) matrices, image sampling and quantization, image enhancement, restoration and segmentation, among others. A researcher in computational dosimetry will rarely find all of these capabilities in a single piece of software, and this difficulty almost always slows the pace of research or leads to the use, sometimes inadequate, of alternative tools. The need to integrate the several tasks mentioned above to obtain an image that can be used in an exposure computational model motivated the development of the Digital Image Processing (DIP) software, mainly to solve particular problems in dissertations and theses developed by members of the Grupo de Pesquisa em Dosimetria Numérica (GDN/CNPq). Because of this particular objective, the software uses the Portuguese language in its implementations and interfaces. This paper presents the second version of DIP, whose main changes are the more formal organization of menus and menu items, and a menu for digital image segmentation. Currently, DIP contains the menus Fundamentos, Visualizações, Domínio Espacial, Domínio de Frequências, Segmentações and Estudos. Each menu contains items and sub-items with functionalities that usually take an image as input and produce an image or an attribute as output. DIP reads, edits and writes binary files containing the 3-D matrix corresponding to a stack of axial images of a given geometry, which can be a human body or another volume of interest. It can also read any type of computational image and perform conversions. When the task involves only an output image, this is saved as a JPEG file with the Windows default settings; when it involves an image stack, the output binary file is denominated SGI (Simulações Gráficas Interativas, Interactive Graphic Simulations), an acronym already used in other publications of the GDN/CNPq.
Pursley, Randall H.; Salem, Ghadi; Devasahayam, Nallathamby; Subramanian, Sankaran; Koscielniak, Janusz; Krishna, Murali C.; Pohida, Thomas J.
2006-01-01
The integration of modern data acquisition and digital signal processing (DSP) technologies with Fourier transform electron paramagnetic resonance (FT-EPR) imaging at radiofrequencies (RF) is described. The FT-EPR system operates at a Larmor frequency (Lf) of 300 MHz to facilitate in vivo studies. This relatively low frequency Lf, in conjunction with our ~10 MHz signal bandwidth, enables the use of direct free induction decay time-locked subsampling (TLSS). This particular technique provides advantages by eliminating the traditional analog intermediate frequency downconversion stage along with the corresponding noise sources. TLSS also results in manageable sample rates that facilitate the design of DSP-based data acquisition and image processing platforms. More specifically, we utilize a high-speed field programmable gate array (FPGA) and a DSP processor to perform advanced real-time signal and image processing. The migration to a DSP-based configuration offers the benefits of improved EPR system performance, as well as increased adaptability to various EPR system configurations (i.e., software configurable systems instead of hardware reconfigurations). The required modifications to the FT-EPR system design are described, with focus on the addition of DSP technologies including the application-specific hardware, software, and firmware developed for the FPGA and DSP processor. The first results of using real-time DSP technologies in conjunction with direct detection bandpass sampling to implement EPR imaging at RF frequencies are presented. PMID:16243552
Optical head tracking for functional magnetic resonance imaging using structured light.
Zaremba, Andrei A; MacFarlane, Duncan L; Tseng, Wei-Che; Stark, Andrew J; Briggs, Richard W; Gopinath, Kaundinya S; Cheshkov, Sergey; White, Keith D
2008-07-01
An accurate motion-tracking technique is needed to compensate for subject motion during functional magnetic resonance imaging (fMRI) procedures. Here, a novel approach to motion metrology is discussed. A structured light pattern specifically coded for digital signal processing is positioned onto a fiduciary of the patient. As the patient undergoes spatial transformations in 6 DoF (degrees of freedom), a high-resolution CCD camera captures successive images for analysis on a computing platform. A high-speed image processing algorithm is used to calculate spatial transformations in a time frame commensurate with patient movements (10-100 ms) and with a precision of at least 0.5 microm for translations and 0.1 deg for rotations.
Application of digital image processing techniques to astronomical imagery 1980
NASA Technical Reports Server (NTRS)
Lorre, J. J.
1981-01-01
Topics include: (1) polar coordinate transformations (M83); (2) multispectral ratios (M82); (3) maximum entropy restoration (M87); (4) automated computation of stellar magnitudes in nebulosity; (5) color and polarization; (6) aliasing.
Digital radiographic imaging: is the dental practice ready?
Parks, Edwin T
2008-04-01
Digital radiographic imaging is slowly, but surely, replacing film-based imaging. It has many advantages over traditional imaging, but the technology also has some drawbacks. The author presents an overview of the types of digital image receptors available, image enhancement software and the range of costs for the new technology. PRACTICE IMPLICATIONS. The expenses associated with converting to digital radiographic imaging are considerable. The purpose of this article is to provide the clinician with an overview of digital radiographic imaging technology so that he or she can be an informed consumer when evaluating the numerous digital systems in the marketplace.
Pattern-Recognition Processor Using Holographic Photopolymer
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin; Cammack, Kevin
2006-01-01
The proposed joint-transform optical correlator (JTOC) would be capable of operating as a real-time pattern-recognition processor. The key correlation-filter reading/writing medium of this JTOC would be an updateable holographic photopolymer. The high-resolution, high-speed characteristics of this photopolymer would enable pattern-recognition processing to occur at a speed three orders of magnitude greater than that of state-of-the-art digital pattern-recognition processors. There are many potential applications in biometric personal identification (e.g., using images of fingerprints and faces) and nondestructive industrial inspection. In order to appreciate the advantages of the proposed JTOC, it is necessary to understand the principle of operation of a conventional JTOC. In a conventional JTOC (shown in the upper part of the figure), a collimated laser beam passes through two side-by-side spatial light modulators (SLMs). One SLM displays a real-time input image to be recognized. The other SLM displays a reference image from a digital memory. A Fourier-transform lens is placed at its focal distance from the SLM plane, and a charge-coupled device (CCD) image detector is placed at the back focal plane of the lens for use as a square-law recorder. Processing takes place in two stages. In the first stage, the CCD records the interference pattern between the Fourier transforms of the input and reference images, and the pattern is then digitized and saved in a buffer memory. In the second stage, the reference SLM is turned off and the interference pattern is fed back to the input SLM. The interference pattern thus becomes Fourier-transformed, yielding at the CCD an image representing the joint-transform correlation between the input and reference images. This image contains a sharp correlation peak when the input and reference images are matched. The drawbacks of a conventional JTOC are the following: The CCD has low spatial resolution and is not an ideal square-law detector for the purpose of holographic recording of interference fringes. A typical state-of-the-art CCD has a pixel-pitch-limited resolution of about 100 lines/mm. In contrast, the holographic photopolymer to be used in the proposed JTOC offers a resolution > 2,000 lines/mm. In addition to being disadvantageous in itself, the low resolution of the CCD causes overlap of a DC term and the desired correlation term in the output image. This overlap severely limits the correlation signal-to-noise ratio. The two-stage nature of the process limits the achievable throughput rate. A further limit is imposed by the low frame rate (typical video rates) of low- and medium-cost commercial CCDs.
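A minimal digital simulation of the two-stage JTC principle described above, assuming equally sized grayscale input and reference images as NumPy arrays; it reproduces the square-law recording of the joint power spectrum and the second Fourier transform that yields the correlation plane, but none of the optical or photopolymer specifics.

```python
import numpy as np

def joint_transform_correlation(input_img, reference_img):
    """Place the input and reference side by side, record the squared magnitude
    of their joint Fourier transform (the square-law stage), then Fourier
    transform that power spectrum to obtain the correlation plane. A matched
    input/reference pair produces sharp off-axis correlation peaks."""
    joint = np.hstack([input_img, reference_img])
    power_spectrum = np.abs(np.fft.fft2(joint)) ** 2          # square-law recording
    correlation = np.fft.fftshift(np.abs(np.fft.fft2(power_spectrum)))
    return correlation
```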
Dual function seal: visualized digital signature for electronic medical record systems.
Yu, Yao-Chang; Hou, Ting-Wei; Chiang, Tzu-Chiang
2012-10-01
A digital signature is an important cryptographic technology used to provide integrity and non-repudiation in electronic medical record systems (EMRS), and it is required by law. However, digital signatures normally appear in forms unrecognizable to medical staff, which may reduce the trust of medical staff accustomed to handwritten signatures or seals. Therefore, in this paper we propose a dual function seal to extend user trust from a traditional seal to a digital signature. The proposed dual function seal is a prototype that combines the traditional seal and the digital signature. With this prototype, medical personnel can not only put a seal on paper but also generate a visualized digital signature for electronic medical records. Medical personnel can then look at the visualized digital signature and directly know which medical personnel generated it, just like with a traditional seal. The discrete wavelet transform (DWT) is used as an image processing method to generate the visualized digital signature, and the peak signal-to-noise ratio (PSNR) is calculated to verify that the distortions of all converted images are beyond human recognition; the PSNR values of our converted images range from 70 dB to 80 dB. Signature recoverability is also tested in this paper to ensure that the visualized digital signature is verifiable. A simulated EMRS is implemented to show how the visualized digital signature can be integrated into an EMRS.
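A minimal sketch of the PSNR computation used to check that the seal-to-signature conversion stays below the threshold of visual perception, assuming 8-bit images; values around 70-80 dB, as reported above, correspond to a very small mean squared error.

```python
import numpy as np

def psnr(original, converted, peak=255.0):
    """Peak signal-to-noise ratio in dB between an original image and its
    converted (signature-bearing) version."""
    mse = np.mean((original.astype(float) - converted.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```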
NASA Astrophysics Data System (ADS)
Xie, Xi; Kan, Qianhua; Kang, Guozheng; Li, Jian; Qiu, Bo; Yu, Chao
2016-04-01
The strain field of a super-elastic NiTi shape memory alloy (SMA) and its variation during uniaxial cyclic tension-unloading were observed by a non-contact digital image correlation method, and then the transformation domains and their evolutions were indirectly investigated and discussed. It is seen that the super-elastic NiTi (SMA) exhibits a remarkable localized deformation and the transformation domains evolve periodically with the repeated cyclic tension-unloading within the first several cycles. However, the evolutions of transformation domains at the stage of stable cyclic transformation depend on applied peak stress: when the peak stress is low, no obvious transformation band is observed and the strain field is nearly uniform; when the peak stress is large enough, obvious transformation bands occur due to the residual martensite caused by the prevention of enriched dislocations to the reverse transformation from induced martensite to austenite. Temperature variations measured by an infrared thermal imaging method further verifies the formation and evolution of transformation domains.
Bassan, Paul; Weida, Miles J; Rowlette, Jeremy; Gardner, Peter
2014-08-21
Chemical imaging in the field of vibrational spectroscopy is developing into a promising tool to complement digital histopathology. Applications include screening of biopsy tissue via automated recognition of tissue/cell type and disease state based on the chemical information from the spectrum. For integration into clinical practice, data acquisition needs to be sped up to implement a rack-based system where specimens are rapidly imaged, competing with current visible-light scanners that can scan hundreds of slides overnight. Fourier transform infrared (FTIR) imaging with focal plane array (FPA) detectors is currently the state-of-the-art instrumentation for infrared absorption chemical imaging; however, recent developments in broadly tunable lasers in the mid-IR range are considered the most promising candidates for next-generation microscopes. In this paper we test a prototype quantum cascade laser (QCL) based spectral imaging microscope with a focus on discrete frequency chemical imaging. We demonstrate how a protein chemical image of the amide I band (1655 cm(-1)) of a 2 × 2.4 cm(2) breast tissue microarray (TMA) containing over 200 cores can be measured in 9 min. This result indicates that applications requiring chemical images from a few key wavelengths would be ideally served by laser-based microscopes.
Living specimen tomography by digital holographic microscopy: morphometry of testate amoeba
NASA Astrophysics Data System (ADS)
Charrière, Florian; Pavillon, Nicolas; Colomb, Tristan; Depeursinge, Christian; Heger, Thierry J.; Mitchell, Edward A. D.; Marquet, Pierre; Rappaz, Benjamin
2006-08-01
This paper presents an optical diffraction tomography technique based on digital holographic microscopy. Quantitative 2-dimensional phase images are acquired for regularly spaced angular positions of the specimen covering a total angle of π, allowing 3-dimensional quantitative refractive index distributions to be built by an inverse Radon transform. A 20x magnification allows a resolution better than 3 μm in all three dimensions, with an accuracy better than 0.01 for the refractive index measurements. This technique is, to our knowledge, applied here for the first time to a living specimen (testate amoeba, Protista). Morphometric measurements are extracted from the tomographic reconstructions, showing that the commonly used method for testate amoeba biovolume evaluation leads to systematic underestimation by about 50%.
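A minimal sketch of the reconstruction step, assuming straight-ray propagation so that each quantitative phase projection is a line integral of the refractive-index contrast and a filtered back-projection (inverse Radon transform) can invert it; the wavelength, pixel size and medium index are placeholders, not the instrument's calibration, and the full diffraction-tomography treatment is not reproduced.

```python
import numpy as np
from skimage.transform import iradon

def reconstruct_index_slice(phase_sinogram, angles_deg, wavelength, pixel_size, n_medium):
    """Reconstruct one slice of the refractive-index distribution from quantitative
    phase projections. Each column of phase_sinogram is a 1-D phase profile (radians)
    measured at one rotation angle."""
    delta_opl = phase_sinogram * wavelength / (2.0 * np.pi)        # optical path difference
    slice_opl = iradon(delta_opl, theta=angles_deg, filter_name='ramp')
    return n_medium + slice_opl / pixel_size                        # index contrast above the medium
```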
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rozario, T; Bereg, S; Chiu, T
Purpose: In order to locate lung tumors on projection images without internal markers, a digitally reconstructed radiograph (DRR) is created and compared with projection images. Since lung tumors always move and their locations change on projection images while they are static on DRRs, a special DRR (background DRR) is generated based on modified anatomy from which lung tumors are removed. In addition, global discrepancies exist between DRRs and projections due to their different image originations, scattering, and noises. This adversely affects comparison accuracy. A simple but efficient comparison algorithm is reported. Methods: This method divides global images into a matrix of small tiles and similarities will be evaluated by calculating normalized cross correlation (NCC) between corresponding tiles on projections and DRRs. The tile configuration (tile locations) will be automatically optimized to keep the tumor within a single tile which has bad matching with the corresponding DRR tile. A pixel based linear transformation will be determined by linear interpolations of tile transformation results obtained during tile matching. The DRR will be transformed to the projection image level and subtracted from it. The resulting subtracted image now contains only the tumor. A DRR of the tumor is registered to the subtracted image to locate the tumor. Results: This method has been successfully applied to kV fluoro images (about 1000 images) acquired on a Vero (Brainlab) for dynamic tumor tracking in phantom studies. Radiation opaque markers are implanted and used as ground truth for tumor positions. Although other organs and bony structures introduce strong signals superimposed on tumors at some angles, this method accurately locates tumors on every projection over 12 gantry angles. The maximum error is less than 2.6 mm while the total average error is 1.0 mm. Conclusion: This algorithm is capable of detecting tumors without markers despite strong background signals.
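A minimal sketch of the tile-based NCC comparison between a projection and its background DRR, assuming equally sized 2-D arrays and a fixed, unoptimized tile grid; the automatic tile-configuration optimization and the pixel-wise linear transformation described above are not shown.

```python
import numpy as np

def normalized_cross_correlation(a, b):
    """Zero-mean normalized cross correlation between two equally sized tiles."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

def tile_similarity_map(projection, drr, tile=64):
    """NCC between corresponding tiles of the projection and the background DRR;
    the tile containing the (moving) tumor matches poorly and can be flagged."""
    rows, cols = projection.shape
    scores = np.zeros((rows // tile, cols // tile))
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            r, c = i * tile, j * tile
            scores[i, j] = normalized_cross_correlation(
                projection[r:r + tile, c:c + tile], drr[r:r + tile, c:c + tile])
    return scores
```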
Optic disc detection and boundary extraction in retinal images.
Basit, A; Fraz, Muhammad Moazam
2015-04-10
With the development of digital image processing, analysis and modeling techniques, automatic retinal image analysis is emerging as an important screening tool for early detection of ophthalmologic disorders such as diabetic retinopathy and glaucoma. In this paper, a robust method for optic disc detection and extraction of the optic disc boundary is proposed to help in the development of computer-assisted diagnosis and treatment of such ophthalmic diseases. The proposed method is based on morphological operations, smoothing filters, and the marker-controlled watershed transform. Internal and external markers are used to first modify the gradient magnitude image, and the watershed transformation is then applied to this modified gradient magnitude image for boundary extraction. This method has shown significant improvement over existing methods in terms of detection and boundary extraction of the optic disc. The proposed method has an optic disc detection success rate of 100%, 100%, 100% and 98.9% for the DRIVE, Shifa, CHASE_DB1, and DIARETDB1 databases, respectively. The optic disc boundary detection achieved an average spatial overlap of 61.88%, 70.96%, 45.61%, and 54.69% for these databases, respectively, which is higher than that of current methods.
The recognition of graphical patterns invariant to geometrical transformation of the models
NASA Astrophysics Data System (ADS)
Ileană, Ioan; Rotar, Corina; Muntean, Maria; Ceuca, Emilian
2010-11-01
When a pattern recognition system is used for image recognition (in robot vision, handwritten character recognition, etc.), the system must be able to identify an object regardless of its size or position in the image. The problem of invariant recognition can be approached in several fundamental ways. One may apply the similarity criterion used in associative recall. Alternatively, the original pattern is replaced by a mathematical transform that assures some invariance (e.g., the magnitude of the two-dimensional Fourier transform is translation invariant, and the magnitude of the Mellin transform is scale invariant). In a different approach, the original pattern is represented by a set of features, each of them coded independently of the position, orientation, or scale of the pattern. Generally speaking, it is easy to obtain invariance with respect to one transformation group, but it is difficult to obtain simultaneous invariance to rotation, translation, and scale. In this paper we analyze some methods to achieve invariant recognition of images, particularly digit images. A large number of experiments were carried out, and the conclusions are presented in the paper.
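The translation-invariance property mentioned above can be checked numerically: for a cyclic shift, the magnitude of the 2D Fourier transform is unchanged. A small, self-contained demonstration on synthetic data (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(5, -9), axis=(0, 1))      # cyclic translation

mag = np.abs(np.fft.fft2(img))
mag_shifted = np.abs(np.fft.fft2(shifted))
print(np.allclose(mag, mag_shifted))                    # True: magnitude spectrum is shift invariant
```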
NASA Astrophysics Data System (ADS)
Taillebot, V.; Lexcellent, C.; Vacher, P.
2012-03-01
The thermomechanical behavior of shape memory alloys is now well understood. However, a hindrance to their sustainable use is the lack of knowledge of their fracture behavior. With the aim of filling this gap, fracture tests on edge-cracked NiTi specimens were carried out. Particular attention was paid to determining the phase transformation zones in the vicinity of the crack tip. On the one hand, experimental kinematic fields are observed using digital image correlation, showing strain localization around the crack tip. On the other hand, an analytical prediction, based on a modified equivalent stress criterion and taking into account the asymmetric tension-compression behavior of shape memory alloys, provides the shape and size of the transformation onset zones. Experimental results are in reasonable agreement with our analytical modeling.
Automated road network extraction from high spatial resolution multi-spectral imagery
NASA Astrophysics Data System (ADS)
Zhang, Qiaoping
For the last three decades, the Geomatics Engineering and Computer Science communities have considered automated road network extraction from remotely-sensed imagery to be a challenging and important research topic. The main objective of this research is to investigate the theory and methodology of automated feature extraction for image-based road database creation, refinement or updating, and to develop a series of algorithms for road network extraction from high resolution multi-spectral imagery. The proposed framework for road network extraction from multi-spectral imagery begins with an image segmentation using the k-means algorithm. This step mainly concerns the exploitation of the spectral information for feature extraction. The road cluster is automatically identified using a fuzzy classifier based on a set of predefined road surface membership functions. These membership functions are established based on the general spectral signature of road pavement materials and the corresponding normalized digital numbers on each multi-spectral band. Shape descriptors of the Angular Texture Signature are defined and used to reduce the misclassifications between roads and other spectrally similar objects (e.g., crop fields, parking lots, and buildings). An iterative and localized Radon transform is developed for the extraction of road centerlines from the classified images. The purpose of the transform is to accurately and completely detect the road centerlines. It is able to find short, long, and even curvilinear lines. The input image is partitioned into a set of subset images called road component images. An iterative Radon transform is locally applied to each road component image. At each iteration, road centerline segments are detected based on an accurate estimation of the line parameters and line widths. Three localization approaches are implemented and compared using qualitative and quantitative methods. Finally, the road centerline segments are grouped into a road network. The extracted road network is evaluated against a reference dataset using a line segment matching algorithm. The entire process is unsupervised and fully automated. Based on extensive experimentation on a variety of remotely-sensed multi-spectral images, the proposed methodology achieves a moderate success in automating road network extraction from high spatial resolution multi-spectral imagery.
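As a rough illustration of the first step of the framework (grouping multi-spectral pixels with k-means), the sketch below clusters pixel spectra; the band count, cluster count, and data are placeholders rather than the thesis' actual settings.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
bands = rng.random((200, 200, 4))                 # placeholder multi-spectral image (rows, cols, bands)

pixels = bands.reshape(-1, bands.shape[-1])       # one sample per pixel
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(pixels)
segmentation = labels.reshape(bands.shape[:2])    # cluster map; the road cluster would then be
                                                  # identified by a fuzzy classifier
```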
High-definition Fourier transform infrared spectroscopic imaging of prostate tissue
NASA Astrophysics Data System (ADS)
Wrobel, Tomasz P.; Kwak, Jin Tae; Kadjacsy-Balla, Andre; Bhargava, Rohit
2016-03-01
Histopathology forms the gold standard for cancer diagnosis and therapy, and generally relies on manual examination of microscopic structural morphology within tissue. Fourier-Transform Infrared (FT-IR) imaging is an emerging vibrational spectroscopic imaging technique, especially in a High-Definition (HD) format, that provides the spatial specificity of microscopy at magnifications used in diagnostic surgical pathology. While it has been shown for standard imaging that IR absorption by tissue creates a strong signal where the spectrum at each pixel is a quantitative "fingerprint" of the molecular composition of the sample, here we show that this fingerprint also enables direct digital pathology without the need for stains or dyes for HD imaging. An assessment of the potential of HD imaging to improve diagnostic pathology accuracy is presented.
Research on a dem Coregistration Method Based on the SAR Imaging Geometry
NASA Astrophysics Data System (ADS)
Niu, Y.; Zhao, C.; Zhang, J.; Wang, L.; Li, B.; Fan, L.
2018-04-01
Due to systematic errors, especially the horizontal deviations that exist between multi-source, multi-temporal DEMs (Digital Elevation Models), a method for high-precision coregistration is needed. This paper presents a new fast DEM coregistration method based on a given SAR (Synthetic Aperture Radar) imaging geometry to overcome the divergence and time-consumption problems of conventional DEM coregistration methods. First, intensity images are simulated for the two DEMs under the given SAR imaging geometry. 2D (two-dimensional) offsets are estimated in the frequency domain using an intensity cross-correlation operation implemented with the FFT (Fast Fourier Transform), which greatly accelerates the calculation. Next, the transformation function between the two DEMs is obtained via robust least-squares fitting of a 2D polynomial. Accordingly, the two DEMs can be precisely coregistered. Last, two DEMs, i.e., one high-resolution LiDAR (Light Detection and Ranging) DEM and one low-resolution SRTM (Shuttle Radar Topography Mission) DEM, covering the Yangjiao landslide region of Chongqing are taken as an example to test the new method. The results indicate that, in most cases, this new method can achieve not only a result as much as 80 times faster than the minimum elevation difference (Least Z-difference, LZD) DEM registration method, but also more accurate and more reliable results.
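The frequency-domain offset estimation step can be sketched as follows: the circular cross-correlation of two simulated intensity images is computed with FFTs and its peak gives the integer 2D shift. This is a generic illustration, not the paper's code.

```python
import numpy as np

def fft_offset(img, ref):
    """Estimate the integer 2D shift s such that img is approximately np.roll(ref, s)."""
    cc = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    peak = np.array(np.unravel_index(np.argmax(cc), cc.shape))
    for i, n in enumerate(cc.shape):                     # wrap large indices to negative shifts
        if peak[i] > n // 2:
            peak[i] -= n
    return peak

ref = np.zeros((128, 128)); ref[40:60, 50:70] = 1.0      # toy simulated intensity image
img = np.roll(ref, shift=(7, -12), axis=(0, 1))
print(fft_offset(img, ref))                              # [7, -12]
```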
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keller, Brad M.; Nathan, Diane L.; Wang Yan
Purpose: The amount of fibroglandular tissue content in the breast as estimated mammographically, commonly referred to as breast percent density (PD%), is one of the most significant risk factors for developing breast cancer. Approaches to quantify breast density commonly focus on either semiautomated methods or visual assessment, both of which are highly subjective. Furthermore, most studies published to date investigating computer-aided assessment of breast PD% have been performed using digitized screen-film mammograms, while digital mammography is increasingly replacing screen-film mammography in breast cancer screening protocols. Digital mammography imaging generates two types of images for analysis, raw (i.e., 'FOR PROCESSING') and vendor postprocessed (i.e., 'FOR PRESENTATION'), of which postprocessed images are commonly used in clinical practice. Development of an algorithm which effectively estimates breast PD% in both raw and postprocessed digital mammography images would be beneficial in terms of direct clinical application and retrospective analysis. Methods: This work proposes a new algorithm for fully automated quantification of breast PD% based on adaptive multiclass fuzzy c-means (FCM) clustering and support vector machine (SVM) classification, optimized for the imaging characteristics of both raw and processed digital mammography images as well as for individual patient and image characteristics. Our algorithm first delineates the breast region within the mammogram via an automated thresholding scheme to identify background air followed by a straight line Hough transform to extract the pectoral muscle region. The algorithm then applies adaptive FCM clustering based on an optimal number of clusters derived from image properties of the specific mammogram to subdivide the breast into regions of similar gray-level intensity. Finally, a SVM classifier is trained to identify which clusters within the breast tissue are likely fibroglandular, which are then aggregated into a final dense tissue segmentation that is used to compute breast PD%. Our method is validated on a group of 81 women for whom bilateral, mediolateral oblique, raw and processed screening digital mammograms were available, and agreement is assessed with both continuous and categorical density estimates made by a trained breast-imaging radiologist. Results: Strong association between algorithm-estimated and radiologist-provided breast PD% was detected for both raw (r = 0.82, p < 0.001) and processed (r = 0.85, p < 0.001) digital mammograms on a per-breast basis. Stronger agreement was found when overall breast density was assessed on a per-woman basis for both raw (r = 0.85, p < 0.001) and processed (0.89, p < 0.001) mammograms. Strong agreement between categorical density estimates was also seen (weighted Cohen's κ ≥ 0.79). Repeated measures analysis of variance demonstrated no statistically significant differences between the PD% estimates (p > 0.1) due to either presentation of the image (raw vs processed) or method of PD% assessment (radiologist vs algorithm). Conclusions: The proposed fully automated algorithm was successful in estimating breast percent density from both raw and processed digital mammographic images. Accurate assessment of a woman's breast density is critical in order for the estimate to be incorporated into risk assessment models.
These results show promise for the clinical application of the algorithm in quantifying breast density in a repeatable manner, both at time of imaging as well as in retrospective studies.
Keller, Brad M.; Nathan, Diane L.; Wang, Yan; Zheng, Yuanjie; Gee, James C.; Conant, Emily F.; Kontos, Despina
2012-01-01
Purpose: The amount of fibroglandular tissue content in the breast as estimated mammographically, commonly referred to as breast percent density (PD%), is one of the most significant risk factors for developing breast cancer. Approaches to quantify breast density commonly focus on either semiautomated methods or visual assessment, both of which are highly subjective. Furthermore, most studies published to date investigating computer-aided assessment of breast PD% have been performed using digitized screen-film mammograms, while digital mammography is increasingly replacing screen-film mammography in breast cancer screening protocols. Digital mammography imaging generates two types of images for analysis, raw (i.e., “FOR PROCESSING”) and vendor postprocessed (i.e., “FOR PRESENTATION”), of which postprocessed images are commonly used in clinical practice. Development of an algorithm which effectively estimates breast PD% in both raw and postprocessed digital mammography images would be beneficial in terms of direct clinical application and retrospective analysis. Methods: This work proposes a new algorithm for fully automated quantification of breast PD% based on adaptive multiclass fuzzy c-means (FCM) clustering and support vector machine (SVM) classification, optimized for the imaging characteristics of both raw and processed digital mammography images as well as for individual patient and image characteristics. Our algorithm first delineates the breast region within the mammogram via an automated thresholding scheme to identify background air followed by a straight line Hough transform to extract the pectoral muscle region. The algorithm then applies adaptive FCM clustering based on an optimal number of clusters derived from image properties of the specific mammogram to subdivide the breast into regions of similar gray-level intensity. Finally, a SVM classifier is trained to identify which clusters within the breast tissue are likely fibroglandular, which are then aggregated into a final dense tissue segmentation that is used to compute breast PD%. Our method is validated on a group of 81 women for whom bilateral, mediolateral oblique, raw and processed screening digital mammograms were available, and agreement is assessed with both continuous and categorical density estimates made by a trained breast-imaging radiologist. Results: Strong association between algorithm-estimated and radiologist-provided breast PD% was detected for both raw (r = 0.82, p < 0.001) and processed (r = 0.85, p < 0.001) digital mammograms on a per-breast basis. Stronger agreement was found when overall breast density was assessed on a per-woman basis for both raw (r = 0.85, p < 0.001) and processed (0.89, p < 0.001) mammograms. Strong agreement between categorical density estimates was also seen (weighted Cohen's κ ≥ 0.79). Repeated measures analysis of variance demonstrated no statistically significant differences between the PD% estimates (p > 0.1) due to either presentation of the image (raw vs processed) or method of PD% assessment (radiologist vs algorithm). Conclusions: The proposed fully automated algorithm was successful in estimating breast percent density from both raw and processed digital mammographic images. Accurate assessment of a woman's breast density is critical in order for the estimate to be incorporated into risk assessment models. 
These results show promise for the clinical application of the algorithm in quantifying breast density in a repeatable manner, both at time of imaging as well as in retrospective studies. PMID:22894417
Keller, Brad M; Nathan, Diane L; Wang, Yan; Zheng, Yuanjie; Gee, James C; Conant, Emily F; Kontos, Despina
2012-08-01
The amount of fibroglandular tissue content in the breast as estimated mammographically, commonly referred to as breast percent density (PD%), is one of the most significant risk factors for developing breast cancer. Approaches to quantify breast density commonly focus on either semiautomated methods or visual assessment, both of which are highly subjective. Furthermore, most studies published to date investigating computer-aided assessment of breast PD% have been performed using digitized screen-film mammograms, while digital mammography is increasingly replacing screen-film mammography in breast cancer screening protocols. Digital mammography imaging generates two types of images for analysis, raw (i.e., "FOR PROCESSING") and vendor postprocessed (i.e., "FOR PRESENTATION"), of which postprocessed images are commonly used in clinical practice. Development of an algorithm which effectively estimates breast PD% in both raw and postprocessed digital mammography images would be beneficial in terms of direct clinical application and retrospective analysis. This work proposes a new algorithm for fully automated quantification of breast PD% based on adaptive multiclass fuzzy c-means (FCM) clustering and support vector machine (SVM) classification, optimized for the imaging characteristics of both raw and processed digital mammography images as well as for individual patient and image characteristics. Our algorithm first delineates the breast region within the mammogram via an automated thresholding scheme to identify background air followed by a straight line Hough transform to extract the pectoral muscle region. The algorithm then applies adaptive FCM clustering based on an optimal number of clusters derived from image properties of the specific mammogram to subdivide the breast into regions of similar gray-level intensity. Finally, a SVM classifier is trained to identify which clusters within the breast tissue are likely fibroglandular, which are then aggregated into a final dense tissue segmentation that is used to compute breast PD%. Our method is validated on a group of 81 women for whom bilateral, mediolateral oblique, raw and processed screening digital mammograms were available, and agreement is assessed with both continuous and categorical density estimates made by a trained breast-imaging radiologist. Strong association between algorithm-estimated and radiologist-provided breast PD% was detected for both raw (r = 0.82, p < 0.001) and processed (r = 0.85, p < 0.001) digital mammograms on a per-breast basis. Stronger agreement was found when overall breast density was assessed on a per-woman basis for both raw (r = 0.85, p < 0.001) and processed (0.89, p < 0.001) mammograms. Strong agreement between categorical density estimates was also seen (weighted Cohen's κ ≥ 0.79). Repeated measures analysis of variance demonstrated no statistically significant differences between the PD% estimates (p > 0.1) due to either presentation of the image (raw vs processed) or method of PD% assessment (radiologist vs algorithm). The proposed fully automated algorithm was successful in estimating breast percent density from both raw and processed digital mammographic images. Accurate assessment of a woman's breast density is critical in order for the estimate to be incorporated into risk assessment models. These results show promise for the clinical application of the algorithm in quantifying breast density in a repeatable manner, both at time of imaging as well as in retrospective studies.
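A minimal fuzzy c-means sketch on pixel intensities, in the spirit of the clustering step described above, is given below; the parameter values, the scalar feature choice, and all names are illustrative and not taken from the published algorithm.

```python
import numpy as np

def fuzzy_cmeans(values, n_clusters=4, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means on scalar features (e.g., breast-region gray levels).
    Returns cluster centres and the fuzzy membership matrix (n_samples x n_clusters)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(values, dtype=float).reshape(-1, 1)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        centres = (um.T @ x) / um.sum(axis=0)[:, None]     # weighted cluster centres
        d = np.abs(x - centres.T) + 1e-12                  # distances to each centre
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)                  # membership update
    return centres.ravel(), u

# Usage sketch: cluster gray levels inside a (hypothetical) segmented breast region,
# then pass per-cluster statistics to a classifier that flags fibroglandular clusters.
```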
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-08-01
The main purpose of this work is to explore the usefulness of fractal descriptors estimated in multi-resolution domains to characterize biomedical digital image texture. In this regard, three multi-resolution techniques are considered: the well-known discrete wavelet transform (DWT), the empirical mode decomposition (EMD), and the newly introduced variational mode decomposition (VMD). The original image is decomposed by the DWT, EMD, and VMD into different scales. Then, Fourier-spectrum-based fractal descriptors are estimated at specific scales and directions to characterize the image. A support vector machine (SVM) is used to perform supervised classification. The empirical study is applied to the problem of distinguishing between normal brain magnetic resonance images (MRI) and abnormal ones affected by Alzheimer's disease (AD). Our results demonstrate that fractal descriptors estimated in the VMD domain outperform those estimated in the DWT and EMD domains, as well as those estimated directly from the original image.
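One common way to obtain a Fourier-spectrum-based roughness/fractal descriptor, loosely in the spirit of the descriptors used above, is the slope of the radially averaged log power spectrum; the sketch below is generic and not the paper's exact estimator.

```python
import numpy as np

def spectral_slope(img):
    """Slope of the radially averaged log power spectrum of a 2D image,
    usable as a simple Fourier-based texture/fractal descriptor."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    counts = np.maximum(np.bincount(r.ravel()), 1)
    radial = np.bincount(r.ravel(), weights=power.ravel()) / counts
    k = np.arange(1, min(h, w) // 2)                        # skip DC, stay inside Nyquist
    slope, _ = np.polyfit(np.log(k), np.log(radial[k] + 1e-20), 1)
    return slope
```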
NASA Technical Reports Server (NTRS)
1986-01-01
Digital imaging is the computer-processed numerical representation of physical images. Enhancement of images results in easier interpretation. Quantitative digital image analysis by Perceptive Scientific Instruments locates objects within an image and measures them to extract quantitative information. Applications include CAT scanners, radiography, and microscopy in medicine, as well as various industrial and manufacturing uses. The PSICOM 327 performs all digital image analysis functions. It is based on Jet Propulsion Laboratory technology, is accurate, and is cost efficient.
Wavelet transform: fundamentals, applications, and implementation using acousto-optic correlators
NASA Astrophysics Data System (ADS)
DeCusatis, Casimer M.; Koay, J.; Litynski, Daniel M.; Das, Pankaj K.
1995-10-01
In recent years there has been a great deal of interest in the use of wavelets to supplement or replace conventional Fourier transform signal processing. This paper provides a review of wavelet transforms for signal processing applications, and discusses several emerging applications which benefit from the advantages of wavelets. The wavelet transform can be implemented as an acousto-optic correlator; perfect reconstruction of digital signals may also be achieved using acousto-optic finite impulse response filter banks. Acousto-optic image correlators are discussed as a potential implementation of the wavelet transform, since a 1D wavelet filter bank may be encoded as a 2D image. We discuss applications of the wavelet transform including nondestructive testing of materials, biomedical applications in the analysis of EEG signals, and interference excision in spread spectrum communication systems. Computer simulations and experimental results for these applications are also provided.
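The perfect-reconstruction property of digital wavelet filter banks mentioned above can be checked with a purely digital sketch (PyWavelets, synthetic signal); the acousto-optic implementation discussed in the paper is, of course, analog.

```python
import numpy as np
import pywt

t = np.linspace(0.0, 1.0, 1024, endpoint=False)
x = np.sin(2 * np.pi * 12 * t) + 0.3 * np.random.default_rng(0).standard_normal(t.size)

coeffs = pywt.wavedec(x, 'db4', level=4)      # analysis filter bank (multi-level DWT)
x_rec = pywt.waverec(coeffs, 'db4')           # synthesis filter bank
print(np.allclose(x, x_rec[:x.size]))         # True up to numerical precision
```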
Digital Transformation and Disruption of the Health Care Sector: Internet-Based Observational Study.
Herrmann, Maximilian; Boehme, Philip; Mondritzki, Thomas; Ehlers, Jan P; Kavadias, Stylianos; Truebel, Hubert
2018-03-27
Digital innovation, introduced across many industries, is a strong force of transformation. Some industries have seen faster transformation, whereas the health care sector only recently came into focus. A context where digital corporations move into health care, payers strive to keep rising costs at bay, and longer-living patients desire continuously improved quality of care points to a digital and value-based transformation with drastic implications for the health care sector. We tried to operationalize the discussion within the health care sector around digital and disruptive innovation to identify what type of technological enablers, business models, and value networks seem to be emerging from different groups of innovators with respect to their digital transformational efforts. From the Forbes 2000 and CBinsights databases, we identified 100 leading technology, life science, and start-up companies active in the health care sector. Further analysis identified projects from these companies within a digital context that were subsequently evaluated using the following criteria: delivery of patient value, presence of a comprehensive and distinctive underlying business model, solutions provided, and customer needs addressed. Our methodological approach recorded more than 400 projects and collaborations. We identified patterns that show established corporations rely more on incremental innovation that supports their current business models, while start-ups engage their flexibility to explore new market segments with notable transformations of established business models. Thereby, start-ups offer higher promises of disruptive innovation. Additionally, start-ups offer more diversified value propositions addressing broader areas of the health care sector. Digital transformation is an opportunity to accelerate health care performance by lowering cost and improving quality of care. At an economic scale, business models can be strengthened and disruptive innovation models enabled. Corporations should look for collaborations with start-up companies to keep investment costs at bay and off the balance sheet. At the same time, the regulatory knowledge of established corporations might help start-ups to kick off digital disruption in the health care sector. ©Maximilian Herrmann, Philip Boehme, Thomas Mondritzki, Jan P Ehlers, Stylianos Kavadias, Hubert Truebel. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 27.03.2018.
Digital Transformation and Disruption of the Health Care Sector: Internet-Based Observational Study
Mondritzki, Thomas; Ehlers, Jan P; Kavadias, Stylianos
2018-01-01
Background Digital innovation, introduced across many industries, is a strong force of transformation. Some industries have seen faster transformation, whereas the health care sector only recently came into focus. A context where digital corporations move into health care, payers strive to keep rising costs at bay, and longer-living patients desire continuously improved quality of care points to a digital and value-based transformation with drastic implications for the health care sector. Objective We tried to operationalize the discussion within the health care sector around digital and disruptive innovation to identify what type of technological enablers, business models, and value networks seem to be emerging from different groups of innovators with respect to their digital transformational efforts. Methods From the Forbes 2000 and CBinsights databases, we identified 100 leading technology, life science, and start-up companies active in the health care sector. Further analysis identified projects from these companies within a digital context that were subsequently evaluated using the following criteria: delivery of patient value, presence of a comprehensive and distinctive underlying business model, solutions provided, and customer needs addressed. Results Our methodological approach recorded more than 400 projects and collaborations. We identified patterns that show established corporations rely more on incremental innovation that supports their current business models, while start-ups engage their flexibility to explore new market segments with notable transformations of established business models. Thereby, start-ups offer higher promises of disruptive innovation. Additionally, start-ups offer more diversified value propositions addressing broader areas of the health care sector. Conclusions Digital transformation is an opportunity to accelerate health care performance by lowering cost and improving quality of care. At an economic scale, business models can be strengthened and disruptive innovation models enabled. Corporations should look for collaborations with start-up companies to keep investment costs at bay and off the balance sheet. At the same time, the regulatory knowledge of established corporations might help start-ups to kick off digital disruption in the health care sector. PMID:29588274
Digital disruption 'syndromes'.
Sullivan, Clair; Staib, Andrew
2017-05-18
The digital transformation of hospitals in Australia is occurring rapidly in order to facilitate innovation and improve efficiency. Rapid transformation can cause temporary disruption of hospital workflows and staff as processes are adapted to the new digital workflows. The aim of this paper is to outline various types of digital disruption and some strategies for effective management. A large tertiary university hospital recently underwent a rapid, successful roll-out of an integrated electronic medical record (EMR). We observed this transformation and propose several digital disruption "syndromes" to assist with understanding and management during digital transformation: digital deceleration, digital transparency, digital hypervigilance, data discordance, digital churn and post-digital 'depression'. These 'syndromes' are defined and discussed in detail. Successful management of this temporary digital disruption is important to ensure a successful transition to a digital platform. What is known about this topic? Digital disruption is defined as the changes facilitated by digital technologies that occur at a pace and magnitude that disrupt established ways of value creation, social interactions, doing business and, more generally, our thinking. Increasing numbers of Australian hospitals are implementing digital solutions to replace traditional paper-based systems for patient care in order to create opportunities for improved care and efficiencies. Such large-scale change has the potential to create transient disruption to workflows and staff. Managing this temporary disruption effectively is an important factor in the successful implementation of an EMR. What does this paper add? A large tertiary university hospital recently underwent a successful rapid roll-out of an integrated electronic medical record (EMR) to become Australia's largest digital hospital over a 3-week period. We observed and assisted with the management of several cultural, behavioural and operational forms of digital disruption, which led us to propose some digital disruption 'syndromes'. The definition and management of these 'syndromes' are discussed in detail. What are the implications for practitioners? Minimising the temporary effects of digital disruption in hospitals requires an understanding that these digital 'syndromes' are to be expected and actively managed during large-scale transformation.
NASA Astrophysics Data System (ADS)
Lv, Zeqian; Xu, Xiaohai; Yan, Tianhao; Cai, Yulong; Su, Yong; Zhang, Qingchuan
2018-01-01
In the measurement of plate specimens, traditional two-dimensional (2D) digital image correlation (DIC) is challenged by two aspects: (1) the slant optical axis (misalignment of the camera optical axis and the object surface) and (2) out-of-plane motions (including translations and rotations) of the specimens. These lead to measurement errors in the results obtained by 2D DIC, especially when the out-of-plane motions are large. To solve this problem, a novel compensation method is proposed to correct the unsatisfactory results. The proposed compensation method consists of three main parts: 1) a pre-calibration step is used to determine the intrinsic parameters and lens distortions; 2) a compensation panel (a rigid panel with several markers located at known positions) is mounted on the specimen to track the specimen's motion so that the relative coordinate transformation between the compensation panel and the 2D DIC setup can be calculated using a coordinate transform algorithm; 3) the three-dimensional world coordinates of measuring points on the specimen are reconstructed via the coordinate transform algorithm and used to calculate deformations. Simulations have been carried out to validate the proposed compensation method. The results show that when the extensometer length is 400 pixels, the strain accuracy reaches 10 με whether out-of-plane translations (less than 1/200 of the object distance) or out-of-plane rotations (rotation angle less than 5°) occur. The proposed compensation method yields good results even when the out-of-plane translation reaches several percent of the object distance or the out-of-plane rotation angle reaches tens of degrees. The proposed compensation method has also been applied in tensile experiments to obtain high-accuracy results.
Digital Image Analysis System for Monitoring Crack Growth at Elevated Temperature
1988-05-01
The objective of the research work reported here was to develop a new concept, based on digital image analysis, for monitoring the crack-tip position ... a 512 x 512 pixel frame. c) Digital image analysis software developed to locate and digitize the position of the crack-tip on the observed image.
Lee, Christoph I; Lehman, Constance D
2016-11-01
Emerging imaging technologies, including digital breast tomosynthesis, have the potential to transform breast cancer screening. However, the rapid adoption of these new technologies outpaces the evidence of their clinical and cost-effectiveness. The authors describe the forces driving the rapid diffusion of tomosynthesis into clinical practice, comparing it with the rapid diffusion of digital mammography shortly after its introduction. They outline the potential positive and negative effects that adoption can have on imaging workflow and describe the practice management challenges when incorporating tomosynthesis. The authors also provide recommendations for collecting evidence supporting the development of policies and best practices. Copyright © 2013 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Entomological Collections in the Age of Big Data.
Short, Andrew Edward Z; Dikow, Torsten; Moreau, Corrie S
2018-01-07
With a million described species and more than half a billion preserved specimens, the large scale of insect collections is unequaled by those of any other group. Advances in genomics, collection digitization, and imaging have begun to more fully harness the power that such large data stores can provide. These new approaches and technologies have transformed how entomological collections are managed and utilized. While genomic research has fundamentally changed the way many specimens are collected and curated, advances in technology have shown promise for extracting sequence data from the vast holdings already in museums. Efforts to mainstream specimen digitization have taken root and have accelerated traditional taxonomic studies as well as distribution modeling and global change research. Emerging imaging technologies such as microcomputed tomography and confocal laser scanning microscopy are changing how morphology can be investigated. This review provides an overview of how the realization of big data has transformed our field and what may lie in store.
NASA Astrophysics Data System (ADS)
Dong, Jian; Kudo, Hiroyuki
2017-03-01
Compressed sensing (CS) is attracting growing interest in sparse-view computed tomography (CT) image reconstruction. The most standard CS approach is total variation (TV) minimization. However, images reconstructed by TV usually suffer from distortions, especially in the reconstruction of practical CT images, in the form of patchy artifacts, improperly serrated edges, and loss of image textures. Most existing CS approaches, including TV, achieve image quality improvement by applying linear transforms to the object image, but linear transforms usually fail to take discontinuities such as edges and image textures into account, which is considered to be the key reason for the image distortions. In fact, nonlinear-filter-based image processing has a long history, and it is well established that nonlinear filters yield better results than linear filters in image processing tasks such as denoising. The median root prior was first utilized by Alenius as a nonlinear transform in CT image reconstruction, with significant gains obtained. Subsequently, Zhang developed the application of nonlocal-means-based CS. It is gradually becoming clear that nonlinear-transform-based CS is superior to linear-transform-based CS in improving image quality; however, to our knowledge this has not been clearly concluded in any previous paper. In this work, we investigated the image quality differences between conventional TV minimization and nonlinear sparsifying transform based CS, as well as the image quality differences among different nonlinear sparsifying transform based CS methods, in sparse-view CT image reconstruction. Additionally, we accelerated the implementation of the nonlinear sparsifying transform based CS algorithm.
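For reference, a minimal smoothed total-variation denoising sketch (plain gradient descent, periodic boundaries) illustrates the kind of linear-gradient-based penalty that TV-minimization CS builds on; it is a generic toy, not the reconstruction algorithm studied in the paper.

```python
import numpy as np

def tv_denoise(y, lam=0.1, step=0.05, n_iter=300, eps=1e-2):
    """Minimise 0.5*||x - y||^2 + lam * sum(sqrt(|grad x|^2 + eps)) by gradient descent."""
    x = y.copy()
    for _ in range(n_iter):
        gx = np.roll(x, -1, axis=1) - x                    # forward differences (periodic)
        gy = np.roll(x, -1, axis=0) - x
        norm = np.sqrt(gx ** 2 + gy ** 2 + eps)
        px, py = gx / norm, gy / norm
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        x -= step * ((x - y) - lam * div)                  # gradient of the smoothed objective
    return x
```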
Brain's tumor image processing using shearlet transform
NASA Astrophysics Data System (ADS)
Cadena, Luis; Espinosa, Nikolai; Cadena, Franklin; Korneeva, Anna; Kruglyakov, Alexey; Legalov, Alexander; Romanenko, Alexey; Zotin, Alexander
2017-09-01
Brain tumor detection is a well-known research area for medical and computer scientists. In recent decades much research has been done on tumor detection, segmentation, and classification. Medical imaging plays a central role in the diagnosis of brain tumors and nowadays relies on non-invasive, high-resolution techniques, especially magnetic resonance imaging and computed tomography scans. Edge detection is a fundamental tool in image processing, particularly in the areas of feature detection and feature extraction, which aim at identifying points in a digital image at which the image has discontinuities. Shearlets are among the most successful frameworks for the efficient representation of multidimensional data, capturing edges and other anisotropic features which frequently dominate multidimensional phenomena. The paper proposes an improved brain tumor detection method that automatically detects tumor location in MR images; its features are extracted by the new shearlet transform.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masotti, Matteo; Lanconelli, Nico; Campanini, Renato
In this work, gray-scale invariant ranklet texture features are proposed for false positive reduction (FPR) in computer-aided detection (CAD) of breast masses. Two main considerations are at the basis of this proposal. First, false positive (FP) marks surviving our previous CAD system seem to be characterized by specific texture properties that can be used to discriminate them from masses. Second, our previous CAD system achieves invariance to linear/nonlinear monotonic gray-scale transformations by encoding regions of interest into ranklet images through the ranklet transform, an image transformation similar to the wavelet transform, yet dealing with pixels' ranks rather than with their gray-scale values. Therefore, the new FPR approach proposed herein defines a set of texture features which are calculated directly from the ranklet images corresponding to the regions of interest surviving our previous CAD system, hence, ranklet texture features; then, a support vector machine (SVM) classifier is used for discrimination. As a result of this approach, texture-based information is used to discriminate FP marks surviving our previous CAD system; at the same time, invariance to linear/nonlinear monotonic gray-scale transformations of the new CAD system is guaranteed, as ranklet texture features are calculated from ranklet images that have this property themselves by construction. To emphasize the gray-scale invariance of both the previous and new CAD systems, training and testing are carried out without any in-between parameters' adjustment on mammograms having different gray-scale dynamics; in particular, training is carried out on analog digitized mammograms taken from a publicly available digital database, whereas testing is performed on full-field digital mammograms taken from an in-house database. Free-response receiver operating characteristic (FROC) curve analysis of the two CAD systems demonstrates that the new approach achieves a higher reduction of FP marks when compared to the previous one. Specifically, at 60%, 65%, and 70% per-mammogram sensitivity, the new CAD system achieves 0.50, 0.68, and 0.92 FP marks per mammogram, whereas at 70%, 75%, and 80% per-case sensitivity it achieves 0.37, 0.48, and 0.71 FP marks per mammogram, respectively. Conversely, at the same sensitivities, the previous CAD system reached 0.71, 0.87, and 1.15 FP marks per mammogram, and 0.57, 0.73, and 0.92 FPs per mammogram. Also, statistical significance of the difference between the two per-mammogram and per-case FROC curves is demonstrated by the p-value < 0.001 returned by jackknife FROC analysis performed on the two CAD systems.
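The gray-scale invariance idea can be illustrated with a simple local rank transform: replacing each pixel by its rank within a window makes the result unchanged under any monotonic gray-scale mapping. This is a loose, simplified analogue of the ranklet transform, not the CAD system's actual transform.

```python
import numpy as np
from scipy.stats import rankdata

def local_rank_transform(img, size=9):
    """Replace each pixel by the rank of its value within a size x size neighbourhood.
    The output is invariant to any monotonic gray-scale transformation of img."""
    pad = size // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + size, j:j + size].ravel()
            out[i, j] = rankdata(window)[window.size // 2]   # rank of the centre pixel
    return out
```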
Emerging Themes in Image Informatics and Molecular Analysis for Digital Pathology.
Bhargava, Rohit; Madabhushi, Anant
2016-07-11
Pathology is essential for research in disease and development, as well as for clinical decision making. For more than 100 years, pathology practice has involved analyzing images of stained, thin tissue sections by a trained human using an optical microscope. Technological advances are now driving major changes in this paradigm toward digital pathology (DP). The digital transformation of pathology goes beyond recording, archiving, and retrieving images, providing new computational tools to inform better decision making for precision medicine. First, we discuss some emerging innovations in both computational image analytics and imaging instrumentation in DP. Second, we discuss molecular contrast in pathology. Molecular DP has traditionally been an extension of pathology with molecularly specific dyes. Label-free, spectroscopic images are rapidly emerging as another important information source, and we describe the benefits and potential of this evolution. Third, we describe multimodal DP, which is enabled by computational algorithms and combines the best characteristics of structural and molecular pathology. Finally, we provide examples of application areas in telepathology, education, and precision medicine. We conclude by discussing challenges and emerging opportunities in this area.
Emerging Themes in Image Informatics and Molecular Analysis for Digital Pathology
Bhargava, Rohit; Madabhushi, Anant
2017-01-01
Pathology is essential for research in disease and development, as well as for clinical decision making. For more than 100 years, pathology practice has involved analyzing images of stained, thin tissue sections by a trained human using an optical microscope. Technological advances are now driving major changes in this paradigm toward digital pathology (DP). The digital transformation of pathology goes beyond recording, archiving, and retrieving images, providing new computational tools to inform better decision making for precision medicine. First, we discuss some emerging innovations in both computational image analytics and imaging instrumentation in DP. Second, we discuss molecular contrast in pathology. Molecular DP has traditionally been an extension of pathology with molecularly specific dyes. Label-free, spectroscopic images are rapidly emerging as another important information source, and we describe the benefits and potential of this evolution. Third, we describe multimodal DP, which is enabled by computational algorithms and combines the best characteristics of structural and molecular pathology. Finally, we provide examples of application areas in telepathology, education, and precision medicine. We conclude by discussing challenges and emerging opportunities in this area. PMID:27420575
Digital Platforms as Factor Transforming Management Models in Businesses and Industries
NASA Astrophysics Data System (ADS)
Dimitrakiev, D.; Molodchik, A. V.
2018-05-01
Increasingly, digital platforms are built into the value chain, acting as an intermediary between the manufacturer and the consumer. The paper presents tendencies and features of business model transformation in connection with the management of new digital technologies. The limitations of traditional business models and the capabilities of business models based on digital platforms and self-organization are revealed. In the study, the viability of the new business model for the dental industry was confirmed, and a new concept of a branch self-organizing control system based on an information platform, blockchain, cryptocurrency, and rewards for the target consumer is offered, including mechanisms that make the model attractive for both the consumer and the service provider.
A Synthetic Quadrature Phase Detector/Demodulator for Fourier Transform Spectrometers
NASA Technical Reports Server (NTRS)
Campbell, Joel
2008-01-01
A method is developed to demodulate (velocity correct) Fourier transform spectrometer (FTS) data that is taken with an analog-to-digital converter that digitizes equally spaced in time. This method makes it possible to use simple, low-cost, high-resolution audio digitizers to record high-quality data without the need for an event timer or quadrature laser hardware, and makes it possible to use a metrology laser of any wavelength. The reduced parts count and simplicity of implementation make it an attractive alternative for space-based applications when compared to previous methods such as the Brault algorithm.
2017-08-01
In this report, approximate transforms that closely follow the DFT have been studied and found. Application areas include communications, data networks, sensor networks, cognitive radio, radar and beamforming, imaging, filtering, correlation, and radio astronomy.
Method for indexing and retrieving manufacturing-specific digital imagery based on image content
Ferrell, Regina K.; Karnowski, Thomas P.; Tobin, Jr., Kenneth W.
2004-06-15
A method for indexing and retrieving manufacturing-specific digital images based on image content comprises three steps. First, at least one feature vector is extracted from a manufacturing-specific digital image stored in an image database. Each extracted feature vector corresponds to a particular characteristic of the manufacturing-specific digital image, for instance a digital image modality and overall characteristic, a substrate/background characteristic, or an anomaly/defect characteristic. Notably, the extracting step includes generating a defect mask using a detection process. Second, using an unsupervised clustering method, each extracted feature vector is indexed in a hierarchical search tree. Third, a manufacturing-specific digital image associated with a feature vector stored in the hierarchical search tree is retrieved, wherein the manufacturing-specific digital image has image content comparably related to the image content of the query image. More particularly, the retrieval can include two data reductions, the first performed based upon a query vector extracted from a query image. Subsequently, a user can select relevant images resulting from the first data reduction. From this selection, a prototype vector is calculated, from which a second-level data reduction is performed. The second-level data reduction results in a subset of feature vectors comparable to the prototype vector, and further comparable to the query vector. An additional fourth step can include managing the hierarchical search tree by substituting a vector average for several redundant feature vectors encapsulated by nodes in the hierarchical search tree.
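A toy sketch of indexing feature vectors in a hierarchical search tree built by recursive (unsupervised) k-means, with greedy retrieval, is given below; the branching factor, leaf size, and all names are illustrative and do not reflect the patented method's details.

```python
import numpy as np
from sklearn.cluster import KMeans

class Node:
    def __init__(self, indices, centroid):
        self.indices, self.centroid, self.children = indices, centroid, []

def build_tree(features, indices=None, branching=4, leaf_size=20):
    """Recursively cluster feature vectors into a search tree."""
    if indices is None:
        indices = np.arange(len(features))
    node = Node(indices, features[indices].mean(axis=0))
    if len(indices) > leaf_size:
        k = min(branching, len(indices))
        labels = KMeans(n_clusters=k, n_init=4, random_state=0).fit_predict(features[indices])
        node.children = [build_tree(features, indices[labels == c], branching, leaf_size)
                         for c in range(k)]
    return node

def retrieve(tree, features, query, top=10):
    """Greedy descent to the closest leaf, then rank its members by distance to the query."""
    node = tree
    while node.children:
        node = min(node.children, key=lambda c: np.linalg.norm(query - c.centroid))
    ranked = sorted(node.indices, key=lambda i: np.linalg.norm(query - features[i]))
    return ranked[:top]
```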
NASA Astrophysics Data System (ADS)
Singh, Mandeep; Khare, Kedar
2018-05-01
We describe a numerical processing technique that allows single-shot region-of-interest (ROI) reconstruction in image plane digital holographic microscopy with full pixel resolution. The ROI reconstruction is modelled as an optimization problem where the cost function to be minimized consists of an L2-norm squared data fitting term and a modified Huber penalty term that are minimized alternately in an adaptive fashion. The technique can provide full pixel resolution complex-valued images of the selected ROI which is not possible to achieve with the commonly used Fourier transform method. The technique can facilitate holographic reconstruction of individual cells of interest from a large field-of-view digital holographic microscopy data. The complementary phase information in addition to the usual absorption information already available in the form of bright field microscopy can make the methodology attractive to the biomedical user community.
Autonomous navigation method for substation inspection robot based on travelling deviation
NASA Astrophysics Data System (ADS)
Yang, Guoqing; Xu, Wei; Li, Jian; Fu, Chongguang; Zhou, Hao; Zhang, Chuanyou; Shao, Guangting
2017-06-01
A new method of edge detection in the substation environment is proposed, which enables autonomous navigation of the substation inspection robot. First, the road image and information are obtained using an image acquisition device. Second, the noise in the region of interest selected from the road image is removed with a digital image processing algorithm, the road edges are extracted by the Canny operator, and the road boundaries are extracted by the Hough transform. Finally, the distances between the robot and the left and right boundaries are calculated, and the travelling deviation is obtained. The robot's walking route is controlled according to the travel deviation and a preset threshold. Experimental results show that the proposed method can detect the road area in real time, and the algorithm has high accuracy and stable performance.
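A compact OpenCV sketch of the Canny-plus-Hough boundary extraction step is shown below; the file name, region of interest, and thresholds are placeholders rather than values from the paper.

```python
import cv2
import numpy as np

img = cv2.imread('road.jpg', cv2.IMREAD_GRAYSCALE)        # hypothetical input frame
roi = img[img.shape[0] // 2:, :]                           # lower half as region of interest
blurred = cv2.GaussianBlur(roi, (5, 5), 0)                 # suppress noise
edges = cv2.Canny(blurred, 50, 150)                        # road edges
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=10)   # candidate road boundaries
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(roi, (x1, y1), (x2, y2), 255, 2)          # overlay detected boundary segments
```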
NASA Technical Reports Server (NTRS)
Johnson, Dennis A. (Inventor)
1996-01-01
A laser doppler velocimeter uses frequency shifting of a laser beam to provide signal information for each velocity component. A composite electrical signal generated by a light detector is digitized and a processor produces a discrete Fourier transform based on the digitized electrical signal. The transform includes two peak frequencies corresponding to the two velocity components.
Medical imaging: examples of clinical applications
NASA Astrophysics Data System (ADS)
Meinzer, H. P.; Thorn, M.; Vetter, M.; Hassenpflug, P.; Hastenteufel, M.; Wolf, I.
Clinical routine is currently producing a multitude of diagnostic digital images but only a few are used in therapy planning and treatment. Medical imaging is involved in both diagnosis and therapy. Using a computer, existing 2D images can be transformed into interactive 3D volumes and results from different modalities can be merged. Furthermore, it is possible to calculate functional areas that were not visible in the primary images. This paper presents examples of clinical applications that are integrated into clinical routine and are based on medical imaging fundamentals. In liver surgery, the importance of virtual planning is increasing because surgery is still the only possible curative procedure. Visualisation and analysis of heart defects are also gaining in significance due to improved surgery techniques. Finally, an outlook is provided on future developments in medical imaging using navigation to support the surgeon's work. The paper intends to give an impression of the wide range of medical imaging that goes beyond the mere calculation of medical images.
Microlens array processor with programmable weight mask and direct optical input
NASA Astrophysics Data System (ADS)
Schmid, Volker R.; Lueder, Ernst H.; Bader, Gerhard; Maier, Gert; Siegordner, Jochen
1999-03-01
We present an optical feature extraction system with a microlens array processor. The system is suitable for online implementation of a variety of transforms such as the Walsh transform and the DCT. Operating with incoherent light, our processor accepts direct optical input. Employing a sandwich-like architecture, we obtain a very compact design of the optical system. The key elements of the microlens array processor are a square array of 15 x 15 spherical microlenses on an acrylic substrate and a spatial light modulator serving as a transmissive mask. The light distribution behind the mask is imaged onto the pixels of a customized a-Si image sensor with adjustable gain. We obtain one output sample for each microlens image and its corresponding weight mask area as the summation of the transmitted intensity within one sensor pixel. The resulting architecture is as compact and robust as a conventional camera lens while incorporating a high degree of parallelism. We successfully demonstrate a Walsh transform into the spatial frequency domain as well as the implementation of a discrete cosine transform with digitized gray values. We provide results showing the transformation performance for both synthetic image patterns and images of natural texture samples. The extracted frequency features are suitable for neural classification of the input image. Other transforms and correlations can be implemented in real time, allowing adaptive optical signal processing.
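The two transforms mentioned above (Walsh/Hadamard and DCT) have simple digital counterparts; the sketch below computes both for a small patch with SciPy, purely as a numerical reference for what the optical processor evaluates in parallel.

```python
import numpy as np
from scipy.linalg import hadamard
from scipy.fft import dctn

patch = np.random.default_rng(0).random((16, 16))    # stand-in image patch

H = hadamard(16) / np.sqrt(16)                        # orthonormal Hadamard matrix
walsh_coeffs = H @ patch @ H                          # 2D Walsh-Hadamard transform
                                                      # (Hadamard ordering, not sequency ordering)
dct_coeffs = dctn(patch, norm='ortho')                # 2D discrete cosine transform
```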
A GPU accelerated PDF transparency engine
NASA Astrophysics Data System (ADS)
Recker, John; Lin, I.-Jong; Tastl, Ingeborg
2011-01-01
As commercial printing presses become faster, cheaper and more efficient, so too must the Raster Image Processors (RIPs) that prepare data for them to print. Digital press RIPs, however, are challenged on the one hand to meet the ever-increasing print performance of the latest digital presses, and on the other hand to process increasingly complex documents with transparent layers and embedded ICC profiles. This paper explores the challenges encountered when implementing a GPU-accelerated driver for the open source Ghostscript Adobe PostScript and PDF language interpreter, targeted at accelerating PDF transparency for high-speed commercial presses. It further describes our solution, including an image memory manager for tiling input and output images and documents, a PDF-compatible multiple-image-layer blending engine, and a GPU-accelerated ICC v4 compatible color transformation engine. The result, we believe, is the foundation for a scalable, efficient, distributed RIP system that can meet current and future RIP requirements for a wide range of commercial digital presses.
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Shiri, Ron S.; Vootukuru, Meg; Coletti, Alessandro
2015-01-01
Norden E. Huang et al. proposed and published the Hilbert-Huang Transform (HHT) concept in 1996 and 1998, respectively. The HHT is a novel method for adaptive spectral analysis of non-linear and non-stationary signals. The HHT comprises two components: the Huang Empirical Mode Decomposition (EMD), resulting in an adaptive data-derived basis of Intrinsic Mode Functions (IMFs), and the Hilbert Spectral Analysis for 1-dimension (1D) (HSA1), based on the Hilbert Transform and applied to the EMD IMF outcome. Although the paper describes the HHT concept in great depth, it does not contain all the methodology needed to implement the HHT computer code. In 2004, Semion Kizhner and Karin Blank implemented the reference digital HHT real-time data processing system for 1D (HHT-DPS Version 1.4). The 2-dimensional (2D) case (HHT2) proved to be difficult due to the computational complexity of EMD for 2D (EMD2) and the absence of a suitable Hilbert Transform for 2D spectral analysis (HSA2). The real-time EMD2 and HSA2 comprise the real-time HHT2. Kizhner completed the real-time EMD2 and HSA2 reference digital implementations in 2013 and 2014, respectively. Still, the HHT2 outcome synthesis remains an active research area. This paper presents the initial concepts and preliminary results of HHT2-based synthesis and its application to processing of signals contaminated by Radio-Frequency Interference (RFI), as well as optical systems' fringe detection and mitigation at the design stage. The Soil Moisture Active Passive (SMAP) mission carries a radiometer instrument that measures Earth soil moisture at the L1 frequency (1.4 GHz polarimetric - H, V, 3rd and 4th Stokes parameters). There is abundant RFI at L1, and because soil moisture is a strategic parameter, it is important to be able to recover the RFI-contaminated measurement samples (15% of telemetry). The state of the art only allows RFI detection and removes the RFI-contaminated measurements. HHT-based analysis and synthesis facilitates recovery of measurements contaminated by all kinds of RFI, including jamming [7-8]. Fringes are inherent in optical systems, and multi-layer, complex-contour, expensive coatings are employed to remove the unwanted fringes. HHT2-based analysis allows test image decomposition to analyze and detect fringes, and HHT2-based synthesis of the useful image.
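As a pointer to the 1D HSA step described above, the sketch below extracts instantaneous amplitude and frequency of a single component with the Hilbert transform (SciPy); the EMD stage that would first split a signal into IMFs is omitted here.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
imf = np.cos(2 * np.pi * (50.0 * t + 20.0 * t ** 2))   # stand-in for one IMF (a chirp)

analytic = hilbert(imf)                                 # analytic signal via the Hilbert transform
amplitude = np.abs(analytic)                            # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2.0 * np.pi)         # instantaneous frequency in Hz
```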
The paper crisis: from hospitals to medical practices.
Park, Gregory; Neaveill, Rodney S
2009-01-01
Hospitals, not unlike physician practices, are faced with an increasing burden of managing piles of hard copy documents including insurance forms, requests for information, and advance directives. Healthcare organizations are moving to transform paper-based forms and documents into digitized files in order to save time and money and to have those documents available at a moment's notice. The cost of these document management/imaging systems can be easily justified with the significant savings of resources realized from the implementation of these systems. This article illustrates the enormity of the "paper problem" in healthcare and outlines just a few of the required processes that could be improved with the use of automated document management/imaging systems.
Warped document image correction method based on heterogeneous registration strategies
NASA Astrophysics Data System (ADS)
Tong, Lijing; Zhan, Guoliang; Peng, Quanyao; Li, Yang; Li, Yifan
2013-03-01
With the popularity of digital cameras and the growing demand for digitized document images, using digital cameras to digitize documents has become an irresistible trend. However, warping of the document surface seriously degrades the performance of Optical Character Recognition (OCR) systems. To improve the visual quality of warped document images and the OCR rate, this paper proposes a warped document image correction method based on heterogeneous registration strategies. The method mosaics two warped images of the same document taken from different viewpoints. Firstly, two feature points are selected from one image. Then the two feature points are registered in the other image based on heterogeneous registration strategies. Finally, the two images are mosaicked, and the best mosaicked image is selected according to the OCR results. As a result, for the best mosaicked image, the distortions are mostly removed and the OCR results are improved markedly. Experimental results show that the proposed method resolves the issue of warped document image correction more effectively.
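A minimal sketch of the registration-and-mosaicking step described above, assuming OpenCV is available. It uses generic ORB feature matching and a RANSAC homography rather than the paper's two manually selected feature points and heterogeneous registration strategies, so the function and parameters below are illustrative only.

```python
import cv2
import numpy as np

def mosaic_pair(img_a, img_b):
    # Detect and match features between the two views of the same document.
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_a, des_b)
    matches = sorted(matches, key=lambda m: m.distance)[:200]

    # Estimate a homography mapping image B into the frame of image A (RANSAC).
    src = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp image B and paste image A on top to form a simple mosaic.
    h, w = img_a.shape[:2]
    mosaic = cv2.warpPerspective(img_b, H, (w * 2, h))
    mosaic[:h, :w] = img_a
    return mosaic
```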
NASA Astrophysics Data System (ADS)
Robbins, William L.; Conklin, James J.
1995-10-01
Medical images (angiography, CT, MRI, nuclear medicine, ultrasound, x ray) play an increasingly important role in the clinical development and regulatory review process for pharmaceuticals and medical devices. Since medical images are increasingly acquired and archived digitally, or are readily digitized from film, they can be visualized, processed and analyzed in a variety of ways using digital image processing and display technology. Moreover, with image-based data management and data visualization tools, medical images can be electronically organized and submitted to the U.S. Food and Drug Administration (FDA) for review. The collection, processing, analysis, archival, and submission of medical images in a digital format versus an analog (film-based) format presents both challenges and opportunities for the clinical and regulatory information management specialist. The medical imaging 'core laboratory' is an important resource for clinical trials and regulatory submissions involving medical imaging data. Use of digital imaging technology within a core laboratory can increase efficiency and decrease overall costs in the image data management and regulatory review process.
Field-Portable Pixel Super-Resolution Colour Microscope
Greenbaum, Alon; Akbari, Najva; Feizi, Alborz; Luo, Wei; Ozcan, Aydogan
2013-01-01
Based on partially-coherent digital in-line holography, we report a field-portable microscope that can render lensfree colour images over a wide field-of-view of e.g., >20 mm2. This computational holographic microscope weighs less than 145 grams with dimensions smaller than 17×6×5 cm, making it especially suitable for field settings and point-of-care use. In this lensfree imaging design, we merged a colorization algorithm with a source shifting based multi-height pixel super-resolution technique to mitigate ‘rainbow’ like colour artefacts that are typical in holographic imaging. This image processing scheme is based on transforming the colour components of an RGB image into YUV colour space, which separates colour information from brightness component of an image. The resolution of our super-resolution colour microscope was characterized using a USAF test chart to confirm sub-micron spatial resolution, even for reconstructions that employ multi-height phase recovery to handle dense and connected objects. To further demonstrate the performance of this colour microscope Papanicolaou (Pap) smears were also successfully imaged. This field-portable and wide-field computational colour microscope could be useful for tele-medicine applications in resource poor settings. PMID:24086742
Leung, Chung-Chu
2006-03-01
Digital subtraction radiography requires close matching of the contrast in each pair of X-ray images to be subtracted. Previous studies have shown that nonparametric contrast/brightness correction methods using the cumulative density function (CDF) and its improvements, which are based on gray-level transformation associated with the pixel histogram, perform well under uniform contrast/brightness difference conditions. However, for radiographs with nonuniform contrast/brightness, the CDF produces unsatisfactory results. In this paper, we propose a new approach to contrast correction based on the generalized fuzzy operator (GFO) with the least squares method. The results show that 50% of the contrast/brightness errors can be corrected using this approach when the contrast/brightness difference between a radiographic pair is 10 U. A comparison of our approach with that of the CDF is presented, and the modified GFO method produces better contrast normalization results than the CDF approach.
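For context, a minimal NumPy sketch of the CDF (histogram-matching) baseline that the paper compares against, not of the proposed generalized fuzzy operator itself; it assumes both radiographs are grayscale arrays.

```python
import numpy as np

def cdf_match(source, reference):
    # Map the gray levels of `source` so its cumulative distribution matches
    # that of `reference` (the classical CDF-based contrast correction).
    s_vals, s_counts = np.unique(source.ravel(), return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # For each source gray level, find the reference level with the same CDF.
    matched_vals = np.interp(s_cdf, r_cdf, r_vals)
    corrected = np.interp(source.ravel(), s_vals, matched_vals)
    return corrected.reshape(source.shape)
```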
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anaya, O.; Moreno, G.E.L.; Madrigal, M.M.
1999-11-01
In recent years, several definitions of power have been proposed for more accurate measurement of electrical quantities in the presence of harmonic pollution on power lines. Nevertheless, only a few instruments have been constructed considering these definitions. This paper describes a new microcontroller-based digital instrument, which includes definitions based on the Hartley Transform. The algorithms are fully processed using the Fast Hartley Transform (FHT) on a 16-bit microcontroller platform. The constructed prototype was compared with a commercial harmonics analyzer instrument.
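A small sketch of the discrete Hartley transform that underlies the FHT used in the instrument, computed here via NumPy's FFT for a real sampled waveform; the sampling rate and waveform below are assumptions for illustration only, not the instrument's implementation.

```python
import numpy as np

def dht(x):
    # Discrete Hartley transform: H[k] = sum_n x[n] * cas(2*pi*n*k/N),
    # with cas(t) = cos(t) + sin(t). For a real signal this equals
    # Re(FFT(x)) - Im(FFT(x)).
    X = np.fft.fft(np.asarray(x, dtype=float))
    return X.real - X.imag

# Example: a 50 Hz waveform with a small 3rd-harmonic component.
fs, n = 1600, 320
t = np.arange(n) / fs
signal = np.sin(2 * np.pi * 50 * t) + 0.2 * np.sin(2 * np.pi * 150 * t)
H = dht(signal)
```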
NASA Astrophysics Data System (ADS)
Li, C.; Li, F.; Liu, Y.; Li, X.; Liu, P.; Xiao, B.
2012-07-01
Building 3D reconstruction based on ground remote sensing data (image, video and lidar) inevitably faces the problem that buildings are often occluded by vegetation, so automatically removing and repairing vegetation occlusion is a very important preprocessing step for image understanding, computer vision and digital photogrammetry. In traditional multispectral remote sensing from aeronautic and space platforms, indices built from the Red and Near-infrared (NIR) bands, such as the NDVI (Normalized Difference Vegetation Index), are useful to distinguish vegetation and clouds, amongst other targets. However, especially on ground platforms, CIR (Color Infrared) is little used in computer vision and digital photogrammetry, which usually take only true-color RGB into account. Therefore, whether CIR is necessary for vegetation segmentation is significant, since most close-range cameras do not provide an NIR band. Moreover, the CIE L*a*b* color space, obtained by transformation from RGB, seems to be of little interest to photogrammetrists despite its power in image classification and analysis. So, CIE (L, a, b) features and a support vector machine (SVM) are suggested for vegetation segmentation as a substitute for CIR. Finally, experimental results on visual effect and automation are given. The conclusion is that it is feasible to remove and segment vegetation occlusion without an NIR band. This work should pave the way for texture reconstruction and repair in future 3D reconstruction.
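A minimal sketch of the suggested CIE L*a*b* + SVM pixel classification, assuming scikit-image and scikit-learn; the training samples, labels, and kernel choice are placeholders, and the downstream occlusion repair step is not shown.

```python
import numpy as np
from skimage import color
from sklearn.svm import SVC

def vegetation_mask(rgb_image, train_pixels_lab, train_labels):
    # Transform RGB to CIE L*a*b* and classify every pixel with an RBF SVM.
    # train_pixels_lab: (n, 3) array of labelled Lab samples (1 = vegetation).
    lab = color.rgb2lab(rgb_image)
    clf = SVC(kernel="rbf", gamma="scale").fit(train_pixels_lab, train_labels)
    flat = lab.reshape(-1, 3)
    return clf.predict(flat).reshape(lab.shape[:2]).astype(bool)
```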
Halldin, Cara N; Petsonk, Edward L; Laney, A Scott
2014-03-01
Chest radiographs are recommended for prevention and detection of pneumoconiosis. In 2011, the International Labour Office (ILO) released a revision of the International Classification of Radiographs of Pneumoconioses that included a digitized standard images set. The present study compared results of classifications of digital chest images performed using the new ILO 2011 digitized standard images to classification approaches used in the past. Underground coal miners (N = 172) were examined using both digital and film-screen radiography (FSR) on the same day. Seven National Institute for Occupational Safety and Health-certified B Readers independently classified all 172 digital radiographs, once using the ILO 2011 digitized standard images (DRILO2011-D) and once using digitized standard images used in the previous research (DRRES). The same seven B Readers classified all the miners' chest films using the ILO film-based standards. Agreement between classifications of FSR and digital radiography was identical, using a standard image set (either DRILO2011-D or DRRES). The overall weighted κ value was 0.58. Some specific differences in the results were seen and noted. However, intrareader variability in this study was similar to the published values and did not appear to be affected by the use of the new ILO 2011 digitized standard images. These findings validate the use of the ILO digitized standard images for classification of small pneumoconiotic opacities. When digital chest radiographs are obtained and displayed appropriately, results of pneumoconiosis classifications using the 2011 ILO digitized standards are comparable to film-based ILO classifications and to classifications using earlier research standards. Published by Elsevier Inc.
Digital signal processing based on inverse scattering transform.
Turitsyna, Elena G; Turitsyn, Sergei K
2013-10-15
Through numerical modeling, we illustrate the possibility of a new approach to digital signal processing in coherent optical communications based on the application of the so-called inverse scattering transform. Considering without loss of generality a fiber link with normal dispersion and quadrature phase shift keying signal modulation, we demonstrate how an initial information pattern can be recovered (without direct backward propagation) through the calculation of nonlinear spectral data of the received optical signal.
Speckle reduction in digital holography with resampling ring masks
NASA Astrophysics Data System (ADS)
Zhang, Wenhui; Cao, Liangcai; Jin, Guofan
2018-01-01
One-shot digital holographic imaging has the advantages of high stability and low temporal cost. However, the reconstruction is affected by speckle noise. A resampling ring-mask method in the spectrum domain is proposed for speckle reduction. The useful spectrum of one hologram is divided into several sub-spectra by ring masks. In the reconstruction, the angular spectrum transform is applied to guarantee calculation accuracy, since it involves no approximation. N reconstructed amplitude images are calculated from the corresponding sub-spectra. Thanks to the random distribution of speckle, superimposing these N uncorrelated amplitude images leads to a final reconstructed image with lower speckle noise. Normalized relative standard deviation values of the reconstructed image are used to evaluate the speckle reduction. The effect of the method on the spatial resolution of the reconstructed image is also quantitatively evaluated. Experimental and simulation results prove the feasibility and effectiveness of the proposed method.
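A simplified NumPy sketch of the ring-mask idea: split the hologram spectrum into concentric rings, reconstruct one amplitude image per sub-spectrum, and average them. Free-space propagation by the angular spectrum method is omitted for brevity, so this only illustrates the masking-and-averaging step under that assumption.

```python
import numpy as np

def ring_mask_despeckle(hologram, n_rings=4):
    # Spectrum of the single hologram.
    F = np.fft.fftshift(np.fft.fft2(hologram.astype(float)))
    h, w = F.shape
    y, x = np.ogrid[:h, :w]
    radius = np.hypot(y - h / 2.0, x - w / 2.0)
    r_max = radius.max()

    # One amplitude reconstruction per ring-shaped sub-spectrum.
    amplitudes = []
    for i in range(n_rings):
        ring = (radius >= i * r_max / n_rings) & (radius < (i + 1) * r_max / n_rings)
        sub = np.fft.ifft2(np.fft.ifftshift(F * ring))
        amplitudes.append(np.abs(sub))

    # Superimpose the N uncorrelated amplitude images to reduce speckle.
    return np.mean(amplitudes, axis=0)
```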
Kruse, Fred A.
1984-01-01
Green areas on Landsat 4/5 - 4/6 - 6/7 (red - blue - green) color-ratio-composite (CRC) images represent limonite on the ground. Color variation on such images was analyzed to determine the causes of the color differences within and between the green areas. Digital transformation of the CRC data into the modified cylindrical Munsell color coordinates - hue, value, and saturation - was used to correlate image color characteristics with properties of surficial materials. The amount of limonite visible to the sensor is the primary cause of color differences in green areas on the CRCs. Vegetation density is a secondary cause of color variation of green areas on Landsat CRC images. Digital color analysis of Landsat CRC images can be used to map unknown areas. Color variations of green pixels allow discrimination among limonitic bedrock, nonlimonitic bedrock, nonlimonitic alluvium, and limonitic alluvium.
The digital transformation of oral health care. Teledentistry and electronic commerce.
Bauer, J C; Brown, W T
2001-02-01
Health care is being changed dramatically by the marriage of computers and telecommunications. Implications for hospitals and physicians already have received extensive media attention, but comparatively little has been said about the impact of information technology on dentistry. This article illustrates how the digital transformation will likely affect dentists and their patients. Based on recent experiences of hospitals and medical practices, dentists can expect to encounter revolutionary changes as a result of the digital transformation. The Internet, the World Wide Web and other developments of the information revolution will redefine patient care, referral relationships, practice management, quality, professional organizations and competition. To respond proactively to the digital transformation of oral health care, dentists must become familiar with its technologies and concepts. They must learn what new information technology can do for them and their patients and then develop creative applications that promote the profession and their approaches to care.
A Robust Image Watermarking in the Joint Time-Frequency Domain
NASA Astrophysics Data System (ADS)
Öztürk, Mahmut; Akan, Aydın; Çekiç, Yalçın
2010-12-01
With the rapid development of computers and internet applications, copyright protection of multimedia data has become an important problem. Watermarking techniques are proposed as a solution to copyright protection of digital media files. In this paper, a new, robust, and high-capacity watermarking method that is based on spatiofrequency (SF) representation is presented. We use the discrete evolutionary transform (DET) calculated by the Gabor expansion to represent an image in the joint SF domain. The watermark is embedded onto selected coefficients in the joint SF domain. Hence, by combining the advantages of spatial and spectral domain watermarking methods, a robust, invisible, secure, and high-capacity watermarking method is presented. A correlation-based detector is also proposed to detect and extract any possible watermarks on an image. The proposed watermarking method was tested on some commonly used test images under different signal processing attacks like additive noise, Wiener and Median filtering, JPEG compression, rotation, and cropping. Simulation results show that our method is robust against all of the attacks.
Zhang, Dongyan; Zhou, Xingen; Zhang, Jian; Lan, Yubin; Xu, Chao; Liang, Dong
2018-01-01
Detection and monitoring are the first essential steps for effective management of sheath blight (ShB), a major rice disease worldwide. Unmanned aerial systems have high potential for improving this detection process, since they reduce the time needed to scout for the disease at field scale and are affordable and user-friendly in operation. In this study, a commercial quadrotor unmanned aerial vehicle (UAV), equipped with digital and multispectral cameras, was used to capture imagery of research plots with 67 rice cultivars and elite lines. The collected imagery was then processed and analyzed to characterize the development of ShB and quantify different levels of the disease in the field. Through color feature extraction and color space transformation of the images, it was found that the color transformation could qualitatively detect the infected areas of ShB in the field plots, but it was less effective at detecting different levels of the disease. Five vegetation indices were then calculated from the multispectral images, and ground truths of disease severity and GreenSeeker-measured NDVI (Normalized Difference Vegetation Index) were collected. The relationship analyses indicate a strong correlation between ground-measured NDVIs and image-extracted NDVIs, with an R2 of 0.907 and a root mean square error (RMSE) of 0.0854, and a good correlation between image-extracted NDVIs and disease severity, with an R2 of 0.627 and an RMSE of 0.0852. Image-based NDVIs extracted from multispectral images could quantify different levels of ShB in the field plots with an accuracy of 63%. These results demonstrate that a consumer-grade UAV integrated with digital and multispectral cameras can be an effective tool to detect ShB disease at a field scale.
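A short sketch of the image-based NDVI extraction used to quantify disease levels, assuming co-registered NIR and red bands as NumPy arrays; plot boundary extraction and the regression against ground truth are omitted.

```python
import numpy as np

def ndvi(nir_band, red_band):
    # NDVI = (NIR - Red) / (NIR + Red), computed per pixel.
    nir = nir_band.astype(float)
    red = red_band.astype(float)
    return (nir - red) / (nir + red + 1e-9)   # epsilon avoids division by zero

def plot_mean_ndvi(nir_band, red_band, plot_mask):
    # Average NDVI over one research plot (plot_mask is a boolean array).
    return ndvi(nir_band, red_band)[plot_mask].mean()
```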
Pattern recognition and feature extraction with an optical Hough transform
NASA Astrophysics Data System (ADS)
Fernández, Ariel
2016-09-01
Pattern recognition and localization, along with feature extraction, are image processing applications of great interest in defect inspection and robot vision, among others. In comparison to purely digital methods, the attractiveness of optical processors for pattern recognition lies in their highly parallel operation and real-time processing capability. This work presents an optical implementation of the generalized Hough transform (GHT), a well-established technique for the recognition of geometrical features in binary images. Detection of a geometric feature under the GHT is accomplished by mapping the original image to an accumulator space; the large computational requirements of this mapping make an optical implementation an attractive alternative to digital-only methods. Starting from the integral representation of the GHT, it is possible to devise an optical setup where the transformation is obtained and the size and orientation parameters can be controlled, allowing for dynamic scale- and orientation-variant pattern recognition. A compact system for these purposes results from the use of an electrically tunable lens for scale control and a rotating pupil mask for orientation variation, implemented on a high-contrast spatial light modulator (SLM). Real-time operation (as limited by the frame rate of the device used to capture the GHT) can also be achieved, allowing for the processing of video sequences. Besides, by thresholding the GHT (with the aid of another SLM) and inverse transforming (which is achieved optically in the incoherent system under an appropriate focusing setting), the previously detected features of interest can be extracted.
NASA Technical Reports Server (NTRS)
Gramenopoulos, N. (Principal Investigator)
1973-01-01
The author has identified the following significant results. For the recognition of terrain types, spatial signatures are developed from the diffraction patterns of small areas of ERTS-1 images. This knowledge is exploited for the measurement of a small number of meaningful spatial features from the digital Fourier transforms of ERTS-1 image cells containing 32 x 32 picture elements. Using these spatial features and a heuristic algorithm, the terrain types in the vicinity of Phoenix, Arizona were recognized by the computer with high accuracy. When the spatial features were combined with spectral features and the maximum likelihood criterion was used, the recognition accuracy of terrain types increased substantially. It was determined that the recognition accuracy with the maximum likelihood criterion depends on the statistics of the feature vectors. Nonlinear transformations of the feature vectors are required so that the terrain class statistics become approximately Gaussian. It was also determined that for a given geographic area the statistics of the classes remain invariant for a period of a month but vary substantially between seasons.
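A rough sketch of extracting a few spatial features from the digital Fourier transform of a 32 x 32 image cell, using a radially averaged power spectrum as the feature vector. The abstract does not specify the original heuristic features, so this is only an assumed stand-in.

```python
import numpy as np

def spatial_features(cell, n_bins=8):
    # Radially averaged power spectrum of a 32x32 cell as spatial features.
    F = np.fft.fftshift(np.fft.fft2(cell.astype(float)))
    power = np.abs(F) ** 2
    h, w = power.shape
    y, x = np.ogrid[:h, :w]
    radius = np.hypot(y - h / 2.0, x - w / 2.0)
    bins = np.minimum((radius / radius.max() * n_bins).astype(int), n_bins - 1)
    totals = np.bincount(bins.ravel(), weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(bins.ravel(), minlength=n_bins)
    return totals / counts
```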
Adaptive multiscale processing for contrast enhancement
NASA Astrophysics Data System (ADS)
Laine, Andrew F.; Song, Shuwu; Fan, Jian; Huda, Walter; Honeyman, Janice C.; Steinbach, Barbara G.
1993-07-01
This paper introduces a novel approach to mammographic feature analysis through overcomplete multiresolution representations. We show that efficient representations may be identified from digital mammograms within a continuum of scale space and used to enhance features of importance to mammography. Choosing analyzing functions that are well localized in both space and frequency results in a powerful methodology for image analysis. We describe methods of contrast enhancement based on two overcomplete (redundant) multiscale representations: (1) the dyadic wavelet transform and (2) the φ-transform. Mammograms are reconstructed from transform coefficients modified at one or more levels by non-linear, logarithmic, and constant scale-space weight functions. Multiscale edges identified within distinct levels of transform space provide local support for enhancement throughout each decomposition. We demonstrate that features extracted from wavelet spaces can provide an adaptive mechanism for accomplishing local contrast enhancement. We suggest that multiscale detection and local enhancement of singularities may be effectively employed for the visualization of breast pathology without excessive noise amplification.
Method and Apparatus for Evaluating the Visual Quality of Processed Digital Video Sequences
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
2002-01-01
A Digital Video Quality (DVQ) apparatus and method that incorporate a model of human visual sensitivity to predict the visibility of artifacts. The DVQ method and apparatus are used for the evaluation of the visual quality of processed digital video sequences and for adaptively controlling the bit rate of the processed digital video sequences without compromising the visual quality. The DVQ apparatus minimizes the required amount of memory and computation. The input to the DVQ apparatus is a pair of color image sequences: an original (R) non-compressed sequence, and a processed (T) sequence. Both sequences (R) and (T) are sampled, cropped, and subjected to color transformations. The sequences are then subjected to blocking and discrete cosine transformation, and the results are transformed to local contrast. The next step is a time filtering operation which implements the human sensitivity to different time frequencies. The results are converted to threshold units by dividing each discrete cosine transform coefficient by its respective visual threshold. At the next stage the two sequences are subtracted to produce an error sequence. The error sequence is subjected to a contrast masking operation, which also depends upon the reference sequence (R). The masked errors can be pooled in various ways to illustrate the perceptual error over various dimensions, and the pooled error can be converted to a visual quality measure.
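A fragmentary sketch of two steps in the pipeline described above: 8 x 8 blockwise DCT and conversion of coefficients to threshold (just-noticeable-difference) units before differencing the reference and test frames. The visual-threshold matrix here is a placeholder, and the color transform, temporal filtering, contrast masking, and pooling stages are omitted.

```python
import numpy as np
from scipy.fftpack import dct

def block_dct(frame, block=8):
    # Orthonormal 2-D DCT applied to each 8x8 block of a grayscale frame
    # (frame dimensions assumed to be multiples of 8).
    h, w = frame.shape
    out = np.zeros((h, w))
    for i in range(0, h, block):
        for j in range(0, w, block):
            b = frame[i:i + block, j:j + block].astype(float)
            out[i:i + block, j:j + block] = dct(
                dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")
    return out

def error_in_threshold_units(reference, test, thresholds):
    # Divide each DCT coefficient by its visual threshold (an 8x8 placeholder
    # matrix, tiled over the frame), then subtract the two frames.
    h, w = reference.shape
    t = np.tile(thresholds, (h // 8, w // 8))
    return (block_dct(test) - block_dct(reference)) / t
```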
Refining image segmentation by polygon skeletonization
NASA Technical Reports Server (NTRS)
Clarke, Keith C.
1987-01-01
A skeletonization algorithm was encoded and applied to a test data set of land-use polygons taken from a USGS digital land use dataset at 1:250,000. The distance transform produced by this method was instrumental in the description of the shape, size, and level of generalization of the outlines of the polygons. A comparison of the topology of skeletons for forested wetlands and lakes indicated that some distinction based solely upon the shape properties of the areas is possible, and may be of use in an intelligent automated land cover classification system.
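A small sketch of computing a distance transform and skeleton for a rasterized land-use polygon, assuming SciPy and scikit-image; the original 1987 algorithm operated on vector polygon outlines, so this raster version and the simple shape statistics below are only an approximation of the idea.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def polygon_shape_descriptors(mask):
    # mask: boolean array, True inside the polygon.
    dist = distance_transform_edt(mask)      # distance to the polygon outline
    skel = skeletonize(mask)                 # medial-axis-like skeleton
    # Simple shape statistics usable for land-cover discrimination.
    return {"max_width": 2 * dist.max(),
            "mean_width": 2 * dist[skel].mean() if skel.any() else 0.0,
            "skeleton_length": int(skel.sum())}
```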
ERIC Educational Resources Information Center
Tallman, Oliver H.
A digital simulation of a model for the processing of visual images is derived from known aspects of the human visual system. The fundamental principle of computation suggested by a biological model is a transformation that distributes information contained in an input stimulus everywhere in a transform domain. Each sensory input contributes under…
Making Room for the Transformation of Literacy Instruction in the Digital Classroom
ERIC Educational Resources Information Center
Sofkova Hashemi, Sylvana; Cederlund, Katarina
2017-01-01
Education is in the process of transforming traditional print-based instruction into digital formats. This multi-case study sheds light on the challenge of coping with the old and new in literacy teaching in the context of technology-mediated instruction in the early years of schooling (7-8 years old children). By investigating the relation…
Integrating Digital Images into the Art and Art History Curriculum.
ERIC Educational Resources Information Center
Pitt, Sharon P.; Updike, Christina B.; Guthrie, Miriam E.
2002-01-01
Describes an Internet-based image database system connected to a flexible, in-class teaching and learning tool (the Madison Digital Image Database) developed at James Madison University to bring digital images to the arts and humanities classroom. Discusses content, copyright issues, ensuring system effectiveness, instructional impact, sharing the…
Optical 3D watermark based digital image watermarking for telemedicine
NASA Astrophysics Data System (ADS)
Li, Xiao Wei; Kim, Seok Tae
2013-12-01
The region of interest (ROI) of a medical image is an area containing important diagnostic information and must be stored without any distortion. The algorithm applies the watermarking technique to the non-ROI of the medical image while preserving the ROI. The paper presents a 3D-watermark-based medical image watermarking scheme. A 3D watermark object is first decomposed into a 2D elemental image array (EIA) by a lenslet array, and then the 2D elemental image array data are embedded into the host image. The watermark extraction process is the inverse of embedding. From the extracted EIA, the 3D watermark can be reconstructed through the computational integral imaging reconstruction (CIIR) technique. Because the EIA is composed of a number of elemental images, each possessing its own perspective of the 3D watermark object, the 3D virtual watermark can be successfully reconstructed even if the embedded watermark data are badly damaged. Furthermore, using CAT with various rule-number parameters, it is possible to obtain many channels for embedding, so our method overcomes the weakness of having only one transform plane in traditional watermarking methods. The effectiveness of the proposed watermarking scheme is demonstrated with the aid of experimental results.
[Improvement of Digital Capsule Endoscopy System and Image Interpolation].
Zhao, Shaopeng; Yan, Guozheng; Liu, Gang; Kuang, Shuai
2016-01-01
Traditional endoscopy capsules collect and transmit analog images, with weak anti-interference ability, low frame rate, and low resolution. This paper presents a new digital image capsule, which collects and transmits digital images, with a frame rate of up to 30 frames/s and a pixel resolution of 400 x 400. The image is compressed inside the capsule and is transmitted outside the capsule for decompression and interpolation. A new interpolation algorithm is proposed, based on the relationship between the image planes, to obtain higher-quality colour images. Keywords: capsule endoscopy, digital image, SCCB protocol, image interpolation.
Simple and robust image-based autofocusing for digital microscopy.
Yazdanfar, Siavash; Kenny, Kevin B; Tasimi, Krenar; Corwin, Alex D; Dixon, Elizabeth L; Filkins, Robert J
2008-06-09
A simple image-based autofocusing scheme for digital microscopy is demonstrated that uses as few as two intermediate images to bring the sample into focus. The algorithm is adapted to a commercial inverted microscope and used to automate brightfield and fluorescence imaging of histopathology tissue sections.
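The abstract does not state which focus metric is used, so the sketch below shows a generic image-based sharpness score (variance of the Laplacian) and a coarse search over candidate focus positions; the two-intermediate-image estimate in the paper would replace the brute-force loop, and the acquisition callback is hypothetical.

```python
import numpy as np
from scipy.ndimage import laplace

def focus_score(image):
    # Variance of the Laplacian: higher means sharper (an assumed, common metric).
    return laplace(image.astype(float)).var()

def best_focus(acquire_at, positions):
    # acquire_at(z) is a hypothetical callback returning an image at stage
    # position z; returns the position with the highest sharpness score.
    scores = [focus_score(acquire_at(z)) for z in positions]
    return positions[int(np.argmax(scores))]
```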
NASA Technical Reports Server (NTRS)
Wu, Sherman S. C.; Howington, Annie-Elpis
1987-01-01
The Mars Digital Terrain Model (DTM) is the result of a new project to: (1) digitize the series of 1:2,000,000-scale topographic maps of Mars, which are being derived photogrammetically under a separate project, and (2) reformat the digital contour information into rasters of elevation that can be readily registered with the Digital Image Model (DIM) of Mars. Derivation of DTM's involves interpolation of elevation values into 1/64-degree resolution and transformation of them to a sinusoidal equal-area projection. Digital data are produced in blocks corresponding with the coordinates of the original 1:2,000,000-scale maps, i.e., the dimensions of each block in the equatorial belt are 22.5 deg of longitude and 15 deg of latitude. This DTM is not only compatible with the DIM, but it can also be registered with other data such as geologic units or gravity. It will be the most comprehensive record of topographic information yet compiled for the Martian surface. Once the DTM's are established, any enhancement of Mars topographic information made with updated data, such as data from the planned Mars Observer Mission, will be by mathematical transformation of the DTM's, eliminating the need for recompilation.
The Radon cumulative distribution transform and its application to image classification
Kolouri, Soheil; Park, Se Rim; Rohde, Gustavo K.
2016-01-01
Invertible image representation methods (transforms) are routinely employed as low-level image processing operations based on which feature extraction and recognition algorithms are developed. Most transforms in current use (e.g. Fourier, Wavelet, etc.) are linear transforms, and, by themselves, are unable to substantially simplify the representation of image classes for classification. Here we describe a nonlinear, invertible, low-level image processing transform based on combining the well known Radon transform for image data, and the 1D Cumulative Distribution Transform proposed earlier. We describe a few of the properties of this new transform, and with both theoretical and experimental results show that it can often render certain problems linearly separable in transform space. PMID:26685245
Generation of high-dynamic range image from digital photo
NASA Astrophysics Data System (ADS)
Wang, Ying; Potemin, Igor S.; Zhdanov, Dmitry D.; Wang, Xu-yang; Cheng, Han
2016-10-01
A number of modern applications, such as medical imaging, remote sensing satellite imaging, and virtual prototyping, use High Dynamic Range Images (HDRI). Generally, to obtain an HDRI from an ordinary digital image, the camera must be calibrated. The article proposes a camera calibration method that uses the clear sky as the standard light source and takes the sky luminance from the CIE sky model for the corresponding geographical coordinates and time. The article considers basic algorithms for obtaining real luminance values from an ordinary digital image and the corresponding software implementation of the algorithms. Moreover, examples of HDRI reconstructed from ordinary images illustrate the article.
Information recovery through image sequence fusion under wavelet transformation
NASA Astrophysics Data System (ADS)
He, Qiang
2010-04-01
Remote sensing is widely applied to provide information about areas with limited ground access, for example to assess destruction from natural disasters and to plan relief and recovery operations. However, the collection of aerial digital images is constrained by bad weather, atmospheric conditions, and unstable cameras or camcorders. Therefore, how to recover information from low-quality remote sensing images and how to enhance image quality become very important for many visual understanding tasks, such as feature detection, object segmentation, and object recognition. The quality of remote sensing imagery can be improved by meaningfully combining images captured from different sensors or under different conditions through information fusion. Here we particularly address information fusion of remote sensing image sequences under multi-resolution analysis. Image fusion recovers complete information by integrating multiple images captured of the same scene. Through image fusion, a new image with higher resolution, or one more perceptible to humans and machines, is created from a time series of low-quality images, based on image registration between different video frames.
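A compact sketch of multiresolution image fusion with PyWavelets, assuming two co-registered grayscale frames: detail coefficients are chosen by maximum magnitude and approximations are averaged. This is a generic wavelet-fusion rule, not necessarily the exact rule used in the paper, and the wavelet and level are arbitrary choices.

```python
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2", level=3):
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)
    # Average the coarse approximations, keep the stronger detail coefficients.
    fused = [(ca[0] + cb[0]) / 2.0]
    for details_a, details_b in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(details_a, details_b)))
    return pywt.waverec2(fused, wavelet)
```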
Digital camera with apparatus for authentication of images produced from an image file
NASA Technical Reports Server (NTRS)
Friedman, Gary L. (Inventor)
1993-01-01
A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely based upon the private key that digital data encrypted with the private key by the processor may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating at any time the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match, since even a one-bit change in the image file will cause its hash to be totally different from the secure hash.
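A minimal software analogue of the sign-and-verify scheme described above, using RSA from the Python cryptography package; key management, the in-camera embedding of the private key, and file formats are all outside this sketch.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

def make_camera_keypair():
    # The private key would live inside the camera; the public key is published.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    return private_key, private_key.public_key()

def sign_image(image_bytes, private_key):
    # Hash the image file (SHA-256) and sign the hash with the private key.
    return private_key.sign(image_bytes, padding.PKCS1v15(), hashes.SHA256())

def verify_image(image_bytes, signature, public_key):
    # Recompute the hash and check it against the signature with the public key.
    try:
        public_key.verify(signature, image_bytes, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False
```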
An exposure indicator for digital radiography: AAPM Task Group 116 (executive summary).
Shepard, S Jeff; Wang, Jihong; Flynn, Michael; Gingold, Eric; Goldman, Lee; Krugh, Kerry; Leong, David L; Mah, Eugene; Ogden, Kent; Peck, Donald; Samei, Ehsan; Wang, Jihong; Willis, Charles E
2009-07-01
Digital radiographic imaging systems, such as those using photostimulable storage phosphor, amorphous selenium, amorphous silicon, CCD, and MOSFET technology, can produce adequate image quality over a much broader range of exposure levels than that of screen/film imaging systems. In screen/film imaging, the final image brightness and contrast are indicative of over- and underexposure. In digital imaging, brightness and contrast are often determined entirely by digital postprocessing of the acquired image data. Overexposure and underexposures are not readily recognizable. As a result, patient dose has a tendency to gradually increase over time after a department converts from screen/film-based imaging to digital radiographic imaging. The purpose of this report is to recommend a standard indicator which reflects the radiation exposure that is incident on a detector after every exposure event and that reflects the noise levels present in the image data. The intent is to facilitate the production of consistent, high quality digital radiographic images at acceptable patient doses. This should be based not on image optical density or brightness but on feedback regarding the detector exposure provided and actively monitored by the imaging system. A standard beam calibration condition is recommended that is based on RQA5 but uses filtration materials that are commonly available and simple to use. Recommendations on clinical implementation of the indices to control image quality and patient dose are derived from historical tolerance limits and presented as guidelines.
Digital processing of radiographic images
NASA Technical Reports Server (NTRS)
Bond, A. D.; Ramapriyan, H. K.
1973-01-01
Some techniques, together with the software documentation, are presented for the digital enhancement of radiographs. Both image handling and image processing operations are considered. The image handling operations dealt with are: (1) conversion of data format from packed to unpacked and vice versa; (2) automatic extraction of image data arrays; (3) transposition and 90 deg rotations of large data arrays; (4) translation of data arrays for registration; and (5) reduction of the dimensions of data arrays by integral factors. Both the frequency and the spatial domain approaches are presented for the design and implementation of the image processing operations. It is shown that spatial domain recursive implementation of filters is much faster than nonrecursive implementations using fast Fourier transforms (FFT) for the cases of interest in this work. The recursive implementation of a class of matched filters for enhancing the image signal-to-noise ratio is described. Test patterns are used to illustrate the filtering operations. The application of the techniques to radiographic images of metallic structures is demonstrated through several examples.
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Lazarev, Alexander A.; Nikitovich, Diana V.
2017-10-01
The paper considers the results of design and modeling of continuously logical base cells (CL BC) based on current mirrors (CM), with functions of preliminary analogue and subsequent analogue-digital processing, for creating sensor multichannel analog-to-digital converters (SMC ADCs) and image processors (IP). Such IP and SMC ADCs with vector or matrix parallel inputs-outputs require active basic photosensitive cells with an extended electronic circuit, which are considered in the paper. Such basic cells, and ADCs based on them, have a number of advantages: high speed and reliability, simplicity, small power consumption, and a high integration level for linear and matrix structures. We show the design of the CL BC and the ADC of photocurrents, their various possible implementations, and their simulations. We consider CL BC for selection and rank preprocessing methods and a linear array of ADCs with conversion to binary codes and Gray codes. In contrast to our previous works, here we dwell more on analogue preprocessing schemes for the signals of neighboring cells, and we show how the introduction of simple nodes based on current mirrors extends the range of functions performed by the image processor. Each channel of the structure consists of several digital-analog cells (DC) built on 15-35 CMOS transistors. The number of DCs does not exceed the number of digits of the formed code, and for the iterative type only one DC, complemented by a selection-and-holding device (SHD), is required. One channel of the iterative ADC is based on one DC-(G) and an SHD, and it has only 35 CMOS transistors. In such ADCs a parallel output code can easily be realized, as well as a serial-parallel output code. The circuits and the results of their simulation with OrCAD are shown. The supply voltage of the DC is 1.8-3.3 V, the range of the input photocurrent is 0.1-24 μA, and the conversion time is 20-30 ns for 6-8 bit binary or Gray codes. The total power consumption of the iterative ADC is only 50-100 μW if the maximum input current is 4 μA. Such a simple structure for a linear array of ADCs, with low power consumption and a 3.3 V supply voltage, and at the same time with good dynamic characteristics (a digitization frequency of 40-50 MHz even for 1.5-μm CMOS technologies, which can be increased up to 10 times) and good accuracy, is shown. The SMC ADCs based on CL BC and CM open new prospects for the realization of linear and matrix IP and photo-electronic structures with matrix operands, which are necessary for neural networks, digital optoelectronic processors, and neural-fuzzy controllers.
Spectrally Adaptable Compressive Sensing Imaging System
2014-05-01
The time-varying coded apertures can be implemented using micro-piezo motors or through the use of Digital Micromirror Devices. The feasibility of this testbed was demonstrated by developing a Digital-Micromirror-Device-based Snapshot Spectral Imaging (DMD-SSI) system, which implements CS measurements. Reference: Y. Wu, I. O. Mirza, G. R. Arce, and D. W. Prather, "Development of a digital-micromirror-device-based multishot snapshot spectral imaging..."
Venkatesh, S K; Wang, G; Seet, J E; Teo, L L S; Chong, V F H
2013-03-01
The aim was to evaluate the feasibility of magnetic resonance imaging (MRI) for transforming preserved organs and their disease entities into digital formats for medical education and the creation of a virtual museum. MRI of 114 selected pathology specimen jars representing different organs and their diseases was performed using a 3 T MRI machine with two or more MRI sequences, including three-dimensional (3D) T1-weighted (T1W), 3D-T2W, 3D-FLAIR (fluid attenuated inversion recovery), fat-water separation (DIXON), and gradient-recalled echo (GRE) sequences. A qualitative assessment of MRI for depiction of disease and internal anatomy was performed. Volume rendering was performed on commercially available workstations. The digital images, 3D models, and photographs of the specimens were archived on a workstation serving as a virtual pathology museum. MRI was successfully performed on all specimens. The 3D-T1W and 3D-T2W sequences demonstrated the best contrast between normal and pathological tissues. The digital material is a useful aid for understanding disease by giving insights into internal structural changes not apparent on visual inspection alone. Volume rendering produced vivid 3D models, in some cases with better contrast between normal and diseased tissue than the real specimens or their photographs. The digital library provides good illustration material for radiological-pathological correlation by enhancing pathological anatomy and information on the nature and signal characteristics of tissues. In some specimens, the MRI appearance may differ from the corresponding organ and disease in vivo because of dead tissue and changes induced by prolonged contact with preservative fluid. MRI of pathology specimens is feasible and provides excellent images for education and for creating a virtual pathology museum that can serve as a permanent record of digital material for self-directed learning, improved teaching aids, and radiological-pathological correlation. Copyright © 2012 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Digital Light Processing update: status and future applications
NASA Astrophysics Data System (ADS)
Hornbeck, Larry J.
1999-05-01
Digital Light Processing (DLP) projection displays based on the Digital Micromirror Device (DMD) were introduced to the market in 1996. Less than 3 years later, DLP-based projectors are found in such diverse applications as mobile, conference room, video wall, home theater, and large-venue. They provide high-quality, seamless, all-digital images that have exceptional stability as well as freedom from both flicker and image lag. Marked improvements have been made in the image quality of DLP-based projection displays, including brightness, resolution, contrast ratio, and border image. DLP-based mobile projectors that weighed about 27 pounds in 1996 now weigh only about 7 pounds. This weight reduction has been responsible for the definition of an entirely new projector class, the ultraportable. New applications are being developed for this important new projection display technology; these include digital photofinishing for high process speed minilab and maxilab applications and DLP Cinema for the digital delivery of films to audiences around the world. This paper describes the status of DLP-based projection display technology, including its manufacturing, performance improvements, and new applications, with emphasis on DLP Cinema.
NASA Technical Reports Server (NTRS)
Traub, W. A.
1984-01-01
The first physical demonstration of the principle of image reconstruction using a set of images from a diffraction-blurred elongated aperture is reported. This is an optical validation of previous theoretical and numerical simulations of the COSMIC telescope array (coherent optical system of modular imaging collectors). The present experiment utilizes 17 diffraction blurred exposures of a laboratory light source, as imaged by a lens covered by a narrow-slit aperture; the aperture is rotated 10 degrees between each exposure. The images are recorded in digitized form by a CCD camera, Fourier transformed, numerically filtered, and added; the sum is then filtered and inverse Fourier transformed to form the final image. The image reconstruction process is found to be stable with respect to uncertainties in values of all physical parameters such as effective wavelength, rotation angle, pointing jitter, and aperture shape. Future experiments will explore the effects of low counting rates, autoguiding on the image, various aperture configurations, and separated optics.
Steerable dyadic wavelet transform and interval wavelets for enhancement of digital mammography
NASA Astrophysics Data System (ADS)
Laine, Andrew F.; Koren, Iztok; Yang, Wuhai; Taylor, Fred J.
1995-04-01
This paper describes two approaches for accomplishing interactive feature analysis through overcomplete multiresolution representations. We show quantitatively that transform coefficients, modified by an adaptive non-linear operator, can make unseen or barely seen mammographic features more obvious without requiring additional radiation. Our results are compared with traditional image enhancement techniques by measuring the local contrast of known mammographic features. We design a filter bank representing a steerable dyadic wavelet transform that can be used for multiresolution analysis along arbitrary orientations. Digital mammograms are enhanced by orientation analysis performed with the steerable dyadic wavelet transform. Arbitrary regions of interest (ROI) are enhanced by Deslauriers-Dubuc interpolation representations on an interval. We demonstrate that our methods can provide radiologists with an interactive capability to support localized processing of selected (suspicious) areas (lesions). Features extracted from multiscale representations can provide an adaptive mechanism for accomplishing local contrast enhancement. Improving the visualization of breast pathology can improve the chances of early detection while requiring less time to evaluate mammograms for most patients.
Digital hand atlas and computer-aided bone age assessment via the Web
NASA Astrophysics Data System (ADS)
Cao, Fei; Huang, H. K.; Pietka, Ewa; Gilsanz, Vicente
1999-07-01
A frequently used method of bone age assessment is atlas matching, a radiological examination of a hand image against a reference set of atlas patterns of normal standards. We are in the process of developing a digital hand atlas with a large standard set of normal hand and wrist images that reflects skeletal maturity, race and sex differences, and current child development. The digital hand atlas will be used for computer-aided bone age assessment via the Web. We have designed and partially implemented a computer-aided diagnostic (CAD) system for Web-based bone age assessment. The system consists of a digital hand atlas, a relational image database, and a Web-based user interface. The digital atlas is based on a large standard set of normal hand and wrist images with extracted bone objects and quantitative features. The image database uses content-based indexing to organize the hand images and their attributes and presents them to users in a structured way. The Web-based user interface allows users to interact with the hand image database from browsers. Users can use a Web browser to push a clinical hand image to the CAD server for bone age assessment. Quantitative features of the examined image, which reflect skeletal maturity, are extracted and compared with patterns from the atlas database to assess the bone age. The relevant reference images and the final assessment report are then sent back to the user's browser via the Web. The digital atlas will remove the disadvantages of the current, out-of-date one and allow bone age assessment to be computerized and done conveniently via the Web. In this paper, we present the system design and the Web-based client-server model for computer-assisted bone age assessment and our initial implementation of the digital atlas database.
Image Discrimination Models Predict Object Detection in Natural Backgrounds
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Rohaly, A. M.; Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)
1994-01-01
Object detection involves looking for one of a large set of object sub-images in a large set of background images. Image discrimination models only predict the probability that an observer will detect a difference between two images. In a recent study based on only six different images, we found that discrimination models can predict the relative detectability of objects in those images, suggesting that these simpler models may be useful in some object detection applications. Here we replicate this result using a new, larger set of images. Fifteen images of a vehicle in an otherwise natural setting were altered to remove the vehicle and mixed with the original image in a proportion chosen to make the target neither perfectly recognizable nor unrecognizable. The target was also rotated about a vertical axis through its center and mixed with the background. Sixteen observers rated these 30 target images and the 15 background-only images for the presence of a vehicle. The likelihoods of the observer responses were computed from a Thurstone scaling model with the assumption that the detectabilities are proportional to the predictions of an image discrimination model. Three image discrimination models were used: a cortex transform model, a single channel model with a contrast sensitivity function filter, and the Root-Mean-Square (RMS) difference of the digital target and background-only images. As in the previous study, the cortex transform model performed best; the RMS difference predictor was second best; and last, but still a reasonable predictor, was the single channel model. Image discrimination models can predict the relative detectabilities of objects in natural backgrounds.
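A one-function sketch of the simplest predictor mentioned above, the RMS difference between the digital target and background-only images; the cortex-transform and single-channel contrast-sensitivity models are not reproduced here.

```python
import numpy as np

def rms_difference(target_image, background_image):
    # Root-mean-square pixel difference, used as a detectability predictor.
    d = target_image.astype(float) - background_image.astype(float)
    return float(np.sqrt(np.mean(d ** 2)))
```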
Granero, Luis; Zalevsky, Zeev; Micó, Vicente
2011-04-01
We present a new implementation capable of producing two-dimensional (2D) superresolution (SR) imaging in a single exposure by aperture synthesis in digital lensless Fourier holography when using angular multiplexing provided by a vertical cavity surface-emitting laser source array. The system performs the recording in a single CCD snapshot of a multiplexed hologram coming from the incoherent addition of multiple subholograms, where each contains information about a different 2D spatial frequency band of the object's spectrum. Thus, a set of nonoverlapping bandpass images of the input object can be recovered by Fourier transformation (FT) of the multiplexed hologram. The SR is obtained by coherent addition of the information contained in each bandpass image while generating an enlarged synthetic aperture. Experimental results demonstrate improvement in resolution and image quality.
High-performance compression of astronomical images
NASA Technical Reports Server (NTRS)
White, Richard L.
1993-01-01
Astronomical images have some rather unusual characteristics that make many existing image compression techniques either ineffective or inapplicable. A typical image consists of a nearly flat background sprinkled with point sources and occasional extended sources. The images are often noisy, so that lossless compression does not work very well; furthermore, the images are usually subjected to stringent quantitative analysis, so any lossy compression method must be proven not to discard useful information, but must instead discard only the noise. Finally, the images can be extremely large. For example, the Space Telescope Science Institute has digitized photographic plates covering the entire sky, generating 1500 images each having 14000 x 14000 16-bit pixels. Several astronomical groups are now constructing cameras with mosaics of large CCD's (each 2048 x 2048 or larger); these instruments will be used in projects that generate data at a rate exceeding 100 MBytes every 5 minutes for many years. An effective technique for image compression may be based on the H-transform (Fritze et al. 1977). The method that we have developed can be used for either lossless or lossy compression. The digitized sky survey images can be compressed by at least a factor of 10 with no noticeable losses in the astrometric and photometric properties of the compressed images. The method has been designed to be computationally efficient: compression or decompression of a 512 x 512 image requires only 4 seconds on a Sun SPARCstation 1. The algorithm uses only integer arithmetic, so it is completely reversible in its lossless mode, and it could easily be implemented in hardware for space applications.
New approaches to digital transformation of petrochemical production
NASA Astrophysics Data System (ADS)
Andieva, E. Y.; Kapelyuhovskaya, A. A.
2017-08-01
The newest concepts of the reference architecture of digital industrial transformation are considered, the problems of their application for the enterprises having in their life cycle oil products processing and marketing are revealed. The concept of the reference architecture, providing a systematic representation of the fundamental changes in the approaches to production management based on the automation of production process control is proposed.
An optical watermarking solution for color personal identification pictures
NASA Astrophysics Data System (ADS)
Tan, Yi-zhou; Liu, Hai-bo; Huang, Shui-hua; Sheng, Ben-jian; Pan, Zhong-ming
2009-11-01
This paper presents a new approach for embedding authentication information into images on printed materials based on an optical projection technique. Our experimental setup consists of two parts: one is a common camera, and the other is an LCD projector, which projects a pattern onto the subject's body (especially the face). The pattern, generated by a computer, acts as an illumination light source with a sinusoidal distribution and is also the watermark signal. For a color image, the watermark is embedded into the blue channel. While we take pictures (256×256, 512×512, and 567×390 pixels, respectively), an invisible mark is embedded directly into the magnitude coefficients of the Discrete Fourier Transform (DFT) at the moment of exposure. Both optical and digital correlation are suitable for detection of this type of watermark. The decoded watermark is a set of concentric circles or sectors in the DFT domain (middle-frequency region) which is robust to photographing, printing, and scanning. Unlawful individuals may modify or replace the original photograph to make fake passports, drivers' licenses, and so on. Experiments show that it is difficult to forge certificates in which a watermark was embedded by our projector-camera combination based on this analogue watermarking method rather than a classical digital method.
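A purely digital analogue of the embedding step can be sketched as follows: a concentric ring in the mid-frequency region of the blue channel's DFT magnitude is strengthened while the phase is left untouched. The radii, strength, and image size used here are illustrative assumptions, not values from the paper, and the optical projection stage is not modeled.

    import numpy as np

    def embed_ring_watermark(blue_channel, strength=5.0, r_inner=40, r_outer=44):
        # Boost DFT magnitudes on a mid-frequency ring of the blue channel (phase preserved).
        F = np.fft.fftshift(np.fft.fft2(blue_channel.astype(np.float64)))
        rows, cols = blue_channel.shape
        y, x = np.indices((rows, cols))
        r = np.hypot(y - rows / 2.0, x - cols / 2.0)
        ring = (r >= r_inner) & (r <= r_outer)
        magnitude = np.abs(F)
        phase = np.angle(F)
        magnitude[ring] += strength * magnitude.mean()
        marked = np.real(np.fft.ifft2(np.fft.ifftshift(magnitude * np.exp(1j * phase))))
        return np.clip(marked, 0, 255)

    blue = np.random.default_rng(1).integers(0, 256, (256, 256)).astype(np.float64)
    marked_blue = embed_ring_watermark(blue)

Detection would then correlate the DFT magnitude of a scanned print against the known ring pattern.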
Automated geographic registration and radiometric correction for UAV-based mosaics
NASA Astrophysics Data System (ADS)
Thomasson, J. Alex; Shi, Yeyin; Sima, Chao; Yang, Chenghai; Cope, Dale A.
2017-05-01
Texas A&M University has been operating a large-scale, UAV-based, agricultural remote-sensing research project since 2015. To use UAV-based images in agricultural production, many high-resolution images must be mosaicked together to create an image of an agricultural field. Two key difficulties in science-based utilization of such mosaics are geographic registration and radiometric calibration. In our current research project, image files are taken to the computer laboratory after the flight, and semi-manual pre-processing is implemented on the raw image data, including ortho-mosaicking and radiometric calibration. Ground control points (GCPs) are critical for high-quality geographic registration of images during mosaicking. Applications requiring accurate reflectance data also require radiometric-calibration references so that reflectance values of image objects can be calculated. We have developed a method for automated geographic registration and radiometric correction with targets that are installed semi-permanently at distributed locations around fields. The targets are a combination of black (≈5% reflectance), dark gray (≈20% reflectance), and light gray (≈40% reflectance) sections that provide for a transformation of pixel-value to reflectance in the dynamic range of crop fields. The exact spectral reflectance of each target is known, having been measured with a spectrophotometer. At the time of installation, each target is measured for position with a real-time kinematic GPS receiver to give its precise latitude and longitude. Automated location of the reference targets in the images is required for precise, automated, geographic registration; and automated calculation of the digital-number to reflectance transformation is required for automated radiometric calibration. To validate the system for radiometric calibration, a calibrated UAV-based image mosaic of a field was compared to a calibrated single image from a manned aircraft. Reflectance values in selected zones of each image were strongly linearly related, and the average error of UAV-mosaic reflectances was 3.4% in the red band, 1.9% in the green band, and 1.5% in the blue band. Based on these results, the proposed physical system and automated software for calibrating UAV mosaics show excellent promise.
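The digital-number-to-reflectance transformation amounts to fitting a line through the known target reflectances and the pixel values observed at the target locations. The sketch below assumes a simple per-band gain/offset model and uses illustrative pixel values; the actual pipeline locates the targets automatically and may fit a different model.

    import numpy as np

    # Known target reflectances (black, dark gray, light gray) and hypothetical
    # digital numbers sampled from the mosaic at the target locations for one band.
    target_reflectance = np.array([0.05, 0.20, 0.40])
    target_dn = np.array([31.0, 102.0, 188.0])   # illustrative values only

    # Least-squares line: reflectance = gain * DN + offset
    gain, offset = np.polyfit(target_dn, target_reflectance, 1)

    def dn_to_reflectance(dn):
        # Convert raw digital numbers to reflectance using the fitted line.
        return gain * dn + offset

    print(dn_to_reflectance(150.0))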
Graphics and Flow Visualization of Computer Generated Flow Fields
NASA Technical Reports Server (NTRS)
Kathong, M.; Tiwari, S. N.
1987-01-01
Flow field variables are visualized using color representations described on surfaces that are interpolated from computational grids and transformed to digital images. Techniques for displaying two and three dimensional flow field solutions are addressed. The transformations and the use of an interactive graphics program for CFD flow field solutions, called PLOT3D, which runs on the color graphics IRIS workstation are described. An overview of the IRIS workstation is also described.
Automated daily quality control analysis for mammography in a multi-unit imaging center.
Sundell, Veli-Matti; Mäkelä, Teemu; Meaney, Alexander; Kaasalainen, Touko; Savolainen, Sauli
2018-01-01
Background: The high requirements for mammography image quality necessitate a systematic quality assurance process. Digital imaging allows automation of the image quality analysis, which can potentially improve repeatability and objectivity compared to a visual evaluation made by the users. Purpose: To develop an automatic image quality analysis software for daily mammography quality control in a multi-unit imaging center. Material and Methods: An automated image quality analysis software using the discrete wavelet transform and multiresolution analysis was developed for the American College of Radiology accreditation phantom. The software was validated by analyzing 60 randomly selected phantom images from six mammography systems and 20 phantom images with different dose levels from one mammography system. The results were compared to a visual analysis made by four reviewers. Additionally, long-term image quality trends of a full-field digital mammography system and a computed radiography mammography system were investigated. Results: The automated software produced feature detection levels comparable to visual analysis. The agreement was good in the case of fibers, while the software detected somewhat more microcalcifications and characteristic masses. Long-term follow-up via a quality assurance web portal demonstrated the feasibility of using the software for monitoring the performance of mammography systems in a multi-unit imaging center. Conclusion: Automated image quality analysis enables monitoring the performance of digital mammography systems in an efficient, centralized manner.
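A minimal sketch of the transform step is shown below: a multi-level 2D discrete wavelet decomposition of a phantom image, with subband energies serving as crude feature-detectability measures. The wavelet family, decomposition level, and features are assumptions for illustration; the validated software uses its own choices, which are not reproduced here.

    import numpy as np
    import pywt

    phantom = np.random.default_rng(2).normal(100.0, 5.0, (256, 256))   # stand-in phantom image
    coeffs = pywt.wavedec2(phantom, wavelet='db4', level=3)

    approx = coeffs[0]                 # coarse approximation band
    for level, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
        # Detail-band energy per level: a simple surrogate for the visibility of
        # fibers, microcalcifications, and masses at different scales.
        energy = np.sum(cH ** 2) + np.sum(cV ** 2) + np.sum(cD ** 2)
        print(level, energy)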
Schiffman, Rhett M; Jacobsen, Gordon; Nussbaum, Julian J; Desai, Uday R; Carey, J David; Glasser, David; Zimmer-Galler, Ingrid E; Zeimer, Ran; Goldberg, Morton F
2005-01-01
Because patients with diabetes mellitus may visit their primary care physician regularly but not their ophthalmologist, a retinal risk assessment in the primary care setting could improve the screening rate for diabetic retinopathy. An imaging system for use in the primary care setting to identify diabetic retinopathy requiring referral to an ophthalmologist was evaluated. In a masked prospective study, images were obtained from 11 patients with diabetes mellitus using both the digital retinal imaging system and seven-field stereo color fundus photography. The ability to obtain gradable images and to identify diabetic retinal lesions was compared. Of all images, 85% of digital retinal imaging system images and 88% of seven-field images were gradable. Agreement based on "no retinopathy" versus "any retinopathy" was excellent (Kappa = 0.96). Agreement based on "microaneurysms or less retinopathy" versus "retinal hemorrhages or worse retinopathy" was very good (Kappa = 0.83). The agreement between the digital retinal imaging system and seven-field photography indicates that the digital retinal imaging system may be useful to screen for diabetic retinopathy.
Feng, Ziang; Gao, Zhan; Zhang, Xiaoqiong; Wang, Shengjia; Yang, Dong; Yuan, Hao; Qin, Jie
2015-09-01
Digital shearing speckle pattern interferometry (DSSPI) has been recognized as a practical tool in testing strain. The DSSPI system which is based on temporal analysis is attractive because of its ability to measure strain dynamically. In this paper, such a DSSPI system with Wollaston prism has been built. The principles and system arrangement are described and the preliminary experimental result of the displacement-derivative test of an aluminum plate is shown with the wavelet transformation method and the Fourier transformation method. The simulations have been conducted with the finite element method. The comparison of the results shows that quantitative measurement of displacement-derivative has been realized.
Naumovich, S S; Naumovich, S A; Goncharenko, V G
2015-01-01
The objective of the present study was the development and clinical testing of a three-dimensional (3D) reconstruction method for the teeth and the jaw bone tissue on the basis of CT images of the maxillofacial region. 3D reconstruction was performed using specially designed original software based on the watershed transformation. Computed tomograms in Digital Imaging and Communications in Medicine format obtained on multispiral CT and CBCT scanners were used for creation of 3D models of the teeth and the jaws. The processing algorithm is realized as stepwise threshold image segmentation with the placement of markers, in the multiplanar projection mode, in areas corresponding to the teeth and bone tissue. The developed software initially creates coarse 3D models of the entire dentition and the jaw. Then, certain procedures refine the model of the jaw and cut the dentition into separate teeth. The proper selection of the segmentation threshold is very important for CBCT images, which have low contrast and a high noise level. The developed semi-automatic algorithm for multispiral and cone beam computed tomogram processing allows 3D models of teeth to be created, separating them from the bone tissue of the jaws. The software is easy to install at a dentist's workplace, has an intuitive interface and takes little processing time. The obtained 3D models can be used for solving a wide range of scientific and clinical tasks.
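The marker-driven watershed step can be illustrated in two dimensions as below, using thresholds to place background and tooth/bone markers and flooding a gradient image. This is only a schematic stand-in: the described software operates on 3D CT volumes with interactively placed markers and a stepwise threshold scheme.

    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import sobel
    from skimage.segmentation import watershed

    rng = np.random.default_rng(3)
    slice_img = rng.normal(0.0, 0.05, (128, 128))
    slice_img[40:90, 40:90] += 1.0                # synthetic high-density "tooth/bone" region

    gradient = sobel(slice_img)                   # edges of the region
    markers = np.zeros(slice_img.shape, dtype=np.int32)
    markers[slice_img < 0.2] = 1                  # background marker
    markers[slice_img > 0.8] = 2                  # tooth/bone marker
    labels = watershed(gradient, markers)         # flood the gradient from the markers
    print(ndi.sum(np.ones_like(labels), labels, index=[1, 2]))   # pixels per region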
2017-03-20
...sub-array, which is based on all-pass filters (APFs), is realized using 130 nm CMOS technology. Approximate discrete Fourier transform (a-DFT)... fixed beams are directed at known directions [9]. The proposed approximate discrete Fourier transform (a-DFT) based multi-beamformer [9] yields L... to digital conversion daughter board. ...occurs in the discrete time domain (in the ROACH-2 FPGA platform) following signal digitization (see Figs. 1(d) and...
Ultra-realistic imaging: a new beginning for display holography
NASA Astrophysics Data System (ADS)
Bjelkhagen, Hans I.; Brotherton-Ratcliffe, David
2014-02-01
Recent improvements in key foundation technologies are set to potentially transform the field of Display Holography. In particular new recording systems, based on recent DPSS and semiconductor lasers combined with novel recording materials and processing, have now demonstrated full-color analogue holograms of both lower noise and higher spectral accuracy. Progress in illumination technology is leading to a further major reduction in display noise and to a significant increase of the clear image depth and brightness of such holograms. So too, recent progress in 1-step Direct-Write Digital Holography (DWDH) now opens the way to the creation of High Virtual Volume Displays (HVV) - large format full-parallax DWDH reflection holograms having fundamentally larger clear image depths. In a certain fashion HVV displays can be thought of as providing a high quality full-color digital equivalent to the large-format laser-illuminated transmission holograms of the sixties and seventies. Back then, the advent of such holograms led to much optimism for display holography in the market. However, problems with laser illumination, their monochromatic analogue nature and image noise are well cited as being responsible for their failure in reality. Is there reason for believing that the latest technology improvements will make the mark this time around? This paper argues that indeed there is.
Constraints to solve parallelogram grid problems in 2D non separable linear canonical transform
NASA Astrophysics Data System (ADS)
Zhao, Liang; Healy, John J.; Muniraj, Inbarasan; Cui, Xiao-Guang; Malallah, Ra'ed; Ryle, James P.; Sheridan, John T.
2017-05-01
The 2D non-separable linear canonical transform (2D-NS-LCT) can model a range of paraxial optical systems. Digital algorithms to evaluate the 2D-NS-LCT are important in modeling light field propagation and are also of interest in many digital signal processing applications. In [Zhao 14] we reported that a given 2D input image with a rectangular shape/boundary in general results in a parallelogram output sampling grid (generally in affine rather than Cartesian coordinates), thus limiting further calculations, e.g. the inverse transform. One possible solution is to use interpolation techniques; however, this reduces the speed and accuracy of the numerical approximations. To alleviate this problem, in this paper some constraints are derived under which the output samples are located on a Cartesian grid. Therefore, no interpolation operation is required and the calculation error can be significantly reduced.
Paivio, A
1978-02-01
Subjects in three experiments were presented with pairs of clock times and were required to choose the one in which the hour and minute hands formed the smaller angle. In Experiments 1 and 2, the times were presented digitally, necessitating a transformation into symbolic representations from which the angular size difference could be inferred. The results revealed orderly symbolic distance effects so that comparison reaction time increased as the angular size difference decreased. Moreover, subjects generally reported using imagery to make the judgment, and subjects scoring high on tests of imagery ability were faster than those scoring low on such tests. Experiment 3 added a direct perceptual condition in which subjects compared angles between pairs of hands on two drawn (analog) clocks, as well as a mixed condition involving one digital and one analog clock time. The results showed comparable distance effects for all conditions. In addition, reaction time increased from the perceptual, to the mixed, to the pure-digital condition. These results are consistent with predictions from an image-based dual-coding theory.
ERIC Educational Resources Information Center
Westbrook, R. Niccole; Watkins, Sean
2012-01-01
As primary source materials in the library are digitized and made available online, the focus of related library services is shifting to include new and innovative methods of digital delivery via social media, digital storytelling, and community-based and consortial image repositories. Most images on the Web are not of sufficient quality for most…
78 FR 64916 - Application(s) for Duty-Free Entry of Scientific Instruments
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-30
...., light to heat), crystallization, melting, phase transformations, fracture, and other dynamic events. The... Sciences University, 1120 15th Street, Augusta, GA 30912. Instrument: Imaging System/Digital Microscope... the instrument include fast wavelength change, a dichromotome system, and two different light sources...
Photogrammetric Processing of Planetary Linear Pushbroom Images Based on Approximate Orthophotos
NASA Astrophysics Data System (ADS)
Geng, X.; Xu, Q.; Xing, S.; Hou, Y. F.; Lan, C. Z.; Zhang, J. J.
2018-04-01
It is still a very challenging task to efficiently produce planetary mapping products from orbital remote sensing images. There are many disadvantages in photogrammetric processing of planetary stereo images, such as the lack of ground control information and informative features. Among these, image matching is the most difficult task in planetary photogrammetry. This paper designs a photogrammetric processing framework for planetary remote sensing images based on approximate orthophotos. Both tie point extraction for bundle adjustment and dense image matching for generating the digital terrain model (DTM) are performed on approximate orthophotos. Since most planetary remote sensing images are acquired by linear scanner cameras, we mainly deal with linear pushbroom images. In order to improve the computational efficiency of orthophoto generation and coordinate transformation, a fast back-projection algorithm for linear pushbroom images is introduced. Moreover, an iterative DTM and orthophoto refinement scheme was adopted in the DTM generation process, which helps to reduce the search space of image matching and improve the matching accuracy of conjugate points. With the advantages of approximate orthophotos, the matching results for planetary remote sensing images can be greatly improved. We tested the proposed approach with Mars Express (MEX) High Resolution Stereo Camera (HRSC) and Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) images. The preliminary experimental results demonstrate the feasibility of the proposed approach.
Wavelet Compression of Satellite-Transmitted Digital Mammograms
NASA Technical Reports Server (NTRS)
Zheng, Yuan F.
2001-01-01
Breast cancer is one of the major causes of cancer death in women in the United States. The most effective way to treat breast cancer is to detect it at an early stage by screening patients periodically. Conventional film-screening mammography uses X-ray films which are effective in detecting early abnormalities of the breast. Direct digital mammography has the potential to improve the image quality and to take advantage of convenient storage, efficient transmission, and powerful computer-aided diagnosis, etc. One effective alternative to direct digital imaging is secondary digitization of X-ray films. This technique may not provide as high an image quality as the direct digital approach, but it definitely has other advantages inherent to digital images. One of them is the use of satellite transmission for transferring digital mammograms between a remote image-acquisition site and a central image-reading site. This technique can benefit a large population of women who reside in remote areas where major screening and diagnosing facilities are not available. The NASA-Lewis Research Center (LeRC), in collaboration with the Cleveland Clinic Foundation (CCF), has begun a pilot study to investigate the application of the Advanced Communications Technology Satellite (ACTS) network to telemammography. The bandwidth of the T1 transmission is limited (1.544 Mbps) while the size of a mammographic image is huge. It takes a long time to transmit a single mammogram. For example, a mammogram of 4k by 4k pixels with 16 bits per pixel needs more than 4 minutes to transmit. Four images for a typical screening exam would take more than 16 minutes. This is too long a time period for a convenient screening. Consequently, compression is necessary for making satellite transmission of mammographic images practically possible. The Wavelet Research Group of the Department of Electrical Engineering at The Ohio State University (OSU) participated in the LeRC-CCF collaboration by providing advanced compression technology using the wavelet transform. OSU developed a time-efficient software package with various wavelets to compress a series of mammographic images. This document reports the results of the compression activities.
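The two ingredients above, the link-budget arithmetic and the wavelet compression itself, can be sketched as follows. The code uses PyWavelets with an arbitrary wavelet, level, and keep-fraction purely for illustration; the OSU package and its wavelet choices are not reproduced, and the printed transmission time is raw payload only (protocol overhead and error control make real transfers longer, consistent with the figures quoted above).

    import numpy as np
    import pywt

    # Raw payload time for one 4k x 4k, 16-bit mammogram over a 1.544 Mbps T1 link.
    bits = 4096 * 4096 * 16
    print(bits / 1.544e6 / 60.0, "minutes of raw payload, before protocol overhead")

    def wavelet_compress(image, wavelet='db4', level=4, keep_fraction=0.05):
        # Lossy sketch: decompose, zero all but the largest coefficients, reconstruct.
        coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        threshold = np.quantile(np.abs(arr), 1.0 - keep_fraction)
        arr = np.where(np.abs(arr) >= threshold, arr, 0.0)
        return pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format='wavedec2'),
                             wavelet=wavelet)

    mammo = np.random.default_rng(4).normal(500.0, 50.0, (512, 512))   # stand-in image
    reconstructed = wavelet_compress(mammo)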
NASA Astrophysics Data System (ADS)
Cao, Binfang; Li, Xiaoqin; Liu, Changqing; Li, Jianqi
2017-08-01
As local colleges undergo the transformation toward applied education, teachers urgently need to make corresponding changes in the content and methods of their courses. The article discusses the practice teaching reform of the Photoelectric Image Processing course in the Optoelectronic Information Science and Engineering major. The Digital Signal Processing (DSP) platform is introduced into the experimental teaching. It mobilizes and inspires students and also enhances their learning motivation and innovation through specific examples. Through this teaching practice, the course has become one of the most popular courses among students, which further drives their enthusiasm and confidence to participate in all kinds of electronic design competitions.
Detection of fresh bruises in apples by structured-illumination reflectance imaging
NASA Astrophysics Data System (ADS)
Lu, Yuzhen; Li, Richard; Lu, Renfu
2016-05-01
Detection of fresh bruises in apples remains a challenging task due to the absence of visual symptoms and significant chemical alterations of fruit tissues during the initial stage after the fruit have been bruised. This paper reports on a new structured-illumination reflectance imaging (SIRI) technique for enhanced detection of fresh bruises in apples. Using a digital light projector engine, sinusoidally-modulated illumination at the spatial frequencies of 50, 100, 150 and 200 cycles/m was generated. A digital camera was then used to capture the reflectance images from `Gala' and `Jonagold' apples, immediately after they had been subjected to two levels of bruising by impact tests. A conventional three-phase demodulation (TPD) scheme was applied to the acquired images for obtaining the planar (direct component or DC) and amplitude (alternating component or AC) images. Bruises were identified in the amplitude images with varying image contrasts, depending on spatial frequency. The bruise visibility was further enhanced through post-processing of the amplitude images. Furthermore, three spiral phase transform (SPT)-based demodulation methods, using a single image, two images, and two phase-shifted images, were proposed for obtaining AC images. Results showed that the demodulation methods greatly enhanced the contrast and spatial resolution of the AC images, making it feasible to detect fresh bruises that otherwise could not be detected by conventional imaging with planar or uniform illumination. The effectiveness of image enhancement, however, varied with spatial frequency. Both the 2-image and 2-phase SPT methods achieved performance similar to that of conventional TPD. The SIRI technique has demonstrated the capability of detecting fresh bruises in apples, and it has the potential as a new imaging modality for enhancing food quality and safety detection.
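The conventional TPD step mentioned above follows the standard three-phase relations for captures taken at phase offsets 0, 2π/3, and 4π/3. The sketch below applies them to a synthetic surface with a weak local change in modulation; the spatial frequency and modulation values are illustrative, not the paper's.

    import numpy as np

    def three_phase_demodulation(i1, i2, i3):
        # Planar (DC) and amplitude (AC) images from three phase-shifted captures.
        dc = (i1 + i2 + i3) / 3.0
        ac = (np.sqrt(2.0) / 3.0) * np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
        return dc, ac

    x = np.linspace(0.0, 1.0, 256)
    xx, yy = np.meshgrid(x, x)
    bruise = 0.3 * np.exp(-((xx - 0.5) ** 2 + (yy - 0.5) ** 2) / 0.01)   # local modulation change
    captures = [1.0 + (0.5 + bruise) * np.cos(2 * np.pi * 50.0 * xx + k * 2.0 * np.pi / 3.0)
                for k in range(3)]
    dc_img, ac_img = three_phase_demodulation(*captures)   # the "bruise" shows up in ac_img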
Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano
2015-07-01
In the field of orthodontic planning, the creation of a complete digital dental model to simulate and predict treatments is of utmost importance. Nowadays, orthodontists use panoramic radiographs (PAN) and dental crown representations obtained by optical scanning. However, these data do not contain any 3D information regarding tooth root geometries. A reliable orthodontic treatment should instead take into account entire geometrical models of dental shapes in order to better predict tooth movements. This paper presents a methodology to create complete 3D patient dental anatomies by combining digital mouth models and panoramic radiographs. The modeling process is based on using crown surfaces, reconstructed by optical scanning, and root geometries, obtained by adapting anatomical CAD templates over patient specific information extracted from radiographic data. The radiographic process is virtually replicated on crown digital geometries through the Discrete Radon Transform (DRT). The resulting virtual PAN image is used to integrate the actual radiographic data and the digital mouth model. This procedure provides the root references on the 3D digital crown models, which guide a shape adjustment of the dental CAD templates. The entire geometrical models are finally created by merging dental crowns, captured by optical scanning, and root geometries, obtained from the CAD templates. Copyright © 2015 Elsevier Ltd. All rights reserved.
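The virtual-PAN idea rests on computing ray sums of the crown geometry, which a discrete Radon transform provides. The sketch below projects a toy occupancy map over a fan of angles; the actual panoramic trajectory, voxelization, and registration against the real radiograph are not modeled here.

    import numpy as np
    from skimage.transform import radon

    crown_slice = np.zeros((128, 128))
    crown_slice[50:80, 55:75] = 1.0                         # simplistic crown occupancy map

    angles = np.linspace(0.0, 180.0, 60, endpoint=False)    # projection directions in degrees
    sinogram = radon(crown_slice, theta=angles, circle=False)
    # Each sinogram column is the set of ray sums for one direction; matching such a
    # virtual projection against the real PAN provides the root reference described above.
    virtual_projection = sinogram[:, 30]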
Review of image processing fundamentals
NASA Technical Reports Server (NTRS)
Billingsley, F. C.
1985-01-01
Image processing through convolution, transform coding, spatial frequency alterations, sampling, and interpolation are considered. It is postulated that convolution in one domain (real or frequency) is equivalent to multiplication in the other (frequency or real), and that the relative amplitudes of the Fourier components must be retained to reproduce any waveshape. It is suggested that all digital systems may be considered equivalent, with a frequency content approximately at the Nyquist limit, and with a Gaussian frequency response. An optimized cubic version of the interpolation continuum image is derived as a set of cubic splines. Pixel replication has been employed to enlarge the visible area of digital samples; however, suitable elimination of the extraneous high frequencies involved in the visible edges, by defocusing, is necessary to allow the underlying object represented by the data values to be seen.
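The convolution-multiplication equivalence stated above is easy to verify numerically. The sketch below compares a circular convolution computed directly against one computed by multiplying FFTs; the signal and kernel are arbitrary.

    import numpy as np

    rng = np.random.default_rng(5)
    signal = rng.normal(size=64)
    kernel = np.zeros(64)
    kernel[:5] = [1.0, 4.0, 6.0, 4.0, 1.0]       # small blur kernel
    kernel /= kernel.sum()

    # Frequency domain: multiply the transforms, then invert.
    conv_fft = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)))

    # Real domain: direct circular convolution.
    conv_direct = np.zeros(64)
    for k in range(64):
        for n in range(64):
            conv_direct[k] += signal[n] * kernel[(k - n) % 64]

    print(np.allclose(conv_fft, conv_direct))     # True: the two domains agree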
Color Imaging management in film processing
NASA Astrophysics Data System (ADS)
Tremeau, Alain; Konik, Hubert; Colantoni, Philippe
2003-12-01
The latest research projects in the laboratory LIGIV concern capture, processing, archiving and display of color images considering the trichromatic nature of the Human Visual System (HVS). Among these projects one addresses digital cinematographic film sequences of high resolution and dynamic range. This project aims to optimize the use of content for the post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimise the consumption of video content on a device of the user's choice, independent of the nature of the equipment that captured the content. Optimising consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses the content-based adaptation of image display. The main focus is on Regions of Interest (ROI) operations, based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and ensure the conditions of display even if the display device or display media changes. This requires firstly the definition of a reference color space and the definition of bi-directional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. To match the displayed image to the intended appearance, all kinds of production metadata (camera specification, camera colour primaries, lighting conditions) should be associated with the film material. Metadata and content together build rich content. The author is assumed to specify conditions as known from digital graphics arts. To control image pre-processing and image post-processing, these specifications should be contained in the film's metadata. The specifications are related to ICC profiles but additionally need to consider mesopic viewing conditions.
Holmström, Oscar; Linder, Nina; Ngasala, Billy; Mårtensson, Andreas; Linder, Ewert; Lundin, Mikael; Moilanen, Hannu; Suutala, Antti; Diwan, Vinod; Lundin, Johan
2017-06-01
Background: Microscopy remains the gold standard in the diagnosis of neglected tropical diseases. As resource-limited rural areas often lack laboratory equipment and trained personnel, new diagnostic techniques are needed. Low-cost, point-of-care imaging devices show potential in the diagnosis of these diseases. Novel, digital image analysis algorithms can be utilized to automate sample analysis. Objective: Evaluation of the imaging performance of a miniature digital microscopy scanner for the diagnosis of soil-transmitted helminths and Schistosoma haematobium, and training of a deep learning-based image analysis algorithm for automated detection of soil-transmitted helminths in the captured images. Methods: A total of 13 iodine-stained stool samples containing Ascaris lumbricoides, Trichuris trichiura and hookworm eggs and 4 urine samples containing Schistosoma haematobium were digitized using a reference whole slide-scanner and the mobile microscopy scanner. Parasites in the images were identified by visual examination and by analysis with a deep learning-based image analysis algorithm in the stool samples. Results were compared between the digital and visual analysis of the images showing helminth eggs. Results: Parasite identification by visual analysis of digital slides captured with the mobile microscope was feasible for all analyzed parasites. Although the spatial resolution of the reference slide-scanner is higher, the resolution of the mobile microscope is sufficient for reliable identification and classification of all parasites studied. Digital image analysis of stool sample images captured with the mobile microscope showed high sensitivity for detection of all helminths studied (range of sensitivity = 83.3-100%) in the test set (n = 217) of manually labeled helminth eggs. Conclusions: In this proof-of-concept study, the imaging performance of a mobile, digital microscope was sufficient for visual detection of soil-transmitted helminths and Schistosoma haematobium. Furthermore, we show that deep learning-based image analysis can be utilized for the automated detection and classification of helminths in the captured images.
Metric Aspects of Digital Images and Digital Image Processing.
1984-09-01
produced in a reconstructed digital image. Synthesized aerial photographs were formed by processing a combined elevation and orthophoto data base. These...brightness values h1 and Iion b) a line equation whose two parameters are calculated h12, along with tile borderline that separates the two intensity
Yi, Faliu; Moon, Inkyu; Javidi, Bahram
2017-10-01
In this paper, we present two models for automatically extracting red blood cells (RBCs) from RBCs holographic images based on a deep learning fully convolutional neural network (FCN) algorithm. The first model, called FCN-1, only uses the FCN algorithm to carry out RBCs prediction, whereas the second model, called FCN-2, combines the FCN approach with the marker-controlled watershed transform segmentation scheme to achieve RBCs extraction. Both models achieve good segmentation accuracy. In addition, the second model has much better performance in terms of cell separation than traditional segmentation methods. In the proposed methods, the RBCs phase images are first numerically reconstructed from RBCs holograms recorded with off-axis digital holographic microscopy. Then, some RBCs phase images are manually segmented and used as training data to fine-tune the FCN. Finally, each pixel in new input RBCs phase images is predicted into either foreground or background using the trained FCN models. The RBCs prediction result from the first model is the final segmentation result, whereas the result from the second model is used as the internal markers of the marker-controlled transform algorithm for further segmentation. Experimental results show that the given schemes can automatically extract RBCs from RBCs phase images and much better RBCs separation results are obtained when the FCN technique is combined with the marker-controlled watershed segmentation algorithm.
Registration of 2D to 3D joint images using phase-based mutual information
NASA Astrophysics Data System (ADS)
Dalvi, Rupin; Abugharbieh, Rafeef; Pickering, Mark; Scarvell, Jennie; Smith, Paul
2007-03-01
Registration of two-dimensional to three-dimensional orthopaedic medical image data has important applications, particularly in the area of image guided surgery and sports medicine. Fluoroscopy to computed tomography (CT) registration is an important case, wherein digitally reconstructed radiographs derived from the CT data are registered to the fluoroscopy data. Traditional registration metrics such as intensity-based mutual information (MI) typically work well but often suffer from gross misregistration errors when the image to be registered contains a partial view of the anatomy visible in the target image. Phase-based MI provides a robust alternative similarity measure which, in addition to possessing the general robustness and noise immunity that MI provides, also employs local phase information in the registration process, which makes it less susceptible to the aforementioned errors. In this paper, we propose using the complex wavelet transform for computing image phase information and incorporating that into a phase-based MI measure for image registration. Tests on a CT volume and 6 fluoroscopy images of the knee are presented. The femur and the tibia in the CT volume were individually registered to the fluoroscopy images using intensity-based MI, gradient-based MI and phase-based MI. Errors in the coordinates of fiducials present in the bone structures were used to assess the accuracy of the different registration schemes. Quantitative results demonstrate that the performance of intensity-based MI was the worst. Gradient-based MI performed slightly better, while phase-based MI results were the best, consistently producing the lowest errors.
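The intensity-based baseline is a histogram estimate of mutual information; the phase-based variant described above feeds local phase maps (from a complex wavelet transform) into the same measure instead of raw intensities. The sketch below shows the generic histogram-based MI on two synthetic images; the bin count and the surrounding optimization loop are assumptions.

    import numpy as np

    def mutual_information(img_a, img_b, bins=32):
        # Histogram-based mutual information between two equally sized images.
        hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = hist_2d / hist_2d.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nonzero = pxy > 0
        return np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero]))

    rng = np.random.default_rng(6)
    fixed = rng.normal(size=(64, 64))
    moving = fixed + 0.1 * rng.normal(size=(64, 64))     # a well-aligned pair scores high MI
    print(mutual_information(fixed, moving))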
Registration of ophthalmic images using control points
NASA Astrophysics Data System (ADS)
Heneghan, Conor; Maguire, Paul
2003-03-01
A method for registering pairs of digital ophthalmic images of the retina is presented using anatomical features as control points present in both images. The anatomical features chosen are blood vessel crossings and bifurcations. These control points are identified by a combination of local contrast enhancement, and morphological processing. In general, the matching between control points is unknown, however, so an automated algorithm is used to determine the matching pairs of control points in the two images as follows. Using two control points from each image, rigid global transform (RGT) coefficients are calculated for all possible combinations of control point pairs, and the set of RGT coefficients is identified. Once control point pairs are established, registration of two images can be achieved by using linear regression to optimize an RGT, bilinear or second order polynomial global transform. An example of cross-modal image registration using an optical image and a fluorescein angiogram of an eye is presented to illustrate the technique.
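Once candidate control-point pairs are matched, the rigid global transform can be recovered by a standard least-squares (Procrustes/Kabsch) fit, as sketched below on synthetic vessel-crossing coordinates. The search over candidate pairings and the bilinear/polynomial refinements described above are not included.

    import numpy as np

    def estimate_rigid_transform(src, dst):
        # Least-squares rotation + translation mapping src points onto dst points (2D Kabsch).
        src_c = src - src.mean(axis=0)
        dst_c = dst - dst.mean(axis=0)
        u, _, vt = np.linalg.svd(src_c.T @ dst_c)
        rotation = (u @ vt).T
        if np.linalg.det(rotation) < 0:        # guard against reflections
            vt[-1, :] *= -1.0
            rotation = (u @ vt).T
        translation = dst.mean(axis=0) - rotation @ src.mean(axis=0)
        return rotation, translation

    src_pts = np.array([[10.0, 12.0], [40.0, 18.0], [25.0, 50.0], [60.0, 44.0]])
    theta = np.deg2rad(7.0)
    true_r = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    dst_pts = src_pts @ true_r.T + np.array([3.0, -2.0])
    r_est, t_est = estimate_rigid_transform(src_pts, dst_pts)   # recovers true_r and the shift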
Raith, Stefan; Vogel, Eric Per; Anees, Naeema; Keul, Christine; Güth, Jan-Frederik; Edelhoff, Daniel; Fischer, Horst
2017-01-01
Chairside manufacturing based on digital image acquisition is gaining increasing importance in dentistry. For the standardized application of these methods, it is paramount to have highly automated digital workflows that can process acquired 3D image data of dental surfaces. Artificial Neural Networks (ANNs) are numerical methods primarily used to mimic the complex networks of neural connections in the natural brain. Our hypothesis is that an ANN can be developed that is capable of classifying dental cusps with sufficient accuracy. This bears enormous potential for an application in chairside manufacturing workflows in the dental field, as it closes the gap between digital acquisition of dental geometries and modern computer-aided manufacturing techniques. Three-dimensional surface scans of dental casts representing natural full dental arches were transformed to range image data. These data were processed using an automated algorithm to detect candidates for tooth cusps according to salient geometrical features. These candidates were classified following common dental terminology and used as training data for a tailored ANN. For the actual cusp feature description, two different approaches were developed and applied to the available data: the first uses the relative location of the detected cusps as input data and the second method directly takes the image information given in the range images. In addition, a combination of both was implemented and investigated. Both approaches showed high performance with correct classifications of 93.3% and 93.5%, respectively, with improvements by the combination shown to be minor. This article presents for the first time a fully automated method for the classification of teeth that could be confirmed to work with sufficient precision to exhibit the potential for its use in clinical practice, which is a prerequisite for automated computer-aided planning of prosthetic treatments with subsequent automated chairside manufacturing. Copyright © 2016 Elsevier Ltd. All rights reserved.
Digital image registration method based upon binary boundary maps
NASA Technical Reports Server (NTRS)
Jayroe, R. R., Jr.; Andrus, J. F.; Campbell, C. W.
1974-01-01
A relatively fast method is presented for matching or registering the digital data of imagery from the same ground scene acquired at different times, or from different multispectral images, sensors, or both. It is assumed that the digital images can be registered by using translations and rotations only, that the images are of the same scale, and that little or no distortion exists between images. It is further assumed that by working with several local areas of the image, the rotational effects in the local areas can be neglected. Thus, by treating the misalignments of local areas as translations, it is possible to determine rotational and translational misalignments for a larger portion of the image containing the local areas. This procedure of determining the misalignment and then registering the data according to the misalignment can be repeated until the desired degree of registration is achieved. The method to be presented is based upon the use of binary boundary maps produced from the raw digital imagery rather than the raw digital data.
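Treating a local area's misalignment as a pure translation lets it be estimated by cross-correlating the two binary boundary maps, as sketched below with FFT-based correlation on synthetic maps. Combining several such local shifts to recover the global rotation, and the iteration to convergence, are not shown.

    import numpy as np

    def estimate_translation(boundary_a, boundary_b):
        # FFT-based circular cross-correlation; the peak gives the translation between the maps.
        corr = np.real(np.fft.ifft2(np.fft.fft2(boundary_a) * np.conj(np.fft.fft2(boundary_b))))
        peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=int)
        for i, n in enumerate(corr.shape):     # unwrap shifts larger than half the image
            if peak[i] > n // 2:
                peak[i] -= n
        return peak

    a = np.zeros((64, 64)); a[20:40, 20:40] = 1.0      # boundary map of a local area, image 1
    b = np.roll(a, (-3, 5), axis=(0, 1))               # same area, translated, in image 2
    print(estimate_translation(a, b))                  # (3, -5): rolling b by this aligns it with a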
Saliency-aware food image segmentation for personal dietary assessment using a wearable computer
Chen, Hsin-Chen; Jia, Wenyan; Sun, Xin; Li, Zhaoxin; Li, Yuecheng; Fernstrom, John D.; Burke, Lora E.; Baranowski, Thomas; Sun, Mingui
2015-01-01
Image-based dietary assessment has recently received much attention in the community of obesity research. In this assessment, foods in digital pictures are specified, and their portion sizes (volumes) are estimated. Although manual processing is currently the most utilized method, image processing holds much promise since it may eventually lead to automatic dietary assessment. In this paper we study the problem of segmenting food objects from images. This segmentation is difficult because of various food types, shapes and colors, different decorating patterns on food containers, and occlusions of food and non-food objects. We propose a novel method based on a saliency-aware active contour model (ACM) for automatic food segmentation from images acquired by a wearable camera. An integrated saliency estimation approach based on food location priors and visual attention features is designed to produce a salient map of possible food regions in the input image. Next, a geometric contour primitive is generated and fitted to the salient map by means of multi-resolution optimization with respect to a set of affine and elastic transformation parameters. The food regions are then extracted after contour fitting. Our experiments using 60 food images showed that the proposed method achieved significantly higher accuracy in food segmentation when compared to conventional segmentation methods. PMID:26257473
Electronic voltage and current transformers testing device.
Pan, Feng; Chen, Ruimin; Xiao, Yong; Sun, Weiming
2012-01-01
A method for testing electronic instrument transformers is described, including electronic voltage and current transformers (EVTs, ECTs) with both analog and digital outputs. A testing device prototype is developed. It is based on digital signal processing of the signals that are measured at the secondary outputs of the tested transformer and the reference transformer when the same excitation signal is fed to their primaries. The test that estimates the performance of the prototype has been carried out at the National Centre for High Voltage Measurement and the prototype is approved for testing transformers with precision class up to 0.2 at the industrial frequency (50 Hz or 60 Hz). The device is suitable for on-site testing due to its high accuracy, simple structure and low-cost hardware.
Model-based morphological segmentation and labeling of coronary angiograms.
Haris, K; Efstratiadis, S N; Maglaveras, N; Pappas, C; Gourassas, J; Louridas, G
1999-10-01
A method for extraction and labeling of the coronary arterial tree (CAT) using minimal user supervision in single-view angiograms is proposed. The CAT structural description (skeleton and borders) is produced, along with quantitative information for the artery dimensions and assignment of coded labels, based on a given coronary artery model represented by a graph. The stages of the method are: 1) CAT tracking and detection; 2) artery skeleton and border estimation; 3) feature graph creation; and 4) artery labeling by graph matching. The approximate CAT centerline and borders are extracted by recursive tracking based on circular template analysis. The accurate skeleton and borders of each CAT segment are computed, based on morphological homotopy modification and watershed transform. The approximate centerline and borders are used for constructing the artery segment enclosing area (ASEA), where the defined skeleton and border curves are considered as markers. Using the marked ASEA, an artery gradient image is constructed where all the ASEA pixels (except the skeleton ones) are assigned the gradient magnitude of the original image. The artery gradient image markers are imposed as its unique regional minima by the homotopy modification method, the watershed transform is used for extracting the artery segment borders, and the feature graph is updated. Finally, given the created feature graph and the known model graph, a graph matching algorithm assigns the appropriate labels to the extracted CAT using weighted maximal cliques on the association graph corresponding to the two given graphs. Experimental results using clinical digitized coronary angiograms are presented.
MEMS based digital transform spectrometers
NASA Astrophysics Data System (ADS)
Geller, Yariv; Ramani, Mouli
2005-09-01
Earlier this year, a new breed of spectrometers based on Micro-Electro-Mechanical-System (MEMS) engines was introduced to the commercial market. The use of these engines, combined with transform mathematics, produces powerful spectrometers at unprecedentedly low cost in various spectral regions.
Microcomputer-Based Digital Signal Processing Laboratory Experiments.
ERIC Educational Resources Information Center
Tinari, Jr., Rocco; Rao, S. Sathyanarayan
1985-01-01
Describes a system (Apple II microcomputer interfaced to flexible, custom-designed digital hardware) which can provide: (1) Fast Fourier Transform (FFT) computation on real-time data with a video display of spectrum; (2) frequency synthesis experiments using the inverse FFT; and (3) real-time digital filtering experiments. (JN)
36 CFR 1237.18 - What are the environmental standards for audiovisual records storage?
Code of Federal Regulations, 2014 CFR
2014-07-01
... ISO 18920 (incorporated by reference, see § 1237.3). (2) Color images and acetate-based media. Keep in... color images and the deterioration of acetate-based media. (b) Digital images on magnetic tape. For digital images stored on magnetic tape, keep in an area maintained at a constant temperature range of 62...
36 CFR 1237.18 - What are the environmental standards for audiovisual records storage?
Code of Federal Regulations, 2012 CFR
2012-07-01
... ISO 18920 (incorporated by reference, see § 1237.3). (2) Color images and acetate-based media. Keep in... color images and the deterioration of acetate-based media. (b) Digital images on magnetic tape. For digital images stored on magnetic tape, keep in an area maintained at a constant temperature range of 62...
36 CFR 1237.18 - What are the environmental standards for audiovisual records storage?
Code of Federal Regulations, 2011 CFR
2011-07-01
... ISO 18920 (incorporated by reference, see § 1237.3). (2) Color images and acetate-based media. Keep in... color images and the deterioration of acetate-based media. (b) Digital images on magnetic tape. For digital images stored on magnetic tape, keep in an area maintained at a constant temperature range of 62...
36 CFR § 1237.18 - What are the environmental standards for audiovisual records storage?
Code of Federal Regulations, 2013 CFR
2013-07-01
... ISO 18920 (incorporated by reference, see § 1237.3). (2) Color images and acetate-based media. Keep in... color images and the deterioration of acetate-based media. (b) Digital images on magnetic tape. For digital images stored on magnetic tape, keep in an area maintained at a constant temperature range of 62...
NASA Astrophysics Data System (ADS)
Selwyn, Ebenezer Juliet; Florinabel, D. Jemi
2018-04-01
Compound image segmentation plays a vital role in the compression of computer screen images. Computer screen images are images which mix textual, graphical, and pictorial content. In this paper, we present a comparison of two transform-based block classification methods for compound images, based on metrics such as speed of classification, precision, and recall rate. Block-based classification approaches normally divide the compound image into fixed-size, non-overlapping blocks. A frequency transform such as the Discrete Cosine Transform (DCT) or Discrete Wavelet Transform (DWT) is then applied over each block. The mean and standard deviation are computed for each 8 × 8 block and are used as the feature set to classify the compound image into text/graphics and picture/background blocks. The classification accuracy of block-classification-based segmentation techniques is measured by evaluation metrics such as precision and recall rate. Compound images with smooth and complex backgrounds containing text of varying size, colour, and orientation are considered for testing. Experimental evidence shows that the DWT-based segmentation provides an improvement in recall rate and precision rate of approximately 2.3% over DCT-based segmentation, with an increase in block classification time for both smooth and complex background images.
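The feature-extraction step for the DCT variant can be sketched as below: each non-overlapping 8 × 8 block is transformed and summarized by the mean and standard deviation of its coefficients. The classification rule and thresholds of the compared methods are not reproduced.

    import numpy as np
    from scipy.fft import dctn

    def block_dct_features(image, block_size=8):
        # Mean and standard deviation of the 2D DCT of each non-overlapping block.
        h, w = image.shape
        features = []
        for r in range(0, h - block_size + 1, block_size):
            for c in range(0, w - block_size + 1, block_size):
                block = image[r:r + block_size, c:c + block_size].astype(np.float64)
                coeffs = dctn(block, norm='ortho')
                features.append((coeffs.mean(), coeffs.std()))
        return np.array(features)

    screen = np.random.default_rng(7).integers(0, 256, (64, 64)).astype(np.float64)
    feats = block_dct_features(screen)     # one (mean, std) feature pair per 8 x 8 block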
Iterative nonlinear joint transform correlation for the detection of objects in cluttered scenes
NASA Astrophysics Data System (ADS)
Haist, Tobias; Tiziani, Hans J.
1999-03-01
An iterative correlation technique with digital image processing in the feedback loop for the detection of small objects in cluttered scenes is proposed. A scanning aperture is combined with the method in order to improve the immunity against noise and clutter. Multiple reference objects or different views of one object are processed in parallel. We demonstrate the method by detecting a noisy and distorted face in a crowd with a nonlinear joint transform correlator.
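A non-iterative core of the technique can be sketched as below: the reference and the scene are placed side by side, the joint power spectrum is raised to a power (the nonlinearity), and a second transform yields the correlation plane. The feedback loop, scanning aperture, and multi-reference parallelism described above are omitted, and the nonlinearity exponent is an illustrative choice.

    import numpy as np

    def joint_transform_correlation(scene, reference, k=0.5):
        # Nonlinear JTC: side-by-side joint image -> joint power spectrum -> correlation plane.
        h, w = scene.shape
        joint = np.zeros((h, 2 * w))
        joint[:, :w] = reference
        joint[:, w:] = scene
        jps = np.abs(np.fft.fft2(joint)) ** 2
        return np.fft.fftshift(np.abs(np.fft.ifft2(jps ** k)))

    rng = np.random.default_rng(8)
    scene = rng.normal(0.0, 0.1, (64, 64))
    reference = np.zeros((64, 64)); reference[28:36, 28:36] = 1.0
    scene[30:38, 20:28] += 1.0                   # embed a shifted copy of the reference object
    corr_plane = joint_transform_correlation(scene, reference)   # off-axis peaks mark the match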
Digital colour management system for colour parameters reconstruction
NASA Astrophysics Data System (ADS)
Grudzinski, Karol; Lasmanowicz, Piotr; Assis, Lucas M. N.; Pawlicka, Agnieszka; Januszko, Adam
2013-10-01
Digital Colour Management System (DCMS) and its application to a new adaptive camouflage system are presented in this paper. The DCMS is a digital colour rendering method which allows transformation of a real image into a set of colour pixels displayed on a computer monitor. Consequently, it can analyse the colours of the pixels which comprise images of environments such as desert, semi-desert, jungle, farmland, or rocky mountains in order to prepare an adaptive camouflage pattern best suited for the terrain. This system is described in the present work, as well as the use of the subtractive colour mixing method to construct a real-time colour-changing electrochromic window/pixel (ECD) for camouflage purposes. An ECD with the glass/ITO/Prussian Blue(PB)/electrolyte/CeO2-TiO2/ITO/glass configuration was assembled and characterized. The ECD switched between green and yellow upon application of +/-1.5 V, and the colours were controlled by the Digital Colour Management System and described by CIE LAB parameters.
NASA Technical Reports Server (NTRS)
Colvocoresses, A. P. (Principal Investigator)
1980-01-01
Graphics are presented which show HCMM mapped water-surface temperature in Lake Anna, a 13,000 dendritically-shaped lake which provides cooling for a nuclear power plant in Virginia. The HCMM digital data, produced by NASA, were processed by NOAA/NESS into image and line-printer form. A LANDSAT image of the lake illustrates the relationship between MSS band 7 data and the HCMM data as processed by the NASA image processing facility, which transforms the data to the same distortion-free Hotine oblique Mercator projection. Spatial correlation of the two images is relatively simple by either digital or analog means, and the HCMM image has a potential accuracy approaching the 80 m of the original LANDSAT data. While it is difficult to get readings that are not diluted by radiation from cooler adjacent land areas in narrow portions of the lake, the digital data presented on the line-printer display indicate five different temperatures for open-water areas. Where the water surface response was not diluted by land areas, the temperature difference recorded by HCMM corresponds to in situ readings with an rmse on the order of 1 C.
Ballistics projectile image analysis for firearm identification.
Li, Dongguang
2006-10-01
This paper is based upon the observation that, when a bullet is fired, it creates characteristic markings on the cartridge case and projectile. From these markings, over 30 different features can be distinguished, which, in combination, produce a "fingerprint" for a firearm. By analyzing features within such a set of firearm fingerprints, it will be possible to identify not only the type and model of a firearm, but also each and every individual weapon just as effectively as human fingerprint identification. A new analytic system based on the fast Fourier transform for identifying projectile specimens by the line-scan imaging technique is proposed in this paper. This paper develops optical, photonic, and mechanical techniques to map the topography of the surfaces of forensic projectiles for the purpose of identification. Experiments discussed in this paper are performed on images acquired from 16 various weapons. Experimental results show that the proposed system can be used for firearm identification efficiently and precisely through digitizing and analyzing the fired projectiles specimens.
Digital methods of recording color television images on film tape
NASA Astrophysics Data System (ADS)
Krivitskaya, R. Y.; Semenov, V. M.
1985-04-01
Three methods are now available for recording color television images on film tape, either directly or after appropriate signal processing. Conventional recording of images from the screens of three kinescopes with synthetic-crystal face plates is still the most effective for high fidelity. This method has been improved by digital preprocessing of the brightness and color-difference signals. Frame-by-frame storage of these signals in memory in digital form is followed by gamma and aperture correction and electronic correction of crossover distortions in the color layers of the film, with fixing in accordance with specific emulsion procedures. A newer method of recording color television images with line arrays of light-emitting diodes involves dichroic superposing mirrors and a movable scanning mirror. This method allows the use of standard movie cameras, simplifies interlaced-to-linewise conversion and the mechanical equipment, and lengthens exposure time while shortening recording time. The latest, image-transform method requires an audio-video recorder, a memory disk, a digital computer, and a decoder. The nine-step procedure begins with preprocessing of the total color television signal to reduce noise and time errors, followed by frame-frequency conversion and setting of the number of lines. The total signal is then resolved into its brightness and color-difference components, and phase errors and image blurring are also reduced. After extraction of the R, G, B signals and colorimetric matching of the TV camera and film tape, the simultaneous R, G, B signals are converted from interlaced form into sequential triads of color-quotient frames with linewise scanning at triple frequency. The color-quotient signals are recorded with an electron beam on a smoothly moving black-and-white film tape under vacuum. While digital techniques improve signal quality and simplify process control without requiring circuit stabilization, the image processing itself remains analog.
NASA Astrophysics Data System (ADS)
Yadav, Poonam Lata; Singh, Hukum
2018-05-01
To enhance the security of optical image encryption systems and protect them from attackers, this paper proposes a new digital spiral phase mask based on the Fresnel transform. In this cryptosystem, the spiral phase mask (SPM) is a hybrid of a Fresnel zone plate (FZP) and a radial Hilbert mask (RHM), which strengthens the key and enhances security. The proposed scheme uses various structured phase masks, which enlarges the key space and increases the number of parameters, making it difficult for attackers to determine the exact key needed to recover the original image. Different keys are used for encryption and decryption, making the system still more secure. The strength of the proposed cryptosystem has been analyzed by simulation in MATLAB 7.9.0 (R2008a); mean square error (MSE) and peak signal-to-noise ratio (PSNR) are calculated for the proposed algorithm. The experimental results highlight the effectiveness and robustness of the proposed cryptosystem and show that it is secure for use.
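For intuition, the sketch below builds one plausible hybrid spiral phase mask as the product of an FZP phase term and a spiral (radial Hilbert) phase term; the grid size, pixel pitch, wavelength, focal length and topological charge are illustrative assumptions, and the Fresnel-domain encryption itself is not shown.

```python
# Hedged sketch of a hybrid spiral phase mask (FZP phase x RHM phase).
# All numerical parameters below are assumptions, not values from the paper.
import numpy as np

def hybrid_spiral_phase_mask(n=512, wavelength=632.8e-9, focal=0.2,
                             charge=4, pixel=10e-6):
    y, x = np.indices((n, n)) - n / 2.0
    x, y = x * pixel, y * pixel
    r2 = x ** 2 + y ** 2
    theta = np.arctan2(y, x)
    fzp = np.exp(1j * np.pi * r2 / (wavelength * focal))   # Fresnel zone plate phase
    rhm = np.exp(1j * charge * theta)                       # radial Hilbert (spiral) phase
    return fzp * rhm                                         # hybrid SPM used as a key
```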
Segmentation for the enhancement of microcalcifications in digital mammograms.
Milosevic, Marina; Jankovic, Dragan; Peulic, Aleksandar
2014-01-01
Microcalcification clusters appear as groups of small, bright particles with arbitrary shapes in mammographic images. They are the earliest sign of breast carcinomas, and their detection is key to improving breast cancer prognosis. However, because of their low contrast and noise-like properties, microcalcifications are difficult to detect. This work is devoted to developing a system for the detection of microcalcifications in digital mammograms. After removing noise from the mammogram using the discrete wavelet transform (DWT), we first select the region of interest (ROI) in order to demarcate the breast region of the mammogram. Segmenting the region of interest is one of the most important stages of the mammogram processing procedure. The proposed segmentation method is based on filtering with the Sobel operator, which identifies the significant pixels belonging to microcalcification edges. Microcalcifications are then detected by increasing the contrast of the Sobel-filtered images. To confirm the effectiveness of this microcalcification segmentation method, the support vector machine (SVM) and k-nearest neighbor (k-NN) algorithms are employed for the classification task using a cross-validation technique.
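A minimal sketch of the two enhancement steps named in the abstract (wavelet denoising followed by Sobel edge enhancement) is given below; the wavelet choice, decomposition level and threshold factor are assumptions, and the ROI selection and SVM/k-NN classification stages are omitted.

```python
# Minimal sketch of wavelet denoising + Sobel gradient enhancement.
# Parameters (db4, level=2, k) are illustrative assumptions.
import numpy as np
import pywt
from scipy import ndimage

def enhance_microcalcifications(mammogram, wavelet="db4", level=2, k=3.0):
    # 1) Soft-threshold the wavelet detail coefficients to suppress noise.
    coeffs = pywt.wavedec2(mammogram.astype(float), wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745      # robust noise estimate
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(d, k * sigma, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    smooth = pywt.waverec2(denoised, wavelet)
    # 2) Sobel gradient magnitude highlights small, bright, high-contrast spots.
    gx = ndimage.sobel(smooth, axis=0)
    gy = ndimage.sobel(smooth, axis=1)
    return np.hypot(gx, gy)
```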
Photo-editing in Orthodontics: How Much is Too Much?.
Kapoor, Priyanka
2015-01-01
Digital photography and radiology are the mainstay of orthodontic records, but current image-editing software programs have led to an increase in instances of digital forgery and scientific misconduct. In the present study, digital image data of orthodontic study casts and photographs were altered using software such as Microsoft Paint, Picasa 3.1, and Adobe Photoshop. Based on ethical guidelines on digital image manipulation, cropping or moderate intensity adjustments applied to the entire image were considered permissible, while cloning or color adjustments were deemed unethical.
Impacts of Digital Imaging versus Drawing on Student Learning in Undergraduate Biodiversity Labs
ERIC Educational Resources Information Center
Basey, John M.; Maines, Anastasia P.; Francis, Clinton D.; Melbourne, Brett
2014-01-01
We examined the effects of documenting observations with digital imaging versus hand drawing in inquiry-based college biodiversity labs. Plant biodiversity labs were divided into two treatments, digital imaging (N = 221) and hand drawing (N = 238). Graduate-student teaching assistants (N = 24) taught one class in each treatment. Assessments…
NASA Astrophysics Data System (ADS)
Al-Mansoori, Saeed; Kunhu, Alavi
2013-10-01
This paper proposes a blind multi-watermarking scheme based on designing two back-to-back encoders. The first encoder embeds a robust watermark into remote sensing imagery using a discrete cosine transform (DCT) approach; such a watermark is widely used to protect the copyright of an image. The second encoder embeds a fragile watermark using the SHA-1 hash function; its purpose is to prove the authenticity of the image (i.e., to detect tampering). The proposed technique was developed in response to new challenges in the piracy of remote sensing imagery, which have led researchers to look for different means of securing the ownership of satellite imagery and preventing the illegal use of these resources. Therefore, the Emirates Institution for Advanced Science and Technology (EIAST) proposed utilizing an existing data security concept by embedding a digital signature, or "watermark", into DubaiSat-1 satellite imagery. In this study, DubaiSat-1 images with 2.5 meter resolution are used as the cover and a colored EIAST logo is used as the watermark. To evaluate the robustness of the proposed technique, several attacks are applied, such as JPEG compression, rotation and synchronization attacks. Furthermore, tampering attacks are applied to prove image authenticity.
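The sketch below illustrates the two ingredients the abstract names, in very reduced form: a robust bit forced into a mid-frequency block-DCT coefficient, and a fragile SHA-1 digest of the pixel data. The block size, coefficient position and embedding strength are assumptions, not the paper's parameters.

```python
# Illustrative dual-watermarking sketch: robust DCT-domain bit + fragile SHA-1 digest.
import hashlib
import numpy as np
from scipy.fftpack import dct, idct

def dct2(b):
    return dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(b):
    return idct(idct(b, axis=0, norm="ortho"), axis=1, norm="ortho")

def embed_robust_bit(block8x8, bit, strength=12.0):
    """Force one mid-frequency DCT coefficient to encode a watermark bit."""
    c = dct2(block8x8.astype(float))
    c[3, 4] = strength if bit else -strength    # assumed coefficient and strength
    return idct2(c)

def fragile_signature(image_u8):
    """SHA-1 digest over the pixel data; any tampering changes the digest."""
    return hashlib.sha1(image_u8.tobytes()).hexdigest()
```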
Diabetic retinopathy grading by digital curvelet transform.
Hajeb Mohammad Alipour, Shirin; Rabbani, Hossein; Akhlaghi, Mohammad Reza
2012-01-01
One of the major complications of diabetes is diabetic retinopathy. As manual analysis and diagnosis of large numbers of images are time consuming, automatic detection and grading of diabetic retinopathy are desirable. In this paper, we use fundus fluorescein angiography and color fundus images simultaneously, extract six features employing the curvelet transform, and feed them to a support vector machine in order to determine the severity stage of diabetic retinopathy. These features are the area of blood vessels; the area and regularity of the foveal avascular zone and the number of micro-aneurysms therein; the total number of micro-aneurysms; and the area of exudates. To extract exudates and vessels, we modify the curvelet coefficients of the color fundus images and angiograms, respectively. The end points of the extracted vessels within a predefined region of interest around the optic disk are connected together to segment the foveal avascular zone. To extract micro-aneurysms from the angiogram, the extracted vessels are first subtracted from the original image; after removing the detected background with morphological operators and enhancing small bright pixels, the micro-aneurysms are detected. Seventy patients were involved in this study to classify diabetic retinopathy into three groups, namely (1) no diabetic retinopathy, (2) mild/moderate nonproliferative diabetic retinopathy, and (3) severe nonproliferative/proliferative diabetic retinopathy, and our simulations show that the proposed system achieves a sensitivity and specificity of 100% for grading.
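A hedged sketch of the final classification stage only: an SVM grading severity from six per-patient features, assuming the curvelet-based feature extraction has already been done. The data below is synthetic and the SVM settings are illustrative, not those of the paper.

```python
# Sketch of the SVM grading stage with placeholder (synthetic) feature vectors.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((70, 6))        # six features per patient (placeholder values)
y = rng.integers(0, 3, 70)     # grades: 0 none, 1 mild/moderate NPDR, 2 severe NPDR/PDR

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy
```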
ART AND SCIENCE OF IMAGE MAPS.
Kidwell, Richard D.; McSweeney, Joseph A.
1985-01-01
The visual image of reflected light is influenced by the complex interplay of human color discrimination, spatial relationships, surface texture, and the spectral purity of light, dyes, and pigments. Scientific theories of image processing may not always achieve acceptable results, because the variety of factors involved, some of them psychological, are in part unpredictable. The tonal relationships that affect digital image processing, and the transfer functions used to transform the continuous-tone source image into a lithographic image, may be interpreted to gain insight into where art and science fuse in the production process. The application of art and science in image map production at the U.S. Geological Survey is illustrated and discussed.
Karimi, Davood; Ward, Rabab K
2016-10-01
Image models are central to all image processing tasks. The great advancements in digital image processing would not have been possible without powerful models which, themselves, have evolved over time. In the past decade, "patch-based" models have emerged as one of the most effective models for natural images, and patch-based methods have outperformed competing methods in many image processing tasks. These developments have come at a time when the greater availability of powerful computational resources and growing concerns over the health risks of ionizing radiation encourage research on image processing algorithms for computed tomography (CT). The goal of this paper is to explain the principles of patch-based methods and to review some of their recent applications in CT. We first review the central concepts in patch-based image processing and explain some of the state-of-the-art algorithms, with a focus on aspects that are most relevant to CT. We then review some recent applications of patch-based methods in CT. Patch-based methods have already transformed the field of image processing, leading to state-of-the-art results in many applications. More recently, several studies have proposed patch-based algorithms for various image processing tasks in CT, from denoising and restoration to iterative reconstruction. Although these studies have reported good results, the true potential of patch-based methods for CT has not yet been fully appreciated. Patch-based methods can play a central role in image reconstruction and processing for CT, and they have the potential to lead to substantial improvements in the current state of the art.
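To make the "patch-based" idea concrete, the sketch below denoises an image by extracting all overlapping patches, lightly shrinking each toward its own mean, and averaging the patches back into place. This is a deliberately crude, illustrative stand-in for the far more sophisticated patch models the review covers.

```python
# Crude illustration of patch-based processing: process overlapping patches,
# then average them back into an image. Not a method from the reviewed paper.
import numpy as np
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

def patch_average_denoise(image, patch_size=(7, 7)):
    patches = extract_patches_2d(image.astype(float), patch_size)
    means = patches.reshape(len(patches), -1).mean(axis=1)[:, None, None]
    smoothed = 0.5 * patches + 0.5 * means       # shrink each patch toward its mean
    return reconstruct_from_patches_2d(smoothed, image.shape)
```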
Tensor Fukunaga-Koontz transform for small target detection in infrared images
NASA Astrophysics Data System (ADS)
Liu, Ruiming; Wang, Jingzhuo; Yang, Huizhen; Gong, Chenglong; Zhou, Yuanshen; Liu, Lipeng; Zhang, Zhen; Shen, Shuli
2016-09-01
Infrared small-target detection plays a crucial role in warning and tracking systems. Some novel methods based on pattern recognition have attracted much attention from researchers. However, these classical methods must reshape images into high-dimensional vectors; moreover, vectorizing breaks the natural structure and correlations in the image data. Tensor-based image representation treats images as matrices and can retain this natural structure and correlation information, so tensor algorithms achieve better classification performance than their vector counterparts. The Fukunaga-Koontz transform is one such classification algorithm; it is a vector-based method and shares the disadvantages of all vector algorithms. In this paper, we first extend the Fukunaga-Koontz transform to its tensor version, the tensor Fukunaga-Koontz transform. We then design a target detection method based on the tensor Fukunaga-Koontz transform and use it to detect small targets in infrared images. Experimental results, compared in terms of signal-to-clutter ratio, signal-to-clutter gain, and background suppression factor, validate the advantage of target detection based on the tensor Fukunaga-Koontz transform over detection based on the original Fukunaga-Koontz transform.
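For reference, the classical (vector) Fukunaga-Koontz transform that the paper generalizes can be sketched as follows: whiten the sum of the two class covariances, after which the whitened class covariances share eigenvectors and their eigenvalues sum to one, so axes with eigenvalues near 1 favor one class and near 0 the other. This is the standard vector FKT, not the paper's tensor extension.

```python
# Classical Fukunaga-Koontz transform for two classes (target vs. clutter).
import numpy as np

def fukunaga_koontz(X1, X2, eps=1e-10):
    """X1, X2: (n_samples, n_features) arrays for the two classes."""
    S1 = np.cov(X1, rowvar=False)
    S2 = np.cov(X2, rowvar=False)
    D, V = np.linalg.eigh(S1 + S2)                    # S1 + S2 = V diag(D) V^T
    P = V @ np.diag(1.0 / np.sqrt(D + eps)) @ V.T     # whitening operator
    lam, U = np.linalg.eigh(P @ S1 @ P.T)             # shared eigenbasis
    return P, U, lam                                   # lam near 1: class-1-dominant axes
```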
Faster processing of multiple spatially-heterodyned direct to digital holograms
Hanson, Gregory R.; Bingham, Philip R.
2006-10-03
Systems and methods are described for faster processing of multiple spatially-heterodyned direct to digital holograms. A method of obtaining multiple spatially-heterodyned holograms includes: digitally recording a first spatially-heterodyned hologram including spatial heterodyne fringes for Fourier analysis; digitally recording a second spatially-heterodyned hologram including spatial heterodyne fringes for Fourier analysis; Fourier analyzing the recorded first spatially-heterodyned hologram by shifting a first original origin of the recorded first spatially-heterodyned hologram including spatial heterodyne fringes in Fourier space to sit on top of a spatial-heterodyne carrier frequency defined as a first angle between a first reference beam and a first object beam; applying a first digital filter to cut off signals around the first original origin and performing an inverse Fourier transform on the result; Fourier analyzing the recorded second spatially-heterodyned hologram by shifting a second original origin of the recorded second spatially-heterodyned hologram including spatial heterodyne fringes in Fourier space to sit on top of a spatial-heterodyne carrier frequency defined as a second angle between a second reference beam and a second object beam; and applying a second digital filter to cut off signals around the second original origin and performing an inverse Fourier transform on the result, wherein digitally recording the first spatially-heterodyned hologram is completed before digitally recording the second spatially-heterodyned hologram and a single digital image includes both the first spatially-heterodyned hologram and the second spatially-heterodyned hologram.
Faster processing of multiple spatially-heterodyned direct to digital holograms
Hanson, Gregory R [Clinton, TN; Bingham, Philip R [Knoxville, TN
2008-09-09
Systems and methods are described for faster processing of multiple spatially-heterodyned direct to digital holograms. A method of obtaining multiple spatially-heterodyned holograms includes: digitally recording a first spatially-heterodyned hologram including spatial heterodyne fringes for Fourier analysis; digitally recording a second spatially-heterodyned hologram including spatial heterodyne fringes for Fourier analysis; Fourier analyzing the recorded first spatially-heterodyned hologram by shifting a first original origin of the recorded first spatially-heterodyned hologram including spatial heterodyne fringes in Fourier space to sit on top of a spatial-heterodyne carrier frequency defined as a first angle between a first reference beam and a first object beam; applying a first digital filter to cut off signals around the first original origin and performing an inverse Fourier transform on the result; Fourier analyzing the recorded second spatially-heterodyned hologram by shifting a second original origin of the recorded second spatially-heterodyned hologram including spatial heterodyne fringes in Fourier space to sit on top of a spatial-heterodyne carrier frequency defined as a second angle between a second reference beam and a second object beam; and applying a second digital filter to cut off signals around the second original origin and performing an inverse Fourier transform on the result, wherein digitally recording the first spatially-heterodyned hologram is completed before digitally recording the second spatially-heterodyned hologram and a single digital image includes both the first spatially-heterodyned hologram and the second spatially-heterodyned hologram.
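A hedged numerical sketch of the Fourier processing described in these two patent abstracts: shift the spectrum so the spatial-heterodyne carrier sits at the origin, filter around it, and inverse transform to recover the complex object wave. The carrier location and filter radius below are assumptions, not values from the patents.

```python
# Sketch of carrier-frequency demodulation of a spatially-heterodyned hologram.
import numpy as np

def demodulate_hologram(hologram, carrier_xy, radius_frac=0.1):
    """carrier_xy: (row, col) offset of the carrier peak from the spectrum center."""
    F = np.fft.fftshift(np.fft.fft2(hologram))
    # Shift so the carrier peak moves to the spectrum's center (the new origin).
    F = np.roll(F, shift=(-carrier_xy[0], -carrier_xy[1]), axis=(0, 1))
    ny, nx = F.shape
    y, x = np.indices((ny, nx))
    mask = (x - nx // 2) ** 2 + (y - ny // 2) ** 2 < (radius_frac * min(nx, ny)) ** 2
    # Filter around the new origin and inverse transform to the complex object wave.
    return np.fft.ifft2(np.fft.ifftshift(F * mask))
```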
Hall, Gunnsteinn; Liang, Wenxuan; Li, Xingde
2017-10-01
Collagen fiber alignment derived from second harmonic generation (SHG) microscopy images can be important for disease diagnostics. Image processing algorithms are needed to robustly quantify the alignment in images with high sensitivity and reliability. Fourier transform (FT) magnitude, 2D power spectrum, and image autocorrelation have previously been used to extract fiber information from images by assuming a certain mathematical model (e.g. Gaussian distribution of the fiber-related parameters) and fitting. The fitting process is slow and fails to converge when the data is not Gaussian. Herein we present an efficient constant-time deterministic algorithm which characterizes the symmetricity of the FT magnitude image in terms of a single parameter, named the fiber alignment anisotropy R ranging from 0 (randomized fibers) to 1 (perfect alignment). This represents an important improvement of the technology and may bring us one step closer to utilizing the technology for various applications in real time. In addition, we present a digital image phantom-based framework for characterizing and validating the algorithm, as well as assessing the robustness of the algorithm against different perturbations.
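The abstract does not give the exact definition of the anisotropy parameter R, so the sketch below shows one plausible construction under that assumption: compute the power spectrum's second-moment tensor and take R = (l1 - l2)/(l1 + l2), which is near 0 for isotropic (randomized) fiber spectra and near 1 for strongly aligned ones.

```python
# One plausible (assumed) realization of a fiber alignment anisotropy in [0, 1].
import numpy as np

def alignment_anisotropy(image):
    P = np.abs(np.fft.fftshift(np.fft.fft2(image - image.mean()))) ** 2
    ny, nx = P.shape
    y, x = np.indices((ny, nx))
    x, y = x - nx / 2.0, y - ny / 2.0
    w = P / P.sum()                                   # spectrum as a weight field
    mxx, myy, mxy = (w * x * x).sum(), (w * y * y).sum(), (w * x * y).sum()
    l2, l1 = np.linalg.eigvalsh(np.array([[mxx, mxy], [mxy, myy]]))  # ascending
    return (l1 - l2) / (l1 + l2 + 1e-12)
```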
Bidgood, W D; Bray, B; Brown, N; Mori, A R; Spackman, K A; Golichowski, A; Jones, R H; Korman, L; Dove, B; Hildebrand, L; Berg, M
1999-01-01
To support clinically relevant indexing of biomedical images and image-related information based on the attributes of image acquisition procedures and the judgments (observations) expressed by observers in the process of image interpretation. The authors introduce the notion of "image acquisition context," the set of attributes that describe image acquisition procedures, and present a standards-based strategy for utilizing the attributes of image acquisition context as indexing and retrieval keys for digital image libraries. The authors' indexing strategy is based on an interdependent message/terminology architecture that combines the Digital Imaging and Communication in Medicine (DICOM) standard, the SNOMED (Systematized Nomenclature of Human and Veterinary Medicine) vocabulary, and the SNOMED DICOM microglossary. The SNOMED DICOM microglossary provides context-dependent mapping of terminology to DICOM data elements. The capability of embedding standard coded descriptors in DICOM image headers and image-interpretation reports improves the potential for selective retrieval of image-related information. This favorably affects information management in digital libraries.
A color-communication scheme for digital imagery
Acosta, Alex
1987-01-01
Color pictures generated from digital images are frequently used by geologists, foresters, range managers, and others. These color products are preferred over black-and-white pictures because the human eye is more sensitive to color differences than to various shades of gray. Color discrimination is a function of perception, and therefore the colors in these color composites are generally described subjectively, which can lead to ambiguous color communication. Numerous color-coordinate systems are available that quantitatively relate digital triplets representing amounts of red, green, and blue to the parameters of hue, saturation, and intensity perceived by the eye. Most of these systems implement a complex transformation of the primary colors to a color space that is hard to visualize, making it difficult to relate digital triplets to perception parameters. This paper presents a color-communication scheme that relates colors on a color triangle to corresponding values of "hue" (H), "saturation" (S), and chromaticity coordinates (x, y, z). The scheme simplifies the relation between red, green, and blue (RGB) digital triplets and the color generated by these triplets. Some examples of the use of the color-communication scheme in digital image processing are presented.
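As a small worked example of the triplet-to-chromaticity relationship mentioned above, the standard chromaticity coordinates of an RGB triplet are x = R/(R+G+B), y = G/(R+G+B), z = B/(R+G+B); the hue/saturation mapping used in the paper's color triangle is not reproduced here.

```python
# Chromaticity coordinates of an RGB digital triplet.
def chromaticity(r, g, b):
    total = float(r + g + b)
    if total == 0:
        return 0.0, 0.0, 0.0      # black: chromaticity undefined, return origin
    return r / total, g / total, b / total

# Example: any neutral gray triplet maps to the triangle's center (1/3, 1/3, 1/3).
print(chromaticity(128, 128, 128))
```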
Wavelet compression techniques for hyperspectral data
NASA Technical Reports Server (NTRS)
Evans, Bruce; Ringer, Brian; Yeates, Mathew
1994-01-01
Hyperspectral sensors are electro-optic sensors which typically operate in visible and near infrared bands. Their characteristic property is the ability to resolve a relatively large number (i.e., tens to hundreds) of contiguous spectral bands to produce a detailed profile of the electromagnetic spectrum. In contrast, multispectral sensors measure relatively few non-contiguous spectral bands. Like multispectral sensors, hyperspectral sensors are often also imaging sensors, measuring spectra over an array of spatial resolution cells. The data produced may thus be viewed as a three dimensional array of samples in which two dimensions correspond to spatial position and the third to wavelength. Because they multiply the already large storage/transmission bandwidth requirements of conventional digital images, hyperspectral sensors generate formidable torrents of data. Their fine spectral resolution typically results in high redundancy in the spectral dimension, so that hyperspectral data sets are excellent candidates for compression. Although there have been a number of studies of compression algorithms for multispectral data, we are not aware of any published results for hyperspectral data. Three algorithms for hyperspectral data compression are compared. They were selected as representatives of three major approaches for extending conventional lossy image compression techniques to hyperspectral data. The simplest approach treats the data as an ensemble of images and compresses each image independently, ignoring the correlation between spectral bands. The second approach transforms the data to decorrelate the spectral bands, and then compresses the transformed data as a set of independent images. The third approach directly generalizes two-dimensional transform coding by applying a three-dimensional transform as part of the usual transform-quantize-entropy code procedure. The algorithms studied all use the discrete wavelet transform. In the first two cases, a wavelet transform coder was used for the two-dimensional compression. The third case used a three dimensional extension of this same algorithm.
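A minimal sketch of the first (simplest) of the three approaches described above, band-independent 2-D wavelet transform coding with uniform quantization; the wavelet, decomposition level and quantization step are assumptions, and the entropy-coding stage is omitted.

```python
# Band-by-band 2-D wavelet transform coding sketch for a hyperspectral cube.
import numpy as np
import pywt

def compress_band(band, wavelet="bior4.4", level=3, step=8.0):
    coeffs = pywt.wavedec2(band.astype(float), wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    q = np.round(arr / step)                           # uniform quantization
    rec = pywt.waverec2(
        pywt.array_to_coeffs(q * step, slices, output_format="wavedec2"), wavelet)
    return q, rec                                      # quantized coeffs + reconstruction

def compress_cube(cube, **kw):
    """Apply compress_band independently to each band of a (rows, cols, bands) cube."""
    return [compress_band(cube[:, :, b], **kw) for b in range(cube.shape[2])]
```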
A multitemporal (1979-2009) land-use/land-cover dataset of the binational Santa Cruz Watershed
2011-01-01
Trends derived from multitemporal land-cover data can be used to make informed land management decisions and to help managers model future change scenarios. We developed a multitemporal land-use/land-cover dataset for the binational Santa Cruz watershed of southern Arizona, United States, and northern Sonora, Mexico by creating a series of land-cover maps at decadal intervals (1979, 1989, 1999, and 2009) using Landsat Multispectral Scanner and Thematic Mapper data and a classification and regression tree classifier. The classification model exploited phenological changes of different land-cover spectral signatures through the use of biseasonal imagery collected during the (dry) early summer and (wet) late summer following rains from the North American monsoon. Landsat images were corrected to remove atmospheric influences, and the data were converted from raw digital numbers to surface reflectance values. The 14-class land-cover classification scheme is based on the 2001 National Land Cover Database with a focus on "Developed" land-use classes and riverine "Forest" and "Wetlands" cover classes required for specific watershed models. The classification procedure included the creation of several image-derived and topographic variables, including digital elevation model derivatives, image variance, and multitemporal Kauth-Thomas transformations. The accuracy of the land-cover maps was assessed using a random-stratified sampling design, reference aerial photography, and digital imagery. This showed high accuracy results, with kappa values (the statistical measure of agreement between map and reference data) ranging from 0.80 to 0.85.
Digital document imaging systems: An overview and guide
NASA Technical Reports Server (NTRS)
1990-01-01
This is an aid to NASA managers in planning the selection of a Digital Document Imaging System (DDIS) as a possible solution for document information processing and storage. Intended to serve as a manager's guide, this document contains basic information on digital imaging systems, technology, equipment standards, issues of interoperability and interconnectivity, and issues related to selecting appropriate imaging equipment based upon well defined needs.
Subaperture correlation based digital adaptive optics for full field optical coherence tomography.
Kumar, Abhishek; Drexler, Wolfgang; Leitgeb, Rainer A
2013-05-06
This paper proposes a sub-aperture correlation based numerical phase correction method for interferometric full-field imaging systems, provided the complex object field information can be extracted. The method corrects for wavefront aberration at the pupil/Fourier transform plane without the need for adaptive optics, spatial light modulators (SLMs), or additional cameras. We show that this method does not require knowledge of any system parameters. In the simulation study, we consider a full-field swept-source OCT (FF SSOCT) system to show the working principle of the algorithm. Experimental results are presented for a technical and a biological sample to demonstrate proof of principle.
Interactive Therapeutic Multi-sensory Environment for Cerebral Palsy People
NASA Astrophysics Data System (ADS)
Mauri, Cesar; Solanas, Agusti; Granollers, Toni; Bagés, Joan; García, Mabel
The Interactive Therapeutic Sensory Environment (ITSE) research project offers new opportunities for stimulation, interaction and interactive creation for people with moderate and severe mental and physical disabilities. Based mainly on computer vision techniques, the ITSE project allows users' gestures to be captured and transformed into images, sounds and vibrations. Currently, at the APPC, we are working on a prototype capable of generating sounds based on the users' motion and of digitally processing the users' vocal sounds. Tests with impaired users show that ITSE promotes participation, engagement and play. In this paper, we briefly describe the ITSE system, the experimental methodology, the preliminary results and some future goals.
NASA Astrophysics Data System (ADS)
Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan
2016-06-01
Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, because of the relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to, e.g., the red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular, we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, and also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available.
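The sketch below illustrates one common wavelet-domain fusion pattern in the spirit of DCFM, not the authors' exact algorithm: the fused luminance takes its low-frequency (approximation) coefficients from the color-calibrated image and its detail coefficients from the higher-resolution holographic image, with registration, resampling to a common grid, and color-space handling assumed to be done upstream.

```python
# Hedged sketch of wavelet-domain luminance fusion (assumes co-registered,
# equally sized inputs); the wavelet and level are illustrative choices.
import pywt

def fuse_luminance(holo_gray, color_luma, wavelet="db2", level=3):
    ch = pywt.wavedec2(holo_gray.astype(float), wavelet, level=level)
    cc = pywt.wavedec2(color_luma.astype(float), wavelet, level=level)
    fused = [cc[0]] + ch[1:]          # low-pass from color image, detail from hologram
    return pywt.waverec2(fused, wavelet)
```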