JPEG vs. JPEG 2000: an objective comparison of image encoding quality
NASA Astrophysics Data System (ADS)
Ebrahimi, Farzad; Chamik, Matthieu; Winkler, Stefan
2004-11-01
This paper describes an objective comparison of the image quality of different encoders. Our approach is based on estimating the visual impact of compression artifacts on perceived quality. We present a tool that measures these artifacts in an image and uses them to compute a prediction of the Mean Opinion Score (MOS) obtained in subjective experiments. We show that the MOS predictions by our proposed tool are a better indicator of perceived image quality than PSNR, especially for highly compressed images. For the encoder comparison, we compress a set of 29 test images with two JPEG encoders (Adobe Photoshop and IrfanView) and three JPEG2000 encoders (JasPer, Kakadu, and IrfanView) at various compression ratios. We compute blockiness, blur, and MOS predictions as well as PSNR of the compressed images. Our results show that the IrfanView JPEG encoder produces consistently better images than the Adobe Photoshop JPEG encoder at the same data rate. The differences between the JPEG2000 encoders in our test are less pronounced; JasPer comes out as the best codec, closely followed by IrfanView and Kakadu. Comparing the JPEG- and JPEG2000-encoding quality of IrfanView, we find that JPEG has a slight edge at low compression ratios, while JPEG2000 is the clear winner at medium and high compression ratios.
Reversible Watermarking Surviving JPEG Compression.
Zain, J; Clarke, M
2005-01-01
This paper discusses the properties of watermarking medical images. We also discuss the possibility of such images being compressed by JPEG and give an overview of JPEG compression. We then propose a watermarking scheme that is reversible and robust to JPEG compression. The purpose is to verify the integrity and authenticity of medical images. We used 800x600x8-bit ultrasound (US) images in our experiment. The SHA-256 hash of the image is embedded in the least significant bits (LSB) of an 8x8 block in the Region of Non-Interest (RONI). The image is then compressed using JPEG and decompressed using Photoshop 6.0. If the image has not been altered, the watermark extracted will match the hash (SHA-256) of the original image. The results show that the embedded watermark is robust to JPEG compression up to image quality 60 (~91% compressed).
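A minimal Python/NumPy sketch of the embedding step described above. The abstract only says the hash goes into the LSBs of one 8x8 RONI block; the use of four low bit planes per pixel (64 pixels x 4 bits = 256 bits) and the convention of hashing the image with those bit planes zeroed (so the hash can be recomputed at verification time) are assumptions made to keep the example self-consistent, not details from the paper.

```python
import hashlib
import numpy as np

def embed_hash(image: np.ndarray, block_row: int, block_col: int) -> np.ndarray:
    """Embed SHA-256 of an 8-bit grayscale image into the 4 LSBs of one 8x8 block.

    Assumptions (not from the paper): the block lies in the region of
    non-interest, and the hash is computed with that block's 4 LSBs zeroed
    so the same hash can be recomputed after extraction.
    """
    img = image.copy()                      # expected dtype: uint8
    r, c = block_row, block_col
    block = img[r:r+8, c:c+8]               # view into img
    block &= 0xF0                           # clear the 4 LSBs that will carry the hash
    digest = hashlib.sha256(img.tobytes()).digest()      # 32 bytes = 256 bits
    bytes_ = np.frombuffer(digest, dtype=np.uint8)
    payload = np.empty(64, dtype=np.uint8)
    payload[0::2] = bytes_ >> 4             # high nibble of each hash byte
    payload[1::2] = bytes_ & 0x0F           # low nibble of each hash byte
    img[r:r+8, c:c+8] = block | payload.reshape(8, 8)
    return img
```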
A generalized Benford's law for JPEG coefficients and its applications in image forensics
NASA Astrophysics Data System (ADS)
Fu, Dongdong; Shi, Yun Q.; Su, Wei
2007-02-01
In this paper, a novel statistical model based on Benford's law for the probability distributions of the first digits of the block-DCT and quantized JPEG coefficients is presented. A parametric logarithmic law, i.e., the generalized Benford's law, is formulated. Furthermore, some potential applications of this model in image forensics are discussed in this paper, which include the detection of JPEG compression for images in bitmap format, the estimation of the JPEG compression Q-factor for JPEG-compressed bitmap images, and the detection of double-compressed JPEG images. The results of our extensive experiments demonstrate the effectiveness of the proposed statistical model.
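A sketch of the first-digit statistics the model is built on: compute the leading-digit histogram of the block-DCT AC coefficients of an image and fit a parametric logarithmic law of the form N*log10(1 + 1/(s + d^q)). The fitting form and initial parameters below are assumptions standing in for the paper's exact formulation, and the coefficients here are unquantized block-DCT values (the paper also studies quantized JPEG coefficients).

```python
import numpy as np
from scipy.fftpack import dct
from scipy.optimize import curve_fit

def first_digit_hist(coeffs: np.ndarray) -> np.ndarray:
    """Empirical probability of leading digits 1..9 of the nonzero coefficients."""
    mags = np.abs(coeffs[coeffs != 0]).astype(float)
    first = (mags / 10 ** np.floor(np.log10(mags))).astype(int)
    return np.array([(first == d).mean() for d in range(1, 10)])

def generalized_benford(d, N, s, q):
    """Parametric logarithmic law (functional form assumed for illustration)."""
    return N * np.log10(1.0 + 1.0 / (s + d ** q))

def fit_benford(image_gray: np.ndarray):
    """First-digit distribution of 8x8 block-DCT AC coefficients, plus a model fit."""
    h, w = [x - x % 8 for x in image_gray.shape]
    blocks = image_gray[:h, :w].astype(float).reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2)
    coeffs = dct(dct(blocks, axis=-1, norm='ortho'), axis=-2, norm='ortho')
    ac = coeffs.reshape(-1, 64)[:, 1:]            # drop each block's DC term
    p_obs = first_digit_hist(ac)
    digits = np.arange(1, 10)
    (N, s, q), _ = curve_fit(generalized_benford, digits, p_obs,
                             p0=(1.5, 0.0, 1.6), maxfev=10000)
    return p_obs, (N, s, q)
```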
Evaluation of image compression for computer-aided diagnosis of breast tumors in 3D sonography
NASA Astrophysics Data System (ADS)
Chen, We-Min; Huang, Yu-Len; Tao, Chi-Chuan; Chen, Dar-Ren; Moon, Woo-Kyung
2006-03-01
Medical imaging examinations form the basis for physicians diagnosing diseases, as evidenced by the increasing use of digital medical images in picture archiving and communications systems (PACS). However, with enlarged medical image databases and rapid growth of patients' case reports, PACS requires image compression to accelerate the image transmission rate and conserve disk space, diminishing implementation costs. For this purpose, JPEG and JPEG2000 have been accepted as legal formats for digital imaging and communications in medicine (DICOM). A high compression ratio is considered useful for medical imagery. Therefore, this study evaluates the compression ratios of the JPEG and JPEG2000 standards for computer-aided diagnosis (CAD) of breast tumors in 3-D medical ultrasound (US) images. The 3-D US data sets are compressed at various compression ratios using the two efficacious image compression standards. The reconstructed data sets are then diagnosed by a previously proposed CAD system. The diagnostic accuracy is measured based on receiver operating characteristic (ROC) analysis; the ROC curves are used to compare the diagnostic performance of two or more sets of reconstructed images. The analysis enables a comparison of the compression ratios achievable with JPEG and JPEG2000 for 3-D US images. Results of this study indicate the feasible bit rates when using JPEG and JPEG2000 for 3-D breast US images.
JPEG and wavelet compression of ophthalmic images
NASA Astrophysics Data System (ADS)
Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.
1999-05-01
This study was designed to determine the degree and methods of digital image compression that produce ophthalmic images of sufficient quality for transmission and diagnosis. The photographs of 15 subjects, which included eyes with normal, subtle and distinct pathologies, were digitized to produce 1.54 MB images and compressed to five different levels using JPEG and wavelet compression. Image quality was assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images; (ii) semi-subjectively, by assessing the visibility of blood vessels; and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, wavelet-compressed images produced less RMS error than JPEG-compressed images. Blood vessel branching could be observed to a greater extent after wavelet compression than after JPEG compression for a given image size. Overall, it was shown that images had to be compressed to below 2.5 percent for JPEG and 1.7 percent for wavelet compression before fine detail was lost, or before image quality was too poor to make a reliable diagnosis.
Estimating JPEG2000 compression for image forensics using Benford's Law
NASA Astrophysics Data System (ADS)
Qadir, Ghulam; Zhao, Xi; Ho, Anthony T. S.
2010-05-01
With the tremendous growth and usage of digital images nowadays, the integrity and authenticity of digital content is becoming increasingly important, and a growing concern to many government and commercial sectors. Image forensics, based on a passive statistical analysis of the image data only, is an alternative approach to the active embedding of data associated with digital watermarking. Benford's Law was first introduced to analyse the probability distribution of the 1st digit (1-9) of natural data, and has since been applied to accounting forensics for detecting fraudulent income tax returns [9]. More recently, Benford's Law has been further applied to image processing and image forensics. For example, Fu et al. [5] proposed a generalised Benford's Law technique for estimating the Quality Factor (QF) of JPEG compressed images. In our previous work, we proposed a framework incorporating the generalised Benford's Law to accurately detect unknown JPEG compression rates of watermarked images in semi-fragile watermarking schemes. JPEG2000 (a relatively new image compression standard) offers higher compression rates and better image quality as compared to JPEG compression. In this paper, we propose the novel use of Benford's Law for estimating JPEG2000 compression for image forensics applications. By analysing the DWT coefficients and JPEG2000 compression on 1338 test images, the initial results indicate that the 1st digit probability of DWT coefficients follows Benford's Law. The unknown JPEG2000 compression rate of an image can also be derived and verified with the help of a divergence factor, which shows the deviation between the observed probabilities and Benford's Law. Based on 1338 test images, the mean divergence for DWT coefficients is approximately 0.0016, which is lower than that for DCT coefficients at 0.0034. However, the mean divergence for JPEG2000 images at a compression rate of 0.1 is 0.0108, which is much higher than for uncompressed DWT coefficients. This result clearly indicates the presence of compression in the image. Moreover, we compare the results of 1st digit probability and divergence among JPEG2000 compression rates of 0.1, 0.3, 0.5 and 0.9. The initial results show that the expected difference among them could be used for further analysis to estimate the unknown JPEG2000 compression rates.
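A minimal sketch of the divergence-factor idea using PyWavelets: compute the first-digit distribution of one level of DWT detail coefficients and measure its deviation from the ideal Benford distribution. The squared-relative-error form of the divergence and the 'db1' wavelet are assumptions for illustration; the paper does not spell out its exact formula, and JPEG2000 uses different filter banks.

```python
import numpy as np
import pywt

def benford_divergence(image_gray: np.ndarray, wavelet: str = 'db1') -> float:
    """Deviation of the first-digit law of DWT detail coefficients from Benford's Law."""
    benford = np.log10(1.0 + 1.0 / np.arange(1, 10))        # ideal probabilities
    _, (cH, cV, cD) = pywt.dwt2(image_gray.astype(float), wavelet)
    coeffs = np.concatenate([cH.ravel(), cV.ravel(), cD.ravel()])
    mags = np.abs(coeffs[coeffs != 0])
    first = (mags / 10 ** np.floor(np.log10(mags))).astype(int)
    p_obs = np.array([(first == d).mean() for d in range(1, 10)])
    # Divergence form assumed here: chi-square-like sum of squared relative deviations.
    return float(np.sum((p_obs - benford) ** 2 / benford))
```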
Workflow opportunities using JPEG 2000
NASA Astrophysics Data System (ADS)
Foshee, Scott
2002-11-01
JPEG 2000 is a new image compression standard from ISO/IEC JTC1 SC29 WG1, the Joint Photographic Experts Group (JPEG) committee. Better thought of as a sibling of JPEG rather than a descendant, the JPEG 2000 standard offers wavelet-based compression as well as companion file formats and related standardized technology. This paper examines the JPEG 2000 standard for features in four specific areas (compression, file formats, client-server, and conformance/compliance) that enable image workflows.
A block-based JPEG-LS compression technique with lossless region of interest
NASA Astrophysics Data System (ADS)
Deng, Lihua; Huang, Zhenghua; Yao, Shoukui
2018-03-01
The JPEG-LS lossless compression algorithm is used in many specialized applications that emphasize the attainment of high fidelity, owing to its lower complexity and better compression ratios than the lossless JPEG standard. However, it cannot prevent error diffusion because of the context dependence of the algorithm, and it has a lower compression ratio than lossy compression. In this paper, we first divide the image into two parts: ROI regions and non-ROI regions. We then adopt a block-based image compression technique to decrease the range of error diffusion. We provide JPEG-LS lossless compression for the image blocks which include the whole or part of the region of interest (ROI), and JPEG-LS near-lossless compression for the image blocks contained in the non-ROI (unimportant) regions. Finally, a set of experiments is designed to assess the effectiveness of the proposed compression method.
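A sketch of the per-block dispatch logic: blocks overlapping the ROI are coded losslessly (near = 0), the rest near-losslessly. The block size, the near value, and the `jls_encode` hook are illustrative assumptions; an actual JPEG-LS binding (e.g. a CharLS wrapper) would have to be plugged in, since the standard library has none.

```python
import numpy as np

def compress_with_roi(image: np.ndarray, roi_mask: np.ndarray,
                      block: int = 64, near_lossy: int = 2):
    """Split the image into blocks and choose a JPEG-LS 'near' parameter per block."""
    def jls_encode(tile: np.ndarray, near: int) -> bytes:
        # Hypothetical hook: replace with a real JPEG-LS encoder binding.
        raise NotImplementedError("plug in a JPEG-LS codec here")

    h, w = image.shape
    encoded = []
    for r in range(0, h, block):
        for c in range(0, w, block):
            tile = image[r:r+block, c:c+block]
            touches_roi = roi_mask[r:r+block, c:c+block].any()
            near = 0 if touches_roi else near_lossy       # 0 = lossless
            encoded.append(((r, c), near, jls_encode(tile, near)))
    return encoded
```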
Oblivious image watermarking combined with JPEG compression
NASA Astrophysics Data System (ADS)
Chen, Qing; Maitre, Henri; Pesquet-Popescu, Beatrice
2003-06-01
For most data hiding applications, the main source of concern is the effect of lossy compression on the hidden information. The objective of watermarking is fundamentally in conflict with lossy compression: the latter attempts to remove all irrelevant and redundant information from a signal, while the former uses the irrelevant information to mask the presence of hidden data. Compression of a watermarked image can significantly affect the retrieval of the watermark. Past investigations of this problem have relied heavily on simulation. It is desirable not only to measure the effect of compression on the embedded watermark, but also to control the embedding process so that it survives lossy compression. In this paper, we focus on oblivious watermarking by assuming that the watermarked image inevitably undergoes JPEG compression prior to watermark extraction. We propose an image-adaptive watermarking scheme where the watermarking algorithm and the JPEG compression standard are jointly considered. Watermark embedding takes into consideration the JPEG compression quality factor and exploits an HVS model to adaptively attain a proper trade-off among transparency, hiding data rate, and robustness to JPEG compression. The scheme estimates the image-dependent payload under JPEG compression to achieve the watermarking bit allocation in a determinate way, while maintaining consistent watermark retrieval performance.
The effect of JPEG compression on automated detection of microaneurysms in retinal images
NASA Astrophysics Data System (ADS)
Cree, M. J.; Jelinek, H. F.
2008-02-01
As JPEG compression at source is ubiquitous in retinal imaging, and the block artefacts introduced are known to be of similar size to microaneurysms (an important indicator of diabetic retinopathy), it is prudent to evaluate the effect of JPEG compression on automated detection of retinal pathology. Retinal images were acquired at high quality and then compressed to various lower qualities. An automated microaneurysm detector was run on the retinal images at the various JPEG compression qualities, and the ability to predict the presence of diabetic retinopathy based on the detected presence of microaneurysms was evaluated with receiver operating characteristic (ROC) methodology. A negative effect of JPEG compression on automated detection was observed even at levels of compression sometimes used in retinal eye-screening programmes, and these results may have important clinical implications for deciding on acceptable levels of compression for a fully automated eye-screening programme.
High-quality JPEG compression history detection for fake uncompressed images
NASA Astrophysics Data System (ADS)
Zhang, Rong; Wang, Rang-Ding; Guo, Li-Jun; Jiang, Bao-Chuan
2017-05-01
Authenticity is one of the most important evaluation factors of images for photography competitions or journalism. Unusual compression history of an image often implies the illicit intent of its author. Our work aims at distinguishing real uncompressed images from fake uncompressed images that are saved in uncompressed formats but have been previously compressed. To detect the potential image JPEG compression, we analyze the JPEG compression artifacts based on the tetrolet covering, which corresponds to the local image geometrical structure. Since the compression can alter the structure information, the tetrolet covering indexes may be changed if a compression is performed on the test image. Such changes can provide valuable clues about the image compression history. To be specific, the test image is first compressed with different quality factors to generate a set of temporary images. Then, the test image is compared with each temporary image block-by-block to investigate whether the tetrolet covering index of each 4×4 block is different between them. The percentages of the changed tetrolet covering indexes corresponding to the quality factors (from low to high) are computed and used to form the p-curve, the local minimum of which may indicate the potential compression. Our experimental results demonstrate the advantage of our method to detect JPEG compressions of high quality, even the highest quality factors such as 98, 99, or 100 of the standard JPEG compression, from uncompressed-format images. At the same time, our detection algorithm can accurately identify the corresponding compression quality factor.
A comparison of the fractal and JPEG algorithms
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Shahshahani, M.
1991-01-01
A proprietary fractal image compression algorithm and the Joint Photographic Experts Group (JPEG) industry standard algorithm for image compression are compared. In every case, the JPEG algorithm was superior to the fractal method at a given compression ratio according to a root mean square criterion and a peak signal to noise criterion.
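The two criteria used in this comparison, root-mean-square error and peak signal-to-noise ratio, can be computed as below for 8-bit images; the peak value of 255 is the usual assumption for such data.

```python
import numpy as np

def rmse(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Root-mean-square error between two images of the same shape."""
    diff = original.astype(float) - reconstructed.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB (infinite for identical images)."""
    e = rmse(original, reconstructed)
    return float('inf') if e == 0 else 20.0 * np.log10(peak / e)
```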
JPEG2000 and dissemination of cultural heritage over the Internet.
Politou, Eugenia A; Pavlidis, George P; Chamzas, Christodoulos
2004-03-01
By applying the latest technologies in image compression for managing the storage of massive image data within cultural heritage databases, and by exploiting the universality of the Internet, we are now able not only to effectively digitize, record and preserve, but also to promote the dissemination of cultural heritage. In this work we present an application of the latest image compression standard, JPEG2000, to managing and browsing image databases, focusing on the image transmission aspect rather than database management and indexing. We combine the technologies of JPEG2000 image compression with client-server socket connections and a client browser plug-in, to provide an all-in-one package for remote browsing of JPEG2000 compressed image databases, suitable for the effective dissemination of cultural heritage.
NASA Astrophysics Data System (ADS)
Clunie, David A.
2000-05-01
Proprietary compression schemes have a cost and risk associated with their support, end of life and interoperability. Standards reduce this cost and risk. The new JPEG-LS process (ISO/IEC 14495-1) and the lossless mode of the proposed JPEG 2000 scheme (ISO/IEC CD 15444-1), new standard schemes that may be incorporated into DICOM, are evaluated here. Three thousand, six hundred and seventy-nine (3,679) single frame grayscale images from multiple anatomical regions, modalities and vendors were tested. For all images combined, JPEG-LS and JPEG 2000 performed equally well (3.81), almost as well as CALIC (3.91), a complex predictive scheme used only as a benchmark. Both out-performed existing JPEG (3.04 with optimum predictor choice per image, 2.79 for previous pixel prediction as most commonly used in DICOM). Text dictionary schemes performed poorly (gzip 2.38), as did image dictionary schemes without statistical modeling (PNG 2.76). Proprietary transform based schemes did not perform as well as JPEG-LS or JPEG 2000 (S+P Arithmetic 3.4, CREW 3.56). Stratified by modality, JPEG-LS compressed CT images (4.00), MR (3.59), NM (5.98), US (3.4), IO (2.66), CR (3.64), DX (2.43), and MG (2.62). CALIC always achieved the highest compression except for one modality for which JPEG-LS did better (MG digital vendor A: JPEG-LS 4.02, CALIC 4.01). JPEG-LS outperformed existing JPEG for all modalities. The use of standard schemes can achieve state-of-the-art performance regardless of modality. JPEG-LS is simple, easy to implement, consumes less memory, and is faster than JPEG 2000, though JPEG 2000 will offer lossy and progressive transmission. It is recommended that DICOM add transfer syntaxes for both JPEG-LS and JPEG 2000.
The impact of skull bone intensity on the quality of compressed CT neuro images
NASA Astrophysics Data System (ADS)
Kowalik-Urbaniak, Ilona; Vrscay, Edward R.; Wang, Zhou; Cavaro-Menard, Christine; Koff, David; Wallace, Bill; Obara, Boguslaw
2012-02-01
The increasing use of technologies such as CT and MRI, along with a continuing improvement in their resolution, has contributed to the explosive growth of digital image data being generated. Medical communities around the world have recognized the need for efficient storage, transmission and display of medical images. For example, the Canadian Association of Radiologists (CAR) has recommended compression ratios for various modalities and anatomical regions to be employed by lossy JPEG and JPEG2000 compression in order to preserve diagnostic quality. Here we investigate the effects of the sharp skull edges present in CT neuro images on JPEG and JPEG2000 lossy compression. We conjecture that this atypical effect is caused by the sharp edges between the skull bone and the background regions as well as between the skull bone and the interior regions. These strong edges create large wavelet coefficients that consume an unnecessarily large number of bits in JPEG2000 compression because of its bitplane coding scheme, and thus result in reduced quality at the interior region, which contains most diagnostic information in the image. To validate the conjecture, we investigate a segmentation based compression algorithm based on simple thresholding and morphological operators. As expected, quality is improved in terms of PSNR as well as the structural similarity (SSIM) image quality measure, and its multiscale (MS-SSIM) and information-weighted (IW-SSIM) versions. This study not only supports our conjecture, but also provides a solution to improve the performance of JPEG and JPEG2000 compression for specific types of CT images.
A modified JPEG-LS lossless compression method for remote sensing images
NASA Astrophysics Data System (ADS)
Deng, Lihua; Huang, Zhenghua
2015-12-01
Like many variable-length source coders, JPEG-LS is highly vulnerable to the channel errors which occur in the transmission of remote sensing images. Error diffusion is one of the important factors which affect its robustness. A common method of improving the error resilience of JPEG-LS is dividing the image into many strips or blocks and then coding each of them independently, but this method reduces the coding efficiency. In this paper, a block-based JPEG-LS lossless compression method with an adaptive parameter is proposed. In the modified scheme, the threshold parameter RESET is adapted to the image, and the compression efficiency is close to that of the conventional JPEG-LS.
Detection of shifted double JPEG compression by an adaptive DCT coefficient model
NASA Astrophysics Data System (ADS)
Wang, Shi-Lin; Liew, Alan Wee-Chung; Li, Sheng-Hong; Zhang, Yu-Jin; Li, Jian-Hua
2014-12-01
In many JPEG image splicing forgeries, the tampered image patch has been JPEG-compressed twice with different block alignments. Such phenomenon in JPEG image forgeries is called the shifted double JPEG (SDJPEG) compression effect. Detection of SDJPEG-compressed patches could help in detecting and locating the tampered region. However, the current SDJPEG detection methods do not provide satisfactory results especially when the tampered region is small. In this paper, we propose a new SDJPEG detection method based on an adaptive discrete cosine transform (DCT) coefficient model. DCT coefficient distributions for SDJPEG and non-SDJPEG patches have been analyzed and a discriminative feature has been proposed to perform the two-class classification. An adaptive approach is employed to select the most discriminative DCT modes for SDJPEG detection. The experimental results show that the proposed approach can achieve much better results compared with some existing approaches in SDJPEG patch detection especially when the patch size is small.
Image quality (IQ) guided multispectral image compression
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik
2016-05-01
Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the structural similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve an expected compression. Our scenario consists of 3 steps. The first step is to compress a set of images of interest by varying the parameters and to compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus compression parameter over a number of compressed images. The third step is to compress the given image with the specified IQ using the selected compression method (JPEG, JPEG2000, BPG, or TIFF) according to the regressed models. If the IQ is specified by a compression ratio (e.g., 100), we select the compression method with the highest IQ (SSIM or PSNR); if the IQ is specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), we select the compression method with the highest compression ratio. Our experiments, performed on thermal (long-wave infrared) images in grayscale, showed very promising results.
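A minimal sketch of the three-step scenario, restricted to JPEG and SSIM: measure SSIM over a grid of JPEG quality settings, fit a simple regression model, and invert it to pick a quality for a target SSIM. The cubic polynomial model, the quality grid, and the use of Pillow and scikit-image are assumptions for illustration, not the paper's exact regression.

```python
import io
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity as ssim

def ssim_vs_quality(img: Image.Image, qualities=range(10, 96, 5)):
    """Steps 1-2: measure SSIM at each JPEG quality and fit a cubic regression model."""
    gray = img.convert('L')
    ref = np.asarray(gray, dtype=float)
    scores = []
    for q in qualities:
        buf = io.BytesIO()
        gray.save(buf, format='JPEG', quality=q)
        rec = np.asarray(Image.open(io.BytesIO(buf.getvalue())), dtype=float)
        scores.append(ssim(ref, rec, data_range=255))
    return np.polyfit(list(qualities), scores, deg=3)

def quality_for_target_ssim(model, target: float, qualities=range(10, 96)) -> int:
    """Step 3: choose the smallest quality whose predicted SSIM reaches the target."""
    predicted = np.polyval(model, list(qualities))
    ok = [q for q, s in zip(qualities, predicted) if s >= target]
    return min(ok) if ok else max(qualities)
```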
McCord, Layne K; Scarfe, William C; Naylor, Rachel H; Scheetz, James P; Silveira, Anibal; Gillespie, Kevin R
2007-05-01
The objectives of this study were to compare the effect of JPEG 2000 compression of hand-wrist radiographs on observers' qualitative assessment of image quality and to compare this with a software-derived quantitative image quality index. Fifteen hand-wrist radiographs were digitized and saved as TIFF and JPEG 2000 images at 4 levels of compression (20:1, 40:1, 60:1, and 80:1). The images, including rereads, were viewed by 13 orthodontic residents who rated image quality on a scale of 1 to 5. A quantitative analysis was also performed by using readily available software based on the human visual system (Image Quality Measure Computer Program, version 6.2, Mitre, Bedford, Mass). ANOVA was used to determine the optimal compression level (P ≤ .05). When we compared subjective indexes, JPEG 2000 compression greater than 60:1 significantly reduced image quality. When we used quantitative indexes, the JPEG 2000 images had lower quality at all compression ratios compared with the original TIFF images. There was excellent correlation (R² > 0.92) between qualitative and quantitative indexes. Image Quality Measure indexes are more sensitive than subjective image quality assessments in quantifying image degradation with compression. There is potential for this software-based quantitative method in determining the optimal compression ratio for any image without the use of subjective raters.
A novel high-frequency encoding algorithm for image compression
NASA Astrophysics Data System (ADS)
Siddeq, Mohammed M.; Rodrigues, Marcos A.
2017-12-01
In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at compression stage and a new concurrent binary search algorithm at decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply DCT to each block; (2) apply a high-frequency minimization method to the AC-coefficients reducing each block by 2/3 resulting in a minimized array; (3) build a look up table of probability data to enable the recovery of the original high frequencies at decompression stage; (4) apply a delta or differential operator to the list of DC-components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At decompression stage, the look up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC-coefficients while the DC-components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG with equivalent quality to JPEG2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG2000.
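A sketch of steps (1)-(2) of the scheme: 8x8 block DCT followed by keeping only the lowest-frequency third of each block's coefficients. Interpreting the "2/3 reduction" as a low-frequency truncation is an assumption made for illustration; the paper's high-frequency minimization, lookup table, and concurrent binary search are not reproduced here.

```python
import numpy as np
from scipy.fftpack import dct

def block_dct_reduce(image: np.ndarray, keep: int = 21):
    """8x8 block DCT, keeping the DC term plus the `keep` lowest-frequency AC
    coefficients per block (roughly one third of 64)."""
    h, w = [x - x % 8 for x in image.shape]
    blocks = image[:h, :w].astype(float).reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2)
    coeffs = dct(dct(blocks, axis=-1, norm='ortho'), axis=-2, norm='ortho')
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing='ij')
    order = np.argsort((u + v).ravel(), kind='stable')     # low frequencies first
    flat = coeffs.reshape(-1, 64)[:, order]
    dc = flat[:, 0]                        # one DC component per block
    ac_kept = flat[:, 1:1 + keep]          # minimized AC array
    return dc, ac_kept
```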
NASA Astrophysics Data System (ADS)
Sablik, Thomas; Velten, Jörg; Kummert, Anton
2015-03-01
A novel system for automatic privacy protection in digital media, based on spectral-domain watermarking and JPEG compression, is described in the present paper. In a first step, private areas are detected; a detection method is presented for this purpose. The implemented method uses Haar cascades to detect faces. Integral images are used to speed up calculations and the detection. Multiple detections of one face are combined. Succeeding steps comprise embedding the data into the image as part of JPEG compression using spectral-domain methods and protecting the area of privacy. The embedding process is integrated into and adapted to JPEG compression. A spread spectrum watermarking method is used to embed the size and position of the private areas into the cover image. Different embedding methods are compared with regard to their robustness. Moreover, the performance of the method on tampered images is presented.
Kim, J H; Kang, S W; Kim, J-r; Chang, Y S
2014-01-01
Purpose: To evaluate the effect of image compression of spectral-domain optical coherence tomography (OCT) images in the examination of eyes with exudative age-related macular degeneration (AMD). Methods: Thirty eyes from 30 patients who were diagnosed with exudative AMD were included in this retrospective observational case series. Horizontal OCT scans centered at the center of the fovea were conducted using spectral-domain OCT. The images were exported to Tag Image File Format (TIFF) and to 100, 75, 50, 25 and 10% quality of Joint Photographic Experts Group (JPEG) format. OCT images were taken before and after intravitreal ranibizumab injections, and after relapse. The prevalence of subretinal and intraretinal fluids was determined. Differences in choroidal thickness between the TIFF and JPEG images were compared with the intra-observer variability. Results: The prevalence of subretinal and intraretinal fluids was comparable regardless of the degree of compression. However, the chorio-scleral interface was not clearly identified in many images with a high degree of compression. In images with 25 and 10% quality of JPEG, the difference in choroidal thickness between the TIFF images and the respective JPEG images was significantly greater than the intra-observer variability of the TIFF images (P=0.029 and P=0.024, respectively). Conclusions: In OCT images of eyes with AMD, 50% quality of the JPEG format would be an optimal degree of compression for efficient data storage and transfer without sacrificing image quality. PMID:24788012
NASA Astrophysics Data System (ADS)
Siddeq, M. M.; Rodrigues, M. A.
2015-09-01
Image compression techniques are widely used on 2D images, 2D video, 3D images and 3D video. There are many types of compression techniques, and among the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC- and AC-Matrix, or low- and high-frequency matrix, respectively; (2) apply a second-level DCT on the DC-Matrix to generate two arrays, namely a nonzero-array and a zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the outputs of the previous steps. A novel decompression algorithm, the Fast-Match-Search algorithm (FMS), is used to reconstruct all high-frequency matrices. The FMS algorithm computes all compressed data probabilities by using a table of data, and then uses a binary search algorithm to find the decompressed data inside the table. Thereafter, all decoded DC-values are combined with the decoded AC-coefficients in one matrix, followed by an inverse two-level DCT and two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, this technique is compared with the JPEG and JPEG2000 algorithms through 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.
Fu, Chi-Yung; Petrich, Loren I.
1997-01-01
An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described.
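The transmit/receive pipeline described in this record can be sketched with Pillow as follows; the decimation factor, JPEG quality, and the unsharp-mask sharpening stand in for the unspecified parameters and the "specific sharpening techniques" of the original.

```python
import io
from PIL import Image, ImageFilter

def transmit_side(img: Image.Image, factor: int = 2, quality: int = 75) -> bytes:
    """Decimate in both dimensions, then apply the predefined compression (JPEG here)."""
    small = img.resize((img.width // factor, img.height // factor), Image.LANCZOS)
    buf = io.BytesIO()
    small.save(buf, format='JPEG', quality=quality)
    return buf.getvalue()

def receive_side(data: bytes, size) -> Image.Image:
    """Decompress, interpolate back to the original (width, height), sharpen edges."""
    small = Image.open(io.BytesIO(data))
    restored = small.resize(size, Image.BICUBIC)
    # Unsharp masking is one possible edge-sharpening choice, assumed for illustration.
    return restored.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))
```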
JPEG2000 still image coding quality.
Chen, Tzong-Jer; Lin, Sheng-Chieh; Lin, You-Chen; Cheng, Ren-Gui; Lin, Li-Hui; Wu, Wei
2013-10-01
This work compares the image quality produced by two popular JPEG2000 programs. Two medical image compression algorithms are both coded using JPEG2000, but they differ regarding the interface, convenience, speed of computation, and their characteristic options influenced by the encoder, quantization, tiling, etc. The differences in image quality and compression ratio are also affected by the modality and the compression algorithm implementation. Do they provide the same quality? The qualities of compressed medical images from two image compression programs named Apollo and JJ2000 were evaluated extensively using objective metrics. These algorithms were applied to three medical image modalities at various compression ratios ranging from 10:1 to 100:1. Following that, the quality of the reconstructed images was evaluated using five objective metrics. The Spearman rank correlation coefficients were measured under every metric for the two programs. We found that JJ2000 and Apollo exhibited indistinguishable image quality for all images evaluated using the above five metrics (r > 0.98, p < 0.001). It can be concluded that the image quality of the JJ2000 and Apollo algorithms is statistically equivalent for medical image compression.
Fingerprint recognition of wavelet-based compressed images by neuro-fuzzy clustering
NASA Astrophysics Data System (ADS)
Liu, Ti C.; Mitra, Sunanda
1996-06-01
Image compression plays a crucial role in many important and diverse applications requiring efficient storage and transmission. This work mainly focuses on a wavelet transform (WT) based compression of fingerprint images and the subsequent classification of the reconstructed images. The algorithm developed involves multiresolution wavelet decomposition, uniform scalar quantization, an entropy and run-length encoder/decoder, and K-means clustering of the invariant moments as fingerprint features. The performance of the WT-based compression algorithm has been compared with the current JPEG image compression standard. Simulation results show that WT outperforms JPEG in the high compression ratio region and that the reconstructed fingerprint image yields proper classification.
Fu, C.Y.; Petrich, L.I.
1997-12-30
An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described. 22 figs.
Estimation of color filter array data from JPEG images for improved demosaicking
NASA Astrophysics Data System (ADS)
Feng, Wei; Reeves, Stanley J.
2006-02-01
On-camera demosaicking algorithms are necessarily simple and therefore do not yield the best possible images. However, off-camera demosaicking algorithms face the additional challenge that the data has been compressed and therefore corrupted by quantization noise. We propose a method to estimate the original color filter array (CFA) data from JPEG-compressed images so that more sophisticated (and better) demosaicking schemes can be applied to get higher-quality images. The JPEG image formation process, including simple demosaicking, color space transformation, chrominance channel decimation and DCT, is modeled as a series of matrix operations followed by quantization on the CFA data, which is estimated by least squares. An iterative method is used to conserve memory and speed computation. Our experiments show that the mean square error (MSE) with respect to the original CFA data is reduced significantly using our algorithm, compared to that of unprocessed JPEG and deblocked JPEG data.
Halftoning processing on a JPEG-compressed image
NASA Astrophysics Data System (ADS)
Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent
2003-12-01
Digital image processing algorithms are usually designed for the raw format, that is, an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; then, the result of the processing application is finally re-compressed for further transfer or storage. The change of data representation is resource-consuming in terms of computation, time and memory usage. In the wide format printing industry, this problem becomes an important issue: e.g. a 1 m2 input color image, scanned at 600 dpi, exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of the halftoning operation by screening, applied to JPEG-compressed images. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation applied to a JPEG-compressed low-quality image is also described; it denoises the image and enhances its contours.
Steganalysis based on JPEG compatibility
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav; Du, Rui
2001-11-01
In this paper, we introduce a new forensic tool that can reliably detect modifications in digital images, such as distortion due to steganography and watermarking, in images that were originally stored in the JPEG format. JPEG compression leaves unique fingerprints and serves as a fragile watermark, enabling us to detect changes as small as modifying the LSB of one randomly chosen pixel. The detection of changes is based on investigating the compatibility of 8x8 blocks of pixels with JPEG compression with a given quantization matrix. The proposed steganalytic method is applicable to virtually all steganographic and watermarking algorithms with the exception of those that embed message bits into the quantized JPEG DCT coefficients. The method can also be used to estimate the size of the secret message and identify the pixels that carry message bits. As a consequence of our steganalysis, we strongly recommend against using images that have been originally stored in the JPEG format as cover images for spatial-domain steganography.
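A rough sketch of the core compatibility test: a spatial-domain 8x8 block is considered compatible with JPEG compression under a given quantization matrix if re-quantizing its DCT and decompressing again reproduces the same pixels. This is a simplification of the published method, which also handles rounding ambiguities; the 128 level shift follows baseline JPEG.

```python
import numpy as np
from scipy.fftpack import dct, idct

def jpeg_compatible(block: np.ndarray, qtable: np.ndarray) -> bool:
    """Check whether one 8x8 grayscale block could have come from a JPEG decoder
    that used quantization matrix `qtable` (simplified round-trip test)."""
    shifted = block.astype(float) - 128.0                       # JPEG level shift
    coeffs = dct(dct(shifted, axis=0, norm='ortho'), axis=1, norm='ortho')
    quantized = np.round(coeffs / qtable)                       # candidate JPEG data
    rec = idct(idct(quantized * qtable, axis=0, norm='ortho'),
               axis=1, norm='ortho') + 128.0
    rec = np.clip(np.round(rec), 0, 255)
    return bool(np.array_equal(rec, block))
```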
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeon, Chang Ho; Kim, Bohyoung; Gu, Bon Seung
2013-10-15
Purpose: To modify the preprocessing technique, which was previously proposed, improving compressibility of computed tomography (CT) images, to cover the diversity of three-dimensional configurations of different body parts, and to evaluate the robustness of the technique in terms of segmentation correctness and increase in reversible compression ratio (CR) for various CT examinations. Methods: This study had institutional review board approval with waiver of informed patient consent. A preprocessing technique was previously proposed to improve the compressibility of CT images by replacing pixel values outside the body region with a constant value, resulting in maximized data redundancy. Since the technique was developed aiming at only chest CT images, the authors modified the segmentation method to cover the diversity of three-dimensional configurations of different body parts. The modified version was evaluated as follows. In 368 randomly selected CT examinations (352 787 images), each image was preprocessed by using the modified preprocessing technique. Radiologists visually confirmed whether the segmented region covers the body region or not. The images with and without the preprocessing were reversibly compressed using Joint Photographic Experts Group (JPEG), JPEG2000 two-dimensional (2D), and JPEG2000 three-dimensional (3D) compressions. The percentage increase in CR per examination (CR_I) was measured. Results: The rate of correct segmentation was 100.0% (95% CI: 99.9%, 100.0%) for all the examinations. The medians of CR_I were 26.1% (95% CI: 24.9%, 27.1%), 40.2% (38.5%, 41.1%), and 34.5% (32.7%, 36.2%) in JPEG, JPEG2000 2D, and JPEG2000 3D, respectively. Conclusions: In various CT examinations, the modified preprocessing technique can increase the CR by 25% or more without concern about degradation of diagnostic information.
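The idea behind the preprocessing, segmenting the body region and replacing everything outside it with a constant so the lossless coder finds more redundancy, can be sketched with simple thresholding and morphology from SciPy. The threshold, the structuring element, and the largest-connected-component rule below are illustrative assumptions, not the modified segmentation described in the study.

```python
import numpy as np
from scipy import ndimage

def fill_background(ct_slice: np.ndarray, threshold: float = -500.0,
                    fill_value: float = -1000.0) -> np.ndarray:
    """Replace pixels outside a crude body mask with a constant value (sketch).

    `ct_slice` is assumed to hold Hounsfield units; parameters are illustrative.
    """
    body = ct_slice > threshold
    body = ndimage.binary_closing(body, structure=np.ones((5, 5)))
    body = ndimage.binary_fill_holes(body)
    labels, n = ndimage.label(body)
    if n > 1:                                    # keep only the largest component
        sizes = ndimage.sum(body, labels, range(1, n + 1))
        body = labels == (np.argmax(sizes) + 1)
    out = ct_slice.copy()
    out[~body] = fill_value                      # constant background maximizes redundancy
    return out
```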
Clunie, David A; Gebow, Dan
2015-01-01
Deidentification of medical images requires attention to both header information as well as the pixel data itself, in which burned-in text may be present. If the pixel data to be deidentified is stored in a compressed form, traditionally it is decompressed, identifying text is redacted, and if necessary, the pixel data are recompressed. Decompression without recompression may result in images of excessive or intractable size. Recompression with an irreversible scheme is undesirable because it may cause additional loss in the diagnostically relevant regions of the images. The irreversible (lossy) JPEG compression scheme works on small blocks of the image independently; hence, redaction can selectively be confined only to those blocks containing identifying text, leaving all other blocks unchanged. An open source implementation of selective redaction and a demonstration of its applicability to multiframe color ultrasound images are described. The process can be applied either to standalone JPEG images or JPEG bit streams encapsulated in other formats, which in the case of medical images, is usually DICOM.
Compression of electromyographic signals using image compression techniques.
Costa, Marcus Vinícius Chaffim; Berger, Pedro de Azevedo; da Rocha, Adson Ferreira; de Carvalho, João Luiz Azevedo; Nascimento, Francisco Assis de Oliveira
2008-01-01
Despite the growing interest in the transmission and storage of electromyographic signals for long periods of time, few studies have addressed the compression of such signals. In this article we present an algorithm for compression of electromyographic signals based on the JPEG2000 coding system. Although the JPEG2000 codec was originally designed for compression of still images, we show that it can also be used to compress EMG signals for both isotonic and isometric contractions. For EMG signals acquired during isometric contractions, the proposed algorithm provided compression factors ranging from 75 to 90%, with an average PRD ranging from 3.75% to 13.7%. For isotonic EMG signals, the algorithm provided compression factors ranging from 75 to 90%, with an average PRD ranging from 3.4% to 7%. The compression results using the JPEG2000 algorithm were compared to those using other algorithms based on the wavelet transform.
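The basic idea, arranging the 1-D EMG signal as a 2-D matrix so a still-image coder can exploit its correlation, is sketched below with NumPy and Pillow's JPEG 2000 writer (which requires a Pillow build with OpenJPEG). The 8-bit requantization, matrix width, and rate setting are assumptions for illustration and are not the preprocessing used in the article.

```python
import io
import numpy as np
from PIL import Image

def compress_emg_jp2(signal: np.ndarray, width: int = 512, rate: float = 8.0) -> bytes:
    """Pack a 1-D EMG signal into a 2-D 8-bit matrix and encode it with JPEG 2000."""
    n = (len(signal) // width) * width          # drop the trailing partial row
    x = signal[:n].astype(float)
    x = (x - x.min()) / (x.max() - x.min())     # crude 8-bit requantization
    img = Image.fromarray((x * 255).astype(np.uint8).reshape(-1, width))
    buf = io.BytesIO()
    img.save(buf, format='JPEG2000', quality_mode='rates', quality_layers=[rate])
    return buf.getvalue()
```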
Interband coding extension of the new lossless JPEG standard
NASA Astrophysics Data System (ADS)
Memon, Nasir D.; Wu, Xiaolin; Sippy, V.; Miller, G.
1997-01-01
Due to the perceived inadequacy of current standards for lossless image compression, the JPEG committee of the International Standards Organization (ISO) has been developing a new standard. A baseline algorithm, called JPEG-LS, has already been completed and is awaiting approval by national bodies. The JPEG-LS baseline algorithm, despite being simple, is surprisingly efficient, and provides compression performance that is within a few percent of the best and more sophisticated techniques reported in the literature. Extensive experimentation performed by the authors seems to indicate that an overall improvement of more than 10 percent in compression performance will be difficult to obtain even at the cost of great complexity, at least not with traditional approaches to lossless image compression. However, if we allow inter-band decorrelation and modeling in the baseline algorithm, nearly 30 percent improvement in compression gains for specific images in the test set becomes possible with a modest computational cost. In this paper we propose and investigate a few techniques for exploiting inter-band correlations in multi-band images. These techniques have been designed within the framework of the baseline algorithm, and require minimal changes to the basic architecture of the baseline, retaining its essential simplicity.
Clinical evaluation of JPEG2000 compression for digital mammography
NASA Astrophysics Data System (ADS)
Sung, Min-Mo; Kim, Hee-Joung; Kim, Eun-Kyung; Kwak, Jin-Young; Yoo, Jae-Kyung; Yoo, Hyung-Sik
2002-06-01
Medical images, such as computed radiography (CR) and digital mammographic images, will require large storage facilities and long transmission times for picture archiving and communications system (PACS) implementation. The American College of Radiology and National Electrical Manufacturers Association (ACR/NEMA) group is planning to adopt a JPEG2000 compression algorithm in the digital imaging and communications in medicine (DICOM) standard to better utilize medical images. The purpose of the study was to evaluate the compression ratios of JPEG2000 for digital mammographic images using peak signal-to-noise ratio (PSNR), receiver operating characteristic (ROC) analysis, and the t test. Traditional statistical quality measures such as PSNR, a commonly used measure for the evaluation of reconstructed images, quantify how the reconstructed image differs from the original by making pixel-by-pixel comparisons. The ability to accurately discriminate diseased cases from normal cases is evaluated using ROC curve analysis. ROC curves can be used to compare the diagnostic performance of two or more sets of reconstructed images. The t test can also be used to evaluate the subjective image quality of reconstructed images. The results of the t test suggested that the possible compression ratio using JPEG2000 for digital mammographic images may be as much as 15:1 without visual loss, while preserving significant medical information at a confidence level of 99%, although both the PSNR and ROC analyses suggest that as much as an 80:1 compression ratio can be achieved without affecting clinical diagnostic performance.
Tampered Region Localization of Digital Color Images Based on JPEG Compression Noise
NASA Astrophysics Data System (ADS)
Wang, Wei; Dong, Jing; Tan, Tieniu
With the availability of various digital image editing tools, seeing is no longer believing. In this paper, we focus on tampered region localization for image forensics. We propose an algorithm which can locate tampered region(s) in a lossless compressed tampered image when its unchanged region is the output of a JPEG decompressor. We find that the tampered region and the unchanged region respond differently to JPEG compression: the tampered region has stronger high-frequency quantization noise than the unchanged region. We employ PCA to separate quantization noise at different spatial frequencies, i.e., low-, medium- and high-frequency quantization noise, and extract the high-frequency quantization noise for tampered region localization. Post-processing is involved to obtain the final localization result. The experimental results prove the effectiveness of our proposed method.
NASA Astrophysics Data System (ADS)
Osada, Masakazu; Tsukui, Hideki
2002-09-01
Picture Archiving and Communication System (PACS) is a system which connects imaging modalities, image archives, and image workstations to reduce film handling cost and improve hospital workflow. Handling diagnostic ultrasound and endoscopy images is challenging, because they produce large amounts of data, such as motion (cine) images at 30 frames per second, 640 x 480 resolution, and 24-bit color, while enough image quality must be retained for clinical review. We have developed a PACS which is able to manage ultrasound and endoscopy cine images at the above resolution and frame rate, and we investigate a suitable compression method and compression rate for clinical image review. Results show that clinicians require the capability for frame-by-frame forward and backward review of cine images, because they carefully look through motion images to find certain color patterns which may appear in a single frame. To satisfy this requirement, we chose motion JPEG, installed it, and confirmed that we could capture this specific pattern. As for the acceptable image compression rate, we performed a subjective evaluation. No subjects could tell the difference between original non-compressed images and 1:10 lossy compressed JPEG images. One subject could tell the difference between original and 1:20 lossy compressed JPEG images, although this was judged acceptable. Thus, ratios of 1:10 to 1:20 are acceptable to reduce data amount and cost while maintaining quality for clinical review.
Dynamic code block size for JPEG 2000
NASA Astrophysics Data System (ADS)
Tsai, Ping-Sing; LeCornec, Yann
2008-02-01
Since the standardization of the JPEG 2000, it has found its way into many different applications such as DICOM (digital imaging and communication in medicine), satellite photography, military surveillance, digital cinema initiative, professional video cameras, and so on. The unified framework of the JPEG 2000 architecture makes practical high quality real-time compression possible even in video mode, i.e. motion JPEG 2000. In this paper, we present a study of the compression impact using dynamic code block size instead of fixed code block size as specified in the JPEG 2000 standard. The simulation results show that there is no significant impact on compression if dynamic code block sizes are used. In this study, we also unveil the advantages of using dynamic code block sizes.
Lossless Compression of JPEG Coded Photo Collections.
Wu, Hao; Sun, Xiaoyan; Yang, Jingyu; Zeng, Wenjun; Wu, Feng
2016-04-06
The explosion of digital photos has posed a significant challenge to photo storage and transmission for both personal devices and cloud platforms. In this paper, we propose a novel lossless compression method to further reduce the size of a set of JPEG coded correlated images without any loss of information. The proposed method jointly removes inter/intra image redundancy in the feature, spatial, and frequency domains. For each collection, we first organize the images into a pseudo video by minimizing the global prediction cost in the feature domain. We then present a hybrid disparity compensation method to better exploit both the global and local correlations among the images in the spatial domain. Furthermore, the redundancy between each compensated signal and the corresponding target image is adaptively reduced in the frequency domain. Experimental results demonstrate the effectiveness of the proposed lossless compression method. Compared to the JPEG coded image collections, our method achieves average bit savings of more than 31%.
A new approach of objective quality evaluation on JPEG2000 lossy-compressed lung cancer CT images
NASA Astrophysics Data System (ADS)
Cai, Weihua; Tan, Yongqiang; Zhang, Jianguo
2007-03-01
Image compression has been used to increase communication efficiency and storage capacity. JPEG 2000 compression, based on the wavelet transformation, has advantages over other compression methods, such as ROI coding, error resilience, adaptive binary arithmetic coding and an embedded bit-stream. However, it is still difficult to find an objective method to evaluate the image quality of lossy-compressed medical images. In this paper, we present an approach to evaluate image quality by using a computer-aided diagnosis (CAD) system. We selected 77 cases of CT images, bearing benign and malignant lung nodules with confirmed pathology, from our clinical Picture Archiving and Communication System (PACS). We have developed a prototype CAD system to classify these images into benign ones and malignant ones, the performance of which was evaluated by receiver operating characteristic (ROC) curves. We first used JPEG 2000 to compress these images at different compression ratios, from lossless to lossy, then used the CAD system to classify the cases at each compression ratio, and finally compared the ROC curves from the CAD classification results. Support vector machines (SVM) and neural networks (NN) were used to classify the malignancy of the input nodules. In each approach, we found that the area under the ROC curve (AUC) decreases with increasing compression ratio, with small fluctuations.
Fast and accurate face recognition based on image compression
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Blasch, Erik
2017-05-01
Image compression is desired for many image-related applications, especially for network-based applications with bandwidth and storage constraints. Typical reports from the face recognition community concentrate on the maximal compression rate that does not decrease recognition accuracy. In general, wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) achieve high performance but run slowly due to their high computation demands. The PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, which is termed compression-based (CPB) face recognition. First, all gallery images are compressed by the selected compression algorithm. Second, a mixed image is formed from the probe and gallery images and then compressed. Third, a composite compression ratio (CCR) is computed from three compression ratios calculated from the probe, gallery and mixed images. Finally, the CCR values are compared and the largest CCR corresponds to the matched face. The time cost of each face match is approximately that of compressing the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression. On the same face dataset, the accuracy of the FPB algorithm was reported as 91.43%. The JPEG-compression-based (JPEG-CPB) face recognition is standard and fast, and may be integrated into a real-time imaging device.
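A minimal sketch of the matching step. The way the probe and gallery images are "mixed" (simple side-by-side concatenation here) and the exact composite-ratio formula are assumptions, since the abstract does not define them; images are assumed to be uint8 grayscale arrays of equal height.

```python
import io
import numpy as np
from PIL import Image

def jpeg_size(arr: np.ndarray, quality: int = 75) -> int:
    """Compressed size in bytes of an 8-bit grayscale array under JPEG."""
    buf = io.BytesIO()
    Image.fromarray(arr).save(buf, format='JPEG', quality=quality)
    return buf.getbuffer().nbytes

def ccr(probe: np.ndarray, gallery: np.ndarray) -> float:
    """Composite compression ratio of probe, gallery and mixed image (assumed formula)."""
    mixed = np.hstack([probe, gallery])            # assumed mixing: concatenation
    s_p, s_g, s_m = jpeg_size(probe), jpeg_size(gallery), jpeg_size(mixed)
    return (s_p + s_g) / s_m                       # intended to grow with similarity

def match(probe: np.ndarray, gallery_images: list) -> int:
    """Return the index of the gallery image with the largest CCR."""
    return int(np.argmax([ccr(probe, g) for g in gallery_images]))
```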
Parallel design of JPEG-LS encoder on graphics processing units
NASA Astrophysics Data System (ADS)
Duan, Hao; Fang, Yong; Huang, Bormin
2012-01-01
With recent technical advances in graphic processing units (GPUs), GPUs have outperformed CPUs in terms of compute capability and memory bandwidth. Many successful GPU applications to high performance computing have been reported. JPEG-LS is an ISO/IEC standard for lossless image compression which utilizes adaptive context modeling and run-length coding to improve compression ratio. However, adaptive context modeling causes data dependency among adjacent pixels, and the run-length coding has to be performed in a sequential way. Hence, using JPEG-LS to compress large-volume hyperspectral image data is quite time-consuming. We implement an efficient parallel JPEG-LS encoder for lossless hyperspectral compression on a NVIDIA GPU using the compute unified device architecture (CUDA) programming technology. We use the block parallel strategy, as well as such CUDA techniques as coalesced global memory access, parallel prefix sum, and asynchronous data transfer. We also show the relation between GPU speedup and AVIRIS block size, as well as the relation between compression ratio and AVIRIS block size. When AVIRIS images are divided into blocks, each with 64×64 pixels, we gain the best GPU performance with 26.3x speedup over the original CPU code.
High bit depth infrared image compression via low bit depth codecs
NASA Astrophysics Data System (ADS)
Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren
2017-08-01
Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites, infrastructure inspection by unmanned airborne vehicles, etc., will require 16-bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low-power embedded platforms where image or video data are compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages compared with using higher bit depth image compression directly. We propose to compress 16-bit depth images via 8-bit depth codecs in the following way. First, an input 16-bit depth image is mapped into 8-bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second one contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with an 8 bits per pixel input format. We analyze how the compression parameters for both MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16-bit depth format. A preliminary result shows that two 8-bit H.264/AVC codecs can achieve a similar result to a 16-bit HEVC codec.
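The byte-split mapping and its inverse are straightforward; the sketch below shows only this mapping, with the 8-bit codecs themselves left outside the example.

```python
import numpy as np

def split_bytes(img16: np.ndarray):
    """Map a 16-bit image to two 8-bit planes: most and least significant bytes."""
    msb = (img16 >> 8).astype(np.uint8)     # MSB image
    lsb = (img16 & 0xFF).astype(np.uint8)   # LSB image
    return msb, lsb

def merge_bytes(msb: np.ndarray, lsb: np.ndarray) -> np.ndarray:
    """Recombine the two 8-bit planes into the original 16-bit image."""
    return (msb.astype(np.uint16) << 8) | lsb.astype(np.uint16)
```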
Cell edge detection in JPEG2000 wavelet domain - analysis on sigmoid function edge model.
Punys, Vytenis; Maknickas, Ramunas
2011-01-01
Big virtual microscopy images (80K x 60K pixels and larger) are usually stored using the JPEG2000 image compression scheme. Diagnostic quantification, based on image analysis, might be faster if performed on compressed data (approximately 20 times smaller than the original), representing the coefficients of the wavelet transform. An analysis of possible edge detection without the inverse wavelet transform is presented in the paper. Two edge detection methods, suitable for JPEG2000 bi-orthogonal wavelets, are proposed. The methods are adjusted according to the calculated parameters of a sigmoid edge model. The results of the model analysis indicate the more suitable method for a given bi-orthogonal wavelet.
NASA Astrophysics Data System (ADS)
Wijaya, Surya Li; Savvides, Marios; Vijaya Kumar, B. V. K.
2005-02-01
Face recognition on mobile devices, such as personal digital assistants and cell phones, is a big challenge owing to the limited computational resources available to run verifications on the devices themselves. One approach is to transmit the captured face images by use of the cell-phone connection and to run the verification on a remote station. However, owing to limitations in communication bandwidth, it may be necessary to transmit a compressed version of the image. We propose using the image compression standard JPEG2000, which is a wavelet-based compression engine used to compress the face images to low bit rates suitable for transmission over low-bandwidth communication channels. At the receiver end, the face images are reconstructed with a JPEG2000 decoder and are fed into the verification engine. We explore how advanced correlation filters, such as the minimum average correlation energy filter [Appl. Opt. 26, 3633 (1987)] and its variants, perform by using face images captured under different illumination conditions and encoded with different bit rates under the JPEG2000 wavelet-encoding standard. We evaluate the performance of these filters by using illumination variations from the Carnegie Mellon University's Pose, Illumination, and Expression (PIE) face database. We also demonstrate the tolerance of these filters to noisy versions of images with illumination variations.
Wavelet-based compression of M-FISH images.
Hua, Jianping; Xiong, Zixiang; Wu, Qiang; Castleman, Kenneth R
2005-05-01
Multiplex fluorescence in situ hybridization (M-FISH) is a recently developed technology that enables multi-color chromosome karyotyping for molecular cytogenetic analysis. Each M-FISH image set consists of a number of aligned images of the same chromosome specimen captured at different optical wavelengths. This paper presents embedded M-FISH image coding (EMIC), where the foreground objects/chromosomes and the background objects/images are coded separately. We first apply critically sampled integer wavelet transforms to both the foreground and the background. We then use object-based bit-plane coding to compress each object and generate separate embedded bitstreams that allow continuous lossy-to-lossless compression of the foreground and the background. For efficient arithmetic coding of bit planes, we propose a method of designing an optimal context model that specifically exploits the statistical characteristics of M-FISH images in the wavelet domain. Our experiments show that EMIC achieves nearly twice as much compression as Lempel-Ziv-Welch coding. EMIC also performs much better than JPEG-LS and JPEG-2000 for lossless coding. The lossy performance of EMIC is significantly better than that of coding each M-FISH image with JPEG-2000.
JPIC-Rad-Hard JPEG2000 Image Compression ASIC
NASA Astrophysics Data System (ADS)
Zervas, Nikos; Ginosar, Ran; Broyde, Amitai; Alon, Dov
2010-08-01
JPIC is a rad-hard high-performance image compression ASIC for the aerospace market. JPIC implements tier 1 of the ISO/IEC 15444-1 JPEG2000 (a.k.a. J2K) image compression standard [1] as well as the post compression rate-distortion algorithm, which is part of tier 2 coding. A modular architecture enables employing a single JPIC or multiple coordinated JPIC units. JPIC is designed to support a wide range of imager data sources from optical, panchromatic and multi-spectral space and airborne sensors. JPIC has been developed as a collaboration of Alma Technologies S.A. (Greece), MBT/IAI Ltd (Israel) and Ramon Chips Ltd (Israel). MBT/IAI defined the system architecture requirements and interfaces; the JPEG2K-E IP core from Alma implements the compression algorithm [2]. Ramon Chips adds SERDES interfaces and host interfaces and integrates the ASIC. MBT has demonstrated the full chip on an FPGA board and created system boards employing multiple JPIC units. The ASIC implementation, based on Ramon Chips' 180nm CMOS RadSafe[TM] RH cell library, enables superior radiation hardness.
1995-02-01
modification of existing JPEG compression and decompression software available from the Independent JPEG Users Group to process CIELAB color images and to use...externally specified Huffman tables. In addition, a conversion program was written to convert CIELAB color space images to red, green, blue color space
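A hedged illustration of the CIELAB-to-RGB conversion step mentioned in this fragment. The original conversion program is not available, so scikit-image's lab2rgb (which assumes a D65 white point) stands in for it here; the array values are arbitrary.

```python
# Illustrative CIELAB -> RGB conversion using scikit-image; only shows the
# kind of transform described, not the original software's implementation.
import numpy as np
from skimage import color

lab = np.zeros((64, 64, 3), dtype=np.float64)
lab[..., 0] = 50.0   # L* in [0, 100]
lab[..., 1] = 20.0   # a*
lab[..., 2] = -30.0  # b*

rgb = color.lab2rgb(lab)                       # float RGB in [0, 1]
rgb8 = (rgb * 255).round().astype(np.uint8)    # 8-bit RGB image
```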
An evaluation of the effect of JPEG, JPEG2000, and H.264/AVC on CQR codes decoding process
NASA Astrophysics Data System (ADS)
Vizcarra Melgar, Max E.; Farias, Mylène C. Q.; Zaghetto, Alexandre
2015-02-01
This paper presents a binary matrix code based on QR Code (Quick Response Code), denoted as CQR Code (Colored Quick Response Code), and evaluates the effect of JPEG, JPEG2000 and H.264/AVC compression on the decoding process. The proposed CQR Code has three additional colors (red, green and blue), which enables twice as much storage capacity when compared to the traditional black and white QR Code. Using the Reed-Solomon error-correcting code, the CQR Code model has a theoretical correction capability of 38.41%. The goal of this paper is to evaluate the effect that degradations inserted by common image compression algorithms have on the decoding process. Results show that a successful decoding process can be achieved for compression rates up to 0.3877 bits/pixel, 0.1093 bits/pixel and 0.3808 bits/pixel for the JPEG, JPEG2000 and H.264/AVC formats, respectively. The algorithm that presents the best performance is H.264/AVC, followed by JPEG2000 and JPEG.
Novel approach to multispectral image compression on the Internet
NASA Astrophysics Data System (ADS)
Zhu, Yanqiu; Jin, Jesse S.
2000-10-01
Still image coding techniques such as JPEG have always been applied to intra-plane images. Coding fidelity is commonly used to measure the performance of intra-plane coding methods. In many imaging applications, it is increasingly necessary to deal with multi-spectral images, such as color images. In this paper, a novel approach to multi-spectral image compression is proposed that uses transformations among planes for further compression of the spectral planes. Moreover, a mechanism for introducing the human visual system into the transformation is provided to exploit psychovisual redundancy. The new technique for multi-spectral image compression, which is designed to be compatible with the JPEG standard, is demonstrated by extracting correlation among planes based on the human visual system. The scheme achieves a high degree of compactness in the data representation and compression.
A JPEG backward-compatible HDR image compression
NASA Astrophysics Data System (ADS)
Korshunov, Pavel; Ebrahimi, Touradj
2012-10-01
High Dynamic Range (HDR) imaging is expected to become one of the technologies that could shape the next generation of consumer digital photography. Manufacturers are rolling out cameras and displays capable of capturing and rendering HDR images. The popularity and full public adoption of HDR content is however hindered by the lack of standards in evaluation of quality, file formats, and compression, as well as a large legacy base of Low Dynamic Range (LDR) displays that are unable to render HDR. To facilitate widespread HDR usage, backward compatibility of HDR technology with commonly used legacy image storage, rendering, and compression is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR images from HDR content, there is no consensus on which algorithm to use and under which conditions. This paper, via a series of subjective evaluations, demonstrates the dependency of perceived quality of the tone-mapped LDR images on environmental parameters and image content. Based on the results of the subjective tests, it proposes to extend the JPEG file format, as the most popular image format, in a backward compatible manner to also deal with HDR pictures. To this end, the paper provides an architecture to achieve such backward compatibility with JPEG and demonstrates the efficiency of a simple implementation of this framework when compared to state-of-the-art HDR image compression.
Generalised Category Attack—Improving Histogram-Based Attack on JPEG LSB Embedding
NASA Astrophysics Data System (ADS)
Lee, Kwangsoo; Westfeld, Andreas; Lee, Sangjin
We present a generalised and improved version of the category attack on LSB steganography in JPEG images with a straddled embedding path. It detects low embedding rates more reliably and is also less disturbed by double-compressed images. The proposed methods are evaluated on several thousand images. The results are compared to both recent blind and specific attacks for JPEG embedding. The proposed attack permits a more reliable detection, although it is based on first-order statistics only. Its simple structure makes it very fast.
López, Carlos; Jaén Martinez, Joaquín; Lejeune, Marylène; Escrivà, Patricia; Salvadó, Maria T; Pons, Lluis E; Alvaro, Tomás; Baucells, Jordi; García-Rojo, Marcial; Cugat, Xavier; Bosch, Ramón
2009-10-01
The volume of digital image (DI) storage continues to be an important problem in computer-assisted pathology. DI compression enables the size of files to be reduced but with the disadvantage of loss of quality. Previous results indicated that the efficiency of computer-assisted quantification of immunohistochemically stained cell nuclei may be significantly reduced when compressed DIs are used. This study attempts to show, with respect to immunohistochemically stained nuclei, which morphometric parameters may be altered by the different levels of JPEG compression, and the implications of these alterations for automated nuclear counts, and further, develops a method for correcting this discrepancy in the nuclear count. For this purpose, 47 DIs from different tissues were captured in uncompressed TIFF format and converted to 1:3, 1:23 and 1:46 compression JPEG images. Sixty-five positive objects were selected from these images, and six morphological parameters were measured and compared for each object in TIFF images and those of the different compression levels using a set of previously developed and tested macros. Roundness proved to be the only morphological parameter that was significantly affected by image compression. Factors to correct the discrepancy in the roundness estimate were derived from linear regression models for each compression level, thereby eliminating the statistically significant differences between measurements in the equivalent images. These correction factors were incorporated in the automated macros, where they reduced the nuclear quantification differences arising from image compression. Our results demonstrate that it is possible to carry out unbiased automated immunohistochemical nuclear quantification in compressed DIs with a methodology that could be easily incorporated in different systems of digital image analysis.
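A small sketch of the correction-factor idea under stated assumptions: a linear model is fitted between roundness measured on compressed images and on the uncompressed TIFF originals, then applied to new measurements. The numeric values below are made up; only the procedure mirrors the description above.

```python
# Fit a per-compression-level linear correction for the roundness estimate,
# then map measurements from compressed images back to an uncompressed-
# equivalent value (illustrative data only).
import numpy as np

roundness_tiff = np.array([0.82, 0.74, 0.91, 0.66, 0.88])  # reference values
roundness_jpeg = np.array([0.78, 0.70, 0.87, 0.61, 0.84])  # same objects, compressed

slope, intercept = np.polyfit(roundness_jpeg, roundness_tiff, deg=1)

def correct(roundness_compressed: float) -> float:
    """Correct a roundness value measured on a compressed image."""
    return slope * roundness_compressed + intercept
```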
Performance of the JPEG Estimated Spectrum Adaptive Postfilter (JPEG-ESAP) for Low Bit Rates
NASA Technical Reports Server (NTRS)
Linares, Irving (Inventor)
2016-01-01
Frequency-based, pixel-adaptive filtering using the JPEG-ESAP algorithm for low bit rate JPEG formatted color images may allow images to be compressed further while maintaining equivalent quality at a smaller file size or bit rate. For RGB, an image is decomposed into three color bands--red, green, and blue. The JPEG-ESAP algorithm is then applied to each band (e.g., once for red, once for green, and once for blue) and the output of each application of the algorithm is rebuilt as a single color image. The ESAP algorithm may be repeatedly applied to MPEG-2 video frames to reduce their bit rate by a factor of 2 or 3, while maintaining equivalent video quality, both perceptually and objectively, as recorded in the computed PSNR values.
Mixed raster content (MRC) model for compound image compression
NASA Astrophysics Data System (ADS)
de Queiroz, Ricardo L.; Buckley, Robert R.; Xu, Ming
1998-12-01
This paper will describe the Mixed Raster Content (MRC) method for compressing compound images, containing both binary text and continuous-tone images. A single compression algorithm that simultaneously meets the requirements for both text and image compression has been elusive. MRC takes a different approach. Rather than using a single algorithm, MRC uses a multi-layered imaging model for representing the results of multiple compression algorithms, including ones developed specifically for text and for images. As a result, MRC can combine the best of existing or new compression algorithms and offer different quality-compression ratio tradeoffs. The algorithms used by MRC set the lower bound on its compression performance. Compared to existing algorithms, MRC has some image-processing overhead to manage multiple algorithms and the imaging model. This paper will develop the rationale for the MRC approach by describing the multi-layered imaging model in light of a rate-distortion trade-off. Results will be presented comparing images compressed using MRC, JPEG and state-of-the-art wavelet algorithms such as SPIHT. MRC has been approved or proposed as an architectural model for several standards, including ITU Color Fax, IETF Internet Fax, and JPEG 2000.
Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao
2018-06-01
To improve the compression rates for lossless compression of medical images, an efficient algorithm, based on irregular segmentation and region-based prediction, is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method combining geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation for medical images. Then, least-squares (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits spatial correlation between pixels but also utilizes local structure similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
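As a rough illustration of the region-based prediction step, the sketch below fits a least-squares predictor from three causal neighbours within a region; the paper's actual neighbourhood, region shapes, and partitioning are not reproduced, and the function names are illustrative.

```python
# Train a least-squares (LS) predictor per region: each pixel is predicted
# from its causal neighbours (left, top, top-left) using coefficients fitted
# on the pixels of that region.
import numpy as np

def ls_predictor(region: np.ndarray) -> np.ndarray:
    """region: 2-D array of pixel values; returns prediction coefficients."""
    rows, targets = [], []
    for i in range(1, region.shape[0]):
        for j in range(1, region.shape[1]):
            rows.append([region[i, j - 1], region[i - 1, j], region[i - 1, j - 1]])
            targets.append(region[i, j])
    A, y = np.asarray(rows, float), np.asarray(targets, float)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs  # weights for (left, top, top-left)

def predict(region: np.ndarray, coeffs: np.ndarray, i: int, j: int) -> float:
    neigh = np.array([region[i, j - 1], region[i - 1, j], region[i - 1, j - 1]], float)
    return float(neigh @ coeffs)  # residual = region[i, j] - prediction is coded
```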
Jaferzadeh, Keyvan; Gholami, Samaneh; Moon, Inkyu
2016-12-20
In this paper, we evaluate lossless and lossy compression techniques to compress quantitative phase images of red blood cells (RBCs) obtained by off-axis digital holographic microscopy (DHM). The RBC phase images are numerically reconstructed from their digital holograms and are stored in 16-bit unsigned integer format. In the lossless case, predictive coding of JPEG lossless (JPEG-LS), JPEG2000, and JP3D are evaluated, and compression ratio (CR) and complexity (compression time) are compared against each other. It turns out that JPEG2000 outperforms the other methods by achieving the best CR. In the lossy case, JPEG2000 and JP3D with different CRs are examined. Because lossy compression discards some data, the degradation level is measured by comparing different morphological and biochemical parameters of the RBC before and after compression. The morphological parameters are volume, surface area, RBC diameter, and sphericity index, and the biochemical cell parameter is mean corpuscular hemoglobin (MCH). Experimental results show that JPEG2000 outperforms JP3D not only in terms of mean square error (MSE) as CR increases, but also in compression time in the lossy case. In addition, our compression results with both algorithms demonstrate that at high CR values the three-dimensional profile of the RBC can be preserved and the morphological and biochemical parameters can still be within the range of reported values.
Improved JPEG anti-forensics with better image visual quality and forensic undetectability.
Singh, Gurinder; Singh, Kulbir
2017-08-01
There is an immediate need to validate the authenticity of digital images due to the availability of powerful image processing tools that can easily manipulate the digital image information without leaving any traces. Digital image forensics most often employs tampering detectors based on JPEG compression. Therefore, to evaluate the competency of the JPEG forensic detectors, an anti-forensic technique is required. In this paper, two improved JPEG anti-forensic techniques are proposed to remove the blocking artifacts left by JPEG compression in both the spatial and DCT domains. In the proposed framework, the grainy noise left by perceptual histogram smoothing in the DCT domain can be reduced significantly by applying the proposed de-noising operation. Two types of denoising algorithms are proposed: one is based on a constrained minimization of the total variation energy, and the other on a normalized weighted function. Subsequently, an improved TV-based deblocking operation is proposed to eliminate the blocking artifacts in the spatial domain. Then, a decalibration operation is applied to bring the processed image statistics back to their standard position. The experimental results show that the proposed anti-forensic approaches outperform existing state-of-the-art techniques in achieving an enhanced tradeoff between image visual quality and forensic undetectability, but with high computational cost.
Context-dependent JPEG backward-compatible high-dynamic range image compression
NASA Astrophysics Data System (ADS)
Korshunov, Pavel; Ebrahimi, Touradj
2013-10-01
High-dynamic range (HDR) imaging is expected, together with ultrahigh definition and high-frame rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available in the market. The popularity and full-public adoption of HDR content is, however, hindered by the lack of standards in evaluation of quality, file formats, and compression, as well as a large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate the widespread use of HDR, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. We, via a series of subjective evaluations, demonstrate the dependency of the perceptual quality of the tone-mapped LDR images on the context: environmental factors, display parameters, and the image content itself. Based on the results of the subjective tests, we propose to extend the JPEG file format, the most popular image format, in a backward compatible manner to also deal with HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with state-of-the-art HDR image compression.
NASA Astrophysics Data System (ADS)
Kerner, H. R.; Bell, J. F., III; Ben Amor, H.
2017-12-01
The Mastcam color imaging system on the Mars Science Laboratory Curiosity rover acquires images within Gale crater for a variety of geologic and atmospheric studies. Images are often JPEG compressed before being downlinked to Earth. While critical for transmitting images on a low-bandwidth connection, this compression can result in image artifacts most noticeable as anomalous brightness or color changes within or near JPEG compression block boundaries. In images with significant high-frequency detail (e.g., in regions showing fine layering or lamination in sedimentary rocks), the image might need to be re-transmitted losslessly to enable accurate scientific interpretation of the data. The process of identifying which images have been adversely affected by compression artifacts is performed manually by the Mastcam science team, costing significant expert human time. To streamline the tedious process of identifying which images might need to be re-transmitted, we present an input-efficient neural network solution for predicting the perceived quality of a compressed Mastcam image. Most neural network solutions require large amounts of hand-labeled training data for the model to learn the target mapping between input (e.g. distorted images) and output (e.g. quality assessment). We propose an automatic labeling method using joint entropy between a compressed and uncompressed image to avoid the need for domain experts to label thousands of training examples by hand. We use automatically labeled data to train a convolutional neural network to estimate the probability that a Mastcam user would find the quality of a given compressed image acceptable for science analysis. We tested our model on a variety of Mastcam images and found that the proposed method correlates well with image quality perception by science team members. When assisted by our proposed method, we estimate that a Mastcam investigator could reduce the time spent reviewing images by a minimum of 70%.
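A minimal sketch of the automatic-labelling idea, assuming joint entropy is estimated from a joint grey-level histogram and that lower joint entropy corresponds to less compression damage (identical images give the lowest possible value). The bin count, threshold, and function names are illustrative assumptions, not the authors' implementation.

```python
# Joint entropy of the uncompressed and compressed image as a proxy label for
# perceived compression damage; thresholds and bins here are illustrative.
import numpy as np

def joint_entropy(img_a: np.ndarray, img_b: np.ndarray, bins: int = 256) -> float:
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def auto_label(uncompressed: np.ndarray, compressed: np.ndarray,
               threshold: float = 9.0) -> int:
    """1 = likely acceptable quality, 0 = likely needs lossless re-transmit."""
    return int(joint_entropy(uncompressed, compressed) < threshold)
```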
Estimated spectrum adaptive postfilter and the iterative prepost filtering algorithms
NASA Technical Reports Server (NTRS)
Linares, Irving (Inventor)
2004-01-01
The invention presents the Estimated Spectrum Adaptive Postfilter (ESAP) and the Iterative Prepost Filter (IPF) algorithms. These algorithms model a number of image-adaptive post-filtering and pre-post filtering methods. They are designed to minimize the Discrete Cosine Transform (DCT) blocking distortion caused when images are highly compressed with the Joint Photographic Expert Group (JPEG) standard. The ESAP and IPF techniques of the present invention minimize the mean square error (MSE) to improve the objective and subjective quality of low-bit-rate JPEG gray-scale images while simultaneously enhancing perceptual visual quality with respect to baseline JPEG images.
Quantization Distortion in Block Transform-Compressed Data
NASA Technical Reports Server (NTRS)
Boden, A. F.
1995-01-01
The popular JPEG image compression standard is an example of a block transform-based compression scheme; the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.
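The generic block-transform model can be illustrated in a few lines: split the image into 8x8 blocks, transform each block, and quantize the coefficients. The sketch below uses a 2-D DCT and a flat quantization step for brevity; JPEG and the Integer Cosine Transform use frequency-dependent tables and integer arithmetic, so this is only a schematic.

```python
# Miniature block-transform codec stage: 8x8 DCT, uniform quantization, and
# reconstruction (entropy coding of the quantized indices is omitted).
import numpy as np
from scipy.fft import dctn, idctn

def quantize_blocks(image: np.ndarray, step: float = 16.0) -> np.ndarray:
    h, w = image.shape
    out = np.zeros_like(image, dtype=np.float64)
    for y in range(0, h - h % 8, 8):
        for x in range(0, w - w % 8, 8):
            block = image[y:y + 8, x:x + 8].astype(np.float64)
            coeffs = dctn(block, norm="ortho")
            q = np.round(coeffs / step)          # quantization reduces entropy
            out[y:y + 8, x:x + 8] = idctn(q * step, norm="ortho")
    return out
```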
Web surveillance system using platform-based design
NASA Astrophysics Data System (ADS)
Lin, Shin-Yo; Tsai, Tsung-Han
2004-04-01
A revolutionary methodology of an SOPC platform-based design environment for multimedia communications is developed. We embed a softcore processor to perform the image compression in an FPGA. Then, we plug in an Ethernet daughter board in the SOPC development platform system. Afterward, a web surveillance platform system is presented. The web surveillance system consists of three parts: image capture, web server and JPEG compression. In this architecture, the user can control the surveillance system remotely. By configuring an IP address for the Ethernet daughter board, the user can access the surveillance system via a browser. When the user accesses the surveillance system, the CMOS sensor captures the remote image. The captured image is then fed to the embedded processor, which immediately performs the JPEG compression. Afterward, the user receives the compressed data via Ethernet. In summary, the whole system is implemented on an APEX20K200E484-2X device.
JPEG 2000-based compression of fringe patterns for digital holographic microscopy
NASA Astrophysics Data System (ADS)
Blinder, David; Bruylants, Tim; Ottevaere, Heidi; Munteanu, Adrian; Schelkens, Peter
2014-12-01
With the advent of modern computing and imaging technologies, digital holography is becoming widespread in various scientific disciplines such as microscopy, interferometry, surface shape measurements, vibration analysis, data encoding, and certification. Therefore, designing an efficient data representation technology is of particular importance. Off-axis holograms have very different signal properties with respect to regular imagery, because they represent a recorded interference pattern with its energy biased toward the high-frequency bands. This causes traditional image coders, which assume an underlying 1/f^2 power spectral density distribution, to perform suboptimally for this type of imagery. We propose a JPEG 2000-based codec framework that provides a generic architecture suitable for the compression of many types of off-axis holograms. This framework has a JPEG 2000 codec at its core, extended with (1) fully arbitrary wavelet decomposition styles and (2) directional wavelet transforms. Using this codec, we report significant improvements in coding performance for off-axis holography relative to the conventional JPEG 2000 standard, with Bjøntegaard delta-peak signal-to-noise ratio improvements ranging from 1.3 to 11.6 dB for lossy compression in the 0.125 to 2.00 bpp range and bit-rate reductions of up to 1.6 bpp for lossless compression.
Modeling of video compression effects on target acquisition performance
NASA Astrophysics Data System (ADS)
Cha, Jae H.; Preece, Bradley; Espinola, Richard L.
2009-05-01
The effect of video compression on image quality was investigated from the perspective of target acquisition performance modeling. Human perception tests were conducted recently at the U.S. Army RDECOM CERDEC NVESD, measuring identification (ID) performance on simulated military vehicle targets at various ranges. These videos were compressed with different quality and/or quantization levels utilizing motion JPEG, motion JPEG2000, and MPEG-4 encoding. To model the degradation on task performance, the loss in image quality is fit to an equivalent Gaussian MTF scaled by the Structural Similarity Image Metric (SSIM). Residual compression artifacts are treated as 3-D spatio-temporal noise. This 3-D noise is found by taking the difference of the uncompressed frame, with the estimated equivalent blur applied, and the corresponding compressed frame. Results show good agreement between the experimental data and the model prediction. This method has led to a predictive performance model for video compression by correlating various compression levels to particular blur and noise input parameters for NVESD target acquisition performance model suite.
NASA Astrophysics Data System (ADS)
Kim, Christopher Y.
1999-05-01
Endoscopic images play an important role in describing many gastrointestinal (GI) disorders. The field of radiology has been on the leading edge of creating, archiving and transmitting digital images. With the advent of digital videoendoscopy, endoscopists now have the ability to generate images for storage and transmission. X-rays can be compressed 30-40X without appreciable decline in quality. We reported results of a pilot study using JPEG compression of 24-bit color endoscopic images. For that study, the results indicated that adequate compression ratios vary according to the lesion and that images could be compressed to between 31- and 99-fold smaller than the original size without an appreciable decline in quality. The purpose of this study was to expand upon the methodology of the previous study with an eye towards application for the WWW, a medium which would expand both the clinical and educational purposes of color medical images. The results indicate that endoscopists are able to tolerate very significant compression of endoscopic images without loss of clinical image quality. This finding suggests that even 1 MB color images can be compressed to well under 30 KB, which is considered a maximal tolerable image size for downloading on the WWW.
Vulnerability Analysis of HD Photo Image Viewer Applications
2007-09-01
With massive efforts..., renamed to HD Photo in November of 2006, it is being touted as the successor to the ubiquitous JPEG image format, as well as the eventual de facto standard in the digital photography market, with an associated state-of-the-art compression algorithm "specifically designed [for] all types of continuous tone photographic" images [HDPhotoFeatureSpec...
NASA Astrophysics Data System (ADS)
Starosolski, Roman
2016-07-01
Reversible denoising and lifting steps (RDLS) are lifting steps integrated with denoising filters in such a way that, despite the inherently irreversible nature of denoising, they are perfectly reversible. We investigated the application of RDLS to reversible color space transforms: RCT, YCoCg-R, RDgDb, and LDgEb. In order to improve RDLS effects, we propose a heuristic for image-adaptive denoising filter selection, a fast estimator of the compressed image bitrate, and a special filter that may result in skipping of the steps. We analyzed the properties of the presented methods, paying special attention to their usefulness from a practical standpoint. For a diverse image test-set and lossless JPEG-LS, JPEG 2000, and JPEG XR algorithms, RDLS improves the bitrates of all the examined transforms. The most interesting results were obtained for an estimation-based heuristic filter selection out of a set of seven filters; the cost of this variant was similar to or lower than the transform cost, and it improved the average lossless JPEG 2000 bitrates by 2.65% for RDgDb and by over 1% for other transforms; bitrates of certain images were improved to a significantly greater extent.
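A toy 1-D illustration of the reversibility mechanism behind RDLS: the prediction step uses a denoised version of the even samples, yet the transform stays perfectly invertible because the decoder can recompute the same filtered values from data it already has. The filter and the single lifting step below are simplified assumptions, not the paper's transforms or filter set.

```python
# Reversible lifting step whose prediction is computed from denoised inputs.
import numpy as np

def smooth(x: np.ndarray) -> np.ndarray:
    # simple 3-tap denoising filter applied to the even subband only
    return np.convolve(x, [0.25, 0.5, 0.25], mode="same")

def forward(x: np.ndarray):
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - np.round(smooth(even)[:len(odd)])  # predict odd from denoised even
    s = even                                     # update step omitted for brevity
    return s, d

def inverse(s: np.ndarray, d: np.ndarray) -> np.ndarray:
    even = s
    odd = d + np.round(smooth(even)[:len(d)])    # same filter, same values
    x = np.empty(len(s) + len(d))
    x[0::2], x[1::2] = even, odd
    return x

x = np.random.randint(0, 256, 32).astype(float)
s, d = forward(x)
assert np.allclose(inverse(s, d), x)  # perfectly reversible despite the filtering
```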
Camera-Model Identification Using Markovian Transition Probability Matrix
NASA Astrophysics Data System (ADS)
Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei
Detecting the (brands and) models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of the Y and Cb components from JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four different directional Markov processes applied to the image difference JPEG 2-D arrays are used to identify statistical differences caused by the image formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are directly used as features for classification purposes. Multi-class support vector machines (SVM) are used as the classification tool. The effectiveness of our proposed statistical model is demonstrated by large-scale experimental results.
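A sketch of the feature-extraction step under stated assumptions: a horizontal difference array is formed, clipped to a small threshold, and a one-step Markov transition probability matrix is estimated from it. The paper applies this to difference JPEG 2-D arrays in four directions and feeds the concatenated matrix elements to an SVM; the threshold value below is illustrative.

```python
# Transition probability matrix of a thresholded horizontal difference array.
import numpy as np

def transition_matrix(arr: np.ndarray, T: int = 4) -> np.ndarray:
    diff = arr[:, :-1].astype(int) - arr[:, 1:].astype(int)  # horizontal differences
    diff = np.clip(diff, -T, T) + T                          # shift to range 0 .. 2T
    cur, nxt = diff[:, :-1].ravel(), diff[:, 1:].ravel()
    counts = np.zeros((2 * T + 1, 2 * T + 1))
    np.add.at(counts, (cur, nxt), 1)
    row_sums = counts.sum(axis=1, keepdims=True)
    return counts / np.maximum(row_sums, 1)                  # P(next | current)
```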
A Posteriori Restoration of Block Transform-Compressed Data
NASA Technical Reports Server (NTRS)
Brown, R.; Boden, A. F.
1995-01-01
The Galileo spacecraft will use lossy data compression for the transmission of its science imagery over the low-bandwidth communication system. The technique chosen for image compression is a block transform technique based on the Integer Cosine Transform, a derivative of the JPEG image compression standard. Considered here are two known a posteriori enhancement techniques, which are adapted.
Unequal power allocation for JPEG transmission over MIMO systems.
Sabir, Muhammad Farooq; Bovik, Alan Conrad; Heath, Robert W
2010-02-01
With the introduction of multiple transmit and receive antennas in next generation wireless systems, real-time image and video communication are expected to become quite common, since very high data rates will become available along with improved data reliability. New joint transmission and coding schemes that explore the advantages of multiple antenna systems matched with source statistics are expected to be developed. Based on this idea, we present an unequal power allocation scheme for transmission of JPEG compressed images over multiple-input multiple-output systems employing spatial multiplexing. The JPEG-compressed image is divided into different quality layers, and different layers are transmitted simultaneously from different transmit antennas using unequal transmit power, with a constraint on the total transmit power during any symbol period. Results show that our unequal power allocation scheme provides significant image quality improvement compared to different equal power allocation schemes, with a peak-signal-to-noise-ratio gain as high as 14 dB at low signal-to-noise ratios.
Adaptive image coding based on cubic-spline interpolation
NASA Astrophysics Data System (ADS)
Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien
2014-09-01
It has been shown that, at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate at which the sampling-based scheme outperforms the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. The proposed algorithm adaptively selects the image coding method from CSI-based modified JPEG and standard JPEG under a given target bit rate, utilizing the so-called ρ-domain analysis. The experimental results indicate that, compared with standard JPEG, the proposed algorithm shows better performance at low bit rates and maintains the same performance at high bit rates.
An Efficient Image Compressor for Charge Coupled Devices Camera
Li, Jin; Xing, Fei; You, Zheng
2014-01-01
Recently, discrete wavelet transform- (DWT-) based compressors, such as JPEG2000 and CCSDS-IDC, have been widely seen as the state-of-the-art compression schemes for charge coupled device (CCD) cameras. However, CCD images projected onto the DWT basis produce a large number of large-amplitude high-frequency coefficients, because these images contain a large amount of complex texture and contour information, which is a disadvantage for the subsequent coding. In this paper, we propose a low-complexity posttransform coupled with compressive sensing (PT-CS) compression approach for remote sensing images. First, the DWT is applied to the remote sensing image. Then, a pair-basis posttransform is applied to the DWT coefficients. The pair of bases comprises the DCT basis and the Hadamard basis, which can be used at high and low bit rates, respectively. The best posttransform is selected by an l_p-norm-based approach. The posttransform is considered as the sparse representation stage of CS. The posttransform coefficients are resampled by a sensing measurement matrix. Experimental results on on-board CCD camera images show that the proposed approach significantly outperforms the CCSDS-IDC-based coder, and its performance is comparable to that of JPEG2000 at low bit rates, without the excessive implementation complexity of JPEG2000.
JPEG 2000 Encoding with Perceptual Distortion Control
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Liu, Zhen; Karam, Lina J.
2008-01-01
An alternative approach has been devised for encoding image data in compliance with JPEG 2000, the most recent still-image data-compression standard of the Joint Photographic Experts Group. Heretofore, JPEG 2000 encoding has been implemented by several related schemes classified as rate-based distortion-minimization encoding. In each of these schemes, the end user specifies a desired bit rate and the encoding algorithm strives to attain that rate while minimizing a mean squared error (MSE). While rate-based distortion minimization is appropriate for transmitting data over a limited-bandwidth channel, it is not the best approach for applications in which the perceptual quality of reconstructed images is a major consideration. A better approach for such applications is the present alternative one, denoted perceptual distortion control, in which the encoding algorithm strives to compress data to the lowest bit rate that yields at least a specified level of perceptual image quality. Some additional background information on JPEG 2000 is prerequisite to a meaningful summary of JPEG encoding with perceptual distortion control. The JPEG 2000 encoding process includes two subprocesses known as tier-1 and tier-2 coding. In order to minimize the MSE for the desired bit rate, a rate-distortion-optimization subprocess is introduced between the tier-1 and tier-2 subprocesses. In tier-1 coding, each coding block is independently bit-plane coded from the most-significant-bit (MSB) plane to the least-significant-bit (LSB) plane, using three coding passes (except for the MSB plane, which is coded using only one "clean up" coding pass). For M bit planes, this subprocess involves a total of (3M - 2) coding passes. An embedded bit stream is then generated for each coding block. Information on the reduction in distortion and the increase in the bit rate associated with each coding pass is collected. This information is then used in a rate-control procedure to determine the contribution of each coding block to the output compressed bit stream.
NASA Astrophysics Data System (ADS)
Joshi, Rajan L.
2006-03-01
In medical imaging, the popularity of image capture modalities such as multislice CT and MRI is resulting in an exponential increase in the amount of volumetric data that needs to be archived and transmitted. At the same time, the increased data is taxing the interpretation capabilities of radiologists. One of the workflow strategies recommended for radiologists to overcome the data overload is the use of volumetric navigation. This allows the radiologist to seek a series of oblique slices through the data. However, it might be inconvenient for a radiologist to wait until all the slices are transferred from the PACS server to a client, such as a diagnostic workstation. To overcome this problem, we propose a client-server architecture based on JPEG2000 and JPEG2000 Interactive Protocol (JPIP) for rendering oblique slices through 3D volumetric data stored remotely at a server. The client uses the JPIP protocol for obtaining JPEG2000 compressed data from the server on an as needed basis. In JPEG2000, the image pixels are wavelet-transformed and the wavelet coefficients are grouped into precincts. Based on the positioning of the oblique slice, compressed data from only certain precincts is needed to render the slice. The client communicates this information to the server so that the server can transmit only relevant compressed data. We also discuss the use of caching on the client side for further reduction in bandwidth requirements. Finally, we present simulation results to quantify the bandwidth savings for rendering a series of oblique slices.
The effect of lossy image compression on image classification
NASA Technical Reports Server (NTRS)
Paola, Justin D.; Schowengerdt, Robert A.
1995-01-01
We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.
A multicenter observer performance study of 3D JPEG2000 compression of thin-slice CT.
Erickson, Bradley J; Krupinski, Elizabeth; Andriole, Katherine P
2010-10-01
The goal of this study was to determine the compression level at which 3D JPEG2000 compression of thin-slice CTs of the chest and abdomen-pelvis becomes visually perceptible. A secondary goal was to determine if residents in training and non-physicians are substantially different from experienced radiologists in their perception of compression-related changes. This study used multidetector computed tomography 3D datasets with 0.625-1-mm thickness slices of standard chest, abdomen, or pelvis, clipped to 12 bits. The Kakadu v5.2 JPEG2000 compression algorithm was used to compress and decompress the 80 examinations, creating four sets of images: lossless, 1.5 bpp (8:1), 1 bpp (12:1), and 0.75 bpp (16:1). Two randomly selected slices from each examination were shown to observers using a flicker mode paradigm in which observers rapidly toggled between two images, the original and a compressed version, with the task of deciding whether differences between them could be detected. Six staff radiologists, four residents, and six PhDs experienced in medical imaging (from three institutions) served as observers. Overall, 77.46% of observers detected differences at 8:1, 94.75% at 12:1, and 98.59% at 16:1 compression levels. Across all compression levels, the staff radiologists noted differences 64.70% of the time, the residents detected differences 71.91% of the time, and the PhDs detected differences 69.95% of the time. Even mild compression is perceptible with current technology. The ability to detect differences does not equate to diagnostic differences, although perception of compression artifacts could affect diagnostic decision making and diagnostic workflow.
Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain
Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo
2012-01-01
An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. At last, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality.
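To make the vector-quantization stage above concrete, here is a toy fixed-block-size version: subband blocks are clustered with plain k-means and each block would be coded as its nearest codeword index. The paper's variable block sizes, LFD analysis, and energy-based K-means modification are not reproduced; block size and codebook size below are arbitrary.

```python
# Toy vector-quantization codebook training on 4x4 blocks of a subband.
import numpy as np

def blocks(subband: np.ndarray, size: int = 4) -> np.ndarray:
    h, w = subband.shape
    b = [subband[i:i + size, j:j + size].ravel()
         for i in range(0, h - size + 1, size)
         for j in range(0, w - size + 1, size)]
    return np.asarray(b, dtype=float)

def train_codebook(vectors: np.ndarray, k: int = 16, iters: int = 20) -> np.ndarray:
    rng = np.random.default_rng(0)
    codebook = vectors[rng.choice(len(vectors), k, replace=False)]
    for _ in range(iters):
        # assign each block to its nearest codeword, then update codewords
        d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                codebook[c] = vectors[labels == c].mean(axis=0)
    return codebook  # each block is then coded as the index of its nearest codeword
```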
JPEG2000 Image Compression on Solar EUV Images
NASA Astrophysics Data System (ADS)
Fischer, Catherine E.; Müller, Daniel; De Moortel, Ineke
2017-01-01
For future solar missions as well as ground-based telescopes, efficient ways to return and process data have become increasingly important. Solar Orbiter, which is the next ESA/NASA mission to explore the Sun and the heliosphere, is a deep-space mission, which implies a limited telemetry rate that makes efficient onboard data compression a necessity to achieve the mission science goals. Missions like the Solar Dynamics Observatory (SDO) and future ground-based telescopes such as the Daniel K. Inouye Solar Telescope, on the other hand, face the challenge of making petabyte-sized solar data archives accessible to the solar community. New image compression standards address these challenges by implementing efficient and flexible compression algorithms that can be tailored to user requirements. We analyse solar images from the Atmospheric Imaging Assembly (AIA) instrument onboard SDO to study the effect of lossy JPEG2000 (from the Joint Photographic Experts Group 2000) image compression at different bitrates. To assess the quality of compressed images, we use the mean structural similarity (MSSIM) index as well as the widely used peak signal-to-noise ratio (PSNR) as metrics and compare the two in the context of solar EUV images. In addition, we perform tests to validate the scientific use of the lossily compressed images by analysing examples of an on-disc and off-limb coronal-loop oscillation time-series observed by AIA/SDO.
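The two metrics compared in the study can be computed directly with scikit-image; the arrays below are synthetic stand-ins for an original and a JPEG2000-compressed AIA image, and data_range must match the images' actual dynamic range.

```python
# PSNR and (M)SSIM between an original and a compressed image.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

original = np.random.rand(512, 512).astype(np.float32)
compressed = np.clip(original + 0.01 * np.random.randn(512, 512), 0, 1).astype(np.float32)

psnr = peak_signal_noise_ratio(original, compressed, data_range=1.0)
mssim = structural_similarity(original, compressed, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, MSSIM = {mssim:.4f}")
```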
Mutual information-based analysis of JPEG2000 contexts.
Liu, Zhen; Karam, Lina J
2005-04-01
Context-based arithmetic coding has been widely adopted in image and video compression and is a key component of the new JPEG2000 image compression standard. In this paper, the contexts used in JPEG2000 are analyzed using the mutual information, which is closely related to the compression performance. We first show that, when combining the contexts, the mutual information between the contexts and the encoded data will decrease unless the conditional probability distributions of the combined contexts are the same. Given I, the initial number of contexts, and F, the final desired number of contexts, there are S(I, F) possible context classification schemes where S(I, F) is called the Stirling number of the second kind. The optimal classification scheme is the one that gives the maximum mutual information. Instead of using an exhaustive search, the optimal classification scheme can be obtained through a modified generalized Lloyd algorithm with the relative entropy as the distortion metric. For binary arithmetic coding, the search complexity can be reduced by using dynamic programming. Our experimental results show that the JPEG2000 contexts capture the correlations among the wavelet coefficients very well. At the same time, the number of contexts used as part of the standard can be reduced without loss in the coding performance.
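A short sketch of the quantity being analysed above: the mutual information between a context label and the binary symbol it conditions, estimated from counts. Merging two contexts leaves this value unchanged only when their conditional distributions coincide, which is the criterion discussed in the abstract; the function name and inputs are illustrative.

```python
# Estimate I(C; B) from observed (context, bit) pairs.
import numpy as np

def mutual_information(contexts: np.ndarray, bits: np.ndarray, n_contexts: int) -> float:
    joint = np.zeros((n_contexts, 2))
    np.add.at(joint, (contexts, bits), 1)      # joint counts
    joint /= joint.sum()                       # joint probability P(C, B)
    pc = joint.sum(axis=1, keepdims=True)      # marginal P(C)
    pb = joint.sum(axis=0, keepdims=True)      # marginal P(B)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (pc @ pb)[mask])))
```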
The Pixon Method for Data Compression Image Classification, and Image Reconstruction
NASA Technical Reports Server (NTRS)
Puetter, Richard; Yahil, Amos
2002-01-01
As initially proposed, this program had three goals: (1) continue to develop the highly successful Pixon method for image reconstruction and support other scientists in implementing this technique for their applications; (2) develop image compression techniques based on the Pixon method; and (3) develop artificial intelligence algorithms for image classification based on the Pixon approach for simplifying neural networks. Subsequent to proposal review, the scope of the program was greatly reduced and it was decided to investigate the ability of the Pixon method to provide superior restorations of images compressed with standard image compression schemes, specifically JPEG-compressed images.
Edge-Based Image Compression with Homogeneous Diffusion
NASA Astrophysics Data System (ADS)
Mainberger, Markus; Weickert, Joachim
It is well-known that edges contain semantically important image information. In this paper we present a lossy compression method for cartoon-like images that exploits information at image edges. These edges are extracted with the Marr-Hildreth operator followed by hysteresis thresholding. Their locations are stored in a lossless way using JBIG. Moreover, we encode the grey or colour values at both sides of each edge by applying quantisation, subsampling and PAQ coding. In the decoding step, information outside these encoded data is recovered by solving the Laplace equation, i.e. we inpaint with the steady state of a homogeneous diffusion process. Our experiments show that the suggested method outperforms the widely-used JPEG standard and can even beat the advanced JPEG2000 standard for cartoon-like images.
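A minimal homogeneous-diffusion (Laplace) inpainting step as used in the decoding stage described above: the stored edge values are held fixed while the remaining pixels relax to the steady state via Jacobi iterations. Boundary handling and the actual encoder-side choices are simplified here.

```python
# Fill unknown pixels with the steady state of the Laplace equation, keeping
# the encoded edge pixels (the mask) fixed.
import numpy as np

def laplace_inpaint(values: np.ndarray, known: np.ndarray, iters: int = 2000) -> np.ndarray:
    """values: array with correct data at known==True positions (elsewhere arbitrary)."""
    u = values.astype(float).copy()
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(known, values, avg)   # keep the encoded edge values fixed
    return u
```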
Observer performance assessment of JPEG-compressed high-resolution chest images
NASA Astrophysics Data System (ADS)
Good, Walter F.; Maitz, Glenn S.; King, Jill L.; Gennari, Rose C.; Gur, David
1999-05-01
The JPEG compression algorithm was tested on a set of 529 chest radiographs that had been digitized at a spatial resolution of 100 micrometer and contrast sensitivity of 12 bits. Images were compressed using five fixed 'psychovisual' quantization tables which produced average compression ratios in the range 15:1 to 61:1, and were then printed onto film. Six experienced radiologists read all cases from the laser printed film, in each of the five compressed modes as well as in the non-compressed mode. For comparison purposes, observers also read the same cases with reduced pixel resolutions of 200 micrometer and 400 micrometer. The specific task involved detecting masses, pneumothoraces, interstitial disease, alveolar infiltrates and rib fractures. Over the range of compression ratios tested, for images digitized at 100 micrometer, we were unable to demonstrate any statistically significant decrease (p greater than 0.05) in observer performance as measured by ROC techniques. However, the observers' subjective assessments of image quality did decrease significantly as image resolution was reduced and suggested a decreasing, but nonsignificant, trend as the compression ratio was increased. The seeming discrepancy between our failure to detect a reduction in observer performance, and other published studies, is likely due to: (1) the higher resolution at which we digitized our images; (2) the higher signal-to-noise ratio of our digitized films versus typical CR images; and (3) our particular choice of an optimized quantization scheme.
Improved compression technique for multipass color printers
NASA Astrophysics Data System (ADS)
Honsinger, Chris
1998-01-01
A multipass color printer prints a color image by printing one color plane at a time in a prescribed order, e.g., in a four-color system, the cyan plane may be printed first, the magenta next, and so on. It is desirable to discard the data related to each color plane once it has been printed, so that data from the next print may be downloaded. In this paper, we present a compression scheme that allows the release of a color plane memory, but still takes advantage of the correlation between the color planes. The compression scheme is based on a block adaptive technique for decorrelating the color planes followed by a spatial lossy compression of the decorrelated data. A preferred method of lossy compression is the DCT-based JPEG compression standard, as it is shown that the block adaptive decorrelation operations can be efficiently performed in the DCT domain. The results of the compression technique are compared to those of using JPEG on RGB data without any decorrelating transform. In general, the technique is shown to improve the compression performance over a practical range of compression ratios by at least 30 percent in all images, and up to 45 percent in some images.
A new security solution to JPEG using hyper-chaotic system and modified zigzag scan coding
NASA Astrophysics Data System (ADS)
Ji, Xiao-yong; Bai, Sen; Guo, Yu; Guo, Hui
2015-05-01
Though JPEG is an excellent compression standard for images, it does not provide any security. Thus, a security solution for JPEG was proposed in Zhang et al. (2014). But there are some flaws in Zhang's scheme, and in this paper we propose a new scheme based on a discrete hyper-chaotic system and modified zigzag scan coding. By shuffling the identifiers of the zigzag-scan-encoded sequence with a hyper-chaotic sequence and precisely encrypting, in the zigzag-scan-encoded domain, those coefficients that have little relationship with the correlation of the plain image, we achieve high compression performance and robust security simultaneously. Meanwhile, we present and analyze the flaws in Zhang's scheme through theoretical analysis and experimental verification, and give comparisons between our scheme and Zhang's. Simulation results verify that our method has better performance in security and efficiency.
Image Size Variation Influence on Corrupted and Non-viewable BMP Image
NASA Astrophysics Data System (ADS)
Azmi, Tengku Norsuhaila T.; Azma Abdullah, Nurul; Rahman, Nurul Hidayah Ab; Hamid, Isredza Rahmi A.; Chai Wen, Chuah
2017-08-01
Images are one of the evidence components sought in digital forensics. The Joint Photographic Experts Group (JPEG) format is the most popular on the Internet because JPEG files are lossy and easy to compress, which speeds up Internet transmission. However, corrupted JPEG images are hard to recover due to the complexity of determining the corruption point. Nowadays Bitmap (BMP) images are preferred in image processing compared to other formats because a BMP image contains all the image information in a simple format. Therefore, in order to investigate the corruption point in JPEG, the file is required to be converted into BMP format. Nevertheless, there are many things that can influence the corruption of a BMP image, such as changes of image size that make the file non-viewable. In this paper, the experiment indicates that the size of a BMP file influences changes in the image itself under three conditions: deletion, replacement and insertion. From the experiment, we learnt that by correcting the file size, a viewable file can be produced, though only partially. It can then be investigated further to identify the corruption point.
Codestream-Based Identification of JPEG 2000 Images with Different Coding Parameters
NASA Astrophysics Data System (ADS)
Watanabe, Osamu; Fukuhara, Takahiro; Kiya, Hitoshi
A method of identifying JPEG 2000 images with different coding parameters, such as code-block sizes, quantization-step sizes, and resolution levels, is presented. It does not produce false-negative matches regardless of differences in coding parameters (compression rate, code-block size, and discrete wavelet transform (DWT) resolution levels) or quantization step sizes. This feature is not provided by conventional methods. Moreover, the proposed approach is fast because it uses the number of zero-bit-planes that can be extracted from the JPEG 2000 codestream by only parsing the header information, without embedded block coding with optimized truncation (EBCOT) decoding. The experimental results revealed the effectiveness of image identification based on the new method.
Overview of the JPEG XS objective evaluation procedures
NASA Astrophysics Data System (ADS)
Willème, Alexandre; Richter, Thomas; Rosewarne, Chris; Macq, Benoit
2017-09-01
JPEG XS is a standardization activity conducted by the Joint Photographic Experts Group (JPEG), formally known as ISO/IEC SC29 WG1, that aims at standardizing a low-latency, lightweight, and visually lossless video compression scheme. This codec is intended for applications where image sequences would otherwise be transmitted or stored in uncompressed form, such as live production (over SDI or IP transport), display links, or frame buffers. Support for compression ratios ranging from 2:1 to 6:1 allows significant bandwidth and power reduction for signal propagation. This paper describes the objective quality assessment procedures conducted as part of the JPEG XS standardization activity. First, it discusses the objective part of the experiments that led to the technology selection during the 73rd WG1 meeting in late 2016. This assessment consists of PSNR measurements after single and multiple compression-decompression cycles at various compression ratios. After this assessment phase, two of the six responses to the CfP were selected and merged to form the first JPEG XS test model (XSM). The paper then describes the core experiments (CEs) conducted so far on the XSM. These experiments evaluate its performance in more challenging scenarios, such as insertion of picture overlays and robustness to frame editing, assess the impact of the different algorithmic choices, and measure the XSM's performance using the HDR-VDP metric.
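The "PSNR after multiple compression-decompression cycles" measurement described above can be reproduced in spirit with any codec. A minimal sketch follows, using Pillow's JPEG encoder as a stand-in (the actual JPEG XS test conditions, anchors, and rates are not reproduced here; the function names and the quality setting are illustrative assumptions).

```python
import io
import numpy as np
from PIL import Image

def psnr(ref, test, peak=255.0):
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def multigeneration_psnr(img, cycles=10, quality=85):
    """PSNR against the original after each of `cycles` repeated JPEG
    compress/decompress passes (multi-generation robustness)."""
    ref = np.asarray(img.convert('RGB'))
    cur = img.convert('RGB')
    scores = []
    for _ in range(cycles):
        buf = io.BytesIO()
        cur.save(buf, format='JPEG', quality=quality)
        buf.seek(0)
        cur = Image.open(buf).convert('RGB')
        scores.append(psnr(ref, np.asarray(cur)))
    return scores

# Example: scores = multigeneration_psnr(Image.open('frame.png'), cycles=5, quality=90)
```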
Kim, Bohyoung; Lee, Kyoung Ho; Kim, Kil Joong; Mantiuk, Rafal; Kim, Hye-ri; Kim, Young Hoon
2008-06-01
The objective of our study was to assess the effects of compressing source thin-section abdominal CT images on final transverse average-intensity-projection (AIP) images. At reversible, 4:1, 6:1, 8:1, 10:1, and 15:1 Joint Photographic Experts Group (JPEG) 2000 compressions, we compared the artifacts in 20 matching compressed thin sections (0.67 mm), compressed thick sections (5 mm), and AIP images (5 mm) reformatted from the compressed thin sections. The artifacts were quantitatively measured with peak signal-to-noise ratio (PSNR) and a perceptual quality metric (High Dynamic Range Visual Difference Predictor [HDR-VDP]). By comparing the compressed and original images, three radiologists independently graded the artifacts as 0 (none, indistinguishable), 1 (barely perceptible), 2 (subtle), or 3 (significant). Friedman tests and exact tests for paired proportions were used. At irreversible compressions, the artifacts tended to increase in the order of AIP, thick-section, and thin-section images in terms of PSNR (p < 0.0001), HDR-VDP (p < 0.0001), and the readers' grading (p < 0.01 at 6:1 or higher compressions). At 6:1 and 8:1, distinguishable pairs (grades 1-3) tended to increase in the order of AIP, thick-section, and thin-section images. Visually lossless threshold for the compression varied between images but decreased in the order of AIP, thick-section, and thin-section images (p < 0.0001). Compression artifacts in thin sections are significantly attenuated in AIP images. On the premise that thin sections are typically reviewed using an AIP technique, it is justifiable to compress them to a compression level currently accepted for thick sections.
A Novel Image Compression Algorithm for High Resolution 3D Reconstruction
NASA Astrophysics Data System (ADS)
Siddeq, M. M.; Rodrigues, M. A.
2014-06-01
This research presents a novel algorithm to compress high-resolution images for accurate structured-light 3D reconstruction. Structured-light images contain a pattern of light and shadows projected on the surface of the object, which are captured by the sensor at very high resolutions. Our algorithm is concerned with compressing such images to a high degree with minimum loss and without adversely affecting 3D reconstruction. The compression algorithm starts with a single-level discrete wavelet transform (DWT) that decomposes an image into four sub-bands. The LL sub-band is transformed by DCT, yielding a DC-matrix and an AC-matrix. The Minimize-Matrix-Size Algorithm is used to compress the AC-matrix, while a DWT is applied again to the DC-matrix, producing LL2, HL2, LH2 and HH2 sub-bands. The LL2 sub-band is transformed by DCT, while the Minimize-Matrix-Size Algorithm is applied to the other sub-bands. The proposed algorithm has been tested with images of different sizes within a 3D reconstruction scenario. The algorithm is demonstrated to be more effective than JPEG2000 and JPEG, achieving higher compression rates with equivalent perceived quality and more accurate reconstruction of the 3D models.
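The first stage of the cascade described above (one DWT level, then a DCT of the LL sub-band) can be sketched in a few lines. This is only an illustration under assumptions: the abstract does not name the wavelet filter, so a Haar DWT is used here for simplicity, and the paper's DC/AC split and Minimize-Matrix-Size coding are not reproduced.

```python
import numpy as np
from scipy.fft import dctn

def haar_dwt2(x):
    """One-level 2-D Haar DWT returning the LL, HL, LH, HH sub-bands."""
    x = x.astype(np.float64)
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return ((a + b + c + d) / 2,   # LL: local averages
            (a - b + c - d) / 2,   # HL: horizontal detail
            (a + b - c - d) / 2,   # LH: vertical detail
            (a - b - c + d) / 2)   # HH: diagonal detail

def first_stage(image):
    """DWT of the image, then a DCT of the LL sub-band, mirroring the structure
    of the cascade described in the abstract (later coding steps omitted)."""
    ll, hl, lh, hh = haar_dwt2(image)
    return dctn(ll, norm='ortho'), (hl, lh, hh)
```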
Embedding intensity image into a binary hologram with strong noise resistant capability
NASA Astrophysics Data System (ADS)
Zhuang, Zhaoyong; Jiao, Shuming; Zou, Wenbin; Li, Xia
2017-11-01
A digital hologram can be employed as a host image for image watermarking applications to protect information security. Past research demonstrates that a gray-level intensity image can be embedded into a binary Fresnel hologram by an error diffusion method or a bit truncation coding method. However, the fidelity of the watermark image retrieved from the binary hologram is generally not satisfactory, especially when the binary hologram is contaminated with noise. To address this problem, we propose a JPEG-BCH encoding method in this paper. First, we employ the JPEG standard to compress the intensity image into a binary bit stream. Next, we encode the binary bit stream with a BCH code to obtain error correction capability. Finally, the JPEG-BCH code is embedded into the binary hologram. In this way, the intensity image can be retrieved with high fidelity by a BCH-JPEG decoder even if the binary hologram suffers from serious noise contamination. Numerical simulation results show that the image quality of the retrieved intensity image with our proposed method is superior to that of the state-of-the-art methods reported.
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.
2001-12-01
A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce significant blocking artifacts at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and compression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this, the third in a series of papers, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST. Performance analysis criteria include time and space complexity and quality of the decompressed image, the latter determined by rate-distortion data obtained from a database of realistic test images. Discussion also covers issues such as robustness of the compressed format to channel noise. EBLAST has been shown to outperform JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.
A Unified Steganalysis Framework
2013-04-01
contains more than 1800 images of different scenes. In the experiments, we used four JPEG-based steganography techniques: Outguess [13], F5 [16], model... also compressed these images again, since some of the steganography methods double-compress the images. Stego-images are generated by embedding... randomly chosen messages (in bits) into 1600 grayscale images using each of the four steganography techniques. A random message length was determined
Design of a motion JPEG (M/JPEG) adapter card
NASA Astrophysics Data System (ADS)
Lee, D. H.; Sudharsanan, Subramania I.
1994-05-01
In this paper we describe the design of a high-performance JPEG (Joint Photographic Experts Group) Micro Channel adapter card. The card, tested on a range of PS/2 platforms (models 50 to 95), can complete JPEG operations on a 640 by 240 pixel image within 1/60 of a second, thus enabling real-time capture and display of high-quality digital video. The card accepts digital pixels from either a YUV 4:2:2 or an RGB 4:4:4 pixel bus and has been shown to handle up to 2.05 MBytes/second of compressed data. The compressed data is transmitted to a host memory area by Direct Memory Access operations. The card uses a single C-Cube CL550 JPEG processor that complies with baseline JPEG. We give broad descriptions of the hardware that controls the video interface, the CL550, and the system interface, and point out some critical design points that enhance the overall performance of M/JPEG systems. The adapter card is controlled by interrupt-driven software running under DOS, which performs a variety of tasks including changing the color space (RGB or YUV), changing the quantization and Huffman tables, odd and even field control, and some diagnostic operations.
Color image lossy compression based on blind evaluation and prediction of noise characteristics
NASA Astrophysics Data System (ADS)
Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena
2011-03-01
The paper deals with adaptive JPEG lossy compression of color images produced by digital cameras. Adaptation to the noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner, its characteristics are estimated, and a scaling factor that determines the quantization steps for the default JPEG table is then set adaptively. Within this general framework, two strategies are considered. The first performs blind estimation on the image after all operations in the digital image processing chain, just before compressing the raster image. The second predicts noise and blur parameters from analysis of the RAW image under quite general assumptions about the transformations the image will undergo at later processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger gain in image compression ratio (CR) compared to the super-high-quality (SHQ) mode, but it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on a large set of real-life color images acquired by digital cameras and are shown to provide more than a two-fold increase in average CR compared to the SHQ mode without introducing visible distortions relative to SHQ-compressed images.
An RBF-based compression method for image-based relighting.
Leung, Chi-Sing; Wong, Tien-Tsin; Lam, Ping-Man; Choy, Kwok-Hung
2006-04-01
In image-based relighting, a pixel is associated with a number of sampled radiance values. This paper presents a two-level compression method. In the first level, the plenoptic property of a pixel is approximated by a spherical radial basis function (SRBF) network. That means that the spherical plenoptic function of each pixel is represented by a number of SRBF weights. In the second level, we apply a wavelet-based method to compress these SRBF weights. To reduce the visual artifact due to quantization noise, we develop a constrained method for estimating the SRBF weights. Our proposed approach is superior to JPEG, JPEG2000, and MPEG. Compared with the spherical harmonics approach, our approach has a lower complexity, while the visual quality is comparable. The real-time rendering method for our SRBF representation is also discussed.
A threshold-based fixed predictor for JPEG-LS image compression
NASA Astrophysics Data System (ADS)
Deng, Lihua; Huang, Zhenghua; Yao, Shoukui
2018-03-01
In JPEG-LS, the fixed predictor based on the median edge detector (MED) only detects horizontal and vertical edges, and thus produces large prediction errors in the vicinity of diagonal edges. In this paper, we propose a threshold-based edge detection scheme for the fixed predictor. The proposed scheme detects not only horizontal and vertical edges but also diagonal edges. For certain thresholds, the proposed scheme reduces to other existing schemes, so it can also be regarded as a generalization of them. For a suitable threshold, the accuracy of horizontal and vertical edge detection is higher than that of the existing median edge detector in JPEG-LS. Thus, the proposed fixed predictor outperforms the existing JPEG-LS predictors for all images tested, while the complexity of the overall algorithm is kept at a similar level.
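For context, the standard JPEG-LS MED fixed predictor that the paper improves upon is the simple rule below (this is the well-known baseline; the paper's threshold-based variant is not reproduced here).

```python
def med_predict(a, b, c):
    """Standard JPEG-LS median edge detector (MED) fixed predictor.
    a = left neighbour, b = above neighbour, c = upper-left neighbour."""
    if c >= max(a, b):
        return min(a, b)      # edge detected: predict from the lower neighbour
    elif c <= min(a, b):
        return max(a, b)      # edge detected: predict from the higher neighbour
    else:
        return a + b - c      # smooth region: planar prediction
```

The hard min/max comparisons above are what restrict MED to horizontal and vertical edges; the paper replaces them with threshold tests so that diagonal structure can also be detected.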
A new efficient method for color image compression based on visual attention mechanism
NASA Astrophysics Data System (ADS)
Shao, Xiaoguang; Gao, Kun; Lv, Lily; Ni, Guoqiang
2010-11-01
One of the key procedures in color image compression is to extract regions of interest (ROIs) and assign them different compression ratios. A new non-uniform color image compression algorithm with high efficiency is proposed in this paper, which uses a biologically motivated selective attention model to extract ROIs from natural images effectively. Once the ROIs have been extracted and labeled in the image, the ROIs and the remaining regions are encoded with different compression ratios via the popular JPEG algorithm. Experimental results, with quantitative and qualitative analysis, show strong performance compared with traditional color image compression approaches.
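A crude way to see the effect of ROI-dependent quality is to encode the whole image twice at different JPEG qualities and composite along a mask. The sketch below assumes the ROI mask is already given (the attention model of the paper is not reproduced), and the quality values 90/30 are arbitrary; real ROI coding would allocate bits within a single code-stream rather than composite two decodes.

```python
import io
import numpy as np
from PIL import Image

def roi_jpeg(img, roi_mask, q_roi=90, q_bg=30):
    """Composite of a high-quality and a low-quality JPEG round trip:
    ROI pixels from the high-quality pass, background from the low-quality pass."""
    def jpeg_round_trip(im, q):
        buf = io.BytesIO()
        im.save(buf, format='JPEG', quality=q)
        buf.seek(0)
        return np.asarray(Image.open(buf).convert('RGB'))

    rgb = img.convert('RGB')
    hi, lo = jpeg_round_trip(rgb, q_roi), jpeg_round_trip(rgb, q_bg)
    mask = roi_mask[..., None].astype(bool)          # H x W boolean ROI mask
    return Image.fromarray(np.where(mask, hi, lo))
```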
An effective and efficient compression algorithm for ECG signals with irregular periods.
Chou, Hsiao-Hsuan; Chen, Ying-Jui; Shiau, Yu-Chien; Kuo, Te-Son
2006-06-01
This paper presents an effective and efficient preprocessing algorithm for two-dimensional (2-D) electrocardiogram (ECG) compression, designed to better compress irregular ECG signals by exploiting their inter- and intra-beat correlations. To better reveal the correlation structure, we first convert the ECG signal into a proper 2-D representation, or image. This involves several steps, including QRS detection and alignment, period sorting, and length equalization. The resulting 2-D ECG representation is then compressed by an appropriate image compression algorithm; we choose the state-of-the-art JPEG2000 for its high efficiency and flexibility. The proposed algorithm is shown to outperform existing methods in the literature by simultaneously achieving high compression ratio (CR), low percent root-mean-squared difference (PRD), low maximum error (MaxErr), and low standard deviation of errors (StdErr). In particular, because the proposed period sorting method rearranges the detected heartbeats into a smoother image that is easier to compress, the algorithm is insensitive to irregular ECG periods, so both irregular ECG signals and QRS false-detection cases can be compressed well. This is a significant improvement over existing 2-D ECG compression methods. Moreover, the algorithm is not tied exclusively to JPEG2000; it can also be combined with other 2-D preprocessing methods or appropriate codecs to enhance compression performance in irregular ECG cases.
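The signal-to-image conversion described above (beat segmentation, length equalization, and period sorting) can be sketched as follows. This is a simplified illustration under assumptions: the R-peak indices are taken as given (QRS detection not shown), beats are resampled to an arbitrary fixed width, and the ordering criterion here (correlation with the mean beat) is a stand-in for the paper's period sorting.

```python
import numpy as np

def ecg_to_image(signal, r_peaks, width=256):
    """Cut an ECG into beats at the given R-peak indices, resample each beat to a
    common length, and order the rows so adjacent rows are similar. The result is
    a 2-D array that a still-image codec (e.g. JPEG2000) can compress efficiently."""
    beats = []
    for start, end in zip(r_peaks[:-1], r_peaks[1:]):
        beat = signal[start:end].astype(np.float64)
        # length equalization: resample every beat to `width` samples
        xs = np.linspace(0, len(beat) - 1, width)
        beats.append(np.interp(xs, np.arange(len(beat)), beat))
    beats = np.array(beats)
    # simplified period sorting: order beats by similarity to the mean beat
    mean_beat = beats.mean(axis=0)
    order = np.argsort([-np.corrcoef(b, mean_beat)[0, 1] for b in beats])
    return beats[order]
```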
JPEG XS call for proposals subjective evaluations
NASA Astrophysics Data System (ADS)
McNally, David; Bruylants, Tim; Willème, Alexandre; Ebrahimi, Touradj; Schelkens, Peter; Macq, Benoit
2017-09-01
In March 2016 the Joint Photographic Experts Group (JPEG), formally known as ISO/IEC SC29 WG1, issued a call for proposals soliciting compression technologies for a low-latency, lightweight and visually transparent video compression scheme. Within the JPEG family of standards, this scheme was denominated JPEG XS. The subjective evaluation of visually lossless compressed video sequences at high resolutions and bit depths poses particular challenges. This paper describes the adopted procedures, the subjective evaluation setup, the evaluation process and summarizes the obtained results which were achieved in the context of the JPEG XS standardization process.
6 CFR 37.31 - Source document retention.
Code of Federal Regulations, 2014 CFR
2014-01-01
... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...
6 CFR 37.31 - Source document retention.
Code of Federal Regulations, 2012 CFR
2012-01-01
... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...
6 CFR 37.31 - Source document retention.
Code of Federal Regulations, 2010 CFR
2010-01-01
... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...
6 CFR 37.31 - Source document retention.
Code of Federal Regulations, 2011 CFR
2011-01-01
... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...
6 CFR 37.31 - Source document retention.
Code of Federal Regulations, 2013 CFR
2013-01-01
... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...
Fast computational scheme of image compression for 32-bit microprocessors
NASA Technical Reports Server (NTRS)
Kasperovich, Leonid
1994-01-01
This paper presents a new computational scheme for image compression based on the discrete cosine transform (DCT), which underlies the JPEG and MPEG international standards. The algorithm for the 2-D DCT computation uses integer operations only (register shifts and additions/subtractions); its computational complexity is about 8 additions per image pixel. As a meaningful example of an on-board image compression application, we consider the software implementation of the algorithm for the Mars Rover (Marsokhod, in Russian) imaging system being developed as part of the Mars-96 international space project. It is shown that a fast software solution for 32-bit microprocessors may compete with DCT-based image compression hardware.
Costa, Marcus V C; Carvalho, Joao L A; Berger, Pedro A; Zaghetto, Alexandre; da Rocha, Adson F; Nascimento, Francisco A O
2009-01-01
We present a new preprocessing technique for two-dimensional compression of surface electromyographic (S-EMG) signals, based on correlation sorting. We show that the JPEG2000 coding system (originally designed for compression of still images) and the H.264/AVC encoder (video compression algorithm operating in intraframe mode) can be used for compression of S-EMG signals. We compare the performance of these two off-the-shelf image compression algorithms for S-EMG compression, with and without the proposed preprocessing step. Compression of both isotonic and isometric contraction S-EMG signals is evaluated. The proposed methods were compared with other S-EMG compression algorithms from the literature.
Compression for radiological images
NASA Astrophysics Data System (ADS)
Wilson, Dennis L.
1992-07-01
The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. For archiving, however, the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of CCITT JPEG compression that suppresses the DCT blocking artifacts except in areas of very high contrast.
Applications of the JPEG standard in a medical environment
NASA Astrophysics Data System (ADS)
Wittenberg, Ulrich
1993-10-01
JPEG is a very versatile image coding and compression standard for single images. Medical images place higher demands on image quality and precision than the usual 'pretty pictures'. In this paper the potential applications of the various JPEG coding modes in a medical environment are evaluated. For legal reasons the lossless modes are especially interesting. The spatial modes are equally important because medical data may well exceed the maximum of 12-bit precision allowed for the DCT modes; the performance of the spatial predictors is investigated. From the user's point of view, the progressive modes, which provide a fast but coarse approximation of the final image, reduce the subjective waiting time and hence the user's frustration. Even the lossy modes will find some applications, but they have to be handled with care, because repeated lossy coding and decoding leads to a degradation of the image quality; the amount of this degradation is investigated. The JPEG standard alone is not sufficient for a PACS because it does not store enough additional data, such as the creation date or details of the imaging modality. It will therefore be an embedded coding format within standards like TIFF or ACR/NEMA. It is concluded that the JPEG standard is versatile enough to match the requirements of the medical community.
JPEG2000 vs. full frame wavelet packet compression for smart card medical records.
Leehan, Joaquín Azpirox; Lerallut, Jean-Francois
2006-01-01
This paper describes a comparison among different compression methods to be used in the context of electronic health records in the newer version of "smart cards". The JPEG2000 standard is compared to a full-frame wavelet packet compression method at high (33:1 and 50:1) compression rates. Results show that the full-frame method outperforms the JPEG2K standard qualitatively and quantitatively.
Impact of JPEG2000 compression on spatial-spectral endmember extraction from hyperspectral data
NASA Astrophysics Data System (ADS)
Martín, Gabriel; Ruiz, V. G.; Plaza, Antonio; Ortiz, Juan P.; García, Inmaculada
2009-08-01
Hyperspectral image compression has received considerable interest in recent years. However, an important issue that has not been investigated in the past is the impact of lossy compression on spectral mixture analysis applications, which characterize mixed pixels in terms of a suitable combination of spectrally pure substances (called endmembers) weighted by their estimated fractional abundances. In this paper, we specifically investigate the impact of JPEG2000 compression of hyperspectral images on the quality of the endmembers extracted by algorithms that incorporate both spectral and spatial information (useful for incorporating contextual information in the endmember search). The two considered algorithms are the automatic morphological endmember extraction (AMEE) and the spatial-spectral endmember extraction (SSEE) techniques. Experiments are conducted using a well-known data set collected by AVIRIS over the Cuprite mining district in Nevada, with detailed ground-truth information available from the U.S. Geological Survey. Our experiments reveal some interesting findings that may be useful to specialists applying spatial-spectral endmember extraction algorithms to compressed hyperspectral imagery.
Image acquisition system using on sensor compressed sampling technique
NASA Astrophysics Data System (ADS)
Gupta, Pravir Singh; Choi, Gwan Seong
2018-01-01
Advances in CMOS technology have made high-resolution image sensors possible. These image sensors pose significant challenges in terms of the amount of raw data generated, energy efficiency, and frame rate. This paper presents a design methodology for an imaging system and a simplified image sensor pixel design to be used in the system so that the compressed sensing (CS) technique can be implemented easily at the sensor level. This results in significant energy savings as it not only cuts the raw data rate but also reduces transistor count per pixel; decreases pixel size; increases fill factor; simplifies analog-to-digital converter, JPEG encoder, and JPEG decoder design; decreases wiring; and reduces the decoder size by half. Thus, CS has the potential to increase the resolution of image sensors for a given technology and die size while significantly decreasing the power consumption and design complexity. We show that it has potential to reduce power consumption by about 23% to 65%.
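The compressed sensing idea the abstract relies on, taking far fewer linear measurements than pixels and recovering the sparse signal off-sensor, can be illustrated generically. This sketch is not the authors' pixel-level design: the random ±1 measurement matrix, the sparsity level, and the use of scikit-learn's orthogonal matching pursuit solver are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                      # signal length, measurements, sparsity

# a k-sparse signal (e.g., an image block that is sparse in some transform domain)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# random +/-1 measurement matrix, the kind of pattern a CS pixel array could apply
phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
y = phi @ x                               # the compressed measurements leaving the sensor

# off-sensor sparse recovery (here: orthogonal matching pursuit)
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(phi, y)
x_hat = omp.coef_
print("reconstruction error:", np.linalg.norm(x - x_hat))
```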
About a method for compressing x-ray computed microtomography data
NASA Astrophysics Data System (ADS)
Mancini, Lucia; Kourousias, George; Billè, Fulvio; De Carlo, Francesco; Fidler, Aleš
2018-04-01
The management of scientific data is of high importance, especially for experimental techniques that produce large data volumes. One such technique is x-ray computed tomography (CT), whose community has introduced advanced data formats that allow better management of experimental data. Rather than the organization of the data and the associated metadata, the main topic of this work is data compression and its applicability to experimental data collected from a synchrotron-based CT beamline at the Elettra-Sincrotrone Trieste facility (Italy), studying images acquired from various types of samples. This study covers parallel-beam geometry, but it could easily be extended to a cone-beam one. The reconstruction workflow used is the one currently in operation at the beamline. Contrary to standard image compression studies, this manuscript proposes a systematic framework and workflow for the critical examination of different compression techniques and applies it to experimental data. Beyond the methodological framework, this study presents and examines the use of JPEG-XR in combination with the HDF5 and TIFF formats, providing insights and strategies on data compression and image quality issues that can be used and implemented at other synchrotron facilities and laboratory systems. In conclusion, projection data compression using JPEG-XR appears to be a promising, efficient method to reduce data file size and thus to facilitate data handling and image reconstruction.
The effects of video compression on acceptability of images for monitoring life sciences experiments
NASA Astrophysics Data System (ADS)
Haines, Richard F.; Chuang, Sherry L.
1992-07-01
Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine whether video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Experts Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters according to scientific discipline and experiment type is critical to the success of remote experiments.
Toward objective image quality metrics: the AIC Eval Program of the JPEG
NASA Astrophysics Data System (ADS)
Richter, Thomas; Larabi, Chaker
2008-08-01
Objective quality assessment of lossy image compression codecs is an important part of the recent JPEG call for Advanced Image Coding. The target of the AIC ad-hoc group is twofold: first, to receive state-of-the-art still image codecs and to propose suitable technology for standardization; and second, to study objective image quality metrics to evaluate the performance of such codecs. Even though the performance of an objective metric is defined by how well it predicts the outcome of a subjective assessment, one can also study the usefulness of a metric indirectly, in a non-traditional way, namely by measuring the subjective quality improvement of a codec that has been optimized for that specific objective metric. This approach is demonstrated here on the recently proposed HD Photo format [14] introduced by Microsoft and an SSIM-tuned [17] version of it by one of the authors. We compare these two implementations with JPEG [1] in two variations and with a visually and PSNR-optimal JPEG2000 [13] implementation. To this end, we use subjective and objective tests based on multiscale SSIM and a new DCT-based metric.
Optimal color coding for compression of true color images
NASA Astrophysics Data System (ADS)
Musatenko, Yurij S.; Kurashov, Vitalij N.
1998-11-01
In this paper we present a method that improves lossy compression of true color and other multispectral images. The essence of the method is to project the initial color planes onto a Karhunen-Loeve (KL) basis, which gives a completely decorrelated representation of the image, and to compress the basis functions instead of the planes. To do this, a new fast algorithm for true KL basis construction with low memory consumption is suggested, and our recently proposed scheme for finding the optimal losses of the KL functions during compression is used. Compared to standard JPEG compression of CMYK images, the method provides a PSNR gain of 0.2 to 2 dB at typical compression ratios. Experimental results are obtained for high-resolution CMYK images. It is demonstrated that the presented scheme can run on common hardware.
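Projecting color planes onto their KL (PCA) basis is a standard decorrelation step and can be sketched directly; the paper's fast basis-construction algorithm and its loss-allocation scheme are not reproduced here, and the function names below are illustrative.

```python
import numpy as np

def kl_decorrelate(image):
    """Project the color planes of an H x W x C image onto their Karhunen-Loeve
    (PCA) basis. Returns the decorrelated planes plus the data needed to invert."""
    h, w, c = image.shape
    planes = image.reshape(-1, c).astype(np.float64)      # one row per pixel
    mean = planes.mean(axis=0)
    cov = np.cov(planes - mean, rowvar=False)              # C x C covariance
    eigvals, eigvecs = np.linalg.eigh(cov)                 # KL basis vectors
    order = np.argsort(eigvals)[::-1]                      # strongest component first
    basis = eigvecs[:, order]
    kl_planes = (planes - mean) @ basis                    # decorrelated planes
    return kl_planes.reshape(h, w, c), basis, mean

def kl_reconstruct(kl_planes, basis, mean):
    h, w, c = kl_planes.shape
    return (kl_planes.reshape(-1, c) @ basis.T + mean).reshape(h, w, c)
```

Each decorrelated plane would then be compressed separately (e.g. with a JPEG-style coder), which is the stage whose loss allocation the paper optimizes.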
Compressing images for the Internet
NASA Astrophysics Data System (ADS)
Beretta, Giordano B.
1998-01-01
The World Wide Web has rapidly become the hot new mass communications medium. Content creators are using similar design and layout styles as in printed magazines, i.e., with many color images and graphics. The information is transmitted over plain telephone lines, where the speed/price trade-off is much more severe than in the case of printed media. The standard design approach is to use palettized color and to limit as much as possible the number of colors used, so that the images can be encoded with a small number of bits per pixel using the Graphics Interchange Format (GIF) file format. The World Wide Web standards contemplate a second data encoding method (JPEG) that allows color fidelity but usually performs poorly on text, which is a critical element of information communicated on this medium. We analyze the spatial compression of color images and describe a methodology for using the JPEG method in a way that allows a compact representation while preserving full color fidelity.
Storage, retrieval, and edit of digital video using Motion JPEG
NASA Astrophysics Data System (ADS)
Sudharsanan, Subramania I.; Lee, D. H.
1994-04-01
In a companion paper we describe a Micro Channel adapter card that can perform real-time JPEG (Joint Photographic Experts Group) compression of a 640 by 480 24-bit image within 1/30th of a second. Since this corresponds to NTSC video rates at considerably good perceptual quality, this system can be used for real-time capture and manipulation of continuously fed video. To facilitate capturing the compressed video in a storage medium, an IBM Bus master SCSI adapter with cache is utilized. Efficacy of the data transfer mechanism is considerably improved using the System Control Block architecture, an extension to Micro Channel bus masters. We show experimental results that the overall system can perform at compressed data rates of about 1.5 MBytes/second sustained and with sporadic peaks to about 1.8 MBytes/second depending on the image sequence content. We also describe mechanisms to access the compressed data very efficiently through special file formats. This in turn permits creation of simpler sequence editors. Another advantage of the special file format is easy control of forward, backward and slow motion playback. The proposed method can be extended for design of a video compression subsystem for a variety of personal computing systems.
NASA Astrophysics Data System (ADS)
Mansoor, Awais; Robinson, J. Paul; Rajwa, Bartek
2009-02-01
Modern automated microscopic imaging techniques such as high-content screening (HCS), high-throughput screening, 4D imaging, and multispectral imaging are capable of producing hundreds to thousands of images per experiment. For quick retrieval, fast transmission, and storage economy, these images should be saved in a compressed format. A considerable number of techniques based on interband and intraband redundancies of multispectral images have been proposed in the literature for the compression of multispectral and 3D temporal data. However, these works have been carried out mostly in the fields of remote sensing and video processing. Compression for multispectral optical microscopy imaging, with its own set of specialized requirements, has remained under-investigated. Digital-photography-oriented 2D compression techniques like JPEG (ISO/IEC IS 10918-1) and JPEG2000 (ISO/IEC 15444-1) are generally adopted for multispectral images; they optimize visual quality but do not necessarily preserve the integrity of scientific data, not to mention the suboptimal performance of 2D compression techniques on 3D images. Herein we report our work on a new low-bit-rate wavelet-based compression scheme for multispectral fluorescence biological imaging. The sparsity of significant coefficients in the high-frequency subbands of multispectral microscopic images is found to be much greater than in natural images; therefore a quad-tree concept such as the SPIHT of Said et al. [1], along with the correlation of insignificant wavelet coefficients, is exploited to further reduce redundancy in the high-frequency subbands. Our work proposes a 3D extension to SPIHT, incorporating a new hierarchical inter- and intra-spectral relationship among the coefficients of the 3D wavelet-decomposed image. The new relationship, apart from adopting the parent-child relationship of classical SPIHT, also introduces a conditional "sibling" relationship by relating only the insignificant wavelet coefficients of subbands at the same level of decomposition. The insignificant quadtrees in different subbands of the high-frequency subband class are coded by a combined function to reduce redundancy. A number of experiments conducted on microscopic multispectral images have shown promising results for the proposed method over current state-of-the-art image-compression techniques.
NASA Astrophysics Data System (ADS)
Martin, Gabriel; Gonzalez-Ruiz, Vicente; Plaza, Antonio; Ortiz, Juan P.; Garcia, Inmaculada
2010-07-01
Lossy hyperspectral image compression has received considerable interest in recent years due to the extremely high dimensionality of the data. However, the impact of lossy compression on spectral unmixing techniques has not been widely studied. These techniques characterize mixed pixels (resulting from insufficient spatial resolution) in terms of a suitable combination of spectrally pure substances (called endmembers) weighted by their estimated fractional abundances. This paper focuses on the impact of JPEG2000-based lossy compression of hyperspectral images on the quality of the endmembers extracted by different algorithms. The three considered algorithms are the orthogonal subspace projection (OSP), which uses only spatial information, and the automatic morphological endmember extraction (AMEE) and spatial spectral endmember extraction (SSEE), which integrate both spatial and spectral information in the search for endmembers. The impact of compression on the resulting abundance estimation based on the endmembers derived by different methods is also substantiated. Experimental results are conducted using a hyperspectral data set collected by NASA Jet Propulsion Laboratory over the Cuprite mining district in Nevada. The experimental results are quantitatively analyzed using reference information available from U.S. Geological Survey, resulting in recommendations to specialists interested in applying endmember extraction and unmixing algorithms to compressed hyperspectral data.
Research on lossless compression of true color RGB image with low time and space complexity
NASA Astrophysics Data System (ADS)
Pan, ShuLin; Xie, ChengJun; Xu, Lin
2008-12-01
Correlated redundancy in space and across color channels is removed by using a DWT lifting scheme and an algebraic transform among the RGB components, which reduces the complexity of the image. This paper proposes an improved Rice coding algorithm together with an enumerating DWT lifting scheme that fits images of any size through image renormalization. The algorithm codes and decodes the pixels of an image without backtracking, supports LOCO-I, and can also be applied to a coder/decoder. Simulation analysis indicates that the proposed method achieves high image compression. Compared with Lossless-JPEG, PNG (Microsoft), PNG (Rene), PNG (Photoshop), PNG (Anix PicViewer), PNG (ACDSee), PNG (Ulead Photo Explorer), JPEG2000, PNG (KoDa Inc), SPIHT and JPEG-LS, the lossless compression ratio improves by 45%, 29%, 25%, 21%, 19%, 17%, 16%, 15%, 11%, 10.5% and 10%, respectively, on 24 RGB images provided by KoDa Inc. On a Pentium IV with a 2.20 GHz CPU and 256 MB RAM, the proposed coder runs about 21 times faster than SPIHT with roughly 166% better efficiency, and the decoder runs about 17 times faster than SPIHT with roughly 128% better efficiency.
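The abstract does not specify the algebraic transform used among the RGB components, so as a generic illustration of lossless inter-component decorrelation the sketch below shows the JPEG2000 reversible color transform (RCT), an integer transform that is exactly invertible; it is a stand-in, not the paper's transform.

```python
import numpy as np

def rct_forward(r, g, b):
    """JPEG2000 reversible color transform (RCT): integer, exactly invertible."""
    r, g, b = (x.astype(np.int32) for x in (r, g, b))
    y = (r + 2 * g + b) >> 2          # luma-like component (floor division by 4)
    u = b - g                          # chroma differences
    v = r - g
    return y, u, v

def rct_inverse(y, u, v):
    g = y - ((u + v) >> 2)
    b = u + g
    r = v + g
    return r, g, b
```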
NASA Technical Reports Server (NTRS)
Robinson, Julie A.; Webb, Edward L.; Evangelista, Arlene
2000-01-01
Studies that utilize astronaut-acquired orbital photographs for visual or digital classification require high-quality data to ensure accuracy. The majority of images available must be digitized from film and electronically transferred to scientific users. This study examined the effect of scanning spatial resolution (1200, 2400 pixels per inch [21.2 and 10.6 microns/pixel]), scanning density range option (Auto, Full) and compression ratio (non-lossy [TIFF], and lossy JPEG 10:1, 46:1, 83:1) on digital classification results of an orbital photograph from the NASA - Johnson Space Center archive. Qualitative results suggested that 1200 ppi was acceptable for visual interpretive uses for major land cover types. Moreover, Auto scanning density range was superior to Full density range. Quantitative assessment of the processing steps indicated that, while 2400 ppi scanning spatial resolution resulted in more classified polygons as well as a substantially greater proportion of polygons < 0.2 ha, overall agreement between 1200 ppi and 2400 ppi was quite high. JPEG compression up to approximately 46:1 also did not appear to have a major impact on quantitative classification characteristics. We conclude that both 1200 and 2400 ppi scanning resolutions are acceptable options for this level of land cover classification, as well as a compression ratio at or below approximately 46:1. Auto range density should always be used during scanning because it acquires more of the information from the film. The particular combination of scanning spatial resolution and compression level will require a case-by-case decision and will depend upon memory capabilities, analytical objectives and the spatial properties of the objects in the image.
Compression of CCD raw images for digital still cameras
NASA Astrophysics Data System (ADS)
Sriram, Parthasarathy; Sudharsanan, Subramania
2005-03-01
Lossless compression of raw CCD images captured using color filter arrays has several benefits. The benefits include improved storage capacity, reduced memory bandwidth, and lower power consumption for digital still camera processors. The paper discusses the benefits in detail and proposes the use of a computationally efficient block adaptive scheme for lossless compression. Experimental results are provided that indicate that the scheme performs well for CCD raw images attaining compression factors of more than two. The block adaptive method also compares favorably with JPEG-LS. A discussion is provided indicating how the proposed lossless coding scheme can be incorporated into digital still camera processors enabling lower memory bandwidth and storage requirements.
A high-throughput two channel discrete wavelet transform architecture for the JPEG2000 standard
NASA Astrophysics Data System (ADS)
Badakhshannoory, Hossein; Hashemi, Mahmoud R.; Aminlou, Alireza; Fatemi, Omid
2005-07-01
The Discrete Wavelet Transform (DWT) is increasingly adopted in image and video compression standards, as indicated by its use in JPEG2000. The lifting scheme is an alternative DWT implementation with lower computational complexity and reduced resource requirements. The JPEG2000 standard introduces two lifting-scheme-based filter banks: the 5/3 and the 9/7. In this paper a high-throughput, two-channel DWT architecture for both JPEG2000 DWT filters is presented. The proposed pipelined architecture has two separate input channels that process the incoming samples simultaneously with minimum memory requirement for each channel. The architecture has been implemented in VHDL and synthesized on a Xilinx Virtex2 XCV1000. The proposed architecture applies the DWT to a 2K by 1K image at 33 fps with a 75 MHz clock frequency. This performance is achieved with 70% fewer resources than two independent single-channel modules. The high throughput and reduced resource requirement make this architecture a proper choice for real-time applications such as Digital Cinema.
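For reference, the reversible 5/3 lifting steps standardized in JPEG2000 (the algorithm this hardware implements) consist of one predict step and one update step. A minimal software sketch is shown below; it assumes an even-length 1-D integer signal and whole-sample symmetric extension at the boundaries, and it is not a description of the paper's pipelined architecture.

```python
import numpy as np

def lift_53_forward(x):
    """One level of the JPEG2000 reversible (5,3) lifting transform on a 1-D
    integer signal of even length. Returns (lowpass s, highpass d)."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # predict: d[n] = x[2n+1] - floor((x[2n] + x[2n+2]) / 2)
    right = np.append(even[1:], even[-1])            # symmetric extension on the right
    d = odd - ((even + right) >> 1)
    # update: s[n] = x[2n] + floor((d[n-1] + d[n] + 2) / 4)
    left = np.insert(d[:-1], 0, d[0])                # symmetric extension on the left
    s = even + ((left + d + 2) >> 2)
    return s, d

def lift_53_inverse(s, d):
    left = np.insert(d[:-1], 0, d[0])
    even = s - ((left + d + 2) >> 2)
    right = np.append(even[1:], even[-1])
    odd = d + ((even + right) >> 1)
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

# s, d = lift_53_forward(signal); signal == lift_53_inverse(s, d) holds exactly.
```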
DCTune Perceptual Optimization of Compressed Dental X-Rays
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)
1996-01-01
In current dental practice, x-rays of completed dental work are often sent to the insurer for verification. It is faster and cheaper to transmit digital scans of the x-rays instead, and further economies result if the images are sent in compressed form. DCTune is a technology for optimizing DCT (discrete cosine transform) quantization matrices to yield maximum perceptual quality for a given bit-rate, or minimum bit-rate for a given perceptual quality, including perceptual optimization of DCT color quantization matrices. In addition, the technology provides a means of setting the perceptual quality of compressed imagery in a systematic way. The purpose of this research was, with respect to dental x-rays, 1) to verify the advantage of DCTune over standard JPEG (Joint Photographic Experts Group), 2) to verify the quality-control feature of DCTune, and 3) to discover regularities in the optimized matrices of a set of images. We optimized matrices for a total of 20 images at two resolutions (150 and 300 dpi) and four bit-rates (0.25, 0.5, 0.75, 1.0 bits/pixel), and examined structural regularities in the resulting matrices. We also conducted psychophysical studies (1) to discover the DCTune quality level at which the images became 'visually lossless,' and (2) to rate the relative quality of DCTune and standard JPEG images at various bit-rates. Results include: (1) at both resolutions, DCTune quality is a linear function of bit-rate; (2) DCTune quantization matrices for all images at all bit-rates and resolutions are modeled well by an inverse Gaussian, with parameters of amplitude and width; (3) as bit-rate is varied, optimal values of both amplitude and width covary in an approximately linear fashion; (4) both amplitude and width vary in a systematic and orderly fashion with either bit-rate or DCTune quality, and simple mathematical functions describe these relationships; (5) in going from 150 to 300 dpi, amplitude parameters are substantially lower and widths larger at corresponding bit-rates or qualities; (6) visually lossless compression occurs at a DCTune quality value of about 1; and (7) at 0.25 bits/pixel, comparative ratings give DCTune a substantial advantage over standard JPEG, an advantage that of necessity diminishes as visually lossless bit-rates are approached. We conclude that DCTune-optimized quantization matrices provide better visual quality than standard JPEG, that meaningful quality levels may be specified by means of the DCTune metric, and that the optimized matrices are very similar across the class of dental x-rays, suggesting the possibility of a 'class-optimal' matrix. DCTune technology appears to provide some value in the context of compressed dental x-rays.
First Digit Law and Its Application to Digital Forensics
NASA Astrophysics Data System (ADS)
Shi, Yun Q.
Digital data forensics, which gathers evidence of data composition, origin, and history, is crucial in our digital world. Although this new research field is still in its infancy, it has started to attract increasing attention from the multimedia-security research community. This lecture addresses the first digit law and its applications to digital forensics. First, the Benford and generalized Benford laws, referred to as the first digit law, are introduced. Then, the application of the first digit law to the detection of JPEG compression history for a given BMP image and to the detection of double JPEG compression is presented. Finally, applying the first digit law to the detection of double MPEG video compression is discussed. It is expected that the first digit law may play an active role in other tasks of digital forensics. The lesson learned is that statistical models play an important role in digital forensics, and for a specific forensic task different models may provide different performance.
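The basic test behind these applications is comparing the first-digit histogram of block-DCT coefficient magnitudes against Benford's prediction P(d) = log10(1 + 1/d). The sketch below shows only that comparison; it assumes the coefficients have already been computed, and it uses the plain Benford law rather than the generalized law, whose fitted parameters are not reproduced here.

```python
import numpy as np

def first_digit_histogram(coeffs):
    """Empirical distribution of the first (most significant) decimal digits of the
    non-zero coefficient magnitudes in `coeffs` (digits 1..9)."""
    mags = np.abs(np.asarray(coeffs, dtype=np.float64))
    mags = mags[mags > 0]
    first = (mags / 10 ** np.floor(np.log10(mags))).astype(int)  # leading digit
    hist = np.bincount(first, minlength=10)[1:10].astype(np.float64)
    return hist / hist.sum()

# Benford's law prediction: P(d) = log10(1 + 1/d), d = 1..9
benford = np.log10(1 + 1 / np.arange(1, 10))

# `coeffs` would be the block-DCT (or quantized JPEG) coefficients of the image
# under test; a large deviation from `benford` hints at prior JPEG compression.
```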
NASA Technical Reports Server (NTRS)
Stanboli, Alice
2013-01-01
Phxtelemproc is a C/C++ based telemetry processing program that processes SFDU telemetry packets from the Telemetry Data System (TDS). It generates Experiment Data Records (EDRs) for several instruments including surface stereo imager (SSI); robotic arm camera (RAC); robotic arm (RA); microscopy, electrochemistry, and conductivity analyzer (MECA); and the optical microscope (OM). It processes both uncompressed and compressed telemetry, and incorporates unique subroutines for the following compression algorithms: JPEG Arithmetic, JPEG Huffman, Rice, LUT3, RA, and SX4. This program was in the critical path for the daily command cycle of the Phoenix mission. The products generated by this program were part of the RA commanding process, as well as the SSI, RAC, OM, and MECA image and science analysis process. Its output products were used to advance science of the near polar regions of Mars, and were used to prove that water is found in abundance there. Phxtelemproc is part of the MIPL (Multi-mission Image Processing Laboratory) system. This software produced Level 1 products used to analyze images returned by in situ spacecraft. It ultimately assisted in operations, planning, commanding, science, and outreach.
López, Carlos; Lejeune, Marylène; Escrivà, Patricia; Bosch, Ramón; Salvadó, Maria Teresa; Pons, Lluis E.; Baucells, Jordi; Cugat, Xavier; Álvaro, Tomás; Jaén, Joaquín
2008-01-01
This study investigates the effects of digital image compression on the automatic quantification of immunohistochemical nuclear markers. We examined 188 images with a previously validated computer-assisted analysis system. A first group was composed of 47 images captured in TIFF format, and the other three groups contained the same images converted from TIFF to JPEG format with 3×, 23× and 46× compression. Counts from the TIFF images were compared with those from the other three groups. Overall, the differences in the counts increased with the degree of compression. Low-complexity images (≤100 cells/field, without clusters or with small-area clusters) showed small differences (<5 cells/field in 95–100% of cases) and high-complexity images showed substantial differences (<35–50 cells/field in 95–100% of cases). Compression does not compromise the accuracy of immunohistochemical nuclear marker counts obtained by computer-assisted analysis systems for digital images with low complexity and could be an efficient method for storing these images. PMID:18755997
Quality Scalability Aware Watermarking for Visual Content.
Bhowmik, Deepayan; Abhayaratne, Charith
2016-11-01
Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios while not affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms and is generated by a new wavelet-domain blind watermarking algorithm guided by a quantization-based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate a watermarked image with the desired distortion-robustness trade-off, and a blind extractor is capable of extracting the watermark data from the watermarked images. The algorithm is further extended to incorporate the bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves robustness against the quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality-scalable content adaptation; our proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality-scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows a major improvement for video watermarking.
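The bit-plane discarding model mentioned above is simple to state: truncating a scalable code-stream at a lower quality layer is roughly equivalent to zeroing the n least significant bit-planes of the integer transform coefficients. The one-liner below illustrates that model only; the function name is hypothetical and the paper's full watermarking algorithm is not reproduced.

```python
import numpy as np

def discard_bitplanes(coeffs, n):
    """Bit-plane discarding quantization: drop the n least significant bit-planes
    of integer transform coefficients, as a scalable codec effectively does when
    its code-stream is truncated at a lower quality layer."""
    c = np.asarray(coeffs, dtype=np.int64)
    return np.sign(c) * ((np.abs(c) >> n) << n)
```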
Digital storage and analysis of color Doppler echocardiograms
NASA Technical Reports Server (NTRS)
Chandra, S.; Thomas, J. D.
1997-01-01
Color Doppler flow mapping has played an important role in clinical echocardiography. Most of the clinical work, however, has been primarily qualitative. Although qualitative information is very valuable, there is considerable quantitative information stored within the velocity map that has not been extensively exploited so far. Recently, many researchers have shown interest in using the encoded velocities to address clinical problems such as quantification of valvular regurgitation, calculation of cardiac output, and characterization of ventricular filling. In this article, we review some basic physics and engineering aspects of color Doppler echocardiography, as well as the drawbacks of trying to retrieve velocities from videotape data. Digital storage, which plays a critical role in performing quantitative analysis, is discussed in some detail, with special attention to velocity encoding in DICOM 3.0 (the medical image storage standard) and the use of digital compression. Lossy compression can considerably reduce file size with minimal loss of (mostly redundant) information; this is critical for digital storage because of the enormous amount of data generated (a 10-minute study could require 18 gigabytes of storage capacity). Lossy JPEG compression and its impact on quantitative analysis have been studied, showing that images compressed at 27:1 using the JPEG algorithm compare favorably with directly digitized video images, the current gold standard. Some potential applications of these velocities in analyzing the proximal convergence zones and mitral inflow, and some areas of future development, are also discussed in the article.
Performance comparison of leading image codecs: H.264/AVC Intra, JPEG2000, and Microsoft HD Photo
NASA Astrophysics Data System (ADS)
Tran, Trac D.; Liu, Lijie; Topiwala, Pankaj
2007-09-01
This paper provides a detailed rate-distortion performance comparison between JPEG2000, Microsoft HD Photo, and H.264/AVC High Profile 4:4:4 I-frame coding for high-resolution still images and high-definition (HD) 1080p video sequences. This work is an extension of our previous comparative studies published in earlier SPIE conferences [1, 2]. Here we further optimize all three codecs for compression performance. Coding simulations are performed on a set of large-format color images captured from mainstream digital cameras and 1080p HD video sequences commonly used for H.264/AVC standardization work. Overall, our experimental results show that all three codecs offer very similar coding performance at the high-quality, high-resolution setting. Differences tend to be data-dependent: JPEG2000 with its wavelet technology tends to be the best performer on smooth spatial data; H.264/AVC High Profile with advanced spatial prediction modes tends to cope best with more complex visual content; Microsoft HD Photo tends to be the most consistent across the board. For the still-image data sets, JPEG2000 offers the best R-D performance gains (around 0.2 to 1 dB in peak signal-to-noise ratio) over H.264/AVC High Profile intra coding and Microsoft HD Photo. For the 1080p video data set, all three codecs offer very similar coding performance. As in [1, 2], we consider neither scalability nor complexity in this study (JPEG2000 is operated in non-scalable, but optimal-performance, mode).
A new JPEG-based steganographic algorithm for mobile devices
NASA Astrophysics Data System (ADS)
Agaian, Sos S.; Cherukuri, Ravindranath C.; Schneider, Erik C.; White, Gregory B.
2006-05-01
Currently, cellular phones constitute a significant portion of the global telecommunications market. Modern cellular phones offer sophisticated features such as Internet access, on-board cameras, and expandable memory, which provide these devices with excellent multimedia capabilities. Because of the high volume of cellular traffic and the ability of these devices to transmit nearly all forms of data, the need for an increased level of security in wireless communications is a growing concern. Steganography could provide a solution to this important problem. In this article, we present a new algorithm for JPEG-compressed images which is applicable to mobile platforms. This algorithm embeds sensitive information into quantized discrete cosine transform coefficients obtained from the cover JPEG. These coefficients are rearranged based on certain statistical properties and the inherent processing and memory constraints of mobile devices. Based on the energy variation and block characteristics of the cover image, the sensitive data is hidden by using a switching embedding technique proposed in this article. The proposed system offers high capacity while simultaneously withstanding visual and statistical attacks. Based on simulation results, the proposed method demonstrates an improved retention of first-order statistics when compared to existing JPEG-based steganographic algorithms, while maintaining a capacity which is comparable to F5 for certain cover images.
Region of interest and windowing-based progressive medical image delivery using JPEG2000
NASA Astrophysics Data System (ADS)
Nagaraj, Nithin; Mukhopadhyay, Sudipta; Wheeler, Frederick W.; Avila, Ricardo S.
2003-05-01
An important telemedicine application is the perusal of CT scans (in digital format) from a central server housed in a healthcare enterprise, across a bandwidth-constrained network, by radiologists situated at remote locations for medical diagnostic purposes. It is generally expected that a viewing station respond to an image request by displaying the image within 1-2 seconds. Owing to limited bandwidth, it may not be possible to deliver the complete image in such a short period of time with traditional techniques. In this paper, we investigate progressive image delivery solutions using JPEG 2000. An estimate of the time taken at different network bandwidths is performed to compare their relative merits. We further make use of the fact that most medical images are 12-16 bits, but are ultimately converted to an 8-bit image via windowing for display on the monitor. We propose a windowing progressive RoI technique to exploit this and investigate JPEG 2000 RoI-based compression after applying a favorite or default window setting to the original image. Subsequent requests for different RoIs and window settings are then processed at the server. For the windowing progressive RoI mode, we report a 50% reduction in transmission time.
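To make the windowing step concrete, the following short Python sketch maps a 12-16 bit CT slice to the 8-bit display range using a window center/width setting; this is only an illustration of the general technique mentioned in the abstract, not the authors' implementation, and the window values and array sizes are hypothetical.

    import numpy as np

    def apply_window(pixels: np.ndarray, center: float, width: float) -> np.ndarray:
        """Map a 12-16 bit image to 8-bit display values using a window
        center/width setting, as is commonly done before display on an
        8-bit monitor."""
        lo = center - width / 2.0
        hi = center + width / 2.0
        out = (pixels.astype(np.float64) - lo) / (hi - lo)   # scale window to [0, 1]
        out = np.clip(out, 0.0, 1.0)
        return (out * 255.0 + 0.5).astype(np.uint8)

    # Example: a hypothetical default window applied to a synthetic 12-bit slice.
    slice_12bit = np.random.randint(0, 4096, size=(512, 512), dtype=np.uint16)
    display = apply_window(slice_12bit, center=1500, width=1400)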
Random Walk Graph Laplacian-Based Smoothness Prior for Soft Decoding of JPEG Images.
Liu, Xianming; Cheung, Gene; Wu, Xiaolin; Zhao, Debin
2017-02-01
Given the prevalence of joint photographic experts group (JPEG) compressed images, optimizing image reconstruction from the compressed format remains an important problem. Instead of simply reconstructing a pixel block from the centers of indexed discrete cosine transform (DCT) coefficient quantization bins (hard decoding), soft decoding reconstructs a block by selecting appropriate coefficient values within the indexed bins with the help of signal priors. The challenge thus lies in how to define suitable priors and apply them effectively. In this paper, we combine three image priors (a Laplacian prior for DCT coefficients, a sparsity prior, and a graph-signal smoothness prior for image patches) to construct an efficient JPEG soft decoding algorithm. Specifically, we first use the Laplacian prior to compute a minimum mean square error initial solution for each code block. Next, we show that while the sparsity prior can reduce block artifacts, limiting the size of the overcomplete dictionary (to lower computation) would lead to poor recovery of high DCT frequencies. To alleviate this problem, we design a new graph-signal smoothness prior (the desired signal has mainly low graph frequencies) based on the left eigenvectors of the random walk graph Laplacian matrix (LERaG). Compared with previous graph-signal smoothness priors, LERaG has desirable image filtering properties with low computation overhead. We demonstrate how LERaG can facilitate recovery of high DCT frequencies of a piecewise smooth signal via an interpretation of low graph frequency components as relaxed solutions to normalized cut in spectral clustering. Finally, we construct a soft decoding algorithm using the three signal priors with appropriate prior weights. Experimental results show that our proposal noticeably outperforms state-of-the-art soft decoding algorithms in both objective and subjective evaluations.
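As a rough illustration of the random-walk graph Laplacian on which the LERaG prior is built, the Python sketch below forms a 4-connected pixel graph over a small patch with Gaussian intensity-similarity weights and computes L_rw = I - D^(-1)W; the paper's actual prior, which uses the left eigenvectors of this matrix together with further refinements, is not reproduced here, and the patch data are synthetic.

    import numpy as np

    def random_walk_laplacian(patch: np.ndarray, sigma: float = 10.0) -> np.ndarray:
        """Build a 4-connected pixel graph over a small patch with Gaussian
        intensity-similarity edge weights and return the random-walk graph
        Laplacian L_rw = I - D^(-1) W (a simplified illustration only)."""
        h, w = patch.shape
        n = h * w
        idx = lambda r, c: r * w + c
        W = np.zeros((n, n))
        for r in range(h):
            for c in range(w):
                for dr, dc in ((0, 1), (1, 0)):        # right and lower neighbours
                    rr, cc = r + dr, c + dc
                    if rr < h and cc < w:
                        wgt = np.exp(-((patch[r, c] - patch[rr, cc]) ** 2) / (2 * sigma ** 2))
                        W[idx(r, c), idx(rr, cc)] = wgt
                        W[idx(rr, cc), idx(r, c)] = wgt
        D = np.diag(W.sum(axis=1))
        return np.eye(n) - np.linalg.solve(D, W)       # D^(-1) W via a linear solve

    patch = np.random.rand(8, 8) * 255
    L_rw = random_walk_laplacian(patch)                # its left eigenvectors define the smoothness prior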
An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).
Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling
2018-04-17
Aimed at the low energy consumption required by the Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides a compressive encoder and a real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by the Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have a low computational complexity, so that they consume only a small amount of energy. Experimental results show that the proposed scheme not only has a low encoding and decoding complexity when compared with traditional methods, but also provides good objective and subjective reconstruction quality. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for the Green IoT.
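The linear encoder/decoder pair the abstract describes can be sketched in a few lines, assuming a random block measurement matrix and a block covariance estimated from hypothetical training data; the adaptive, gradient-field-driven measurement allocation of the actual scheme is omitted here.

    import numpy as np

    rng = np.random.default_rng(0)
    B, M = 16, 64                       # 16x16 blocks, 64 measurements per block (25% rate)
    N = B * B

    # Hypothetical training blocks used to estimate the block covariance R offline.
    train = rng.standard_normal((1000, N))
    R = np.cov(train, rowvar=False)

    Phi = rng.standard_normal((M, N)) / np.sqrt(M)     # random measurement matrix
    sigma2 = 1e-3

    # Linear MMSE projection matrix learned offline; decoding is a single matrix multiply.
    P = R @ Phi.T @ np.linalg.inv(Phi @ R @ Phi.T + sigma2 * np.eye(M))

    x = train[0]             # stand-in for a flattened image block
    y = Phi @ x              # encoder: low-complexity CS measurement
    x_hat = P @ y            # decoder: real-time linear reconstruction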
A joint source-channel distortion model for JPEG compressed images.
Sabir, Muhammad F; Sheikh, Hamid Rahim; Heath, Robert W; Bovik, Alan C
2006-06-01
The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-code modulation, and run-length coding is included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.
Wavelet-based compression of pathological images for telemedicine applications
NASA Astrophysics Data System (ADS)
Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun
2000-05-01
In this paper, we present the performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for use in an Internet-based telemedicine system. We first study how well suited wavelet-based coding is to the compression of pathological images, since these images often contain fine textures that are critical to the diagnosis of potential diseases. We compare the wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies are performed in close collaboration with expert pathologists who have conducted the evaluation of the compressed pathological images, and with communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which wavelet-based coding is adopted for compression to achieve bandwidth-efficient transmission and therefore speed up communications between the remote terminal and the central server of the telemedicine system.
View compensated compression of volume rendered images for remote visualization.
Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S
2009-07-01
Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real-time viewing experience. One remote visualization model that can accomplish this would transmit rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high-quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling a significant reduction in the complexity of the compressor. Additionally, the view compensation scheme, in conjunction with JPEG2000, performs better than AVC, the state-of-the-art video compression standard.
Digital watermarking algorithm research of color images based on quaternion Fourier transform
NASA Astrophysics Data System (ADS)
An, Mali; Wang, Weijiang; Zhao, Zhen
2013-10-01
A watermarking algorithm for color images based on the quaternion Fourier transform (QFFT) and an improved quantization index modulation (QIM) algorithm is proposed in this paper. The original image is transformed by the QFFT, the watermark image is processed by compression and quantization coding, and the processed watermark image is then embedded into the components of the transformed original image. The scheme achieves embedding and blind extraction of the watermark image. The experimental results show that the watermarking algorithm based on the improved QIM algorithm with distortion compensation achieves a good tradeoff between invisibility and robustness, and better robustness than the traditional QIM algorithm against attacks such as Gaussian noise, salt-and-pepper noise, JPEG compression, cropping, filtering, and image enhancement.
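For readers unfamiliar with QIM, the Python sketch below shows basic quantization index modulation embedding and blind extraction on a vector of coefficients; the quaternion Fourier transform, the compression and quantization coding of the watermark, and the distortion-compensated variant evaluated in the paper are not included, and the step size is an arbitrary illustrative choice.

    import numpy as np

    def qim_embed(coeff: np.ndarray, bits: np.ndarray, delta: float) -> np.ndarray:
        """Embed one bit per coefficient with basic QIM: quantize onto one of
        two interleaved lattices selected by the bit."""
        offset = bits * (delta / 2.0)
        return np.round((coeff - offset) / delta) * delta + offset

    def qim_extract(coeff: np.ndarray, delta: float) -> np.ndarray:
        """Blind extraction: choose the lattice (bit) whose quantizer is closer."""
        d0 = np.abs(coeff - np.round(coeff / delta) * delta)
        d1 = np.abs(coeff - (np.round((coeff - delta / 2) / delta) * delta + delta / 2))
        return (d1 < d0).astype(int)

    coeffs = np.random.randn(256) * 50            # stand-in for transform coefficients
    bits = np.random.randint(0, 2, size=256)
    marked = qim_embed(coeffs, bits, delta=8.0)
    assert np.array_equal(qim_extract(marked, delta=8.0), bits)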
Compression strategies for LiDAR waveform cube
NASA Astrophysics Data System (ADS)
Jóźków, Grzegorz; Toth, Charles; Quirk, Mihaela; Grejner-Brzezinska, Dorota
2015-01-01
Full-waveform LiDAR data (FWD) provide a wealth of information about the shape and materials of the surveyed areas. Unlike discrete data that retain only a few strong returns, FWD generally keep the whole signal, at all times, regardless of the signal intensity. Hence, FWD will have an increasingly well-deserved role in mapping and beyond, in the much-desired classification of data in raw format. Full-waveform systems currently perform only the recording of the waveform data at the acquisition stage; the return extraction is mostly deferred to post-processing. Although the full waveform preserves most of the details of the real data, it presents a serious practical challenge to wide use: much larger datasets compared to those from classical discrete-return systems. On top of the need for more storage space, the acquisition speed of the FWD may also limit the pulse rate on most systems, which cannot store data fast enough, and thus reduces the perceived system performance. This work introduces a waveform cube model to compress waveforms in selected subsets of the cube, aimed at achieving decreased storage while maintaining the maximum pulse rate of FWD systems. In our experiments, the waveform cube is compressed using classical methods for 2D imagery, which are further tested to assess the feasibility of the proposed solution. The spatial distribution of airborne waveform data is irregular; however, the manner of the FWD acquisition allows the organization of the waveforms in a regular 3D structure similar to familiar multi-component imagery, such as hyper-spectral cubes or 3D volumetric tomography scans. This study presents the performance analysis of several lossy compression methods applied to the LiDAR waveform cube, including JPEG-1, JPEG-2000, and PCA-based techniques. A wide range of tests performed on real airborne datasets has demonstrated the benefits of the JPEG-2000 Standard, where high compression rates incur fairly small data degradation. In addition, a JPEG-2000 Standard-compliant compression implementation can be fast and thus used in real-time systems, as compressed data sequences can be formed progressively during the waveform data collection. We conclude from our experiments that 2D image compression strategies are feasible and efficient approaches, and thus might be applied during acquisition on FWD sensors.
SEMG signal compression based on two-dimensional techniques.
de Melo, Wheidima Carneiro; de Lima Filho, Eddie Batista; da Silva Júnior, Waldir Sabino
2016-04-18
Recently, two-dimensional techniques have been successfully employed for compressing surface electromyographic (SEMG) records as images, through the use of image and video encoders. Such schemes usually provide specific compressors, which are tuned for SEMG data, or employ preprocessing techniques before the two-dimensional encoding procedure, in order to provide a suitable data organization whose correlations can be better exploited by off-the-shelf encoders. Besides preprocessing input matrices, one may also depart from those approaches and employ an adaptive framework, which is able to directly tackle SEMG signals reassembled as images. This paper proposes a new two-dimensional approach for SEMG signal compression, which is based on a recurrent pattern matching algorithm called the multidimensional multiscale parser (MMP). The mentioned encoder was modified in order to work efficiently with SEMG signals and exploit their inherent redundancies. Moreover, a new preprocessing technique named segmentation by similarity (SbS), which has the potential to enhance the exploitation of intra- and intersegment correlations, is introduced; the percentage difference sorting (PDS) algorithm is employed with different image compressors; and results with the High Efficiency Video Coding (HEVC), H.264/AVC, and JPEG2000 encoders are presented. Experiments were carried out with real isometric and dynamic records, acquired in the laboratory. Dynamic signals compressed with H.264/AVC and HEVC, when combined with preprocessing techniques, resulted in good percent root-mean-square difference versus compression factor figures, for low and high compression factors, respectively. Besides, regarding isometric signals, the modified two-dimensional MMP algorithm outperformed state-of-the-art schemes for low compression factors, the combination of SbS and HEVC proved to be competitive for high compression factors, and JPEG2000 combined with PDS provided good performance allied to low computational complexity, all in terms of percent root-mean-square difference versus compression factor. The proposed schemes are effective and, specifically, the modified MMP algorithm can be considered an interesting alternative to traditional SEMG encoders for isometric signals. Besides, the approach based on off-the-shelf image encoders has the potential for fast implementation and dissemination, given that many embedded systems may already have such encoders available in the underlying hardware/software architecture.
NASA Astrophysics Data System (ADS)
Wang, Ke-Yan; Li, Yun-Song; Liu, Kai; Wu, Cheng-Ke
2008-08-01
A novel compression algorithm for interferential multispectral images based on adaptive classification and curve-fitting is proposed. The image is first adaptively partitioned into a major-interference region and a minor-interference region. Different approximating functions are then constructed for the two kinds of regions, respectively. For the major-interference region, some typical interferential curves are selected to predict other curves; these typical curves are then processed by the curve-fitting method. For the minor-interference region, the data of each interferential curve are approximated independently. Finally, the approximation errors of the two regions are entropy coded. The experimental results show that, compared with JPEG2000, the proposed algorithm not only decreases the average output bit rate by about 0.2 bit/pixel for lossless compression, but also improves the reconstructed images and greatly reduces the spectral distortion, especially at high bit rates for lossy compression.
On LSB Spatial Domain Steganography and Channel Capacity
2008-03-21
This report examines the survivability of LSB-type spatial-domain steganography under JPEG compression. Failure to reveal the hidden information should not be taken as proof that an image is clean, and the common assumption that JPEG-compressing an image is sufficient to destroy spatial-domain LSB steganography is challenged: modeling of 2-bit LSB steganography shows that, theoretically, a non-zero stego payload remains possible even after the image has been JPEG compressed.
Iris Recognition: The Consequences of Image Compression
NASA Astrophysics Data System (ADS)
Ives, Robert W.; Bishop, Daniel A.; Du, Yingzi; Belcher, Craig
2010-12-01
Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.
FBCOT: a fast block coding option for JPEG 2000
NASA Astrophysics Data System (ADS)
Taubman, David; Naman, Aous; Mathew, Reji
2017-09-01
Based on the EBCOT algorithm, JPEG 2000 finds application in many fields, including high performance scientific, geospatial and video coding applications. Beyond digital cinema, JPEG 2000 is also attractive for low-latency video communications. The main obstacle for some of these applications is the relatively high computational complexity of the block coder, especially at high bit-rates. This paper proposes a drop-in replacement for the JPEG 2000 block coding algorithm, achieving much higher encoding and decoding throughputs, with only modest loss in coding efficiency (typically < 0.5 dB). The algorithm provides only limited quality/SNR scalability, but offers truly reversible transcoding to/from any standard JPEG 2000 block bit-stream. The proposed FAST block coder can be used with EBCOT's post-compression RD-optimization methodology, allowing a target compressed bit-rate to be achieved even at low latencies, leading to the name FBCOT (Fast Block Coding with Optimized Truncation).
Embedded wavelet packet transform technique for texture compression
NASA Astrophysics Data System (ADS)
Li, Jin; Cheng, Po-Yuen; Kuo, C.-C. Jay
1995-09-01
A highly efficient texture compression scheme is proposed in this research. With this scheme, energy compaction of texture images is first achieved by the wavelet packet transform, and an embedding approach is then adopted for the coding of the wavelet packet transform coefficients. Comparing the proposed algorithm with the JPEG standard, the FBI wavelet/scalar quantization standard, and the EZW scheme through extensive experiments, we observe a significant improvement in rate-distortion performance and visual quality.
Digital image modification detection using color information and its histograms.
Zhou, Haoyu; Shen, Yue; Zhu, Xinghui; Liu, Bo; Fu, Zigang; Fan, Na
2016-09-01
The rapid development of open source and commercial image editing software makes the authenticity of digital images questionable. Copy-move forgery is one of the most widely used tampering techniques to create desirable objects or conceal undesirable objects in a scene. Existing techniques reported in the literature to detect such tampering aim to improve the robustness of these methods against the use of JPEG compression, blurring, noise, or other types of post-processing operations. These post-processing operations are frequently used with the intention to conceal tampering and reduce tampering clues. A robust method based on color moments and five other image descriptors is proposed in this paper. The method divides the image into fixed-size overlapping blocks. A clustering operation divides the entire search space into smaller pieces with similar color distribution. Blocks from the tampered regions will reside within the same cluster, since both copied and moved regions have similar color distributions. Five image descriptors are used to extract block features, which makes the method more robust to post-processing operations. An ensemble of deep compositional pattern-producing neural networks is trained with these extracted features. Similarity among feature vectors in clusters indicates possible forged regions. Experimental results show that the proposed method can detect copy-move forgery even if an image was distorted by gamma correction, additive white Gaussian noise, JPEG compression, or blurring. Copyright © 2016. Published by Elsevier Ireland Ltd.
Comparative performance between compressed and uncompressed airborne imagery
NASA Astrophysics Data System (ADS)
Phan, Chung; Rupp, Ronald; Agarwal, Sanjeev; Trang, Anh; Nair, Sumesh
2008-04-01
The US Army's RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD), Countermine Division, is evaluating the compressibility of airborne multi-spectral imagery for mine and minefield detection applications. Of particular interest is assessing the highest image data compression rate that can be afforded without loss of image quality for war fighters in the loop and without loss of performance of the near-real-time mine detection algorithm. The JPEG-2000 compression standard is used to perform data compression. Both lossless and lossy compression are considered. A multi-spectral anomaly detector such as RX (Reed & Xiaoli), which is widely used as a core algorithm baseline in airborne mine and minefield detection on different mine types, minefields, and terrains to identify potential individual targets, is used to compare mine detection performance. This paper presents the compression scheme and compares detection performance results between compressed and uncompressed imagery at various levels of compression. The compression efficiency is evaluated, and its dependence upon different backgrounds and other factors is documented and presented using multi-spectral data.
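The RX baseline referred to above is essentially a Mahalanobis-distance test against the background statistics; a generic global form is sketched below in Python (this is not NVESD's implementation, and the data shapes are placeholders).

    import numpy as np

    def rx_detector(cube: np.ndarray) -> np.ndarray:
        """Global RX (Reed & Xiaoli) anomaly detector: Mahalanobis distance of
        each pixel's spectral vector from the background mean and covariance."""
        h, w, bands = cube.shape
        X = cube.reshape(-1, bands).astype(np.float64)
        mu = X.mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))   # pseudo-inverse guards against singularity
        diff = X - mu
        scores = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
        return scores.reshape(h, w)

    # Example: score a synthetic multi-spectral chip; high scores flag potential anomalies.
    chip = np.random.rand(64, 64, 6)
    rx_scores = rx_detector(chip)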
NASA Astrophysics Data System (ADS)
Yao, Juncai; Liu, Guizhong
2017-03-01
In order to achieve a higher image compression ratio and improve visual perception of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. Afterwards, the discrete cosine transform is carried out for each sub-block, and three quantization matrices are built to quantize the frequency spectrum coefficients of the images by combining the contrast sensitivity characteristics of the HVS. The Huffman algorithm is used to encode the quantized data. The inverse process involves decompression and matching to reconstruct the decompressed color image. Simulations are carried out for two color images. The results show that the average structural similarity index measurement (SSIM) and peak signal-to-noise ratio (PSNR) at approximately the same compression ratio could be increased by 2.78% and 5.48%, respectively, compared with joint photographic experts group (JPEG) compression. The results indicate that the proposed compression algorithm is feasible and effective, achieving a higher compression ratio while maintaining encoding and image quality, and can fully meet the needs of storage and transmission of color images in daily life.
NASA Astrophysics Data System (ADS)
Karam, Lina J.; Zhu, Tong
2015-03-01
The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology when applied in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. This database (QLFW) represents an initial attempt to provide a set of labeled face images spanning the wide range of quality, from no perceived impairment to strong perceived impairment for face detection and face recognition applications. Types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortions and also to assess the human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortions. This will consequently enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of real-world visual distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications.
NASA Technical Reports Server (NTRS)
Linares, Irving; Mersereau, Russell M.; Smith, Mark J. T.
1994-01-01
Two representative sample images of Band 4 of the Landsat Thematic Mapper are compressed with the JPEG algorithm at 8:1, 16:1 and 24:1 compression ratios for experimental browsing purposes. We then apply the Optimal PSNR Estimated Spectra Adaptive Postfiltering (ESAP) algorithm to reduce the DCT blocking distortion. ESAP reduces the blocking distortion while preserving most of the image's edge information by adaptively postfiltering the decoded image using the block's spectral information, already obtainable from each block's DCT coefficients. The algorithm iteratively applies a one-dimensional log-sigmoid weighting function to the separable interpolated local block estimated spectra of the decoded image until it converges to the optimal PSNR with respect to the original, using a 2-D steepest-ascent search. Convergence is obtained in a few iterations for integer parameters. The optimal log-sigmoid parameters are transmitted to the decoder as a negligible amount of overhead data. A unique maximum is guaranteed due to the 2-D asymptotic exponential overshoot shape of the surface generated by the algorithm. ESAP is based on a DFT analysis of the DCT basis functions. It is implemented with pixel-by-pixel spatially adaptive separable FIR postfilters. PSNR objective improvements between 0.4 and 0.8 dB are shown, together with their corresponding optimal PSNR adaptive postfiltered images.
Application of content-based image compression to telepathology
NASA Astrophysics Data System (ADS)
Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace
2002-05-01
Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.
Joint reconstruction of multiview compressed images.
Thirumalai, Vijayaraghavan; Frossard, Pascal
2013-05-01
Distributed representation of correlated multiview images is an important problem that arises in vision sensor networks. This paper concentrates on the joint reconstruction problem where the distributively compressed images are decoded together in order to take benefit from the image correlation. We consider a scenario where the images captured at different viewpoints are encoded independently using common coding solutions (e.g., JPEG) with a balanced rate distribution among different cameras. A central decoder first estimates the inter-view image correlation from the independently compressed data. The joint reconstruction is then cast as a constrained convex optimization problem that reconstructs total-variation (TV) smooth images, which comply with the estimated correlation model. At the same time, we add constraints that force the reconstructed images to be as close as possible to their compressed versions. We show through experiments that the proposed joint reconstruction scheme outperforms independent reconstruction in terms of image quality, for a given target bit rate. In addition, the decoding performance of our algorithm compares advantageously to state-of-the-art distributed coding schemes based on motion learning and on the DISCOVER algorithm.
A visual detection model for DCT coefficient quantization
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Watson, Andrew B.
1994-01-01
The discrete cosine transform (DCT) is widely used in image compression and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are controlled by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. One approach is to set the quantization level for each coefficient so that the quantization error is near the threshold of visibility. Results from previous work are combined to form the current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color. A model-based method of optimizing the quantization matrix for an individual image was developed. The model described above provides visual thresholds for each DCT frequency. These thresholds are adjusted within each block for visual light adaptation and contrast masking. For a given quantization matrix, the DCT quantization errors are scaled by the adjusted thresholds to yield perceptual errors. These errors are pooled nonlinearly over the image to yield a total perceptual error. With this model one may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
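The final optimization step, scaling the DCT quantization errors by the adjusted visibility thresholds and pooling them nonlinearly, can be sketched as follows; the luminance-adaptation and contrast-masking adjustments mentioned above are omitted, and the quantization matrix, threshold values and pooling exponent are hypothetical placeholders rather than the model's calibrated values.

    import numpy as np

    def perceptual_error(dct_blocks: np.ndarray, Q: np.ndarray, T: np.ndarray, beta: float = 4.0) -> float:
        """Pool per-coefficient JPEG quantization errors into one perceptual error:
        errors are scaled by visual thresholds T (one per DCT frequency) and
        combined with a Minkowski sum."""
        quantized = np.round(dct_blocks / Q) * Q
        err = np.abs(dct_blocks - quantized)      # quantization error per coefficient
        jnd = err / T                             # error in units of just-noticeable differences
        return float(np.sum(jnd ** beta) ** (1.0 / beta))

    blocks = np.random.randn(100, 8, 8) * 50      # stand-in DCT coefficients for 100 blocks
    Q = np.full((8, 8), 16.0)                     # a flat quantization matrix for illustration
    T = np.linspace(1.0, 8.0, 64).reshape(8, 8)   # hypothetical visibility thresholds
    total_error = perceptual_error(blocks, Q, T)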
Recce imagery compression options
NASA Astrophysics Data System (ADS)
Healy, Donald J.
1995-09-01
The errors introduced into reconstructed RECCE imagery by ATARS DPCM compression are compared to those introduced by the more modern DCT-based JPEG compression algorithm. For storage applications in which uncompressed sensor data is available, JPEG provides better mean-square-error performance while also providing more flexibility in the selection of compressed data rates. When ATARS DPCM compression has already been performed, lossless encoding techniques may be applied to the DPCM deltas to achieve further compression without introducing additional errors. The abilities of several lossless compression algorithms, including Huffman, Lempel-Ziv, Lempel-Ziv-Welch, and Rice encoding, to provide this additional compression of ATARS DPCM deltas are compared. It is shown that the amount of noise in the original imagery significantly affects these comparisons.
Low bit rate coding of Earth science images
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.
1993-01-01
In this paper, the authors discuss compression based on some new ideas in vector quantization and their incorporation in a sub-band coding framework. Several variations are considered, which collectively address many of the individual compression needs within the earth science community. The approach taken in this work is based on some recent advances in the area of variable rate residual vector quantization (RVQ). This new RVQ method is considered separately and in conjunction with sub-band image decomposition. Very good results are achieved in coding a variety of earth science images. The last section of the paper provides some comparisons that illustrate the improvement in performance attributable to this approach relative to the JPEG coding standard.
Digital image forensics for photographic copying
NASA Astrophysics Data System (ADS)
Yin, Jing; Fang, Yanmei
2012-03-01
Image display technology has greatly developed over the past few decades, which makes it possible to recapture high-quality images from the display medium, such as a liquid crystal display (LCD) screen or a printed paper. Recaptured images are not regarded as a separate image class in current digital image forensics research, yet the content of recaptured images may have been tampered with. In this paper, two sets of features based on noise and on the traces of double JPEG compression are proposed to identify such recaptured images. Experimental results show that our proposed features perform well for detecting photographic copying.
Adaptive intercolor error prediction coder for lossless color (RGB) picture compression
NASA Astrophysics Data System (ADS)
Mann, Y.; Peretz, Y.; Mitchell, Harvey B.
2001-09-01
Most current lossless compression algorithms, including the new international baseline JPEG-LS algorithm, do not exploit the interspectral correlations that exist between the color planes of an input color picture. To improve the compression performance (i.e., lower the bit rate), it is necessary to exploit these correlations. A major concern is to find efficient methods for exploiting the correlations that, at the same time, are compatible with and can be incorporated into the JPEG-LS algorithm. One such algorithm is the method of intercolor error prediction (IEP), which, when used with the JPEG-LS algorithm, results on average in a reduction of 8% in the overall bit rate. We show how the IEP algorithm can be simply modified so that it nearly doubles the reduction in bit rate, to 15%.
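A rough sketch of the inter-color error prediction idea follows, assuming the JPEG-LS MED predictor and a simple residual-of-residuals formulation; the paper's exact IEP modification is not reproduced, and the border handling is only illustrative.

    import numpy as np

    def med_residuals(plane: np.ndarray) -> np.ndarray:
        """Median edge detector (MED) predictor used by JPEG-LS, returning
        per-pixel prediction residuals for a single color plane."""
        p = plane.astype(np.int32)
        a = np.roll(p, 1, axis=1)                          # left neighbour
        b = np.roll(p, 1, axis=0)                          # upper neighbour
        c = np.roll(np.roll(p, 1, axis=0), 1, axis=1)      # upper-left neighbour
        pred = np.where(c >= np.maximum(a, b), np.minimum(a, b),
               np.where(c <= np.minimum(a, b), np.maximum(a, b), a + b - c))
        res = p - pred
        res[0, :], res[:, 0] = p[0, :], p[:, 0]            # first row/column left unpredicted here
        return res

    def intercolor_residual(target: np.ndarray, reference: np.ndarray) -> np.ndarray:
        """Inter-color error prediction in spirit: the residuals of the reference
        plane (e.g. G) serve as a second-stage prediction of the residuals of the
        target plane (e.g. R), leaving a lower-entropy signal to encode."""
        return med_residuals(target) - med_residuals(reference)

    rgb = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    r_residual = intercolor_residual(rgb[..., 0], rgb[..., 1])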
Integration of radiographic images with an electronic medical record.
Overhage, J. M.; Aisen, A.; Barnes, M.; Tucker, M.; McDonald, C. J.
2001-01-01
Radiographic images are important and expensive diagnostic tests. However, the provider caring for the patient often does not review the images directly due to time constraints. Institutions can use picture archiving and communications systems to make images more available to the provider, but this may not be the best solution. We integrated radiographic image review into the Regenstrief Medical Record System in order to address this problem. To achieve adequate performance, we store JPEG compressed images directly in the RMRS. Currently, physicians review about 5% of all radiographic studies using the RMRS image review function. PMID:11825241
High Performance Compression of Science Data
NASA Technical Reports Server (NTRS)
Storer, James A.; Carpentieri, Bruno; Cohn, Martin
1994-01-01
Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
Local wavelet transform: a cost-efficient custom processor for space image compression
NASA Astrophysics Data System (ADS)
Masschelein, Bart; Bormans, Jan G.; Lafruit, Gauthier
2002-11-01
Thanks to its intrinsic scalability features, the wavelet transform has become increasingly popular as a decorrelator in image compression applications. Throughput, memory requirements, and complexity are important parameters when developing hardware image compression modules. An implementation of the classical, global wavelet transform requires large memory sizes and implies a large latency between the availability of the input image and the production of minimal data entities for entropy coding. Image tiling methods, as proposed by JPEG2000, reduce the memory sizes and the latency, but inevitably introduce image artefacts. The Local Wavelet Transform (LWT), presented in this paper, is a low-complexity wavelet transform architecture using block-based processing that produces the same transformed images as those obtained by the global wavelet transform. The architecture minimizes the processing latency with a limited amount of memory. Moreover, as the LWT is an instruction-based custom processor, it can be programmed for specific tasks, such as push-broom processing of infinite-length satellite images. The features of the LWT make it appropriate for use in space image compression, where high throughput, low memory sizes, low complexity, low power and push-broom processing are important requirements.
Real-time 3D video compression for tele-immersive environments
NASA Astrophysics Data System (ADS)
Yang, Zhenyu; Cui, Yi; Anwar, Zahid; Bocchino, Robert; Kiyanclar, Nadir; Nahrstedt, Klara; Campbell, Roy H.; Yurcik, William
2006-01-01
Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments by a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly because the 3D video data contain depth as well as color information. Our goal is to explore the 3D compression space from a different angle, considering factors including complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that: (1) the compressed data preserve enough information to communicate the 3D images effectively (min. PSNR > 40) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ~ 13 ms per 3D video frame).
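The first of the two schemes, color reduction followed by general-purpose deflate compression of the color and depth data, can be sketched as follows; the bit depths, frame sizes and packing format are placeholders rather than the TEEVE system's actual parameters.

    import zlib
    import numpy as np

    def compress_frame(color: np.ndarray, depth: np.ndarray, color_bits: int = 4) -> bytes:
        """Minimal sketch of the first scheme: reduce each color channel to
        color_bits bits, then deflate color and depth together with zlib."""
        shift = 8 - color_bits
        reduced = (color >> shift).astype(np.uint8)        # color reduction
        payload = reduced.tobytes() + depth.tobytes()
        return zlib.compress(payload, level=6)

    color = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
    depth = np.random.randint(0, 65536, size=(480, 640), dtype=np.uint16)
    blob = compress_frame(color, depth)
    ratio = (color.nbytes + depth.nbytes) / len(blob)      # achieved compression ratio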
JPEG XS-based frame buffer compression inside HEVC for power-aware video compression
NASA Astrophysics Data System (ADS)
Willème, Alexandre; Descampe, Antonin; Rouvroy, Gaël.; Pellegrin, Pascal; Macq, Benoit
2017-09-01
With the emergence of Ultra-High Definition video, reference frame buffers (FBs) inside HEVC-like encoders and decoders have to sustain huge bandwidth. The power consumed by these external memory accesses accounts for a significant share of the codec's total consumption. This paper describes a solution that significantly decreases the FB's bandwidth, making the HEVC encoder more suitable for use in power-aware applications. The proposed prototype consists of integrating an embedded lightweight, low-latency and visually lossless codec at the FB interface inside HEVC in order to store each reference frame as several compressed bitstreams. As opposed to previous works, our solution compresses large picture areas (ranging from a CTU to a frame stripe) independently in order to better exploit the spatial redundancy found in the reference frame. This work investigates two data reuse schemes, namely Level-C and Level-D. Our approach is made possible thanks to simplified motion estimation mechanisms that further reduce the FB's bandwidth and induce very low quality degradation. In this work, we integrated JPEG XS, the upcoming standard for lightweight low-latency video compression, inside HEVC. In practice, the proposed implementation is based on HM 16.8 and on XSM 1.1.2 (the JPEG XS Test Model). This paper describes the architecture of our HEVC encoder with JPEG XS-based frame buffer compression and compares its performance to the HM encoder. Compared to previous works, our prototype provides a significant external memory bandwidth reduction. Depending on the reuse scheme, one can expect bandwidth and FB size reductions ranging from 50% to 83.3% without significant quality degradation.
Rate distortion optimal bit allocation methods for volumetric data using JPEG 2000.
Kosheleva, Olga M; Usevitch, Bryan E; Cabrera, Sergio D; Vidal, Edward
2006-08-01
Computer modeling programs that generate three-dimensional (3-D) data on fine grids are capable of generating very large amounts of information. These data sets, as well as 3-D sensor/measured data sets, are prime candidates for the application of data compression algorithms. A very flexible and powerful compression algorithm for imagery data is the newly released JPEG 2000 standard. JPEG 2000 also has the capability to compress volumetric data, as described in Part 2 of the standard, by treating the 3-D data as separate slices. As a decoder standard, JPEG 2000 does not describe any specific method to allocate bits among the separate slices. This paper proposes two new bit allocation algorithms for accomplishing this task. The first procedure is rate-distortion optimal (for mean squared error) and is conceptually similar to post-compression rate-distortion optimization used for coding codeblocks within JPEG 2000. The disadvantage of this approach is its high computational complexity. The second bit allocation algorithm, here called the mixed model (MM) approach, mathematically models each slice's rate-distortion curve using two distinct regions to get more accurate modeling at low bit rates. These two bit allocation algorithms are applied to a 3-D meteorological data set. Test results show that the MM approach gives distortion results that are nearly identical to the optimal approach, while significantly reducing computational complexity.
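The flavor of the rate-distortion optimal allocation can be conveyed with a standard Lagrangian bisection sketch: for a trial multiplier, each slice independently picks the operational R-D point minimizing D + lambda*R, and the multiplier is bisected until the total rate meets the budget. The R-D points below are invented for illustration; this is neither the paper's exact procedure nor its mixed-model variant.

    import numpy as np

    def allocate(rates, dists, budget, iters=50):
        """Lagrangian bit allocation across slices via bisection on the
        multiplier lam (a generic sketch of the underlying idea)."""
        lo, hi = 0.0, 1e6
        choice = None
        for _ in range(iters):
            lam = 0.5 * (lo + hi)
            choice = [int(np.argmin(d + lam * r)) for r, d in zip(rates, dists)]
            total = sum(r[c] for r, c in zip(rates, choice))
            if total > budget:
                lo = lam            # over budget: penalise rate more strongly
            else:
                hi = lam
        return choice

    # Hypothetical operational R-D points for three slices (rate in bits, distortion in MSE).
    rates = [np.array([1e5, 5e4, 2e4]), np.array([8e4, 4e4, 1e4]), np.array([1.2e5, 6e4, 3e4])]
    dists = [np.array([1.0, 4.0, 16.0]), np.array([0.5, 2.0, 9.0]), np.array([2.0, 8.0, 30.0])]
    selected = allocate(rates, dists, budget=1.5e5)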
The JPEG XT suite of standards: status and future plans
NASA Astrophysics Data System (ADS)
Richter, Thomas; Bruylants, Tim; Schelkens, Peter; Ebrahimi, Touradj
2015-09-01
The JPEG standard has seen enormous market adoption. Daily, billions of pictures are created, stored and exchanged in this format. The JPEG committee acknowledges this success and continues its efforts to maintain and expand the standard's specifications. JPEG XT is a standardization effort targeting the extension of the JPEG features by enabling support for high dynamic range imaging, lossless and near-lossless coding, and alpha channel coding, while also guaranteeing backward and forward compatibility with the JPEG legacy format. This paper gives an overview of the current status of the JPEG XT standards suite. It discusses the JPEG legacy specification, and details how higher dynamic range support is facilitated both for integer and floating-point color representations. The paper shows how JPEG XT's support for lossless and near-lossless coding of low and high dynamic range images is achieved in combination with backward compatibility to JPEG legacy. In addition, the extensible box-based JPEG XT file format, on which all following and future extensions of JPEG will be based, is introduced. This paper also details how the lossy and lossless representations of alpha channels are supported to allow coding transparency information and arbitrarily shaped images. Finally, we conclude by giving prospects on the upcoming JPEG standardization initiative JPEG Privacy & Security, and a number of other possible extensions in JPEG XT.
Watermarking scheme for authentication of compressed image
NASA Astrophysics Data System (ADS)
Hsieh, Tsung-Han; Li, Chang-Tsun; Wang, Shuo
2003-11-01
As images are commonly transmitted or stored in compressed form such as JPEG, to extend the applicability of our previous work, a new scheme for embedding a watermark in the compressed domain without resorting to cryptography is proposed. In this work, a target image is first DCT transformed and quantised. Then, all the coefficients are implicitly watermarked in order to minimize the risk of attacks on the unwatermarked coefficients. The watermarking is done by registering/blending the zero-valued coefficients with a binary sequence to create the watermark and by involving the unembedded coefficients in the process of embedding the selected coefficients. The second-order neighbors and the block itself are considered in the watermark embedding process in order to thwart different attacks such as cover-up, vector quantisation, and transplantation. The experiments demonstrate the capability of the proposed scheme in thwarting local tampering, geometric transformations such as cropping, and common signal operations such as lowpass filtering.
NASA Astrophysics Data System (ADS)
Chang, Ching-Chun; Liu, Yanjun; Nguyen, Son T.
2015-03-01
Data hiding is a technique that embeds information into digital cover data. This technique has been concentrated on the spatial uncompressed domain, and it is considered more challenging to perform in compressed domains such as vector quantization, JPEG, and block truncation coding (BTC). In this paper, we propose a new data hiding scheme for BTC-compressed images. In the proposed scheme, a dynamic programming strategy is used to search for the optimal solution of the bijective mapping function for LSB substitution. Then, according to the optimal solution, each mean value embeds three secret bits to obtain high hiding capacity with low distortion. The experimental results indicate that the proposed scheme obtains both higher hiding capacity and higher hiding efficiency than four other existing schemes, while ensuring good visual quality of the stego-image. In addition, the proposed scheme achieves a bit rate as low as that of the original BTC algorithm.
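A minimal sketch of AMBTC coding of a single block with plain LSB substitution of three secret bits into each quantization level is given below; the dynamic-programming search for the optimal bijective mapping described in the abstract is omitted, and the secret bits are arbitrary examples.

    import numpy as np

    def ambtc_block(block: np.ndarray):
        """Absolute-moment BTC of one 4x4 block: a bitmap plus two
        quantization levels (low and high means)."""
        mean = block.mean()
        bitmap = block >= mean
        hi = block[bitmap].mean() if bitmap.any() else mean
        lo = block[~bitmap].mean() if (~bitmap).any() else mean
        return bitmap, int(round(lo)), int(round(hi))

    def embed_bits(value: int, bits: str) -> int:
        """Plain LSB substitution of three secret bits into a quantization level."""
        return (value & ~0b111) | int(bits, 2)

    block = np.random.randint(0, 256, size=(4, 4))
    bitmap, lo, hi = ambtc_block(block)
    lo_stego = embed_bits(lo, '101')
    hi_stego = embed_bits(hi, '011')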
Non-linear Post Processing Image Enhancement
NASA Technical Reports Server (NTRS)
Hunt, Shawn; Lopez, Alex; Torres, Angel
1997-01-01
A non-linear filter for image post-processing based on the feedforward neural network topology is presented. This study was undertaken to investigate the usefulness of "smart" filters in image post-processing. The filter has been shown to be useful in recovering high frequencies, such as those lost during the JPEG compression-decompression process. The filtered images have a higher signal-to-noise ratio and a higher perceived image quality. Simulation studies comparing the proposed filter with the optimum mean square non-linear filter, showing examples of the high frequency recovery, and the statistical properties of the filter are given.
Digital cinema system using JPEG2000 movie of 8-million pixel resolution
NASA Astrophysics Data System (ADS)
Fujii, Tatsuya; Nomura, Mitsuru; Shirai, Daisuke; Yamaguchi, Takahiro; Fujii, Tetsuro; Ono, Sadayasu
2003-05-01
We have developed a prototype digital cinema system that can store, transmit and display extra high quality movies of 8-million pixel resolution, using the JPEG2000 coding algorithm. The image quality is 4 times better than HDTV in resolution, and enables us to replace conventional films with digital cinema archives. Using wide-area optical gigabit IP networks, cinema contents are distributed and played back as a video-on-demand (VoD) system. The system consists of three main devices: a video server, a real-time JPEG2000 decoder, and a large-venue LCD projector. All digital movie data are compressed by JPEG2000 and stored in advance. Coded streams of 300-500 Mbps can be continuously transmitted from the PC server using TCP/IP. The decoder can perform real-time decompression at 24/48 frames per second, using 120 parallel JPEG2000 processing elements. The received streams are expanded into 4.5 Gbps raw video signals. The prototype LCD projector uses three 3840×2048-pixel reflective LCD panels (D-ILA) to show RGB 30-bit color movies fed by the decoder. The brightness exceeds 3000 ANSI lumens for a 300-inch screen. The refresh rate is set to 96 Hz to thoroughly eliminate flicker, while preserving compatibility with cinema movies of 24 frames per second.
Request redirection paradigm in medical image archive implementation.
Dragan, Dinu; Ivetić, Dragan
2012-08-01
It is widely recognized that JPEG2000 addresses key issues in medical imaging: storage, communication, sharing, remote access, interoperability, and presentation scalability. Therefore, JPEG2000 support was added to the DICOM standard in Supplement 61. Two approaches to supporting JPEG2000 medical images are explicitly defined by the DICOM standard: replacing the DICOM image format with the corresponding JPEG2000 codestream, or using the Pixel Data Provider service, DICOM Supplement 106. The latter involves a two-step retrieval of the medical image: a DICOM request and response from a DICOM server, and then a JPIP request and response from a JPEG2000 server. We propose a novel strategy for transmission of scalable JPEG2000 images extracted from a single codestream over a DICOM network using the DICOM Private Data Element without sacrificing system interoperability. It employs the request redirection paradigm: a DICOM request and response from the JPEG2000 server through the DICOM server. The paper presents a programming solution for implementation of the request redirection paradigm in a DICOM-transparent manner. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Johnson, Jeffrey P; Krupinski, Elizabeth A; Yan, Michelle; Roehrig, Hans; Graham, Anna R; Weinstein, Ronald S
2011-02-01
A major issue in telepathology is the extremely large and growing size of digitized "virtual" slides, which can require several gigabytes of storage and cause significant delays in data transmission for remote image interpretation and interactive visualization by pathologists. Compression can reduce this massive amount of virtual slide data, but reversible (lossless) methods limit data reduction to less than 50%, while lossy compression can degrade image quality and diagnostic accuracy. "Visually lossless" compression offers the potential for using higher compression levels without noticeable artifacts, but requires a rate-control strategy that adapts to image content and loss visibility. We investigated the utility of a visual discrimination model (VDM) and other distortion metrics for predicting JPEG 2000 bit rates corresponding to visually lossless compression of virtual slides for breast biopsy specimens. Threshold bit rates were determined experimentally with human observers for a variety of tissue regions cropped from virtual slides. For test images compressed to their visually lossless thresholds, just-noticeable difference (JND) metrics computed by the VDM were nearly constant at the 95th percentile level or higher, and were significantly less variable than peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. Our results suggest that VDM metrics could be used to guide the compression of virtual slides to achieve visually lossless compression while providing 5-12 times the data reduction of reversible methods.
Optimized atom position and coefficient coding for matching pursuit-based image compression.
Shoa, Alireza; Shirani, Shahram
2009-12-01
In this paper, we propose a new encoding algorithm for matching pursuit image coding. We show that coding performance is improved when correlations between atom positions and atom coefficients are both used in encoding. We find the optimum tradeoff between efficient atom position coding and efficient atom coefficient coding and optimize the encoder parameters. Our proposed algorithm outperforms the existing coding algorithms designed for matching pursuit image coding. Additionally, we show that our algorithm results in better rate distortion performance than JPEG 2000 at low bit rates.
NASA Astrophysics Data System (ADS)
Yu, Shanshan; Murakami, Yuri; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki
2006-09-01
The article proposes a multispectral image compression scheme using nonlinear spectral transform for better colorimetric and spectral reproducibility. In the method, we show the reduction of colorimetric error under a defined viewing illuminant and also that spectral accuracy can be improved simultaneously using a nonlinear spectral transform called Labplus, which takes into account the nonlinearity of human color vision. Moreover, we show that the addition of diagonal matrices to Labplus can further preserve the spectral accuracy and has a generalized effect of improving the colorimetric accuracy under other viewing illuminants than the defined one. Finally, we discuss the usage of the first-order Markov model to form the analysis vectors for the higher order channels in Labplus to reduce the computational complexity. We implement a multispectral image compression system that integrates Labplus with JPEG2000 for high colorimetric and spectral reproducibility. Experimental results for a 16-band multispectral image show the effectiveness of the proposed scheme.
Wu, Xiaolin; Zhang, Xiangjun; Wang, Xiaohan
2009-03-01
Recently, many researchers have begun to challenge a long-standing practice of digital photography, oversampling followed by compression, and to pursue more intelligent sparse sampling techniques. In this paper, we propose a practical approach of uniform downsampling in image space, made adaptive by spatially varying, directional low-pass prefiltering. The resulting downsampled, prefiltered image remains a conventional square sample grid and can therefore be compressed and transmitted without any change to current image coding standards and systems. The decoder first decompresses the low-resolution image and then upconverts it to the original resolution in a constrained least squares restoration process, using a 2-D piecewise autoregressive model and the knowledge of the directional low-pass prefiltering. The proposed compression approach of collaborative adaptive down-sampling and upconversion (CADU) outperforms JPEG 2000 in PSNR at low to medium bit rates and achieves superior visual quality as well. The superior low bit-rate performance of the CADU approach suggests that oversampling not only wastes hardware resources and energy but can also be counterproductive to image quality given a tight bit budget.
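A highly simplified sketch of the downsample-then-upconvert pipeline, assuming an isotropic Gaussian prefilter and bicubic interpolation as stand-ins for the paper's directional, spatially varying prefiltering and its constrained least-squares, autoregressive-model restoration.

```python
import numpy as np
from scipy import ndimage

def downsample_with_prefilter(img, sigma=1.0):
    """Low-pass prefilter, then uniform 2x downsampling.  An isotropic Gaussian
    stands in for the spatially varying, directional filters of the paper."""
    smoothed = ndimage.gaussian_filter(img.astype(float), sigma)
    return smoothed[::2, ::2]

def upconvert(low_res):
    """Upconvert back to the original grid.  Bicubic interpolation stands in
    for the constrained least-squares restoration used in the paper."""
    return ndimage.zoom(low_res, 2, order=3)

image = np.random.rand(128, 128)
restored = upconvert(downsample_with_prefilter(image))
```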
High performance compression of science data
NASA Technical Reports Server (NTRS)
Storer, James A.; Cohn, Martin
1994-01-01
Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that, with no training or prior knowledge of the data, for a given fidelity the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
Analysis-Preserving Video Microscopy Compression via Correlation and Mathematical Morphology
Shao, Chong; Zhong, Alfred; Cribb, Jeremy; Osborne, Lukas D.; O’Brien, E. Timothy; Superfine, Richard; Mayer-Patel, Ketan; Taylor, Russell M.
2015-01-01
The large amount of video data produced by multi-channel, high-resolution microscopy systems drives the need for a new high-performance, domain-specific video compression technique. We describe a novel compression method for video microscopy data. The method is based on Pearson's correlation and mathematical morphology, and makes use of the point-spread function (PSF) from the microscopy video acquisition phase. We compare our method to other lossless compression methods and to lossy JPEG, JPEG2000 and H.264 compression for various kinds of video microscopy data, including fluorescence video and brightfield video. We find that for certain data sets the new method compresses much better than lossless compression, with no impact on analysis results. It achieved a best compressed size of 0.77% of the original size, 25× smaller than the best lossless technique (which yields 20% for the same video). The compressed size scales with the video's scientific data content. Further testing showed that existing lossy algorithms greatly impacted data analysis at similar compression sizes. PMID:26435032
Cloud Optimized Image Format and Compression
NASA Astrophysics Data System (ADS)
Becker, P.; Plesea, L.; Maurer, T.
2015-04-01
Cloud-based image storage and processing requires re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIFF and NITF were developed in the heyday of the desktop and assume fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server with file access. These assumptions no longer hold in cloud-based elastic storage and computation environments. This paper provides details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volume stored and reduces the data transferred, but the reduced data size must be balanced against the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include its simple-to-implement algorithm, which enables it to be accessed efficiently using JavaScript. Combining this new cloud-based image storage format and compression will help resolve some of the challenges of big image data on the internet.
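A minimal sketch of the controlled-lossy idea behind limited-error compression: quantize each value so the reconstruction error never exceeds a user-specified bound. This illustrates only the max-error quantization concept; the actual LERC block partitioning and bit packing are not reproduced here.

```python
import numpy as np

def limited_error_quantize(values, max_error):
    """Quantize so every reconstructed value is within +/- max_error of the
    original (quantization step = 2 * max_error)."""
    step = 2.0 * max_error
    vmin = float(values.min())
    q = np.round((values - vmin) / step).astype(np.uint32)
    return q, vmin, step

def limited_error_reconstruct(q, vmin, step):
    return vmin + q * step

elevation = np.random.uniform(0, 4000, size=(256, 256))  # synthetic elevation tile
q, vmin, step = limited_error_quantize(elevation, max_error=0.1)
recon = limited_error_reconstruct(q, vmin, step)
assert np.max(np.abs(recon - elevation)) <= 0.1 + 1e-9
```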
NASA Astrophysics Data System (ADS)
Xie, ChengJun; Xu, Lin
2008-03-01
This paper presents a new algorithm based on a mixing transform to eliminate redundancy: SHIRCT and a subtraction mixing transform are used to eliminate spectral redundancy, and a 2D CDF(2,2) DWT to eliminate spatial redundancy. The transform is convenient for hardware realization, since it can be implemented entirely with add and shift operations. Its redundancy-elimination performance is better than that of the (1D+2D) CDF(2,2) DWT. An improved SPIHT+CABAC mixed compression coding algorithm is used for coding. The experimental results show that in lossless image compression applications this method performs slightly better than (1D+2D) CDF(2,2) DWT + improved SPIHT + CABAC, and much better than JPEG-LS, WinZip, ARJ, DPCM, the research achievements of a research team of the Chinese Academy of Sciences, NMST, and MST. Using the hyper-spectral image Canal from the American JPL laboratory as the data set for the lossless compression test, the compression ratio of this algorithm exceeds the above algorithms by 42%, 37%, 35%, 30%, 16%, 13% and 11% on average, respectively.
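A minimal 1-D sketch of the reversible CDF(2,2) (LeGall 5/3) lifting step referenced above, using only additions and shifts; boundary handling is a simple mirror, and the SHIRCT and subtraction mixing stages are not shown.

```python
import numpy as np

def cdf22_forward_1d(x):
    """Reversible CDF(2,2) lifting on a 1-D integer signal: predict (high-pass)
    then update (low-pass), implemented with adds and shifts only."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict step: detail coefficients.
    for n in range(len(odd)):
        left = even[n]
        right = even[n + 1] if n + 1 < len(even) else even[n]
        odd[n] -= (left + right) >> 1
    # Update step: approximation coefficients.
    for n in range(len(even)):
        left = odd[n - 1] if n >= 1 else odd[0]
        right = odd[n] if n < len(odd) else odd[-1]
        even[n] += (left + right + 2) >> 2
    return even, odd

approx, detail = cdf22_forward_1d([10, 12, 11, 13, 50, 52, 51, 53])
```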
Compression techniques in tele-radiology
NASA Astrophysics Data System (ADS)
Lu, Tianyu; Xiong, Zixiang; Yun, David Y.
1999-10-01
This paper describes a prototype telemedicine system for remote 3D radiation treatment planning. Because of the voluminous medical image data and the image streams generated at interactive frame rates in this application, we emphasize the importance of deploying adjustable lossy-to-lossless compression techniques in order to achieve acceptable performance over various kinds of communication networks. In particular, compression of the data substantially reduces the transmission time and therefore allows large-scale radiation distribution simulation and interactive volume visualization using remote supercomputing resources in a timely fashion. The compression algorithms currently used in the software we developed are the JPEG and H.263 lossy methods and the Lempel-Ziv (LZ77) lossless method. Both objective and subjective assessments of the effect of lossy compression on the volume data were conducted. Favorable results show that substantial compression ratios are achievable within the distortion tolerance. From our experience, we conclude that 30 dB (PSNR) is roughly the lower bound for acceptable quality when applying lossy compression to anatomical volume data (e.g., CT). For computer-simulated data, much higher PSNR (up to 100 dB) can be expected. This work not only introduces a novel approach for delivering medical services that will have significant impact on existing cooperative image-based services, but also provides a platform for physicians to assess the effects of lossy compression on the diagnostic and aesthetic appearance of medical imaging.
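For reference, the PSNR figure quoted above is computed as follows; a minimal sketch in which peak=255 assumes 8-bit samples, whereas CT data would use a larger peak value.

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two arrays of the same shape."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```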
DCTune Perceptual Optimization of Compressed Dental X-Rays
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)
1997-01-01
In current dental practice, x-rays of completed dental work are often sent to the insurer for verification. It is faster and cheaper to transmit digital scans of the x-rays instead. Further economies result if the images are sent in compressed form. DCTune is a technology for optimizing DCT quantization matrices to yield maximum perceptual quality for a given bit rate, or minimum bit rate for a given perceptual quality. In addition, the technology provides a means of setting the perceptual quality of compressed imagery in a systematic way. The purpose of this research was, with respect to dental x-rays: (1) to verify the advantage of DCTune over standard JPEG; (2) to verify the quality control feature of DCTune; and (3) to discover regularities in the optimized matrices of a set of images. Additional information is contained in the original extended abstract.
A visual detection model for DCT coefficient quantization
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Peterson, Heidi A.
1993-01-01
The discrete cosine transform (DCT) is widely used in image compression, and is part of the JPEG and MPEG compression standards. The degree of compression, and the amount of distortion in the decompressed image are determined by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. Our approach is to set the quantization level for each coefficient so that the quantization error is at the threshold of visibility. Here we combine results from our previous work to form our current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color.
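An illustrative sketch of the idea described above: each quantization step size is set to roughly twice the detection threshold of its DCT basis function, so quantization error stays near the threshold of visibility. The parabolic log-frequency threshold shape, the parameter values, and the pixels-per-degree figure are placeholders, not the published model coefficients.

```python
import numpy as np

def dct_frequencies(n=8, pixels_per_degree=32.0):
    """Radial spatial frequency (cycles/degree) of each DCT basis function."""
    f = np.arange(n) * pixels_per_degree / (2.0 * n)
    fi, fj = np.meshgrid(f, f, indexing="ij")
    return np.sqrt(fi ** 2 + fj ** 2)

def threshold_quant_matrix(n=8, pixels_per_degree=32.0,
                           peak_freq=4.0, min_threshold=1.0, slope=0.7):
    """Step sizes grow as visual sensitivity falls away from the peak frequency."""
    freq = dct_frequencies(n, pixels_per_degree)
    freq[0, 0] = peak_freq                     # treat DC as maximally visible
    log_ratio = np.log10(freq / peak_freq)
    thresholds = min_threshold * 10.0 ** (slope * log_ratio ** 2)
    return np.round(2.0 * thresholds)          # step ~ twice the threshold

print(threshold_quant_matrix())
```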
JPEG XS, a new standard for visually lossless low-latency lightweight image compression
NASA Astrophysics Data System (ADS)
Descampe, Antonin; Keinert, Joachim; Richter, Thomas; Fößel, Siegfried; Rouvroy, Gaël.
2017-09-01
JPEG XS is an upcoming standard from the JPEG Committee (formally known as ISO/IEC SC29 WG1). It aims to provide an interoperable, visually lossless, low-latency, lightweight codec for a wide range of applications including mezzanine compression in broadcast and Pro-AV markets. This requires optimal support of a wide range of implementation technologies such as FPGAs, CPUs and GPUs. Targeted use cases are professional video links, IP transport, Ethernet transport, real-time video storage, video memory buffers, and omnidirectional video capture and rendering. In addition to the evaluation of the visual transparency of the selected technologies, a detailed analysis of the hardware and software complexity as well as the latency has been done to make sure that the new codec meets the requirements of the above-mentioned use cases. In particular, the end-to-end latency has been constrained to a maximum of 32 lines. Concerning the hardware complexity, neither encoder nor decoder should require more than 50% of an FPGA similar to a Xilinx Artix 7 or 25% of an FPGA similar to an Altera Cyclone 5. This process resulted in a coding scheme made of an optional color transform, a wavelet transform, the entropy coding of the highest magnitude level of groups of coefficients, and the raw inclusion of the truncated wavelet coefficients. This paper presents the details and status of the standardization process, a technical description of the future standard, and the latest performance evaluation results.
Parallel efficient rate control methods for JPEG 2000
NASA Astrophysics Data System (ADS)
Martínez-del-Amor, Miguel Á.; Bruns, Volker; Sparenberg, Heiko
2017-09-01
Since the introduction of JPEG 2000, several rate control methods have been proposed. Among them, post-compression rate-distortion optimization (PCRD-Opt) is the most widely used, and the one recommended by the standard. The approach followed by this method is to first compress the entire image, split into code blocks, and subsequently truncate the set of generated bit streams optimally according to the maximum target bit rate constraint. The literature proposes various strategies for estimating ahead of time where a block will be truncated in order to stop execution prematurely and save time. However, none of them has been defined with a parallel implementation in mind. Today, multi-core and many-core architectures are becoming popular for JPEG 2000 codec implementations. Therefore, in this paper, we analyze how some techniques for efficient rate control can be deployed on GPUs. To this end, the design of our GPU-based codec is extended to allow stopping the process at a given point. This extension also harnesses a higher level of parallelism on the GPU, leading to up to 40% speedup with 4K test material on a Titan X. In a second step, three selected rate control methods are adapted and implemented in our parallel encoder. A comparison is then carried out and used to select the best candidate to be deployed in a GPU encoder, which gives an extra 40% speedup in those situations where it is actually employed.
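A simplified sketch of the PCRD-Opt idea: per-code-block truncation points are chosen by a Lagrangian trade-off, with the multiplier found by bisection so the total fits the byte budget. Real PCRD-Opt restricts candidates to the rate-distortion convex hull and works on coding-pass boundaries; the data layout here is hypothetical.

```python
def pcrd_truncate(blocks, max_bytes, iters=40):
    """blocks: list of per-code-block candidate truncation points, each a list
    of (rate_bytes, distortion) pairs. Returns one choice per block whose total
    rate fits within max_bytes while minimizing distortion + lambda * rate."""
    def pick(lmbda):
        choices, total_rate = [], 0
        for points in blocks:
            best = min(points, key=lambda rd: rd[1] + lmbda * rd[0])
            choices.append(best)
            total_rate += best[0]
        return choices, total_rate

    lo, hi = 0.0, 1e9          # lambda=0 ignores rate; a huge lambda minimizes rate
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        _, rate = pick(mid)
        if rate > max_bytes:
            lo = mid           # over budget: penalize rate more strongly
        else:
            hi = mid
    return pick(hi)[0]

# Two toy code blocks, each with three candidate truncation points.
blocks = [[(100, 50.0), (300, 20.0), (600, 5.0)],
          [(200, 40.0), (400, 15.0), (800, 3.0)]]
print(pcrd_truncate(blocks, max_bytes=700))
```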
Efficient transmission of compressed data for remote volume visualization.
Krishnan, Karthik; Marcellin, Michael W; Bilgin, Ali; Nadar, Mariappan S
2006-09-01
One of the goals of telemedicine is to enable remote visualization and browsing of medical volumes. There is a need to employ scalable compression schemes and efficient client-server models to obtain interactivity and an enhanced viewing experience. First, we present a scheme that uses JPEG2000 and JPIP (JPEG2000 Interactive Protocol) to transmit data in a multi-resolution and progressive fashion. The server exploits the spatial locality offered by the wavelet transform and packet indexing information to transmit, insofar as possible, the compressed volume data relevant to the client's query. Once the client identifies its volume of interest (VOI), the volume is refined progressively within the VOI from an initial lossy to a final lossless representation. Contextual background information can also be made available, with quality fading away from the VOI. Second, we present a prioritization that enables the client to progressively visualize scene content from a compressed file. In our specific example, the client is able to make requests to progressively receive data corresponding to any tissue type. The server is capable of reordering the same compressed data file on the fly to serve data packets prioritized as per the client's request. Lastly, we describe the effect of compression parameters on compression ratio, decoding times and interactivity. We also present suggestions for optimizing JPEG2000 for remote volume visualization and volume browsing applications. The resulting system is ideally suited for client-server applications with the server maintaining the compressed volume data, to be browsed by a client with a low bandwidth constraint.
Evaluation of Algorithms for Compressing Hyperspectral Data
NASA Technical Reports Server (NTRS)
Cook, Sid; Harsanyi, Joseph; Faber, Vance
2003-01-01
With EO-1 Hyperion in orbit, NASA is showing its continued commitment to hyperspectral imaging (HSI). As HSI sensor technology continues to mature, the ever-increasing amounts of sensor data generated will result in a need for more cost-effective communication and data handling systems. Lockheed Martin, with considerable experience in spacecraft design and in developing special-purpose onboard processors, has teamed with Applied Signal & Image Technology (ASIT), which has an extensive heritage in HSI spectral compression, and Mapping Science (MSI), for JPEG 2000 spatial compression expertise, to develop a real-time and intelligent onboard processing (OBP) system to reduce HSI sensor downlink requirements. Our goal is to reduce the downlink requirement by a factor > 100, while retaining the necessary spectral and spatial fidelity of the sensor data needed to satisfy the many science, military, and intelligence goals of these systems. Our compression algorithms leverage commercial-off-the-shelf (COTS) spectral and spatial exploitation algorithms. We are currently evaluating these compression algorithms using statistical analysis and assessments by NASA scientists. We are also developing special-purpose processors for executing these algorithms onboard a spacecraft.
Visual information processing II; Proceedings of the Meeting, Orlando, FL, Apr. 14-16, 1993
NASA Technical Reports Server (NTRS)
Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)
1993-01-01
Various papers on visual information processing are presented. Individual topics addressed include: aliasing as noise, satellite image processing using a hammering neural network, an edge-detection method based on visual perception, adaptive vector median filters, design of a reading test for low vision, image warping, spatial transformation architectures, an automatic image-enhancement method, redundancy reduction in image coding, lossless gray-scale image compression by predictive GDF, information efficiency in visual communication, optimizing JPEG quantization matrices for different applications, use of forward error correction to maintain image fidelity, and the effect of Peano scanning on image compression. Also discussed are: computer vision for autonomous robotics in space, an optical processor for zero-crossing edge detection, fractal-based image edge detection, simulation of the neon spreading effect by bandpass filtering, the wavelet transform (WT) on parallel SIMD architectures, nonseparable 2D wavelet image representation, adaptive image halftoning based on the WT, wavelet analysis of global warming, use of the WT for signal detection, perfect-reconstruction two-channel rational filter banks, N-wavelet coding for pattern classification, simulation of images of natural objects, and number-theoretic coding for iconic systems.
A study on multiresolution lossless video coding using inter/intra frame adaptive prediction
NASA Astrophysics Data System (ADS)
Nakachi, Takayuki; Sawabe, Tomoko; Fujii, Tetsuro
2003-06-01
Lossless video coding is required in the fields of archiving and editing digital cinema or digital broadcasting contents. This paper combines a discrete wavelet transform and adaptive inter/intra-frame prediction in the wavelet transform domain to create multiresolution lossless video coding. The multiresolution structure offered by the wavelet transform facilitates interchange among several video source formats such as Super High Definition (SHD) images, HDTV, SDTV, and mobile applications. Adaptive inter/intra-frame prediction is an extension of JPEG-LS, a state-of-the-art lossless still image compression standard. Based on the image statistics of the wavelet transform domains in successive frames, inter/intra frame adaptive prediction is applied to the appropriate wavelet transform domain. This adaptation offers superior compression performance. This is achieved with low computational cost and no increase in additional information. Experiments on digital cinema test sequences confirm the effectiveness of the proposed algorithm.
NASA Astrophysics Data System (ADS)
Xie, ChengJun; Xu, Lin
2008-03-01
This paper presents an algorithm based on a mixing transform with wave-band grouping to eliminate spectral redundancy. The algorithm adapts to differences in correlation between different spectral bands, and it still works well when the band number is not a power of 2. Non-boundary-extension CDF(2,2) DWT and a subtraction mixing transform are used to eliminate spectral redundancy, CDF(2,2) DWT is employed to eliminate spatial redundancy, and SPIHT+CABAC is used for compression coding; the experiments show that a satisfactory lossless compression result can be achieved. Using the hyper-spectral image Canal from the American JPL laboratory as the data set for the lossless compression test, when the band number is not a power of 2 the lossless compression result of this algorithm is much better than the results obtained with JPEG-LS, WinZip, ARJ, DPCM, the research achievements of a research team of the Chinese Academy of Sciences, Minimum Spanning Tree and Near Minimum Spanning Tree; on average the compression ratio of this algorithm exceeds the above algorithms by 41%, 37%, 35%, 29%, 16%, 10% and 8%, respectively. When the band number is a power of 2, for 128 frames of the image Canal, groupings of 8, 16 and 32 bands were tested; considering factors such as compression storage complexity, the type of wave band, and the compression effect, we suggest using 8 bands per group to achieve a better compression effect. The algorithm has advantages in operation speed and convenience of hardware realization.
Pantanowitz, Liron; Liu, Chi; Huang, Yue; Guo, Huazhang; Rohde, Gustavo K.
2017-01-01
Introduction: The quality of data obtained from image analysis can be directly affected by several preanalytical (e.g., staining, image acquisition), analytical (e.g., algorithm, region of interest [ROI]), and postanalytical (e.g., computer processing) variables. Whole-slide scanners generate digital images that may vary depending on the type of scanner and device settings. Our goal was to evaluate the impact of altering brightness, contrast, compression, and blurring on image analysis data quality. Methods: Slides from 55 patients with invasive breast carcinoma were digitized to include a spectrum of human epidermal growth factor receptor 2 (HER2) scores analyzed with Visiopharm (30 cases with score 0, 10 with 1+, 5 with 2+, and 10 with 3+). For all images, an ROI was selected and four parameters (brightness, contrast, JPEG2000 compression, out-of-focus blurring) then serially adjusted. HER2 scores were obtained for each altered image. Results: HER2 scores decreased with increased illumination, higher compression ratios, and increased blurring. HER2 scores increased with greater contrast. Cases with HER2 score 0 were least affected by image adjustments. Conclusion: This experiment shows that variations in image brightness, contrast, compression, and blurring can have major influences on image analysis results. Such changes can result in under- or over-scoring with image algorithms. Standardization of image analysis is recommended to minimize the undesirable impact such variations may have on data output. PMID:28966838
Implementation of image transmission server system using embedded Linux
NASA Astrophysics Data System (ADS)
Park, Jong-Hyun; Jung, Yeon Sung; Nam, Boo Hee
2005-12-01
In this paper, we implemented an image transmission server system on an embedded platform, which is dedicated to a specific task and is easy to install and move. Since the embedded system has lower processing capability than a PC, we had to reduce the computational load of baseline JPEG image compression and transmission. We used the Red Hat Linux 9.0 OS on the host PC and embedded Linux on the target board. The image sequences are obtained from a camera attached to an FPGA (Field Programmable Gate Array) board with an Altera Corporation chip. For effectiveness and to avoid vendor-specific constraints, we implemented the device driver as a kernel module.
Non-parametric adaptive JPEG fragments carving
NASA Astrophysics Data System (ADS)
Amrouche, Sabrina Cherifa; Salamani, Dalila
2018-04-01
The most challenging JPEG recovery tasks arise when the file header is missing. In this paper we propose a two-layer machine learning model to restore headerless JPEG images. We first build a classifier able to identify the structural properties of the images/fragments and then use an AutoEncoder (AE) to learn the fragment features for header prediction. We define a universal JPEG header, and the remaining free image parameters (height, width) are predicted with a gradient boosting classifier. Our approach achieved 90% accuracy using the manually defined features and 78% accuracy using the AE features.
Toward privacy-preserving JPEG image retrieval
NASA Astrophysics Data System (ADS)
Cheng, Hang; Wang, Jingyue; Wang, Meiqing; Zhong, Shangping
2017-07-01
This paper proposes a privacy-preserving retrieval scheme for JPEG images based on local variance. Three parties are involved in the scheme: the content owner, the server, and the authorized user. The content owner encrypts JPEG images for privacy protection by jointly using permutation cipher and stream cipher, and then, the encrypted versions are uploaded to the server. With an encrypted query image provided by an authorized user, the server may extract blockwise local variances in different directions without knowing the plaintext content. After that, it can calculate the similarity between the encrypted query image and each encrypted database image by a local variance-based feature comparison mechanism. The authorized user with the encryption key can decrypt the returned encrypted images with plaintext content similar to the query image. The experimental results show that the proposed scheme not only provides effective privacy-preserving retrieval service but also ensures both format compliance and file size preservation for encrypted JPEG images.
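A minimal sketch of the blockwise local-variance feature used for similarity comparison. In the scheme above the server extracts these variances from encrypted data; here they are computed on plaintext purely to illustrate the feature, and the compared images are assumed to share dimensions.

```python
import numpy as np

def blockwise_variances(img, block=8):
    """Local variance of each non-overlapping 8x8 block."""
    h, w = img.shape
    feats = [img[i:i + block, j:j + block].astype(float).var()
             for i in range(0, h - block + 1, block)
             for j in range(0, w - block + 1, block)]
    return np.array(feats)

def similarity(query_img, db_img):
    """Higher value = more similar variance distributions (L1 on sorted features)."""
    q = np.sort(blockwise_variances(query_img))
    d = np.sort(blockwise_variances(db_img))
    return -np.abs(q - d).mean()
```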
Multiple-image hiding using super resolution reconstruction in high-frequency domains
NASA Astrophysics Data System (ADS)
Li, Xiao-Wei; Zhao, Wu-Xiang; Wang, Jun; Wang, Qiong-Hua
2017-12-01
In this paper, a robust multiple-image hiding method using computer-generated integral imaging and a modified super-resolution reconstruction algorithm is proposed. In our work, the host image is first transformed into frequency domains by cellular automata (CA); to preserve the quality of the stego-image, the secret images are embedded into the CA high-frequency domains. The proposed method has the following advantages: (1) robustness to geometric attacks, because of the memory-distributed property of elemental images, and (2) improved quality of the reconstructed secret images, since the scheme utilizes the modified super-resolution reconstruction algorithm. The simulation results show that the proposed multiple-image hiding method outperforms other similar hiding methods and is robust to attacks such as Gaussian noise and JPEG compression.
Perceptual Image Compression in Telemedicine
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)
1996-01-01
The next era of space exploration, especially the "Mission to Planet Earth", will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these three techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications of our technology to the special problems of telemedicine.
An efficient multiple exposure image fusion in JPEG domain
NASA Astrophysics Data System (ADS)
Hebbalaguppe, Ramya; Kakarala, Ramakrishna
2012-01-01
In this paper, we describe a method to fuse multiple images taken with varying exposure times in the JPEG domain. The proposed algorithm finds application in HDR image acquisition and image stabilization for hand-held devices such as mobile phones, music players with cameras, and digital cameras. Image acquisition in low light typically results in blurry and noisy images for hand-held cameras. Altering camera settings such as ISO sensitivity, exposure time and aperture for low-light capture results in noise amplification, motion blur and reduction of depth of field, respectively. The purpose of fusing multiple exposures is to combine the sharp details of the shorter-exposure images with the high signal-to-noise ratio (SNR) of the longer-exposure images. The algorithm requires only a single pass over all images, making it efficient. It comprises sigmoidal boosting of the shorter-exposed images, image fusion, artifact removal and saturation detection. The algorithm needs no more memory than a single JPEG macroblock, making it feasible to implement as part of a digital camera's hardware image processing engine. The artifact removal step reuses JPEG's built-in frequency analysis and hence benefits from the considerable optimization and design experience available for JPEG.
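A pixel-domain sketch of the two core steps named above, sigmoidal boosting of the short exposure followed by a saturation-aware weighted fusion. The actual algorithm operates on JPEG macroblocks; the gain, midpoint and weighting function here are illustrative placeholders.

```python
import numpy as np

def sigmoid_boost(img, gain=8.0, midpoint=0.35):
    """Sigmoidal tone boost for a short-exposure image with values in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-gain * (img - midpoint)))

def fuse_exposures(images, eps=1e-6):
    """Average the exposures, weighting each pixel by how far it sits from
    under- or over-exposure (0 or 1), so well-exposed pixels dominate."""
    stack = np.stack([im.astype(float) for im in images])
    weights = 1.0 - 2.0 * np.abs(stack - 0.5) + eps
    return (stack * weights).sum(axis=0) / weights.sum(axis=0)

short_exposure = np.clip(np.random.rand(64, 64) * 0.4, 0.0, 1.0)
long_exposure = np.clip(np.random.rand(64, 64) * 0.6 + 0.4, 0.0, 1.0)
fused = fuse_exposures([sigmoid_boost(short_exposure), long_exposure])
```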
Optimizing Cloud Based Image Storage, Dissemination and Processing Through Use of Mrf and Lerc
NASA Astrophysics Data System (ADS)
Becker, Peter; Plesea, Lucian; Maurer, Thomas
2016-06-01
The volume and number of geospatial images being collected continue to increase exponentially with the ever-increasing number of airborne and satellite imaging platforms and the increasing rate of data collection. As a result, the cost of fast storage required to provide access to the imagery is a major cost factor in enterprise image management solutions that handle, process and disseminate the imagery and the information extracted from it. Cloud-based object storage offers significantly lower cost and elastic storage for this imagery, but also adds some disadvantages in terms of greater latency for data access and lack of traditional file access. Although traditional file formats such as GeoTIFF, JPEG2000 and NITF can be downloaded from such object storage, their structure and available compression are not optimal and access performance is curtailed. This paper provides details of a solution utilizing a new open image format for storage of and access to geospatial imagery, optimized for cloud storage and processing. MRF (Meta Raster Format) is optimized for large collections of scenes such as those acquired from optical sensors. The format enables optimized data access from cloud storage, along with the use of new compression options which cannot easily be added to existing formats. The paper also provides an overview of LERC, a new image compression method usable with MRF that provides very good lossless and controlled lossy compression.
Use of zerotree coding in a high-speed pyramid image multiresolution decomposition
NASA Astrophysics Data System (ADS)
Vega-Pineda, Javier; Cabrera, Sergio D.; Lucero, Aldo
1995-03-01
A Zerotree (ZT) coding scheme is applied as a post-processing stage to avoid transmitting zero data in the High-Speed Pyramid (HSP) image compression algorithm. This algorithm has features that increase the capability of ZT coding to give very high compression rates. In this paper the impact of the ZT coding scheme is analyzed and quantified. The HSP algorithm creates a discrete-time multiresolution analysis based on a hierarchical decomposition technique that is a subsampling pyramid. The filters used to create the image residues and expansions can be related to wavelet representations. According to the pixel coordinates and the level in the pyramid, N^2 different wavelet basis functions of various sizes and rotations are linearly combined. The HSP algorithm is computationally efficient because of the simplicity of the required operations, and as a consequence it can be implemented very easily in VLSI hardware. This is the HSP's principal advantage over other compression schemes. The ZT coding technique transforms the different quantized image residual levels created by the HSP algorithm into a bit stream. The use of ZTs further compresses the already compressed image by taking advantage of parent-child relationships (trees) between the pixels of the residue images at different levels of the pyramid. Zerotree coding uses the links between zeros along the hierarchical structure of the pyramid to avoid transmitting those that form branches of all zeros. Compression performance and algorithm complexity of the combined HSP-ZT method are compared with those of the JPEG standard technique.
Robust image obfuscation for privacy protection in Web 2.0 applications
NASA Astrophysics Data System (ADS)
Poller, Andreas; Steinebach, Martin; Liu, Huajian
2012-03-01
We present two approaches to robust image obfuscation based on permutation of image regions and channel intensity modulation. The proposed concept of robust image obfuscation is a step towards end-to-end security in Web 2.0 applications. It helps to protect the privacy of the users against threats caused by internet bots and web applications that extract biometric and other features from images for data-linkage purposes. The approaches described in this paper consider that images uploaded to Web 2.0 applications pass several transformations, such as scaling and JPEG compression, until the receiver downloads them. In contrast to existing approaches, our focus is on usability, therefore the primary goal is not a maximum of security but an acceptable trade-off between security and resulting image quality.
NASA Technical Reports Server (NTRS)
Tilton, James C.; Manohar, Mareboyana
1994-01-01
Recent advances in imaging technology make it possible to obtain imagery data of the Earth at high spatial, spectral and radiometric resolutions from Earth orbiting satellites. The rate at which the data is collected from these satellites can far exceed the channel capacity of the data downlink. Reducing the data rate to within the channel capacity can often require painful trade-offs in which certain scientific returns are sacrificed for the sake of others. In this paper we model the radiometric version of this form of lossy compression by dropping a specified number of least significant bits from each data pixel and compressing the remaining bits using an appropriate lossless compression technique. We call this approach 'truncation followed by lossless compression' or TLLC. We compare the TLLC approach with applying a lossy compression technique to the data for reducing the data rate to the channel capacity, and demonstrate that each of three different lossy compression techniques (JPEG/DCT, VQ and Model-Based VQ) give a better effective radiometric resolution than TLLC for a given channel rate.
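A minimal sketch of the TLLC approach described above: drop a specified number of least significant bits from each pixel, then apply a lossless coder to what remains. zlib stands in here for whichever lossless technique is appropriate; the random test image is only a placeholder.

```python
import zlib
import numpy as np

def tllc_compress(pixels, dropped_bits):
    """Truncation followed by lossless compression (TLLC)."""
    truncated = (pixels >> dropped_bits).astype(pixels.dtype)
    return zlib.compress(truncated.tobytes(), level=9)

image = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
payload = tllc_compress(image, dropped_bits=2)
print(len(payload) / image.nbytes)   # effective compression ratio
```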
[Development of a video image system for wireless capsule endoscopes based on DSP].
Yang, Li; Peng, Chenglin; Wu, Huafeng; Zhao, Dechun; Zhang, Jinhua
2008-02-01
A video image recorder to record video pictures for wireless capsule endoscopes was designed. The TMS320C6211 DSP from Texas Instruments Inc. is the core processor of this system. Images are periodically acquired from a Composite Video Broadcast Signal (CVBS) source and scaled by a video decoder (SAA7114H). Video data are transported from a high-speed First-In First-Out (FIFO) buffer to the Digital Signal Processor (DSP) under the control of a Complex Programmable Logic Device (CPLD). This paper adopts the JPEG algorithm for image coding, and the compressed data in the DSP are stored to a Compact Flash (CF) card. The TMS320C6211 DSP is mainly used for image compression and data transport. A fast Discrete Cosine Transform (DCT) algorithm and a fast coefficient quantization algorithm are used to accelerate the operation of the DSP and reduce the executable code. At the same time, proper addresses are assigned to the memories, which have different speeds, and the memory structure is optimized. In addition, this system makes extensive use of Extended Direct Memory Access (EDMA) to transport and process image data, resulting in stable, high performance.
Visually Lossless JPEG 2000 for Remote Image Browsing
Oh, Han; Bilgin, Ali; Marcellin, Michael
2017-01-01
Image sizes have increased exponentially in recent years. The resulting high-resolution images are often viewed via remote image browsing. Zooming and panning are desirable features in this context, which result in disparate spatial regions of an image being displayed at a variety of (spatial) resolutions. When an image is displayed at a reduced resolution, the quantization step sizes needed for visually lossless quality generally increase. This paper investigates the quantization step sizes needed for visually lossless display as a function of resolution, and proposes a method that effectively incorporates the resulting (multiple) quantization step sizes into a single JPEG2000 codestream. This codestream is JPEG2000 Part 1 compliant and allows for visually lossless decoding at all resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream. When images are browsed remotely using the JPEG2000 Interactive Protocol (JPIP), the required bandwidth is significantly reduced, as demonstrated by extensive experimental results. PMID:28748112
Segmentation-driven compound document coding based on H.264/AVC-INTRA.
Zaghetto, Alexandre; de Queiroz, Ricardo L
2007-07-01
In this paper, we explore H.264/AVC operating in intraframe mode to compress a mixed image, i.e., one composed of text, graphics, and pictures. Even though mixed-content (compound) documents usually require the use of multiple compressors, we apply a single compressor for both text and pictures. To do so, distortion is treated differently in text and picture regions. Our approach is to use a segmentation-driven adaptation strategy to change the H.264/AVC quantization parameter on a macroblock-by-macroblock basis, i.e., we divert bits from pictorial regions to text in order to keep text edges sharp. We show results of the segmentation-driven quantizer adaptation method applied to compressing documents. Our reconstructed images have better text sharpness compared to straight unadapted coding, with negligible visual losses in pictorial regions. Our results also highlight the fact that H.264/AVC-INTRA outperforms coders such as JPEG-2000 as a single coder for compound images.
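A minimal sketch of the segmentation-driven adaptation: per-macroblock labels select the quantization parameter, spending more bits (lower QP) on text. The labels and the specific QP values are illustrative, not the paper's settings.

```python
def assign_qp(macroblock_labels, qp_text=22, qp_picture=34):
    """Lower QP (finer quantization) for text macroblocks, higher for pictures."""
    return [qp_text if label == "text" else qp_picture
            for label in macroblock_labels]

print(assign_qp(["text", "picture", "text", "picture"]))
```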
Enabling Near Real-Time Remote Search for Fast Transient Events with Lossy Data Compression
NASA Astrophysics Data System (ADS)
Vohl, Dany; Pritchard, Tyler; Andreoni, Igor; Cooke, Jeffrey; Meade, Bernard
2017-09-01
We present a systematic evaluation of JPEG2000 (ISO/IEC 15444) as a transport data format to enable rapid remote searches for fast transient events as part of the Deeper Wider Faster programme. The Deeper Wider Faster programme uses 20 telescopes from radio to gamma rays to perform simultaneous and rapid-response follow-up searches for fast transient events on millisecond-to-hours timescales. Its search imposes a set of constraints that is becoming common amongst large collaborations. Here, we focus on the rapid optical data component of the programme, led by the Dark Energy Camera at Cerro Tololo Inter-American Observatory. Each Dark Energy Camera image comprises 70 charge-coupled devices and is saved as a 1.2 gigabyte FITS file. Near real-time data processing and fast transient candidate identification, within minutes to allow rapid follow-up triggers on other telescopes, require computational power exceeding what is currently available on-site at Cerro Tololo Inter-American Observatory. In this context, data files need to be transmitted rapidly to a remote location for supercomputing post-processing, source finding, visualisation and analysis. This step in the search process poses a major bottleneck, and reducing the data size helps accommodate faster data transmission. To maximise the gain in transfer time and still achieve our science goals, we opt for lossy data compression, keeping in mind that the raw data are archived and can be evaluated at a later time. We evaluate how lossy JPEG2000 compression affects the process of finding transients, and find only a negligible effect for compression ratios up to 25:1. We also find a linear relation between compression ratio and the mean estimated data transmission speed-up factor. Adding highly customised compression and decompression steps to the science pipeline considerably reduces the transmission time, validating its introduction to the Deeper Wider Faster programme science pipeline and enabling science that was otherwise too difficult with current technology.
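A back-of-the-envelope sketch of the transmission speed-up trade-off mentioned above, using the quoted 1.2 GB raw image size; the link speed and the compression/decompression times are hypothetical inputs, not measured values from the programme.

```python
def transfer_speedup(compression_ratio, compress_time_s, decompress_time_s,
                     raw_size_gb=1.2, link_mbps=100.0):
    """Estimated speed-up of (compress + send + decompress) over sending raw data."""
    raw_bits = raw_size_gb * 8e9
    t_raw = raw_bits / (link_mbps * 1e6)
    t_compressed = (raw_bits / compression_ratio) / (link_mbps * 1e6)
    return t_raw / (compress_time_s + t_compressed + decompress_time_s)

print(transfer_speedup(compression_ratio=25.0, compress_time_s=5.0, decompress_time_s=5.0))
```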
Visualization of JPEG Metadata
NASA Astrophysics Data System (ADS)
Malik Mohamad, Kamaruddin; Deris, Mustafa Mat
There is much more information embedded in a JPEG image than just the graphics. Visualization of its metadata would benefit digital forensic investigators, allowing them to view embedded data, including for corrupted images where no graphics can be displayed, in order to assist evidence collection in cases such as child pornography or steganography. Tools such as metadata readers, editors and extraction tools are already available, but they mostly focus on visualizing the attribute information of JPEG Exif. However, none consolidate a marker summary, the header structure, the Huffman tables and the quantization tables in a single program. In this paper, metadata visualization is performed by developing a program able to summarize all existing markers, the header structure, the Huffman tables and the quantization tables of a JPEG file. The results show that visualization of metadata makes it easier to view the hidden information within a JPEG file.
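A minimal sketch of the marker-summary step: walk the JPEG segment structure and list each marker with its declared segment length. It handles the common length-prefixed header segments and stops at the start-of-scan marker; stand-alone markers inside the entropy-coded data are not parsed.

```python
def list_jpeg_markers(path):
    """Return (marker, segment_length) pairs for the header portion of a JPEG file."""
    with open(path, "rb") as f:
        data = f.read()
    assert data[:2] == b"\xff\xd8", "not a JPEG file (missing SOI marker)"
    markers, pos = [("SOI", 0)], 2
    while pos + 4 <= len(data) and data[pos] == 0xFF:
        marker = data[pos + 1]
        if marker == 0xD9:                     # EOI: end of image
            markers.append(("EOI", 0))
            break
        length = int.from_bytes(data[pos + 2:pos + 4], "big")
        markers.append((f"0xFF{marker:02X}", length))
        if marker == 0xDA:                     # SOS: entropy-coded data follows
            break
        pos += 2 + length
    return markers
```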
NASA Astrophysics Data System (ADS)
Aizenberg, Evgeni; Bigio, Irving J.; Rodriguez-Diaz, Eladio
2012-03-01
The Fourier descriptors paradigm is a well-established approach for affine-invariant characterization of shape contours. In the work presented here, we extend this method to images, and obtain a 2D Fourier representation that is invariant to image rotation. The proposed technique retains phase uniqueness, and therefore structural image information is not lost. Rotation-invariant phase coefficients were used to train a single multi-valued neuron (MVN) to recognize satellite and human face images rotated by a wide range of angles. Experiments yielded 100% and 96.43% classification rate for each data set, respectively. Recognition performance was additionally evaluated under effects of lossy JPEG compression and additive Gaussian noise. Preliminary results show that the derived rotation-invariant features combined with the MVN provide a promising scheme for efficient recognition of rotated images.
77 FR 59692 - 2014 Diversity Immigrant Visa Program
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-28
... the E-DV system. The entry will not be accepted and must be resubmitted. Group or family photographs... must be in the Joint Photographic Experts Group (JPEG) format. Image File Size: The maximum file size...). Image File Format: The image must be in the Joint Photographic Experts Group (JPEG) format. Image File...
NASA Astrophysics Data System (ADS)
Agueh, Max; Diouris, Jean-François; Diop, Magaye; Devaux, François-Olivier; De Vleeschouwer, Christophe; Macq, Benoit
2008-12-01
Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for a robust streaming of images and videos over MANET. The packet-based proposed scheme has a low complexity and is compliant to JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application; and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.
Observation sequences and onboard data processing of Planet-C
NASA Astrophysics Data System (ADS)
Suzuki, M.; Imamura, T.; Nakamura, M.; Ishi, N.; Ueno, M.; Hihara, H.; Abe, T.; Yamada, T.
Planet-C, or VCO (Venus Climate Orbiter), will carry 5 cameras in the UV-IR region to investigate the atmospheric dynamics of Venus: IR1 (1-micrometer IR camera), IR2 (2-micrometer IR camera), UVI (UV Imager), LIR (long-IR camera), and LAC (Lightning and Airglow Camera). During the 30-hour orbit, designed to quasi-synchronize with the super-rotation of the Venus atmosphere, 3 groups of scientific observations will be carried out: (i) image acquisition by 4 cameras (IR1, IR2, UVI, LIR) for 20 min every 2 hrs, (ii) LAC operation only when VCO is within the Venus shadow, and (iii) radio occultation. These observation sequences will define the scientific outputs of the VCO program, but the sequences must be reconciled with command/telemetry downlink and thermal/power conditions. To maximize the science data downlink, the data must be well compressed, and the compression efficiency and image quality are of significant scientific importance in the VCO program. Images from the 4 cameras (IR1, IR2 and UVI: 1K x 1K; LIR: 240 x 240) will be compressed using the JPEG2000 (J2K) standard. J2K was selected because of (a) no block noise, (b) efficiency, (c) both reversible and irreversible modes, (d) patent/royalty-free status, and (e) existing implementations as academic and commercial software, ICs, and ASIC logic designs. Data compression efficiencies of J2K are about 0.3 (reversible) and 0.1 ~ 0.01 (irreversible). The DE (Digital Electronics) unit, which controls the 4 cameras and handles onboard data processing and compression, is at the concept design stage. It is concluded that the J2K data compression logic circuits using space...
New procedures to evaluate visually lossless compression for display systems
NASA Astrophysics Data System (ADS)
Stolitzka, Dale F.; Schelkens, Peter; Bruylants, Tim
2017-09-01
Visually lossless image coding in isochronous display streaming or plesiochronous networks reduces link complexity and power consumption and increases available link bandwidth. A new set of codecs developed within the last four years promises a new level of coding quality, but requires new evaluation techniques that are sufficiently sensitive to the small artifacts or color variations induced by this new breed of codecs. This paper begins with a summary of the new ISO/IEC 29170-2, a procedure for the evaluation of visually lossless coding, and reports new work by JPEG to extend the procedure in two important ways: for HDR content and for evaluating the differences between still images, panning images and image sequences. ISO/IEC 29170-2 relies on processing test images through a well-defined process chain for subjective, forced-choice psychophysical experiments. The procedure sets an acceptable quality level equal to one just-noticeable difference. Traditional image and video coding evaluation techniques, such as those used for television evaluation, have not proven sufficiently sensitive to the small artifacts that may be induced by this breed of codecs. In 2015, JPEG received new requirements to expand evaluation of visually lossless coding to high dynamic range images, slowly moving images (i.e., panning), and image sequences. These requirements are the basis for new amendments to the ISO/IEC 29170-2 procedures described in this paper. These amendments promise to be highly useful for the new content in television and cinema mezzanine networks. The amendments passed the final ballot in April 2017 and are on track to be published in 2018.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-27
... already a U.S. citizen or a Lawful Permanent Resident, but you will not be penalized if you do. Group... specifications: Image File Format: The image must be in the Joint Photographic Experts Group (JPEG) format. Image... in the Joint Photographic Experts Group (JPEG) format. Image File Size: The maximum image file size...
Volcanoes of the Wrangell Mountains and Cook Inlet region, Alaska: selected photographs
Neal, Christina A.; McGimsey, Robert G.; Diggles, Michael F.
2001-01-01
Alaska is home to more than 40 active volcanoes, many of which have erupted violently and repeatedly in the last 200 years. This CD-ROM contains 97 digitized color 35-mm images which represent a small fraction of thousands of photographs taken by Alaska Volcano Observatory scientists, other researchers, and private citizens. The photographs were selected to portray Alaska's volcanoes, to document recent eruptive activity, and to illustrate the range of volcanic phenomena observed in Alaska. These images are for use by the interested public, multimedia producers, desktop publishers, and the high-end printing industry. The digital images are stored in the 'images' folder and can be read across Macintosh, Windows, DOS, OS/2, SGI, and UNIX platforms with applications that can read JPG (JPEG - Joint Photographic Experts Group format) or PCD (Kodak's PhotoCD (YCC) format) files. Throughout this publication, the image numbers match among the file names, figure captions, thumbnail labels, and other references. Also included on this CD-ROM are Windows and Macintosh viewers and engines for keyword searches (Adobe Acrobat Reader with Search). At the time of this publication, Kodak's policy on the distribution of color-management files is still unresolved, and so none is included on this CD-ROM. However, using the Universal Ektachrome or Universal Kodachrome transforms found in your software will provide excellent color. In addition to PhotoCD (PCD) files, this CD-ROM contains large (14.2'x19.5') and small (4'x6') screen-resolution (72 dots per inch; dpi) images in JPEG format. These undergo downsizing and compression relative to the PhotoCD images.
NASA Astrophysics Data System (ADS)
Yang, Keon Ho; Jung, Haijo; Kang, Won-Suk; Jang, Bong Mun; Kim, Joong Il; Han, Dong Hoon; Yoo, Sun-Kook; Yoo, Hyung-Sik; Kim, Hee-Joung
2006-03-01
The wireless mobile service with a high bit rate using CDMA-1X EVDO is now widely used in Korea, and mobile devices are increasingly being used as a conventional communication mechanism. We have developed a web-based mobile system that communicates patient information and images over CDMA-1X EVDO for emergency diagnosis. It is composed of a mobile web application system using the Microsoft Windows 2003 server and Internet Information Services. A mobile web PACS database for managing patient information and images was developed using Microsoft Access 2003. The wireless mobile emergency patient information and imaging communication system was developed using Microsoft Visual Studio .NET, and a JPEG 2000 ActiveX control for the PDA phone was developed using Microsoft Embedded Visual C++. CDMA-1X EVDO is used for connections between the mobile web servers and the PDA phone. This system allows fast access to the patient information database, storing both medical images and patient information, anytime and anywhere. In particular, images were compressed into the JPEG2000 format and transmitted from a mobile web PACS inside the hospital to a radiologist using a PDA phone located outside the hospital. The system also shows radiological images as well as physiological signal data, including blood pressure, vital signs, and so on, in the web browser of the PDA phone, so radiologists can diagnose more effectively. Good results were obtained with an RW-6100 PDA phone in the university hospital system of Sinchon Severance Hospital in Korea.
High-fidelity data embedding for image annotation.
He, Shan; Kirovski, Darko; Wu, Min
2009-02-01
High fidelity is a demanding requirement for data hiding, especially for images with artistic or medical value. This correspondence proposes a high-fidelity image watermarking for annotation with robustness to moderate distortion. To achieve the high fidelity of the embedded image, we introduce a visual perception model that aims at quantifying the local tolerance to noise for arbitrary imagery. Based on this model, we embed two kinds of watermarks: a pilot watermark that indicates the existence of the watermark and an information watermark that conveys a payload of several dozen bits. The objective is to embed 32 bits of metadata into a single image in such a way that it is robust to JPEG compression and cropping. We demonstrate the effectiveness of the visual model and the application of the proposed annotation technology using a database of challenging photographic and medical images that contain a large amount of smooth regions.
An efficient system for reliably transmitting image and video data over low bit rate noisy channels
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.
1994-01-01
This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative Trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model. This results in improved image decompression. These studies are summarized and the technical papers included in the appendices.
Switching theory-based steganographic system for JPEG images
NASA Astrophysics Data System (ADS)
Cherukuri, Ravindranath C.; Agaian, Sos S.
2007-04-01
Cellular communications constitute a significant portion of the global telecommunications market, so the need for secure communication over a mobile platform has increased exponentially. Steganography is the art of hiding critical data in an innocuous signal, which answers this need. JPEG is one of the most commonly used formats for storing and transmitting images on the web, and pictures captured with mobile cameras are mostly in JPEG format. In this article, we introduce a switching theory based steganographic system for JPEG images which is applicable to mobile and computer platforms. The proposed algorithm uses the fact that the energy distribution among the quantized AC coefficients varies from block to block and coefficient to coefficient. Existing approaches are effective with a subset of these coefficients, but when employed over all the coefficients they show their ineffectiveness. Therefore, we propose an approach that treats each set of AC coefficients with a different framework, thus enhancing the performance of the approach. The proposed system offers high capacity and embedding efficiency while withstanding simple statistical attacks. In addition, the embedded information can be retrieved without prior knowledge of the cover image. Based on simulation results, the proposed method demonstrates an improved embedding capacity over existing algorithms while maintaining a high embedding efficiency and preserving the statistics of the JPEG image after hiding information.
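For orientation only, the sketch below shows the general mechanics of hiding bits in the quantized AC coefficients of an 8x8 JPEG block by overwriting least significant bits of nonzero coefficients; it is a generic illustration, not the switching-theory algorithm proposed in the paper, and the function and variable names are assumptions.

```python
import numpy as np

def embed_bits(block: np.ndarray, bits: list) -> np.ndarray:
    """Embed message bits into the LSBs of nonzero AC coefficients of a quantized block."""
    flat = block.astype(np.int32).flatten()
    bit_iter = iter(bits)
    for i in range(1, flat.size):          # index 0 is the DC coefficient; leave it untouched
        if flat[i] != 0:
            try:
                b = next(bit_iter)
            except StopIteration:
                break
            flat[i] = (flat[i] & ~1) | b   # overwrite the least significant bit
    return flat.reshape(block.shape)

# Toy 8x8 block of quantized coefficients (mostly zeros, as after quantization).
block = np.zeros((8, 8), dtype=np.int32)
block[0, 0], block[0, 1], block[1, 0], block[2, 1] = 50, 6, -3, 2
stego = embed_bits(block, [1, 0, 1])
```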
A New Approach for Fingerprint Image Compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mazieres, Bertrand
1997-12-01
The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to about 2000 terabytes of information. Without any compression, transmitting a 10 MB card over a 9600 baud connection takes about 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore the FBI chose in 1993 a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI's published specification defines a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate assumption. Since the transform is made into 64 subbands, quite a lot of bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to the bit allocation that seems to make more sense theoretically. Then we describe some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder to that of the first encoder.
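A quick back-of-the-envelope check of the storage and transmission figures quoted in this abstract (the card size, line rate and WSQ ratio come from the text; the exact byte counts are assumptions):

```python
cards = 200_000_000                   # fingerprint cards on file
bytes_per_card = 10 * 1024 * 1024     # ~10 MB per digitized card
total_tb = cards * bytes_per_card / 1024**4
print(f"uncompressed archive: ~{total_tb:,.0f} TB")     # ~1,900 TB, i.e. roughly 2000 TB

baud = 9600                           # bits per second on the quoted modem link
hours = bytes_per_card * 8 / baud / 3600
print(f"one card at 9600 baud: ~{hours:.1f} hours")      # ~2.4 hours, close to the 3 hours quoted

wsq_ratio = 20                        # WSQ target compression ratio
print(f"one card after 20:1 WSQ: ~{bytes_per_card / wsq_ratio / 1024:.0f} KB")
```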
JHelioviewer: Open-Source Software for Discovery and Image Access in the Petabyte Age (Invited)
NASA Astrophysics Data System (ADS)
Mueller, D.; Dimitoglou, G.; Langenberg, M.; Pagel, S.; Dau, A.; Nuhn, M.; Garcia Ortiz, J. P.; Dietert, H.; Schmidt, L.; Hughitt, V. K.; Ireland, J.; Fleck, B.
2010-12-01
The unprecedented torrent of data returned by the Solar Dynamics Observatory is both a blessing and a barrier: a blessing for making available data with significantly higher spatial and temporal resolution, but a barrier for scientists to access, browse and analyze them. With such staggering data volume, the data is bound to be accessible only from a few repositories and users will have to deal with data sets effectively immobile and practically difficult to download. From a scientist's perspective this poses three challenges: accessing, browsing and finding interesting data while avoiding the proverbial search for a needle in a haystack. To address these challenges, we have developed JHelioviewer, an open-source visualization software that lets users browse large data volumes both as still images and movies. We did so by deploying an efficient image encoding, storage, and dissemination solution using the JPEG 2000 standard. This solution enables users to access remote images at different resolution levels as a single data stream. Users can view, manipulate, pan, zoom, and overlay JPEG 2000 compressed data quickly, without severe network bandwidth penalties. Besides viewing data, the browser provides third-party metadata and event catalog integration to quickly locate data of interest, as well as an interface to the Virtual Solar Observatory to download science-quality data. As part of the Helioviewer Project, JHelioviewer offers intuitive ways to browse large amounts of heterogeneous data remotely and provides an extensible and customizable open-source platform for the scientific community.
Detection of Copy-Rotate-Move Forgery Using Zernike Moments
NASA Astrophysics Data System (ADS)
Ryu, Seung-Jin; Lee, Min-Jeong; Lee, Heung-Kyu
As forgeries have become popular, the importance of forgery detection is much increased. Copy-move forgery, one of the most commonly used methods, copies a part of the image and pastes it into another part of the image. In this paper, we propose a detection method for copy-move forgery that localizes duplicated regions using Zernike moments. Since the magnitude of Zernike moments is algebraically invariant against rotation, the proposed method can detect a forged region even though it is rotated. Our scheme is also resilient to intentional distortions such as additive white Gaussian noise, JPEG compression, and blurring. Experimental results demonstrate that the proposed scheme is appropriate for identifying regions forged by copy-rotate-move forgery.
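The sketch below illustrates the block-matching skeleton such copy-move detectors share: extract a descriptor per block, sort the descriptors so near-duplicates become neighbours, and report matching pairs. The `block_feature` placeholder stands in for the rotation-invariant Zernike moment magnitudes used in the paper (a library such as mahotas provides them); the histogram used here is only a stand-in, and all names are assumptions.

```python
import numpy as np

def block_feature(block: np.ndarray) -> np.ndarray:
    # Placeholder descriptor; the paper uses Zernike moment magnitudes instead.
    hist, _ = np.histogram(block, bins=16, range=(0, 255))
    return hist / max(block.size, 1)

def find_duplicates(gray: np.ndarray, bsize: int = 16, tol: float = 1e-3):
    """Return pairs of block positions whose descriptors are nearly identical."""
    feats, coords = [], []
    for y in range(0, gray.shape[0] - bsize + 1, bsize):
        for x in range(0, gray.shape[1] - bsize + 1, bsize):
            feats.append(block_feature(gray[y:y + bsize, x:x + bsize]))
            coords.append((y, x))
    feats = np.asarray(feats)
    order = np.lexsort(feats.T)            # similar descriptors end up adjacent
    pairs = []
    for a, b in zip(order[:-1], order[1:]):
        if np.linalg.norm(feats[a] - feats[b]) < tol:
            pairs.append((coords[a], coords[b]))
    return pairs
```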
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-01
... need to submit a photo for a child who is already a U.S. citizen or a Legal Permanent Resident. Group... Joint Photographic Experts Group (JPEG) format; it must have a maximum image file size of two hundred... (dpi); the image file format in Joint Photographic Experts Group (JPEG) format; the maximum image file...
Lossless compression algorithm for multispectral imagers
NASA Astrophysics Data System (ADS)
Gladkova, Irina; Grossberg, Michael; Gottipati, Srikanth
2008-08-01
Multispectral imaging is becoming an increasingly important tool for monitoring the earth and its environment from spaceborne and airborne platforms. Multispectral imaging data consist of visible and IR measurements from a scene across space and spectrum. Growing data rates resulting from faster scanning and finer spatial and spectral resolution make compression an increasingly critical tool to reduce data volume for transmission and archiving. Research for NOAA NESDIS has been directed at finding, for the characteristics of satellite atmospheric Earth science imager sensor data, what level of lossless compression ratio can be obtained, as well as the appropriate types of mathematics and approaches that can come close to this data's entropy level. Conventional lossless methods do not achieve the theoretical limits for lossless compression of imager data as estimated from the Shannon entropy. In a previous paper, the authors introduced a lossless compression algorithm developed for MODIS as a proxy for future NOAA-NESDIS satellite-based Earth science multispectral imagers such as GOES-R. The algorithm is based on capturing spectral correlations using spectral prediction, and spatial correlations with a linear transform encoder. In decompression, the algorithm uses a statistically computed look-up table to iteratively predict each channel from a channel decompressed in the previous iteration. In this paper we present a new approach which fundamentally differs from our prior work. In this new approach, instead of having a single predictor for each pair of bands, we introduce a piecewise spatially varying predictor which significantly improves the compression results. Our new algorithm also optimizes the sequence of channels used for prediction. Our results are evaluated by comparison with a state-of-the-art wavelet-based image compression scheme, JPEG2000. We present results on the 14-channel subset of the MODIS imager, which serves as a proxy for the GOES-R imager. We also show results of the algorithm on NOAA AVHRR data and data from SEVIRI. The algorithm is designed to be adapted to a wide range of multispectral imagers and should facilitate distribution of data globally. This compression research is managed by Roger Heymann, PE of OSD NOAA NESDIS Engineering, in collaboration with the NOAA NESDIS STAR Research Office through Mitch Goldberg, Tim Schmit, Walter Wolf.
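To make the spectral-prediction idea concrete, here is a minimal sketch that predicts one band from another with a single least-squares fit and encodes only the residual; the paper's piecewise, spatially varying predictors and the channel-ordering optimization are far more elaborate, and all names and the toy data here are illustrative.

```python
import numpy as np

def predict_band(reference: np.ndarray, target: np.ndarray):
    """Fit target ~ a*reference + b and return the prediction and residual."""
    a, b = np.polyfit(reference.ravel().astype(float), target.ravel().astype(float), deg=1)
    prediction = a * reference + b
    residual = target - prediction          # the residual is what would be entropy-coded
    return prediction, residual

# Toy example with two correlated "bands".
rng = np.random.default_rng(0)
band1 = rng.integers(0, 1024, size=(64, 64)).astype(float)
band2 = 0.8 * band1 + 30 + rng.normal(0, 2, size=band1.shape)
_, res = predict_band(band1, band2)
print(res.std(), band2.std())               # residual spread is far smaller than the raw band's
```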
The Hazards Data Distribution System update
Jones, Brenda K.; Lamb, Rynn M.
2010-01-01
After a major disaster, a satellite image or a collection of aerial photographs of the event is frequently the fastest, most effective way to determine its scope and severity. The U.S. Geological Survey (USGS) Emergency Operations Portal provides emergency first responders and support personnel with easy access to imagery and geospatial data, geospatial Web services, and a digital library focused on emergency operations. Imagery and geospatial data are accessed through the Hazards Data Distribution System (HDDS). HDDS historically provided data access and delivery services through nongraphical interfaces that allow emergency response personnel to select and obtain pre-event baseline data and (or) event/disaster response data. First responders are able to access full-resolution GeoTIFF images or JPEG images at medium- and low-quality compressions through ftp downloads. USGS HDDS home page: http://hdds.usgs.gov/hdds2/
Integrated test system of infrared and laser data based on USB 3.0
NASA Astrophysics Data System (ADS)
Fu, Hui Quan; Tang, Lin Bo; Zhang, Chao; Zhao, Bao Jun; Li, Mao Wen
2017-07-01
Based on USB 3.0, this paper presents the design method of an integrated test system for both infrared image data and laser signal data processing modules. The core of the design is FPGA logic control; the design uses dual-chip DDR3 SDRAM to provide a high-speed laser data cache, receives parallel LVDS image data through a serial-to-parallel conversion chip, and achieves high-speed data communication between the system and the host computer through the USB 3.0 bus. The experimental results show that the developed PC software displays in real time the original 14-bit LVDS image after 14-to-8 bit conversion and the JPEG2000-compressed image after decompression in software, and can also display the acquired laser signal data in real time. The correctness of the test system design is verified, indicating that the interface link works normally.
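A minimal sketch of the 14-to-8 bit conversion step mentioned above, assuming a simple min-max stretch for display; the actual system may use a fixed shift or lookup table, and the array shapes here are illustrative.

```python
import numpy as np

def to_8bit(raw14: np.ndarray) -> np.ndarray:
    """Map a 14-bit frame to 8 bits for display using a min-max stretch."""
    lo, hi = int(raw14.min()), int(raw14.max())
    scaled = (raw14.astype(np.float32) - lo) / max(hi - lo, 1) * 255.0
    return scaled.astype(np.uint8)

frame = np.random.randint(0, 2**14, size=(480, 640), dtype=np.uint16)  # simulated 14-bit LVDS frame
display = to_8bit(frame)
```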
Compressed domain ECG biometric with two-lead features
NASA Astrophysics Data System (ADS)
Lee, Wan-Jou; Chang, Wen-Whei
2016-07-01
This study presents a new method to combine ECG biometrics with data compression within a common JPEG2000 framework. We target the two-lead ECG configuration that is routinely used in long-term heart monitoring. Incorporation of compressed-domain biometric techniques enables faster person identification as it by-passes the full decompression. Experiments on public ECG databases demonstrate the validity of the proposed method for biometric identification with high accuracies on both healthy and diseased subjects.
Interactive Courseware Standards
1992-07-01
music industry standard provides data formats and transmission specifications for musical notation. Joint Photographic Experts Group (JPEG). This...has been used in the music industry for several years, especially for electronically programmable keyboards and 16 instruments. The video compression
Effects of compression and individual variability on face recognition performance
NASA Astrophysics Data System (ADS)
McGarry, Delia P.; Arndt, Craig M.; McCabe, Steven A.; D'Amato, Donald P.
2004-08-01
The Enhanced Border Security and Visa Entry Reform Act of 2002 requires that the Visa Waiver Program be available only to countries that have a program to issue to their nationals machine-readable passports incorporating biometric identifiers complying with applicable standards established by the International Civil Aviation Organization (ICAO). In June 2002, the New Technologies Working Group of ICAO unanimously endorsed the use of face recognition (FR) as the globally interoperable biometric for machine-assisted identity confirmation with machine-readable travel documents (MRTDs), although Member States may elect to use fingerprint and/or iris recognition as additional biometric technologies. The means and formats are still being developed through which biometric information might be stored in the constrained space of integrated circuit chips embedded within travel documents. Such information will be stored in an open, yet unalterable and very compact format, probably as digitally signed and efficiently compressed images. The objective of this research is to characterize the many factors that affect FR system performance with respect to the legislated mandates concerning FR. A photograph acquisition environment and a commercial face recognition system have been installed at Mitretek, and over 1,400 images have been collected of volunteers. The image database and FR system are being used to analyze the effects of lossy image compression, individual differences, such as eyeglasses and facial hair, and the acquisition environment on FR system performance. Images are compressed by varying ratios using JPEG2000 to determine the trade-off points between recognition accuracy and compression ratio. The various acquisition factors that contribute to differences in FR system performance among individuals are also being measured. The results of this study will be used to refine and test efficient face image interchange standards that ensure highly accurate recognition, both for automated FR systems and human inspectors. Working within the M1-Biometrics Technical Committee of the InterNational Committee for Information Technology Standards (INCITS) organization, a standard face image format will be tested and submitted to organizations such as ICAO.
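A sketch of how one might generate JPEG2000 files at several compression ratios for such an experiment, using Pillow's JPEG 2000 plugin (which requires an OpenJPEG-enabled build); the file name and the particular ratios are assumptions, and the parameter names follow the Pillow documentation.

```python
from PIL import Image

face = Image.open("face_0001.png").convert("RGB")   # illustrative file name
for ratio in (10, 20, 40, 80):
    face.save(f"face_0001_r{ratio}.jp2",
              quality_mode="rates",        # interpret quality_layers as compression ratios
              quality_layers=[ratio],
              irreversible=True)           # lossy 9/7 wavelet path
```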
NASA Astrophysics Data System (ADS)
Ma, Long; Zhao, Deping
2011-12-01
Spectral imaging technology has been used mostly in remote sensing, but has recently been extended to new areas requiring high-fidelity color reproduction, such as telemedicine and e-commerce. These spectral imaging systems are important because they offer improved color reproduction quality not only for a standard observer under a particular illumination, but for any other individual exhibiting normal color vision capability under another illumination. A means of browsing the resulting archives is needed. In this paper, the authors present a new spectral image browsing architecture. The architecture for browsing is as follows: (1) the spectral domain of the spectral image is reduced with the PCA transform; as a result of the PCA transform, the eigenvectors and the eigenimages are obtained. (2) We quantize the eigenimages with the original bit depth of the spectral image (e.g., if the spectral image is originally 8-bit, the eigenimages are quantized to 8 bits), and use 32-bit floating-point numbers for the eigenvectors. (3) The first eigenimage is losslessly compressed with JPEG-LS; the other eigenimages are lossy-compressed with the wavelet-based SPIHT algorithm. For experimental evaluation, the following measures were used: PSNR as the measure of spectral accuracy, and ΔE for the evaluation of color reproducibility, with standard illuminant D65 used as the light source. To test the proposed method, we used the FOREST and CORAL spectral image databases, containing 12 and 10 spectral images, respectively. The images were acquired in the range of 403-696 nm; the size of the images was 128*128, the number of bands was 40, and the resolution was 8 bits per sample. Our experiments show the proposed compression method is suitable for browsing, i.e., for visual purposes.
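Step (1) of the architecture can be sketched with a plain SVD-based PCA over the spectral dimension, as below; the number of retained components, the random stand-in cube and all names are assumptions, and the quantization and JPEG-LS/SPIHT coding steps are omitted.

```python
import numpy as np

def pca_decompose(cube: np.ndarray, n_components: int = 8):
    """Reduce the spectral dimension of an (H, W, bands) cube; return eigenvectors and eigenimages."""
    h, w, bands = cube.shape
    data = cube.reshape(-1, bands).astype(np.float64)
    mean = data.mean(axis=0)
    _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
    eigenvectors = vt[:n_components].astype(np.float32)      # kept as 32-bit floats, as in the paper
    eigenimages = ((data - mean) @ eigenvectors.T).reshape(h, w, n_components)
    return eigenvectors, eigenimages, mean

cube = np.random.rand(128, 128, 40)                          # stand-in for a 40-band spectral image
vecs, imgs, mean = pca_decompose(cube)
```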
An FPGA-Based People Detection System
NASA Astrophysics Data System (ADS)
Nair, Vinod; Laprise, Pierre-Olivier; Clark, James J.
2005-12-01
This paper presents an FPGA-based system for detecting people from video. The system is designed to use JPEG-compressed frames from a network camera. Unlike previous approaches that use techniques such as background subtraction and motion detection, we use a machine-learning-based approach to train an accurate detector. We address the hardware design challenges involved in implementing such a detector, along with JPEG decompression, on an FPGA. We also present an algorithm that efficiently combines JPEG decompression with the detection process. This algorithm carries out the inverse DCT step of JPEG decompression only partially. Therefore, it is computationally more efficient and simpler to implement, and it takes up less space on the chip than the full inverse DCT algorithm. The system is demonstrated on an automated video surveillance application and the performance of both hardware and software implementations is analyzed. The results show that the system can detect people accurately at a rate of about [InlineEquation not available: see fulltext.] frames per second on a Virtex-II 2V1000 using a MicroBlaze processor running at [InlineEquation not available: see fulltext.], communicating with dedicated hardware over FSL links.
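The sketch below conveys the idea of a partial inverse DCT by keeping only the low-frequency k×k corner of each 8×8 coefficient block before a standard inverse transform; the paper's algorithm restructures the computation itself for hardware, so this is an illustration of the effect, not their implementation.

```python
import numpy as np
from scipy.fftpack import idct

def partial_idct(coeffs: np.ndarray, k: int = 4) -> np.ndarray:
    """Reconstruct an 8x8 block from only its top-left k x k (low-frequency) DCT coefficients."""
    kept = np.zeros_like(coeffs, dtype=np.float64)
    kept[:k, :k] = coeffs[:k, :k]
    return idct(idct(kept, axis=0, norm="ortho"), axis=1, norm="ortho")

block = np.random.randn(8, 8) * 20          # stand-in for one block of dequantized DCT coefficients
approx = partial_idct(block, k=4)
```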
Analysis of signal-dependent sensor noise on JPEG 2000-compressed Sentinel-2 multi-spectral images
NASA Astrophysics Data System (ADS)
Uss, M.; Vozel, B.; Lukin, V.; Chehdi, K.
2017-10-01
The processing chain of Sentinel-2 MultiSpectral Instrument (MSI) data involves filtering and compression stages that modify MSI sensor noise. As a result, noise in the Sentinel-2 Level-1C data distributed to users becomes processed noise. We demonstrate that the processed noise variance model is bivariate: noise variance depends on image intensity (caused by the signal-dependency of photon-counting detectors) and on signal-to-noise ratio (SNR; caused by filtering/compression). To provide information on processed noise parameters, which is missing in the Sentinel-2 metadata, we propose to use a blind noise parameter estimation approach. Existing methods are restricted to univariate noise models. Therefore, we propose an extension of the existing vcNI+fBm blind noise parameter estimation method to a multivariate noise model, mvcNI+fBm, and apply it to each band of Sentinel-2A data. The obtained results clearly demonstrate that noise variance is affected by filtering/compression for SNR less than about 15. Processed noise variance is reduced by a factor of 2 - 5 in homogeneous areas as compared to the noise variance at high SNR values. Estimates of the noise variance model parameters are provided for each Sentinel-2A band. The Sentinel-2A MSI Level-1C noise models obtained in this paper could be useful for end users and researchers working in a variety of remote sensing applications.
Scan-Based Implementation of JPEG 2000 Extensions
NASA Technical Reports Server (NTRS)
Rountree, Janet C.; Webb, Brian N.; Flohr, Thomas J.; Marcellin, Michael W.
2001-01-01
JPEG 2000 Part 2 (Extensions) contains a number of technologies that are of potential interest in remote sensing applications. These include arbitrary wavelet transforms, techniques to limit boundary artifacts in tiles, multiple component transforms, and trellis-coded quantization (TCQ). We are investigating the addition of these features to the low-memory (scan-based) implementation of JPEG 2000 Part 1. A scan-based implementation of TCQ has been realized and tested, with a very small performance loss as compared with the full image (frame-based) version. A proposed amendment to JPEG 2000 Part 2 will effect the syntax changes required to make scan-based TCQ compatible with the standard.
NASA Astrophysics Data System (ADS)
Kusyk, Janusz; Eskicioglu, Ahmet M.
2005-10-01
Digital watermarking is considered to be a major technology for the protection of multimedia data. Some of the important applications are broadcast monitoring, copyright protection, and access control. In this paper, we present a semi-blind watermarking scheme for embedding a logo in color images using the DFT domain. After computing the DFT of the luminance layer of the cover image, the magnitudes of DFT coefficients are compared, and modified. A given watermark is embedded in three frequency bands: Low, middle, and high. Our experiments show that the watermarks extracted from the lower frequencies have the best visual quality for low pass filtering, adding Gaussian noise, JPEG compression, resizing, rotation, and scaling, and the watermarks extracted from the higher frequencies have the best visual quality for cropping, intensity adjustment, histogram equalization, and gamma correction. Extractions from the fragmented and translated image are identical to extractions from the unattacked watermarked image. The collusion and rewatermarking attacks do not provide the hacker with useful tools.
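As a generic illustration of magnitude-domain embedding in the DFT (not the specific comparison rule of the paper), the sketch below nudges a band of DFT magnitudes up or down according to the watermark bits and inverts the transform; the band location, strength and all names are assumptions.

```python
import numpy as np

def embed_dft(luma: np.ndarray, bits: np.ndarray, band: slice = slice(30, 60), strength: float = 0.1):
    """Scale DFT magnitudes in a frequency band up/down per watermark bit, keep phase, invert."""
    spec = np.fft.fft2(luma.astype(np.float64))
    mag, phase = np.abs(spec), np.angle(spec)
    rows = np.arange(band.start, band.stop)
    mag[rows, band] *= 1.0 + strength * (2 * bits - 1)     # bit 1 boosts, bit 0 attenuates
    marked = np.fft.ifft2(mag * np.exp(1j * phase)).real   # conjugate symmetry not preserved; sketch only
    return np.clip(marked, 0, 255).astype(np.uint8)

luma = np.random.randint(0, 256, (256, 256)).astype(np.uint8)   # stand-in for a luminance layer
bits = np.random.randint(0, 2, size=(30, 30))
stego = embed_dft(luma, bits)
```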
System considerations for efficient communication and storage of MSTI image data
NASA Technical Reports Server (NTRS)
Rice, Robert F.
1994-01-01
The Ballistic Missile Defense Organization has been developing the capability to evaluate one or more high-rate sensor/hardware combinations by incorporating them as payloads on a series of Miniature Seeker Technology Insertion (MSTI) flights. This publication represents the final report of a 1993 study to analyze the potential impact of data compression and of related communication system technologies on post-MSTI 3 flights. Lossless compression is considered alone and in conjunction with various spatial editing modes. Additionally, JPEG and fractal algorithms are examined in order to bound the potential gains from the use of lossy compression, but lossless compression is clearly shown to better fit the goals of the MSTI investigations. Lossless compression factors of between 2:1 and 6:1 would provide significant benefits to both on-board mass memory and the downlink. For on-board mass memory, the savings could range from $5 million to $9 million. Such benefits should be possible by direct application of recently developed NASA VLSI microcircuits. It is shown that further downlink enhancements of 2:1 to 3:1 should be feasible through the use of practical modifications to the existing modulation system and incorporation of Reed-Solomon channel coding. The latter enhancement could also be achieved by applying recently developed VLSI microcircuits.
Digital Semaphore: Technical Feasibility of QR Code Optical Signaling for Fleet Communications
2013-06-01
Standards (http://www.iso.org) JIS Japanese Industrial Standard JPEG Joint Photographic Experts Group (digital image format; http://www.jpeg.org) LED...Denso Wave corporation in the 1990s for the Japanese automotive manufacturing industry. See Appendix A for full details. Reed-Solomon Error...eliminates camera blur induced by the shutter, providing clear images at extremely high frame rates. Thusly, digital cinema cameras are more suitable
JPEG2000 encoding with perceptual distortion control.
Liu, Zhen; Karam, Lina J; Watson, Andrew B
2006-07-01
In this paper, a new encoding approach is proposed to control the JPEG2000 encoding in order to reach a desired perceptual quality. The new method is based on a vision model that incorporates various masking effects of human visual perception and a perceptual distortion metric that takes spatial and spectral summation of individual quantization errors into account. Compared with the conventional rate-based distortion minimization JPEG2000 encoding, the new method provides a way to generate consistent quality images at a lower bit rate.
A Robust Image Watermarking in the Joint Time-Frequency Domain
NASA Astrophysics Data System (ADS)
Öztürk, Mahmut; Akan, Aydın; Çekiç, Yalçın
2010-12-01
With the rapid development of computers and internet applications, copyright protection of multimedia data has become an important problem. Watermarking techniques are proposed as a solution to copyright protection of digital media files. In this paper, a new, robust, and high-capacity watermarking method that is based on spatiofrequency (SF) representation is presented. We use the discrete evolutionary transform (DET) calculated by the Gabor expansion to represent an image in the joint SF domain. The watermark is embedded onto selected coefficients in the joint SF domain. Hence, by combining the advantages of spatial and spectral domain watermarking methods, a robust, invisible, secure, and high-capacity watermarking method is presented. A correlation-based detector is also proposed to detect and extract any possible watermarks on an image. The proposed watermarking method was tested on some commonly used test images under different signal processing attacks like additive noise, Wiener and Median filtering, JPEG compression, rotation, and cropping. Simulation results show that our method is robust against all of the attacks.
Face detection on distorted images using perceptual quality-aware features
NASA Astrophysics Data System (ADS)
Gunasekar, Suriya; Ghosh, Joydeep; Bovik, Alan C.
2014-02-01
We quantify the degradation in performance of a popular and effective face detector when human-perceived image quality is degraded by distortions due to additive white gaussian noise, gaussian blur or JPEG compression. It is observed that, within a certain range of perceived image quality, a modest increase in image quality can drastically improve face detection performance. These results can be used to guide resource or bandwidth allocation in a communication/delivery system that is associated with face detection tasks. A new face detector based on QualHOG features is also proposed that augments face-indicative HOG features with perceptual quality-aware spatial Natural Scene Statistics (NSS) features, yielding improved tolerance against image distortions. The new detector provides statistically significant improvements over a strong baseline on a large database of face images representing a wide range of distortions. To facilitate this study, we created a new Distorted Face Database, containing face and non-face patches from images impaired by a variety of common distortion types and levels. This new dataset is available for download and further experimentation at www.ideal.ece.utexas.edu/˜suriya/DFD/.
JHelioviewer: Open-Source Software for Discovery and Image Access in the Petabyte Age
NASA Astrophysics Data System (ADS)
Mueller, D.; Dimitoglou, G.; Garcia Ortiz, J.; Langenberg, M.; Nuhn, M.; Dau, A.; Pagel, S.; Schmidt, L.; Hughitt, V. K.; Ireland, J.; Fleck, B.
2011-12-01
The unprecedented torrent of data returned by the Solar Dynamics Observatory is both a blessing and a barrier: a blessing for making available data with significantly higher spatial and temporal resolution, but a barrier for scientists to access, browse and analyze them. With such staggering data volume, the data is accessible only from a few repositories and users have to deal with data sets effectively immobile and practically difficult to download. From a scientist's perspective this poses three challenges: accessing, browsing and finding interesting data while avoiding the proverbial search for a needle in a haystack. To address these challenges, we have developed JHelioviewer, an open-source visualization software that lets users browse large data volumes both as still images and movies. We did so by deploying an efficient image encoding, storage, and dissemination solution using the JPEG 2000 standard. This solution enables users to access remote images at different resolution levels as a single data stream. Users can view, manipulate, pan, zoom, and overlay JPEG 2000 compressed data quickly, without severe network bandwidth penalties. Besides viewing data, the browser provides third-party metadata and event catalog integration to quickly locate data of interest, as well as an interface to the Virtual Solar Observatory to download science-quality data. As part of the ESA/NASA Helioviewer Project, JHelioviewer offers intuitive ways to browse large amounts of heterogeneous data remotely and provides an extensible and customizable open-source platform for the scientific community. In addition, the easy-to-use graphical user interface enables the general public and educators to access, enjoy and reuse data from space missions without barriers.
Perceptually-Based Adaptive JPEG Coding
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)
1996-01-01
An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatial adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which multiplies the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yield maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
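A minimal sketch of the per-block multiplier mechanism described above: one base quantization matrix for the channel, scaled block by block. The base matrix, multipliers and names here are placeholders; the paper chooses the multipliers to flatten perceptual error, which is not shown.

```python
import numpy as np

def quantize_block(dct_block: np.ndarray, qmatrix: np.ndarray, multiplier: float) -> np.ndarray:
    """Quantize one 8x8 block with the base matrix scaled by a per-block multiplier."""
    return np.round(dct_block / (qmatrix * multiplier)).astype(np.int32)

qmatrix = np.full((8, 8), 16.0)             # placeholder base matrix, not the JPEG example table
block = np.random.randn(8, 8) * 100         # stand-in for one block's DCT coefficients
coarse = quantize_block(block, qmatrix, multiplier=2.0)   # busy block that masks errors well
fine = quantize_block(block, qmatrix, multiplier=0.5)     # smooth block where errors are visible
```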
An interactive toolbox for atlas-based segmentation and coding of volumetric images
NASA Astrophysics Data System (ADS)
Menegaz, G.; Luti, S.; Duay, V.; Thiran, J.-Ph.
2007-03-01
Medical imaging poses the great challenge of having compression algorithms that are lossless for diagnostic and legal reasons and yet provide high compression rates for reduced storage and transmission time. The images usually consist of a region of interest representing the part of the body under investigation surrounded by a "background", which is often noisy and not of diagnostic interest. In this paper, we propose a ROI-based 3D coding system integrating both the segmentation and the compression tools. The ROI is extracted by an atlas-based 3D segmentation method combining active contours with information-theoretic principles, and the resulting segmentation map is exploited for ROI-based coding. The system is equipped with a GUI allowing the medical doctors to supervise the segmentation process and, if needed, reshape the detected contours at any point. The process is initiated by the user through the selection of either one pre-defined reference image or one image of the volume to be used as the 2D "atlas". The object contour is successively propagated from one frame to the next, where it is used as the initial border estimation. In this way, the entire volume is segmented based on a unique 2D atlas. The resulting 3D segmentation map is exploited for adaptive coding of the different image regions. Two coding systems were considered: the JPEG3D standard and 3D-SPIHT. The evaluation of the performance with respect to both segmentation and coding proved the high potential of the proposed system in providing an integrated, low-cost and computationally effective solution for CAD and PACS systems.
NASA Astrophysics Data System (ADS)
Brown, Nicholas J.; Lloyd, David S.; Reynolds, Melvin I.; Plummer, David L.
2002-05-01
A visible digital image is rendered from a set of digital image data. Medical digital image data can be stored as either: (a) pre-rendered format, corresponding to a photographic print, or (b) un-rendered format, corresponding to a photographic negative. The appropriate image data storage format and associated header data (metadata) required by a user of the results of a diagnostic procedure recorded electronically depends on the task(s) to be performed. The DICOM standard provides a rich set of metadata that supports the needs of complex applications. Many end user applications, such as simple report text viewing and display of a selected image, are not so demanding and generic image formats such as JPEG are sometimes used. However, these are lacking some basic identification requirements. In this paper we make specific proposals for minimal extensions to generic image metadata of value in various domains, which enable safe use in the case of two simple healthcare end user scenarios: (a) viewing of text and a selected JPEG image activated by a hyperlink and (b) viewing of one or more JPEG images together with superimposed text and graphics annotation using a file specified by a profile of the ISO/IEC Basic Image Interchange Format (BIIF).
Chassy, Philippe; Lindell, Trym A E; Jones, Jessica A; Paramei, Galina V
2015-01-01
Image aesthetic pleasure (AP) is conjectured to be related to image visual complexity (VC). The aim of the present study was to investigate whether (a) two image attributes, AP and VC, are reflected in eye-movement parameters; and (b) subjective measures of AP and VC are related. Participants (N=26) explored car front images (M=50) while their eye movements were recorded. Following image exposure (10 seconds), its VC and AP were rated. Fixation count was found to positively correlate with the subjective VC and its objective proxy, JPEG compression size, suggesting that this eye-movement parameter can be considered an objective behavioral measure of VC. AP, in comparison, positively correlated with average dwelling time. Subjective measures of AP and VC were related too, following an inverted U-shape function best-fit by a quadratic equation. In addition, AP was found to be modulated by car prestige. Our findings reveal a close relationship between subjective and objective measures of complexity and aesthetic appraisal, which is interpreted within a prototype-based theory framework. © The Author(s) 2015.
Diagnostic accuracy of chest X-rays acquired using a digital camera for low-cost teleradiology.
Szot, Agnieszka; Jacobson, Francine L; Munn, Samson; Jazayeri, Darius; Nardell, Edward; Harrison, David; Drosten, Ralph; Ohno-Machado, Lucila; Smeaton, Laura M; Fraser, Hamish S F
2004-02-01
Store-and-forward telemedicine, using e-mail to send clinical data and digital images, offers a low-cost alternative for physicians in developing countries to obtain second opinions from specialists. To explore the potential usefulness of this technique, 91 chest X-ray images were photographed using a digital camera and a view box. Four independent readers (three radiologists and one pulmonologist) read two types of digital (JPEG and JPEG2000) and original film images and indicated their confidence in the presence of eight features known to be radiological indicators of tuberculosis (TB). The results were compared to a "gold standard" established by two different radiologists, and assessed using receiver operating characteristic (ROC) curve analysis. There was no statistical difference in the overall performance between the readings from the original films and both types of digital images. The size of JPEG2000 images was approximately 120KB, making this technique feasible for slow internet connections. Our preliminary results show the potential usefulness of this technique particularly for tuberculosis and lung disease, but further studies are required to refine its potential.
A new compression format for fiber tracking datasets.
Presseau, Caroline; Jodoin, Pierre-Marc; Houde, Jean-Christophe; Descoteaux, Maxime
2015-04-01
A single diffusion MRI streamline fiber tracking dataset may contain hundreds of thousands, and often millions of streamlines and can take up to several gigabytes of memory. This amount of data is not only heavy to compute, but also difficult to visualize and hard to store on disk (especially when dealing with a collection of brains). These problems call for a fiber-specific compression format that simplifies its manipulation. As of today, no fiber compression format has yet been adopted and the need for it is now becoming an issue for future connectomics research. In this work, we propose a new compression format, .zfib, for streamline tractography datasets reconstructed from diffusion magnetic resonance imaging (dMRI). Tracts contain a large amount of redundant information and are relatively smooth. Hence, they are highly compressible. The proposed method is a processing pipeline containing a linearization, a quantization and an encoding step. Our pipeline is tested and validated under a wide range of DTI and HARDI tractography configurations (step size, streamline number, deterministic and probabilistic tracking) and compression options. Similar to JPEG, the user has one parameter to select: a worst-case maximum tolerance error in millimeter (mm). Overall, we find a compression factor of more than 96% for a maximum error of 0.1mm without any perceptual change or change of diffusion statistics (mean fractional anisotropy and mean diffusivity) along bundles. This opens new opportunities for connectomics and tractometry applications. Copyright © 2014 Elsevier Inc. All rights reserved.
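The quantization step with a user-chosen worst-case tolerance can be sketched as snapping streamline coordinates to a grid, as below (linearization and the final encoding are omitted, and the names and toy data are assumptions); a grid spacing of 2*eps bounds the per-coordinate error by eps millimetres.

```python
import numpy as np

def quantize_streamline(points: np.ndarray, eps_mm: float = 0.1) -> np.ndarray:
    """Snap (N, 3) streamline coordinates to a grid so each coordinate moves by at most eps_mm."""
    return np.round(points / (2.0 * eps_mm)).astype(np.int32)   # small integers compress well

def dequantize(indices: np.ndarray, eps_mm: float = 0.1) -> np.ndarray:
    return indices * (2.0 * eps_mm)

streamline = np.cumsum(np.random.randn(500, 3) * 0.5, axis=0)   # toy 500-point tract, in mm
restored = dequantize(quantize_streamline(streamline))
print(np.abs(restored - streamline).max())                       # stays below 0.1 mm
```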
NASA Astrophysics Data System (ADS)
Wantuch, Andrew C.; Vita, Joshua A.; Jimenez, Edward S.; Bray, Iliana E.
2016-10-01
Despite object detection, recognition, and identification being very active areas of computer vision research, many of the available tools to aid in these processes are designed with only photographs in mind. Although some algorithms used specifically for feature detection and identification may not take explicit advantage of the colors available in the image, they still under-perform on radiographs, which are grayscale images. We are especially interested in the robustness of these algorithms, specifically their performance on a preexisting database of X-ray radiographs in compressed JPEG form, with multiple ways of describing pixel information. We review various aspects of the performance of available feature detection and identification systems, including MATLAB's Computer Vision Toolbox, VLFeat, and OpenCV, on our non-ideal database. In the process, we explore possible reasons for the algorithms' lessened ability to detect and identify features in the X-ray radiographs.
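For a concrete sense of the kind of experiment described, the sketch below runs one of OpenCV's standard detectors (ORB) on a grayscale radiograph loaded from JPEG; the file name and feature count are illustrative, and the surveyed study also covers MATLAB and VLFeat detectors not shown here.

```python
import cv2

gray = cv2.imread("radiograph_0001.jpg", cv2.IMREAD_GRAYSCALE)  # illustrative file name
orb = cv2.ORB_create(nfeatures=2000)
keypoints, descriptors = orb.detectAndCompute(gray, None)
# Low keypoint counts on flat, low-contrast regions are one symptom of the
# under-performance on radiographs discussed above.
print(len(keypoints))
```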
Human visual system-based color image steganography using the contourlet transform
NASA Astrophysics Data System (ADS)
Abdul, W.; Carré, P.; Gaborit, P.
2010-01-01
We present a steganographic scheme based on the contourlet transform which uses the contrast sensitivity function (CSF) to control the strength of insertion of the hidden information in a perceptually uniform color space. The CIELAB color space is used because it is well suited to steganographic applications: any change in the CIELAB color space has a corresponding effect on the human visual system (HVS), which is important for a steganographic scheme to remain undetectable by the HVS. The perceptual decomposition of the contourlet transform gives it a natural advantage over other decompositions, as it can be molded with respect to the human perception of different frequencies in an image. The imperceptibility of the steganographic scheme with respect to the color perception of the HVS is evaluated using standard methods such as the structural similarity index (SSIM) and CIEDE2000. The robustness of the inserted watermark is tested against JPEG compression.
History of the Universe Poster
You are free to use these images if you give credit to the Particle Data Group at Lawrence Berkeley National Lab. The new version (2014) of the History of the Universe poster is available for download in JPEG and PDF versions; the old version (2013) is available in a JPEG version.
The comparison between SVD-DCT and SVD-DWT digital image watermarking
NASA Astrophysics Data System (ADS)
Wira Handito, Kurniawan; Fauzi, Zulfikar; Aminy Ma’ruf, Firda; Widyaningrum, Tanti; Muslim Lhaksmana, Kemas
2018-03-01
With the internet, anyone can publish their creations as digital data simply, inexpensively, and in a form easily accessed by everyone. However, a problem arises when someone else claims that the creation is their property or modifies some part of it. This makes copyright protection necessary; one approach is watermarking of digital images. Applying a watermarking technique to digital data, especially images, can be completely invisible when the watermark is inserted in a carrier image: the carrier image does not suffer any noticeable loss of quality, and the inserted image should not be affected by attacks. In this paper, watermarking is implemented on digital images using Singular Value Decomposition combined with the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT), with the expectation of good watermarking performance. A trade-off occurs between the invisibility and the robustness of the image watermarking. In the embedding process, the watermarked image has good quality for scaling factors < 0.1. The quality of the watermarked image at decomposition level 3 is better than at levels 2 and 1. Embedding the watermark in low-frequency sub-bands is robust to Gaussian blur, rescaling, and JPEG compression, while embedding in high-frequency sub-bands is robust to Gaussian noise.
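A minimal sketch of the SVD stage common to both hybrid schemes: add a scaled watermark to the singular values of a (transform-domain) sub-band and rebuild it. The DWT/DCT stages and extraction are omitted, the 0.05 scaling factor simply respects the "< 0.1" range reported above, and all names and the random stand-in data are assumptions.

```python
import numpy as np

def embed_svd(subband: np.ndarray, watermark: np.ndarray, alpha: float = 0.05) -> np.ndarray:
    """Add alpha * watermark to the singular values of the host sub-band and reconstruct it."""
    u, s, vt = np.linalg.svd(subband, full_matrices=False)
    _, s_marked, _ = np.linalg.svd(np.diag(s) + alpha * watermark, full_matrices=False)
    return u @ np.diag(s_marked) @ vt     # keep the host's singular vectors, swap in modified values

host = np.random.rand(64, 64) * 255       # stand-in for an approximation sub-band of the carrier
mark = np.random.rand(64, 64) * 255       # stand-in for the watermark image
marked = embed_svd(host, mark, alpha=0.05)
```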
Steganographic embedding in containers-images
NASA Astrophysics Data System (ADS)
Nikishova, A. V.; Omelchenko, T. A.; Makedonskij, S. A.
2018-05-01
Steganography is one of the approaches to protecting information transmitted over a network, but the steganographic method should vary depending on the container used. According to statistics, the most widely used containers are images, and the most common image format is JPEG. The authors propose a method for embedding data into the frequency domain of images in the JPEG 2000 format. It is proposed to use the method of Benham-Memon-Yeo-Yeung, with the discrete wavelet transform used in place of the discrete cosine transform. Two requirements for images are formulated. Structural similarity (SSIM) is chosen to assess the quality of the data embedding. Experiments confirm that satisfying the requirements allows a high quality of data embedding to be achieved.
Alaskan Auroral All-Sky Images on the World Wide Web
NASA Technical Reports Server (NTRS)
Stenbaek-Nielsen, H. C.
1997-01-01
In response to a 1995 NASA SPDS announcement of support for preservation and distribution of important data sets online, the Geophysical Institute, University of Alaska Fairbanks, Alaska, proposed to provide World Wide Web access to the Poker Flat Auroral All-sky Camera images in real time. The Poker auroral all-sky camera is located in the Davis Science Operation Center at Poker Flat Rocket Range about 30 miles north-east of Fairbanks, Alaska, and is connected, through a microwave link, with the Geophysical Institute, where we maintain the database linked to the Web. To protect the low light-level all-sky TV camera from damage due to excessive light, we operate only during the winter season when the moon is down. The camera and data acquisition are now fully computer controlled. Digital images are transmitted each minute to the Web-linked database, where the data are available in a number of different presentations: (1) individual JPEG-compressed images (1-minute resolution); (2) a time-lapse MPEG movie of the stored images; and (3) a meridional plot of the entire night's activity.
A complete passive blind image copy-move forensics scheme based on compound statistics features.
Peng, Fei; Nie, Yun-ying; Long, Min
2011-10-10
Since most sensor pattern noise based image copy-move forensics methods require a known reference sensor pattern noise, they generally result in non-blind passive forensics, which significantly confines the application circumstances. In view of this, a novel passive-blind image copy-move forensics scheme is proposed in this paper. Firstly, a color image is transformed into a grayscale one, and a wavelet-transform-based de-noising filter is used to extract the sensor pattern noise. The variance of the pattern noise, the signal-to-noise ratio between the de-noised image and the pattern noise, the information entropy, and the average energy gradient of the original grayscale image are chosen as features, and non-overlapping sliding window operations are applied to the images to divide them into sub-blocks. Finally, the tampered areas are detected by analyzing the correlation of the features between the sub-blocks and the whole image. Experimental results and analysis show that the proposed scheme is completely passive-blind, has a good detection rate, and is robust against JPEG compression, noise, rotation, scaling and blurring. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Dynamic power scheduling system for JPEG2000 delivery over wireless networks
NASA Astrophysics Data System (ADS)
Martina, Maurizio; Vacca, Fabrizio
2003-06-01
The diffusion of third-generation mobile terminals is encouraging the development of new multimedia-based applications. The reliable transmission of audiovisual content will gain major interest, being one of the most valuable services. Nevertheless, the mobile scenario is severely power constrained: high compression ratios and refined energy management strategies are highly advisable. JPEG2000 as the source encoding stage assures excellent performance with extremely good visual quality. However, the limited power budget makes it necessary to limit the computational effort in order to save as much power as possible. Since the wireless environment is error-prone, strong error-resilience features also need to be employed. This paper investigates the trade-off between quality and power in such a challenging environment.
Networking of three dimensional sonography volume data.
Kratochwil, A; Lee, A; Schoisswohl, A
2000-09-01
Three-dimensional (3D) sonography enables the examiner to store, instead of copies of single B-scan planes, a volume consisting of 300 scan planes. The volume is displayed on a monitor in the form of three orthogonal planes: longitudinal, axial and coronal. Translation and rotation facilitate anatomical orientation and provide any arbitrary plane within the volume to generate organ-optimized scan planes. Different algorithms allow the extraction of different information, such as surfaces or bone structures by the maximum mode, or fluid-filled structures such as vessels by the minimum mode. The volume may also contain color information of vessels. The digitized information is stored on a magneto-optical disc. This allows virtual scanning in the absence of the patient under the same conditions as when the volume was originally stored. The volume size depends on different, examiner-controlled settings; a volume may need a storage capacity between 2 and 16 MB of 8-bit gray-level information. As such huge data sets are unsuitable for network transfer, data compression is of paramount interest. 100 stored volumes were submitted to JPEG, MPEG, and biorthogonal wavelet compression. The original and compressed volumes were shown randomly on two monitors. In cases of noticeable image degradation, information on the location of the original and compressed volume and the compression ratio was recorded. Numerical measures of compression fidelity, such as pixel error calculation and root-mean-square error, proved unsuitable for evaluating image degradation. The best results in recognizing image degradation were achieved by image experts, who disagreed on the ratio at which image degradation became visible in only 4% of the volumes. Wavelet compression ratios of 20:1 or 30:1 could be used without discernible information reduction. The effect of volume compression is reflected both in the reduction of transfer time and in storage capacity. The transmission time for a volume of 6 MB over a normal telephone line with a data rate of 56 kbit/s was reduced from 14 min to 28 s at a compression rate of 30:1, and storage requirements were reduced from 6 MB uncompressed to 200 kB. This successful compression opens new possibilities for intra-hospital, extra-hospital and global exchange of 3D sonography information. The key to this communication is not only volume compression, but also the fact that the 3D examination can be simulated on any PC with the developed 3D software. This parallels PACS teleradiology, in which digitized radiographs are transmitted over standard telephone lines; systems combined with the management systems of HIS and RIS are available for archiving, retrieval of images and reports, and for local and global communication. This form of tele-medicine will have an impact on cost reduction in hospitals and reduction of transport costs. On this foundation, worldwide education and multi-center studies become possible.
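A quick check of the transfer-time figures quoted above for a 6 MB volume over a 56 kbit/s telephone line (the exact byte counts are assumptions):

```python
volume_bits = 6 * 1024 * 1024 * 8
rate_bps = 56_000
uncompressed_s = volume_bits / rate_bps
compressed_s = uncompressed_s / 30                    # 30:1 wavelet compression
print(f"{uncompressed_s / 60:.1f} min uncompressed")  # ~15 min, close to the 14 min quoted
print(f"{compressed_s:.0f} s at 30:1")                # ~30 s, close to the 28 s quoted
```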
Wavelet-based scalable L-infinity-oriented compression.
Alecu, Alin; Munteanu, Adrian; Cornelis, Jan P H; Schelkens, Peter
2006-09-01
Among the different classes of coding techniques proposed in literature, predictive schemes have proven their outstanding performance in near-lossless compression. However, these schemes are incapable of providing embedded L(infinity)-oriented compression, or, at most, provide a very limited number of potential L(infinity) bit-stream truncation points. We propose a new multidimensional wavelet-based L(infinity)-constrained scalable coding framework that generates a fully embedded L(infinity)-oriented bit stream and that retains the coding performance and all the scalability options of state-of-the-art L2-oriented wavelet codecs. Moreover, our codec instantiation of the proposed framework clearly outperforms JPEG2000 in L(infinity) coding sense.
High-speed low-complexity video coding with EDiCTius: a DCT coding proposal for JPEG XS
NASA Astrophysics Data System (ADS)
Richter, Thomas; Fößel, Siegfried; Keinert, Joachim; Scherl, Christian
2017-09-01
In its 71st meeting, the JPEG committee issued a call for low-complexity, high-speed image coding, designed to address the needs of low-cost video-over-IP applications. As an answer to this call, Fraunhofer IIS and the Computing Center of the University of Stuttgart jointly developed an embedded DCT image codec requiring only minimal resources while maximizing throughput on FPGA and GPU implementations. Objective and subjective tests performed for the 73rd meeting confirmed its excellent performance and suitability for its purpose, and it was selected as one of the two key contributions for the development of a joint test model. In this paper, its authors describe the design principles of the codec, give a high-level overview of the encoder and decoder chain, and provide evaluation results on the test corpus selected by the JPEG committee.
Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding
Xiao, Rui; Gao, Junbin; Bossomaier, Terry
2016-01-01
A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data of a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing "original pixel intensity"-based coding approaches using traditional image coders (e.g., JPEG2000) to "residual"-based approaches using a video coder for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling, because the spectral and spatial characteristics of HS images differ from those of traditional videos. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) in the latest video coding standard, High Efficiency Video Coding (HEVC), is proposed for HS images. An HS image presents a wealth of data in which every pixel is considered a vector across the spectral bands. By quantitative comparison and analysis of the pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors across bands. To exploit the distribution of the known pixel vectors, we estimate a prediction of the current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as an additional reference band together with the immediately previous band when we apply HEVC. Every spectral band of an HS image is treated as an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are reported on three types of HS dataset with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of rate-distortion performance of HS image compression. PMID:27695102
Lossless data compression for improving the performance of a GPU-based beamformer.
Lok, U-Wai; Fan, Gang-Wei; Li, Pai-Chi
2015-04-01
The powerful parallel computation ability of a graphics processing unit (GPU) makes it feasible to perform dynamic receive beamforming. However, a real-time GPU-based beamformer requires a high data rate to transfer radio-frequency (RF) data from hardware to software memory, as well as from central processing unit (CPU) to GPU memory. There are data compression methods (e.g., Joint Photographic Experts Group (JPEG)) available for the hardware front end to reduce data size, alleviating the data transfer requirement of the hardware interface. Nevertheless, the required decoding time may even be larger than the transmission time of the original data, in turn degrading the overall performance of the GPU-based beamformer. This article proposes and implements a lossless compression-decompression algorithm, which enables parallel compression and decompression of data. By this means, the data transfer requirement of the hardware interface and the transmission time of CPU-to-GPU data transfers are reduced, without sacrificing image quality. In simulation results, the compression ratio reached around 1.7. The encoder design of our lossless compression approach requires low hardware resources and reasonable latency in a field-programmable gate array. In addition, the transmission time of transferring data from CPU to GPU with the parallel decoding process improved threefold, as compared with transferring the original uncompressed data. These results show that our proposed lossless compression plus parallel decoder approach not only mitigates the transmission bandwidth requirement to transfer data from the hardware front end to the software system but also reduces the transmission time for CPU-to-GPU data transfer. © The Author(s) 2014.
Prior-Based Quantization Bin Matching for Cloud Storage of JPEG Images.
Liu, Xianming; Cheung, Gene; Lin, Chia-Wen; Zhao, Debin; Gao, Wen
2018-07-01
Millions of user-generated images are uploaded to social media sites like Facebook daily, which translates into a large storage cost. However, there exists an asymmetry in upload and download data: only a fraction of the uploaded images are subsequently retrieved for viewing. In this paper, we propose a cloud storage system that reduces the storage cost of all uploaded JPEG photos, at the expense of a controlled increase in computation, mainly during download of the requested image subset. Specifically, the system first selectively re-encodes code blocks of uploaded JPEG images using coarser quantization parameters for smaller storage sizes. Then, during download, the system exploits known signal priors (a sparsity prior and a graph-signal smoothness prior) for reverse mapping to recover the original fine quantization bin indices, with either a deterministic guarantee (lossless mode) or a statistical guarantee (near-lossless mode). For fast reverse mapping, we use small dictionaries and sparse graphs that are tailored for specific clusters of similar blocks, which are classified via a tree-structured vector quantizer. During image upload, cluster indices identifying the appropriate dictionaries and graphs for the re-quantized blocks are encoded as side information using a differential distributed source coding scheme to facilitate reverse mapping during image download. Experimental results show that our system can reap significant storage savings (up to 12.05%) at roughly the same image PSNR (within 0.18 dB).
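The storage saving comes from mapping each coefficient's fine quantization bin index to a coarser bin; recovery is then a one-to-many inverse problem that the priors resolve. The sketch below only illustrates the forward re-quantization and the set of candidate fine bins a reverse mapping must choose from; the step sizes are illustrative placeholders, and the prior-based selection itself is not reproduced.

```python
import numpy as np

def requantize(fine_idx: np.ndarray, q_fine: float, q_coarse: float) -> np.ndarray:
    """Map fine quantization bin indices to coarser bins (smaller storage)."""
    values = fine_idx * q_fine                       # dequantize
    return np.round(values / q_coarse).astype(int)   # re-quantize coarsely

def candidate_fine_bins(coarse_idx: int, q_fine: float, q_coarse: float) -> list:
    """All fine bin indices consistent with one coarse bin; the reverse
    mapping must pick one of these, guided by the signal priors."""
    centre = coarse_idx * q_coarse
    half = q_coarse / 2.0
    lo = int(np.floor((centre - half) / q_fine))
    hi = int(np.ceil((centre + half) / q_fine))
    return [k for k in range(lo, hi + 1) if abs(k * q_fine - centre) <= half]

fine = np.array([7, -3, 12, 0])
coarse = requantize(fine, q_fine=4.0, q_coarse=10.0)
print(coarse, candidate_fine_bins(int(coarse[0]), 4.0, 10.0))
```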
Digital video technologies and their network requirements
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. P. Tsang; H. Y. Chen; J. M. Brandt
1999-11-01
Coded digital video signals are considered to be one of the most difficult data types to transport due to their real-time requirements and high bit rate variability. In this study, the authors discuss the coding mechanisms incorporated by the major compression standards bodies, i.e., JPEG and MPEG, as well as more advanced coding mechanisms such as wavelet and fractal techniques. The relationship between the applications which use these coding schemes and their network requirements is the major focus of this study. Specifically, the authors relate network latency, channel transmission reliability, random access speed, buffering and network bandwidth with the various coding techniques as a function of the applications which use them. Such applications include High-Definition Television, Video Conferencing, Computer-Supported Collaborative Work (CSCW), and Medical Imaging.
Report about the Solar Eclipse on August 11, 1999
NASA Astrophysics Data System (ADS)
1999-08-01
This webpage provides information about the total eclipse on Wednesday, August 11, 1999, as it was seen by ESO staff, mostly at or near the ESO Headquarters in Garching (Bavaria, Germany). The zone of totality was about 108 km wide, and the ESO HQ was located only 8 km south of the line of maximum totality. The duration of the phase of totality was about 2 min 17 sec. The weather was quite troublesome in this geographical area. Heavy clouds moved across the sky during the entire event, but there were also some holes in between. Consequently, sites that were only a few kilometres from each other had very different viewing conditions. Some photos and spectra of the eclipsed Sun are displayed below, with short texts about the circumstances under which they were made. Please note that reproduction of pictures on this webpage is only permitted if the author is mentioned as source. Information made available before the eclipse is available here. Eclipse Impressions at the ESO HQ: "Preparing for the Eclipse" and "During the 1st Partial Phase" (photos by Eddy Pomaroli); "Heavy Clouds Above" (digital photo by Hamid Mehrgan); "Totality Approaching", "Beginning of Totality" and "A Happy Eclipse Watcher" (digital photos by Olaf Iwert); ESO HQ Eclipse Video Clip (2425 frames/01:37 min). The video clip was prepared from a "reportage" of the event at the ESO HQ that was transmitted in real time to ESO-Chile via ESO's satellite link. It begins with some sequences of the first partial phase and the eclipse watchers. Clouds move over and the landscape darkens as the phase of totality approaches. The Sun is again visible at the very moment this phase ends. Some further sequences from the second partial phase follow. Produced by Herbert Zodet. Dire Forecasts: The weather predictions in the days before the eclipse were not good for Munich and surroundings. A heavy front with rain and thick clouds that completely covered the sky moved across Bavaria the day before, and the meteorologists predicted a 20% chance of seeing anything at all. On August 10, it seemed that the chances were best in France and in the western parts of Germany, and much worse close to the Alps. This changed to the opposite during the night before the eclipse. Now the main concern in Munich was a weather front approaching from the west - would it reach this area before the eclipse? The better chances were then further east, nearer the Austrian border. Many people travelled back and forth along the German highways, many of which quickly became heavily congested. Preparations: About 500 persons, mostly ESO staff with their families and friends, were present at the ESO HQ on the morning of August 11.
Prior to the eclipse, they received information in the auditorium about the various aspects of solar eclipses and about the specific conditions of this one. Protective glasses were handed out, and the idea was that they would then follow the eclipse from outside. In view of the pessimistic weather forecasts, TV sets had been set up in two large rooms, but in the end most chose to watch the eclipse from the terrace in front of the cafeteria and from the area south of the building. Several telescopes were set up among the trees and on the adjoining field (just harvested). Clouds and Holes: It was an unusual solar eclipse experience. Heavy clouds were passing by with sudden rain showers, but fortunately there were also some holes with blue sky in between. While much of the first partial phase was visible through these, some really heavy clouds moved in a few minutes before the total phase, when the light had begun to fade. They drifted slowly - too slowly! - towards the east, and the corona was never seen from the ESO HQ site. From here, the view towards the eclipsed Sun only cleared at the very instant of the second "diamond ring" phenomenon. This was beautiful, however, and evidently took most of the photographers by surprise, so very few, if any, photos were made of this memorable moment. Temperature Curve on August 11, measured by Benoit Pirenne (see also his meteorological webpage). Nevertheless, the entire experience was fantastic - there were all the expected effects, the darkness, the cool air, the wind and the silence. It was very impressive indeed! And it was certainly a unique day in ESO history! Carolyn Collins Petersen from "Sky & Telescope" participated in the conference at ESO in the days before and watched the eclipse from the "Bürgerplatz" in Garching, about 1.5 km south of the ESO HQ. She managed to see part of the totality phase and filed some dramatic reports at the S&T Eclipse Expedition website. They describe very well the feelings of those in this area! Eclipse Photos: Several members of the ESO staff went elsewhere and had more luck with the weather, especially at the moment of totality. Below are some of their impressive pictures: eclipse photos by Philippe Duhoux of the first "diamond ring", the totality and the second "diamond ring". The Corona (Philippe Duhoux): "For the observation of the eclipse, I chose a field on a hill offering a wide view towards the western horizon and located about 10 kilometers north-west of Garching." "While the partial phase was mostly cloudy, the sky went clear 3 minutes before the totality and remained so for about 15 minutes. Enough to enjoy the event!" "The images were taken on Agfa CT100 colour slide film with an Olympus OM-20 at the focus of a Maksutov telescope (f = 1000 mm, f/D = 10). The exposure times were automatically set by the camera. During the partial phase, I used an off-axis mask of 40 mm diameter with a mylar filter ND = 3.6, which I removed for the diamond rings and the corona." Note in particular the strong, detached protuberances to the right of the rim, particularly noticeable in the last photo.
Eclipse photo by Cyril Cavadore: Totality. The Corona (Cyril Cavadore): "We (C. Cavadore from ESO and L. Bernasconi and B. Gaillard from Obs. de la Cote d'Azur) took this photo in France at Vouzier (Champagne-Ardennes), between Reims and Nancy. A large blue opening developed in the sky at 10 o'clock and we decided to set up the telescope and the camera at that time. During the partial phase, a lot of clouds passed over, making it hard to focus properly. Nevertheless, 5 min before totality, a deep blue sky opened above us, allowing us to watch it and to take this picture. 5-10 minutes after the totality, the sky was almost overcast up to the 4th contact." "The image was taken with a 2x2K (14 µm pixels) Thomson "homemade" CCD camera mounted on a CN212 Takahashi (200 mm diameter telescope) with a 1/10,000 neutral filter. The acquisition software set the exposure time (2 sec) and took images in a completely automated way, allowing us to observe the eclipse by naked eye or with binoculars. To get as many images as possible during totality, we used 2x2 binning to reduce the readout time to 19 sec. Afterwards, one of the best images was flat-fielded and processed with a special algorithm that modelled a fit to the continuous component of the corona, which was then subtracted from the original image. The remaining details were enhanced by unsharp masking and added to the original image. Finally, Gaussian histogram equalization was applied." Eclipse photo by Eddy Pomaroli: Second "Diamond Ring". Diamond Ring at ESO HQ (Eddy Pomaroli): "Despite the clouds, we saw the second "diamond ring" from the ESO HQ. In a sense, we were quite lucky, since the clouds were very heavy during the total phase and we might easily have missed it all!" "I used an old Minolta SRT-101 camera and a telephoto lens (450 mm; f/8). The exposure was 1/125 sec on Kodak Elite 100 (pushed to 200 ASA). I had the feeling that the Sun would become visible and had the camera pointed, by good luck in the correct direction, as soon as the cloud moved away." Eclipse photo by Roland Reiss: First Partial Phase. End of First Partial Phase (Roland Reiss): "I observed the eclipse from my home in Garching. The clouds kept moving and this was the last photo I was able to obtain during the first partial phase, before they blocked everything." "The photo is interesting because it shows two more images of the eclipsed Sun, below the overexposed central part. In one of them, the remaining, narrow crescent is particularly well visible. They are caused by reflections in the camera. I used a Minolta camera and a Fuji colour slide film." Eclipse Spectra: Some ESO people went a step further and obtained spectra of the Sun at the time of the eclipse. Coronal Spectrum (CAOS Group): The Club of Amateurs in Optical Spectroscopy (with Carlos Guirao Sanchez, Gerardo Avila and Jesus Rodriguez) obtained a spectrum of the solar corona from a site in Garching, about 2 km south of the ESO HQ. "This is a plot of the spectrum and the corresponding CCD image that we took during the total eclipse. The main coronal lines are well visible and have been identified in the figure.
Note in particular one at 6374 Angstrom that was first ascribed to the mysterious substance "Coronium". We now know that it is emitted by iron atoms that have lost nine electrons (Fe X)". The equipment was: * Telescope: Schmidt-Cassegrain F/6.3; diameter: 250 mm * FIASCO Spectrograph: fibre: 135 micron core diameter; F = 100 mm collimator, f = 80 mm camera; grating: 1300 gr/mm blazed at 500 nm; SBIG ST8E CCD camera; exposure time was 20 sec. Chromospheric and Coronal Spectra (Bob Fosbury): "The 11 August 1999 total solar eclipse was seen from a small farm complex called Wolfersberg in open fields some 20 km ESE of the centre of Munich. It was chosen to be within the 2-min band of totality but likely to be relatively unpopulated." "There were intermittent views of the Sun between first and second contact, with quite a heavy rain shower which stopped 9 min before totality. A large clear patch of sky revealed a perfect view of the Sun just 2 min before second contact, and it remained clear for at least half an hour after third contact." "The principal project was to photograph the spectrum of the chromosphere during totality using a transmission grating in front of a moderate telephoto lens. The desire to do this was stimulated by a view of the 1976 eclipse in Australia, when I held the same grating up to the eclipsed Sun and was thrilled by the view of the emission line spectrum. The trick now was to get the exposure right!" "A sequence of 13 H-alpha images was combined into a looping movie. The exposure times were different, but some attempt has been made to equalise the intensities. The last two frames show the low chromosphere and then the photosphere emerging at 3rd contact. The [FeX] coronal line can be seen on the left in the middle of the sequence. I used a Hasselblad camera and Agfa slide film (RSX II 100)."
A software platform for the analysis of dermatology images
NASA Astrophysics Data System (ADS)
Vlassi, Maria; Mavraganis, Vlasios; Asvestas, Panteleimon
2017-11-01
The purpose of this paper is to present a software platform, developed in the Python programming environment, that can be used for the processing and analysis of dermatology images. The platform provides the capability to read a file that contains a dermatology image and supports image formats such as Windows bitmaps, JPEG, JPEG2000, portable network graphics, and TIFF. Furthermore, it provides suitable tools for selecting, either manually or automatically, a region of interest (ROI) on the image. The automated selection of a ROI includes filtering for smoothing the image and thresholding. The proposed software platform has a friendly and clear graphical user interface and could be a useful second-opinion tool for a dermatologist. Furthermore, it could be used to classify images from other anatomical parts, such as the breast or lung, after proper re-training of the classification algorithms.
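A minimal sketch of the automated ROI step described above (smoothing followed by thresholding), written with scikit-image; the file name and the specific choice of a Gaussian filter and Otsu threshold are illustrative assumptions, not the platform's exact pipeline.

```python
import numpy as np
from skimage import io, color, filters, measure

def select_roi(path: str):
    """Read an image (JPEG/PNG/TIFF/...), smooth it and threshold it to
    obtain a binary region-of-interest mask plus its bounding box."""
    img = io.imread(path)
    gray = color.rgb2gray(img) if img.ndim == 3 else img.astype(float)
    smoothed = filters.gaussian(gray, sigma=2.0)      # noise suppression
    thresh = filters.threshold_otsu(smoothed)         # automatic threshold
    mask = smoothed < thresh                           # lesions are usually darker
    labels = measure.label(mask)
    largest = max(measure.regionprops(labels), key=lambda r: r.area)
    return mask, largest.bbox                          # (min_row, min_col, max_row, max_col)

if __name__ == "__main__":
    roi_mask, bbox = select_roi("lesion.jpg")          # hypothetical input file
    print("ROI bounding box:", bbox)
```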
Exploring the feasibility of traditional image querying tasks for industrial radiographs
NASA Astrophysics Data System (ADS)
Bray, Iliana E.; Tsai, Stephany J.; Jimenez, Edward S.
2015-08-01
Although there have been great strides in object recognition with optical images (photographs), there has been comparatively little research into object recognition for X-ray radiographs. Our exploratory work contributes to this area by creating an object recognition system designed to recognize components from a related database of radiographs. Object recognition for radiographs must be approached differently than for optical images, because radiographs have much less color-based information to distinguish objects, and they exhibit transmission overlap that alters perceived object shapes. The dataset used in this work contained more than 55,000 intermixed radiographs and photographs, all in a compressed JPEG form and with multiple ways of describing pixel information. For this work, a robust and efficient system is needed to combat problems presented by properties of the X-ray imaging modality, the large size of the given database, and the quality of the images contained in said database. We have explored various pre-processing techniques to clean the cluttered and low-quality images in the database, and we have developed our object recognition system by combining multiple object detection and feature extraction methods. We present the preliminary results of the still-evolving hybrid object recognition system.
Image enhancement using the hypothesis selection filter: theory and application to JPEG decoding.
Wong, Tak-Shing; Bouman, Charles A; Pollak, Ilya
2013-03-01
We introduce the hypothesis selection filter (HSF) as a new approach for image quality enhancement. We assume that a set of filters has been selected a priori to improve the quality of a distorted image containing regions with different characteristics. At each pixel, HSF uses a locally computed feature vector to predict the relative performance of the filters in estimating the corresponding pixel intensity in the original undistorted image. The prediction result then determines the proportion of each filter used to obtain the final processed output. In this way, the HSF serves as a framework for combining the outputs of a number of different user selected filters, each best suited for a different region of an image. We formulate our scheme in a probabilistic framework where the HSF output is obtained as the Bayesian minimum mean square error estimate of the original image. Maximum likelihood estimates of the model parameters are determined from an offline fully unsupervised training procedure that is derived from the expectation-maximization algorithm. To illustrate how to apply the HSF and to demonstrate its potential, we apply our scheme as a post-processing step to improve the decoding quality of JPEG-encoded document images. The scheme consistently improves the quality of the decoded image over a variety of image content with different characteristics. We show that our scheme results in quantitative improvements over several other state-of-the-art JPEG decoding methods.
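The core mechanism is a per-pixel soft combination of several pre-selected filters, with the mixing weights predicted from a locally computed feature. The sketch below shows only that combination step; the weights come from a hypothetical local-variance heuristic standing in for the paper's trained EM/Bayesian model, and the two candidate filters are merely examples.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter, uniform_filter

def predict_weights(img: np.ndarray) -> np.ndarray:
    """Stand-in for the trained predictor: favour the Gaussian filter in flat
    areas and the median filter near edges/texture, using local variance.
    Returns per-pixel weights for [gaussian, median] that sum to 1."""
    local_mean = uniform_filter(img, size=7)
    local_var = uniform_filter(img ** 2, size=7) - local_mean ** 2
    w_median = local_var / (local_var + local_var.mean() + 1e-12)
    return np.stack([1.0 - w_median, w_median])

def hsf_combine(img: np.ndarray) -> np.ndarray:
    """Per-pixel convex combination of the candidate filter outputs."""
    outputs = np.stack([gaussian_filter(img, sigma=1.0),
                        median_filter(img, size=3)])
    return (predict_weights(img) * outputs).sum(axis=0)

noisy = np.random.default_rng(1).normal(0.5, 0.1, (128, 128))
restored = hsf_combine(noisy)
```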
Hyperspectral Imagery Throughput and Fusion Evaluation over Compression and Interpolation
2008-07-01
$\mathrm{PSNR} = 10\log_{10}\!\left(\frac{\mathrm{MAX}^2}{\mathrm{MSE}}\right)$ (17). The PSNR values and compression ratios are shown in Table 1, and a plot of PSNR against the bits per pixel (bpp) is shown ...

PSNR (dB)   Compression Ratio   bpp
59.3        2.9:1               2.76
46.0        9.2:1               0.87
43.2        14.5:1              0.55
40.8        25.0:1              0.32
38.7        34.6:1              0.23
35.5        62.1:1              0.13

Figure 11. PSNR vs. bits per ... A plot of PSNR against the bits per pixel (bpp) is shown in Figure 13. The 3D DCT compression yielded better results than the baseline JPEG
DCT-based cyber defense techniques
NASA Astrophysics Data System (ADS)
Amsalem, Yaron; Puzanov, Anton; Bedinerman, Anton; Kutcher, Maxim; Hadar, Ofer
2015-09-01
With the increasing popularity of video streaming services and multimedia sharing via social networks, there is a need to protect the multimedia from malicious use. An attacker may use steganography and watermarking techniques to embed malicious content in order to attack the end user. Most of the attack algorithms are robust to basic image processing techniques such as filtering, compression, and noise addition. Hence, in this article two novel real-time defense techniques are proposed: smart threshold and anomaly correction. Both techniques operate in the DCT domain and are applicable to JPEG images and H.264 I-frames. The defense performance was evaluated against a highly robust attack, and the perceptual quality degradation was measured by the well-known PSNR and SSIM quality assessment metrics. A set of defense techniques is suggested for improving the defense efficiency. For the most aggressive attack configuration, the combination of all the defense techniques results in 80% protection against cyber-attacks with a PSNR of 25.74 dB.
Google Books: making the public domain universally accessible
NASA Astrophysics Data System (ADS)
Langley, Adam; Bloomberg, Dan S.
2007-01-01
Google Book Search is working with libraries and publishers around the world to digitally scan books. Some of those works are now in the public domain and, in keeping with Google's mission to make all the world's information useful and universally accessible, we wish to allow users to download them all. For users, it is important that the files are as small as possible and of printable quality. This means that a single codec for both text and images is impractical. We use PDF as a container for a mixture of JBIG2 and JPEG2000 images which are composed into a final set of pages. We discuss both the implementation of an open source JBIG2 encoder, which we use to compress text data, and the design of the infrastructure needed to meet the technical, legal and user requirements of serving many scanned works. We also cover the lessons learnt about dealing with different PDF readers and how to write files that work on most of the readers, most of the time.
Realisation and robustness evaluation of a blind spatial domain watermarking technique
NASA Astrophysics Data System (ADS)
Parah, Shabir A.; Sheikh, Javaid A.; Assad, Umer I.; Bhat, Ghulam M.
2017-04-01
A blind digital image watermarking scheme based on the spatial domain is presented and investigated in this paper. The watermark has been embedded in intermediate significant bit planes, besides the least significant bit plane, at address locations determined by a pseudorandom address vector (PAV). The watermark embedding using the PAV makes it difficult for an adversary to locate the watermark and hence adds to the security of the system. The scheme has been evaluated to ascertain the spatial locations that are robust to various image processing and geometric attacks: JPEG compression, additive white Gaussian noise, salt-and-pepper noise, filtering and rotation. The experimental results reveal an interesting fact: for all the above-mentioned attacks other than rotation, the higher the bit plane in which the watermark is embedded, the more robust the system. Further, the perceptual quality of the watermarked images obtained with the proposed system has been compared with some state-of-the-art watermarking techniques. The proposed technique outperforms the techniques under comparison, even when compared with the worst-case peak signal-to-noise ratio obtained in our scheme.
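A minimal sketch of embedding one watermark bit per pseudorandomly addressed pixel into a chosen bit plane of an 8-bit image. The seed, plane index and address generation are illustrative assumptions; the paper's exact PAV construction and key handling are not reproduced here.

```python
import numpy as np

def embed_bitplane(img: np.ndarray, bits: np.ndarray, plane: int, key: int) -> np.ndarray:
    """Embed watermark bits in bit plane `plane` (0 = LSB) of an 8-bit image
    at pixel locations drawn from a key-seeded pseudorandom address vector."""
    out = img.copy().ravel()
    rng = np.random.default_rng(key)                       # shared secret key
    addr = rng.choice(out.size, size=bits.size, replace=False)
    out[addr] = (out[addr] & ~np.uint8(1 << plane)) | (bits.astype(np.uint8) << plane)
    return out.reshape(img.shape)

def extract_bitplane(img: np.ndarray, n_bits: int, plane: int, key: int) -> np.ndarray:
    """Recover the embedded bits by regenerating the same address vector."""
    rng = np.random.default_rng(key)
    addr = rng.choice(img.size, size=n_bits, replace=False)
    return (img.ravel()[addr] >> plane) & 1

cover = np.random.default_rng(2).integers(0, 256, (64, 64), dtype=np.uint8)
wm = np.random.default_rng(3).integers(0, 2, 100, dtype=np.uint8)
stego = embed_bitplane(cover, wm, plane=3, key=1234)       # an intermediate significant bit
assert np.array_equal(extract_bitplane(stego, 100, 3, 1234), wm)
```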
Dimensionality of visual complexity in computer graphics scenes
NASA Astrophysics Data System (ADS)
Ramanarayanan, Ganesh; Bala, Kavita; Ferwerda, James A.; Walter, Bruce
2008-02-01
How do human observers perceive visual complexity in images? This problem is especially relevant for computer graphics, where a better understanding of visual complexity can aid in the development of more advanced rendering algorithms. In this paper, we describe a study of the dimensionality of visual complexity in computer graphics scenes. We conducted an experiment in which subjects judged the relative complexity of 21 high-resolution scenes, rendered with photorealistic methods. Scenes were gathered from web archives and varied in theme, number and layout of objects, material properties, and lighting. We analyzed the pooled subject responses using multidimensional scaling. This analysis embedded the stimulus images in a two-dimensional space, with axes that roughly corresponded to "numerosity" and "material / lighting complexity". In a follow-up analysis, we derived a one-dimensional complexity ordering of the stimulus images. We compared this ordering with several computable complexity metrics, such as scene polygon count and JPEG compression size, and did not find them to be strongly correlated with it. Understanding the differences between these measures can lead to the design of more efficient rendering algorithms in computer graphics.
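One of the computable metrics mentioned above, JPEG compression size, takes only a few lines to obtain; the sketch below uses Pillow with a fixed quality setting, which is an assumption rather than the study's exact configuration.

```python
import io
from PIL import Image

def jpeg_complexity(path: str, quality: int = 75) -> int:
    """Proxy for visual complexity: the size in bytes of the image after JPEG
    compression at a fixed quality. More detailed scenes compress less well."""
    img = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return buf.getbuffer().nbytes

# hypothetical stimuli:
# sizes = {name: jpeg_complexity(name) for name in ["scene01.png", "scene02.png"]}
```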
Applying image quality in cell phone cameras: lens distortion
NASA Astrophysics Data System (ADS)
Baxter, Donald; Goma, Sergio R.; Aleksic, Milivoje
2009-01-01
This paper describes the framework used in one of the pilot studies run under the I3A CPIQ initiative to quantify overall image quality in cell-phone cameras. The framework is based on a multivariate formalism which tries to predict overall image quality from individual image quality attributes and was validated in a CPIQ pilot program. The pilot study focuses on image quality distortions introduced in the optical path of a cell-phone camera, which may or may not be corrected in the image processing path. The assumption is that the captured image is JPEG compressed and the cell-phone camera is set to 'auto' mode. As the framework requires the individual attributes to be relatively perceptually orthogonal, the attributes used in the pilot study are lens geometric distortion (LGD) and lateral chromatic aberrations (LCA). The goal of this paper is to present the framework of this pilot project, starting with the definition of the individual attributes up to their quantification in JNDs of quality, a requirement of the multivariate formalism; therefore, both objective and subjective evaluations were used. A major distinction of the objective part from the 'DSC imaging world' is that the LCA/LGD distortions found in cell-phone cameras rarely exhibit radial behavior; therefore, a radial mapping/modeling cannot be used in this case.
Video segmentation for post-production
NASA Astrophysics Data System (ADS)
Wills, Ciaran
2001-12-01
Specialist post-production is an industry that has much to gain from the application of content-based video analysis techniques. However, the types of material handled in specialist post-production, such as television commercials, pop music videos and special effects, are quite different in nature from the typical broadcast material which many video analysis techniques are designed to work with; shots are short and highly dynamic, and the transitions are often novel or ambiguous. We address the problem of scene change detection and develop a new algorithm which tackles some of the common aspects of post-production material that cause difficulties for past algorithms, such as illumination changes and jump cuts. Operating in the compressed domain on Motion JPEG compressed video, our algorithm detects cuts and fades by analyzing each JPEG macroblock in the context of its temporal and spatial neighbors. By analyzing the DCT coefficients directly, we can extract the mean color of a block and an approximate detail level. We can also perform an approximate cross-correlation between two blocks. The algorithm is part of a set of tools being developed to work with an automated asset management system designed specifically for use in post-production facilities.
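The two per-block quantities used by the algorithm fall directly out of the 8x8 DCT coefficients: the DC term gives the block mean and the AC energy gives an approximate detail level. The sketch below illustrates that reading on a raw pixel block with an orthonormal DCT (where the DC coefficient equals 8 times the block mean); a real Motion JPEG decoder would work on the quantized coefficients from the bitstream, with a different scale factor.

```python
import numpy as np
from scipy.fft import dctn

def block_stats(block: np.ndarray):
    """block: 8x8 array of pixel values for one component of a JPEG macroblock."""
    coeffs = dctn(block.astype(float), norm="ortho")
    mean_value = coeffs[0, 0] / 8.0                              # DC = 8 * mean for an orthonormal 8x8 DCT
    detail = float(np.sum(coeffs ** 2) - coeffs[0, 0] ** 2)      # AC energy as a detail measure
    return mean_value, detail

blk = np.random.default_rng(4).integers(0, 256, (8, 8))
print(block_stats(blk))
```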
A New Color Image Encryption Scheme Using CML and a Fractional-Order Chaotic System
Wu, Xiangjun; Li, Yang; Kurths, Jürgen
2015-01-01
The chaos-based image cryptosystems have been widely investigated in recent years to provide real-time encryption and transmission. In this paper, a novel color image encryption algorithm using coupled-map lattices (CML) and a fractional-order chaotic system is proposed to enhance the security and robustness of encryption algorithms with a permutation-diffusion structure. To make the encryption procedure more confusing and complex, an image division-shuffling process is put forward, where the plain-image is first divided into four sub-images, and then the positions of the pixels in the whole image are shuffled. In order to generate the initial conditions and parameters of the two chaotic systems, a 280-bit external secret key is employed. Key space analysis, various statistical analyses, information entropy analysis, differential analysis and key sensitivity analysis are introduced to test the security of the new image encryption algorithm. The cryptosystem speed is analyzed and tested as well. Experimental results confirm that, in comparison to other image encryption schemes, the new algorithm has higher security and is fast enough for practical image encryption. Moreover, an extensive tolerance analysis of some common image processing operations, such as noise adding, cropping, JPEG compression, rotation, brightening and darkening, has been performed on the proposed image encryption technique. The corresponding results reveal that the proposed image encryption method has good robustness against these image processing operations and geometric attacks. PMID:25826602
Application of M-JPEG compression hardware to dynamic stimulus production.
Mulligan, J B
1997-01-01
Inexpensive circuit boards have appeared on the market which transform a normal micro-computer's disk drive into a video disk capable of playing extended video sequences in real time. This technology enables the performance of experiments which were previously impossible, or at least prohibitively expensive. The new technology achieves this capability using special-purpose hardware to compress and decompress individual video frames, enabling a video stream to be transferred over relatively low-bandwidth disk interfaces. This paper will describe the use of such devices for visual psychophysics and present the technical issues that must be considered when evaluating individual products.
Fragmentation Point Detection of JPEG Images at DHT Using Validator
NASA Astrophysics Data System (ADS)
Mohamad, Kamaruddin Malik; Deris, Mustafa Mat
File carving is an important, practical technique for data recovery in digital forensics investigations and is particularly useful when filesystem metadata is unavailable or damaged. Research on the reassembly of JPEG files with RST markers that are fragmented within the scan area has been done before. However, fragmentation within the Define Huffman Table (DHT) segment is yet to be resolved. This paper analyzes fragmentation within the DHT area and lists all the fragmentation possibilities. Two main contributions are made in this paper. Firstly, three fragmentation points within the DHT area are listed. Secondly, a few novel validators are proposed to detect these fragmentations. The results obtained from tests on manually fragmented JPEG files show that all three fragmentation points within the DHT are successfully detected using the validators.
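A DHT segment has a rigid structure (the 0xFFC4 marker, a two-byte length, then one or more tables, each with a class/ID byte, 16 code-length counts and the corresponding symbols), so simple structural checks can flag a fragmentation point inside it. The validator below sketches such checks; it illustrates the general idea only and is not the specific set of validators proposed in the paper.

```python
def validate_dht(segment: bytes) -> bool:
    """Structural validity check for a JPEG Define-Huffman-Table segment
    starting at the 0xFF 0xC4 marker. Returns False if the segment looks
    fragmented (wrong marker, inconsistent length, bad table header)."""
    if len(segment) < 4 or segment[0] != 0xFF or segment[1] != 0xC4:
        return False
    length = int.from_bytes(segment[2:4], "big")        # includes the 2 length bytes
    if length + 2 > len(segment):
        return False                                     # segment cut short
    pos, end = 4, 2 + length
    while pos < end:
        if pos + 17 > end:
            return False                                 # table header truncated
        tc_th = segment[pos]
        if (tc_th >> 4) > 1 or (tc_th & 0x0F) > 3:
            return False                                 # invalid class / table id
        n_symbols = sum(segment[pos + 1:pos + 17])       # 16 code-length counts
        if n_symbols == 0 or n_symbols > 256:
            return False
        pos += 17 + n_symbols
    return pos == end

# toy example: one DC table holding a single code of length 2
toy = bytes([0xFF, 0xC4, 0x00, 0x14, 0x00, 0, 1] + [0] * 14 + [0x05])
print(validate_dht(toy))   # True
```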
Setti, E; Musumeci, R
2001-06-01
The world wide web is an exciting service that allows one to publish electronic documents made of text and images on the internet. Client software called a web browser can access these documents, and display and print them. The most popular browsers are currently Microsoft Internet Explorer (Microsoft, Redmond, WA) and Netscape Communicator (Netscape Communications, Mountain View, CA). These browsers can display text in hypertext markup language (HTML) format and images in Joint Photographic Experts Group (JPEG) and Graphics Interchange Format (GIF). Currently, neither browser can display radiologic images in the native Digital Imaging and Communications in Medicine (DICOM) format. With the aim of publishing radiologic images on the internet, we wrote a dedicated Java applet. Our software can display radiologic and histologic images in DICOM, JPEG, and GIF formats, and provides a number of functions such as windowing and a magnification lens. The applet is compatible with some web browsers, even the older versions. The software is free and available from the author.
Atmospheric Science Data Center
2014-05-15
article title: Los Alamos, New Mexico. Multi-angle views of the fire in Los Alamos, New Mexico, May 9, 2000. These true-color images covering north-central New Mexico ...
NASA Astrophysics Data System (ADS)
Muneyasu, Mitsuji; Odani, Shuhei; Kitaura, Yoshihiro; Namba, Hitoshi
When a surveillance camera is used, there are cases where privacy protection should be considered. This paper proposes a new privacy protection method that automatically degrades the face region in surveillance images. The proposed method consists of ROI coding of JPEG2000 and a face detection method based on template matching. The experimental results show that the face region can be detected and hidden correctly.
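A small sketch of the two stages described above, with OpenCV's normalised cross-correlation template matching standing in for the detector and a strong blur standing in for the degradation (the paper degrades the region through JPEG2000 ROI coding instead); the file names and threshold are assumptions.

```python
import cv2
import numpy as np

def hide_faces(frame_path: str, template_path: str, threshold: float = 0.7) -> np.ndarray:
    """Detect face-like regions by template matching and degrade them."""
    frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
    templ = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    th, tw = templ.shape
    score = cv2.matchTemplate(frame, templ, cv2.TM_CCOEFF_NORMED)
    out = frame.copy()
    ys, xs = np.where(score >= threshold)
    for y, x in zip(ys, xs):
        roi = out[y:y + th, x:x + tw]
        out[y:y + th, x:x + tw] = cv2.GaussianBlur(roi, (31, 31), 0)  # degrade the matched region
    return out

# protected = hide_faces("surveillance.png", "face_template.png")    # hypothetical inputs
```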
JHelioviewer. Time-dependent 3D visualisation of solar and heliospheric data
NASA Astrophysics Data System (ADS)
Müller, D.; Nicula, B.; Felix, S.; Verstringe, F.; Bourgoignie, B.; Csillaghy, A.; Berghmans, D.; Jiggens, P.; García-Ortiz, J. P.; Ireland, J.; Zahniy, S.; Fleck, B.
2017-09-01
Context. Solar observatories are providing the world-wide community with a wealth of data, covering wide time ranges (e.g. Solar and Heliospheric Observatory, SOHO), multiple viewpoints (Solar TErrestrial RElations Observatory, STEREO), and returning large amounts of data (Solar Dynamics Observatory, SDO). In particular, the large volume of SDO data presents challenges; the data are available only from a few repositories, and full-disk, full-cadence data for reasonable durations of scientific interest are difficult to download, due to their size and the download rates available to most users. From a scientist's perspective this poses three problems: accessing, browsing, and finding interesting data as efficiently as possible. Aims: To address these challenges, we have developed JHelioviewer, a visualisation tool for solar data based on the JPEG 2000 compression standard and part of the open source ESA/NASA Helioviewer Project. Since the first release of JHelioviewer in 2009, the scientific functionality of the software has been extended significantly, and the objective of this paper is to highlight these improvements. Methods: The JPEG 2000 standard offers useful new features that facilitate the dissemination and analysis of high-resolution image data and offers a solution to the challenge of efficiently browsing petabyte-scale image archives. The JHelioviewer software is open source, platform independent, and extendable via a plug-in architecture. Results: With JHelioviewer, users can visualise the Sun for any time period between September 1991 and today; they can perform basic image processing in real time, track features on the Sun, and interactively overlay magnetic field extrapolations. The software integrates solar event data and a timeline display. Once an interesting event has been identified, science quality data can be accessed for in-depth analysis. As a first step towards supporting science planning of the upcoming Solar Orbiter mission, JHelioviewer offers a virtual camera model that enables users to set the vantage point to the location of a spacecraft or celestial body at any given time.
The effects of lossy compression on diagnostically relevant seizure information in EEG signals.
Higgins, G; McGinley, B; Faul, S; McEvoy, R P; Glavin, M; Marnane, W P; Jones, E
2013-01-01
This paper examines the effects of compression on EEG signals, in the context of automated detection of epileptic seizures. Specifically, it examines the use of lossy compression on EEG signals in order to reduce the amount of data which has to be transmitted or stored, while having as little impact as possible on the information in the signal relevant to diagnosing epileptic seizures. Two popular compression methods, JPEG2000 and SPIHT, were used. A range of compression levels was selected for both algorithms in order to compress the signals with varying degrees of loss. This compression was applied to the database of epileptiform data provided by the University of Freiburg, Germany. An automated seizure detection system (real-time EEG analysis for event detection) was used in place of a trained clinician for scoring the reconstructed data. Results demonstrate that compression by a factor of up to 120:1 can be achieved, with minimal loss in seizure detection performance as measured by the area under the receiver operating characteristic curve of the seizure detection system.
JPEG 2000 in advanced ground station architectures
NASA Astrophysics Data System (ADS)
Chien, Alan T.; Brower, Bernard V.; Rajan, Sreekanth D.
2000-11-01
The integration and management of information from distributed and heterogeneous information producers and providers must be a key foundation of any developing imagery intelligence system. Historically, imagery providers acted as production agencies for imagery, imagery intelligence, and geospatial information. In the future, these imagery producers will be evolving to act more like e-business information brokers. The management of imagery and geospatial information (visible, spectral, infrared (IR), radar, elevation, or other feature and foundation data) is crucial from a quality and content perspective. By 2005, there will be significantly advanced collection systems and a myriad of storage devices. There will also be a number of automated and man-in-the-loop correlation, fusion, and exploitation capabilities. All of these new imagery collection and storage systems will result in a higher volume and greater variety of imagery being disseminated and archived in the future. This paper illustrates the importance, from a collection, storage, exploitation, and dissemination perspective, of the proper selection and implementation of standards-based compression technology for ground station and dissemination/archive networks. It specifically discusses the new compression capabilities featured in JPEG 2000 and how that commercially based technology can provide significant improvements to the overall imagery and geospatial enterprise, both from an architectural perspective and from a user's perspective.
Turuk, Mousami; Dhande, Ashwin
2018-04-01
The recent innovations in information and communication technologies have appreciably changed the panorama of health information systems (HIS). These advances provide new means to process, handle, and share medical images and also augment the medical image security issues in terms of confidentiality, reliability, and integrity. Digital watermarking has emerged as a technology that offers acceptable solutions to the security issues in HIS. Texture is a significant feature for detecting the embedding sites in an image, which further leads to substantial improvement in robustness. However, from the perspective of digital watermarking, this feature has received meager attention in the reported literature. This paper exploits the texture property of an image and presents a novel hybrid texture-quantization-based approach for reversible multiple watermarking. The watermarked image quality has been assessed by the peak signal-to-noise ratio (PSNR), structural similarity measure (SSIM), and universal image quality index (UIQI), and the obtained results are superior to the state-of-the-art methods. The algorithm has been evaluated on a variety of medical imaging modalities (CT, MRA, MRI, US) and robustness has been verified, considering various image processing attacks including JPEG compression. The proposed scheme offers additional security using repetitive embedding of BCH-encoded watermarks and an ADM-encrypted ECG signal. Experimental results achieved a maximum hiding capacity of 22,616 bits with a PSNR of 53.64 dB.
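Two of the quality measures named above, PSNR and SSIM, can be reproduced directly with scikit-image (UIQI is not in that library, so it is omitted here); the image pair below is a hypothetical example, not the paper's data.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def watermark_quality(original: np.ndarray, watermarked: np.ndarray):
    """PSNR and SSIM between a cover image and its watermarked version."""
    psnr = peak_signal_noise_ratio(original, watermarked, data_range=255)
    ssim = structural_similarity(original, watermarked, data_range=255)
    return psnr, ssim

rng = np.random.default_rng(5)
cover = rng.integers(0, 256, (256, 256), dtype=np.uint8)
stego = cover.copy()
stego[::4, ::4] ^= 1                       # toy embedding touching a few LSBs
print(watermark_quality(cover, stego))
```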
Bit-Grooming: Shave Your Bits with Razor-sharp Precision
NASA Astrophysics Data System (ADS)
Zender, C. S.; Silver, J.
2017-12-01
Lossless compression can reduce climate data storage by 30-40%. Further reduction requires lossy compression that also reduces precision. Fortunately, geoscientific models and measurements generate false precision (scientifically meaningless data bits) that can be eliminated without sacrificing scientifically meaningful data. We introduce Bit Grooming, a lossy compression algorithm that removes the bloat due to false precision, those bits and bytes beyond the meaningful precision of the data. Bit Grooming is statistically unbiased, applies to all floating point numbers, and is easy to use. Bit Grooming reduces geoscience data storage requirements by 40-80%. We compared Bit Grooming to the competitors Linear Packing, Layer Packing, and GRIB2/JPEG2000. The other compression methods have the edge in terms of compression, but Bit Grooming is the most accurate and certainly the most usable and portable. Bit Grooming provides flexible and well-balanced solutions to the trade-offs among compression, accuracy, and usability required by lossy compression. Geoscientists could reduce their long-term storage costs, and show leadership in the elimination of false precision, by adopting Bit Grooming.
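The false-precision bits live in the low-order mantissa of each IEEE float, so they can be masked away before the lossless compressor runs, which makes the data far more compressible. The sketch below shows the alternating shave/set idea for float32, with the number of retained mantissa bits derived from the requested number of significant decimal digits; it is a simplified illustration of the concept, not the NCO implementation of Bit Grooming.

```python
import numpy as np

def bit_groom(data: np.ndarray, nsd: int) -> np.ndarray:
    """Quantize float32 data to ~nsd significant decimal digits by alternately
    shaving (zeroing) and setting (one-filling) the mantissa bits beyond that
    precision, so the quantization error is roughly unbiased on average."""
    keep_bits = int(np.ceil(nsd * np.log2(10))) + 1   # mantissa bits to retain (with a guard bit)
    drop_bits = 23 - keep_bits                        # float32 has 23 explicit mantissa bits
    if drop_bits <= 0:
        return data.astype(np.float32)
    raw = data.astype(np.float32).view(np.uint32).copy()
    set_mask = np.uint32((1 << drop_bits) - 1)
    shave_mask = np.uint32(0xFFFFFFFF ^ int(set_mask))
    raw[0::2] &= shave_mask                           # shave: round toward zero
    raw[1::2] |= set_mask                             # set: round away from zero
    return raw.view(np.float32)

x = np.linspace(0.0, 1.0, 8, dtype=np.float32) + np.float32(273.15)
print(bit_groom(x, nsd=3))
```

The groomed array then compresses much better with a standard lossless codec (e.g., DEFLATE in netCDF/HDF5), because the masked mantissa bits are highly repetitive.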
[Glossary of terms used by radiologists in image processing].
Rolland, Y; Collorec, R; Bruno, A; Ramée, A; Morcet, N; Haigron, P
1995-01-01
We give the definition of 166 words used in image processing. Adaptivity, aliasing, analog-digital converter, analysis, approximation, arc, artifact, artificial intelligence, attribute, autocorrelation, bandwidth, boundary, brightness, calibration, class, classification, classify, centre, cluster, coding, color, compression, contrast, connectivity, convolution, correlation, data base, decision, decomposition, deconvolution, deduction, descriptor, detection, digitization, dilation, discontinuity, discretization, discrimination, disparity, display, distance, distortion, distribution, dynamic, edge, energy, enhancement, entropy, erosion, estimation, event, extrapolation, feature, file, filter, filter floaters, fitting, Fourier transform, frequency, fusion, fuzzy, Gaussian, gradient, graph, gray level, group, growing, histogram, Hough transform, Hounsfield, image, impulse response, inertia, intensity, interpolation, interpretation, invariance, isotropy, iterative, JPEG, knowledge base, label, Laplacian, learning, least squares, likelihood, matching, Markov field, mask, matching, mathematical morphology, merge (to), MIP, median, minimization, model, moiré, moment, MPEG, neural network, neuron, node, noise, norm, normal, operator, optical system, optimization, orthogonal, parametric, pattern recognition, periodicity, photometry, pixel, polygon, polynomial, prediction, pulsation, pyramidal, quantization, raster, reconstruction, recursive, region, rendering, representation space, resolution, restoration, robustness, ROC, thinning, transform, sampling, saturation, scene analysis, segmentation, separable function, sequential, smoothing, spline, split (to), shape, threshold, tree, signal, speckle, spectrum, spline, stationarity, statistical, stochastic, structuring element, support, syntactic, synthesis, texture, truncation, variance, vision, voxel, windowing.
Two VLT 8.2-m Unit Telescopes in Action
NASA Astrophysics Data System (ADS)
1999-04-01
Visitors at ANTU - Astronomical Images from KUEYEN. The VLT Control Room at the Paranal Observatory is becoming a busy place indeed. From here, two specialist teams of ESO astronomers and engineers now operate two VLT 8.2-m Unit Telescopes in parallel, ANTU and KUEYEN (formerly UT1 and UT2; for more information about the naming and the pronunciation, see ESO Press Release 06/99). Regular science observations have just started with the first of these giant telescopes, while impressive astronomical images are being obtained with the second. The work is hard, but the mood in the control room is good. Insiders claim that there have even been occasions on which the groups have had a friendly "competition" about which telescope makes the "best" images! The ANTU team has worked with the FORS multi-mode instrument; their colleagues at KUEYEN use the VLT Test Camera for the ongoing tests of this new telescope. While the first is a highly developed astronomical instrument with a large-field CCD imager (6.8 x 6.8 arcmin² in the normal mode; 3.4 x 3.4 arcmin² in the high-resolution mode), the other is a less complex CCD camera with a smaller field (1.5 x 1.5 arcmin²), suited to verify the optical performance of the telescope. As these images demonstrate, the performance of the second VLT Unit Telescope is steadily improving and it may not be too long before its optical quality will approach that of the first. First KUEYEN photos of stars and galaxies: We present here some of the first astronomical images, taken with the second telescope, KUEYEN, in late March and early April 1999. They reflect the current status of the optical, electronic and mechanical systems, still in the process of being tuned. As expected, the experience gained from ANTU last year has turned out to be invaluable and has allowed good progress during this extremely delicate process. Caption to PR Photo 19a/99: This photo was obtained with VLT KUEYEN on April 4, 1999. It is reproduced from an excellent 60-second R(ed)-band exposure of the innermost region of a globular cluster, Messier 68 (NGC 4590), in the southern constellation Hydra (The Water-Snake). The distance to this 8-mag cluster is about 35,000 light-years, and the diameter is about 140 light-years. The excellent image quality is 0.38 arcsec, demonstrating a good optical and mechanical state of the telescope, already at this early stage of the commissioning phase. The field measures about 90 x 90 arcsec². The original scale is 0.0455 arcsec/pixel and there are 2048 x 2048 pixels in one frame. North is up and East is left. Caption to PR Photo 19b/99: This photo shows the central region of the spiral galaxy ESO 269-57, located in the southern constellation Centaurus at a distance of about 150 million light-years. Many galaxies are seen in this direction at about the same distance, forming a loose cluster; there are also some fainter, more distant ones in the background. The designation refers to the ESO/Uppsala Survey of the Southern Sky in the 1970s, during which over 15,000 southern galaxies were catalogued. ESO 269-57 is a tightly bound object of type Sar, the "r" referring to the "ring" that surrounds the bright centre, which is overexposed here.
The photo is a composite, based on three exposures (Blue - 600 sec; Yellow-Green - 300 sec; Red - 300 sec) obtained with KUEYEN on March 28, 1999. The image quality is 0.7 arcsec and the field is 90 x 90 arcsec². North is up and East is left. Caption to PR Photo 19c/99: Somewhat further out in space, and right on the border between the southern constellations Hydra and Centaurus, lies this knotty spiral galaxy, IC 4248; the distance is about 210 million light-years. It was imaged with KUEYEN on March 28, 1999, with the same filters and exposure times as used for Photo 19b/99. The image quality is 0.75 arcsec and the field is 90 x 90 arcsec². North is up and East is left. Caption to PR Photo 19d/99: This is a close-up view of the double galaxy NGC 5090 (right) and NGC 5091 (left), in the southern constellation Centaurus. The first is a typical S0 galaxy with a bright diffuse centre, surrounded by a fainter envelope of stars (not resolved in this picture). However, some of the starlike objects seen in this region may be globular clusters (or dwarf galaxies) in orbit around NGC 5090. The other galaxy is of type Sa (the spiral structure is more developed) and is seen at a steep angle. The three-colour composite is based on frames obtained with KUEYEN on March 29, 1999, with the same filters and exposure times as used for Photo 19b/99. The image quality is 0.7 arcsec and the field is 90 x 90 arcsec². North is up and East is left. (Note inserted on April 26: The original caption text identified the second galaxy as NGC 5090B; this error has now been corrected.) Caption to PR Photo 19e/99: Wide-angle photo of the second 8.2-m VLT Unit Telescope, KUEYEN, obtained on March 10, 1999, with the main mirror and its cell in place at the bottom of the telescope structure. The Test Camera with which the astronomical images above were made is positioned at the Cassegrain focus, inside this mirror cell. The Paranal Inauguration on March 5, 1999, took place under this telescope, which was tilted towards the horizon to accommodate nearly 300 persons on the observing floor. Astronomical observations with ANTU have started: On April 1, 1999, the first 8.2-m VLT Unit Telescope, ANTU, was "handed over" to the astronomers. Last year, about 270 observing proposals competed for the first, precious observing time at Europe's largest optical telescope, and more than 100 of these were accommodated within the six-month period until the end of September 1999. The complete observing schedule is available on the web. These observations will be carried out in two different modes. During the Visitor Mode, the astronomers will be present at the telescope, while in the Service Mode, ESO observers perform the observations. The latter procedure allows a greater degree of flexibility and the possibility to assign periods of particularly good observing conditions to programmes whose success is critically dependent on this. The first ten nights at ANTU were allocated to service mode observations.
After some initial technical problems with the instruments, these have now started. Already in the first night, programmes at ISAAC requiring 0.4 arcsec conditions could be satisfied, and some images better than 0.3 arcsec were obtained in the near-infrared. The first astronomers to use the telescope in visitor mode will be Professors Immo Appenzeller (Heidelberg, Germany; "Photo-polarimetry of pulsars") and George Miley (Leiden, The Netherlands; "Distant radio galaxies") with their respective team colleagues. How to obtain ESO Press Information: ESO Press Information is made available on the World-Wide Web (URL: http://www.eso.org../ ). ESO Press Photos may be reproduced, if credit is given to the European Southern Observatory. Note also the dedicated web area with VLT Information.
Passive forensics for copy-move image forgery using a method based on DCT and SVD.
Zhao, Jie; Guo, Jichang
2013-12-10
As powerful image editing tools are widely used, the demand for identifying the authenticity of an image has much increased. Copy-move forgery is one of the most frequently used tampering techniques. Most existing techniques to expose this forgery need to improve their robustness to common post-processing operations and fail to precisely locate the tampered region, especially when there are large similar or flat regions in the image. In this paper, a robust method based on DCT and SVD is proposed to detect this specific artifact. Firstly, the suspicious image is divided into fixed-size overlapping blocks and the 2D-DCT is applied to each block; the DCT coefficients are then quantized by a quantization matrix to obtain a more robust representation of each block. Secondly, each quantized block is divided into non-overlapping sub-blocks and SVD is applied to each sub-block; features are then extracted to reduce the dimension of each block, using its largest singular value. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks are matched by a predefined shift frequency threshold. Experimental results demonstrate that our proposed method can effectively detect multiple copy-move forgeries and precisely locate the duplicated regions, even when an image has been distorted by Gaussian blurring, AWGN, JPEG compression and their mixed operations. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
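A compact sketch of the detection pipeline just described (block DCT, quantization, per-sub-block SVD features, lexicographic sorting, neighbour matching). The block size, quantization step and minimum-shift filter are illustrative choices rather than the paper's tuned parameters, and the shift-frequency voting step is simplified to a distance check; the loop is written for clarity, not speed.

```python
import numpy as np
from scipy.fft import dctn

def copy_move_candidates(gray: np.ndarray, bsize: int = 8, q: float = 16.0,
                         min_shift: int = 16):
    """Return pairs of block positions whose quantized-DCT/SVD features match
    and that are far enough apart to be plausible copy-move duplicates."""
    h, w = gray.shape
    feats, positions = [], []
    for y in range(h - bsize + 1):
        for x in range(w - bsize + 1):
            block = gray[y:y + bsize, x:x + bsize].astype(float)
            qdct = np.round(dctn(block, norm="ortho") / q)       # robust block representation
            # largest singular value of each non-overlapping 4x4 sub-block
            f = tuple(np.linalg.svd(qdct[i:i + 4, j:j + 4], compute_uv=False)[0]
                      for i in (0, 4) for j in (0, 4))
            feats.append(f)
            positions.append((y, x))
    order = sorted(range(len(feats)), key=lambda k: feats[k])    # lexicographic sort
    pairs = []
    for a, b in zip(order, order[1:]):                           # identical features end up adjacent
        if feats[a] == feats[b]:
            (y1, x1), (y2, x2) = positions[a], positions[b]
            if abs(y1 - y2) + abs(x1 - x2) >= min_shift:         # ignore near-identical neighbours
                pairs.append((positions[a], positions[b]))
    return pairs
```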
Hu, J H; Wang, Y; Cahill, P T
1997-01-01
This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been proven to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further specified using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet transform based embedded zero-tree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.
NASA Astrophysics Data System (ADS)
Al-Mansoori, Saeed; Kunhu, Alavi
2013-10-01
This paper proposes a blind multi-watermarking scheme based on designing two back-to-back encoders. The first encoder is implemented to embed a robust watermark into remote sensing imagery by applying a Discrete Cosine Transform (DCT) approach. Such a watermark is used in many applications to protect the copyright of the image. The second encoder embeds a fragile watermark using the 'SHA-1' hash function. The purpose of embedding a fragile watermark is to prove the authenticity of the image (i.e., tamper detection). The proposed technique was developed in response to new challenges with piracy of remote sensing imagery ownership. This led researchers to look for different means to secure the ownership of satellite imagery and prevent the illegal use of these resources. Therefore, the Emirates Institution for Advanced Science and Technology (EIAST) proposed utilizing an existing data security concept by embedding a digital signature, a "watermark", into DubaiSat-1 satellite imagery. In this study, DubaiSat-1 images with 2.5 meter resolution are used as the cover and a colored EIAST logo is used as the watermark. In order to evaluate the robustness of the proposed technique, several attacks are applied, such as JPEG compression, rotation and synchronization attacks. Furthermore, tampering attacks are applied to prove image authenticity.
Low-complex energy-aware image communication in visual sensor networks
NASA Astrophysics Data System (ADS)
Phamila, Yesudhas Asnath Victy; Amutha, Ramachandran
2013-10-01
A low-complexity, low-bit-rate, energy-efficient image compression algorithm is presented, designed explicitly for resource-constrained visual sensor networks used in surveillance, battlefield, and habitat monitoring applications, where voluminous image data must be communicated over a bandwidth-limited wireless medium. The proposed method overcomes the energy limitation of individual nodes and is investigated in terms of image quality, entropy, processing time, overall energy consumption, and system lifetime. The algorithm is highly energy efficient and extremely fast since it applies an energy-aware zonal binary discrete cosine transform (DCT) that computes only the few required significant coefficients and codes them using an enhanced complementary Golomb-Rice code, without any floating point operations. Experiments are performed using the Atmel ATmega128 and MSP430 processors to measure the resultant energy savings. Simulation results show that the proposed energy-aware fast zonal transform consumes only 0.3% of the energy needed by the conventional DCT. The algorithm consumes only 6% of the energy needed by the Independent JPEG Group (fast) version, making it well suited to embedded systems requiring low power consumption. The proposed scheme is unique since it significantly enhances the lifetime of the camera sensor node and the network without any need for the distributed processing traditionally required in existing algorithms.
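The core idea, computing only a small low-frequency zone of the block DCT and entropy-coding it with a Golomb-Rice code, can be sketched as follows. The zone size and Rice parameter are illustrative, and this plain floating-point sketch does not reproduce the paper's integer-only binary DCT or its enhanced complementary Golomb-Rice variant.

```python
import numpy as np
from scipy.fft import dctn

def zonal_dct(block8, zone=3):
    """Compute the 8x8 DCT but keep (and code) only the low-frequency zone x zone corner."""
    c = dctn(block8.astype(float), norm='ortho')
    return np.round(c[:zone, :zone]).astype(int).ravel()

def rice_encode(values, k=2):
    """Plain Golomb-Rice coding of signed integers (mapped to non-negative first)."""
    bits = []
    for v in values:
        u = 2 * v if v >= 0 else -2 * v - 1                    # map signed -> unsigned
        q, r = u >> k, u & ((1 << k) - 1)
        bits.extend([1] * q + [0])                             # unary-coded quotient
        bits.extend((r >> i) & 1 for i in reversed(range(k)))  # k-bit remainder
    return bits

# Example: code one 8x8 block with a 3x3 zone (9 coefficients instead of 64)
block = np.random.randint(0, 256, (8, 8))
print(len(rice_encode(zonal_dct(block))), "bits")
```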
Image steganalysis using Artificial Bee Colony algorithm
NASA Astrophysics Data System (ADS)
Sajedi, Hedieh
2017-09-01
Steganography is the science of secure communication where the presence of the communication cannot be detected, while steganalysis is the art of discovering the existence of the secret communication. Processing a huge amount of information usually takes extensive execution time and computational resources, so a preprocessing phase is needed to moderate both. In this paper, we propose a new feature-based blind steganalysis method for distinguishing stego images from cover (clean) images in JPEG format. In this regard, we present a feature selection technique based on an improved Artificial Bee Colony (ABC) algorithm. The ABC algorithm is inspired by the social behaviour of honeybees in their search for food sources. The proposed selection is wrapper-based: candidate feature subsets are evaluated by classifier performance and by the dimension of the selected feature vector. The experiments are performed using two large data sets of JPEG images. Experimental results demonstrate the effectiveness of the proposed steganalysis technique compared to other existing techniques.
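A very small, ABC-flavoured wrapper search over binary feature masks, assuming a generic linear classifier as the wrapped evaluator, might look like the sketch below. The paper's improved ABC (its full employed/onlooker/scout phases and fitness definition) is not reproduced; the dimensionality penalty and move rules here are illustrative.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def fitness(mask, X, y, dim_penalty=0.01):
    """Wrapper fitness: classifier accuracy minus a penalty on feature count."""
    if not mask.any():
        return 0.0
    acc = cross_val_score(LinearSVC(dual=False), X[:, mask], y, cv=3).mean()
    return acc - dim_penalty * mask.mean()

def abc_select(X, y, n_bees=10, n_iter=30, rng=np.random.default_rng(0)):
    """Tiny ABC-style search over binary feature masks (each mask = one food source)."""
    d = X.shape[1]
    pop = rng.random((n_bees, d)) < 0.5                       # initial food sources
    fit = np.array([fitness(m, X, y) for m in pop])
    for _ in range(n_iter):
        for i in range(n_bees):                               # employed-bee phase: local move
            cand = pop[i].copy()
            cand[rng.integers(d)] ^= True                     # toggle one feature
            f = fitness(cand, X, y)
            if f > fit[i]:
                pop[i], fit[i] = cand, f
        # simplified onlooker/scout step: resample the weakest source near the best one
        best, worst = pop[fit.argmax()], fit.argmin()
        pop[worst] = best ^ (rng.random(d) < 0.05)
        fit[worst] = fitness(pop[worst], X, y)
    return pop[fit.argmax()]                                  # selected feature mask
```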
Forensic Analysis of Digital Image Tampering
2004-12-01
The thesis analyses when each detection method fails (discussed in Chapter 4) and includes a test image containing an invisible watermark embedded with LSB steganography. Listed figures include an example of an invisible watermark produced with the Steganography Software F5 (Figure 2.2), an example of copy-move image forgery (Figure 2.3), the algorithm for the JPEG block technique (Figure 3.11), and a "forged" image with its detection result (Figure 3.12).
NASA Astrophysics Data System (ADS)
Chen, Jin; Wang, Yifan; Wang, Xuelei; Wang, Yuehong; Hu, Rui
2017-01-01
Combine harvesters usually work in sparsely populated areas with harsh environments. In order to achieve remote real-time video monitoring of the working state of a combine harvester, a remote video monitoring system based on ARM11 and embedded Linux is developed. The system uses a USB camera to capture video of the working state of the main parts of the combine harvester, including the granary, threshing drum, cab, and cutting table. The video data are compressed with the JPEG image compression standard and the monitoring screen is transferred to a remote monitoring center over the network for long-range monitoring and management. The paper first explains the necessity of the system, then briefly introduces the hardware and software implementation, and then describes in detail the configuration and compilation of the embedded Linux operating system and the compilation and porting of the video server program. Finally, the equipment was installed and commissioned on a combine harvester and the system was tested. In the experiments, the remote video monitoring system achieves 30 fps at a resolution of 800x600, with a response delay of about 40 ms over the public network.
Progressive data transmission for anatomical landmark detection in a cloud.
Sofka, M; Ralovich, K; Zhang, J; Zhou, S K; Comaniciu, D
2012-01-01
In the concept of cloud-computing-based systems, various authorized users have secure access to patient records from a number of care delivery organizations from any location. This creates a growing need for remote visualization, advanced image processing, state-of-the-art image analysis, and computer aided diagnosis. This paper proposes a system of algorithms for automatic detection of anatomical landmarks in 3D volumes in the cloud computing environment. The system addresses the inherent problem of limited bandwidth between a (thin) client, data center, and data analysis server. The problem of limited bandwidth is solved by a hierarchical sequential detection algorithm that obtains data by progressively transmitting only image regions required for processing. The client sends a request to detect a set of landmarks for region visualization or further analysis. The algorithm running on the data analysis server obtains a coarse level image from the data center and generates landmark location candidates. The candidates are then used to obtain image neighborhood regions at a finer resolution level for further detection. This way, the landmark locations are hierarchically and sequentially detected and refined. Only image regions surrounding landmark location candidates need to be transmitted during detection. Furthermore, the image regions are lossy compressed with JPEG 2000. Together, these properties amount to at least 30 times bandwidth reduction while achieving similar accuracy when compared to an algorithm using the original data. The hierarchical sequential algorithm with progressive data transmission considerably reduces bandwidth requirements in cloud-based detection systems.
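The coarse-to-fine loop described above can be reduced to a short sketch; fetch_region() and detect_candidates() below are hypothetical placeholders for the JPEG 2000 region request to the data center and the trained landmark detector at a given resolution level.

```python
# Sketch of hierarchical sequential detection with progressive data transmission.
# fetch_region(region, level) and detect_candidates(data, level) are hypothetical
# callables supplied by the server implementation; they are not defined here.
def detect_landmark(fetch_region, detect_candidates, levels=(4, 2, 1), roi=None):
    """Hierarchically refine landmark candidates, transmitting only the regions needed."""
    candidates = [roi]                      # start with the whole coarse-level volume
    for level in levels:                    # e.g. 4x, 2x, 1x downsampling factors
        refined = []
        for region in candidates:
            data = fetch_region(region, level)        # only this region is transmitted
            refined.extend(detect_candidates(data, level))
        candidates = refined                # finer-level search restricted to candidates
    return candidates                       # final landmark location estimates
```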
Helioviewer.org: Browsing Very Large Image Archives Online Using JPEG 2000
NASA Astrophysics Data System (ADS)
Hughitt, V. K.; Ireland, J.; Mueller, D.; Dimitoglou, G.; Garcia Ortiz, J.; Schmidt, L.; Wamsler, B.; Beck, J.; Alexanderian, A.; Fleck, B.
2009-12-01
As the amount of solar data available to scientists continues to increase at faster and faster rates, it is important that there exist simple tools for navigating this data quickly with a minimal amount of effort. By combining heterogeneous solar physics datatypes such as full-disk images and coronagraphs, along with feature and event information, Helioviewer offers a simple and intuitive way to browse multiple datasets simultaneously. Images are stored in a repository using the JPEG 2000 format and tiled dynamically upon a client's request. By tiling images and serving only the portions of the image requested, it is possible for the client to work with very large images without having to fetch all of the data at once. In addition to a focus on intercommunication with other virtual observatories and browsers (VSO, HEK, etc), Helioviewer will offer a number of externally-available application programming interfaces (APIs) to enable easy third party use, adoption and extension. Recent efforts have resulted in increased performance, dynamic movie generation, and improved support for mobile web browsers. Future functionality will include: support for additional data-sources including RHESSI, SDO, STEREO, and TRACE, a navigable timeline of recorded solar events, social annotation, and basic client-side image processing.
21 CFR 892.2030 - Medical image digitizer.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Medical image digitizer. 892.2030 Section 892.2030 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std.). [63 FR 23387, Apr. 29...
21 CFR 892.2040 - Medical image hardcopy device.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Medical image hardcopy device. 892.2040 Section 892.2040 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture...
ImageJ: Image processing and analysis in Java
NASA Astrophysics Data System (ADS)
Rasband, W. S.
2012-06-01
ImageJ is a public domain Java image processing program inspired by NIH Image. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations.
A new image representation for compact and secure communication
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prasad, Lakshman; Skourikhine, A. N.
In many areas of nuclear materials management there is a need for communication, archival, and retrieval of annotated image data between heterogeneous platforms and devices to effectively implement safety, security, and safeguards of nuclear materials. Current image formats such as JPEG are not ideally suited in such scenarios as they are not scalable to different viewing formats, and do not provide a high-level representation of images that facilitate automatic object/change detection or annotation. The new Scalable Vector Graphics (SVG) open standard for representing graphical information, recommended by the World Wide Web Consortium (W3C), is designed to address issues of image scalability, portability, and annotation. However, until now there has been no viable technology to efficiently field images of high visual quality under this standard. Recently, LANL has developed a vectorized image representation that is compatible with the SVG standard and preserves visual quality. This is based on a new geometric framework for characterizing complex features in real-world imagery that incorporates perceptual principles of processing visual information known from cognitive psychology and vision science, to obtain a polygonal image representation of high fidelity. This representation can take advantage of all textual compression and encryption routines unavailable to other image formats. Moreover, this vectorized image representation can be exploited to facilitate automated object recognition that can reduce time required for data review. The objects/features of interest in these vectorized images can be annotated via animated graphics to facilitate quick and easy display and comprehension of processed image content.
Implementation of remote monitoring and managing switches
NASA Astrophysics Data System (ADS)
Leng, Junmin; Fu, Guo
2010-12-01
In order to strengthen the safety performance of the network and to provide greater convenience and efficiency for operators and managers, a system for remotely monitoring and managing switches has been designed and implemented using advanced network technology and existing network resources. A fast Internet Protocol camera (FS IP Camera) is selected, which has a 32-bit RISC embedded processor and supports a number of protocols. The Motion-JPEG image compression algorithm is adopted so that high-resolution images can be transmitted over narrow network bandwidth. The architecture of the whole monitoring and managing system is designed and implemented according to the current infrastructure of the network and switches, and the control and administration software is designed. The dynamic web page platform Java Server Pages (JSP) is used for development, and an SQL (Structured Query Language) Server database is applied to save and access image information, network messages, and user data. The reliability and security of the system are further strengthened by access control. The system software is cross-platform, supporting multiple operating systems (UNIX, Linux, and Windows). The application of the system can greatly reduce manpower costs and allows problems to be found and solved quickly.
Cornelissen, Frans; Cik, Miroslav; Gustin, Emmanuel
2012-04-01
High-content screening has brought new dimensions to cellular assays by generating rich data sets that characterize cell populations in great detail and detect subtle phenotypes. To derive relevant, reliable conclusions from these complex data, it is crucial to have informatics tools supporting quality control, data reduction, and data mining. These tools must reconcile the complexity of advanced analysis methods with the user-friendliness demanded by the user community. After review of existing applications, we realized the possibility of adding innovative new analysis options. Phaedra was developed to support workflows for drug screening and target discovery, interact with several laboratory information management systems, and process data generated by a range of techniques including high-content imaging, multicolor flow cytometry, and traditional high-throughput screening assays. The application is modular and flexible, with an interface that can be tuned to specific user roles. It offers user-friendly data visualization and reduction tools for HCS but also integrates Matlab for custom image analysis and the Konstanz Information Miner (KNIME) framework for data mining. Phaedra features efficient JPEG2000 compression and full drill-down functionality from dose-response curves down to individual cells, with exclusion and annotation options, cell classification, statistical quality controls, and reporting.
Fast H.264/AVC FRExt intra coding using belief propagation.
Milani, Simone
2011-01-01
In the H.264/AVC FRExt coder, the performance of Intra coding significantly exceeds that of previous still-image coding standards such as JPEG2000, thanks to a massive use of spatial prediction. Unfortunately, the adoption of an extensive set of predictors induces a significant increase in the computational complexity required by the rate-distortion optimization routine. This paper presents a complexity reduction strategy that aims at reducing the computational load of Intra coding with a small loss in compression performance. The proposed algorithm relies on selecting a reduced set of prediction modes according to their probabilities, which are estimated by a belief-propagation procedure. Experimental results show that the proposed method saves up to 60% of the coding time required by an exhaustive rate-distortion optimization method with a negligible loss in performance. Moreover, it permits accurate control of the computational complexity, unlike other methods where the computational complexity depends upon the coded sequence.
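The mode-pruning idea can be expressed in a few lines; estimate_mode_probs() and rd_cost() below are hypothetical placeholders for the paper's belief-propagation estimate and the encoder's Lagrangian cost J = D + λR, so only the reduced-candidate selection loop is shown.

```python
def choose_intra_mode(block, neighbours, estimate_mode_probs, rd_cost,
                      n_modes=9, keep=3):
    """Evaluate the full rate-distortion cost only for the 'keep' most probable
    intra prediction modes instead of all n_modes candidates."""
    probs = estimate_mode_probs(block, neighbours)            # one probability per mode
    candidates = sorted(range(n_modes), key=lambda m: -probs[m])[:keep]
    return min(candidates, key=lambda m: rd_cost(block, m))   # best of the reduced set
```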
Introducing keytagging, a novel technique for the protection of medical image-based tests.
Rubio, Óscar J; Alesanco, Álvaro; García, José
2015-08-01
This paper introduces keytagging, a novel technique to protect medical image-based tests by implementing image authentication, integrity control and location of tampered areas, private captioning with role-based access control, traceability and copyright protection. It relies on the association of tags (binary data strings) to stable, semistable or volatile features of the image, whose access keys (called keytags) depend on both the image and the tag content. Unlike watermarking, this technique can associate information to the most stable features of the image without distortion. Thus, this method preserves the clinical content of the image without the need for assessment, prevents eavesdropping and collusion attacks, and obtains a substantial capacity-robustness tradeoff with simple operations. The evaluation of this technique, involving images of different sizes from various acquisition modalities and image modifications that are typical in the medical context, demonstrates that all the aforementioned security measures can be implemented simultaneously and that the algorithm presents good scalability. In addition to this, keytags can be protected with standard Cryptographic Message Syntax and the keytagging process can be easily combined with JPEG2000 compression since both share the same wavelet transform. This reduces the delays for associating keytags and retrieving the corresponding tags to implement the aforementioned measures to only ≃30 and ≃90ms respectively. As a result, keytags can be seamlessly integrated within DICOM, reducing delays and bandwidth when the image test is updated and shared in secure architectures where different users cooperate, e.g. physicians who interpret the test, clinicians caring for the patient and researchers. Copyright © 2015 Elsevier Inc. All rights reserved.
Content Preserving Watermarking for Medical Images Using Shearlet Transform and SVD
NASA Astrophysics Data System (ADS)
Favorskaya, M. N.; Savchina, E. I.
2017-05-01
Medical Image Watermarking (MIW) is a special field of watermarking, owing to the requirements of the Digital Imaging and Communications in Medicine (DICOM) standard in force since 1993. All 20 parts of the DICOM standard are revised periodically. The main idea of MIW is to embed several types of information into the host medical image: the doctor's digital signature, a fragile watermark, the electronic patient record, and a main watermark representing the doctor's region of interest. These four types of information are represented in different forms; some of them are encrypted according to the DICOM requirements. However, all types of information must be merged into a generalized binary stream for embedding, and this stream may have a huge volume; therefore, not all watermarking methods can be applied successfully. Recently, the digital shearlet transform has been introduced as a rigorous mathematical framework for the geometric representation of multi-dimensional data. Some modifications of the shearlet transform, particularly the non-subsampled shearlet transform, can be associated with a multi-resolution analysis that provides a fully shift-invariant, multi-scale, and multi-directional expansion. During experiments, the quality of the extracted watermarks under JPEG compression and typical internet attacks was estimated using several metrics, including the peak signal to noise ratio, structural similarity index measure, and bit error rate.
Lossless compression algorithm for REBL direct-write e-beam lithography system
NASA Astrophysics Data System (ADS)
Cramer, George; Liu, Hsin-I.; Zakhor, Avideh
2010-03-01
Future lithography systems must produce microchips with smaller feature sizes, while maintaining throughputs comparable to those of today's optical lithography systems. This places stringent constraints on the effective data throughput of any maskless lithography system. In recent years, we have developed a datapath architecture for direct-write lithography systems, and have shown that compression plays a key role in reducing throughput requirements of such systems. Our approach integrates a low complexity hardware-based decoder with the writers, in order to decompress a compressed data layer in real time on the fly. In doing so, we have developed a spectrum of lossless compression algorithms for integrated circuit layout data to provide a tradeoff between compression efficiency and hardware complexity, the latest of which is Block Golomb Context Copy Coding (Block GC3). In this paper, we present a modified version of Block GC3 called Block RGC3, specifically tailored to the REBL direct-write E-beam lithography system. Two characteristic features of the REBL system are a rotary stage resulting in arbitrarily-rotated layout imagery, and E-beam corrections prior to writing the data, both of which present significant challenges to lossless compression algorithms. Together, these effects reduce the effectiveness of both the copy and predict compression methods within Block GC3. Similar to Block GC3, our newly proposed technique Block RGC3, divides the image into a grid of two-dimensional "blocks" of pixels, each of which copies from a specified location in a history buffer of recently-decoded pixels. However, in Block RGC3 the number of possible copy locations is significantly increased, so as to allow repetition to be discovered along any angle of orientation, rather than horizontal or vertical. Also, by copying smaller groups of pixels at a time, repetition in layout patterns is easier to find and take advantage of. As a side effect, this increases the total number of copy locations to transmit; this is combated with an extra region-growing step, which enforces spatial coherence among neighboring copy locations, thereby improving compression efficiency. We characterize the performance of Block RGC3 in terms of compression efficiency and encoding complexity on a number of rotated Metal 1, Poly, and Via layouts at various angles, and show that Block RGC3 provides higher compression efficiency than existing lossless compression algorithms, including JPEG-LS, ZIP, BZIP2, and Block GC3.
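The central copy-matching step, finding the history-buffer location whose pixels best reproduce the current block so that only a copy vector and a few corrected pixels need be coded, can be sketched as below. This is a generic raster-order search, not Block RGC3 itself, which additionally searches along arbitrary rotation angles and grows regions of coherent copy vectors.

```python
import numpy as np

def best_copy_vector(image, y, x, block=8, search=16):
    """For the block at (y, x), search previously decoded rows for the copy location
    that minimises the number of mismatching pixels (simplified: candidates are
    restricted to blocks lying entirely above the current block row)."""
    target = image[y:y + block, x:x + block]
    best, best_err = None, None
    for sy in range(max(0, y - search), y - block + 1):
        for sx in range(max(0, x - search), min(image.shape[1] - block, x + search) + 1):
            cand = image[sy:sy + block, sx:sx + block]
            err = int(np.count_nonzero(cand != target))
            if best_err is None or err < best_err:
                best, best_err = (sy - y, sx - x), err
    return best, best_err   # copy vector plus number of residual pixels to correct
```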
Confidential storage and transmission of medical image data.
Norcen, R; Podesser, M; Pommer, A; Schmidt, H-P; Uhl, A
2003-05-01
We discuss computationally efficient techniques for confidential storage and transmission of medical image data. Two types of partial encryption techniques based on AES are proposed. The first encrypts a subset of bitplanes of plain image data whereas the second encrypts parts of the JPEG2000 bitstream. We find that encrypting between 20% and 50% of the visual data is sufficient to provide high confidentiality.
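A minimal sketch of the first technique, encrypting only a subset of bitplanes with AES, is given below using AES-CTR from the Python cryptography package; the plane selection, bit packing, and key/nonce handling are illustrative simplifications rather than the authors' exact construction.

```python
import numpy as np
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_bitplanes(img, key, nonce, planes=(7, 6, 5)):
    """Encrypt only the selected (most significant) bitplanes of an 8-bit image
    with AES-CTR, leaving the remaining bitplanes in the clear."""
    out = img.copy()
    cipher = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    for p in planes:
        bits = (img >> p) & 1                                  # extract bitplane p
        packed = np.packbits(bits.ravel())
        enc = np.frombuffer(cipher.update(packed.tobytes()), dtype=np.uint8)
        newbits = np.unpackbits(enc)[:bits.size].reshape(bits.shape)
        out = (out & ~np.uint8(1 << p)) | (newbits.astype(np.uint8) << p)
    return out

# Usage sketch (in practice the 16-byte key and nonce come from a proper key exchange)
img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
protected = encrypt_bitplanes(img, key=b"\x00" * 16, nonce=b"\x01" * 16)
```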
NASA Technical Reports Server (NTRS)
2002-01-01
Full-size images June 17, 2001 (2.0 MB JPEG) June 14, 2000 (2.1 MB JPEG) Light snowfall in the winter of 2000-01 led to a dry summer in the Pacific Northwest. The drought led to a conflict between farmers and fishing communities in the Klamath River Basin over water rights, and a series of forest fires in Washington, Oregon, and Northern California. The pair of images above, both acquired by the Enhanced Thematic Mapper Plus (ETM+) aboard the Landsat 7 satellite, show the snowpack on Mt. Shasta in June 2000 and 2001. On June 14, 2000, the snow extends to the lower slopes of the 4,317-meter (14,162-foot) volcano. At nearly the same time this year (June 17, 2001) the snow had retreated well above the tree-line. The drought in the region was categorized as moderate to severe by the National Oceanographic and Atmospheric Administration (NOAA), and the United States Geological Survey (USGS) reported that streamflow during June was only about 25 percent of the average. Above and to the left of Mt. Shasta is Lake Shastina, a reservoir which is noticeably lower in the 2001 image than the 2000 image. Images courtesy USGS EROS Data Center and the Landsat 7 Science Team
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-06
... Resident. We will not accept group or family photographs; you must include a separate photograph for each... new digital image: The image file format must be in the Joint Photographic Experts Group (JPEG) format... Web site four to six weeks before the scheduled interviews with U.S. consular officers at overseas...
Pine Island Glacier, Antarctica, MISR Multi-angle Composite
Atmospheric Science Data Center
2013-12-17
... A large iceberg has finally separated from the calving front ... next due to stereo parallax. This parallax is used in MISR processing to retrieve cloud heights over snow and ice. Additionally, a plume ...
Capacity is the Wrong Paradigm
2002-01-01
short, steganography values detection over robustness, whereas watermarking values robustness over detection.) Hiding techniques for JPEG images ...world length of the code. D: If the algorithm is known, this method is trivially detectable if we are sending images (with no encryption). If we are...implications of the work of Chaitin and Kolmogorov on algorithmic complexity [5]. We have also concentrated on screen images in this paper and have not
Study and validation of tools interoperability in JPSEC
NASA Astrophysics Data System (ADS)
Conan, V.; Sadourny, Y.; Jean-Marie, K.; Chan, C.; Wee, S.; Apostolopoulos, J.
2005-08-01
Digital imagery is important in many applications today, and its security is likely to gain further importance in the near future. The emerging international standard ISO/IEC JPEG-2000 Security (JPSEC) is designed to provide security for digital imagery, and in particular digital imagery coded with the JPEG-2000 image coding standard. One of the primary goals of a standard is to ensure interoperability between creator and consumer implementations produced by different manufacturers. The JPSEC standard, similar to the popular JPEG and MPEG families of standards, specifies only the bitstream syntax and the receiver's processing, not how the bitstream is created or the details of how it is consumed. This paper examines interoperability for the JPSEC standard and presents an example JPSEC consumption process which can provide insights into the design of JPSEC consumers. Initial interoperability tests between different groups with independently created implementations of JPSEC creators and consumers have been successful in providing the JPSEC security services of confidentiality (via encryption) and authentication (via message authentication codes, or MACs). Further interoperability work is ongoing.
Wavelet-Smoothed Interpolation of Masked Scientific Data for JPEG 2000 Compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, Christopher M.
2012-08-13
How should we manage scientific data with 'holes'? Some applications, like JPEG 2000, expect logically rectangular data, but some sources, like the Parallel Ocean Program (POP), generate data that isn't defined on certain subsets. We refer to grid points that lack well-defined, scientifically meaningful sample values as 'masked' samples. Wavelet-smoothing is a highly scalable interpolation scheme for regions with complex boundaries on logically rectangular grids. Computation is based on forward/inverse discrete wavelet transforms, so runtime complexity and memory scale linearly with respect to sample count. Efficient state-of-the-art minimal realizations yield small constants (O(10)) for arithmetic complexity scaling, and in-situ implementation techniques make optimal use of memory. Implementation in two dimensions using tensor product filter banks is straightforward and should generalize routinely to higher dimensions. No hand-tuning is required when the interpolation mask changes, making the method attractive for problems with time-varying masks. It is well suited for interpolating undefined samples prior to JPEG 2000 encoding. The method outperforms global mean interpolation, as judged by both SNR rate-distortion performance and low-rate artifact mitigation, for data distributions whose histograms do not take the form of sharply peaked, symmetric, unimodal probability density functions. These performance advantages can hold even for data whose distribution differs only moderately from the peaked unimodal case, as demonstrated by POP salinity data. The interpolation method is very general, is not tied to any particular class of applications, and could be used for more generic smooth interpolation.
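A generic smoothing-based fill of masked samples in the spirit of the scheme described above (but not its actual minimal-realization filter banks) could look like this with PyWavelets; the wavelet, damping factor, and iteration count are illustrative assumptions.

```python
import numpy as np
import pywt

def wavelet_fill(data, mask, wavelet="db2", levels=3, iters=50):
    """Iteratively interpolate masked samples: damp the fine-scale wavelet detail,
    then re-impose the known samples, repeating until the hole values settle.
    mask is True where the sample is undefined."""
    fill_value = np.nanmean(np.where(mask, np.nan, data))     # mean of the defined samples
    filled = np.where(mask, fill_value, data).astype(float)
    for _ in range(iters):
        coeffs = pywt.wavedec2(filled, wavelet, level=levels)
        coeffs = [coeffs[0]] + [tuple(0.5 * d for d in c) for c in coeffs[1:]]  # damp detail
        smooth = pywt.waverec2(coeffs, wavelet)[:data.shape[0], :data.shape[1]]
        filled = np.where(mask, smooth, data)                 # keep original values where defined
    return filled
```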
Rate-distortion optimized tree-structured compression algorithms for piecewise polynomial images.
Shukla, Rahul; Dragotti, Pier Luigi; Do, Minh N; Vetterli, Martin
2005-03-01
This paper presents novel coding algorithms based on tree-structured segmentation, which achieve the correct asymptotic rate-distortion (R-D) behavior for a simple class of signals, known as piecewise polynomials, by using an R-D based prune and join scheme. For the one-dimensional case, our scheme is based on binary-tree segmentation of the signal. This scheme approximates the signal segments using polynomial models and utilizes an R-D optimal bit allocation strategy among the different signal segments. The scheme further encodes similar neighbors jointly to achieve the correct exponentially decaying R-D behavior ($D(R) \sim c_0 2^{-c_1 R}$), thus improving over classic wavelet schemes. We also prove that the computational complexity of the scheme is of O(N log N). We then show the extension of this scheme to the two-dimensional case using a quadtree. This quadtree-coding scheme also achieves an exponentially decaying R-D behavior, for the polygonal image model composed of a white polygon-shaped object against a uniform black background, with low computational cost of O(N log N). Again, the key is an R-D optimized prune and join strategy. Finally, we conclude with numerical results, which show that the proposed quadtree-coding scheme outperforms JPEG2000 by about 1 dB for real images, like cameraman, at low rates of around 0.15 bpp.
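A toy version of the R-D pruning step for the two-dimensional quadtree case is sketched below; it approximates leaves by their mean (a degree-0 polynomial), uses a made-up bit budget, and omits the joining of similar neighbors, so it only illustrates the prune decision, not the full scheme.

```python
import numpy as np

def quadtree_code(block, lam=100.0, min_size=4):
    """Toy R-D pruned quadtree: keep a leaf unless splitting lowers the total
    Lagrangian cost J = D + lambda * R. Returns (cost, description)."""
    mean = block.mean()
    leaf_dist = float(((block - mean) ** 2).sum())
    leaf_cost = leaf_dist + lam * 8                       # ~8 bits to code the leaf mean
    n = block.shape[0]
    if n <= min_size:
        return leaf_cost, ("leaf", mean)
    h = n // 2
    children = [quadtree_code(block[i:i + h, j:j + h], lam, min_size)
                for i in (0, h) for j in (0, h)]
    split_cost = sum(c for c, _ in children) + lam * 1    # 1 bit for the split flag
    if split_cost < leaf_cost:
        return split_cost, ("split", [d for _, d in children])
    return leaf_cost, ("leaf", mean)                      # prune: children not worth their bits

# Example on a synthetic piecewise-constant image
img = np.zeros((64, 64))
img[16:48, 16:48] = 200
cost, tree = quadtree_code(img)
```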
NASA Astrophysics Data System (ADS)
Kim, Hie-Sik; Nam, Chul; Ha, Kwan-Yong; Ayurzana, Odgeral; Kwon, Jong-Won
2005-12-01
Embedded systems have been applied in many fields, including households and industrial sites. User interface technology with simple on-screen displays is being implemented more and more, user demands are increasing, and the high penetration rate of the Internet gives such systems ever more fields of application, so the demand for embedded systems continues to rise. An embedded system for image tracking was implemented. The system uses a fixed IP address for reliable server operation on TCP/IP networks. Using a USB camera on the embedded Linux system, real-time broadcasting of video images over the Internet was developed. The digital camera is connected to the USB host port of the embedded board. All input images from the video camera are continuously stored as compressed JPEG files in a directory on the Linux web server. Each frame of image data from the web camera is compared with the previous one to measure a displacement vector, using a block matching algorithm and an edge detection algorithm for fast processing. The displacement vector is then used for pan/tilt motor control through an RS232 serial cable. The embedded board uses the Samsung S3C2410 MPU, which is based on the ARM920T core. An embedded Linux kernel was ported to the board and the root file system was mounted. The stored images are sent to the client PC through the web browser, using the networking functions of Linux and a program developed on the TCP/IP protocol.
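As a minimal sketch of the displacement measurement, the block matching step can be written as a sum-of-absolute-differences search over a small window; the block and search sizes are illustrative, and the edge-detection speed-up mentioned in the abstract is omitted.

```python
import numpy as np

def displacement_vector(prev, curr, block=16, search=8):
    """Estimate the dominant displacement between two grayscale frames by SAD block
    matching on the centre block; the result could drive a pan/tilt controller."""
    h, w = prev.shape
    y0, x0 = (h - block) // 2, (w - block) // 2
    ref = prev[y0:y0 + block, x0:x0 + block].astype(int)
    best, best_sad = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + block > h or x + block > w:
                continue
            sad = np.abs(curr[y:y + block, x:x + block].astype(int) - ref).sum()
            if best_sad is None or sad < best_sad:
                best, best_sad = (dy, dx), sad
    return best            # (dy, dx) displacement of the tracked region
```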
A Steganographic Embedding Undetectable by JPEG Compatibility Steganalysis
2002-01-01
Steganography and steganalysis of digital images is a cat-and-mouse game. In recent work, Fridrich, Goljan and Du introduced a method...proposed embedding method. Ever since Kurak and McHugh's seminal...paper on LSB embeddings in images [10], various researchers have published work on either increasing the payload, improving the resistance to
NASA Astrophysics Data System (ADS)
Yabuta, Kenichi; Kitazawa, Hitoshi; Tanaka, Toshihisa
2006-02-01
Recently, the number of security monitoring cameras has been increasing rapidly. However, it is normally difficult to know when and where we are monitored by these cameras and how the recorded images are stored and/or used. Therefore, how to protect privacy in the recorded images is a crucial issue. In this paper, we address this problem and introduce a framework for security monitoring systems that takes privacy protection into account. We state requirements for monitoring systems in this framework and propose a possible implementation that satisfies them. To protect the privacy of recorded objects, they are made invisible by appropriate image processing techniques. Moreover, the original objects are encrypted and watermarked into the image containing the "invisible" objects, which is coded by the JPEG standard. Therefore, the image decoded by a normal JPEG viewer contains only the unrecognizable or invisible versions of the objects. We also introduce a so-called "special viewer" in order to decrypt and display the original objects. This special viewer can be used by a limited set of users when necessary, for example for crime investigation. The special viewer allows us to choose which objects are decoded and displayed. Moreover, in the proposed system, real-time processing can be performed, since no future frame is needed to generate a bitstream.
Scanning fluorescent microscopy is an alternative for quantitative fluorescent cell analysis.
Varga, Viktor Sebestyén; Bocsi, József; Sipos, Ferenc; Csendes, Gábor; Tulassay, Zsolt; Molnár, Béla
2004-07-01
Fluorescent measurements on cells are performed today with flow cytometry (FCM) and laser scanning cytometry. The scientific community dealing with quantitative cell analysis would benefit from the development of a new digital multichannel, virtual-microscopy-based scanning fluorescent microscopy technology and from its evaluation on routine standardized fluorescent beads and clinical specimens. We applied a commercial motorized fluorescent microscope system. The scanning was done at 20x (0.5 NA) magnification, on three channels (Rhodamine, FITC, Hoechst). The SFM (scanning fluorescent microscopy) software included the following features: scanning area, exposure time, and channel definition; autofocused scanning; densitometric and morphometric cellular feature determination; gating on scatterplots and frequency histograms; and preparation of galleries of the gated cells. For calibration and standardization, Immuno-Brite beads were used. With the application of shading compensation, the CV of the fluorescence of the beads decreased from 24.3% to 3.9%. Standard JPEG image compression up to a ratio of 1:150 resulted in no significant change. The change of focus influenced the CV significantly only beyond a ±5 μm error. SFM is a valuable method for the evaluation of fluorescently labeled cells. Copyright 2004 Wiley-Liss, Inc.
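Shading compensation of the kind that reduced the bead CV can be sketched as a standard flat-field correction; the calibration images (flat and dark frames) and scaling below are generic assumptions, not the authors' exact procedure.

```python
import numpy as np

def shading_correct(raw, flat, dark):
    """Classic flat-field correction: remove the dark offset and divide by the
    normalised illumination/optics profile measured on a uniform (flat) target."""
    profile = flat.astype(float) - dark
    corrected = (raw.astype(float) - dark) * profile.mean() / np.clip(profile, 1e-6, None)
    return np.clip(corrected, 0, None)
```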
Limited distortion in LSB steganography
NASA Astrophysics Data System (ADS)
Kim, Younhee; Duric, Zoran; Richards, Dana
2006-02-01
It is well known that all information hiding methods that modify the least significant bits introduce distortions into the cover objects. Those distortions have been utilized by steganalysis algorithms to detect that the objects had been modified. It has been proposed that only coefficients whose modification does not introduce large distortions should be used for embedding. In this paper we propose an efficient algorithm for information hiding in the LSBs of JPEG coefficients. Our algorithm uses parity coding to choose the coefficients whose modifications introduce minimal additional distortion. We derive the expected value of the additional distortion as a function of the message length and the probability distribution of the JPEG quantization errors of cover images. Our experiments show close agreement between the theoretical prediction and the actual additional distortion.
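A minimal sketch of parity-coded embedding follows: one message bit is carried by the LSB parity of a group of quantized coefficients, and when the parity must be flipped, the coefficient whose adjustment costs the least additional distortion is changed. The per-coefficient quantization errors (in units of the quantization step) are assumed to be supplied by the caller; the paper's exact coefficient selection and distortion model are not reproduced.

```python
import numpy as np

def embed_bit_parity(coeffs, bit, quant_errors):
    """Embed one message bit as the LSB parity of a group of quantized JPEG
    coefficients. If the parity must change, adjust the coefficient whose
    +/-1 step adds the least distortion: the one with the largest quantization
    error, moved towards its original (unquantized) value."""
    coeffs = np.array(coeffs, dtype=int)
    if (coeffs.sum() & 1) != bit:                       # group parity must encode 'bit'
        idx = int(np.argmax(np.abs(quant_errors)))      # cheapest coefficient to nudge
        coeffs[idx] += 1 if quant_errors[idx] > 0 else -1
    return coeffs
```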
VIMOS - a Cosmology Machine for the VLT
NASA Astrophysics Data System (ADS)
2002-03-01
Successful Test Observations With Powerful New Instrument at Paranal [1] Summary One of the most fundamental tasks of modern astrophysics is the study of the evolution of the Universe . This is a daunting undertaking that requires extensive observations of large samples of objects in order to produce reasonably detailed maps of the distribution of galaxies in the Universe and to perform statistical analysis. Much effort is now being put into mapping the relatively nearby space and thereby to learn how the Universe looks today . But to study its evolution, we must compare this with how it looked when it still was young . This is possible, because astronomers can "look back in time" by studying remote objects - the larger their distance, the longer the light we now observe has been underway to us, and the longer is thus the corresponding "look-back time". This may sound easy, but it is not. Very distant objects are very dim and can only be observed with large telescopes. Looking at one object at a time would make such a study extremely time-consuming and, in practical terms, impossible. To do it anyhow, we need the largest possible telescope with a highly specialised, exceedingly sensitive instrument that is able to observe a very large number of (faint) objects in the remote universe simultaneously . The VLT VIsible Multi-Object Spectrograph (VIMOS) is such an instrument. It can obtain many hundreds of spectra of individual galaxies in the shortest possible time; in fact, in one special observing mode, up to 6400 spectra of the galaxies in a remote cluster during a single exposure, augmenting the data gathering power of the telescope by the same proportion. This marvellous science machine has just been installed at the 8.2-m MELIPAL telescope, the third unit of the Very Large Telescope (VLT) at the ESO Paranal Observatory. A main task will be to carry out 3-dimensional mapping of the distant Universe from which we can learn its large-scale structure . "First light" was achieved on February 26, 2002, and a first series of test observations has successfully demonstrated the huge potential of this amazing facility. Much work on VIMOS is still ahead during the coming months in order to put into full operation and fine-tune the most efficient "galaxy cruncher" in the world. VIMOS is the outcome of a fruitful collaboration between ESO and several research institutes in France and Italy, under the responsibility of the Laboratoire d'Astrophysique de Marseille (CNRS, France). The other partners in the "VIRMOS Consortium" are the Laboratoire d'Astrophysique de Toulouse, Observatoire Midi-Pyrénées, and Observatoire de Haute-Provence in France, and Istituto di Radioastronomia (Bologna), Istituto di Fisica Cosmica e Tecnologie Relative (Milano), Osservatorio Astronomico di Bologna, Osservatorio Astronomico di Brera (Milano) and Osservatorio Astronomico di Capodimonte (Naples) in Italy. PR Photo 09a/02 : VIMOS image of the Antennae Galaxies (centre). PR Photo 09b/02 : First VIMOS Multi-Object Spectrum (full field) PR Photo 09c/02 : The VIMOS instrument on VLT MELIPAL PR Photo 09d/02 : The VIMOS team at "First Light". 
PR Photo 09e/02 : "First Light" image of NGC 5364 PR Photo 09f/02 : Image of the Crab Nebula PR Photo 09g/02 : Image of spiral galaxy NGC 2613 PR Photo 09h/02 : Image of spiral galaxy Messier 100 PR Photo 09i/02 : Image of cluster of galaxies ACO 3341 PR Photo 09j/02 : Image of cluster of galaxies MS 1008.1-1224 PR Photo 09k/02 : Mask design for MOS exposure PR Photo 09l/02 : First VIMOS Multi-Object Spectrum (detail) PR Photo 09m/02 : Integrated Field Spectroscopy of central area of the "Antennae Galaxies" PR Photo 09n/02 : Integrated Field Spectroscopy of central area of the "Antennae Galaxies" (detail) Science with VIMOS ESO PR Photo 09a/02 ESO PR Photo 09a/02 [Preview - JPEG: 400 x 469 pix - 152k] [Normal - JPEG: 800 x 938 pix - 408k] ESO PR Photo 09b/02 ESO PR Photo 09b/02 [Preview - JPEG: 400 x 511 pix - 304k] [Normal - JPEG: 800 x 1022 pix - 728k] Caption : PR Photo 09a/02 : One of the first images from the new VIMOS facility, obtained right after the moment of "first light" on Ferbruary 26, 2002. It shows the famous "Antennae Galaxies" (NGC 4038/39), the result of a recent collision between two galaxies. As an immediate outcome of this dramatic event, stars are born within massive complexes that appear blue in this composite photo, based on exposures through green, orange and red optical filtres. PR Photo 09b/02 : Some of the first spectra of distant galaxies obtained with VIMOS in Multi-Object-Spectroscopy (MOS) mode. More than 220 galaxies were observed simultaneously, an unprecedented efficiency for such a "deep" exposure, reaching so far out in space. These spectra allow to obtain the redshift, a measure of distance, as well as to assess the physical status of the gas and stars in each of these galaxies. A part of this photo is enlarged as PR Photo 09l/02. Technical information about these photos is available below. Other "First Light" images from VIMOS are shown in the photo gallery below. The next in the long series of front-line instruments to be installed on the ESO Very Large Telescope (VLT), VIMOS (and its complementary, infrared-sensitive counterpart NIRMOS, now in the design stage) will allow mapping of the distribution of galaxies, clusters, and quasars during a time interval spanning more than 90% of the age of the universe. It will let us look back in time to a moment only ~1.5 billion years after the Big Bang (corresponding to a redshift of about 5). Like archaeologists, astronomers can then dig deep into those early ages when the first building blocks of galaxies were still in the process of formation. They will be able to determine when most of the star formation occurred in the universe and how it evolved with time. They will analyse how the galaxies cluster in space, and how this distribution varies with time. Such observations will put important constraints on evolution models, in particular on the average density of matter in the Universe. Mapping the distant universe requires to determine the distances of the enormous numbers of remote galaxies seen in deep pictures of the sky, adding depth - the third, indispensible dimension - to the photo. VIMOS offers this capability, and very efficiently. Multi-object spectroscopy is a technique by which many objects are observed simultaneously. VIMOS can observe the spectra of about 1000 galaxies in one exposure, from which redshifts, hence distances, can be measured [2]. The possibility to observe two galaxies at once would be equivalent to having a telescope twice the size of a VLT Unit Telescope. 
VIMOS thus effectively "increases" the size of the VLT hundreds of times. From these spectra, the stellar and gaseous content and internal velocities of galaxies can be infered, forming the base for detailed physical studies. At present the distances of only a few thousand galaxies and quasars have been measured in the distant universe. VIMOS aims at observing 100 times more, over one hundred thousand of those remote objects. This will form a solid base for unprecedented and detailed statistical studies of the population of galaxies and quasars in the very early universe. The international VIRMOS Consortium VIMOS is one of two major astronomical instruments to be delivered by the VIRMOS Consortium of French and Italian institutes under a contract signed in the summer of 1997 between the European Southern Observatory (ESO) and the French Centre National de la Recherche Scientifique (CNRS). The participating institutes are: in France: * Laboratoire d'Astrophysique de Marseille (LAM), Observatoire Marseille-Provence (project responsible) * Laboratoire d'Astrophysique de Toulouse, Observatoire Midi-Pyrénées * Observatoire de Haute-Provence (OHP) in Italy: * Istituto di Radioastronomia (IRA-CNR) (Bologna) * Istituto di Fisica Cosmica e Tecnologie Relative (IFCTR) (Milano) * Osservatorio Astronomico di Capodimonte (OAC) (Naples) * Osservatorio Astronomico di Bologna (OABo) * Osservatorio Astronomico di Brera (OABr) (Milano) VIMOS at the VLT: a unique and powerful combination ESO PR Photo 09c/02 ESO PR Photo 09c/02 [Preview - JPEG: 501 x 400 pix - 312k] [Normal - JPEG: 1002 x 800 pix - 840k] Caption : PR Photo 09c/02 shows the new VIMOS instrument on one of the Nasmyth platforms of the 8.2-m VLT MELIPAL telescope at Paranal. VIMOS is installed on the Nasmyth "Focus B" platform of the 8.2-m VLT MELIPAL telescope, cf. PR Photo 09c/02 . It may be compared to four multi-mode instruments of the FORS-type (cf. ESO PR 14/98 ), joined in one stiff structure. The construction of VIMOS has involved the production of large and complex optical elements and their integration in more than 30 remotely controlled, finely moving functions in the instrument. In the configuration employed for the "first light", VIMOS made use of two of its four channels. The two others will be put into operation in the next commissioning period during the coming months. However, VIMOS is already now the most efficient multi-object spectrograph in the world , with an equivalent (accumulated) slit length of up to 70 arcmin on the sky. VIMOS has a field-of-view as large as half of the full moon (14 x 16 arcmin 2 for the four quadrants), the largest sky field to be imaged so far by the VLT. It has excellent sensitivity in the blue region of the spectrum (about 60% more efficient than any other similar instruments in the ultraviolet band), and it is also very sensitive in all other visible spectral regions, all the way to the red limit. But the absolutely unique feature of VIMOS is its capability to take large numbers of spectra simultaneously , leading to exceedingly efficient use of the observing time. Up to about 1000 objects can be observed in a single exposure in multi-slit mode. And no less than 6400 spectra can be recorded with the Integral Field Unit , in which a closely packed fibre optics bundle can simultaneously observe a continuous sky area measuring no less than 56 x 56 arcsec 2. A dedicated machine, the Mask Manufacturing Unit (MMU) , cuts the slits for the entrance apertures of the spectrograph. 
The laser is capable of cutting 200 slits in less than 15 minutes. This facility was put into operation at Paranal by the VIRMOS Consortium already in August 2000 and has since been extensively used for observations with the FORS2 instrument; more details are available in ESO PR 19/99. Fast start-up of VIMOS at Paranal ESO PR Photo 09d/02 ESO PR Photo 09d/02 [Preview - JPEG: 473 x 400 pix - 280k] [Normal - JPEG: 946 x 1209 pix - 728k] ESO PR Photo 09e/02 ESO PR Photo 09e/02 [Preview - JPEG: 400 x 438 pix - 176k] [Normal - JPEG: 800 x 876 pix - 664k] Caption : PR Photo 09d/02 : The VIRMOS team in the MELIPAL control room, moments after "First Light" on February 26, 2002. From left to right: Oreste Caputi, Marco Scodeggio, Giovanni Sciarretta , Olivier Le Fevre, Sylvie Brau-Nogue, Christian Lucuix, Bianca Garilli, Markus Kissler-Patig (in front), Xavier Reyes, Michel Saisse, Luc Arnold and Guido Mancini . PR Photo 09e/02 : The spiral galaxy NGC 5364 was the first object to be observed by VIMOS. This false-colour near-infrared, raw "First Light" photo shows the extensive spiral arms. Technical information about this photo is available below. VIMOS was shipped from Observatoire de Haute-Provence (France) at the end of 2001, and reassembled at Paranal during a first period in January 2002. From mid-February, the instrument was made ready for installation on the VLT MELIPAL telescope; this happened on February 24, 2002. VIMOS saw "First Light" just two days later, on February 26, 2000, cf. PR Photo 09e/02 . During the same night, a number of excellent images were obtained of various objects, demonstrating the fine capabilities of the instrument in the "direct imaging"-mode. The first spectra were successfully taken during the night of March 2 - 3, 2002 . The slit masks that were used on this occasion were prepared with dedicated software that also optimizes the object selection, cf. PR Photo 09k/02 , and were then cut with the laser machine. From the first try on, the masks have been well aligned on the sky objects. The first observations with large numbers of spectra were obtained shortly thereafter. First accomplishments Images of nearby galaxies, clusters of galaxies, and distant galaxy fields were among the first to be obtained, using the VIMOS imaging mode and demonstrating the excellent efficiency of the instrument, various examples are shown below. The first observations of multi-spectra were performed in a selected sky field in which many faint galaxies are present; it is known as the "VIRMOS-VLT Deep Survey Field at 1000+02". Thanks to the excellent sensitivity of VIMOS, the spectra of galaxies as faint as (red) magnitude R = 23 (i.e. over 6 million times fainter than what can be perceived with the unaided eye) are visible on exposures lasting only 15 minutes. Some of the first observations with the Integral Field Unit were made of the core of the famous Antennae Galaxies (NGC 4038/39) . They will form the basis for a detailed map of the strong emission produced by the current, dramatic collision of the two galaxies. First Images and Spectra from VIMOS - a Gallery The following photos are from a collection of the first images and spectra obtained with VIMOS . See also PR Photos 09a/02 , 09b/02 and 09e/02 , reproduced above. Technical information about all of them is available below. 
ESO PR Photo 09f/02 ESO PR Photo 09f/02 [Preview - JPEG: 400 x 469 pix - 224k] [Normal - JPEG: 800 x 937 pix - 544k] [HiRes - JPEG: 2001 x 2343 pix - 3.6M] Caption : PR Photo 09f/02 : The Crab Nebula (Messier 1) , as observed by VIMOS. This well-known object is the remnant of a stellar explosion in the year 1054. ESO PR Photo 09g/02 ESO PR Photo 09g/02 [Preview - JPEG: 478 x 400 pix - 184k] [Normal - JPEG: 956 x 1209 pix - 416k] [HiRes - JPEG: 1801 x 1507 pix - 1.4M] Caption : PR Photo 09g/02 : VIMOS photo of NGC 2613 , a spiral galaxy that ressembles our own Milky Way. ESO PR Photo 09h/02 ESO PR Photo 09h/02 [Preview - JPEG: 400 x 469 pix - 152k] [Normal - JPEG: 800 x 938 pix - 440k] [HiRes - JPEG: 1800 x 2100 pix - 2.0M] Caption : PR Photo 09h/02 : Messier 100 is one of the largest and brightest spiral galaxies in the sky. ESO PR Photo 09i/02 ESO PR Photo 09i/02 [Preview - JPEG: 400 x 405 pix - 144k] [Normal - JPEG: 800 x 810 pix - 312k] Caption : PR Photo 09i/02 : The cluster of galaxies ACO 3341 is located at a distance of about 300 million light-years (redshift z = 0.037), i.e., comparatively nearby in cosmological terms. It contains a large number of galaxies of different size and brightness that are bound together by gravity. ESO PR Photo 09j/02 ESO PR Photo 09j/02 [Preview - JPEG: 447 x 400 pix - 200k] [Normal - JPEG: 893 x 800 pix - 472k] [HiRes - JPEG: 1562 x 1399 pix - 1.1M] Caption : PR Photo 09j/02 : The distant cluster of galaxies MS 1008.1-1224 is some 3 billion light-years distant (redshift z = 0.301). The galaxies in this cluster - that we observe as they were 3 billion years ago - are different from galaxies in our neighborhood; their stellar populations, on the average, are younger. ESO PR Photo 09k/02 ESO PR Photo 09k/02 [Preview - JPEG: 400 x 455 pix - 280k] [Normal - JPEG: 800 x 909 pix - 696k] Caption : PR Photo 09k/02 : Design of a Mask for Multi-Object Spectroscopy (MOS) observations with VIMOS. The mask serves to block, as far as possible, unwanted background light from the "night sky" (radiation from atoms and molecules in the Earth's upper atmosphere). During the set-up process for multi-object observations, the VIMOS software optimizes the position of the individual slits in the mask (one for each object for which a spectrum will be obtained) before these are cut. The photo shows an example of this fitting process, with the slit contours superposed on a short pre-exposure of the sky field to be observed. ESO PR Photo 09l/02 ESO PR Photo 09l/02 [Preview - JPEG: 470 x 400 pix - 200k] [Normal - JPEG: 939 x 800 pix - 464k] Caption : PR Photo 09l/02 : First Multi-Object Spectroscopy (MOS) observations with VIMOS; enlargement of a small part of the field shown in PR Photo 09b/02. The light from each galaxy passes through the dedicated slit in the mask (see PR Photo 09k/02 ) and produces a spectrum on the detector. Each vertical rectangle contains the spectrum of one galaxy that is located several billion light-years away. The horizontal lines are the strong emission from the "night sky" (radiation from atoms and molecules in the Earth's upper atmosphere), while the vertical traces are the spectral signatures of the galaxies. The full field contains the spectra of over 220 galaxies that were observed simultaneously, illustrating the great efficiency of this technique. Later, about 1000 spectra will be obtained in one exposure. 
ESO PR Photo 09m/02 ESO PR Photo 09m/02 [Preview - JPEG: 470 x 400 pix - 264k] [Normal - JPEG: 939 x 800 pix - 720k] Caption : PR Photo 09m/02 : was obtained with the Integral Field Spectroscopy mode of VIMOS. In one single exposure, more than 3000 spectra were taken of the central area of the Antennae Galaxies ( PR Photo 09a/02 ). ESO PR Photo 09n/02 ESO PR Photo 09n/02 [Preview - JPEG: 532 x 400 pix - 320k] [Normal - JPEG: 1063 x 800 pix - 864k] Caption : PR Photo 09n/02 : An enlargement of a small area in PR Photo 09m/02. This observation allows mapping of the distribution of elements like hydrogen (H) and sulphur (S II), for which the signatures are clearly identified in these spectra. The wavelength increases towards the top (arrow). Notes [1]: This is a joint Press Release of ESO , Centre National de la Recherche Scientifique (CNRS) in France, and Consiglio Nazionale delle Ricerche (CNR) and Istituto Nazionale di Astrofisica (INAF) in Italy. [2]: In astronomy, the redshift denotes the fraction by which the lines in the spectrum of an object are shifted towards longer wavelengths. The observed redshift of a distant galaxy gives a direct estimate of the apparent recession velocity as caused by the universal expansion. Since the expansion rate increases with distance, the velocity is itself a function (the Hubble relation) of the distance to the object. Technical information about the photos PR Photo 09a/01 : Composite VRI image of NGC 4038/39, obtained on 26 February 2002, in a bright sky (full moon). Individual exposures of 60 sec each; image quality 0.6 arcsec FWHM; the field measures 3.5 x 3.5 arcmin 2. North is up and East is left. PR Photo 09b/02 : MOS-spectra obtained with two quadrants totalling 221 slits + 6 reference objects (stars placed in square holes to ensure a correct alignment). Exposure time 15 min; LR(red) grism. This is the raw (unprocessed) image of the spectra. PR Photo 09e/02 : A 60 sec i exposure of NGC 5364 on February 26, 2002; image quality 0.6 arcsec FWHM; full moon; 3.5 x 3.5 arcmin 2 ; North is up and East is left. PR Photo 09f/02 : Composite VRI image of Messier 1, obtained on March 4, 2002. The individual exposures lasted 180 sec; image quality 0.7 arcsec FWHM; field 7 x 7 arcmin 2 ; North is up and East is left. PR Photo 09g/02 : Composite VRI image of NGC 2613, obtained on February 28, 2002. The individual exposures lasted 180 sec; image quality 0.7 arcsec FWHM; field 7 x 7 arcmin 2 ; North is up and East is left. PR Photo 09h/02 : Composite VRI image of Messier 100, obtained on March 3, 2002. The individual exposures lasted 180 sec, image quality 0.7 arcsec FWHM; field 7 x 7 arcmin 2 ; North is up and East is left. PR Photo 09i/02 : R-band image of galaxy cluster ACO 3341, obtained on March 4, 2002. Exposure 300 sec, image quality 0.5 arcsec FWHM;. field 7 x 7 arcmin 2 ; North is up and East is left. PR Photo 09j/02 : Composite VRI image of the distant cluster of galaxies MS 1008.1-1224. The individual exposures lasted 300 sec; image quality 0.8 arcsec FWHM; field 5 x 3 arcmin 2 ; North is to the right and East is up. PR Photo 09k/02 : Mask design made with the VMMPS tool, overlaying a pre-image. The selected objects are seen at the centre of the yellow squares, where a 1 arcsec slit is cut along the spatial X-axis. The rectangles in white represent the dispersion in wavelength of the spectra along the Y-axis. Masks are cut with the Mask Manufacturing Unit (MMU) built by the Virmos Consortium. 
PR Photo 09l/02 : Enlargement of a small area of PR Photo 09b/02. PR Photo 09m/02 : Spectra of the central area of NGC 4038/39, obtained with the Integral Field Unit on February 26, 2002. The exposure lasted 5 min and was made with the low resolution red grating. PR Photo 09n/02 : Zoom-in on a small area of PR Photo 09m/02. The strong emission lines of hydrogen (H-alpha) and ionized sulphur (S II) are seen.
NASA Astrophysics Data System (ADS)
2001-04-01
A Window towards the Distant Universe Summary The Osservatorio Astronomico Capodimonte Deep Field (OACDF) is a multi-colour imaging survey project that is opening a new window towards the distant universe. It is conducted with the ESO Wide Field Imager (WFI) , a 67-million pixel advanced camera attached to the MPG/ESO 2.2-m telescope at the La Silla Observatory (Chile). As a pilot project at the Osservatorio Astronomico di Capodimonte (OAC) [1], the OACDF aims at providing a large photometric database for deep extragalactic studies, with important by-products for galactic and planetary research. Moreover, it also serves to gather experience in the proper and efficient handling of very large data sets, preparing for the arrival of the VLT Survey Telescope (VST) with the 1 x 1 degree 2 OmegaCam facility. PR Photo 15a/01 : Colour composite of the OACDF2 field . PR Photo 15b/01 : Interacting galaxies in the OACDF2 field. PR Photo 15c/01 : Spiral galaxy and nebulous object in the OACDF2 field. PR Photo 15d/01 : A galaxy cluster in the OACDF2 field. PR Photo 15e/01 : Another galaxy cluster in the OACDF2 field. PR Photo 15f/01 : An elliptical galaxy in the OACDF2 field. The Capodimonte Deep Field ESO PR Photo 15a/01 ESO PR Photo 15a/01 [Preview - JPEG: 400 x 426 pix - 73k] [Normal - JPEG: 800 x 851 pix - 736k] [Hi-Res - JPEG: 3000 x 3190 pix - 7.3M] Caption : This three-colour image of about 1/4 of the Capodimonte Deep Field (OACDF) was obtained with the Wide-Field Imager (WFI) on the MPG/ESO 2.2-m telescope at the la Silla Observatory. It covers "OACDF Subfield no. 2 (OACDF2)" with an area of about 35 x 32 arcmin 2 (about the size of the full moon), and it is one of the "deepest" wide-field images ever obtained. Technical information about this photo is available below. With the comparatively few large telescopes available in the world, it is not possible to study the Universe to its outmost limits in all directions. Instead, astronomers try to obtain the most detailed information possible in selected viewing directions, assuming that what they find there is representative for the Universe as a whole. This is the philosophy behind the so-called "deep-field" projects that subject small areas of the sky to intensive observations with different telescopes and methods. The astronomers determine the properties of the objects seen, as well as their distances and are then able to obtain a map of the space within the corresponding cone-of-view (the "pencil beam"). Recent, successful examples of this technique are the "Hubble Deep Field" (cf. ESO PR Photo 26/98 ) and the "Chandra Deep Field" ( ESO PR 05/01 ). In this context, the Capodimonte Deep Field (OACDF) is a pilot research project, now underway at the Osservatorio Astronomico di Capodimonte (OAC) in Napoli (Italy). It is a multi-colour imaging survey performed with the Wide Field Imager (WFI) , a 67-million pixel (8k x 8k) digital camera that is installed at the 2.2-m MPG/ESO Telescope at ESO's La Silla Observatory in Chile. The scientific goal of the OACDF is to provide an important database for subsequent extragalactic, galactic and planetary studies. It will allow the astronomers at OAC - who are involved in the VLT Survey Telescope (VST) project - to gain insight into the processing (and use) of the large data flow from a camera similar to, but four times smaller than the OmegaCam wide-field camera that will be installed at the VST. 
The field selection for the OACDF was based on the following criteria:
* There must be no stars brighter than about 9th magnitude in the field, in order to avoid saturation of the CCD detector and effects from straylight in the telescope and camera. No Solar System planets should be near the field during the observations;
* It must be located far from the Milky Way plane (at high galactic latitude) in order to reduce the number of galactic stars seen in this direction;
* It must be located in the southern sky in order to optimize observing conditions (in particular, the altitude of the field above the horizon), as seen from the La Silla and Paranal sites;
* There should be little interstellar material in this direction that may obscure the view towards the distant Universe;
* Observations in this field should have been made with the Hubble Space Telescope (HST) that may serve for comparison and calibration purposes.
Based on these criteria, the astronomers selected a field measuring about 1 x 1 deg 2 in the southern constellation of Corvus (The Raven). This is now known as the Capodimonte Deep Field (OACDF). The above photo ( PR Photo 15a/01 ) covers one-quarter of the full field (Subfield No. 2 - OACDF2) - some of the objects seen in this area are shown below in more detail. More than 35,000 objects have been found in this area; the faintest are nearly 100 million times fainter than what can be perceived with the unaided eye in the dark sky. Selected objects in the Capodimonte Deep Field ESO PR Photo 15b/01 ESO PR Photo 15b/01 [Preview - JPEG: 400 x 435 pix - 60k] [Normal - JPEG: 800 x 870 pix - 738k] [Hi-Res - JPEG: 3000 x 3261 pix - 5.1M] Caption : Enlargement of the interacting galaxies that are seen in the upper left corner of the OACDF2 field shown in PR Photo 15a/01 . The enlargement covers 1250 x 1130 WFI pixels (1 pixel = 0.24 arcsec), or about 5.0 x 4.5 arcmin 2 in the sky. The lower spiral is itself an interacting double. ESO PR Photo 15c/01 ESO PR Photo 15c/01 [Preview - JPEG: 557 x 400 pix - 93k] [Normal - JPEG: 1113 x 800 pix - 937k] [Hi-Res - JPEG: 3000 x 2156 pix - 4.0M] Caption : Enlargement of a spiral galaxy and a nebulous object in this area. The field shown covers 1250 x 750 pixels, or about 5 x 3 arcmin 2 in the sky. Note the very red objects next to the two bright stars in the lower-right corner. The colours of these objects are consistent with those of spheroidal galaxies at intermediate distances (redshifts). ESO PR Photo 15d/01 ESO PR Photo 15d/01 [Preview - JPEG: 400 x 530 pix - 68k] [Normal - JPEG: 800 x 1060 pix - 870k] [Hi-Res - JPEG: 2768 x 3668 pix - 6.2M] Caption : A further enlargement of a galaxy cluster of which most members are located in the north-east quadrant (upper left) and have a reddish colour. The nebulous object to the upper left is a dwarf galaxy of spheroidal shape. The red object, located near the centre of the field and resembling a double star, is very likely a gravitational lens [2]. Some of the very red, point-like objects in the field may be distant quasars, very-low-mass stars or, possibly, relatively nearby brown dwarf stars. The field shown covers 1380 x 1630 pixels, or 5.5 x 6.5 arcmin 2. ESO PR Photo 15e/01 ESO PR Photo 15e/01 [Preview - JPEG: 400 x 418 pix - 56k] [Normal - JPEG: 800 x 835 pix - 700k] [Hi-Res - JPEG: 3000 x 3131 pix - 5.0M] Caption : Enlargement of a moderately distant galaxy cluster in the south-east quadrant (lower left) of the OACDF2 field.
The field measures 1380 x 1260 pixels, or about 5.5 x 5.0 arcmin 2 in the sky. ESO PR Photo 15f/01 ESO PR Photo 15f/01 [Preview - JPEG: 449 x 400 pix - 68k] [Normal - JPEG: 897 x 800 pix - 799k] [Hi-Res - JPEG: 3000 x 2675 pix - 5.6M] Caption : Enlargement of the elliptical galaxy that is located to the west (right) in the OACDF2 field. The numerous tiny objects surrounding the galaxy may be globular clusters. The fuzzy object on the right edge of the field may be a dwarf spheroidal galaxy. The size of the field is about 6 x 5 arcmin 2. Technical Information about the OACDF Survey The observations for the OACDF project were performed in three different ESO periods (18-22 April 1999, 7-12 March 2000 and 26-30 April 2000). Some 100 Gbyte of raw data were collected during each of the three observing runs. The first OACDF run was done just after the commissioning of the ESO-WFI. The observational strategy was to perform a 1 x 1 deg 2 short-exposure ("shallow") survey and then a 0.5 x 1 deg 2 "deep" survey. The shallow survey was performed in the B, V, R and I broad-band filters. Four adjacent 30 x 30 arcmin 2 fields, together covering a 1 x 1 deg 2 field in the sky, were observed for the shallow survey. Two of these fields were chosen for the 0.5 x 1 deg 2 deep survey; OACDF2 shown above is one of these. The deep survey was performed in the B, V, R broad-bands and in other intermediate-band filters. The OACDF data are fully reduced and the catalogue extraction has started. A two-processor (500 Mhz each) DS20 machine with 100 Gbyte of hard disk, specifically acquired at the OAC for WFI data reduction, was used. The detailed guidelines of the data reduction, as well as the catalogue extraction, are reported in a research paper that will appear in the European research journal Astronomy & Astrophysics . Notes [1]: The team members are: Massimo Capaccioli, Juan M. Alcala', Roberto Silvotti, Magda Arnaboldi, Vincenzo Ripepi, Emanuella Puddu, Massimo Dall'Ora, Giuseppe Longo and Roberto Scaramella . [2]: This is a preliminary result by Juan Alcala', Massimo Capaccioli, Giuseppe Longo, Mikhail Sazhin, Roberto Silvotti and Vincenzo Testa , based on recent observations with the Telescopio Nazionale Galileo (TNG) which show that the spectra of the two objects are identical. Technical information about the photos PR Photo 15a/01 has been obtained by the combination of the B, V, and R stacked images of the OACDF2 field. The total exposure times in the three bands are 2 hours in B and V (12 ditherings of 10 min each were stacked to produce the B and V images) and 3 hours in R (13 ditherings of 15 min each). The mosaic images in the B and V bands were aligned relative to the R-band image and adjusted to a logarithmic intensity scale prior to the combination. The typical seeing was of the order of 1 arcsec in each of the three bands. Preliminary estimates of the three-sigma limiting magnitudes in B, V and R indicate 25.5, 25.0 and 25.0, respectively. More than 35,000 objects are detected above the three-sigma level. PR Photos 15b-f/01 display selected areas of the field shown in PR Photo 15a/01 at the original WFI scale, hereby also demonstrating the enormous amount of information contained in these wide-field images. In all photos, North is up and East is left.
Content-based video retrieval by example video clip
NASA Astrophysics Data System (ADS)
Dimitrova, Nevenka; Abdel-Mottaleb, Mohamed
1997-01-01
This paper presents a novel approach for video retrieval from a large archive of MPEG or Motion JPEG compressed video clips. We introduce a retrieval algorithm that takes a video clip as a query and searches the database for clips with similar contents. Video clips are characterized by a sequence of representative frame signatures, which are constructed from DC coefficients and motion information (`DC+M' signatures). The similarity between two video clips is determined by using their respective signatures. This method facilitates retrieval of clips for the purpose of video editing, broadcast news retrieval, or copyright violation detection.
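The "DC+M" idea can be sketched without a full MPEG decoder: the DC coefficient of an 8x8 DCT block is proportional to the block mean, so block-averaged frames approximate the DC images from which such signatures are built. The sketch below is a simplification under that assumption (Python with NumPy, omitting the motion component and any indexing structure); it builds per-clip signatures and compares them with a simple distance.

```python
# A minimal sketch (not the authors' exact algorithm) of a "DC-like" clip
# signature: block-averaging a frame approximates the DC image that an
# MPEG/Motion-JPEG decoder could expose without full decompression.
import numpy as np

def dc_signature(frame: np.ndarray, block: int = 8) -> np.ndarray:
    """Downsample a grayscale frame to its 8x8-block means (the 'DC image')."""
    h, w = frame.shape
    h, w = h - h % block, w - w % block          # crop to a multiple of the block size
    blocks = frame[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.mean(axis=(1, 3))

def clip_signature(frames):
    """Signature of a clip: one DC image per representative frame."""
    return np.stack([dc_signature(f) for f in frames])

def clip_distance(sig_a, sig_b):
    """Mean absolute difference between two signatures (smaller = more similar)."""
    n = min(len(sig_a), len(sig_b))
    return float(np.mean(np.abs(sig_a[:n] - sig_b[:n])))

# Usage with synthetic data standing in for decoded representative frames.
rng = np.random.default_rng(0)
clip = rng.integers(0, 256, size=(10, 240, 320)).astype(float)
query = clip + rng.normal(0, 2, clip.shape)      # slightly perturbed copy of the clip
print(clip_distance(clip_signature(query), clip_signature(clip)))
```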
NASA Astrophysics Data System (ADS)
Bell, J. F.; Godber, A.; McNair, S.; Caplinger, M. A.; Maki, J. N.; Lemmon, M. T.; Van Beek, J.; Malin, M. C.; Wellington, D.; Kinch, K. M.; Madsen, M. B.; Hardgrove, C.; Ravine, M. A.; Jensen, E.; Harker, D.; Anderson, R. B.; Herkenhoff, K. E.; Morris, R. V.; Cisneros, E.; Deen, R. G.
2017-07-01
The NASA Curiosity rover Mast Camera (Mastcam) system is a pair of fixed-focal length, multispectral, color CCD imagers mounted 2 m above the surface on the rover's remote sensing mast, along with associated electronics and an onboard calibration target. The left Mastcam (M-34) has a 34 mm focal length, an instantaneous field of view (IFOV) of 0.22 mrad, and a FOV of 20° × 15° over the full 1648 × 1200 pixel span of its Kodak KAI-2020 CCD. The right Mastcam (M-100) has a 100 mm focal length, an IFOV of 0.074 mrad, and a FOV of 6.8° × 5.1° using the same detector. The cameras are separated by 24.2 cm on the mast, allowing stereo images to be obtained at the resolution of the M-34 camera. Each camera has an eight-position filter wheel, enabling it to take Bayer pattern red, green, and blue (RGB) "true color" images, multispectral images in nine additional bands spanning 400-1100 nm, and images of the Sun in two colors through neutral density-coated filters. An associated Digital Electronics Assembly provides command and data interfaces to the rover, 8 Gb of image storage per camera, 11 bit to 8 bit companding, JPEG compression, and acquisition of high-definition video. Here we describe the preflight and in-flight calibration of Mastcam images, the ways that they are being archived in the NASA Planetary Data System, and the ways that calibration refinements are being developed as the investigation progresses on Mars. We also provide some examples of data sets and analyses that help to validate the accuracy and precision of the calibration.
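As an aside on the companding step mentioned above: mapping 11-bit to 8-bit data is commonly done with a square-root-shaped lookup table, since photon shot noise grows roughly as the square root of the signal. The sketch below illustrates that generic approach only; the actual Mastcam lookup tables are mission-specific and are not reproduced here.

```python
# Illustrative 11-bit -> 8-bit companding via a square-root lookup table,
# plus an approximate inverse for ground decompanding. This is a generic
# sketch, not the flight LUT.
import numpy as np

DN_IN_MAX, DN_OUT_MAX = 2047, 255                 # 11-bit input, 8-bit output

# Forward LUT: one 8-bit code for each possible 11-bit value.
lut = np.round(DN_OUT_MAX * np.sqrt(np.arange(DN_IN_MAX + 1) / DN_IN_MAX)).astype(np.uint8)

# Inverse LUT used on the ground after JPEG decompression.
inv = np.round(DN_IN_MAX * (np.arange(DN_OUT_MAX + 1) / DN_OUT_MAX) ** 2).astype(np.uint16)

raw = np.random.default_rng(1).integers(0, DN_IN_MAX + 1, size=(1200, 1648), dtype=np.uint16)
companded = lut[raw]                              # onboard step before JPEG compression
restored = inv[companded]                         # ground step after decompression
print(np.abs(restored.astype(int) - raw.astype(int)).max())  # worst-case quantization error
```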
NASA Astrophysics Data System (ADS)
McEwen, A. S.; Eliason, E.; Gulick, V. C.; Spinoza, Y.; Beyer, R. A.; HiRISE Team
2010-12-01
The High Resolution Imaging Science Experiment (HiRISE) camera, orbiting Mars since 2006 on the Mars Reconnaissance Orbiter (MRO), has returned more than 17,000 large images with scales as small as 25 cm/pixel. From its beginning, the HiRISE team has followed “The People’s Camera” concept, with rapid release of useful images, explanations, and tools, and facilitating public image suggestions. The camera includes 14 CCDs, each read out into 2 data channels, so compressed images are returned from MRO as 28 long (up to 120,000 line) images that are 1024 pixels wide (or binned 2x2 to 512 pixels, etc.). These raw data are very difficult to use, especially for the public. At the HiRISE operations center the raw data are calibrated and processed into a series of B&W and color products, including browse images and JPEG2000-compressed images and tools to make it easy for everyone to explore these enormous images (see http://hirise.lpl.arizona.edu/). Automated pipelines do all of this processing, so we can keep up with the high data rate; images go directly to the format of the Planetary Data System (PDS). After students visually check each image product for errors, they are fully released just 1 month after receipt; captioned images (written by science team members) may be released sooner. These processed HiRISE images have been incorporated into tools such as Google Mars and World Wide Telescope for even greater accessibility. 51 Digital Terrain Models derived from HiRISE stereo pairs have been released, resulting in some spectacular flyover movies produced by members of the public and viewed up to 50,000 times according to YouTube. Public targeting began in 2007 via NASA Quest (http://marsoweb.nas.nasa.gov/HiRISE/quest/) and more than 200 images have been acquired, mostly by students and educators. At the beginning of 2010 we released HiWish (http://www.uahirise.org/hiwish/), opening HiRISE targeting to anyone in the world with Internet access, and already more than 100 public suggestions have been acquired. HiRISE has proven very popular with the public and science community. For example, a Google search on “HiRISE Mars” returns 626,000 results. We've participated in well over two dozen presentations, specifically talking to middle and high-schoolers about HiRISE. Our images and captions have been featured in high-quality print magazines such as National Geographic, Ciel et Espace, and Sky and Telescope.
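One practical consequence of the JPEG2000 products is that browse-scale views and small regions can be decoded without reading the full-resolution image. The sketch below assumes the optional glymur package (a Python wrapper around OpenJPEG) and a hypothetical HiRISE JP2 file name; the window coordinates are purely illustrative.

```python
# A minimal sketch of multi-resolution access to a large JPEG2000 product.
# Assumes glymur is installed and the file exists; file name is hypothetical.
import glymur

jp2 = glymur.Jp2k("ESP_000000_0000_RED.JP2")      # hypothetical HiRISE JP2 product
print(jp2.shape)                                  # full-resolution dimensions

# A power-of-two slice step maps onto a JPEG2000 resolution level, so this
# decodes only the data needed for a 1/16-scale overview.
overview = jp2[::16, ::16]

# A small full-resolution window (region-of-interest decode) for close inspection.
cutout = jp2[20000:21024, 4096:5120]
print(overview.shape, cutout.shape)
```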
Multi-Class Classification for Identifying JPEG Steganography Embedding Methods
2008-09-01
Another Look at an Enigmatic New World
NASA Astrophysics Data System (ADS)
2005-02-01
VLT NACO Performs Outstanding Observations of Titan's Atmosphere and Surface On January 14, 2005, the ESA Huygens probe arrived at Saturn's largest satellite, Titan. After a faultless descent through the dense atmosphere, it touched down on the icy surface of this strange world from where it continued to transmit precious data back to the Earth. Several of the world's large ground-based telescopes were also active during this exciting event, observing Titan before and near the Huygens encounter, within the framework of a dedicated campaign coordinated by the members of the Huygens Project Scientist Team. Indeed, large astronomical telescopes with state-of-the art adaptive optics systems allow scientists to image Titan's disc in quite some detail. Moreover, ground-based observations are not restricted to the limited period of the fly-by of Cassini and landing of Huygens. They hence complement ideally the data gathered by this NASA/ESA mission, further optimising the overall scientific return. A group of astronomers [1] observed Titan with ESO's Very Large Telescope (VLT) at the Paranal Observatory (Chile) during the nights from 14 to 16 January, by means of the adaptive optics NAOS/CONICA instrument mounted on the 8.2-m Yepun telescope [2]. The observations were carried out in several modes, resulting in a series of fine images and detailed spectra of this mysterious moon. They complement earlier VLT observations of Titan, cf. ESO Press Photos 08/04 and ESO Press Release 09/04. The highest contrast images ESO PR Photo 04a/05 ESO PR Photo 04a/05 Titan's surface (NACO/VLT) [Preview - JPEG: 400 x 712 pix - 64k] [Normal - JPEG: 800 x 1424 pix - 524k] ESO PR Photo 04b/05 ESO PR Photo 04b/05 Map of Titan's Surface (NACO/VLT) [Preview - JPEG: 400 x 651 pix - 41k] [Normal - JPEG: 800 x 1301 pix - 432k] Caption: ESO PR Photo 04a/05 shows Titan's trailing hemisphere [3] with the Huygens landing site marked as an "X". The left image was taken with NACO and a narrow-band filter centred at 2 microns. On the right is the NACO/SDI image of the same location showing Titan's surface through the 1.6 micron methane window. A spherical projection with coordinates on Titan is overplotted. ESO PR Photo 04b/05 is a map of Titan taken with NACO at 1.28 micron (a methane window allowing it to probe down to the surface). On the leading side of Titan, the bright equatorial feature ("Xanadu") is dominating. On the trailing side, the landing site of the Huygens probe is indicated. ESO PR Photo 04c/05 ESO PR Photo 04c/05 Titan, the Enigmatic Moon, and Huygens Landing Site (NACO-SDI/VLT and Cassini/ISS) [Preview - JPEG: 400 x 589 pix - 40k] [Normal - JPEG: 800 x 1178 pix - 290k] Caption: ESO PR Photo 04c/05 is a comparison between the NACO/SDI image and an image taken by Cassini/ISS while approaching Titan. The Cassini image shows the Huygens landing site map wrapped around Titan, rotated to the same position as the January NACO SDI observations. The yellow "X" marks the landing site of the ESA Huygens probe. The Cassini/ISS image is courtesy of NASA, JPL, Space Science Institute (see http://sci.esa.int/science-e/www/object/index.cfm?fobjectid=36222). The coloured lines delineate the regions that were imaged by Cassini at differing resolutions. The lower-resolution imaging sequences are outlined in blue. Other areas have been specifically targeted for moderate and high resolution mosaicking of surface features. 
These include the site where the European Space Agency's Huygens probe has touched down in mid-January (marked with the yellow X), part of the bright region named Xanadu (easternmost extent of the area covered), and a boundary between dark and bright regions. ESO PR Photo 04d/05 ESO PR Photo 04d/05 Evolution of the Atmosphere of Titan (NACO/VLT) [Preview - JPEG: 400 x 902 pix - 40k] [Normal - JPEG: 800 x 1804 pix - 320k] Caption: ESO PR Photo 04d/05 is an image of Titan's atmosphere at 2.12 microns as observed with NACO on the VLT at three different epochs from 2002 until now. Titan's atmosphere exhibits seasonal and meteorological changes which can clearly be seen here: the North-South asymmetry - indicative of changes in the chemical composition in one pole or the other, depending on the season - is now clearly in favour of the North pole. Indeed, the situation has reversed with respect to a few years ago when the South pole was brighter. Also visible in these images is a bright feature at the South pole, found to be presently dimming after having appeared very bright from 2000 to 2003. The differences in size are due to the variation in the distance between the Earth and Saturn and its planetary system. The new images show Titan's atmosphere and surface in various near-infrared spectral bands. The surface of Titan's trailing side is visible in images taken through narrow-band filters at wavelengths 1.28, 1.6 and 2.0 microns. They correspond to the so-called "methane windows" which allow one to peer all the way through the lower Titan atmosphere to the surface. On the other hand, Titan's atmosphere is visible through filters centred in the wings of these methane bands, e.g. at 2.12 and 2.17 microns. Eric Gendron of the Paris Observatory in France, leader of the team, is extremely pleased: "We believe that some of these images are the highest-contrast images of Titan ever taken with any ground-based or earth-orbiting telescope." The excellent images of Titan's surface show the location of the Huygens landing site in much detail. In particular, those centred at wavelength 1.6 micron and obtained with the Simultaneous Differential Imager (SDI) on NACO [4] provide the highest contrast and best views. This is firstly because the filters match the 1.6 micron methane window most accurately. Secondly, it is possible to get an even clearer view of the surface by accurately subtracting the simultaneously recorded images of the atmospheric haze, taken at wavelength 1.625 micron. The images show the great complexity of Titan's trailing side, which was earlier thought to be very dark. However, it is now obvious that bright and dark regions cover the field of these images. The best resolution achieved on the surface features is about 0.039 arcsec, corresponding to 200 km on Titan. ESO PR Photo 04c/05 illustrates the striking agreement between the NACO/SDI image taken with the VLT from the ground and the ISS/Cassini map. The images of Titan's atmosphere at 2.12 microns show a still-bright south pole with an additional atmospheric bright feature, which may be clouds or some other meteorological phenomena. The astronomers have followed it since 2002 with NACO and notice that it seems to be fading with time. At 2.17 microns, this feature is not visible and the north-south asymmetry - also known as "Titan's smile" - is clearly in favour of the north. The two filters probe different altitude levels and the images thus provide information about the extent and evolution of the north-south asymmetry.
Probing the composition of the surface ESO PR Photo 04e/05 ESO PR Photo 04e/05 Spectrum of Two Regions on Titan (NACO/VLT) [Preview - JPEG: 400 x 623 pix - 44k] [Normal - JPEG: 800 x 1246 pix - 283k] Caption: ESO PR Photo 04e/05 represents two of the many spectra obtained on January 16, 2005 with NACO and covering the 2.02 to 2.53 micron range. The blue spectrum corresponds to the brightest region on Titan's surface within the slit, while the red spectrum corresponds to the dark area around the Huygens landing site. In the methane band, the two spectra are equal, indicating a similar atmospheric content; in the methane window centred at 2.0 microns, the spectra show differences in brightness, but are in phase. This suggests that there is no real variation in the composition beyond different atmospheric mixings. ESO PR Photo 04f/05 ESO PR Photo 04f/05 Imaging Titan with a Tunable Filter (NACO Fabry-Perot/VLT) [Preview - JPEG: 400 x 718 pix - 44k] [Normal - JPEG: 800 x 1435 pix - 326k] Caption: ESO PR Photo 04f/05 presents a series of images of Titan taken around the 2.0 micron methane window probing different layers of the atmosphere and the surface. The images are currently under thorough processing and analysis so as to reveal any subtle variations in wavelength that could be indicative of the spectral response of the various surface components, thus allowing the astronomers to identify them. Because the astronomers have also obtained spectroscopic data at different wavelengths, they will be able to recover useful information on the surface composition. The Cassini/VIMS instrument explores Titan's surface in the infrared range and, being so close to this moon, it obtains spectra with a much better spatial resolution than what is possible with Earth-based telescopes. However, with NACO at the VLT, the astronomers have the advantage of observing Titan with considerably higher spectral resolution, and thus to gain more detailed spectral information about the composition, etc. The observations therefore complement each other. Once the composition of the surface at the location of the Huygens landing is known from the detailed analysis of the in-situ measurements, it should become possible to learn the nature of the surface features elsewhere on Titan by combining the Huygens results with more extended cartography from Cassini as well as from VLT observations to come. More information Results on Titan obtained with data from NACO/VLT are in press in the journal Icarus ("Maps of Titan's surface from 1 to 2.5 micron" by A. Coustenis et al.). Previous images of Titan obtained with NACO and with NACO/SDI are accessible as ESO PR Photos 08/04 and ESO PR Photos 11/04. See also these Press Releases for additional scientific references.
The Helioviewer Project: Solar Data Visualization and Exploration
NASA Astrophysics Data System (ADS)
Hughitt, V. Keith; Ireland, J.; Müller, D.; García Ortiz, J.; Dimitoglou, G.; Fleck, B.
2011-05-01
SDO has only been operating a little over a year, but in that short time it has already transmitted hundreds of terabytes of data, making it impossible for data providers to maintain a complete archive of data online. By storing an extremely efficiently compressed subset of the data, however, the Helioviewer project has been able to maintain a continuous record of high-quality SDO images starting from soon after the commissioning phase. The Helioviewer project was not designed to deal with SDO alone, however, and continues to add support for new types of data, the most recent of which are STEREO EUVI and COR1/COR2 images. In addition to adding support for new types of data, improvements have been made to both the server-side and client-side products that are part of the project. A new open-source JPEG2000 (JPIP) streaming server has been developed offering a vastly more flexible and reliable backend for the Java/OpenGL application JHelioviewer. Meanwhile the web front-end, Helioviewer.org, has also made great strides both in improving reliability, and also in adding new features such as the ability to create and share movies on YouTube. Helioviewer users are creating nearly two thousand movies a day from the over six million images that are available to them, and that number continues to grow each day. We provide an overview of recent progress with the various Helioviewer Project components and discuss plans for future development.
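For readers who want to retrieve the underlying JPEG2000 files programmatically, a request along the following lines is possible; the base URL, endpoint names, and parameters shown are assumptions based on the project's public web API and should be checked against the current Helioviewer documentation before use.

```python
# Hypothetical sketch of pulling one of the JPEG2000 (JP2) images served by
# the Helioviewer project. Endpoint names and parameters are assumptions to
# verify against the current API documentation.
import requests

API = "https://api.helioviewer.org/v2"

# Discover available data sources (instrument/measurement -> numeric sourceId).
sources = requests.get(f"{API}/getDataSources/", timeout=30).json()
print(len(sources), "top-level source groups")

# Request the JP2 image closest to a given time for one assumed sourceId.
params = {"date": "2011-05-01T00:00:00Z", "sourceId": 14}
resp = requests.get(f"{API}/getJP2Image/", params=params, timeout=60)
resp.raise_for_status()

with open("helioviewer_sample.jp2", "wb") as fh:
    fh.write(resp.content)
print(len(resp.content), "bytes of JPEG2000 data written")
```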
Lossless compression techniques for maskless lithography data
NASA Astrophysics Data System (ADS)
Dai, Vito; Zakhor, Avideh
2002-07-01
Future lithography systems must produce more dense chips with smaller feature sizes, while maintaining the throughput of one wafer per sixty seconds per layer achieved by today's optical lithography systems. To achieve this throughput with a direct-write maskless lithography system, using 25 nm pixels for 50 nm feature sizes, requires data rates of about 10 Tb/s. In a previous paper, we presented an architecture which achieves this data rate contingent on consistent 25 to 1 compression of lithography data, and on implementation of a decoder-writer chip with a real-time decompressor fabricated on the same chip as the massively parallel array of lithography writers. In this paper, we examine the compression efficiency of a spectrum of techniques suitable for lithography data, including two industry standards JBIG and JPEG-LS, a wavelet based technique SPIHT, general file compression techniques ZIP and BZIP2, our own 2D-LZ technique, and a simple list-of-rectangles representation RECT. Layouts rasterized both to black-and-white pixels, and to 32 level gray pixels are considered. Based on compression efficiency, JBIG, ZIP, 2D-LZ, and BZIP2 are found to be strong candidates for application to maskless lithography data, in many cases far exceeding the required compression ratio of 25. To demonstrate the feasibility of implementing the decoder-writer chip, we consider the design of a hardware decoder based on ZIP, the simplest of the four candidate techniques. The basic algorithm behind ZIP compression is Lempel-Ziv 1977 (LZ77), and the design parameters of LZ77 decompression are optimized to minimize circuit usage while maintaining compression efficiency.
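In the same spirit as the paper's comparison, the toy experiment below compresses a synthetic bi-level "layout" raster with DEFLATE (the LZ77-based algorithm behind ZIP) and with BZIP2, and reports the ratios against the 25:1 target. The pattern is only a stand-in for real rasterized mask data, so the numbers are illustrative of the method rather than of the paper's results.

```python
# Compare DEFLATE (zlib) and BZIP2 on a synthetic black-and-white layout raster.
import bz2
import zlib
import numpy as np

# Toy layout: repeated rectangles on a 4096 x 4096 bi-level grid,
# packed 8 pixels per byte as a writer datapath might stream it.
layout = np.zeros((4096, 4096), dtype=np.uint8)
layout[::64, :] = 1
layout[:, ::48] = 1
layout[128:160, :] ^= 1
raw = np.packbits(layout)

for name, data in (("DEFLATE/ZIP", zlib.compress(raw.tobytes(), level=9)),
                   ("BZIP2", bz2.compress(raw.tobytes(), compresslevel=9))):
    print(f"{name:12s} ratio = {len(raw) / len(data):6.1f} : 1")
```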
Sharper and Deeper Views with MACAO-VLTI
NASA Astrophysics Data System (ADS)
2003-05-01
"First Light" with Powerful Adaptive Optics System for the VLT Interferometer Summary On April 18, 2003, a team of engineers from ESO celebrated the successful accomplishment of "First Light" for the MACAO-VLTI Adaptive Optics facility on the Very Large Telescope (VLT) at the Paranal Observatory (Chile). This is the second Adaptive Optics (AO) system put into operation at this observatory, following the NACO facility ( ESO PR 25/01 ). The achievable image sharpness of a ground-based telescope is normally limited by the effect of atmospheric turbulence. However, with Adaptive Optics (AO) techniques, this major drawback can be overcome so that the telescope produces images that are as sharp as theoretically possible, i.e., as if they were taken from space. The acronym "MACAO" stands for "Multi Application Curvature Adaptive Optics" which refers to the particular way optical corrections are made which "eliminate" the blurring effect of atmospheric turbulence. The MACAO-VLTI facility was developed at ESO. It is a highly complex system of which four, one for each 8.2-m VLT Unit Telescope, will be installed below the telescopes (in the Coudé rooms). These systems correct the distortions of the light beams from the large telescopes (induced by the atmospheric turbulence) before they are directed towards the common focus at the VLT Interferometer (VLTI). The installation of the four MACAO-VLTI units of which the first one is now in place, will amount to nothing less than a revolution in VLT interferometry . An enormous gain in efficiency will result, because of the associated 100-fold gain in sensitivity of the VLTI. Put in simple words, with MACAO-VLTI it will become possible to observe celestial objects 100 times fainter than now . Soon the astronomers will be thus able to obtain interference fringes with the VLTI ( ESO PR 23/01 ) of a large number of objects hitherto out of reach with this powerful observing technique, e.g. external galaxies. The ensuing high-resolution images and spectra will open entirely new perspectives in extragalactic research and also in the studies of many faint objects in our own galaxy, the Milky Way. During the present period, the first of the four MACAO-VLTI facilties was installed, integrated and tested by means of a series of observations. For these tests, an infrared camera was specially developed which allowed a detailed evaluation of the performance. It also provided some first, spectacular views of various celestial objects, some of which are shown here. PR Photo 12a/03 : View of the first MACAO-VLTI facility at Paranal PR Photo 12b/03 : The star HIC 59206 (uncorrected image). PR Photo 12c/03 : HIC 59206 (AO corrected image) PR Photo 12e/03 : HIC 69495 (AO corrected image) PR Photo 12f/03 : 3-D plot of HIC 69495 images (without and with AO correction) PR Photo 12g/03 : 3-D plot of the artificially dimmed star HIC 74324 (without and with AO correction) PR Photo 12d/03 : The MACAO-VLTI commissioning team at "First Light" PR Photo 12h/03 : K-band image of the Galactic Center PR Photo 12i/03 : K-band image of the unstable star Eta Carinae PR Photo 12j/03 : K-band image of the peculiar star Frosty Leo MACAO - the Multi Application Curvature Adaptive Optics facility ESO PR Photo 12a/03 ESO PR Photo 12a/03 [Preview - JPEG: 408 x 400 pix - 56k [Normal - JPEG: 815 x 800 pix - 720k] Captions : PR Photo 12a/03 is a front view of the first MACAO-VLTI unit, now installed at the 8.2-m VLT KUEYEN telescope. 
Adaptive Optics (AO) systems work by means of a computer-controlled deformable mirror (DM) that counteracts the image distortion induced by atmospheric turbulence. It is based on real-time optical corrections computed from image data obtained by a "wavefront sensor" (a special camera) at very high speed, many hundreds of times each second. The ESO Multi Application Curvature Adaptive Optics (MACAO) system uses a 60-element bimorph deformable mirror (DM) and a 60-element curvature wavefront sensor, with a "heartbeat" of 350 Hz (350 times per second). With this high spatial and temporal correcting power, MACAO is able to nearly restore the theoretically possible ("diffraction-limited") image quality of an 8.2-m VLT Unit Telescope in the near-infrared region of the spectrum, at a wavelength of about 2 µm. The resulting image resolution (sharpness) of the order of 60 milli-arcsec is an improvement by more than a factor of 10 as compared to standard seeing-limited observations. Without the benefit of the AO technique, such image sharpness could only be obtained if the telescope were placed above the Earth's atmosphere. The technical development of MACAO-VLTI in its present form was begun in 1999 and, with project reviews at 6 months' intervals, the project quickly reached cruising speed. The effective design is the result of a very fruitful collaboration between the AO department at ESO and European industry, which contributed the diligent fabrication of numerous high-tech components, including the bimorph DM with 60 actuators, a fast-reaction tip-tilt mount and many others. The assembly, tests and performance-tuning of this complex real-time system were undertaken by ESO-Garching staff. Installation at Paranal The first crates of the 60+ cubic-meter shipment with MACAO components arrived at the Paranal Observatory on March 12, 2003. Shortly thereafter, ESO engineers and technicians began the painstaking assembly of this complex instrument, below the VLT 8.2-m KUEYEN telescope (formerly UT2). They followed a carefully planned scheme, involving installation of the electronics, water cooling systems, mechanical and optical components. At the end, they performed the demanding optical alignment, delivering a fully assembled instrument one week before the planned first test observations. This extra week provided a very welcome and useful opportunity to perform a multitude of tests and calibrations in preparation for the actual observations. AO to the service of Interferometry The VLT Interferometer (VLTI) combines starlight captured by two or more 8.2-m VLT Unit Telescopes (later also from four movable 1.8-m Auxiliary Telescopes) and allows the image resolution to be vastly increased. The light beams from the telescopes are brought together "in phase" (coherently). Starting out at the primary mirrors, they undergo numerous reflections along their different paths over total distances of several hundred meters before they reach the interferometric Laboratory where they are combined to within a fraction of a wavelength, i.e., within nanometers! The gain by the interferometric technique is enormous - combining the light beams from two telescopes separated by 100 metres allows observation of details which could otherwise only be resolved by a single telescope with a diameter of 100 metres. Sophisticated data reduction is necessary to interpret interferometric measurements and to deduce important physical parameters of the observed objects like the diameters of stars, etc., cf. ESO PR 22/02.
The VLTI measures the degree of coherence of the combined beams as expressed by the contrast of the observed interferometric fringe pattern. The higher the degree of coherence between the individual beams, the stronger is the measured signal. By removing wavefront aberrations introduced by atmospheric turbulence, the MACAO-VLTI systems enormously increase the efficiency of combining the individual telescope beams. In the interferometric measurement process, the starlight must be injected into optical fibers which are extremely small in order to accomplish their function; only 6 µm (0.006 mm) in diameter. Without the "refocussing" action of MACAO, only a tiny fraction of the starlight captured by the telescopes can be injected into the fibers and the VLTI would not be working at the peak of efficiency for which it has been designed. MACAO-VLTI will now allow a gain of a factor 100 in the injected light flux - this will be tested in detail when two VLT Unit Telescopes, both equipped with MACAO-VLTI's, work together. However, the very good performance actually achieved with the first system makes the engineers very confident that a gain of this order will indeed be reached. This ultimate test will be performed as soon as the second MACAO-VLTI system has been installed later this year. MACAO-VLTI First Light After one month of installation work and following tests by means of an artificial light source installed in the Nasmyth focus of KUEYEN, MACAO-VLTI had "First Light" on April 18 when it received "real" light from several astronomical objects. During the preceding performance tests to measure the image improvement (sharpness, light energy concentration) in near-infrared spectral bands at 1.2, 1.6 and 2.2 µm, MACAO-VLTI was checked by means of a custom-made Infrared Test Camera developed for this purpose by ESO. This intermediate test was required to ensure the proper functioning of MACAO before it is used to feed a corrected beam of light into the VLTI. After only a few nights of testing and optimizing of the various functions and operational parameters, MACAO-VLTI was ready to be used for astronomical observations. The images below were taken under average seeing conditions and illustrate the improvement of the image quality when using MACAO-VLTI. MACAO-VLTI - First Images Here are some of the first images obtained with the test camera at the first MACAO-VLTI system, now installed at the 8.2-m VLT KUEYEN telescope. ESO PR Photo 12b/03 ESO PR Photo 12b/03 [Preview - JPEG: 400 x 468 pix - 25k] [Normal - JPEG: 800 x 938 pix - 291k] ESO PR Photo 12c/03 ESO PR Photo 12c/03 [Preview - JPEG: 400 x 469 pix - 14k] [Normal - JPEG: 800 x 938 pix - 135k] Captions : PR Photos 12b-c/03 show the first image, obtained by the first MACAO-VLTI system at the 8.2-m VLT KUEYEN telescope in the infrared K-band (wavelength 2.2 µm). It displays images of the star HIC 59206 (visual magnitude 10) obtained before (left; Photo 12b/03 ) and after (right; Photo 12c/03 ) the adaptive optics system was switched on. The binary is separated by 0.120 arcsec and the image was taken under medium seeing conditions (0.75 arcsec). The dramatic improvement in image quality is obvious. ESO PR Photo 12d/03 ESO PR Photo 12d/03 [Preview - JPEG: 400 x 427 pix - 18k] [Normal - JPEG: 800 x 854 pix - 205k] ESO PR Photo 12e/03 ESO PR Photo 12e/03 [Preview - JPEG: 483 x 400 pix - 17k] [Normal - JPEG: 966 x 800 pix - 169k] Captions : PR Photo 12d/03 shows one of the best images obtained with MACAO-VLTI (logarithmic intensity scale).
The seeing was 0.8 arcsec at the time of the observations and three diffraction rings can clearly be seen around the star HIC 69495 of visual magnitude 9.9. This pattern is only well visible when the image resolution is very close to the theoretical limit. The exposure of the point-like source lasted 100 seconds through a narrow K-band filter. It has a Strehl ratio (a measure of light concentration) of about 55% and a Full-Width-Half-Maximum (FWHM) of 0.060 arcsec. The 3-D plot ( PR Photo 12e/03 ) demonstrates the tremendous gain in peak intensity of the AO image (right) as compared to the "open-loop" image (the "noise" to the left) obtained without the benefit of AO. ESO PR Photo 12f/03 ESO PR Photo 12f/03 [Preview - JPEG: 494 x 400 pix - 20k] [Normal - JPEG: 988 x 800 pix - 204k] Caption : PR Photo 12f/03 demonstrates the correction performance of MACAO-VLTI when using a faint guide star. The observed star HIC 74324 (stellar spectral type G0 and visual magnitude 9.4) was artificially dimmed by a neutral optical filter to visual magnitude 16.5. The observation was carried out in 0.55 arcsec seeing and with a rather short atmospheric correlation time of 3 milliseconds at visible wavelengths. The Strehl ratio in the 25-second K-band exposure is about 10% and the FWHM is 0.14 arcseconds. The uncorrected image is shown to the left for comparison. The improvement is again impressive, even for a star as faint as this, indicating that guide stars of this magnitude are feasible during future observations. ESO PR Photo 12g/03 ESO PR Photo 12g/03 [Preview - JPEG: 528 x 400 pix - 48k] [Normal - JPEG: 1055 x 800 pix - 542k] Captions : PR Photo 12g/03 shows some of the MACAO-VLTI commissioning team members in the VLT Control Room at the moment of "First Light" during the night between April 18-19, 2003. Sitting: Markus Kasper, Enrico Fedrigo - Standing: Robin Arsenault, Sebastien Tordo, Christophe Dupuy, Toomas Erm, Jason Spyromilio, Rob Donaldson (all from ESO). PR Photos 12b-c/03 show the first image in the infrared K-band (wavelength 2.2 µm) of a star (visual magnitude 10) obtained without and with image corrections by means of adaptive optics. PR Photo 12d/03 displays one of the best images obtained with MACAO-VLTI during the early tests. It shows a Strehl ratio (measure of light concentration) that fulfills the specifications according to which MACAO-VLTI was built. This enormous improvement when using AO techniques is clearly demonstrated in PR Photo 12e/03 , with the uncorrected image profile (left) hardly visible when compared to the corrected profile (right). PR Photo 12f/03 demonstrates the correction capabilities of MACAO-VLTI when using a faint guide star. Tests using different spectral types showed that the limiting visual magnitude varies between 16 for early-type B-stars and about 18 for late-type M-stars. Astronomical Objects seen at the Diffraction Limit The following examples of MACAO-VLTI observations of two well-known astronomical objects were obtained in order to provisionally evaluate the research opportunities now opening with MACAO-VLTI. They may well be compared with space-based images. The Galactic Center ESO PR Photo 12h/03 ESO PR Photo 12h/03 [Preview - JPEG: 693 x 400 pix - 46k] [Normal - JPEG: 1386 x 800 pix - 403k] Caption : PR Photo 12h/03 shows a 90-second K-band exposure of the central 6 x 13 arcsec 2 around the Galactic Center obtained by MACAO-VLTI under average atmospheric conditions (0.8 arcsec seeing).
Although the 14.6 magnitude guide star is located roughly 20 arcsec from the field center - this leading to isoplanatic degradation of image sharpness - the present image is nearly diffraction limited and has a point-source FWHM of about 0.115 arcsec. The center of our own galaxy is located in the Sagittarius constellation at a distance of approximately 30,000 light-years. PR Photo 12h/03 shows a short-exposure infrared view of this region, obtained by MACAO-VLTI during the early test phase. Recent AO observations using the NACO facility at the VLT provide compelling evidence that a supermassive black hole with 2.6 million solar masses is located at the very center, cf. ESO PR 17/02 . This result, based on astrometric observations of a star orbiting the black hole and approaching it to within a distance of only 17 light-hours, would not have been possible without images of diffraction limited resolution. Eta Carinae ESO PR Photo 12i/03 ESO PR Photo 12i/03 [Preview - JPEG: 400 x 482 pix - 25k [Normal - JPEG: 800 x 963 pix - 313k] Caption : PR Photo 12i/03 displays an infrared narrow K-band image of the massive star Eta Carinae . The image quality is difficult to estimate because the central star saturated the detector, but the clear structure of the diffraction spikes and the size of the smallest features visible in the photo indicate a near-diffraction limited performance. The field measures about 6.5 x 6.5 arcsec 2. Eta Carinae is one of the heaviest stars known, with a mass that probably exceeds 100 solar masses. It is about 4 million times brighter than the Sun, making it one of the most luminous stars known. Such a massive star has a comparatively short lifetime of about 1 million years only and - measured in the cosmic timescale- Eta Carinae must have formed quite recently. This star is highly unstable and prone to violent outbursts. They are caused by the very high radiation pressure at the star's upper layers, which blows significant portions of the matter at the "surface" into space during violent eruptions that may last several years. The last of these outbursts occurred between 1835 and 1855 and peaked in 1843. Despite its comparaticely large distance - some 7,500 to 10,000 light-years - Eta Carinae briefly became the second brightest star in the sky at that time (with an apparent magnitude -1), only surpassed by Sirius. Frosty Leo ESO PR Photo 12j/03 ESO PR Photo 12j/03 [Preview - JPEG: 411 x 400 pix - 22k [Normal - JPEG: 821 x 800 pix - 344k] Caption : PR Photo 12j/03 shows a 5 x 5 arcsec 2 K-band image of the peculiar star known as "Frosty Leo" obtained in 0.7 arcsec seeing. Although the object is comparatively bright (visual magnitude 11), it is a difficult AO target because of its extension of about 3 arcsec at visible wavelengths. The corrected image quality is about FWHM 0.1 arcsec. Frosty Leo is a magnitude 11 (post-AGB) star surrounded by an envelope of gas, dust, and large amounts of ice (hence the name). The associated nebula is of "butterfly" shape (bipolar morphology) and it is one of the best known examples of the brief transitional phase between two late evolutionary stages, asymptotic giant branch (AGB) and the subsequent planetary nebulae (PNe). For a three-solar-mass object like this one, this phase is believed to last only a few thousand years, the wink of an eye in the life of the star. Hence, objects like this one are very rare and Frosty Leo is one of the nearest and brightest among them.
DICOM image integration into an electronic medical record using thin viewing clients
NASA Astrophysics Data System (ADS)
Stewart, Brent K.; Langer, Steven G.; Taira, Ricky K.
1998-07-01
Purpose -- To integrate radiological DICOM images into our currently existing web-browsable Electronic Medical Record (MINDscape). Over the last five years the University of Washington has created a clinical data repository combining in a distributed relational database information from multiple departmental databases (MIND). A text-based view of this data, called the Mini Medical Record (MMR), has been available for three years. MINDscape, unlike the text-based MMR, provides a platform-independent, web browser view of the MIND dataset that can easily be linked to other information resources on the network. We have now added the integration of radiological images into MINDscape through a DICOM webserver. Methods/New Work -- We have integrated a commercial webserver that acts as a DICOM Storage Class Provider to our computed radiography (CR), computed tomography (CT), digital fluoroscopy (DF), magnetic resonance (MR) and ultrasound (US) scanning devices. These images can be accessed through CGI queries or by linking the image server database using ODBC or SQL gateways. This allows the use of dynamic HTML links to the images on the DICOM webserver from MINDscape, so that the radiology reports already resident in the MIND repository can be married with the associated images through the unique examination accession number generated by our Radiology Information System (RIS). The web browser plug-in used provides a wavelet decompression engine (up to 16 bits per pixel) and performs the following image manipulation functions: window/level, flip, invert, sort, rotate, zoom, cine-loop and save as JPEG. Results -- Radiological DICOM image sets (CR, CT, MR and US) are displayed with associated exam reports for referring physicians and clinicians anywhere within the widespread academic medical center on PCs, Macs, X-terminals and Unix computers. This system is also being used for home teleradiology applications. Conclusion -- Radiological DICOM images can be made available medical-center-wide to physicians quickly using low-cost and ubiquitous, thin-client browsing technology and wavelet compression.
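A minimal modern sketch of the kind of server-side step described above - reading a DICOM image, applying a window/level mapping, and writing an 8-bit JPEG for a thin client - might look as follows; it assumes pydicom and Pillow, a single-frame grayscale image, and a hypothetical file name, and derives the window from the pixel data rather than from the header.

```python
# Read a DICOM file, window/level it, and save an 8-bit JPEG for web viewing.
# Simplified sketch; a real deployment would honour WindowCenter/WindowWidth
# and the modality LUT stored in the DICOM header.
import numpy as np
import pydicom
from PIL import Image

ds = pydicom.dcmread("example_cr.dcm")            # hypothetical DICOM file
pixels = ds.pixel_array.astype(np.float32)

# Derive a display window from the data itself (1st-99th percentile).
lo, hi = np.percentile(pixels, 1), np.percentile(pixels, 99)
windowed = np.clip((pixels - lo) / max(hi - lo, 1.0), 0.0, 1.0)

Image.fromarray((windowed * 255).astype(np.uint8), mode="L").save("example_cr.jpg", quality=90)
```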
Geller, G.N.; Fosnight, E.A.; Chaudhuri, Sambhudas
2008-01-01
Access to satellite images has been largely limited to communities with specialized tools and expertise, even though images could also benefit other communities. This situation has resulted in underutilization of the data. TerraLook, which consists of collections of georeferenced JPEG images and an open source toolkit to use them, makes satellite images available to those lacking experience with remote sensing. Users can find, roam, and zoom images, create and display vector overlays, adjust and annotate images so they can be used as a communication vehicle, compare images taken at different times, and perform other activities useful for natural resource management, sustainable development, education, and other activities. © 2007 IEEE.
A two-factor error model for quantitative steganalysis
NASA Astrophysics Data System (ADS)
Böhme, Rainer; Ker, Andrew D.
2006-02-01
Quantitative steganalysis refers to the exercise not only of detecting the presence of hidden stego messages in carrier objects, but also of estimating the secret message length. This problem is well studied, with many detectors proposed but only a sparse analysis of errors in the estimators. A deep understanding of the error model, however, is a fundamental requirement for the assessment and comparison of different detection methods. This paper presents a rationale for a two-factor model for sources of error in quantitative steganalysis, and shows evidence from a dedicated large-scale nested experimental set-up with a total of more than 200 million attacks. Apart from general findings about the distribution functions found in both classes of errors, their respective weight is determined, and implications for statistical hypothesis tests in benchmarking scenarios or regression analyses are demonstrated. The results are based on a rigorous comparison of five different detection methods under many different external conditions, such as size of the carrier, previous JPEG compression, and colour channel selection. We include analyses demonstrating the effects of local variance and cover saturation on the different sources of error, as well as presenting the case for a relative bias model for between-image error.
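The two-factor idea can be illustrated with a toy simulation (not the authors' experiment): each cover image contributes a persistent between-image bias, each individual attack adds within-image noise, and repeated attacks per image let a simple one-way variance decomposition recover the two components.

```python
# Toy illustration of a two-factor error model for quantitative steganalysis
# estimates: between-image bias + within-image noise, recovered from repeats.
import numpy as np

rng = np.random.default_rng(42)
n_images, n_attacks = 500, 40
sigma_between, sigma_within = 0.03, 0.01          # assumed true component sizes

true_rate = 0.20                                  # embedded message length (fraction of capacity)
image_bias = rng.normal(0.0, sigma_between, n_images)
estimates = true_rate + image_bias[:, None] + rng.normal(0.0, sigma_within, (n_images, n_attacks))

# One-way variance decomposition: within from per-image spread,
# between from the spread of per-image means minus the within contribution.
within_var = estimates.var(axis=1, ddof=1).mean()
between_var = estimates.mean(axis=1).var(ddof=1) - within_var / n_attacks
print(f"within-image  std ~ {np.sqrt(within_var):.4f}")
print(f"between-image std ~ {np.sqrt(max(between_var, 0.0)):.4f}")
```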
Digital pathology: DICOM-conform draft, testbed, and first results.
Zwönitzer, Ralf; Kalinski, Thomas; Hofmann, Harald; Roessner, Albert; Bernarding, Johannes
2007-09-01
Hospital information systems are state of the art nowadays. Therefore, Digital Pathology, also labelled as Virtual Microscopy, has gained increased attention. Triggered by radiology, standardized information models and workflows have been defined worldwide based on DICOM. However, DICOM-conform integration of Digital Pathology into existing clinical information systems imposes new problems requiring specific solutions, concerning both the huge amount of data and the special structure of the data to be managed, transferred, and stored. We implemented a testbed to realize and evaluate the workflow of digitized slides from acquisition to archiving. The experiences led to the draft of a DICOM-conform information model that accounted for extensions, definitions, and technical requirements necessary to integrate digital pathology in a hospital-wide DICOM environment. Slides were digitized, compressed, and could be viewed remotely. Real-time transfer of the huge amount of data was optimized using streaming techniques. Compared to a recent discussion in the DICOM Working Group for Digital Pathology (WG26), our experiences led to a preference for JPEG2000/JPIP-based streaming of the whole slide image. The results showed that digital pathology is feasible, but strong efforts by users and vendors are still necessary to integrate Digital Pathology into existing information systems.
A secure and robust information hiding technique for covert communication
NASA Astrophysics Data System (ADS)
Parah, S. A.; Sheikh, J. A.; Hafiz, A. M.; Bhat, G. M.
2015-08-01
The unprecedented advancement of multimedia and growth of the internet has made it possible to reproduce and distribute digital media easier and faster. This has given birth to information security issues, especially when the information pertains to national security, e-banking transactions, etc. The disguised form of encrypted data makes an adversary suspicious and increases the chance of attack. Information hiding overcomes this inherent problem of cryptographic systems and is emerging as an effective means of securing sensitive data being transmitted over insecure channels. In this paper, a secure and robust information hiding technique referred to as Intermediate Significant Bit Plane Embedding (ISBPE) is presented. The data to be embedded is scrambled and embedding is carried out using the concept of Pseudorandom Address Vector (PAV) and Complementary Address Vector (CAV) to enhance the security of the embedded data. The proposed ISBPE technique is fully immune to Least Significant Bit (LSB) removal/replacement attack. Experimental investigations reveal that the proposed technique is more robust to various image processing attacks like JPEG compression, Additive White Gaussian Noise (AWGN), low pass filtering, etc. compared to conventional LSB techniques. The various advantages offered by ISBPE technique make it a good candidate for covert communication.
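The sketch below shows the basic mechanics of intermediate-significant-bit-plane embedding: hiding a bit string in bit plane 3 of a grayscale image, visiting pixels in a keyed pseudorandom order. It is only loosely analogous to the proposed PAV/CAV construction and omits the scrambling, complementary-address and robustness machinery described in the paper.

```python
# Simplified intermediate-significant-bit-plane embedding/extraction.
import numpy as np

PLANE = 3                                          # intermediate bit plane (0 = LSB)
MASK = np.uint8(0xFF ^ (1 << PLANE))               # clears the chosen plane

def keyed_order(shape, key):
    """Keyed pseudorandom visiting order over all pixel positions."""
    return np.random.default_rng(key).permutation(shape[0] * shape[1])

def embed(cover, bits, key):
    flat = cover.flatten()
    idx = keyed_order(cover.shape, key)[: len(bits)]
    flat[idx] = (flat[idx] & MASK) | (np.asarray(bits, dtype=np.uint8) << PLANE)
    return flat.reshape(cover.shape)

def extract(stego, n_bits, key):
    idx = keyed_order(stego.shape, key)[:n_bits]
    return (stego.flatten()[idx] >> PLANE) & 1

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
payload = rng.integers(0, 2, size=1024, dtype=np.uint8)
stego = embed(cover, payload, key=2015)
assert np.array_equal(extract(stego, payload.size, key=2015), payload)
print("payload recovered; max pixel change =", int(np.abs(stego.astype(int) - cover).max()))
```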
NASA Astrophysics Data System (ADS)
1999-11-01
First Images from FORS2 at VLT KUEYEN on Paranal The first major astronomical instrument to be installed at the ESO Very Large Telescope (VLT) was FORS1 (FOcal Reducer and Spectrograph) in September 1998. Immediately after being attached to the Cassegrain focus of the first 8.2-m Unit Telescope, ANTU, it produced a series of spectacular images, cf. ESO PR 14/98. Many important observations have since been made with this outstanding facility. Now FORS2, its powerful twin, has been installed at the second VLT Unit Telescope, KUEYEN. It is the fourth major instrument at the VLT after FORS1, ISAAC and UVES. The FORS2 Commissioning Team that is busy installing and testing this large and complex instrument reports that "First Light" was successfully achieved already on October 29, 1999, only two days after FORS2 was first mounted at the Cassegrain focus. Since then, various observation modes have been carefully tested, including normal and high-resolution imaging, echelle and multi-object spectroscopy, as well as fast photometry with millisecond time resolution. A number of fine images were obtained during this work, some of which are made available with the present Press Release. The FORS instruments ESO PR Photo 40a/99 ESO PR Photo 40a/99 [Preview - JPEG: 400 x 345 pix - 203k] [Normal - JPEG: 800 x 689 pix - 563kb] [Full-Res - JPEG: 1280 x 1103 pix - 666kb] Caption to PR Photo 40a/99: This digital photo shows the twin instruments, FORS2 at KUEYEN (in the foreground) and FORS1 at ANTU, seen in the background through the open ventilation doors in the two telescope enclosures. Although they look alike, the two instruments have specific functions, as described in the text. FORS1 and FORS2 are the products of one of the most thorough and advanced technological studies ever made of a ground-based astronomical instrument. They have been specifically designed to investigate the faintest and most remote objects in the universe. They are "multi-mode instruments" that may be used in several different observation modes. FORS2 is largely identical to FORS1, but there are a number of important differences. For example, it contains a Mask Exchange Unit (MXU) for laser-cut star-plates [1] that may be inserted at the focus, allowing a large number of spectra of different objects, in practice up to about 70, to be taken simultaneously. Highly sophisticated software assigns slits to individual objects in an optimal way, ensuring a great degree of observing efficiency. Instead of the polarimetry optics found in FORS1, FORS2 has new grisms that allow the use of higher spectral resolutions. The FORS project was carried out under ESO contract by a consortium of three German astronomical institutes, the Heidelberg State Observatory and the University Observatories of Göttingen and Munich. The participating institutes have invested a total of about 180 man-years of work in this unique programme. The photos below demonstrate some of the impressive possibilities with this new instrument. They are based on observations with the FORS2 standard resolution collimator (field size 6.8 x 6.8 arcmin = 2048 x 2048 pixels; 1 pixel = 0.20 arcsec). In addition, observations of the Crab pulsar demonstrate a new observing mode, high-speed photometry.
Protostar HH-34 in Orion ESO PR Photo 40b/99 ESO PR Photo 40b/99 [Preview - JPEG: 400 x 444 pix - 220kb] [Normal - JPEG: 800 x 887 pix - 806kb] [Full-Res - JPEG: 2000 x 2217 pix - 3.6Mb] The Area around HH-34 in Orion ESO PR Photo 40c/99 ESO PR Photo 40c/99 [Preview - JPEG: 400 x 494 pix - 262kb] [Full-Res - JPEG: 802 x 991 pix - 760 kb] The HH-34 Superjet in Orion (centre) PR Photo 40b/99 shows a three-colour composite of the young object Herbig-Haro 34 (HH-34), now in the protostar stage of evolution. It is based on CCD frames obtained with the FORS2 instrument in imaging mode, on November 2 and 6, 1999. This object has a remarkable, very complicated appearance that includes two opposite jets that ram into the surrounding interstellar matter. This structure is produced by a machine-gun-like blast of "bullets" of dense gas ejected from the star at high velocities (approaching 250 km/sec). This seems to indicate that the star experiences episodic "outbursts" when large chunks of material fall onto it from a surrounding disk. HH-34 is located at a distance of approx. 1,500 light-years, near the famous Orion Nebula, one of the most productive star birth regions. Note also the enigmatic "waterfall" to the upper left, a feature that is still unexplained. PR Photo 40c/99 is an enlargement of a smaller area around the central object. Technical information: Photo 40b/99 is based on a composite of three images taken through three different filters: B (wavelength 429 nm; Full-Width-Half-Maximum (FWHM) 88 nm; exposure time 10 min; here rendered as blue), H-alpha (centered on the hydrogen emission line at wavelength 656 nm; FWHM 6 nm; 30 min; green) and S II (centered on the emission lines of ionized sulphur at wavelength 673 nm; FWHM 6 nm; 30 min; red) during a period of 0.8 arcsec seeing. The field shown measures 6.8 x 6.8 arcmin and the images were recorded in frames of 2048 x 2048 pixels, each measuring 0.2 arcsec. The Full Resolution version shows the original pixels. North is up; East is left. N 70 Nebula in the Large Magellanic Cloud ESO PR Photo 40d/99 ESO PR Photo 40d/99 [Preview - JPEG: 400 x 444 pix - 360kb] [Normal - JPEG: 800 x 887 pix - 1.0Mb] [Full-Res - JPEG: 1997 x 2213 pix - 3.4Mb] The N 70 Nebula in the LMC ESO PR Photo 40e/99 ESO PR Photo 40e/99 [Preview - JPEG: 400 x 485 pix - 346kb] [Full-Res - JPEG: 986 x 1196 pix - 1.2Mb] The N70 Nebula in the LMC (detail) PR Photo 40d/99 shows a three-colour composite of the N 70 nebula. It is a "Super Bubble" in the Large Magellanic Cloud (LMC), a satellite galaxy to the Milky Way system, located in the southern sky at a distance of about 160,000 light-years. This photo is based on CCD frames obtained with the FORS2 instrument in imaging mode in the morning of November 5, 1999. N 70 is a luminous bubble of interstellar gas, measuring about 300 light-years in diameter. It was created by winds from hot, massive stars and supernova explosions and the interior is filled with tenuous, hot expanding gas. An object like N70 provides astronomers with an excellent opportunity to explore the connection between the lifecycles of stars and the evolution of galaxies. Very massive stars profoundly affect their environment. They stir and mix the interstellar clouds of gas and dust, and they leave their mark in the compositions and locations of future generations of stars and star systems. PR Photo 40e/99 is an enlargement of a smaller area of this nebula.
Technical information: Photo 40d/99 is based on a composite of three images taken through three different filters: B (429 nm; FWHM 88 nm; 3 min; here rendered as blue), V (554 nm; FWHM 111 nm; 3 min; green) and H-alpha (656 nm; FWHM 6 nm; 3 min; red) during a period of 1.0 arcsec seeing. The field shown measures 6.8 x 6.8 arcmin and the images were recorded in frames of 2048 x 2048 pixels, each measuring 0.2 arcsec. The Full Resolution version shows the original pixels. North is up; East is left. The Crab Nebula in Taurus ESO PR Photo 40f/99 ESO PR Photo 40f/99 [Preview - JPEG: 400 x 446 pix - 262k] [Normal - JPEG: 800 x 892 pix - 839 kb] [Full-Res - JPEG: 2036 x 2269 pix - 3.6Mb] The Crab Nebula in Taurus ESO PR Photo 40g/99 ESO PR Photo 40g/99 [Preview - JPEG: 400 x 444 pix - 215kb] [Full-Res - JPEG: 817 x 907 pix - 485 kb] The Crab Nebula in Taurus (detail) PR Photo 40f/99 shows a three-colour composite of the well-known Crab Nebula (also known as "Messier 1"), as observed with the FORS2 instrument in imaging mode in the morning of November 10, 1999. It is the remnant of a supernova explosion at a distance of about 6,000 light-years, observed almost 1000 years ago, in the year 1054. It contains a neutron star near its center that spins 30 times per second around its axis (see below). PR Photo 40g/99 is an enlargement of a smaller area. More information on the Crab Nebula and its pulsar is available on the web, e.g. at a dedicated website for Messier objects. In this picture, the green light is predominantly produced by hydrogen emission from material ejected by the star that exploded. The blue light is predominantly emitted by very high-energy ("relativistic") electrons that spiral in a large-scale magnetic field (so-called synchrotron emission). It is believed that these electrons are continuously accelerated and ejected by the rapidly spinning neutron star at the centre of the nebula, which is the remnant core of the exploded star. This pulsar has been identified with the lower/right of the two close stars near the geometric center of the nebula, immediately left of the small arc-like feature, best seen in PR Photo 40g/99. Technical information: Photo 40f/99 is based on a composite of three images taken through three different optical filters: B (429 nm; FWHM 88 nm; 5 min; here rendered as blue), R (657 nm; FWHM 150 nm; 1 min; green) and S II (673 nm; FWHM 6 nm; 5 min; red) during periods of 0.65 arcsec (R, S II) and 0.80 arcsec (B) seeing, respectively. The field shown measures 6.8 x 6.8 arcmin and the images were recorded in frames of 2048 x 2048 pixels, each measuring 0.2 arcsec. The Full Resolution version shows the original pixels. North is up; East is left. The High Time Resolution mode (HIT) of FORS2 ESO PR Photo 40h/99 ESO PR Photo 40h/99 [Preview - JPEG: 400 x 304 pix - 90kb] [Normal - JPEG: 707 x 538 pix - 217kb] Time Sequence of the Pulsar in the Crab Nebula ESO PR Photo 40i/99 ESO PR Photo 40i/99 [Preview - JPEG: 400 x 324 pix - 42kb] [Normal - JPEG: 800 x 647 pix - 87kb] Lightcurve of the Pulsar in the Crab Nebula In combination with the large light collecting power of the VLT Unit Telescopes, the high time resolution (25 nsec = 0.000000025 sec) of the ESO-developed FIERA CCD-detector controller opens a new observing window for celestial objects that undergo light intensity variations on very short time scales.
A first implementation of this type of observing mode was tested with FORS2 during the first commissioning phase, by means of one of the most fascinating astronomical objects, the rapidly spinning neutron star in the Crab Nebula. It is also known as the Crab pulsar and is an exceedingly dense object that represents an extreme state of matter - it weighs as much as the Sun, but measures only about 30 km across. The result presented here was obtained in the so-called trailing mode, during which one of the rectangular openings of the Multi-Object Spectroscopy (MOS) assembly within FORS2 is placed in front of the lower end of the field. In this way, the entire surface of the CCD is covered, except the opening in which the object under investigation is positioned. By rotating this opening, some neighbouring objects (e.g. stars for alignment) may be observed simultaneously. As soon as the shutter is opened, the charges on the chip are progressively shifted upwards, one pixel at a time, until those first collected in the bottom row behind the opening have reached the top row. Then the entire CCD is read out and the digital data with the full image is stored in the computer. In this way, successive images (or spectra) of the object are recorded in the same frame, displaying the intensity variation with time during the exposure. For this observation, the total exposure lasted 2.5 seconds. During this time interval the image of the pulsar (and those of some neighbouring stars) was shifted 2048 times over the 2048 rows of the CCD. Each individual exposure therefore lasted exactly 1.2 msec (0.0012 sec), corresponding to a nominal time-resolution of 2.4 msec (2 pixels). Faster or slower time resolutions are possible by increasing or decreasing the shift and read-out rate [2]. In ESO PR Photo 40h/99, the continuous lines in the top and bottom half are produced by normal stars of constant brightness, while the series of dots represents the individual pulses of the Crab pulsar, one every 33 milliseconds (i.e. the neutron star rotates around its axis 30 times per second). It is also obvious that these dots are alternately brighter and fainter: they mirror the double-peaked profile of the light pulses, as shown in ESO PR Photo 40i/99. In this diagram, the time increases along the abscissa axis (1 pixel = 1.2 msec) and the momentary intensity (uncalibrated) is along the ordinate axis. One full revolution of the neutron star corresponds to the distance from one high peak to the next, and the diagram therefore covers six consecutive revolutions (about 200 milliseconds). Following thorough testing, this new observing mode will make it possible to investigate the brightness variations of this and many other objects in great detail in order to gain new and fundamental insights into the physical mechanisms that produce the radiation pulses. In addition, it is foreseen to do high time resolution spectroscopy of rapidly varying phenomena. Pushing it to the limits with an 8.2-m telescope like KUEYEN will be a real challenge to the observers that will most certainly lead to great and exciting research projects in various fields of modern astrophysics. Technical information: The frame shown in Photo 40h/99 was obtained during a total exposure time of 2.5 sec without any optical filter. During this time, the charges on the CCD were shifted over 2048 rows; each row was therefore exposed during 1.2 msec.
The bright continuous line comes from the star next to the pulsar; the orientation was such that the "observation slit" was placed over two neighbouring stars. Preliminary data reduction: 11 pixels were added across the pulsar image to increase the signal-to-noise ratio and the background light from the Crab Nebula was subtracted for the same reason. Division by a brighter star (also background-subtracted, but not shown in the image) helped to reduce the influence of the Earth's atmosphere. Notes [1] The masks are produced by the Mask Manufacturing Unit (MMU) built by the VIRMOS Consortium for the VIMOS and NIRMOS instruments that will be installed at the VLT MELIPAL and YEPUN telescopes, respectively. [2] The time resolution achieved during the present test was limited by the maximum charge transfer rate of this particular CCD chip; in the future, FORS2 may be equipped with a new chip with a rate that is up to 20 times faster. How to obtain ESO Press Information ESO Press Information is made available on the World-Wide Web (URL: http://www.eso.org../ ). ESO Press Photos may be reproduced, if credit is given to the European Southern Observatory.
Image transmission system using adaptive joint source and channel decoding
NASA Astrophysics Data System (ADS)
Liu, Weiliang; Daut, David G.
2005-03-01
In this paper, an adaptive joint source and channel decoding method is designed to accelerate the convergence of the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec, which makes it possible to provide useful source decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel decoded bits are then sent to the JPEG2000 decoder. Due to the error resilience modes, some bits are known to be either correct or in error. The positions of these bits are then fed back to the channel decoder. The log-likelihood ratios (LLR) of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. That is, for lower channel SNR, a larger factor is assigned, and vice versa. Results show that the proposed joint decoding methods can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the non-source-controlled decoding method by up to 5 dB in terms of PSNR for various reconstructed images.
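The feedback loop described above can be sketched as follows. The functional form of the weighting factor and the decoder interface are assumptions made for illustration; the abstract only states that the factor grows as the channel SNR drops.

```python
import numpy as np

def reweight_llrs(llrs, flagged_correct, flagged_error, snr_db):
    """Source-aided LLR adjustment between LDPC iterations (sketch).
    `flagged_correct` / `flagged_error` are index lists that a JPEG2000 decoder's
    error-resilience checks would return after a tentative decode (assumed
    interface). The weighting factor below is illustrative, not the authors'."""
    w = 1.0 + 0.5 * max(0.0, 3.0 - snr_db)   # larger boost for worse channels (assumption)
    out = llrs.copy()
    out[flagged_correct] *= w                # reinforce bits the source decoder trusts
    out[flagged_error] /= w                  # de-emphasize bits known to be in error
    return out

# Illustrative use with made-up values
llrs = np.array([2.1, -0.4, 0.9, -3.0])
print(reweight_llrs(llrs, flagged_correct=[0, 3], flagged_error=[1], snr_db=1.0))
```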
NASA Astrophysics Data System (ADS)
2001-01-01
Last year saw very good progress at ESO's Paranal Observatory, the site of the Very Large Telescope (VLT). The third and fourth 8.2-m Unit Telescopes, MELIPAL and YEPUN, had "First Light" (cf. PR 01/00 and PR 18/00), while the first two, ANTU and KUEYEN, were busy collecting first-class data for hundreds of astronomers. Meanwhile, work continued towards the next phase of the VLT project, the combination of the telescopes into the VLT Interferometer. The test instrument, VINCI (cf. PR 22/00), is now being installed in the VLTI Laboratory at the centre of the observing platform on the top of Paranal. Below is a new collection of video sequences and photos that illustrate the latest developments at the Paranal Observatory. They were obtained by the EPR Video Team in December 2000. The photos are available in different formats, including "high-resolution" that is suitable for reproduction purposes. A related ESO Video News Reel for professional broadcasters will soon become available and will be announced via the usual channels. Overview Paranal Observatory (Dec. 2000) Video Clip 02a/01 [MPEG - 4.5Mb] ESO PR Video Clip 02a/01 "Paranal Observatory (December 2000)" (4875 frames/3:15 min) [MPEG Video+Audio; 160x120 pix; 4.5Mb] [MPEG Video+Audio; 320x240 pix; 13.5 Mb] [RealMedia; streaming; 34kps] [RealMedia; streaming; 200kps] ESO Video Clip 02a/01 shows some of the construction activities at the Paranal Observatory in December 2000, beginning with a general view of the site. Then follow views of the Residencia, a building that has been designed by Architects Auer and Weber in Munich - it integrates very well into the desert, creating a welcome recreational site for staff and visitors in this harsh environment. The next scenes focus on the "stations" for the auxiliary telescopes for the VLTI and the installation of two delay lines in the 140-m long underground tunnel. The following part of the video clip shows the start-up of the excavation work for the 2.6-m VLT Survey Telescope (VST) as well as the location known as the "NTT Peak", now under consideration for the installation of the 4-m VISTA telescope. The last images are from the second 8.2-m Unit Telescope, KUEYEN, which has been in full use by the astronomers with the UVES and FORS2 instruments since April 2000. ESO PR Photo 04a/01 ESO PR Photo 04a/01 [Preview - JPEG: 466 x 400 pix - 58k] [Normal - JPEG: 931 x 800 pix - 688k] [Hires - JPEG: 3000 x 2577 pix - 7.6M] Caption: PR Photo 04a/01 shows an afternoon view from the Paranal summit towards East, with the Base Camp and the new Residencia on the slope to the right, above the valley in the shadow of the mountain. ESO PR Photo 04b/01 ESO PR Photo 04b/01 [Preview - JPEG: 791 x 400 pix - 89k] [Normal - JPEG: 1582 x 800 pix - 1.1M] [Hires - JPEG: 3000 x 1517 pix - 3.6M] PR Photo 04b/01 shows the ramp leading to the main entrance to the partly subterranean Residencia, with the steel skeleton for the dome over the central area in place. ESO PR Photo 04c/01 ESO PR Photo 04c/01 [Preview - JPEG: 498 x 400 pix - 65k] [Normal - JPEG: 995 x 800 pix - 640k] [Hires - JPEG: 3000 x 2411 pix - 6.6M] PR Photo 04c/01 is an indoor view of the reception hall under the dome, looking towards the main entrance. ESO PR Photo 04d/01 ESO PR Photo 04d/01 [Preview - JPEG: 472 x 400 pix - 61k] [Normal - JPEG: 944 x 800 pix - 632k] [Hires - JPEG: 3000 x 2543 pix - 5.8M] PR Photo 04d/01 shows the ramps from the reception area towards the rooms.
The VLT Interferometer The Delay Lines constitute a most important element of the VLT Interferometer, cf. PR Photos 26a-e/00. At this moment, two Delay Lines are operational on site. A third system will be integrated early this year. The VLTI Delay Line is located in an underground tunnel that is 168 metres long and 8 metres wide. This configuration has been designed to accommodate up to eight Delay Lines, including their transfer optics in an ideal environment: stable temperature, high degree of cleanliness, low levels of straylight, low air turbulence. The positions of the Delay Line carriages are computed to adjust the Optical Path Lengths requested for the fringe pattern observation. The positions are controlled in real time by a laser metrology system, specially developed for this purpose. The position precision is about 20 nm (1 nm = 10⁻⁹ m, or 1 millionth of a millimetre) over a distance of 120 metres. The maximum velocity is 0.50 m/s in position mode and 0.05 m/s in operation. The system is designed for 25 years of operation and to survive earthquakes up to magnitude 8.6 on the Richter scale. The VLTI Delay Line is a three-year project, carried out by ESO in collaboration with Dutch Space Holdings (formerly Fokker Space) and TPD-TNO. VLTI Delay Lines (December 2000) - ESO PR Video Clip 02b/01 [MPEG - 3.6Mb] ESO PR Video Clip 02b/01 "VLTI Delay Lines (December 2000)" (2000 frames/1:20 min) [MPEG Video+Audio; 160x120 pix; 3.6Mb] [MPEG Video+Audio; 320x240 pix; 13.7 Mb] [RealMedia; streaming; 34kps] [RealMedia; streaming; 200kps] ESO Video Clip 02b/01 shows the Delay Lines of the VLT Interferometer facility at Paranal during tests. One of the carriages is moving on 66-metre long rectified rails, driven by a linear motor. The carriage is equipped with three wheels in order to preserve high guidance accuracy. Another important element is the Cat's Eye that reflects the light from the telescope to the VLT instrumentation. This optical system is made of aluminium (including the mirrors) to avoid thermo-mechanical problems. ESO PR Photo 04e/01 ESO PR Photo 04e/01 [Preview - JPEG: 400 x 402 pix - 62k] [Normal - JPEG: 800 x 804 pix - 544k] [Hires - JPEG: 3000 x 3016 pix - 6.2M] Caption: PR Photo 04e/01 shows one of the 30 "stations" for the movable 1.8-m Auxiliary Telescopes. When one of these telescopes is positioned ("parked") on top of it, the light will be guided through the hole towards the Interferometric Tunnel and the Delay Lines. ESO PR Photo 04f/01 ESO PR Photo 04f/01 [Preview - JPEG: 568 x 400 pix - 96k] [Normal - JPEG: 1136 x 800 pix - 840k] [Hires - JPEG: 3000 x 2112 pix - 4.6M] PR Photo 04f/01 shows a general view of the Interferometric Tunnel and the Delay Lines. ESO PR Photo 04g/01 ESO PR Photo 04g/01 [Preview - JPEG: 406 x 400 pix - 62k] [Normal - JPEG: 812 x 800 pix - 448k] [Hires - JPEG: 3000 x 2956 pix - 5.5M] PR Photo 04g/01 shows one of the Delay Line carriages in parking position. The "NTT Peak" The "NTT Peak" is a mountain top located about 2 km to the north of Paranal. It received this name when ESO considered moving the 3.58-m New Technology Telescope from La Silla to this peak. The possibility of installing the 4-m VISTA telescope (cf. PR 03/00) on this peak is now being discussed.
ESO PR Photo 04h/01 ESO PR Photo 04h/01 [Preview - JPEG: 630 x 400 pix - 89k] [Normal - JPEG: 1259 x 800 pix - 1.1M] [Hires - JPEG: 3000 x 1907 pix - 5.2M] PR Photo 04h/01 shows the view from the "NTT Peak" towards south, with the Paranal mountain and the VLT enclosures in the background. ESO PR Photo 04i/01 ESO PR Photo 04i/01 [Preview - JPEG: 516 x 400 pix - 50k] [Normal - JPEG: 1031 x 800 pix - 664k] [Hires - JPEG: 3000 x 2328 pix - 6.0M] PR Photo 04i/01 is a view towards the "NTT Peak" from the top of the Paranal mountain. The access road and the concrete pillar that was used to support a site testing telescope at the top of this peak are seen. This is the caption to ESO PR Photos 04a-i/01 and PR Video Clips 02a-b/01. They may be reproduced, if credit is given to the European Southern Observatory. The ESO PR Video Clips service to visitors to the ESO website provides "animated" illustrations of the ongoing work and events at the European Southern Observatory. The most recent clip was: ESO PR Video Clip 01/01 about the Physics On Stage Festival (11 January 2001). Information is also available on the web about other ESO videos.
NASA Astrophysics Data System (ADS)
Johnson, J. R.; Bell, J. F., III; Hayes, A.; Deen, R. G.; Godber, A.; Arvidson, R. E.; Lemmon, M. T.
2015-12-01
The Mastcam imaging system on the Curiosity rover continued acquisition of multispectral images of the same terrain at multiple times of day at three new rover locations between sols 872 and 1003. These data sets will be used to investigate the light scattering properties of rocks and soils along the Curiosity traverse using radiative transfer models. Images were acquired by the Mastcam-34 (M-34) camera on Sols 872-892 at 8 times of day (Mojave drill location), Sols 914-917 (Telegraph Peak drill location) at 9 times of day, and Sols 1000-1003 at 8 times of day (Stimson-Murray Formation contact near Marias Pass). Data sets were acquired using filters centered at 445, 527, 751, and 1012 nm, and the images were JPEG-compressed. Data sets typically were pointed ~east and ~west to provide phase angle coverage from near 0° to 125-140° for a variety of rocks and soils. Also acquired on Sols 917-918 at the Telegraph Peak site was a multiple time-of-day Mastcam sequence pointed southeast using only the broadband Bayer filters that provided losslessly compressed images with phase angles ~55-129°. Navcam stereo images were also acquired with each data set to provide broadband photometry and terrain measurements for computing surface normals and local incidence and emission angles used in photometric modeling. On Sol 1028, the MAHLI camera was used as a goniometer to acquire images at 20 arm positions, all centered at the same location within the work volume from a near-constant distance of 85 cm from the surface. Although this experiment was run at only one time of day (~15:30 LTST), it provided phase angle coverage from ~30° to ~111°. The terrain included the contact between the uppermost portion of the Murray Formation and the Stimson sandstones, and was the first acquisition of both Mastcam and MAHLI photometry images at the same rover location. The MAHLI images also allowed construction of a 3D shape model of the Stimson-Murray contact region. The attached figure shows a phase color composite of the western Stimson area, created using phase angles of 8°, 78°, and 130° at 751 nm. The red areas correspond to highly backscattering materials that appear to concentrate along linear fractures throughout this area. The blue areas correspond to more forward scattering materials dispersed through the stratigraphic sequence.
Processed Thematic Mapper Satellite Imagery for Selected Areas within the U.S.-Mexico Borderlands
Dohrenwend, John C.; Gray, Floyd; Miller, Robert J.
2000-01-01
The study is summarized in the Adobe Acrobat Portable Document Format (PDF) file OF00-309.PDF. This publication also contains satellite full-scene images of selected areas along the U.S.-Mexico border. These images are presented as high-resolution images in JPEG format (IMAGES). The folder LOCATIONS contains TIFF images showing exact positions of easily-identified reference locations for each of the Landsat TM scenes located at least partly within the U.S. A reference location table (BDRLOCS.DOC in MS Word format) lists the latitude and longitude of each reference location with a nominal precision of 0.001 minute of arc.
Multiple descriptions based on multirate coding for JPEG 2000 and H.264/AVC.
Tillo, Tammam; Baccaglini, Enrico; Olmo, Gabriella
2010-07-01
Multiple description coding (MDC) makes use of redundant representations of multimedia data to achieve resiliency. Descriptions should be generated so that the quality obtained when decoding a subset of them only depends on their number and not on the particular received subset. In this paper, we propose a method based on the principle of encoding the source at several rates, and properly blending the data encoded at different rates to generate the descriptions. The aim is to achieve efficient redundancy exploitation, and easy adaptation to different network scenarios by means of fine tuning of the encoder parameters. We apply this principle to both JPEG 2000 images and H.264/AVC video data. We consider as the reference scenario the distribution of contents on application-layer overlays with multiple-tree topology. The experimental results reveal that our method compares favorably with state-of-the-art MDC techniques.
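A minimal sketch of the multirate principle for two descriptions is given below. The inputs are assumed to be the same source parts (e.g. frames or JPEG 2000 precincts) encoded once at a high rate and once at a low rate; the paper's rate allocation and multiple-tree distribution machinery are not reproduced.

```python
def make_descriptions(high_rate_parts, low_rate_parts):
    """Two-description multirate MDC (sketch). Part i is carried at the high
    rate in one description and at the low rate in the other, alternating, so
    the quality from a single description depends only on how many arrive."""
    d1, d2 = [], []
    for i, (hi, lo) in enumerate(zip(high_rate_parts, low_rate_parts)):
        if i % 2 == 0:
            d1.append(hi); d2.append(lo)
        else:
            d1.append(lo); d2.append(hi)
    return d1, d2

def reconstruct(descriptions):
    """Part by part, keep the best copy among the descriptions that arrived;
    here the longer (higher-rate) encoding stands in for 'better quality'."""
    received = [d for d in descriptions if d is not None]
    if not received:
        return None
    return [max((d[i] for d in received), key=len) for i in range(len(received[0]))]
```

Receiving one description yields a mix of high- and low-rate parts (the low-rate copies are the redundancy), while receiving both recovers every part at the high rate.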
NASA Astrophysics Data System (ADS)
2000-09-01
VLT YEPUN Joins ANTU, KUEYEN and MELIPAL It was a historic moment last night (September 3 - 4, 2000) in the VLT Control Room at the Paranal Observatory, after nearly 15 years of hard work. Finally, four teams of astronomers and engineers were sitting at the terminals - and each team with access to an 8.2-m telescope! From now on, the powerful "Paranal Quartet" will be observing night after night, with a combined mirror surface of more than 210 m². And beginning next year, some of them will be linked to form part of the unique VLT Interferometer with unparalleled sensitivity and image sharpness. YEPUN "First Light" Early in the evening, the fourth 8.2-m Unit Telescope, YEPUN, was pointed to the sky for the first time and successfully achieved "First Light". Following a few technical exposures, a series of "first light" photos was made of several astronomical objects with the VLT Test Camera. This instrument was also used for the three previous "First Light" events for ANTU (May 1998), KUEYEN (March 1999) and MELIPAL (January 2000). These images served to evaluate provisionally the performance of the new telescope, mainly in terms of mechanical and optical quality. The ESO staff were very pleased with the results and pronounced YEPUN fit for the subsequent commissioning phase. When the name YEPUN was first given to the fourth VLT Unit Telescope, it was supposed to mean "Sirius" in the Mapuche language. However, doubts have since arisen about this translation and a detailed investigation now indicates that the correct meaning is "Venus" (as the Evening Star). For a detailed explanation, please consult the essay On the Meaning of "YEPUN", now available at the ESO website. The first images At 21:39 hrs local time (01:39 UT), YEPUN was turned to point in the direction of a dense Milky Way field, near the border between the constellations Sagitta (The Arrow) and Aquila (The Eagle). A guide star was acquired and the active optics system quickly optimized the mirror system. At 21:44 hrs (01:44 UT), the Test Camera at the Cassegrain focus within the M1 mirror cell was opened for 30 seconds, with the planetary nebula Hen 2-428 in the field. The resulting "First Light" image was immediately read out and appeared on the computer screen at 21:45:53 hrs (01:45:53 UT). "Not bad!" - "Very nice!" were the first, "business-as-usual"-like comments in the room. The zenith distance during this observation was 44° and the image quality was measured as 0.9 arcsec, exactly the same as that registered by the Seeing Monitoring Telescope outside the telescope building. There was some wind. ESO PR Photo 22a/00 ESO PR Photo 22a/00 [Preview - JPEG: 374 x 400 pix - 128k] [Normal - JPEG: 978 x 1046 pix - 728k] Caption: ESO PR Photo 22a/00 shows a colour composite of some of the first astronomical exposures obtained by YEPUN. The object is the planetary nebula Hen 2-428 that is located at a distance of 6,000-8,000 light-years and seen in a dense sky field, only 2° from the main plane of the Milky Way. Like other planetary nebulae, it is caused by a dying star (the bluish object at the centre) that sheds its outer layers. The image is based on exposures through three optical filters: B(lue) (10 min exposure, seeing 0.9 arcsec; here rendered as blue), V(isual) (5 min; 0.9 arcsec; green) and R(ed) (3 min; 0.9 arcsec; red). The field measures 88 x 78 arcsec² (1 pixel = 0.09 arcsec). North is to the lower right and East is to the lower left.
The 5-day-old Moon was about 90° away in the sky, which was accordingly bright. The zenith angle was 44°. The ESO staff then proceeded to take a series of three photos with longer exposures through three different optical filters. They have been combined to produce the image shown in ESO PR Photo 22a/00. More astronomical images were obtained in sequence, first of the dwarf galaxy NGC 6822 in the Local Group (see PR Photo 22f/00 below) and then of the spiral galaxy NGC 7793. All 8.2-m telescopes now in operation at Paranal The ESO Director General, Catherine Cesarsky, who was present on Paranal during this event, congratulated the ESO staff on the great achievement, which brings a major phase of the VLT project to a successful end. She was particularly impressed by the excellent optical quality that was achieved at this early moment of the commissioning tests. A measurement showed that already now, 80% of the light is concentrated within 0.22 arcsec. The manager of the VLT project, Massimo Tarenghi, was very happy to reach this crucial project milestone, after nearly fifteen years of hard work. He also remarked that with the M2 mirror already now "in the active optics loop", the telescope was correctly compensating for the somewhat mediocre atmospheric conditions on this night. The next major step will be the "first light" for the VLT Interferometer (VLTI), when the light from two Unit Telescopes is combined. This event is expected in the middle of next year. Impressions from the YEPUN "First Light" event First Light for YEPUN - ESO PR VC 06/00 ESO PR Video Clip 06/00 "First Light for YEPUN" (5650 frames/3:46 min) [MPEG Video+Audio; 160x120 pix; 7.7Mb] [MPEG Video+Audio; 320x240 pix; 25.7 Mb] [RealMedia; streaming; 34kps] [RealMedia; streaming; 200kps] ESO Video Clip 06/00 shows sequences from the Control Room at the Paranal Observatory, recorded with a fixed TV-camera in the evening of September 3 at about 23:00 hrs local time (03:00 UT), i.e., soon after the moment of "First Light" for YEPUN. The video sequences were transmitted via ESO's dedicated satellite communication link to the Headquarters in Garching for production of the clip. It begins at the moment a guide star is acquired to perform an automatic "active optics" correction of the mirrors; the associated explanation is given by Massimo Tarenghi (VLT Project Manager). The first astronomical observation is performed and the first image of the planetary nebula Hen 2-428 is discussed by the ESO Director General, Catherine Cesarsky. The next image, of the nearby dwarf galaxy NGC 6822, arrives and is shown and commented on by the ESO Director General. Finally, Massimo Tarenghi talks about the next major step of the VLT Project. The combination of the lightbeams from two 8.2-m Unit Telescopes, planned for the summer of 2001, will mark the beginning of the VLT Interferometer. ESO Press Photo 22b/00 ESO Press Photo 22b/00 [Preview; JPEG: 400 x 300; 88k] [Full size; JPEG: 1600 x 1200; 408k] The enclosure for the fourth VLT 8.2-m Unit Telescope, YEPUN, photographed at sunset on September 3, 2000, immediately before "First Light" was successfully achieved. The upper part of the mostly subterranean Interferometric Laboratory for the VLTI is seen in front. (Digital Photo).
ESO Press Photo 22c/00 ESO Press Photo 22c/00 [Preview; JPEG: 400 x 300; 112k] [Full size; JPEG: 1280 x 960; 184k] The initial tuning of the YEPUN optical system took place in the early evening of September 3, 2000, from the "observing hut" on the floor of the telescope enclosure. From left to right: Krister Wirenstrand, who is responsible for the VLT Control Software, Jason Spyromilio - Head of the Commissioning Team, and Massimo Tarenghi, VLT Manager. (Digital Photo). ESO Press Photo 22d/00 ESO Press Photo 22d/00 [Preview; JPEG: 400 x 300; 112k] [Full size; JPEG: 1280 x 960; 184k] "Mission Accomplished" - The ESO Director General, Catherine Cesarsky, and the Paranal Director, Roberto Gilmozzi, face the VLT Manager, Massimo Tarenghi, at the YEPUN Control Station, right after successful "First Light" for this telescope. (Digital Photo). An aerial image of YEPUN in its enclosure is available as ESO PR Photo 43a/99. The mechanical structure of YEPUN was first pre-assembled at the Ansaldo factory in Milan (Italy) where it served for tests while the other telescopes were erected at Paranal. An early photo (ESO PR Photo 37/95) is available that was obtained during the visit of the ESO Council to Milan in December 1995, cf. ESO PR 18/95. Paranal at sunset ESO Press Photo 22e/00 ESO Press Photo 22e/00 [Preview; JPEG: 400 x 200; 14kb] [Normal; JPEG: 800 x 400; 84kb] [High-Res; JPEG: 4000 x 2000; 4.0Mb] Wide-angle view of the Paranal Observatory at sunset. The last rays of the sun illuminate the telescope enclosures at the top of the mountain and some of the buildings at the Base Camp. The new "residencia" that will provide living space for the Paranal staff and visitors from next year is being constructed to the left. The "First Light" observations with YEPUN began soon after sunset. This photo was obtained in March 2000. Additional photos (September 6, 2000) ESO PR Photo 22f/00 ESO PR Photo 22f/00 [Preview - JPEG: 400 x 487 pix - 224k] [Normal - JPEG: 992 x 1208 pix - 1.3Mb] Caption: ESO PR Photo 22f/00 shows a colour composite of three exposures of a field in the dwarf galaxy NGC 6822, a member of the Local Group of Galaxies at a distance of about 2 million light-years. They were obtained by YEPUN and the VLT Test Camera at about 23:00 hrs local time on September 3 (03:00 UT on September 4), 2000. The image is based on exposures through three optical filters: B(lue) (10 min exposure; here rendered as blue), V(isual) (5 min; green) and R(ed) (5 min; red); the seeing was 0.9 - 1.0 arcsec. Individual stars of many different colours (temperatures) are seen. The field measures about 1.5 x 1.5 arcmin². Another image of this galaxy was obtained earlier with ANTU and FORS1, cf. PR Photo 10b/99. ESO Press Photo 22g/00 ESO Press Photo 22g/00 [Preview; JPEG: 400 x 300; 136k] [Full size; JPEG: 1280 x 960; 224k] Most of the crew that put together YEPUN is here photographed after the installation of the M1 mirror cell at the bottom of the mechanical structure (on July 30, 2000). Back row (left to right): Erich Bugueno (Mechanical Supervisor), Erito Flores (Maintenance Technician); front row (left to right) Peter Gray (Mechanical Engineer), German Ehrenfeld (Mechanical Engineer), Mario Tapia (Mechanical Engineer), Christian Juica (kneeling - Mechanical Technician), Nelson Montano (Maintenance Engineer), Hansel Sepulveda (Mechanical Technician) and Roberto Tamai (Mechanical Engineer). (Digital Photo). ESO PR Photos may be reproduced, if credit is given to the European Southern Observatory.
The ESO PR Video Clips service to visitors to the ESO website provides "animated" illustrations of the ongoing work and events at the European Southern Observatory. The most recent clip was: ESO PR Video Clip 05/00 "Portugal to Accede to ESO" (27 June 2000). Information is also available on the web about other ESO videos.
IIPImage: Large-image visualization
NASA Astrophysics Data System (ADS)
Pillay, Ruven
2014-08-01
IIPImage is an advanced, high-performance, feature-rich image server system that enables online access to full-resolution floating point (as well as other bit depth) images at terabyte scales. Paired with the VisiOmatic (ascl:1408.010) celestial image viewer, the system can comfortably handle gigapixel-size images as well as advanced image features such as 8-, 16- and 32-bit depths, CIELAB colorimetric images and scientific imagery such as multispectral images. Streaming is tile-based, which enables viewing, navigating and zooming in real time around gigapixel-size images. Source images can be in either TIFF or JPEG2000 format. Whole images or regions within images can also be rapidly and dynamically resized and exported by the server from a single source image without the need to store multiple files in various sizes.
NASA Astrophysics Data System (ADS)
2004-04-01
New Detailed VLT Images of Saturn's Largest Moon Optimizing space missions Titan, the largest moon of Saturn, was discovered by Dutch astronomer Christiaan Huygens in 1655 and certainly deserves its name. With a diameter of no less than 5,150 km, it is larger than Mercury and twice as large as Pluto. It is unique in having a hazy atmosphere of nitrogen, methane and oily hydrocarbons. Although it was explored in some detail by the NASA Voyager missions, many aspects of the atmosphere and surface still remain unknown. Thus, the existence of seasonal or diurnal phenomena, the presence of clouds, the surface composition and topography are still under debate. There have even been speculations that some kind of primitive life (now possibly extinct) may be found on Titan. Titan is the main target of the NASA/ESA Cassini/Huygens mission, launched in 1997 and scheduled to arrive at Saturn on July 1, 2004. The ESA Huygens probe is designed to enter the atmosphere of Titan, and to descend by parachute to the surface. Ground-based observations are essential to optimize the return of this space mission, because they will complement the information gained from space and add confidence to the interpretation of the data. Hence, the advent of the adaptive optics system NAOS-CONICA (NACO) [1] in combination with ESO's Very Large Telescope (VLT) at the Paranal Observatory in Chile now offers a unique opportunity to study the resolved disc of Titan with high sensitivity and increased spatial resolution. Adaptive Optics (AO) systems work by means of a computer-controlled deformable mirror that counteracts the image distortion induced by atmospheric turbulence. They are based on real-time optical corrections computed from image data obtained by a special camera at very high speed, many hundreds of times each second (see e.g. ESO Press Release 25/01, ESO PR Photos 04a-c/02, ESO PR Photos 19a-c/02, ESO PR Photos 21a-c/02, ESO Press Release 17/02, and ESO Press Release 26/03 for earlier NACO images, and ESO Press Release 11/03 for MACAO-VLTI results.) The southern smile ESO PR Photo 08a/04 ESO PR Photo 08a/04 Images of Titan on November 20, 25 and 26, 2002 Through Five Filters (VLT YEPUN + NACO) [Preview - JPEG: 522 x 400 pix - 40k] [Normal - JPEG: 1043 x 800 pix - 340k] [Hires - JPEG: 2875 x 2205 pix - 1.2M] Caption: ESO PR Photo 08a/04 shows Titan (apparent visual magnitude 8.05, apparent diameter 0.87 arcsec) as observed with the NAOS/CONICA instrument at VLT Yepun (Paranal Observatory, Chile) on November 20, 25 and 26, 2002, between 6.00 UT and 9.00 UT. The median seeing values were 1.1 arcsec and 1.5 arcsec respectively for the 20th and 25th. Deconvoluted ("sharpened") images of Titan are shown through 5 different narrow-band filters - they make it possible to probe in some detail structures at different altitudes and on the surface. Depending on the filter, the integration time varies from 10 to 100 seconds. While Titan shows its leading hemisphere (i.e. the one observed when Titan moves towards us) on Nov. 20, the trailing side (i.e. the one we see when Titan moves away from us in its course around Saturn) - which displays less bright surface features - is observed on the last two dates.
ESO PR Photo 08b/04 ESO PR Photo 08b/04 Titan Observed Through Nine Different Filters on November 26, 2002 [Preview - JPEG: 480 x 400 pix - 36k] [Normal - JPEG: 960 x 800 pix - 284k] Caption: ESO PR Photo 08b/04: Images of Titan taken on November 26, 2002 through nine different filters to probe different altitudes, ranging from the stratosphere to the surface. On this night, a stable "seeing" (image quality before adaptive optics correction) of 0.9 arcsec allowed the astronomers to attain the diffraction limit of the telescope (0.032 arcsec resolution). Due to these good observing conditions, Titan's trailing hemisphere was observed with contrasts of about 40%, allowing the detection of several bright features on this surface region, once thought to be quite dark and featureless. ESO PR Photo 08c/04 ESO PR Photo 08c/04 Titan Surface Projections [Preview - JPEG: 601 x 400 pix - 64k] [Normal - JPEG: 1201 x 800 pix - 544k] Caption: ESO PR Photo 08c/04 : Titan images obtained with NACO on November 26th, 2002. Left: Titan's surface projection on the trailing hemisphere as observed at 1.3 μm, revealing a complex brightness structure thanks to the high image contrast of about 40%. Right: a new, possibly meteorological, phenomenon observed at 2.12 μm in Titan's atmosphere, in the form of a bright feature revolving around the South Pole. A team of French astronomers [2] have recently used the NACO state-of-the-art adaptive optics system on the fourth 8.2-m VLT unit telescope, Yepun, to map the surface of Titan by means of near-infrared images and to search for changes in the dense atmosphere. These extraordinary images have a nominal resolution of 1/30th arcsec and show details of the order of 200 km on the surface of Titan. To provide the best possible views, the raw data from the instrument were subjected to deconvolution (image sharpening). Images of Titan were obtained through 9 narrow-band filters, sampling near-infrared wavelengths with large variations in methane opacity. This permits sounding of different altitudes ranging from the stratosphere to the surface. Titan harbours at 1.24 and 2.12 μm a "southern smile", that is a north-south asymmetry, while the opposite situation is observed with filters probing higher altitudes, such as 1.64, 1.75 and 2.17 μm. A high-contrast bright feature is observed at the South Pole and is apparently caused by a phenomenon in the atmosphere, at an altitude below 140 km or so. This feature was found to change its location on the images from one side of the south polar axis to the other during the week of observations. Outlook An additional series of NACO observations of Titan is foreseen later this month (April 2004). These will be a great asset in helping optimize the return of the Cassini/Huygens mission. Several of the instruments aboard the spacecraft depend on such ground-based data to better infer the properties of Titan's surface and lower atmosphere. Although the astronomers have yet to model and interpret the physical and geophysical phenomena now observed and to produce a full cartography of the surface, this first analysis provides a clear demonstration of the marvellous capabilities of the NACO imaging system. More examples of the exciting science possible with this facility will be found in a series of five papers published today in the European research journal Astronomy & Astrophysics (Vol. 47, L1 to L24).
Bell, James F.; Godber, A.; McNair, S.; Caplinger, M.A.; Maki, J.N.; Lemmon, M.T.; Van Beek, J.; Malin, M.C.; Wellington, D.; Kinch, K.M.; Madsen, M.B.; Hardgrove, C.; Ravine, M.A.; Jensen, E.; Harker, D.; Anderson, Ryan; Herkenhoff, Kenneth E.; Morris, R.V.; Cisneros, E.; Deen, R.G.
2017-01-01
The NASA Curiosity rover Mast Camera (Mastcam) system is a pair of fixed-focal-length, multispectral, color CCD imagers mounted ~2 m above the surface on the rover's remote sensing mast, along with associated electronics and an onboard calibration target. The left Mastcam (M-34) has a 34 mm focal length, an instantaneous field of view (IFOV) of 0.22 mrad, and a FOV of 20° × 15° over the full 1648 × 1200 pixel span of its Kodak KAI-2020 CCD. The right Mastcam (M-100) has a 100 mm focal length, an IFOV of 0.074 mrad, and a FOV of 6.8° × 5.1° using the same detector. The cameras are separated by 24.2 cm on the mast, allowing stereo images to be obtained at the resolution of the M-34 camera. Each camera has an eight-position filter wheel, enabling it to take Bayer pattern red, green, and blue (RGB) “true color” images, multispectral images in nine additional bands spanning ~400–1100 nm, and images of the Sun in two colors through neutral density-coated filters. An associated Digital Electronics Assembly provides command and data interfaces to the rover, 8 Gb of image storage per camera, 11-bit to 8-bit companding, JPEG compression, and acquisition of high-definition video. Here we describe the preflight and in-flight calibration of Mastcam images, the ways that they are being archived in the NASA Planetary Data System, and the ways that calibration refinements are being developed as the investigation progresses on Mars. We also provide some examples of data sets and analyses that help to validate the accuracy and precision of the calibration.
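The quoted angular figures follow directly from the focal lengths and the detector pixel pitch; a quick small-angle check (the 7.4 μm KAI-2020 pixel pitch is recalled from the sensor specification and is not stated in the abstract):

```python
import math

pixel_pitch_m = 7.4e-6          # Kodak KAI-2020 pixel pitch (assumed, not from the abstract)

for name, focal_m, npix_h, npix_v in [("M-34", 0.034, 1648, 1200),
                                      ("M-100", 0.100, 1648, 1200)]:
    ifov_mrad = pixel_pitch_m / focal_m * 1e3
    fov_h_deg = math.degrees(npix_h * pixel_pitch_m / focal_m)
    fov_v_deg = math.degrees(npix_v * pixel_pitch_m / focal_m)
    print(f"{name}: IFOV ≈ {ifov_mrad:.3f} mrad, FOV ≈ {fov_h_deg:.1f}° × {fov_v_deg:.1f}°")
# M-34:  IFOV ≈ 0.218 mrad, FOV ≈ 20.6° × 15.0°  (abstract: 0.22 mrad, 20° × 15°)
# M-100: IFOV ≈ 0.074 mrad, FOV ≈ 7.0° × 5.1°    (abstract: 0.074 mrad, 6.8° × 5.1°)
```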
Privacy-preserving photo sharing based on a public key infrastructure
NASA Astrophysics Data System (ADS)
Yuan, Lin; McNally, David; Küpçü, Alptekin; Ebrahimi, Touradj
2015-09-01
A significant number of pictures are posted to social media sites or exchanged through instant messaging and cloud-based sharing services. Most social media services offer a range of access control mechanisms to protect users' privacy. As it is not in the best interest of many such services if their users restrict access to their shared pictures, most services keep users' photos unprotected, which makes them available to all insiders. This paper presents an architecture for privacy-preserving photo sharing based on an image scrambling scheme and a public key infrastructure. A secure JPEG scrambling is applied to protect regional visual information in photos. Protected images are still compatible with JPEG coding and therefore can be viewed by anyone on any device. However, only those who are granted secret keys will be able to descramble the photos and view their original versions. The proposed architecture applies attribute-based encryption along with conventional public-key cryptography to achieve secure transmission of secret keys and fine-grained control over who may view shared photos. In addition, we demonstrate the practical feasibility of the proposed photo sharing architecture with a prototype mobile application, ProShare, which is built on the iOS platform.
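The format-compatible scrambling idea can be illustrated with a toy operation on quantized DCT blocks: pseudorandom sign flips driven by a secret key destroy the region visually while leaving valid JPEG coefficients behind. This is only a conceptual stand-in; the actual system operates on the JPEG code stream and wraps the region keys in attribute-based encryption.

```python
import numpy as np

def scramble_region(dct_blocks, key):
    """Conceptual sketch of secure JPEG scrambling: pseudorandomly flip the
    signs of AC coefficients inside a protected region's 8x8 DCT blocks.
    `dct_blocks` is an (N, 8, 8) array of quantized coefficients (assumed input)."""
    rng = np.random.default_rng(key)
    signs = rng.choice([-1, 1], size=dct_blocks.shape)
    signs[:, 0, 0] = 1                     # leave the DC coefficient untouched
    return dct_blocks * signs

def descramble_region(scrambled_blocks, key):
    """Sign flipping is an involution, so descrambling repeats the same operation
    with the same key."""
    return scramble_region(scrambled_blocks, key)
```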
Improved Adaptive LSB Steganography Based on Chaos and Genetic Algorithm
NASA Astrophysics Data System (ADS)
Yu, Lifang; Zhao, Yao; Ni, Rongrong; Li, Ting
2010-12-01
We propose a novel steganographic method in JPEG images with high performance. First, we propose an improved adaptive LSB steganography, which can achieve high capacity while preserving the first-order statistics. Second, in order to minimize visual degradation of the stego image, we shuffle the bit order of the message based on chaos, whose parameters are selected by a genetic algorithm. Shuffling the message's bit order provides a new way to improve the performance of steganography. Experimental results show that our method outperforms classical steganographic methods in image quality, while preserving histogram characteristics and providing high capacity.
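A sketch of the chaos-driven shuffling step is shown below; the logistic-map parameters are fixed placeholders here, whereas the paper selects them with a genetic algorithm to minimize stego distortion.

```python
import numpy as np

def chaotic_shuffle(bits, x0=0.3, mu=3.99):
    """Shuffle a message's bit order with a logistic map x -> mu*x*(1-x).
    Sorting the chaotic sequence yields the permutation; (x0, mu) act as the
    key and would be chosen by the genetic algorithm in the paper's scheme."""
    x = x0
    seq = np.empty(len(bits))
    for i in range(len(bits)):
        x = mu * x * (1.0 - x)
        seq[i] = x
    perm = np.argsort(seq)
    return [bits[p] for p in perm], perm

def chaotic_unshuffle(shuffled, perm):
    """Invert the permutation at the receiver (which regenerates `perm` from the key)."""
    out = [0] * len(shuffled)
    for i, p in enumerate(perm):
        out[p] = shuffled[i]
    return out
```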
Visual information processing; Proceedings of the Meeting, Orlando, FL, Apr. 20-22, 1992
NASA Technical Reports Server (NTRS)
Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)
1992-01-01
Topics discussed in these proceedings include nonlinear processing and communications; feature extraction and recognition; image gathering, interpolation, and restoration; image coding; and wavelet transform. Papers are presented on noise reduction for signals from nonlinear systems; driving nonlinear systems with chaotic signals; edge detection and image segmentation of space scenes using fractal analyses; a vision system for telerobotic operation; a fidelity analysis of image gathering, interpolation, and restoration; restoration of images degraded by motion; and information, entropy, and fidelity in visual communication. Attention is also given to image coding methods and their assessment, hybrid JPEG/recursive block coding of images, modified wavelets that accommodate causality, modified wavelet transform for unbiased frequency representation, and continuous wavelet transform of one-dimensional signals by Fourier filtering.
NASA Astrophysics Data System (ADS)
Song, W. M.; Fan, D. W.; Su, L. Y.; Cui, C. Z.
2017-11-01
Calculating the coordinate parameters recorded as key/value pairs in the FITS (Flexible Image Transport System) header is the key to determining a FITS image's position in the celestial coordinate system, so a general procedure for computing these parameters is of considerable value. The parameters can be calculated effectively by combining the CCD-related parameters of the telescope (such as field of view, focal length, and the celestial coordinates of the optical axis), a star pattern recognition algorithm, and WCS (World Coordinate System) theory. The CCD parameters determine the relevant region of the sky, so they are used to build a reference star catalogue for the celestial region covered by the image; star pattern recognition then matches the astronomical image against this reference catalogue and yields a table relating the CCD plane coordinates of a number of stars to their celestial coordinates; finally, depending on the chosen projection of the sphere onto the plane, WCS provides the transfer functions between the two coordinate systems, and the celestial position of any image pixel can be determined from the table. FITS is the mainstream format for transmitting and analyzing scientific data, but such images can only be viewed, edited, and analyzed in professional astronomy software, which limits their use in popular science education. A general image visualization method is therefore valuable. The FITS image is first converted to a PNG or JPEG image; the coordinate parameters in the FITS header are converted to metadata in the AVM (Astronomy Visualization Metadata) form, and the metadata is then added to the PNG or JPEG header. This method meets amateur astronomers' general needs for viewing and analyzing astronomical images on non-astronomical software platforms. The overall design flow is implemented as a Java program and tested with SExtractor, WorldWide Telescope, ordinary picture viewers, and other software.
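As a rough illustration of the FITS-to-PNG step (the paper's tool is a Java program; Python and astropy are used here only for brevity), the conversion reduces the pixel data to an 8-bit image. Writing the WCS-derived AVM metadata into the output's XMP header would be a separate step, e.g. with the pyavm package, and is not shown.

```python
import numpy as np
from astropy.io import fits
from PIL import Image

def fits_to_png(fits_path, png_path):
    """Sketch of FITS-to-PNG conversion: stretch pixel values to 8 bits with a
    simple percentile cut. Adding AVM metadata to the PNG header is omitted."""
    with fits.open(fits_path) as hdul:
        data = hdul[0].data.astype(float)
    lo, hi = np.nanpercentile(data, [1.0, 99.0])      # robust display range
    scaled = np.clip((data - lo) / (hi - lo), 0.0, 1.0)
    Image.fromarray((scaled * 255).astype(np.uint8)).save(png_path)

# fits_to_png("field.fits", "field.png")   # hypothetical file names
```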
Bergrath, Sebastian; Rossaint, Rolf; Lenssen, Niklas; Fitzner, Christina; Skorning, Max
2013-01-16
Still picture transmission was performed using a telemedicine system in an Emergency Medical Service (EMS) during a prospective, controlled trial. In this ancillary, retrospective study the quality and content of the transmitted pictures and the possible influences of this application on prehospital time requirements were investigated. A digital camera was used with a telemedicine system enabling encrypted audio and data transmission between an ambulance and a remotely located physician. By default, images were compressed (JPEG, 640 x 480 pixels). On occasion, this compression was deactivated (3648 x 2736 pixels). Two independent investigators assessed all transmitted pictures according to predefined criteria. In cases of differing ratings, a third investigator made the final decision. Patient characteristics and time intervals were extracted from the EMS protocol sheets and dispatch centre reports. Overall, 314 pictures (mean 2.77 ± 2.42 pictures/mission) were transmitted during 113 missions (group 1). Pictures were not taken for 151 missions (group 2). Regarding picture quality, the content of 240 (76.4%) pictures was clearly identifiable; 45 (14.3%) pictures were considered "limited quality" and 29 (9.2%) pictures were deemed "not useful" because their content was not or hardly identifiable. For pictures with file compression (n = 84 missions) and without (n = 17 missions), the content was clearly identifiable in 74% and 97% of the pictures, respectively (p = 0.003). Medical reports (n = 98, 32.8%), medication lists (n = 49, 16.4%) and 12-lead ECGs (n = 28, 9.4%) were most frequently photographed. The patient characteristics of group 1 vs. 2 were as follows: median age - 72.5 vs. 56.5 years, p = 0.001; frequency of acute coronary syndrome - 24/113 vs. 15/151, p = 0.014. The NACA scores and gender distribution were comparable. Median on-scene times were longer with picture transmission (26 vs. 22 min, p = 0.011), but ambulance arrival to hospital arrival intervals did not differ significantly (35 vs. 33 min, p = 0.054). Picture transmission was used frequently and resulted in an acceptable picture quality, even with compressed files. In most cases, previously existing "paper data" was transmitted electronically. This application may offer an alternative to other modes of ECG transmission. Due to the different patient characteristics, no conclusions about a prolonged on-scene time can be drawn. Mobile picture transmission holds important opportunities for clinical handover procedures and teleconsultation.
No-reference quality assessment based on visual perception
NASA Astrophysics Data System (ADS)
Li, Junshan; Yang, Yawei; Hu, Shuangyan; Zhang, Jiao
2014-11-01
The visual quality assessment of images/videos is an ongoing hot research topic, which has become more and more important for numerous image and video processing applications with the rapid development of digital imaging and communication technologies. The goal of image quality assessment (IQA) algorithms is to automatically assess the quality of images/videos in agreement with human quality judgments. Up to now, two kinds of models have been used for IQA, namely full-reference (FR) and no-reference (NR) models. For FR models, IQA algorithms interpret image quality as fidelity or similarity with a perfect image in some perceptual space. However, the reference image is not available in many practical applications, and a NR IQA approach is desired. Considering natural vision as optimized by millions of years of evolutionary pressure, many methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychological features of the human visual system (HVS). To reach this goal, researchers try to simulate the HVS with image sparse coding and supervised machine learning, which reflect two main features of the HVS: a typical HVS captures scenes by sparse coding and uses learned knowledge to perceive objects. In this paper, we propose a novel IQA approach based on visual perception. First, a standard model of the HVS is studied and analyzed, and the sparse representation of the image is obtained with this model; then, the mapping between sparse codes and subjective quality scores is trained with the regression technique of least squares support vector machine (LS-SVM), which yields a regressor that can predict image quality; finally, the visual quality metric of an image is predicted with the trained regressor. We validate the performance of the proposed approach on the Laboratory for Image and Video Engineering (LIVE) database; the distortion types present in the database are: 227 JPEG2000 images, 233 JPEG images, 174 white noise images, 174 Gaussian blur images, and 174 fast fading images. The database includes a subjective differential mean opinion score (DMOS) for each image. The experimental results show that the proposed approach not only can assess the quality of many kinds of distorted images, but also exhibits superior accuracy and monotonicity.
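The learning stage can be sketched as a regression from precomputed sparse-code features to DMOS values. A kernel SVR from scikit-learn is used below as a stand-in for the paper's LS-SVM (a related but not identical technique), and the HVS-inspired feature extraction front end is assumed to exist.

```python
from scipy.stats import spearmanr
from sklearn.svm import SVR

def train_quality_regressor(features, dmos):
    """Map sparse-coding features of training images to subjective DMOS values.
    Kernel SVR stands in for the paper's LS-SVM; hyperparameters are illustrative."""
    reg = SVR(kernel="rbf", C=10.0, epsilon=0.1)
    reg.fit(features, dmos)
    return reg

def evaluate(reg, test_features, test_dmos):
    """Monotonicity is commonly reported as the Spearman rank correlation (SROCC)."""
    pred = reg.predict(test_features)
    return spearmanr(pred, test_dmos)[0]
```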
Kamauu, Aaron W C; DuVall, Scott L; Wiggins, Richard H; Avrin, David E
2008-09-01
In the creation of interesting radiological cases in a digital teaching file, it is necessary to adjust the window and level settings of an image to effectively display the educational focus. The web-based applet described in this paper presents an effective solution for real-time window and level adjustments without leaving the picture archiving and communications system workstation. Optimized images are created, as user-defined parameters are passed between the applet and a servlet on the Health Insurance Portability and Accountability Act-compliant teaching file server.
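The window/level operation at the heart of such an applet is a simple linear remapping of stored pixel values to display intensities. A minimal sketch follows; the DICOM transfer and servlet plumbing are omitted, and the example window/level values are illustrative.

```python
import numpy as np

def apply_window_level(pixels, window, level):
    """Standard window/level remapping to an 8-bit display image.
    `pixels` is the stored pixel array (e.g. 12-bit CT values); `window` is the
    width of the displayed intensity range and `level` its centre. Values below
    level - window/2 map to 0, values above level + window/2 map to 255."""
    lo = level - window / 2.0
    out = (pixels.astype(float) - lo) / float(window)
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)

# Example: a typical soft-tissue CT setting (window 400, level 40) -- illustrative values
# display = apply_window_level(ct_slice, window=400, level=40)
```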
Digitizing the KSO white light images
NASA Astrophysics Data System (ADS)
Pötzi, W.
From 1989 to 2007, the Sun was observed at the Kanzelhöhe Observatory in white light on photographic film. The images are on transparent sheet film and have so far not been available to the scientific community. The films are now being scanned with a photo scanner for transparent film material and prepared for scientific use. The post-processing programs are already finished and produce FITS and JPEG files as output. Scanning should be finished by the end of 2011, and the data will then be available via our homepage.
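A minimal sketch of the output step described above, assuming the scans are plain grayscale raster files (e.g. TIFF) and that astropy and Pillow are available; the file names and display stretch are illustrative.

```python
# Write a scanned frame out as FITS (for science use) and as an 8-bit JPEG preview.
import numpy as np
from astropy.io import fits
from PIL import Image

def scan_to_fits_and_jpeg(scan_path, fits_path, jpeg_path):
    data = np.asarray(Image.open(scan_path).convert('I'), dtype=np.float32)  # 32-bit grayscale
    fits.PrimaryHDU(data).writeto(fits_path, overwrite=True)                 # science product
    lo, hi = np.percentile(data, (1, 99))                                    # simple display stretch
    preview = np.clip((data - lo) / max(hi - lo, 1e-6), 0, 1) * 255
    Image.fromarray(preview.astype(np.uint8)).save(jpeg_path, quality=90)

# Hypothetical file names, for illustration only.
scan_to_fits_and_jpeg('kanzelhoehe_scan_0001.tif', 'kanzelhoehe_0001.fits', 'kanzelhoehe_0001.jpg')
```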
SHD digital cinema distribution over a long distance network of Internet2
NASA Astrophysics Data System (ADS)
Yamaguchi, Takahiro; Shirai, Daisuke; Fujii, Tatsuya; Nomura, Mitsuru; Fujii, Tetsuro; Ono, Sadayasu
2003-06-01
We have developed a prototype SHD (Super High Definition) digital cinema distribution system that can store, transmit and display eight-million-pixel motion pictures that have the image quality of a 35-mm film movie. The system contains a video server, a real-time decoder, and a D-ILA projector. Using a gigabit Ethernet link and TCP/IP, the server transmits JPEG2000 compressed motion picture data streams to the decoder at transmission speeds as high as 300 Mbps. The received data streams are decompressed by the decoder, and then projected onto a screen via the projector. With this system, digital cinema contents can be distributed over a wide-area optical gigabit IP network. However, when digital cinema contents are delivered over long distances by using a gigabit IP network and TCP, the round-trip time increases and network throughput either stops rising or diminishes. In a long-distance SHD digital cinema transmission experiment performed on the Internet2 network in October 2002, we adopted enlargement of the TCP window, multiple TCP connections, and shaping function to control the data transmission quantity. As a result, we succeeded in transmitting the SHD digital cinema content data at about 300 Mbps between Chicago and Los Angeles, a distance of more than 3000 km.
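The transport-layer measures mentioned here can be illustrated with a short sketch: size the socket buffers to the bandwidth-delay product so the TCP window can cover a long round-trip time, and stripe the stream over several parallel connections. The host, port, rate, and RTT figures are assumptions for illustration, not the experiment's actual configuration.

```python
# Illustrative sketch of (1) enlarging the TCP window via socket buffers and
# (2) spreading the stream over several parallel TCP connections.
import socket

TARGET_BPS = 300e6                          # ~300 Mbps payload rate (assumed)
RTT_S = 0.060                               # ~60 ms round-trip time (assumed)
BDP_BYTES = int(TARGET_BPS / 8 * RTT_S)     # bandwidth-delay product ≈ 2.25 MB

def open_connections(host, port, n_conns=4):
    """Open n parallel TCP connections, each requesting a large send buffer."""
    conns = []
    for _ in range(n_conns):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BDP_BYTES)  # OS may cap this value
        s.connect((host, port))
        conns.append(s)
    return conns

def send_striped(conns, payload, chunk=64 * 1024):
    """Stripe the compressed stream round-robin over the connections."""
    for i in range(0, len(payload), chunk):
        conns[(i // chunk) % len(conns)].sendall(payload[i:i + chunk])
```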
Optimal erasure protection for scalably compressed video streams with limited retransmission.
Taubman, David; Thie, Johnson
2005-08-01
This paper shows how the priority encoding transmission (PET) framework may be leveraged to exploit both unequal error protection and limited retransmission for RD-optimized delivery of streaming media. Previous work on scalable media protection with PET has largely ignored the possibility of retransmission. Conversely, the PET framework has not been harnessed by the substantial body of previous work on RD optimized hybrid forward error correction/automatic repeat request schemes. We limit our attention to sources which can be modeled as independently compressed frames (e.g., video frames), where each element in the scalable representation of each frame can be transmitted in one or both of two transmission slots. An optimization algorithm determines the level of protection which should be assigned to each element in each slot, subject to transmission bandwidth constraints. To balance the protection assigned to elements which are being transmitted for the first time with those which are being retransmitted, the proposed algorithm formulates a collection of hypotheses concerning its own behavior in future transmission slots. We show how the PET framework allows for a decoupled optimization algorithm with only modest complexity. Experimental results obtained with Motion JPEG2000 compressed video demonstrate that substantial performance benefits can be obtained using the proposed framework.
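As a rough illustration of unequal erasure protection in the PET spirit (not the paper's hypothesis-based RD optimization), the toy sketch below greedily buys extra redundancy for the scalable elements that offer the best expected benefit per byte under a transmission budget; all values, sizes, survival probabilities, and the per-level overhead are invented.

```python
# Toy greedy unequal erasure protection: each scalable element gets a redundancy
# level; higher levels cost more bytes but raise its survival probability.
import heapq

SURVIVAL = [0.80, 0.95, 0.99, 0.999]   # P(element decodable) at redundancy level 0..3 (invented)
OVERHEAD = 0.20                        # extra bytes per redundancy level, as a fraction of size

def assign_protection(values, sizes, budget):
    """Return a redundancy level per element under a total byte budget."""
    levels = [0] * len(values)
    spent = sum(sizes)                 # level 0 still has to be transmitted
    heap = []
    for i, (v, s) in enumerate(zip(values, sizes)):
        gain = v * (SURVIVAL[1] - SURVIVAL[0])
        heapq.heappush(heap, (-gain / (s * OVERHEAD), i))
    while heap:
        _, i = heapq.heappop(heap)
        cost = sizes[i] * OVERHEAD
        if levels[i] >= len(SURVIVAL) - 1 or spent + cost > budget:
            continue
        levels[i] += 1
        spent += cost
        if levels[i] < len(SURVIVAL) - 1:
            gain = values[i] * (SURVIVAL[levels[i] + 1] - SURVIVAL[levels[i]])
            heapq.heappush(heap, (-gain / cost, i))
    return levels

# Example: earlier (more important) layers of a scalable frame get more protection.
print(assign_protection(values=[100, 40, 10], sizes=[2000, 3000, 5000], budget=11000))
```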
A Portrait of One Hundred Thousand and One Galaxies
NASA Astrophysics Data System (ADS)
2002-08-01
Rich and Inspiring Experience with NGC 300 Images from the ESO Science Data Archive Summary A series of wide-field images centred on the nearby spiral galaxy NGC 300 , obtained with the Wide-Field Imager (WFI) on the MPG/ESO 2.2-m telescope at the La Silla Observatory , have been combined into a magnificent colour photo. These images have been used by different groups of astronomers for various kinds of scientific investigations, ranging from individual stars and nebulae in NGC 300, to distant galaxies and other objects in the background. This material provides an interesting demonstration of the multiple use of astronomical data, now facilitated by the establishment of extensively documented data archives, like the ESO Science Data Archive that now is growing rapidly and already contains over 15 Terabyte. Based on the concept of Astronomical Virtual Observatories (AVOs) , the use of archival data sets is on the rise and provides a large number of scientists with excellent opportunities for front-line investigations without having to wait for precious observing time. In addition to presenting a magnificent astronomical photo, the present account also illustrates this important new tool of the modern science of astronomy and astrophysics. PR Photo 18a/02 : WFI colour image of spiral galaxy NGC 300 (full field) . PR Photo 18b/02 : Cepheid stars in NGC 300 PR Photo 18c/02 : H-alpha image of NGC 300 PR Photo 18d/02 : Distant cluster of galaxies CL0053-37 in the NGC 300 field PR Photo 18e/02 : Dark matter distribution in CL0053-37 PR Photo 18f/02 : Distant, reddened cluster of galaxies in the NGC 300 field PR Photo 18g/02 : Distant galaxies, seen through the outskirts of NGC 300 PR Photo 18h/02 : "The View Beyond" ESO PR Photo 18a/02 ESO PR Photo 18a/02 [Preview - JPEG: 400 x 412 pix - 112k] [Normal - JPEG: 1200 x 1237 pix - 1.7M] [Hi-Res - JPEG: 4000 x 4123 pix - 20.3M] Caption : PR Photo 18a/02 is a reproduction of a colour-composite image of the nearby spiral galaxy NGC 300 and the surrounding sky field, obtained in 1999 and 2000 with the Wide-Field Imager (WFI) on the MPG/ESO 2.2-m telescope at the La Silla Observatory. See the text for details about the many different uses of this photo. Smaller areas in this large field are shown in Photos 18b-h/02 , cf. below. The High-Res version of this image has been compressed by a factor 4 (2 x 2 pixel rebinning) to reduce it to a reasonably transportable size. Technical information about this and the other photos is available at the end of this communication. Located some 7 million light-years away, the spiral galaxy NGC 300 [1] is a beautiful representative of its class, a Milky-Way-like member of the prominent Sculptor group of galaxies in the southern constellation of that name. NGC 300 is a big object in the sky - being so close, it extends over an angle of almost 25 arcmin, only slightly less than the size of the full moon. It is also relative bright, even a small pair of binoculars will unveil this magnificent spiral galaxy as a hazy glowing patch on a dark sky background. The comparatively small distance of NGC 300 and its face-on orientation provide astronomers with a wonderful opportunity to study in great detail its structure as well as its various stellar populations and interstellar medium. It was exactly for this purpose that some images of NGC 300 were obtained with the Wide-Field Imager (WFI) on the MPG/ESO 2.2-m telescope at the La Silla Observatory. 
This advanced 67-million pixel digital camera has already produced many impressive pictures, some of which are displayed in the WFI Photo Gallery [2]. With its large field of view, 34 x 34 arcmin 2 , the WFI is optimally suited to show the full extent of the spiral galaxy NGC 300 and its immediate surroundings in the sky, cf. PR Photo 18a/02 . NGC 300 and "Virtual Astronomy" In addition to being a beautiful sight in its own right, the present WFI-image of NGC 300 is also a most instructive showcase of how astronomers with very different research projects nowadays can make effective use of the same observations for their programmes . The idea to exploit one and the same data set is not new, but thanks to rapid technological developments it has recently developed into a very powerful tool for the astronomers in their continued quest to understand the Universe. This kind of work has now become very efficient with the advent of a fully searchable data archive from which observational data can then - after the expiry of a nominal one-year proprietary period for the observers - be made available to other astronomers. The ESO Science Data Archive was established some years ago and now encompasses more than 15 Terabyte [3]. Normally, the identification of specific data sets in such a large archive would be a very difficult and time-consuming task. However, effective projects and software "tools" like ASTROVIRTEL and Querator now allow the users quickly to "filter" large amounts of data and extract those of their specific interest. Indeed, "Archival Astronomy" has already led to many important discoveries, cf. the ASTROVIRTEL list of publications. There is no doubt that "Virtual Astronomical Observatories" will play an increasingly important role in the future, cf. ESO PR 26/01. The present wide-field images of NGC 300 provide an impressive demonstration of the enormous potential of this innovative approach. Some of the ways they were used are explained below. Cepheids in NGC 300 and the cosmic distance scale ESO PR Photo 18b/02 ESO PR Photo 18b/02 [Preview - JPEG: 468 x 400 pix - 112k] [Full-Res - JPEG: 1258 x 1083 pix - 1.6M] Caption : PR Photo 18b/02 shows some of the Cepheid type stars in the spiral galaxy NGC 300 (at the centre of the markers), as they were identified by Wolfgang Gieren and collaborators during the research programme for which the WFI images of NGC 300 were first obtained. In this area of NGC 300, there is also a huge cloud of ionized hydrogen (a "HII shell"). It measures about 2000 light-years in diameter, thus dwarfing even the enormous Tarantula Nebula in the LMC, also photographed with the WFI (cf. ESO PR Photos 14a-g/02 ). The largest versions ("normal" or "full-res") of this and the following photos are shown with their original pixel size, demonstrating the incredible amount of detail visible on one WFI image. Technical information about this photo is available below. In 1999, Wolfgang Gieren (Universidad de Concepcion, Chile) and his colleagues started a search for Cepheid-type variable stars in NGC 300. These stars constitute a key element in the measurement of distances in the Universe. It has been known since many years that the pulsation period of a Cepheid-type star depends on its intrinsic brightness (its "luminosity"). Thus, once its period has been measured, the astronomers can calculate its luminosity. 
By comparing this to the star's apparent brightness in the sky, and applying the well-known diminution of light with the second power of the distance, they can obtain the distance to the star. This fundamental method has allowed some of the most reliable measurements of distances in the Universe and has been essential for all kinds of astrophysics, from the closest stars to the remotest galaxies. Previous to Gieren's new project, only about a dozen Cepheids were known in NGC 300. However, by regularly obtaining wide-field WFI exposures of NGC 300 from July 1999 through January 2000 and carefully monitoring the apparent brightness of its brighter stars during that period, the astronomers detected more than 100 additional Cepheids . The brightness variations (in astronomical terminology: "light curves") could be determined with excellent precision from the WFI data. They showed that the pulsation periods of these Cepheids range from about 5 to 115 days. Some of these Cepheids are identified on PR Photo 18b/02 , in the middle of a very crowded field in NGC 300. When fully studied, these unique observational data will yield a new and very accurate distance to NGC 300, making this galaxy a future cornerstone in the calibration of the cosmic distance scale . Moreover, they will also allow to understand in more detail how the brightness of a Cepheid-type star depends on its chemical composition, currently a major uncertainty in the application of the Cepheid method to the calibration of the extragalactic distance scale. Indeed, the effect of the abundance of different elements on the luminosity of a Cepheid can be especially well measured in NGC 300 due to the existence of large variations of these abundances in the stars located in the disk of this galaxy. Gieren and his group, in collaboration with astronomers Fabio Bresolin and Rolf Kudritzki (Institute of Astronomy, Hawaii, USA) are currently measuring the variations of these chemical abundances in stars in the disk of NGC 300, by means of spectra of about 60 blue supergiant stars, obtained with the FORS multi-mode instruments at the ESO Very Large Telescope (VLT) on Paranal. These stars, that are among the optically brightest in NGC 300, were first identified in the WFI images of this galaxy obtained in different colours - the same that were used to produce PR Photo 18a/02 . The nature of those stars was later spectroscopically confirmed at the VLT. As an important byproduct of these measurements, the luminosities of the blue supergiant stars in NGC 300 will themselves be calibrated (as a new cosmic "standard candle"), taking advantage of their stellar wind properties that can be measured from the VLT spectra. The WFI Cepheid observations in NGC 300, as well as the VLT blue supergiant star observations, form part of a large research project recently initiated by Gieren and his group that is concerned with the improvement of various stellar distance indicators in nearby galaxies (the "ARAUCARIA" project ). Clues on star formation history in NGC 300 ESO PR Photo 18c/02 ESO PR Photo 18c/02 [Preview - JPEG: 440 x 400 pix - 63k] [Normal - JPEG: 1200 x 1091 pix - 664k] [Full-Res - JPEG: 5515 x 5014 pix - 14.3M] Caption : PR Photo 18c/02 displays NGC 300, as seen through a narrow optical filter (H-alpha) in the red light of hydrogen atoms. A population of intrinsically bright and young stars turned "on" just a few million years ago. 
Their radiation and strong stellar winds have shaped many of the clouds of ionized hydrogen gas ("HII shells") seen in this photo. The "rings" near some of the bright stars are caused by internal reflections in the telescope. Technical information about this photo is available below.. But there is much more to discover on these WFI images of NGC 300! The WFI images obtained in several broad and narrow band filters from the ultraviolet to the near-infrared spectral region (U, B, V, R, I and H-alpha) allow a detailed study of groups of heavy, hot stars (known as "OB associations") and a large number of huge clouds of ionized hydrogen ("HII shells") in this galaxy. Corresponding studies have been carried out by Gieren's group, resulting in the discovery of an amazing number of OB associations, including a number of giant associations. These investigations, taken together with the observed distribution of the pulsation periods of the Cepheids, allow to better understand the history of star formation in NGC 300. For example, three distinct peaks in the number distribution of the pulsation periods of the Cepheids seem to indicate that there have been at least three different bursts of star formation within the past 100 million years. The large number of OB associations and HII shells ( PR Photo 18c/02 ) furthermore indicate the presence of a numerous, very young stellar population in NGC 300, aged only a few million years. Dark matter and the observed shapes of distant galaxies In early 2002, Thomas Erben and Mischa Schirmer from the "Institut für Astrophysik and extraterrestrische Forschung" ( IAEF , Universität Bonn, Germany), in the course of their ASTROVIRTEL programme, identified and retrieved all available broad-band and H-alpha images of NGC 300 available in the ESO Science Data Archive. Most of these have been observed for the project by Gieren and his colleagues, described above. However, the scientific interest of the German astronomers was very different from that of their colleagues and they were not at all concerned about the main object in the field, NGC 300. In a very different approach, they instead wanted to study those images to measure the amount of dark matter in the Universe, by means of the weak gravitational lensing effect produced by distant galaxy clusters. Various observations, ranging from the measurement of internal motions ("rotation curves") in spiral galaxies to the presence of hot X-ray gas in clusters of galaxies and the motion of galaxies in those clusters, indicate that there is about ten times more matter in the Universe than what is observed in the form of stars, gas and galaxies ("luminous matter"). As this additional matter does not emit light at any wavelengths, it is commonly referred to as "dark" matter - its true nature is yet entirely unclear. Insight into the distribution of dark matter in the Universe can be gained by looking at the shapes of images of very remote galaxies, billions of light-years away, cf. ESO PR 24/00. Light from such distant objects travels vast distances through space before arriving here on Earth, and whenever it passes heavy clusters of galaxies, it is bent a little due to the associated gravitational field. Thus, in long-exposure, high-quality images, this "weak lensing" effect can be perceived as a coherent pattern of distortion of the images of background galaxies. 
Gravitational lensing in the NGC 300 field ESO PR Photo 18d/02 ESO PR Photo 18d/02 [Preview - JPEG: 400 x 495 pix - 82k] [Full-Res - JPEG: 1304 x 1615 pix - 3.2M] Caption : PR Photo 18d/02 shows the distant cluster of galaxies CL0053-37 , as imaged on the WFI photo of the NGC 300 sky field. The elongated distribution of the cluster galaxies, as well as the presence of two large, early-type elliptical galaxies indicate that this cluster is still in the process of formation. Some of the galaxies appear to be merging. From the measured redshift ( z = 0.1625), a distance of about 2.1 billion light-years is deduced. Technical information about this photo is available below. ESO PR Photo 18e/02 ESO PR Photo 18e/02 [Preview - JPEG: 400 x 567 pix - 89k] [Normal - JPEG: 723 x 1024 pix - 424k] Caption : PR Photo 18e/02 is a "map" of the dark matter distribution (black contours) in the cluster of galaxies CL0053-37 (shown in PR Photo 18d/02 ), as obtained from the weak lensing effects detected in the WFI images, and the X-ray flux (green contours) taken from the All-Sky Survey carried out by the ROSAT satellite observatory. The distribution of galaxies resembles the elongated, dark-matter profile. Because of ROSAT's limited image sharpness (low "angular resolution"), it cannot be entirely ruled out that the observed X-ray emission is due to an active nucleus of a galaxy in CL0053-37, or even a foreground stellar binary system in NGC 300. The WFI NGC 300 images appeared promising for gravitational lensing research because of the exceptionally long total exposure time. Although the large foreground galaxy NGC 300 would block the light of tens of thousands of galaxies in the background, a huge number of others would still be visible in the outskirts of this sky field, making a search for clusters of galaxies and associated lensing effects quite feasible. To ensure the best possible image sharpness in the combined image, and thus to obtain the most reliable measurements of the shapes of the background objects, only red (R-band) images obtained under the best seeing conditions were combined. In order to provide additional information about the colours of these faint objects, a similar approach was adopted for images in the other bands as well. The German astronomers indeed measured a significant lensing effect for one of the galaxy clusters in the field ( CL0053-37 , see PR Photo 18d/02 ); the images of background galaxies around this cluster were noticeably distorted in the direction tangential to the cluster center. Based on the measured degree of distortion, a map of the distribution of (dark) matter in this direction was constructed ( PR Photo 18e/02 ). The separation of unlensed foreground (bluer) and lensed background galaxies (redder) greatly profited from the photometric measurements done by Gieren's group in the course of their work on the Cepheids in NGC 300. Assuming that the lensed background galaxies lie at a mean redshift of 1.0, i.e. a distance of 8 billion light-years, a mass of about 2 x 10 14 solar masses was obtained for the CL0053-37 cluster. This lensing analysis in the NGC 300 field is part of the Garching-Bonn Deep Survey (GaBoDS) , a weak gravitational lensing survey led by Peter Schneider (IAEF). GaBoDS is based on exposures made with the WFI and until now a sky area of more than 12 square degrees has been imaged during very good seeing conditions. 
Once complete, this investigation will allow more insight into the distribution and cosmological evolution of galaxy cluster masses, which in turn provide very useful information about the structure and history of the Universe. One hundred thousand galaxies ESO PR Photo 18f/02 ESO PR Photo 18f/02 [Preview - JPEG: 400 x 526 pix - 93k] [Full-Res - JPEG: 756 x 994 pix - 1.0M] Caption : PR Photo 18f/02 shows a group of galaxies , seen on the NGC 300 images. They are all quite red and their similar colours indicate that they must be about equally distant. They probably constitute a distant cluster, now in the stage of formation. Technical information about this photo is available below. ESO PR Photo 18g/02 ESO PR Photo 18g/02 [Preview - JPEG: 469 x 400 pix - xxk] [Full-Res - JPEG: 1055 x 899 pix - 968k] Caption : PR Photo 18g/02 shows an area in the outer regions of NGC 300. Disks of spiral galaxies are usually quite "thin" (some hundred light-years), as compared to their radial extent (tens of thousands of light-years across). In areas where only small amounts of dust are present, it is possible to see much more distant galaxies right through the disk of NGC 300 , as demonstrated by this image. Technical information about this photo is available below. ESO PR Photo 18h/02 ESO PR Photo 18h/02 [Preview - JPEG: 451 x 400 pix - 89k] [Normal - JPEG: 902 x 800 pix - 856k] [Full-Res - JPEG: 2439 x 2163 pix - 6.0M] Caption : PR Photo 18h/02 is an astronomers' joy ride to infinity. Such a rarely seen view of our universe imparts a feeling of the vast distances in space. In the upper half of the image, the outer region of NGC 300 is resolved into innumerable stars, while in the lower half, myriads of galaxies - a thousand times more distant - catch the eye. In reality, many of them are very similar to NGC 300, they are just much more remote. In addition to allowing a detailed investigation of dark matter and lensing effects in this field, the present, very "deep" colour image of NGC 300 invites to perform a closer inspection of the background galaxy population itself . No less than about 100,000 galaxies of all types are visible in this amazing image. Three known quasars ([ICS96] 005342.1-375947, [ICS96] 005236.1-374352, [ICS96] 005336.9-380354) with redshifts 2.25, 2.35 and 2.75, respectively, happen to lie inside this sky field, together with many interacting galaxies, some of which feature tidal tails. There are also several groups of highly reddened galaxies - probably distant clusters in formation, cf. PR Photo 18f/02 . Others are seen right through the outer regions of NGC 300, cf. PR Photo 18g/02 . More detailed investigations of the numerous galaxies in this field are now underway. From the nearby spiral galaxy NGC 300 to objects in the young Universe, it is all there, truly an astronomical treasure trove, cf. PR Photo 18h/02 ! Notes [1]: "NGC" means "New General Catalogue" (of nebulae and clusters) that was published in 1888 by J.L.E. Dreyer in the "Memoirs of the Royal Astronomical Society". [2]: Other colour composite images from the Wide-Field Imager at the MPG/ESO 2.2-m telescope at the La Silla Observatory are available at the ESO Outreach website at http://www.eso.org/esopia"bltxt">Tarantula Nebula in the LMC, cf. ESO PR Photos 14a-g/02. [3]: 1 Terabyte = 10 12 byte = 1000 Gigabyte = 1 million million byte. 
Technical information about the photos PR Photo 18a/02 and all cutouts were made from 110 WFI images obtained in the B-band (total exposure time 11.0 hours, rendered as blue), 105 images in the V-band (10.4 hours, green), 42 images in the R-band (4.2 hours, red) and 21 images through a H-alpha filter (5.1 hours, red). In total, 278 images of NGC 300 have been assembled to produce this colour image, together with about as many calibration images (biases, darks and flats). 150 GB of hard disk space were needed to store all uncompressed raw data, and about 1 TB of temporary files was produced during the extensive data reduction. Parallel processing of all data sets took about two weeks on a four-processor Sun Enterprise 450 workstation. The final colour image was assembled in Adobe Photoshop. To better show all details, the overall brightness of NGC 300 was reduced as compared to the outskirts of the field. The (red) "rings" near some of the bright stars originate from the H-alpha frames - they are caused by internal reflections in the telescope. The images were prepared by Mischa Schirmer at the Institut für Astrophysik und Extraterrestrische Forschung der Universität Bonn (IAEF) by means of a software pipeline specialised for reduction of multiple CCD wide-field imaging camera data. The raw data were extracted from the public sector of the ESO Science Data Archive. The extensive observations were performed at the ESO La Silla Observatory by Wolfgang Gieren, Pascal Fouque, Frederic Pont, Hermann Boehnhardt and La Silla staff, during 34 nights between July 1999 and January 2000. Some additional observations taken during the second half of 2000 were retrieved by Mischa Schirmer and Thomas Erben from the ESO archive. CD-ROM with full-scale NGC 300 image soon available PR Photo 18a/02 has been compressed by a factor 4 (2 x 2 rebinning). For PR Photos 18b-h/02 , the largest-size versions of the images are shown at the original scale (1 pixel = 0.238 arcsec). A full-resolution TIFF-version (approx. 8000 x 8000 pix; 200 Mb) of PR Photo 18a/02 will shortly be made available by ESO on a special CD-ROM, together with some other WFI images of the same size. An announcement will follow in due time.
NASA Astrophysics Data System (ADS)
2004-12-01
On December 9-10, 2004, the ESO Paranal Observatory was honoured with an overnight visit by His Excellency the President of the Republic of Chile, Ricardo Lagos and his wife, Mrs. Luisa Duran de Lagos. The high guests were welcomed by the ESO Director General, Dr. Catherine Cesarsky, ESO's representative in Chile, Mr. Daniel Hofstadt, and Prof. Maria Teresa Ruiz, Head of the Astronomy Department at the Universidad de Chile, as well as numerous ESO staff members working at the VLT site. The visit was characterised as private, and the President spent a considerable time in pleasant company with the Paranal staff, talking with and getting explanations from everybody. The distinguished visitors were shown the various high-tech installations at the observatory, including the Interferometric Tunnel with the VLTI delay lines and the first Auxiliary Telescope. Explanations were given by ESO astronomers and engineers and the President, a keen amateur astronomer, gained a good impression of the wide range of exciting research programmes that are carried out with the VLT. President Lagos showed a deep interest and impressed everyone present with many, highly relevant questions. Having enjoyed the spectacular sunset over the Pacific Ocean from the Residence terrace, the President met informally with the Paranal employees who had gathered for this unique occasion. Later, President Lagos visited the VLT Control Room from where the four 8.2-m Unit Telescopes and the VLT Interferometer (VLTI) are operated. Here, the President took part in an observing sequence of the spiral galaxy NGC 1097 (see PR Photo 35d/04) from the console of the MELIPAL telescope. After one more visit to the telescope platform at the top of Paranal, the President and his wife left the Observatory in the morning of December 10, 2004, flying back to Santiago. ESO PR Photo 35e/04 ESO PR Photo 35e/04 President Lagos Meets with ESO Staff at the Paranal Residencia [Preview - JPEG: 400 x 267pix - 144k] [Normal - JPEG: 640 x 427 pix - 240k] ESO PR Photo 35f/04 ESO PR Photo 35f/04 The Presidential Couple with Professor Maria Teresa Ruiz and the ESO Director General [Preview - JPEG: 500 x 400 pix - 224k] [Normal - JPEG: 1000 x 800 pix - 656k] [FullRes - JPEG: 1575 x 1260 pix - 1.0M] ESO PR Photo 35g/04 ESO PR Photo 35g/04 President Lagos with ESO Staff [Preview - JPEG: 500 x 400 pix - 192k] [Normal - JPEG: 1000 x 800 pix - 592k] [FullRes - JPEG: 1575 x 1200 pix - 1.1M] Captions: ESO PR Photo 35e/04 was obtained during President Lagos' meeting with ESO Staff at the Paranal Residencia. On ESO PR Photo 35f/04, President Lagos and Mrs. Luisa Duran de Lagos are seen at a quiet moment during the visit to the VLT Control Room, together with Prof. Maria Teresa Ruiz (far right), Head of the Astronomy Department at the Universidad de Chile, and the ESO Director General. ESO PR Photo 35g/04 shows President Lagos with some ESO staff members in the Paranal Residencia. VLT obtains a splendid photo of a unique galaxy, NGC 1097 ESO PR Photo 35d/04 ESO PR Photo 35d/04 Spiral Galaxy NGC 1097 (Melipal + VIMOS) [Preview - JPEG: 400 x 525 pix - 181k] [Normal - JPEG: 800 x 1049 pix - 757k] [FullRes - JPEG: 2296 x 3012 pix - 7.9M] Captions: ESO PR Photo 35d/04 is an almost-true colour composite based on three images made with the multi-mode VIMOS instrument on the 8.2-m Melipal (Unit Telescope 3) of ESO's Very Large Telescope. They were taken on the night of December 9-10, 2004, in the presence of the President of the Republic of Chile, Ricardo Lagos. 
Details are available in the Technical Note below. A unique and very beautiful image was obtained with the VIMOS instrument with President Lagos at the control desk. Located at a distance of about 45 million light-years in the southern constellation Fornax (the Furnace), NGC 1097 is a relatively bright, barred spiral galaxy of type SBb, seen face-on. At magnitude 9.5, and thus just 25 times fainter than the faintest object that can be seen with the unaided eye, it appears in small telescopes as a bright, circular disc. ESO PR Photo 35d/04, taken on the night of December 9 to 10, 2004 with the VIsible Multi-Object Spectrograph ("VIMOS), a four-channel multiobject spectrograph and imager attached to the 8.2-m VLT Melipal telescope, shows that the real structure is much more complicated. NGC 1097 is indeed a most interesting object in many respects. As this striking image reveals, NGC 1097 presents a centre that consists of a broken ring of bright knots surrounding the galaxy's nucleus. The sizes of these knots - presumably gigantic bubbles of hydrogen atoms having lost one electron (HII regions) through the intense radiation from luminous massive stars - range from roughly 750 to 2000 light-years. The presence of these knots suggests that an energetic burst of star formation has recently occurred. NGC 1097 is also known as an example of the so-called LINER (Low-Ionization Nuclear Emission Region Galaxies) class. Objects of this type are believed to be low-luminosity examples of Active Galactic Nuclei (AGN), whose emission is thought to arise from matter (gas and stars) falling into oblivion in a central black hole. There is indeed much evidence that a supermassive black hole is located at the very centre of NGC 1097, with a mass of several tens of million times the mass of the Sun. This is at least ten times more massive than the central black hole in our own Milky Way. However, NGC 1097 possesses a comparatively faint nucleus only, and the black hole in its centre must be on a very strict "diet": only a small amount of gas and stars is apparently being swallowed by the black hole at any given moment. A turbulent past As can be clearly seen in the upper part of PR Photo 35d/04, NGC 1097 also has a small galaxy companion; it is designated NGC 1097A and is located about 42,000 light-years away from the centre of NGC 1097. This peculiar elliptical galaxy is 25 times fainter than its big brother and has a "box-like" shape, not unlike NGC 6771, the smallest of the three galaxies that make up the famous Devil's Mask, cf. ESO PR Photo 12/04. There is evidence that NGC 1097 and NGC 1097A have been interacting in the recent past. Another piece of evidence for this galaxy's tumultuous past is the presence of four jets - not visible on this image - discovered in the 1970's on photographic plates. These jets are now believed to be the captured remains of a disrupted dwarf galaxy that passed through the inner part of the disc of NGC 1097. Moreover, another interesting feature of this active galaxy is the fact that no less than two supernovae were detected inside it within a time span of only four years. SN 1999eu was discovered by Japanese amateur Masakatsu Aoki (Toyama, Japan) on November 5, 1999. This 17th-magnitude supernova was a peculiar Type II supernova, the end result of the core collapse of a very massive star. And in the night of January 5 to 6, 2003, Reverend Robert Evans (Australia) discovered another Type II supernova of 15th magnitude. 
Also visible in this very nice image which was taken during very good sky conditions - the seeing was well below 1 arcsec - are a multitude of background galaxies of different colours and shapes. Given the fact that the total exposure time for this three-colour image was just 11 min, it is a remarkable feat, demonstrating once again the very high efficiency of the VLT.
Aladin Lite: Lightweight sky atlas for browsers
NASA Astrophysics Data System (ADS)
Boch, Thomas
2014-02-01
Aladin Lite is a lightweight version of the Aladin tool, running in the browser and geared towards simple visualization of a sky region. It allows visualization of image surveys (JPEG multi-resolution HEALPix all-sky surveys) and permits superimposing tabular (VOTable) and footprint (STC-S) data. Aladin Lite is powered by HTML5 canvas technology, is easily embeddable on any web page, and can also be controlled through a JavaScript API.
Deepest Wide-Field Colour Image in the Southern Sky
NASA Astrophysics Data System (ADS)
2003-01-01
LA SILLA CAMERA OBSERVES CHANDRA DEEP FIELD SOUTH ESO PR Photo 02a/03 ESO PR Photo 02a/03 [Preview - JPEG: 400 x 437 pix - 95k] [Normal - JPEG: 800 x 873 pix - 904k] [HiRes - JPEG: 4000 x 4366 pix - 23.1M] Caption : PR Photo 02a/03 shows a three-colour composite image of the Chandra Deep Field South (CDF-S) , obtained with the Wide Field Imager (WFI) camera on the 2.2-m MPG/ESO telescope at the ESO La Silla Observatory (Chile). It was produced by the combination of about 450 images with a total exposure time of nearly 50 hours. The field measures 36 x 34 arcmin 2 ; North is up and East is left. Technical information is available below. The combined efforts of three European teams of astronomers, targeting the same sky field in the southern constellation Fornax (The Oven) have enabled them to construct a very deep, true-colour image - opening an exceptionally clear view towards the distant universe . The image ( PR Photo 02a/03 ) covers an area somewhat larger than the full moon. It displays more than 100,000 galaxies, several thousand stars and hundreds of quasars. It is based on images with a total exposure time of nearly 50 hours, collected under good observing conditions with the Wide Field Imager (WFI) on the MPG/ESO 2.2m telescope at the ESO La Silla Observatory (Chile) - many of them extracted from the ESO Science Data Archive . The position of this southern sky field was chosen by Riccardo Giacconi (Nobel Laureate in Physics 2002) at a time when he was Director General of ESO, together with Piero Rosati (ESO). It was selected as a sky region towards which the NASA Chandra X-ray satellite observatory , launched in July 1999, would be pointed while carrying out a very long exposure (lasting a total of 1 million seconds, or 278 hours) in order to detect the faintest possible X-ray sources. The field is now known as the Chandra Deep Field South (CDF-S) . The new WFI photo of CDF-S does not reach quite as deep as the available images of the "Hubble Deep Fields" (HDF-N in the northern and HDF-S in the southern sky, cf. e.g. ESO PR Photo 35a/98 ), but the field-of-view is about 200 times larger. The present image displays about 50 times more galaxies than the HDF images, and therefore provides a more representative view of the universe . The WFI CDF-S image will now form a most useful basis for the very extensive and systematic census of the population of distant galaxies and quasars, allowing at once a detailed study of all evolutionary stages of the universe since it was about 2 billion years old . These investigations have started and are expected to provide information about the evolution of galaxies in unprecedented detail. They will offer insights into the history of star formation and how the internal structure of galaxies changes with time and, not least, throw light on how these two evolutionary aspects are interconnected. GALAXIES IN THE WFI IMAGE ESO PR Photo 02b/03 ESO PR Photo 02b/03 [Preview - JPEG: 488 x 400 pix - 112k] [Normal - JPEG: 896 x 800 pix - 1.0M] [Full-Res - JPEG: 2591 x 2313 pix - 8.6M] Caption : PR Photo 02b/03 contains a collection of twelve subfields from the full WFI Chandra Deep Field South (WFI CDF-S), centred on (pairs or groups of) galaxies. Each of the subfields measures 2.5 x 2.5 arcmin 2 (635 x 658 pix 2 ; 1 pixel = 0.238 arcsec). North is up and East is left. Technical information is available below. 
The WFI CDF-S colour image - of which the full field is shown in PR Photo 02a/03 - was constructed from all available observations in the optical B- ,V- and R-bands obtained under good conditions with the Wide Field Imager (WFI) on the 2.2-m MPG/ESO telescope at the ESO La Silla Observatory (Chile), and now stored in the ESO Science Data Archive. It is the "deepest" image ever taken with this instrument. It covers a sky field measuring 36 x 34 arcmin 2 , i.e., an area somewhat larger than that of the full moon. The observations were collected during a period of nearly four years, beginning in January 1999 when the WFI instrument was first installed (cf. ESO PR 02/99 ) and ending in October 2002. Altogether, nearly 50 hours of exposure were collected in the three filters combined here, cf. the technical information below. Although it is possible to identify more than 100,000 galaxies in the image - some of which are shown in PR Photo 02b/03 - it is still remarkably "empty" by astronomical standards. Even the brightest stars in the field (of visual magnitude 9) can hardly be seen by human observers with binoculars. In fact, the area density of bright, nearby galaxies is only half of what it is in "normal" sky fields. Comparatively empty fields like this one provide an unsually clear view towards the distant regions in the universe and thus open a window towards the earliest cosmic times . Research projects in the Chandra Deep Field South ESO PR Photo 02c/03 ESO PR Photo 02c/03 [Preview - JPEG: 400 x 513 pix - 112k] [Normal - JPEG: 800 x 1026 pix - 1.2M] [Full-Res - JPEG: 1717 x 2201 pix - 5.5M] ESO PR Photo 02d/03 ESO PR Photo 02d/03 [Preview - JPEG: 400 x 469 pix - 112k] [Normal - JPEG: 800 x 937 pix - 1.0M] [Full-Res - JPEG: 2545 x 2980 pix - 10.7M] Caption : PR Photo 02c-d/03 shows two sky fields within the WFI image of CDF-S, reproduced at full (pixel) size to illustrate the exceptional information richness of these data. The subfields measure 6.8 x 7.8 arcmin 2 (1717 x 1975 pixels) and 10.1 x 10.5 arcmin 2 (2545 x 2635 pixels), respectively. North is up and East is left. Technical information is available below. Astronomers from different teams and disciplines have been quick to join forces in a world-wide co-ordinated effort around the Chandra Deep Field South. Observations of this area are now being performed by some of the most powerful astronomical facilities and instruments. They include space-based X-ray and infrared observations by the ESA XMM-Newton , the NASA CHANDRA , Hubble Space Telescope (HST) and soon SIRTF (scheduled for launch in a few months), as well as imaging and spectroscopical observations in the infrared and optical part of the spectrum by telescopes at the ground-based observatories of ESO (La Silla and Paranal) and NOAO (Kitt Peak and Tololo). A huge database is currently being created that will help to analyse the evolution of galaxies in all currently feasible respects. All participating teams have agreed to make their data on this field publicly available, thus providing the world-wide astronomical community with a unique opportunity to perform competitive research, joining forces within this vast scientific project. Concerted observations The optical true-colour WFI image presented here forms an important part of this broad, concerted approach. It combines observations of three scientific teams that have engaged in complementary scientific projects, thereby capitalizing on this very powerful combination of their individual observations. 
The following teams are involved in this work: * COMBO-17 (Classifying Objects by Medium-Band Observations in 17 filters) : an international collaboration led by Christian Wolf and other scientists at the Max-Planck-Institut für Astronomie (MPIA, Heidelberg, Germany). This team used 51 hours of WFI observing time to obtain images through five broad-band and twelve medium-band optical filters in the visual spectral region in order to measure the distances (by means of "photometric redshifts") and star-formation rates of about 10,000 galaxies, thereby also revealing their evolutionary status. * EIS (ESO Imaging Survey) : a team of visiting astronomers from the ESO community and beyond, led by Luiz da Costa (ESO). They observed the CDF-S for 44 hours in six optical bands with the WFI camera on the MPG/ESO 2.2-m telescope and 28 hours in two near-infrared bands with the SOFI instrument at the ESO 3.5-m New Technology Telescope (NTT) , both at La Silla. These observations form part of the Deep Public Imaging Survey that covers a total sky area of 3 square degrees. * GOODS (The Great Observatories Origins Deep Survey) : another international team (on the ESO side, led by Catherine Cesarsky ) that focusses on the coordination of deep space- and ground-based observations on a smaller, central area of the CDF-S in order to image the galaxies in many differerent spectral wavebands, from X-rays to radio. GOODS has contributed with 40 hours of WFI time for observations in three broad-band filters that were designed for the selection of targets to be spectroscopically observed with the ESO Very Large Telescope (VLT) at the Paranal Observatory (Chile), for which over 200 hours of observations are planned. About 10,000 galaxies will be spectroscopically observed in order to determine their redshift (distance), star formation rate, etc. Another important contribution to this large research undertaking will come from the GEMS project. This is a "HST treasury programme" (with Hans-Walter Rix from MPIA as Principal Investigator) which observes the 10,000 galaxies identified in COMBO-17 - and eventually the entire WFI-field with HST - to show the evolution of their shapes with time. Great questions With the combination of data from many wavelength ranges now at hand, the astronomers are embarking upon studies of the many different processes in the universe. They expect to shed more light on several important cosmological questions, such as: * How and when was the first generation of stars born? * When exactly was the neutral hydrogen in the universe ionized the first time by powerful radiation emitted from the first stars and active galactic nuclei? * How did galaxies and groups of galaxies evolve during the past 13 billion years? * What is the true nature of those elusive objects that are only seen at the infrared and submillimetre wavelengths (cf. ESO PR 23/02 )? * Which fraction of galaxies had an "active" nucleus (probably with a black hole at the centre) in their past, and how long did this phase last? Moreover, since these extensive optical observations were obtained in the course of a dozen observing periods during several years, it is also possible to perform studies of certain variable phenomena: * How many variable sources are seen and what are their types and properties? * How many supernovae are detected per time interval, i.e. what is the supernovae frequency at different cosmic epochs? * How do those processes depend on each other? 
This is just a short and very incomplete list of questions astronomers world-wide will address using all the complementary observations. No doubt that the coming studies of the Chandra Deep Field South - with this and other data - will be most exciting and instructive! Other wide-field images Other wide-field images from the WFI have been published in various ESO press releases during the past four years - they are also available at the WFI Photo Gallery . A collection of full-resolution files (TIFF-format) is available on a WFI CD-ROM . Technical Information The very extensive data reduction and colour image processing needed to produce these images were performed by Mischa Schirmer and Thomas Erben at the "Wide Field Expertise Center" of the Institut für Astrophysik und Extraterrestrische Forschung der Universität Bonn (IAEF) in Germany. It was done by means of a software pipeline specialised for reduction of multiple CCD wide-field imaging camera data. This pipeline is mainly based on publicly available software modules and algorithms ( EIS , FLIPS , LDAC , Terapix , Wifix ). The image was constructed from about 150 exposures in each of the following wavebands: B-band (centred at wavelength 456 nm; here rendered as blue, 15.8 hours total exposure time), V-band (540 nm; green, 15.6 hours) and R-band (652 nm; red, 17.8 hours). Only images taken under sufficiently good observing conditions (defined as seeing less than 1.1 arcsec) were included. In total, 450 images were assembled to produce this colour image, together with about as many calibration images (biases, darks and flats). More than 2 Terabyte (TB) of temporary files were produced during the extensive data reduction. Parallel processing of all data sets took about two weeks on a four-processor Sun Enterprise 450 workstation and a 1.8 GHz dual processor Linux PC. The final colour image was assembled in Adobe Photoshop. The observations were performed by ESO (GOODS, EIS) and the COMBO-17 collaboration in the period 1/1999-10/2002.
Novel Algorithm for Classification of Medical Images
NASA Astrophysics Data System (ADS)
Bhushan, Bharat; Juneja, Monika
2010-11-01
Content-based image retrieval (CBIR) methods in medical image databases have been designed to support specific tasks, such as retrieval of medical images. These methods cannot be transferred to other medical applications, since different imaging modalities require different types of processing. To enable content-based queries in diverse collections of medical images, the retrieval system must know the image class before query processing. Further, almost all existing methods deal only with the DICOM imaging format. In this paper, a novel algorithm for classifying medical images according to their modality, based on energy information obtained from the wavelet transform, is described. Two types of wavelets have been used, and it is shown that the energy obtained in either case is quite distinct for each body part. The technique can be applied to different image formats; results are shown for the JPEG format.
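A minimal sketch of the energy-feature idea, assuming PyWavelets is available: decompose the image with two different wavelets, use the normalized sub-band energies as the signature, and classify with a nearest-centroid rule. The wavelet names, decomposition level, and classification step are illustrative assumptions rather than the paper's exact configuration.

```python
# Wavelet sub-band energies as a modality/body-part signature.
import numpy as np
import pywt

def wavelet_energy_features(gray, wavelets=('haar', 'db4'), level=3):
    feats = []
    for w in wavelets:
        coeffs = pywt.wavedec2(gray.astype(np.float64), w, level=level)
        feats.append(np.sum(coeffs[0] ** 2))                 # approximation-band energy
        for (cH, cV, cD) in coeffs[1:]:                      # detail energies per level
            feats.extend([np.sum(cH ** 2), np.sum(cV ** 2), np.sum(cD ** 2)])
    feats = np.asarray(feats)
    return feats / feats.sum()                               # normalise for image size/contrast

def classify(gray, centroids):
    """centroids: dict mapping class label (e.g. body part) -> mean feature vector."""
    f = wavelet_energy_features(gray)
    return min(centroids, key=lambda k: np.linalg.norm(f - centroids[k]))
```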
LDPC-based iterative joint source-channel decoding for JPEG2000.
Pu, Lingling; Wu, Zhenyu; Bilgin, Ali; Marcellin, Michael W; Vasic, Bane
2007-02-01
A framework is proposed for iterative joint source-channel decoding of JPEG2000 codestreams. At the encoder, JPEG2000 is used to perform source coding with certain error-resilience (ER) modes, and LDPC codes are used to perform channel coding. During decoding, the source decoder uses the ER modes to identify corrupt sections of the codestream and provides this information to the channel decoder. Decoding is carried out jointly in an iterative fashion. Experimental results indicate that the proposed method requires fewer iterations and improves overall system performance.
Using Purpose-Built Functions and Block Hashes to Enable Small Block and Sub-file Forensics
2010-01-01
JPEGs. We tested precarve using the nps-2009-canon2-gen6 (Garfinkel et al., 2009) disk image. The disk image was created with a 32 MB SD card and a ... analysis of n-grams in the fragment. Fig. 1: usage of a 160 GB iPod as reported by iTunes 8.2.1 (top), as reported by the file system (bottom center), and as computed with random sampling (bottom right); note that the iTunes usage is actually in GiB, even though the program displays the "GB" label. Fig. 2: ...
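The core operation behind the block-hash approach named in the title can be sketched briefly: hash every fixed-size, sector-aligned block of a file so that individual blocks can later be matched against blocks sampled from a disk image. The 4096-byte block size and the file name below are illustrative assumptions.

```python
# Sector-aligned block hashing for sub-file forensics.
import hashlib

def block_hashes(path, block_size=4096):
    hashes = []
    with open(path, 'rb') as f:
        while True:
            block = f.read(block_size)
            if len(block) < block_size:      # ignore a short trailing block
                break
            hashes.append(hashlib.md5(block).hexdigest())
    return hashes

# Example: index a JPEG so its blocks can be recognised inside a disk image later.
# index = set(block_hashes('evidence_photo.jpg'))
```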
Lossless Astronomical Image Compression and the Effects of Random Noise
NASA Technical Reports Server (NTRS)
Pence, William
2009-01-01
In this paper we compare a variety of modern image compression methods on a large sample of astronomical images. We begin by demonstrating from first principles how the amount of noise in the image pixel values sets a theoretical upper limit on the lossless compression ratio of the image. We derive simple procedures for measuring the amount of noise in an image and for quantitatively predicting how much compression will be possible. We then compare the traditional technique of using the GZIP utility to externally compress the image with a newer technique of dividing the image into tiles, and then compressing and storing each tile in a FITS binary table structure. This tiled-image compression technique offers a choice of other compression algorithms besides GZIP, some of which are much better suited to compressing astronomical images. Our tests on a large sample of images show that the Rice algorithm provides the best combination of speed and compression efficiency. In particular, Rice typically produces 1.5 times greater compression and provides much faster compression speed than GZIP. Floating point images generally contain too much noise to be effectively compressed with any lossless algorithm. We have developed a compression technique which discards some of the useless noise bits by quantizing the pixel values as scaled integers. The integer images can then be compressed by a factor of 4 or more. Our image compression and uncompression utilities (called fpack and funpack) that were used in this study are publicly available from the HEASARC web site. Users may run these stand-alone programs to compress and uncompress their own images.
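The same FITS tiled-image convention that fpack implements is exposed by astropy, so the approach can be sketched in a few lines; the file names and the quantize_level value below are illustrative assumptions.

```python
# Rice-compress an integer image losslessly, and quantize a noisy floating-point
# image to scaled integers first so that it compresses usefully.
import numpy as np
from astropy.io import fits

int_image = np.random.poisson(100, size=(1024, 1024)).astype(np.int32)
float_image = int_image + np.random.normal(0.0, 3.0, size=int_image.shape)

# Lossless Rice compression of integer data, stored tile-by-tile in a FITS binary table.
fits.CompImageHDU(int_image, compression_type='RICE_1').writeto('int_rice.fits', overwrite=True)

# Floating-point data: quantize to scaled integers (controlled loss of noise bits), then compress.
fits.CompImageHDU(float_image, compression_type='RICE_1',
                  quantize_level=16.0).writeto('float_rice.fits', overwrite=True)
```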
Barisoni, Laura; Troost, Jonathan P; Nast, Cynthia; Bagnasco, Serena; Avila-Casado, Carmen; Hodgin, Jeffrey; Palmer, Matthew; Rosenberg, Avi; Gasim, Adil; Liensziewski, Chrysta; Merlino, Lino; Chien, Hui-Ping; Chang, Anthony; Meehan, Shane M; Gaut, Joseph; Song, Peter; Holzman, Lawrence; Gibson, Debbie; Kretzler, Matthias; Gillespie, Brenda W; Hewitt, Stephen M
2016-07-01
The multicenter Nephrotic Syndrome Study Network (NEPTUNE) digital pathology scoring system employs a novel and comprehensive methodology to document pathologic features from whole-slide images, immunofluorescence and ultrastructural digital images. To estimate inter- and intra-reader concordance of this descriptor-based approach, data from 12 pathologists (eight NEPTUNE and four non-NEPTUNE) with experience from training to 30 years were collected. A descriptor reference manual was generated and a webinar-based protocol for consensus/cross-training implemented. Intra-reader concordance for 51 glomerular descriptors was evaluated on jpeg images by seven NEPTUNE pathologists scoring 131 glomeruli three times (Tests I, II, and III), each test following a consensus webinar review. Inter-reader concordance of glomerular descriptors was evaluated in 315 glomeruli by all pathologists; interstitial fibrosis and tubular atrophy (244 cases, whole-slide images) and four ultrastructural podocyte descriptors (178 cases, jpeg images) were evaluated once by six and five pathologists, respectively. Cohen's kappa for inter-reader concordance for 48/51 glomerular descriptors with sufficient observations was moderate (0.40
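The concordance statistic reported above can be reproduced for any single descriptor with a short sketch; the example reader scores below are invented for illustration.

```python
# Cohen's kappa between two readers scoring the same glomeruli for one binary descriptor.
from sklearn.metrics import cohen_kappa_score

reader_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # descriptor present/absent per glomerulus (invented)
reader_b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]

kappa = cohen_kappa_score(reader_a, reader_b)
print(f"Cohen's kappa = {kappa:.2f}")       # values of 0.41-0.60 are conventionally "moderate"
```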
Displaying radiologic images on personal computers: image storage and compression--Part 2.
Gillespy, T; Rowberg, A H
1994-02-01
This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has been transformed into a differential image based on a differential pulse-code modulation (DPCM) algorithm. The LZW compression after the DPCM image transformation performed the best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression is comprised of three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and discrete wavelet transformation. In both methods, most of the image information is contained in a relatively few of the transformation coefficients. The quantization step reduces many of the lower order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution.
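The benefit of the DPCM pre-transformation described above is easy to demonstrate. In the sketch below, zlib (DEFLATE, an LZ77/Huffman combination) stands in for LZW, which is not in the Python standard library, and the synthetic smooth image is an illustrative assumption.

```python
# Lossless compressors do better on the differential (DPCM) image than on raw pixels.
import zlib
import numpy as np

rng = np.random.default_rng(0)
walk = np.cumsum(rng.integers(-2, 3, size=(512, 512)), axis=1) + 128   # smooth synthetic image
image = np.clip(walk, 0, 255).astype(np.uint8)

raw = image.tobytes()
dpcm = np.diff(image.astype(np.int16), axis=1, prepend=0)              # row-wise pixel differences
dpcm_bytes = (dpcm % 256).astype(np.uint8).tobytes()                   # wrap differences into bytes

print('raw  bytes after zlib :', len(zlib.compress(raw, 9)))
print('dpcm bytes after zlib :', len(zlib.compress(dpcm_bytes, 9)))    # typically far smaller
```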
Radiological Image Compression
NASA Astrophysics Data System (ADS)
Lo, Shih-Chung Benedict
The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, including CT head and body images and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square error (NMSE) of the difference image, defined as the difference between the original image and the image reconstructed at a given compression ratio, is used as a global measure of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
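The global quality measure used in the dissertation can be sketched as follows; the exact normalization may differ from the author's, so a common form, NMSE = sum((orig - recon)^2) / sum(orig^2), is assumed here.

```python
# Normalized mean-square error between an original and a reconstructed image.
import numpy as np

def nmse(original, reconstructed):
    original = original.astype(np.float64)
    reconstructed = reconstructed.astype(np.float64)
    return np.sum((original - reconstructed) ** 2) / np.sum(original ** 2)
```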
NASA Astrophysics Data System (ADS)
Zaborowicz, M.; Przybył, J.; Koszela, K.; Boniecki, P.; Mueller, W.; Raba, B.; Lewicki, A.; Przybył, K.
2014-04-01
The aim of the project was to develop software that extracts the characteristics of a greenhouse tomato from its image. Data gathered during image analysis and processing were used to build learning sets for artificial neural networks. The program can process pictures in JPEG format, acquire statistical information from each picture, and export it to an external file. The software is intended for batch analysis of the collected research material, with the obtained information saved as a CSV file. The program extracts 33 independent parameters to describe each tested image. The application is dedicated to the processing and image analysis of greenhouse tomatoes, but it can also be used to analyse other fruits and vegetables of spherical shape.
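A minimal sketch of the batch workflow described above, assuming Pillow and NumPy are available: read every JPEG in a folder, compute a few per-image statistics, and append them to a CSV file. The folder name, the particular statistics, and the column names are illustrative; the actual tool extracts 33 parameters per image.

```python
# Batch-analyse JPEGs and write per-image statistics to a CSV file.
import csv
from pathlib import Path

import numpy as np
from PIL import Image

def analyse_folder(folder, out_csv):
    with open(out_csv, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['file', 'mean_R', 'mean_G', 'mean_B', 'std_gray', 'area_px'])
        for path in sorted(Path(folder).glob('*.jpg')):
            rgb = np.asarray(Image.open(path).convert('RGB'), dtype=np.float64)
            gray = rgb.mean(axis=2)
            mask = gray < 250                    # crude object/background split (light background assumed)
            writer.writerow([path.name,
                             rgb[..., 0][mask].mean(), rgb[..., 1][mask].mean(),
                             rgb[..., 2][mask].mean(), gray[mask].std(), int(mask.sum())])

analyse_folder('tomato_images', 'tomato_features.csv')   # hypothetical folder and output names
```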
Morgan, Karen L.M.; Krohn, M. Dennis; Doran, Kara; Guy, Kristy K.
2013-01-01
The U.S. Geological Survey (USGS) conducts baseline and storm response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms (Morgan, 2009). On February 7, 2012, the USGS conducted an oblique aerial photographic survey from Pensacola, Fla., to Breton Islands, La., aboard a Piper Navajo Chieftain at an altitude of 500 feet (ft) and approximately 1,000 ft offshore. This mission was flown to collect baseline data for assessing incremental changes since the last survey, and the data can be used in the assessment of future coastal change. The photographs provided here are Joint Photographic Experts Group (JPEG) images. The photograph locations are an estimate of the position of the aircraft and do not indicate the location of the feature in the images (see the Navigation Data page). These photos document the configuration of the barrier islands and other coastal features at the time of the survey. The header of each photo is populated with time of collection, Global Positioning System (GPS) latitude, GPS longitude, GPS position (latitude and longitude), keywords, credit, artist (photographer), caption, copyright, and contact information using EXIFtools (Subino and others, 2012). Photographs can be opened directly with any JPEG-compatible image viewer by clicking on a thumbnail on the contact sheet. Table 1 provides detailed information about the assigned location, name, date, and time the photograph was taken, along with links to the photograph. In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML files were created using the photographic navigation files (see the Photos and Maps page).
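Because each JPEG carries its time and position in the EXIF header, a few lines of Python can recover the basic tags. This is a generic sketch assuming a recent Pillow release; the USGS release itself was tagged with EXIFtools, and the field names present in the actual files may differ.

```python
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS  # pip install pillow

def read_exif(path: str) -> dict:
    """Return the named EXIF tags of a JPEG, with the GPS sub-directory expanded."""
    exif = Image.open(path).getexif()
    info = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    gps_ifd = exif.get_ifd(0x8825)  # 0x8825 is the GPSInfo IFD pointer
    info["GPSInfo"] = {GPSTAGS.get(t, t): v for t, v in gps_ifd.items()}
    return info

# tags = read_exif("baseline_photo_0001.jpg")   # hypothetical file name
# print(tags.get("DateTime"), tags["GPSInfo"].get("GPSLatitude"))
```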
Recognizable or Not: Towards Image Semantic Quality Assessment for Compression
NASA Astrophysics Data System (ADS)
Liu, Dong; Wang, Dandan; Li, Houqiang
2017-12-01
Traditionally, image compression was optimized for the pixel-wise fidelity or the perceptual quality of the compressed images given a bit-rate budget. Recently, however, compressed images are more and more utilized for automatic semantic analysis tasks such as recognition and retrieval. For these tasks, we argue that the optimization target of compression is no longer perceptual quality, but the utility of the compressed images in the given automatic semantic analysis task. Accordingly, we propose to evaluate the quality of the compressed images neither at the pixel level nor at the perceptual level, but at the semantic level. In this paper, we make preliminary efforts towards image semantic quality assessment (ISQA), focusing on the task of optical character recognition (OCR) from compressed images. We propose a full-reference ISQA measure by comparing the features extracted from text regions of original and compressed images. We then propose to integrate the ISQA measure into an image compression scheme. Experimental results show that our proposed ISQA measure is much better than PSNR and SSIM in evaluating the semantic quality of compressed images; accordingly, adopting our ISQA measure to optimize compression for OCR leads to significant bit-rate saving compared to using PSNR or SSIM. Moreover, we perform a subjective test of text recognition from compressed images, and observe that our ISQA measure has high consistency with subjective recognizability. Our work explores new dimensions in image quality assessment, and demonstrates a promising direction towards higher compression ratios for specific semantic analysis tasks.
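As a rough illustration of the full-reference idea (comparing features extracted from text regions of the original and the compressed image), the sketch below scores a compressed crop against its original by the correlation of their gradient-magnitude maps. The gradient feature is a stand-in assumption, not the feature set used in the paper.

```python
import numpy as np

def gradient_magnitude(img: np.ndarray) -> np.ndarray:
    """Simple finite-difference gradient magnitude of a grayscale image."""
    gy, gx = np.gradient(img.astype(np.float64))
    return np.hypot(gx, gy)

def semantic_quality_score(original: np.ndarray, compressed: np.ndarray,
                           text_region) -> float:
    """Correlation of gradient features inside a text region: values near 1.0 mean
    the edges OCR relies on are preserved, lower values mean they were smeared."""
    f_ref = gradient_magnitude(original[text_region]).ravel()
    f_cmp = gradient_magnitude(compressed[text_region]).ravel()
    return float(np.corrcoef(f_ref, f_cmp)[0, 1])

# region = (slice(100, 180), slice(50, 400))   # hypothetical text bounding box
# print(semantic_quality_score(orig_gray, jpeg_gray, region))
```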
Method for measuring anterior chamber volume by image analysis
NASA Astrophysics Data System (ADS)
Zhai, Gaoshou; Zhang, Junhong; Wang, Ruichang; Wang, Bingsong; Wang, Ningli
2007-12-01
Anterior chamber volume (ACV) is very important for an oculist making a rational pathological diagnosis in patients with optic diseases such as glaucoma, yet it has always been difficult to measure accurately. In this paper, a method is devised to measure anterior chamber volumes from JPEG-formatted image files that have been converted from medical images acquired with the anterior-chamber optical coherence tomographer (AC-OCT) and its image-processing software. The corresponding algorithms for image analysis and ACV calculation are implemented in VC++, and a series of anterior chamber images of typical patients is analyzed; the calculated anterior chamber volumes are verified to be in accord with clinical observation. The results show that the measurement method is effective and feasible and has the potential to improve the accuracy of ACV calculation. Meanwhile, some measures should be taken to simplify the manual preprocessing of the images.
Image compression system and method having optimized quantization tables
NASA Technical Reports Server (NTRS)
Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)
1998-01-01
A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also provided are methods for generating a rate-distortion-optimal quantization table, for compressing digital images using the discrete cosine transform, and for operating a discrete cosine transform-based digital image compression and decompression system.
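The sketch below conveys the flavor of this approach in simplified form: gather per-frequency DCT statistics from 8x8 blocks, then, for each frequency, pick the quantization step that minimizes a distortion-plus-rate cost. It uses a per-coefficient Lagrangian search rather than the dynamic-programming optimization described in the patent, and the rate term is only an entropy estimate, so treat it as an assumption-laden illustration rather than the patented method.

```python
import numpy as np
from scipy.fft import dctn  # pip install scipy

def block_dct_coeffs(img: np.ndarray) -> np.ndarray:
    """Return an array of shape (n_blocks, 8, 8) of DCT coefficients of 8x8 blocks."""
    h, w = (d - d % 8 for d in img.shape)
    blocks = img[:h, :w].reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2).reshape(-1, 8, 8)
    return np.stack([dctn(b.astype(np.float64) - 128.0, norm="ortho") for b in blocks])

def greedy_quant_table(coeffs: np.ndarray, lam: float = 50.0,
                       candidates=range(1, 64)) -> np.ndarray:
    """For each of the 64 frequencies, choose the step q minimizing
    mean squared quantization error + lam * (entropy estimate of the quantized values)."""
    table = np.ones((8, 8), dtype=int)
    for u in range(8):
        for v in range(8):
            c = coeffs[:, u, v]
            best_q, best_cost = 1, np.inf
            for q in candidates:
                idx = np.round(c / q)
                dist = np.mean((c - idx * q) ** 2)          # distortion for this step
                _, counts = np.unique(idx, return_counts=True)
                p = counts / counts.sum()
                rate = -(p * np.log2(p)).sum()              # bits/coefficient (estimate)
                cost = dist + lam * rate
                if cost < best_cost:
                    best_q, best_cost = q, cost
            table[u, v] = best_q
    return table
```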
Adapting the ISO 20462 softcopy ruler method for online image quality studies
NASA Astrophysics Data System (ADS)
Burns, Peter D.; Phillips, Jonathan B.; Williams, Don
2013-01-01
In this paper we address the problem of no-reference image quality assessment, focusing on JPEG-corrupted images. In general, no-reference metrics do not perform equally well over the full range of distortions or across different image contents. The crosstalk between content and distortion signals influences human perception. We propose two strategies to improve the correlation between subjective and objective quality data. The first is based on grouping the images according to their spatial complexity; the second is based on a frequency analysis. Both strategies are tested on two databases available in the literature. The results show an improvement in the correlations between no-reference metrics and psycho-visual data, evaluated in terms of the Pearson correlation coefficient.
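A small sketch of the kind of evaluation described above: compute the Pearson correlation between an objective metric and subjective (MOS) scores, overall and within groups of images of similar spatial complexity. The complexity proxy used here (mean gradient magnitude) is an assumption, not necessarily the grouping criterion of the paper.

```python
import numpy as np
from scipy.stats import pearsonr  # pip install scipy

def spatial_complexity(img: np.ndarray) -> float:
    """Proxy for spatial complexity: mean gradient magnitude of the grayscale image."""
    gy, gx = np.gradient(img.astype(np.float64))
    return float(np.hypot(gx, gy).mean())

def grouped_correlation(metric_scores, mos_scores, images, n_groups: int = 3):
    """Pearson correlation of metric vs. MOS, overall and per complexity group."""
    metric_scores, mos_scores = np.asarray(metric_scores), np.asarray(mos_scores)
    complexity = np.array([spatial_complexity(im) for im in images])
    edges = np.quantile(complexity, np.linspace(0, 1, n_groups + 1))
    overall = pearsonr(metric_scores, mos_scores)[0]
    per_group = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (complexity >= lo) & (complexity <= hi)
        if mask.sum() > 2:
            per_group.append(pearsonr(metric_scores[mask], mos_scores[mask])[0])
    return overall, per_group
```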
Fpack and Funpack Utilities for FITS Image Compression and Uncompression
NASA Technical Reports Server (NTRS)
Pence, W.
2008-01-01
Fpack is a utility program for optimally compressing images in the FITS (Flexible Image Transport System) data format (see http://fits.gsfc.nasa.gov). The associated funpack program restores the compressed image file back to its original state (as long as a lossless compression algorithm is used). These programs may be run from the host operating system command line and are analogous to the gzip and gunzip utility programs except that they are optimized for FITS format images and offer a wider choice of compression algorithms. Fpack stores the compressed image using the FITS tiled image compression convention (see http://fits.gsfc.nasa.gov/fits_registry.html). Under this convention, the image is first divided into a user-configurable grid of rectangular tiles, and then each tile is individually compressed and stored in a variable-length array column in a FITS binary table. By default, fpack usually adopts a row-by-row tiling pattern. The FITS image header keywords remain uncompressed for fast access by FITS reading and writing software. The tiled image compression convention can in principle support any number of different compression algorithms. The fpack and funpack utilities call on routines in the CFITSIO library (http://hesarc.gsfc.nasa.gov/fitsio) to perform the actual compression and uncompression of the FITS images, which currently supports the GZIP, Rice, H-compress, and PLIO IRAF pixel list compression algorithms.
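fpack and funpack are standalone command-line tools, but the same FITS tiled image compression convention can also be exercised from Python via astropy, which wraps CFITSIO-compatible codecs. The snippet below is a minimal sketch under that assumption; the algorithm names follow the convention ("RICE_1", "GZIP_1", "HCOMPRESS_1", "PLIO_1").

```python
import numpy as np
from astropy.io import fits  # pip install astropy

# Write an integer image as a tile-compressed FITS file: the compressed image is
# stored in a binary-table extension while the header keywords stay readable,
# as in the tiled image compression convention used by fpack.
image = np.random.default_rng(1).integers(0, 4096, (1024, 1024)).astype(np.int32)
hdu = fits.CompImageHDU(data=image, compression_type="RICE_1")
hdu.writeto("example_compressed.fits", overwrite=True)

# Reading restores a normal image array, analogous to running funpack.
with fits.open("example_compressed.fits") as hdul:
    restored = hdul[1].data                      # compressed HDUs appear as extensions
    assert np.array_equal(restored, image)       # Rice is lossless for integer data
```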
Sharpest Ever VLT Images at NAOS-CONICA "First Light"
NASA Astrophysics Data System (ADS)
2001-12-01
Very Promising Start-Up of New Adaptive Optics Instrument at Paranal Summary A team of astronomers and engineers from French and German research institutes and ESO at the Paranal Observatory is celebrating the successful accomplishment of "First Light" for the NAOS-CONICA Adaptive Optics facility . With this event, another important milestone for the Very Large Telescope (VLT) project has been passed. Normally, the achievable image sharpness of a ground-based telescope is limited by the effect of atmospheric turbulence. However, with the Adaptive Optics (AO) technique, this drawback can be overcome and the telescope produces images that are at the theoretical limit, i.e., as sharp as if it were in space . Adaptive Optics works by means of a computer-controlled, flexible mirror that counteracts the image distortion induced by atmospheric turbulence in real time. The larger the main mirror of the telescope is, and the shorter the wavelength of the observed light, the sharper will be the images recorded. During a preceding four-week period of hard and concentrated work, the expert team assembled and installed this major astronomical instrument at the 8.2-m VLT YEPUN Unit Telescope (UT4). On November 25, 2001, following careful adjustments of this complex apparatus, a steady stream of photons from a southern star bounced off the computer-controlled deformable mirror inside NAOS and proceeded to form in CONICA the sharpest image produced so far by one of the VLT telescopes. With a core angular diameter of only 0.07 arcsec, this image is near the theoretical limit possible for a telescope of this size and at the infrared wavelength used for this demonstration (the K-band at 2.2 µm). Subsequent tests reached the spectacular performance of 0.04 arcsec in the J-band (wavelength 1.2 µm). "I am proud of this impressive achievement", says ESO Director General Catherine Cesarsky. "It shows the true potential of European science and technology and it provides a fine demonstration of the value of international collaboration. ESO and its partner institutes and companies in France and Germany have worked a long time towards this goal - with the first, extremely promising results, we shall soon be able to offer a new and fully tuned instrument to our wide research community." The NAOS adaptive optics corrector was built, under an ESO contract, by Office National d'Etudes et de Recherches Aérospatiales (ONERA) , Laboratoire d'Astrophysique de Grenoble (LAOG) and the DESPA and DASGAL laboratories of the Observatoire de Paris in France, in collaboration with ESO. The CONICA infra-red camera was built, under an ESO contract, by the Max-Planck-Institut für Astronomie (MPIA) (Heidelberg) and the Max-Planck Institut für Extraterrestrische Physik (MPE) (Garching) in Germany, in collaboration with ESO. The present event happens less than four weeks after "First Fringes" were achieved for the VLT Interferometer (VLTI) with two of the 8.2-m Unit Telescopes. No wonder that a spirit of great enthusiasm reigns at Paranal! Information for the media: ESO is producing a Video News Release ( ESO Video News Reel No. 13 ) with sequences from the NAOS-CONICA "First Light" event at Paranal, a computer animation illustrating the principle of adaptive optics in NAOS-CONICA, as well as the first astronomical images obtained. In addition to the usual distribution, this VNR will also be transmitted via satellite Friday 7 December 2001 from 09:00 to 09:15 CET (10:00 to 10:15 UT) on "Europe by Satellite" . 
These video images may be used free of charge by broadcasters. Satellite details, the script and the shotlist will be on-line from 6 December on the ESA TV Service Website http://television.esa.int. Also a pre-view Real Video Stream of the video news release will be available as of that date from this URL. Video Clip 07/01 : Various video scenes related to the NAOS-CONICA "First Light" Event ( ESO Video News Reel No. 13 ). PR Photo 33a/01 : NAOS-CONICA "First light" image of an 8-mag star. PR Photo 33b/01 : The moment of "First Light" at the YEPUN Control Consoles. PR Photo 33c/01 : Image of NGC 3603 (K-band) area (NAOS-CONICA) . PR Photo 33d/01 : Image of NGC 3603 wider field (ISAAC) PR Photo 33e/01 : I-band HST-WFPC2 image of NGC 3603 field . PR Photo 33f/01 : Animated GIF, with NAOS-CONICA (K-band) and HST-WFPC2 (I-band) images of NGC 3603 area PR Photo 33g/01 : Image of the Becklin-Neugebauer Object . PR Photo 33h/01 : Image of a very close double star . PR Photo 33i/01 : Image of a 17-magnitude reference star PR Photo 33j/01 : Image of the central area of the 30 Dor star cluster . PR Photo 33k/01 : The top of the Paranal Mountain (November 25, 2001). PR Photo 33l/01 : The NAOS-CONICA instrument attached to VLT YEPUN.. A very special moment at Paranal! First light for NAOS-CONICA at the VLT - PR Video Clip 07/01] ESO PR Video Clip 07/01 "First Light for NAOS-CONICA" (25 November 2001) (3850 frames/2:34 min) [MPEG Video+Audio; 160x120 pix; 3.6Mb] [MPEG Video+Audio; 320x240 pix; 8.9Mb] [RealMedia; streaming; 34kps] [RealMedia; streaming; 200kps] ESO Video Clip 07/01 provides some background scenes and images around the NAOS-CONICA "First Light" event on November 25, 2001 (extracted from ESO Video News Reel No. 13 ). Contents: NGC 3603 image from ISAAC and a smaller field as observed by NAOS-CONICA ; the Paranal platform in the afternoon, before the event; YEPUN and NAOS-CONICA with cryostat sounds; Tension is rising in the VLT Control Room; Wavefront Sensor display; the "Loop is Closed"; happy team members; the first corrected image on the screen; Images of NGC 3603 by HST and VLT; 30 Doradus central cluster; BN Object in Orion; Statement by the Head of the ESO Instrument Division. ESO PR Photo 33a/01 ESO PR Photo 33a/01 [Preview - JPEG: 317 x 400 pix - 27k] [Normal - JPEG: 800 x 634 pix - 176k] ESO PR Photo 33b/01 ESO PR Photo 33b/01 [Preview - JPEG: 400 x 322 pix - 176k] [Normal - JPEG: 800 x 644 pix - 360k] ESO PR Photo 33a/01 shows the first image in the infrared K-band (wavelength 2.2 µm) of a star (visual magnitude 8) obtained - before (left) and after (right) the adaptive optics was switched on (see the text). The middle panel displays the 3-D intensity profiles of these images, demonstrating the tremendous gain, both in image sharpness and central intensity. ESO PR Photo 33b/01 shows some of the NAOS-CONICA team members in the VLT Control Room at the moment of "First Light" in the night between November 25-26, 2001. From left to right: Thierry Fusco (ONERA), Clemens Storz (MPIA), Robin Arsenault (ESO), Gerard Rousset (ONERA). The numerous boxes with the many NAOS and CONICA parts arrived at the ESO Paranal Observatory on October 24, 2001. Astronomers and engineers from ESO and the participating institutes and organisations then began the painstaking assembly of these very complex instruments on one of the Nasmyth platforms on the fourth VLT 8.2-m Unit Telescope, YEPUN . Then followed days of technical tests and adjustments, working around the clock. 
In the afternoon of Sunday, November 25, the team finally declared the instrument fit to attempt its "First Light" observation. The YEPUN dome was opened at sunset and a small, rather apprehensive group gathered in the VLT Control Room, peering intensively at the computer screens over the shoulders of their colleagues, the telescope and instrument operators. Time passed imperceptibly to those present, as the basic calibrations required at this early stage to bring NAOS-CONICA to full operational state were successfully completed. Everybody sensed the special moment approaching when, finally, the telescope operator pushed a button and the giant telescope started to turn smoothly towards the first test object, an otherwise undistinguished star in our Milky Way. Its non-corrected infra-red image was recorded by the CONICA detector array and soon appeared on the computer screen. It was already very good by astronomical standards, with a diameter of only 0.50 arcsec (FWHM), cf. PR Photo 33a/01 (left). Then, by another command, the instrument operator switched on the NAOS adaptive optics system, thereby "closing the loop" for the first time on a sky field, by using that ordinary star as a reference light source to measure the atmospheric turbulence. Obediently, the deformable mirror in NAOS began to follow the "orders" that were issued 500 times per second by its powerful control computer.... As if by magic, that stellar image on the computer screen pulled itself together....! What seconds before had been a jumping, rather blurry patch of light suddenly became a rock-steady, razor-sharp and brilliant spot of light. The entire room burst into applause - there were happy faces and smiles all over, and then the operator announced the measured image diameter - a truly impressive 0.068 arcsec, already at this first try, cf. PR Photo 33a/01 (right)! All the team members who were lucky to be there sent a special thought to those many others who had also put in over four years' hard and dedicated work to make this event a reality. The time of this historical moment was November 25, 2001, 23:00 Chilean time (November 26, 2001, 02:00 am UT). During this and the following nights, more images were made of astronomical objects, opening a new chapter of the long tradition of Adaptive Optics at ESO. More information about the NAOS-CONICA international collaboration, technical details about this instrument and its special advantages are available below. The first images The star-forming region around NGC 3603 ESO PR Photo 33c/01 ESO PR Photo 33c/01 [Preview - JPEG: 326 x 400 pix - 200k] [Normal - JPEG: 651 x 800 pix - 480k] ESO PR Photo 33d/01 ESO PR Photo 33d/01 [Preview - JPEG: 348 x 400 pix - 240k] [Normal - JPEG: 695 x 800 pix - 592k] Caption: PR Photo 33c/01 displays a NAOS-CONICA image of the starburst cluster NGC 3603, obtained during the second night of NAOS-CONICA operation. The sky region shown is some 20 arcsec to the North of the centre of the cluster. NAOS was compensating atmospheric disturbances by analyzing light from the central star with its visual wavefront sensor, while CONICA was observing in the K-band. The image is nearly diffraction-limited and has a Full-Width-Half-Maximum (FWHM) diameter of 0.07 arcsec, with a central Strehl ratio of 56% (a measure of the degree of concentration of the light). The exposure lasted 300 seconds. North is up and East is left. The field measures 27 x 27 arcsec.
On PR Photo 33d/01, the sky area shown in this NAOS-CONICA high-resolution image is indicated on an earlier image of a much larger area, obtained in 1999 with the ISAAC multi-mode instrument on VLT ANTU (ESO PR 16/99). Among the first images to be obtained of astronomical objects was one of the stellar cluster NGC 3603 that is located in the Carina spiral arm in the Milky Way at a distance of about 20,000 light-years, cf. PR Photo 33c/01. With its central starburst cluster, it is one of the densest and most massive star-forming regions in our Galaxy. Some of the most massive stars - with masses up to 120 times the mass of our Sun - can be found in this cluster. For a long time astronomers have suspected that the formation of low-mass stars is suppressed by the presence of high-mass stars, but two years ago, stars with masses as low as 10% of the mass of our Sun were detected in NGC 3603 with the ISAAC multi-mode instrument at VLT ANTU, cf. PR Photo 33d/01 and ESO PR 16/99. The high stellar density in this region, however, prevented the search for objects with still lower masses, so-called Brown Dwarfs. The new, high-resolution K-band images like PR Photo 33c/01, obtained with NAOS-CONICA at YEPUN, now for the first time facilitate the study of the elusive class of brown dwarfs in such a starburst environment. This will, among other things, offer very valuable insight into the fundamental problem of the total amount of matter that is deposited into stars in star-forming regions. An illustration of the potential of Adaptive Optics ESO PR Photo 33e/01 ESO PR Photo 33e/01 [Preview - JPEG: 376 x 400 pix - 128k] [Normal - JPEG: 752 x 800 pix - 336k] ESO PR Photo 33f/01 ESO PR Photo 33f/01 [Animated GIF: 400 x 425 pix - 71k] Caption: PR Photo 33e/01 was obtained with the WFPC2 camera on the Hubble Space Telescope (HST) in the I-band (800 nm). It is a 400-sec exposure and shows the same sky region as in the NAOS-CONICA image shown in PR Photo 33c/01. PR Photo 33f/01 provides a direct comparison of the two images (animated GIF). The HST image was extracted from archival data. HST is operated by NASA and ESA. Normally, the achievable image sharpness of a ground-based telescope is limited by the effect of atmospheric turbulence. However, the Adaptive Optics (AO) technique overcomes this problem and, when the AO instrument is optimized, the telescope produces images that are at the theoretical limit, i.e., as sharp as if it were in space. The theoretical image diameter is inversely proportional to the diameter of the main mirror of the telescope and proportional to the wavelength of the observed light. Thus, the larger the telescope and the shorter the wavelength, the sharper will be the images recorded. To illustrate this, a comparison of the NAOS-CONICA image of NGC 3603 (PR Photo 33c/01) is here made with a near-infrared image obtained earlier by the Hubble Space Telescope (HST) covering the same sky area (PR Photo 33e/01). Both images are close to the theoretical limit ("diffraction limited"). However, the diameter of the VLT YEPUN mirror (8.2-m) is somewhat more than three times that of the HST mirror (2.4-m). This is "compensated" by the fact that the wavelength of the NAOS-CONICA image (2.2 µm) is about two-and-a-half times longer than that of the HST image (0.8 µm). The measured image diameters are therefore not too different, approx. 0.085 arcsec (HST) vs. approx. 0.068 arcsec (VLT).
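As a back-of-the-envelope check (not part of the press release), the standard Rayleigh criterion for the diffraction limit, with mirror diameter D and wavelength λ, reproduces both quoted figures:

$$\theta \approx 1.22\,\frac{\lambda}{D}:\qquad \theta_{\mathrm{VLT}} \approx 1.22\cdot\frac{2.2\times10^{-6}\,\mathrm{m}}{8.2\,\mathrm{m}} \approx 3.3\times10^{-7}\,\mathrm{rad} \approx 0.068'',\qquad \theta_{\mathrm{HST}} \approx 1.22\cdot\frac{0.8\times10^{-6}\,\mathrm{m}}{2.4\,\mathrm{m}} \approx 4.1\times10^{-7}\,\mathrm{rad} \approx 0.084''.$$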
Although the exposure times are similar (300 sec for the VLT image; 400 sec for the HST image), the VLT image shows considerably fainter objects. This is partly due to the larger mirror, and partly because, by observing at a longer wavelength, NAOS-CONICA can detect a host of cool low-mass stars. The Becklin-Neugebauer object and its associated nebulosity ESO PR Photo 33g/01 ESO PR Photo 33g/01 [Preview - JPEG: 299 x 400 pix - 128k] [Normal - JPEG: 597 x 800 pix - 272k] Caption: PR Photo 33g/01 is a composite (false-) colour image obtained by NAOS-CONICA of the region around the Becklin-Neugebauer object that is deeply embedded in the Orion Nebula. It is based on two exposures, one in the light of the shock-excited molecular hydrogen line (H2; wavelength 2.12 µm; here rendered as blue) and one in the broader K-band (2.2 µm; red) from ionized hydrogen. A third (green) image was produced as an "average" of the H2 and K-band images. The field-of-view measures 20 x 25 arcsec², cf. the 1 x 1 arcsec² square. North is up and east to the left. PR Photo 33g/01 is a composite image of the region around the Becklin-Neugebauer object (generally referred to as "BN"). With its associated Kleinmann-Low nebula, it is located in the Orion star-forming region at a distance of approx. 1500 light-years. It is the nearest high-mass star-forming complex. The immediate vicinity of BN (the brightest star in the image) is highly dynamic with outflows and cloudlets glowing in the light of shock-excited molecular hydrogen. While many masers and outflows have been detected, the identification of their driving sources is still lacking. Deep images in the infrared K and H bands, as well as in the light of molecular hydrogen emission, were obtained with NAOS-CONICA at VLT YEPUN during the current tests. The new images facilitate the detection of fainter and smaller structures in the cloud than ever before. More details on the embedded star cluster are revealed as well. These observations were only made possible by the infrared wavefront sensor of NAOS. The latter is a unique capability of NAOS and makes it possible to do adaptive optics on highly embedded infrared sources, which are practically invisible at optical wavelengths. Exploring the limits ESO PR Photo 33h/01 ESO PR Photo 33h/01 [Preview - JPEG: 400 x 260 pix - 44k] [Normal - JPEG: 800 x 520 pix - 112k] Caption: PR Photo 33h/01 shows a NAOS-CONICA image of the double star GJ 263 for which the angular distance between the two components is only 0.030 arcsec. The raw image, as directly recorded by CONICA, is shown in the middle, with a computer-processed (using the ONERA MISTRAL myopic deconvolution algorithm) version to the right. The recorded Point-Spread-Function (PSF) is shown to the left. For this, the C50S camera (0.01325 arcsec/pixel) was used, with an FeII filter at the near-infrared wavelength 1.257 µm. The exposure time was 10 seconds. ESO PR Photo 33i/01 ESO PR Photo 33i/01 [Preview - JPEG: 400 x 316 pix - 82k] [Normal - JPEG: 800 x 631 pix - 208k] Caption: PR Photo 33i/01 shows the near-diffraction-limited image of a 17-mag reference star, as recorded with NAOS-CONICA during a 200-second exposure in the K-band under 0.60 arcsec seeing. The 3D-profile is also shown. ESO PR Photo 33j/01 ESO PR Photo 33j/01 [Preview - JPEG: 342 x 400 pix - 83k] [Normal - JPEG: 684 x 800 pix - 200k] Caption: PR Photo 33j/01 shows the central cluster in the 30 Doradus HII region in the Large Magellanic Cloud (LMC), a satellite of our Milky Way Galaxy.
It was obtained by NAOS-CONICA in the infrared K-band during a 600-second exposure. The field shown here measures 15 x 15 arcsec². PR Photos 33h-j/01 provide three examples of images obtained during specific tests where the observers pushed NAOS-CONICA towards the limits to explore the potential of the new instrument. Although, as expected, these images are not "perfect", they bear clear witness to the impressive performance, already at this early stage of the commissioning programme. The first, PR Photo 33h/01, shows how diffraction-limited imaging with NAOS-CONICA at a wavelength of 1.257 µm makes it possible to view the individual components of a close double star, here the binary star GJ 263, for which the angular distance between the two stars is only 0.030 arcsec (i.e., the angle subtended by a 1 Euro coin at a distance of 160 km). Spatially resolved observations of binary stars like this one will allow the determination of orbital parameters, and ultimately of the masses of the individual binary star components. After a few days of optimisation and calibration, NAOS-CONICA was able to "close the loop" on a reference star as faint as visual magnitude 17 and to provide a fine diffraction-limited K-band image with a Strehl ratio of 19% under 0.6 arcsec seeing. PR Photo 33i/01 provides a view of this image, as seen in the recorded frame and as a 3D-profile. The exposure time was 200 seconds. The ability to use reference stars as faint as this is an enormous asset for NAOS-CONICA - it will be the first to offer this capability to non-specialist users with an instrument on an 8-10 m class telescope. This gives access to many sky fields with significant AO corrections, without having to wait for the artificial laser guide star now being constructed for the VLT, see below. 30 Doradus in the Large Magellanic Cloud (LMC - a satellite of our Galaxy) is the most luminous, giant HII region in the Local Group of Galaxies. It is powered by a massive star cluster with more than 100 ultra-luminous stars (of the "Wolf-Rayet"-type and O-stars). The NAOS-CONICA K-band image PR Photo 33j/01 resolves the dense stellar core of high-mass stars at the centre of the cluster, revealing thousands of lower-mass cluster members. Due to the lack of a sufficiently bright, isolated and single reference star in this sky field, the observers used instead the bright central star complex (R136a) to generate the corrective signals to the flexible mirror, needed to compensate for the atmospheric turbulence. However, R136a is not a round object; it is strongly elongated in the "5 hour"-direction. As a result, all star images seen in this photo are slightly elongated in the same direction as R136a. Nevertheless, this is a small penalty to pay for the large improvement obtained over a direct (seeing-limited) image! Adaptive Optics at ESO - a long tradition ESO PR Photo 33k/01 ESO PR Photo 33k/01 [Preview - JPEG: 400 x 320 pix - 144k] [Normal - JPEG: 800 x 639 pix - 344k] [Hi-Res - JPEG: 3000 x 2398 pix - 3.0M] ESO PR Photo 33l/01 ESO PR Photo 33l/01 [Preview - JPEG: 400 x 367 pix - 47k] [Normal - JPEG: 800 x 734 pix - 592k] [Hi-Res - JPEG: 3000 x 2754 pix - 3.9M] Caption: PR Photo 33k/01 is a view of the upper platform at the ESO Paranal Observatory with the four enclosures for the VLT 8.2-m Unit Telescopes and the partly subterranean Interferometric Laboratory (at centre). YEPUN (UT4) is housed in the enclosure to the right.
This photo was obtained in the evening of November 25, 2001, some hours before "First Light" was achieved for the new NAOS-CONICA instrument, mounted at that telescope. PR Photo 33l/01 shows NAOS-CONICA installed on the Nasmyth B platform of the 8.2-m VLT YEPUN Unit Telescope. From left to right: the telescope adapter/rotator (dark blue), NAOS (light blue) and the CONICA cryostat (red). The control electronics is housed in the white cabinet. "Adaptive Optics" is a modern buzzword of astronomy. It embodies the seemingly magic way by which ground-based telescopes can overcome the undesirable blurring effect of atmospheric turbulence that has plagued astronomers for centuries. With "Adaptive Optics", the images of stars and galaxies captured by these instruments are now as sharp as theoretically possible. Or, as the experts like to say, "it is as if a giant ground-based telescope is 'lifted' into space by a magic hand!". Adaptive Optics works by means of a computer-controlled, flexible mirror that counteracts the image distortion induced by atmospheric turbulence in real time. The concept is not new. Already in 1989, the first Adaptive Optics system ever built for Astronomy (aptly named "COME-ON") was installed on the 3.6-m telescope at the ESO La Silla Observatory, as the early fruit of a highly successful continuing collaboration between ESO and French research institutes (ONERA and Observatoire de Paris). Ten years ago, ESO initiated an Adaptive Optics program to serve the needs of its frontline VLT project. In 1993, the Adaptive Optics facility (ADONIS) was offered to Europe's astronomers, as the first instrument of its kind available for non-specialists. It is still in operation and continues to produce frontline results, cf. ESO PR 22/01. In 1997, ESO launched a collaborative effort with a French Consortium (see below) for the development of the NAOS Nasmyth Adaptive Optics System. With its associated CONICA IR high angular resolution camera, developed with a German Consortium (see below), it provides a full high angular resolution capability on the VLT at Paranal. With the successful "First Light" on November 25, 2001, this project is now about to enter the operational phase. The advantages of NAOS-CONICA NAOS-CONICA belongs to a new generation of sophisticated adaptive optics (AO) devices. They have certain advantages over past systems. In particular, NAOS is unique in being equipped with an infrared-sensitive Wavefront Sensor (WFS) that makes it possible to look inside regions that are highly obscured by interstellar dust and therefore unobservable in visible light. With its other WFS for visible light, NAOS should be able to achieve the highest degree of light concentration (the so-called "Strehl ratio") obtained at any existing 8-m class telescope. It also provides partially corrected images, using reference stars (see PR Photo 33e/01) as faint as visual magnitude 18, fainter than demonstrated so far by any other AO system at such a large telescope. A major advantage of CONICA is that it offers the large format and very high image quality required to fully match NAOS' performance, as well as a variety of observing modes. Moreover, NAOS-CONICA is the first astronomical AO instrument to be offered with a full end-to-end observing capability.
It is completely integrated into the VLT dataflow system, with a seamless process from the preparation of the observations, including optimization of the instrument, to their execution at the telescope and on to automatic data quality assessment and storage in the VLT Archive. Collaboration and Institutes The Nasmyth Adaptive Optics System (NAOS) has been developed, with the support of INSU-CNRS, by a French Consortium in collaboration with ESO. The French consortium consists of Office National d'Etudes et de Recherches Aérospatiales (ONERA), Laboratoire d'Astrophysique de Grenoble (LAOG) and Observatoire de Paris (DESPA and DASGAL). The Project Manager is Gérard Rousset (ONERA), the Instrument Responsible is François Lacombe (Observatoire de Paris) and the Project Scientist is Anne-Marie Lagrange (Laboratoire d'Astrophysique de Grenoble). The CONICA Near-Infrared CAmera has been developed by a German Consortium, with an extensive ESO collaboration. The Consortium consists of Max-Planck-Institut für Astronomie (MPIA) (Heidelberg) and the Max-Planck-Institut für Extraterrestrische Physik (MPE) (Garching). The Principal Investigator (PI) is Rainer Lenzen (MPIA), with Reiner Hofmann (MPE) as Co-Investigator. Contacts Norbert Hubin European Southern Observatory Garching, Germany Tel.: +4989-3200-6517 email: nhubin@eso.org Alan Moorwood European Southern Observatory Garching, Germany Tel.: +4989-3200-6294 email: amoorwoo@eso.org Appendix: Technical Information about NAOS and CONICA Once fully tested, NAOS-CONICA will provide adaptive-optics-assisted imaging, polarimetry and spectroscopy in the 1 - 5 µm waveband. NAOS is an adaptive optics system equipped with both visible and infrared, Shack-Hartmann type, wavefront sensors. Provided a reference source (e.g., a star) with visual magnitude V brighter than 18 or K-magnitude brighter than 13 mag is available within 60 arcsec of the science target, NAOS-CONICA will ultimately offer diffraction-limited resolution at the level of 0.030 arcsec at a wavelength of 1 µm, albeit with a large halo around the image core for the faint end of the reference source brightness. This may be compared with VLT median seeing images of 0.65 arcsec at a wavelength of 1 µm and exceptionally good images around 0.30 arcsec. NAOS-CONICA is installed at Nasmyth Focus B at VLT YEPUN (UT4). In about two years' time, this instrument will benefit from a sodium Laser Guide Star (LGS) facility. The creation of an artificial guide star will then be possible in any sky field of interest, thereby providing a much better sky coverage than is possible with natural guide stars only. NAOS is equipped with two wavefront sensors, one in the visible part of the spectrum (0.45 - 0.95 µm) and one in the infrared part (1 - 2.5 µm); both are based on the Shack-Hartmann principle. The maximum correction frequency is about 500 Hz. There are 185 deformable mirror actuators plus a tip-tilt mirror correction. Together, they should permit a high Strehl ratio in the K-band (2.2 µm), up to 70%, depending on the actual seeing and waveband. Both the visible and IR wavefront sensors (WFS) have been optimized to provide AO correction for faint objects/stars. The visible WFS provides a low-order correction for objects as faint as visual magnitude ~ 18. The IR WFS will provide a low-order correction for objects as faint as K-magnitude 13. CONICA is a high-performance instrument in terms of image quality and detector sensitivity.
It has been designed to make optimal use of the AO system. Inherent mechanical flexures are corrected on-line by NAOS through a pointing model. It offers a variety of modes, e.g., direct imaging, polarimetry, slit spectroscopy, coronagraphy and spectro-imaging. The ESO PR Video Clips service provides visitors to the ESO website with "animated" illustrations of the ongoing work and events at the European Southern Observatory. The most recent clip was ESO PR Video Clip 06/01, about observations of a binary star (8 October 2001). Information about other ESO videos is also available on the web.
Task-oriented lossy compression of magnetic resonance images
NASA Astrophysics Data System (ADS)
Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques
1996-04-01
A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.
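The Dice overlap below is a common, simple stand-in for this kind of segmentation-similarity measurement (the paper's metric is its own construction and is not reproduced here): segment the pathology in the original and in the decompressed volume, then score how well the two masks still agree.

```python
import numpy as np

def dice_similarity(seg_ref: np.ndarray, seg_test: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks (1.0 = identical)."""
    ref, test = seg_ref.astype(bool), seg_test.astype(bool)
    denom = ref.sum() + test.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(ref, test).sum() / denom

# Toy illustration of a task-oriented check of a lossy codec:
# score = dice_similarity(segment(original_volume), segment(decompressed_volume))
```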
Journal of Chemical Education on CD-ROM, 1999
NASA Astrophysics Data System (ADS)
1999-12-01
The Journal of Chemical Education on CD-ROM contains the text and graphics for all the articles, features, and reviews published in the Journal of Chemical Education. This 1999 issue of the JCE CD series includes all twelve issues of 1999, as well as all twelve issues from 1998 and from 1997, and the September-December issues from 1996. Journal of Chemical Education on CD-ROM is formatted so that all articles on the CD retain as much as possible of their original appearance. Each article file begins with an abstract/keyword page followed by the article pages. All pages of the Journal that contain editorial content, including the front covers, table of contents, letters, and reviews, are included. Also included are abstracts (when available), keywords for all articles, and supplementary materials. The Journal of Chemical Education on CD-ROM has proven to be a useful tool for chemical educators. Like the Computerized Index to the Journal of Chemical Education (1) it will help you to locate articles on a particular topic or written by a particular author. In addition, having the complete article on the CD-ROM provides added convenience. It is no longer necessary to go to the library, locate the Journal issue, and read it while sitting in an uncomfortable chair. With a few clicks of the mouse, you can scan an article on your computer monitor, print it if it proves interesting, and read it in any setting you choose. Searching and Linking JCE CD is fully searchable for any word, partial word, or phrase. Successful searches produce a listing of articles that contain the requested text. Individual articles can be quickly accessed from this list. The Table of Contents of each issue is linked to individual articles listed. There are also links from the articles to any supplementary materials. References in the Chemical Education Today section (found in the front of each issue) to articles elsewhere in the issue are also linked to the article, as are WWW addresses and email addresses. If you have Internet access and a WWW browser and email utility, you can go directly to the Web site or prepare to send a message with a single mouse click.
Full-text searching of the entire CD enables you to find the articles you want. Price and Ordering An order form is inserted in this issue that provides prices and other ordering information. If this insert is not available or if you need additional information, contact: JCE Software, University of Wisconsin-Madison, 1101 University Avenue, Madison, WI 53706-1396; phone: 608/262-5153 or 800/991-5534; fax: 608/265-8094; email: jcesoft@chem.wisc.edu. Information about all our publications (including abstracts, descriptions, updates) is available from our World Wide Web site at: http://jchemed.chem.wisc.edu/JCESoft/. Hardware and Software Requirements Hardware and software requirements for JCE CD 1999 are listed in the table below:
Literature Cited 1. Schatz, P. F. Computerized Index, Journal of Chemical Education; J. Chem. Educ. Software 1993, SP 5-M. Schatz, P. F.; Jacobsen, J. J. Computerized Index, Journal of Chemical Education; J. Chem. Educ. Software 1993, SP 5-W.
HUBBLE SHOWS EXPANSION OF ETA CARINAE DEBRIS
NASA Technical Reports Server (NTRS)
2002-01-01
The furious expansion of a huge, billowing pair of gas and dust clouds is captured in this NASA Hubble Space Telescope comparison image of the supermassive star Eta Carinae. To create the picture, astronomers aligned and subtracted two images of Eta Carinae taken 17 months apart (April 1994, September 1995). Black represents where the material was located in the older image, and white represents the more recent location. (The light and dark streaks that make an 'X' pattern are instrumental artifacts caused by the extreme brightness of the central star. The bright white region at the center of the image results from the star and its immediate surroundings being 'saturated' in one of the images.) Photo Credit: Jon Morse (University of Colorado), Kris Davidson (University of Minnesota), and NASA. Image files in GIF and JPEG format and captions may be accessed on the Internet via anonymous ftp from oposite.stsci.edu in /pubinfo.
Spatial compression algorithm for the analysis of very large multivariate images
Keenan, Michael R [Albuquerque, NM
2008-07-15
A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
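A minimal sketch of the spatial-compression idea, using PyWavelets (an assumption; the patent does not prescribe a particular library): take a 2-D wavelet transform of one image channel, zero all but the largest detail coefficients, and perform further analysis on that much sparser representation.

```python
import numpy as np
import pywt  # pip install PyWavelets

def spatially_compress(channel: np.ndarray, keep_fraction: float = 0.05):
    """2-D wavelet transform of one image channel, retaining only the largest
    detail coefficients (by magnitude); returns the thresholded coefficient pyramid."""
    coeffs = pywt.wavedec2(channel.astype(np.float64), "db2", level=3)
    detail_vals = np.concatenate([d.ravel() for level in coeffs[1:] for d in level])
    thresh = np.quantile(np.abs(detail_vals), 1.0 - keep_fraction)
    kept = [coeffs[0]] + [tuple(np.where(np.abs(d) >= thresh, d, 0.0) for d in level)
                          for level in coeffs[1:]]
    return kept

def reconstruct(kept_coeffs) -> np.ndarray:
    """Inverse transform of the thresholded coefficients (an approximate image)."""
    return pywt.waverec2(kept_coeffs, "db2")
```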
On scalable lossless video coding based on sub-pixel accurate MCTF
NASA Astrophysics Data System (ADS)
Yea, Sehoon; Pearlman, William A.
2006-01-01
We propose two approaches to scalable lossless coding of motion video. They achieve an SNR-scalable bitstream up to lossless reconstruction, based upon subpixel-accurate MCTF-based wavelet video coding. The first approach is based upon a two-stage encoding strategy where a lossy reconstruction layer is augmented by a following residual layer in order to obtain (nearly) lossless reconstruction. The key advantages of our approach include an 'on-the-fly' determination of the bit budget distribution between the lossy and residual layers, the freedom to use almost any progressive lossy video coding scheme as the first layer, and an added feature of near-lossless compression. The second approach capitalizes on the fact that we can maintain the invertibility of MCTF with arbitrary sub-pixel accuracy, even in the presence of an extra truncation step for lossless reconstruction, thanks to the lifting implementation. Experimental results show that the proposed schemes achieve compression ratios not obtainable by intra-frame coders such as Motion JPEG-2000, thanks to their inter-frame coding nature. They are also shown to outperform the state-of-the-art non-scalable inter-frame coder H.264 (JM) in lossless mode, with the added benefit of bitstream embeddedness.
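The second approach hinges on a property of lifting: each lifting step adds a (possibly rounded) prediction to one set of samples, so it can always be undone exactly by subtracting the same rounded prediction, even when that prediction is truncated to integers. The toy 1-D Haar-style example below (a generic illustration, not the paper's sub-pixel MCTF) shows that perfect invertibility survives the truncation.

```python
import numpy as np

def lift_forward(x: np.ndarray):
    """Integer Haar-like lifting: odd samples become 'high-pass', evens 'low-pass'.
    The floor() truncation mimics an extra truncation step; it cancels exactly in
    the inverse because the identical rounded prediction is subtracted again."""
    even, odd = x[0::2].copy(), x[1::2].copy()
    high = odd - even                                   # predict step
    low = even + np.floor(high / 2).astype(x.dtype)     # update step (truncated)
    return low, high

def lift_inverse(low: np.ndarray, high: np.ndarray) -> np.ndarray:
    even = low - np.floor(high / 2).astype(low.dtype)   # undo update
    odd = high + even                                   # undo predict
    out = np.empty(even.size + odd.size, dtype=low.dtype)
    out[0::2], out[1::2] = even, odd
    return out

x = np.random.default_rng(3).integers(0, 256, 16)
assert np.array_equal(lift_inverse(*lift_forward(x)), x)   # lossless despite truncation
```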
VLTI First Fringes with Two Auxiliary Telescopes at Paranal
NASA Astrophysics Data System (ADS)
2005-03-01
World's Largest Interferometer with Moving Optical Telescopes on Track Summary The Very Large Telescope Interferometer (VLTI) at Paranal Observatory has just seen another extension of its already impressive capabilities by combining interferometrically the light from two relocatable 1.8-m Auxiliary Telescopes. Following the installation of the first Auxiliary Telescope (AT) in January 2004 (see ESO PR 01/04), the second AT arrived at the VLT platform by the end of 2004. Shortly thereafter, during the night of February 2 to 3, 2005, the two high-tech telescopes teamed up and quickly succeeded in performing interferometric observations. This achievement heralds an era of new scientific discoveries. Both Auxiliary Telescopes will be offered from October 1, 2005 to the community of astronomers for routine observations, together with the MIDI instrument. By the end of 2006, Paranal will be home to four operational ATs that may be placed at 30 different positions and thus be combined in a very large number of ways ("baselines"). This will enable the VLTI to operate with enormous flexibility and, in particular, to obtain extremely detailed (sharp) images of celestial objects - ultimately with a resolution that corresponds to detecting an astronaut on the Moon. PR Photo 07a/05: Paranal Observing Platform with AT1 and AT2 PR Photo 07b/05: AT1 and AT2 with Open Domes PR Photo 07c/05: Evening at Paranal with AT1 and AT2 PR Photo 07d/05: AT1 and AT2 under the Southern Sky PR Photo 07e/05: First Fringes with AT1 and AT2 PR Video Clip 01/05: Two ATs at Paranal (Extract from ESO Newsreel 15) A Most Advanced Device ESO PR Video 01/05 ESO PR Video 01/05 Two Auxiliary Telescopes at Paranal [QuickTime: 160 x 120 pix - 37Mb - 4:30 min] [QuickTime: 320 x 240 pix - 64Mb - 4:30 min] ESO PR Photo 07a/05 ESO PR Photo 07a/05 [Preview - JPEG: 493 x 400 pix - 44k] [Normal - JPEG: 985 x 800 pix - 727k] [HiRes - JPEG: 5000 x 4060 pix - 13.8M] Captions: ESO PR Video Clip 01/05 is an extract from ESO Video Newsreel 15, released on March 14, 2005. It provides an introduction to the VLT Interferometer (VLTI) and the two Auxiliary Telescopes (ATs) now installed at Paranal. ESO PR Photo 07a/05 shows the impressive ensemble at the summit of Paranal. From left to right, the enclosures of VLT Antu, Kueyen and Melipal, AT1, the VLT Survey Telescope (VST) in the background, AT2 and VLT Yepun. Located at the summit of the 2,600-m high Cerro Paranal in the Atacama Desert (Chile), ESO's Very Large Telescope (VLT) is at the forefront of astronomical technology and is one of the premier facilities in the world for optical and near-infrared observations. The VLT is composed of four 8.2-m Unit Telescopes (Antu, Kueyen, Melipal and Yepun). They have been progressively put into service together with a vast suite of the most advanced astronomical instruments and are operated every night of the year. Contrary to other large astronomical telescopes, the VLT was designed from the beginning with the use of interferometry as a major goal. The VLT Interferometer (VLTI) combines starlight captured by two 8.2-m VLT Unit Telescopes, dramatically increasing the spatial resolution and showing fine details of a large variety of celestial objects. The VLTI is arguably the world's most advanced optical device of this type.
It has already demonstrated its powerful capabilities by addressing several key scientific issues, such as determining the size and the shape of a variety of stars (ESO PR 22/02, PR 14/03 and PR 31/03), measuring distances to stars (ESO PR 25/04), probing the innermost regions of the proto-planetary discs around young stars (ESO PR 27/04) or making the first detection by infrared interferometry of an extragalactic object (ESO PR 17/03). "Little Brothers" ESO PR Photo 07b/05 ESO PR Photo 07b/05 [Preview - JPEG: 597 x 400 pix - 47k] [Normal - JPEG: 1193 x 800 pix - 330k] [HiRes - JPEG: 5000 x 3354 pix - 10.0M] ESO PR Photo 07c/05 ESO PR Photo 07c/05 [Preview - JPEG: 537 x 400 pix - 31k] [Normal - JPEG: 1074 x 800 pix - 555k] [HiRes - JPEG: 3000 x 2235 pix - 6.0M] ESO PR Photo 07d/05 ESO PR Photo 07d/05 [Preview - JPEG: 400 x 550 pix - 60k] [Normal - JPEG: 800 x 1099 pix - 946k] [HiRes - JPEG: 2414 x 3316 pix - 11.0M] Captions: ESO PR Photo 07b/05 shows VLTI Auxiliary Telescopes 1 and 2 (AT1 and AT2) in the early evening light, with the spherical domes opened and ready for observations. In ESO PR Photo 07c/05, the same scene is repeated later in the evening, with three of the large telescope enclosures in the background. This photo and ESO PR Photo 07d/05, a time-exposure with AT1 and AT2 under the beautiful night sky with the southern Milky Way band, were obtained by ESO staff member Frédéric Gomté. However, most of the time the large telescopes are used for other research purposes. They are therefore only available for interferometric observations during a limited number of nights every year. Thus, in order to exploit the VLTI each night and to achieve the full potential of this unique setup, some other (smaller), dedicated telescopes were included in the overall VLT concept. These telescopes, known as the VLTI Auxiliary Telescopes (ATs), are mounted on tracks and can be placed at precisely defined "parking" observing positions on the observatory platform. From these positions, their light beams are fed into the same common focal point via a complex system of reflecting mirrors mounted in an underground system of tunnels. The Auxiliary Telescopes are real technological jewels. They are placed in ultra-compact enclosures, complete with all necessary electronics, an air conditioning system and cooling liquid for thermal control, compressed air for enclosure seals, a hydraulic plant for opening the dome shells, etc. Each AT is also fitted with a transporter that lifts the telescope and relocates it from one station to another. It moves around with its own housing on the top of Paranal, almost like a snail. Moreover, these moving ultra-high precision telescopes, each weighing 33 tonnes, fulfill very stringent mechanical stability requirements: "The telescopes are unique in the world", says Bertrand Koehler, the VLTI AT Project Manager. "After being relocated to a new position, the telescope is repositioned to a precision better than one tenth of a millimetre - that is, the size of a human hair! The image of the star is stabilized to better than thirty milli-arcsec - this is how we would see an object of the same size as one of the VLT enclosures on the Moon. Finally, the path followed by the light inside the telescope after bouncing on ten mirrors is stable to better than a few nanometres, which is the size of about one hundred atoms."
A World Premiere ESO PR Photo 07e/05 ESO PR Photo 07e/05 "First Fringes" with two ATs [Preview - JPEG: 400 x 559 pix - 61k] [Normal - JPEG: 800 x 1134 pix - 357k] Caption: ESO PR Photo 07e/05 shows the "First Fringes" obtained with the first two VLTI Auxiliary Telescopes, as seen on the computer screen during the observation. The fringe pattern arises when the light beams from the two 1.8-m telescopes are brought together inside the VINCI instrument. The pattern itself contains information about the angular extension of the observed object, here the 6th-magnitude star HD62082. The fringes are acquired by moving a mirror back and forth around the position of equal path length for the two telescopes. One such scan can be seen in the third-row window. This pattern results from the raw interferometric signals (the last two rows) after calibration and filtering using the photometric signals (the 4th and 5th row). The first two rows show the spectrum of the fringe pattern signal. More details about the interpretation of this pattern are given in Appendix A of PR 06/01. The possibility of moving the ATs around and thus performing observations with a large number of different telescope configurations ensures a great degree of flexibility, unique for an optical interferometric installation of this size and crucial for its exceptional performance. The ATs may be placed at 30 different positions and thus be combined in a very large number of ways. If the 8.2-m VLT Unit Telescopes are also taken into account, no fewer than 254 independent pairings of two telescopes ("baselines"), different in length and/or orientation, are available. Moreover, while the largest possible distance between two 8.2-m telescopes (ANTU and YEPUN) is about 130 metres, the maximal distance between two ATs may reach 200 metres. As the achievable image sharpness increases with telescope separation, interferometric observations with the ATs positioned at the extreme positions will therefore yield sharper images than is possible by combining light from the large telescopes alone. All of this will enable the VLTI to obtain exceedingly detailed (sharp) and very complete images of celestial objects - ultimately with a resolution that corresponds to detecting an astronaut on the Moon. Auxiliary Telescope no. 1 (AT1) was installed on the observatory's platform in January 2004. Now, one year later, the second of the four to be delivered has been integrated into the VLTI. The installation period lasted two months and ended around midnight during the night of February 2-3, 2005. With extensive experience from the installation of AT1, the team of engineers and astronomers were able to combine the light from the two Auxiliary Telescopes in a very short time. In fact, following the necessary preparations, it took them only five minutes to adjust this extremely complex optical system and successfully capture the "First Fringes" with the VINCI test instrument! The star which was observed is named HD62082 and is just at the limit of what can be observed with the unaided eye (its visual magnitude is 6.2). The fringes were as clear as ever, and the VLTI control system kept them stable for more than one hour. Four nights later this exercise was repeated successfully with the mid-infrared science instrument MIDI. Fringes on the star Alphard (Alpha Hydrae) were acquired on February 7 at 4:05 local time. For Roberto Gilmozzi, Director of ESO's La Silla Paranal Observatory, "this is a very important new milestone.
The introduction of the Auxiliary Telescopes in the development of the VLT Interferometer will bring interferometry out of the specialist experiment and into the domain of common user instrumentation for every astronomer in Europe. Without doubt, it will enormously increase the potentiality of the VLTI." With two more telescopes to be delivered within a year to the Paranal Observatory, ESO cements its position as world-leader in ground-based optical astronomy, providing Europe's scientists with the tools they need to stay at the forefront in this exciting science. The VLT Interferometer will, for example, allow astronomers to study details on the surface of stars or to probe proto-planetary discs and other objects for which ultra-high precision imaging is required. It is premature to speculate on what the Very Large Telescope Interferometer will soon discover, but it is easy to imagine that there may be quite some surprises in store for all of us.
Piippo-Huotari, Oili; Norrman, Eva; Anderzén-Carlsson, Agneta; Geijer, Håkan
2018-05-01
The radiation dose to patients can be reduced with many methods, and one way is to use abdominal compression. In this study, the radiation dose and image quality for a new patient-controlled compression device were compared with conventional compression and with compression in the prone position. The aim was to compare radiation dose and image quality of patient-controlled compression with conventional and prone compression in general radiography, using an experimental design with a quantitative approach. After obtaining the approval of the ethics committee, a consecutive sample of 48 patients was examined with the standard clinical urography protocol. The radiation doses were measured as dose-area product and analyzed with a paired t-test. Image quality was evaluated by visual grading analysis: four radiologists evaluated each image individually by scoring nine criteria modified from the European quality criteria for diagnostic radiographic images. There was no significant difference in radiation dose or image quality between conventional and patient-controlled compression, whereas the prone position resulted in both a higher dose and inferior image quality. Patient-controlled compression thus gave dose levels similar to conventional compression and lower than prone compression, with image quality similar to conventional compression and better than in the prone position.
Hunting the Southern Skies with SIMBA
NASA Astrophysics Data System (ADS)
2001-08-01
First Images from the New "Millimetre Camera" on SEST at La Silla Summary A new instrument, SIMBA ("SEST IMaging Bolometer Array") , has been installed at the Swedish-ESO Submillimetre Telescope (SEST) at the ESO La Silla Observatory in July 2001. It records astronomical images at a wavelength of 1.2 mm and is able to quickly map large sky areas. In order to achieve the best possible sensitivity, SIMBA is cooled to only 0.3 deg above the absolute zero on the temperature scale. SIMBA is the first imaging millimetre instrument in the southern hemisphere . Radiation at this wavelength is mostly emitted from cold dust and ionized gas in a variety of objects in the Universe. Among other, SIMBA now opens exciting prospects for in-depth studies of the "hidden" sites of star formation , deep inside dense interstellar nebulae. While such clouds are impenetrable to optical light, they are transparent to millimetre radiation and SIMBA can therefore observe the associated phenomena, in particular the dust around nascent stars . This sophisticated instrument can also search for disks of cold dust around nearby stars in which planets are being formed or which may be left-overs of this basic process. Equally important, SIMBA may observe extremely distant galaxies in the early universe , recording them while they were still in the formation stage. Various SIMBA images have been obtained during the first tests of the new instrument. The first observations confirm the great promise for unique astronomical studies of the southern sky in the millimetre wavelength region. These results also pave the way towards the Atacama Large Millimeter Array (ALMA) , the giant, joint research project that is now under study in Europe, the USA and Japan. PR Photo 28a/01 : SIMBA image centered on the infrared source IRAS 17175-3544 PR Photo 28b/01 : SIMBA image centered on the infrared source IRAS 18434-0242 PR Photo 28c/01 : SIMBA image centered on the infrared source IRAS 17271-3439 PR Photo 28d/01 : View of the SIMBA instrument First observations with SIMBA SIMBA ("SEST IMaging Bolometer Array") was built and installed at the Swedish-ESO Submillimetre Telescope (SEST) at La Silla (Chile) within an international collaboration between the University of Bochum and the Max Planck Institute for Radio Astronomy in Germany, the Swedish National Facility for Radio Astronomy and ESO . The SIMBA ("Lion" in Swahili) instrument detects radiation at a wavelength of 1.2 mm . It has 37 "horns" and acts like a camera with 37 picture elements (pixels). By changing the pointing direction of the telescope, relatively large sky fields can be imaged. As the first and only imaging millimetre instrument in the southern hemisphere , SIMBA now looks up towards rich and virgin hunting grounds in the sky. Observations at millimetre wavelengths are particularly useful for studies of star formation , deep inside dense interstellar clouds that are impenetrable to optical light. Other objects for which SIMBA is especially suited include planet-forming disks of cold dust around nearby stars and extremely distant galaxies in the early universe , still in the stage of formation. During the first observations, SIMBA was used to study the gas and dust content of star-forming regions in our own Milky Way Galaxy, as well as in the Magellanic Clouds and more distant galaxies. It was also used to record emission from planetary nebulae , clouds of matter ejected by dying stars. 
Moreover, attempts were made to detect distant galaxies and quasars radiating at mm-wavelengths and located in two well-studied sky fields, the "Hubble Deep Field South" and the "Chandra Deep Field" [1]. Observations with SEST and SIMBA also serve to identify objects that can be observed at higher resolution and at shorter wavelengths with future southern submm telescopes and interferometers such as APEX (see MPG Press Release 07/01 of 6 July 2001) and ALMA. SIMBA images regions of high-mass star formation ESO PR Photo 28a/01 ESO PR Photo 28a/01 [Preview - JPEG: 400 x 568 pix - 61k] [Normal - JPEG: 800 x 1136 pix - 200k] Caption : This intensity-coded, false-colour SIMBA image is centered on the infrared source IRAS 17175-3544 and covers the well-known high-mass star formation complex NGC 6334 , at a distance of 5500 light-years. The southern bright source is an ultra-compact region of ionized hydrogen ("HII region") created by a star or several stars already formed. The northern bright source has not yet developed an HII region and may be a star or a cluster of stars that are presently forming. A remarkable, narrow, linear dust filament extends over the image; it was known to exist before, but the SIMBA image now shows it to a much larger extent and much more clearly. This and the following images cover an area of about 15 arcmin x 6 arcmin on the sky and have a pixel size of 8 arcsec. ESO PR Photo 28b/01 ESO PR Photo 28b/01 [Preview - JPEG: 532 x 400 pix - 52k] [Normal - JPEG: 1064 x 800 pix - 168k] Caption : This SIMBA image is centered on the object IRAS 18434-0242 . It includes many bright sources that are associated with dense cores and compact HII regions located deep inside the cloud. A much less detailed map was made several years ago with a single channel bolometer on SEST. The new SIMBA map is more extended and shows more sources. ESO PR Photo 28c/01 ESO PR Photo 28c/01 [Preview - JPEG: 400 x 505 pix - 59k] [Normal - JPEG: 800 x 1009 pix - 160k] Caption : Another SIMBA image is centered on IRAS 17271-3439 and includes an extended bright source that is associated with several compact HII regions as well as a cluster of weaker sources. Some of the recent SIMBA images are shown above; they were taken during test observations, and within a pilot survey of high-mass starforming regions . Stars form in interstellar clouds that consist of gas and dust. The denser parts of these clouds can collapse into cold and dense cores which may form stars. Often many stars are formed in clusters, at about the same time. The newborn stars heat up the surrounding regions of the cloud . Radiation is emitted, first at mm-wavelengths and later at infrared wavelengths as the cloud core gets hotter. If very massive stars are formed, their UV-radiation ionizes the immediate surrounding gas and this ionized gas also emits at mm-wavelengths. These ionized regions are called ultra compact HII regions . Because the stars form deep inside the interstellar clouds, the obscuration at visible wavelengths is very high and it is not possible to see these regions optically. The objects selected for the SIMBA survey are from a catalog of objects, first detected at long infrared wavelengths with the IRAS satellite (launched in 1983), hence the designations indicated in Photos 28a-c/01 . From 1995 to 1998, the ESA Infrared Space Observatory (ISO) gathered an enormous amount of valuable data, obtaining images and spectra in the broad infrared wavelength region from 2.5 to 240 µm (0.025 to 0.240 mm), i.e. 
just shortward of the millimetre region in which SIMBA operates. ISO produced mid-infrared images of field size and angular resolution (sharpness) comparable to those of SIMBA. It will obviously be most interesting to combine the images that will be made with SIMBA with imaging and spectral data from ISO and also with those obtained by large ground-based telescopes in the near- and mid-infrared spectral regions. Some technical details about the SIMBA instrument ESO PR Photo 28d/01 ESO PR Photo 28d/01 [Preview - JPEG: 509 x 400 pix - 83k] [Normal - JPEG: 1017 x 800 pix - 528k] Caption : The SIMBA instrument - with the cover removed - in the SEST electronics laboratory. The 37 antenna horns to the right, each of which produces one picture element (pixel) of the combined image. The bolometer elements are located behind the horns. The cylindrical aluminium foil covered unit is the cooler that keeps SIMBA at extremely low temperature (-272.85 °C, or only 0.3 deg above the absolute zero) when it is mounted in the telescope. SIMBA is unique because of its ability to quickly map large sky areas due to the fast scanning mode. In order to achieve low noise and good sensitivity, the instrument is cooled to only 0.3 deg above the absolute zero, i.e., to -272.85 °C. SIMBA consists of 37 horns (each providing one pixel on the sky) arranged in a hexagonal pattern, cf. Photo 28d/01 . To form images, the sky position of the telescope is changed according to a raster pattern - in this way all of a celestial object and the surrounding sky field may be "scanned" fast, at speeds of typically 80 arcsec per second. This makes SIMBA a very efficient facility: for instance, a fully sampled image of good sensitivity with a field size of 15 arcmin x 6 arcmin can be taken in 15 minutes. If higher sensitivity is needed (to observe fainter sources), more images may be obtained of the same field and then added together. Large sky areas can be covered by combining many images taken at different positions. The image resolution (the "telescope beamsize") is 22 arcsec, corresponding to the angular resolution of this 15-m telescope at the indicated wavelength. Note [1} Observations of the HDFS and CDFS fields in other wavebands with other telescopes at the ESO observatories have been reported earlier, e.g. within the ESO Imaging Survey Project (EIS) (the "EIS Deep-Survey"). It is the ESO policy on these fields to make data public world-wide.
HOT WHITE DWARF SHINES IN YOUNG STAR CLUSTER
NASA Technical Reports Server (NTRS)
2002-01-01
A dazzling 'jewel-box' collection of over 20,000 stars can be seen in crystal clarity in this NASA Hubble Space Telescope image, taken with the Wide Field and Planetary Camera 2. The young (40 million year old) cluster, called NGC 1818, is 164,000 light-years away in the Large Magellanic Cloud (LMC), a satellite galaxy of our Milky Way. The LMC, a site of vigorous current star formation, is an ideal nearby laboratory for studying stellar evolution. In the cluster, astronomers have found a young white dwarf star, which has only very recently formed following the burnout of a red giant. Based on this observation astronomers conclude that the red giant progenitor star was 7.6 times the mass of our Sun. Previously, astronomers have estimated that stars anywhere from 6 to 10 solar masses would not just quietly fade away as white dwarfs but abruptly self-destruct in torrential explosions. Hubble can easily resolve the star in the crowded cluster, and detect its intense blue-white glow from a sizzling surface temperature of 50,000 degrees Fahrenheit. IMAGE DATA Date taken: December 1995 Wavelength: natural color reconstruction from three filters (I,B,U) Field of view: 100 light-years, 2.2 arc minutes TARGET DATA Name: NGC 1818 Distance: 164,000 light-years Constellation: Dorado Age: 40 million years Class: Rich star cluster Apparent magnitude: 9.7 Apparent diameter: 7 arc minutes Credit: Rebecca Elson and Richard Sword, Cambridge UK, and NASA (Original WFPC2 image courtesy J. Westphal, Caltech) Image files are available electronically via the World Wide Web at: http://oposite.stsci.edu/pubinfo/1998/16 and via links in http://oposite.stsci.edu/pubinfo/latest.html or http://oposite.stsci.edu/pubinfo/pictures.html. GIF and JPEG images are available via anonymous ftp to oposite.stsci.edu in /pubinfo/GIF/9816.GIF and /pubinfo/JPEG/9816.jpg.
Morgan, Karen L.M.; Westphal, Karen A.
2016-04-28
The U.S. Geological Survey (USGS), as part of the National Assessment of Coastal Change Hazards project, conducts baseline and storm-response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms (Morgan, 2009). On September 9-10, 2008, the USGS conducted an oblique aerial photographic survey from Calcasieu Lake, Louisiana, to Brownsville, Texas, aboard a Cessna C-210 (aircraft) at an altitude of 500 feet (ft) and approximately 1,000 ft offshore. This mission was flown to collect baseline data for assessing incremental changes of the beach and nearshore area, and the data can be used in the assessment of future coastal change. The photographs provided in this report are Joint Photographic Experts Group (JPEG) images. ExifTool was used to add the following to the header of each photo: time of collection, Global Positioning System (GPS) latitude, GPS longitude, keywords, credit, artist (photographer), caption, copyright, and contact information. The photograph locations are an estimate of the position of the aircraft at the time the photograph was taken and do not indicate the location of any feature in the images (see the Navigation Data page). These photographs document the state of the barrier islands and other coastal features at the time of the survey. Pages containing thumbnail images of the photographs, referred to as contact sheets, were created in 5-minute segments of flight time. These segments can be found on the Photos and Maps page. Photographs can be opened directly with any JPEG-compatible image viewer by clicking on a thumbnail on the contact sheet. In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML file was created using the photographic navigation files. The KML file can be found in the kml folder.
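The metadata step described above (writing time, GPS position, credit, and caption into each JPEG header with ExifTool) can be scripted. The sketch below assumes the exiftool command-line tool is installed; the tag set and all values are illustrative and not the exact USGS workflow.

```python
# Hedged sketch: stamp survey metadata into a JPEG header by calling ExifTool.
# Tag names are standard ExifTool tags; the specific values are hypothetical.
import subprocess

def tag_photo(path, lat, lon, photographer, caption):
    subprocess.run([
        "exiftool",
        f"-GPSLatitude={abs(lat)}",  f"-GPSLatitudeRef={'N' if lat >= 0 else 'S'}",
        f"-GPSLongitude={abs(lon)}", f"-GPSLongitudeRef={'E' if lon >= 0 else 'W'}",
        f"-Artist={photographer}",
        "-Credit=U.S. Geological Survey",
        f"-Caption-Abstract={caption}",
        "-overwrite_original", path,
    ], check=True)

tag_photo("photo_0001.jpg", 29.55, -93.35, "K. Morgan",
          "Oblique aerial view of the Louisiana coast (hypothetical example)")
```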
Morgan, Karen L.M.; Krohn, M. Dennis
2014-01-01
The U.S. Geological Survey (USGS) conducts baseline and storm response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms. On November 4-6, 2012, approximately one week after the landfall of Hurricane Sandy, the USGS conducted an oblique aerial photographic survey from Cape Lookout, N.C., to Montauk, N.Y., aboard a Piper Navajo Chieftain (aircraft) at an altitude of 500 feet (ft) and approximately 1,000 ft offshore. This mission was flown to collect post-Hurricane Sandy data for assessing incremental changes in the beach and nearshore area since the last survey in 2009. The data can be used in the assessment of future coastal change. The photographs provided here are Joint Photographic Experts Group (JPEG) images. The photograph locations are an estimate of the position of the aircraft and do not indicate the location of the feature in the images. These photos document the configuration of the barrier islands and other coastal features at the time of the survey. Exiftool was used to add the following to the header of each photo: time of collection, Global Positioning System (GPS) latitude, GPS longitude, keywords, credit, artist (photographer), caption, copyright, and contact information. Photographs can be opened directly with any JPEG-compatible image viewer by clicking on a thumbnail on the contact sheet. Table 1 provides detailed information about the GPS location, image name, date, and time each of the 9,481 photographs were taken, along with links to each photograph. The photographs are organized in segments, also referred to as contact sheets, and represent approximately 5 minutes of flight time. In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML files were created using the photographic navigation files.
Fukatsu, Hiroshi; Naganawa, Shinji; Yumura, Shinnichiro
2008-04-01
This study aimed to validate the performance of a novel image compression method that uses a neural network to achieve lossless compression. The encoder consists of the following blocks: a prediction block; a residual data calculation block; a transformation and quantization block; an organization and modification block; and an entropy encoding block. The predicted image is divided into four macro-blocks, using the original image for training, and then subdivided into sixteen sub-blocks. The predicted image is compared with the original image to create the residual image, and the spatial and frequency data of the residual image are compared and transformed. Chest radiography, computed tomography (CT), magnetic resonance imaging, positron emission tomography, radioisotope mammography, ultrasonography, and digital subtraction angiography images were compressed using this AIC lossless compression method, and the compression rates were calculated. The compression rates were around 15:1 for chest radiography and mammography, 12:1 for CT, and around 6:1 for the other images. The method thus achieves greater lossless compression than conventional methods and should improve the efficiency of handling the increasing volume of medical imaging data.
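The abstract outlines a predict / residual / entropy-code pipeline. The sketch below illustrates that general structure only: a simple neighbour-average predictor stands in for the paper's neural-network prediction block, and zlib stands in for its entropy coder; the round-trip check shows why coding the residual is lossless.

```python
# Hedged sketch of predictive lossless coding: predict each pixel from already
# coded neighbours, store the residual, and reconstruct exactly at the decoder.
import numpy as np
import zlib

def predict(img):
    pred = np.zeros_like(img, dtype=np.int32)
    pred[1:, 1:] = (img[:-1, 1:].astype(np.int32) + img[1:, :-1]) // 2  # mean of top and left
    pred[0, 1:] = img[0, :-1]
    pred[1:, 0] = img[:-1, 0]
    return pred

img = np.random.randint(0, 256, (64, 64)).astype(np.uint8)
residual = img.astype(np.int32) - predict(img)            # small values for smooth images
packed = zlib.compress(residual.astype(np.int16).tobytes())

# Decoder: rebuild pixels in raster order so each prediction uses decoded values.
res = np.frombuffer(zlib.decompress(packed), dtype=np.int16).reshape(img.shape)
decoded = np.zeros(img.shape, dtype=np.int32)
for r in range(img.shape[0]):
    for c in range(img.shape[1]):
        if r == 0 and c == 0:
            p = 0
        elif r == 0:
            p = decoded[0, c - 1]
        elif c == 0:
            p = decoded[r - 1, 0]
        else:
            p = (decoded[r - 1, c] + decoded[r, c - 1]) // 2
        decoded[r, c] = p + res[r, c]

assert np.array_equal(decoded.astype(np.uint8), img)       # lossless round trip
```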
Image splitting and remapping method for radiological image compression
NASA Astrophysics Data System (ADS)
Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.
1990-07-01
A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.
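As a rough illustration of the splitting idea, a high-contrast-resolution (e.g. 12-bit) image can be separated into a coarse component and a remainder component before compression. This is a generic stand-in only; the authors' actual splitting and gray-level remapping scheme is not reproduced here.

```python
# Hedged illustration: split a 12-bit image into high-order and low-order parts,
# compress each, and verify exact reconstruction. zlib stands in for the coders.
import numpy as np
import zlib

ct = np.random.randint(0, 4096, (128, 128), dtype=np.uint16)   # hypothetical 12-bit slice

high = (ct >> 4).astype(np.uint8)     # coarse image (top 8 bits)
low = (ct & 0x0F).astype(np.uint8)    # remainder image (bottom 4 bits)

size_split = len(zlib.compress(high.tobytes())) + len(zlib.compress(low.tobytes()))
size_direct = len(zlib.compress(ct.tobytes()))
print(size_split, size_direct)

# Exact reconstruction from the two components:
assert np.array_equal((high.astype(np.uint16) << 4) | low, ct)
```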
Prediction of compression-induced image interpretability degradation
NASA Astrophysics Data System (ADS)
Blasch, Erik; Chen, Hua-Mei; Irvine, John M.; Wang, Zhonghai; Chen, Genshe; Nagy, James; Scott, Stephen
2018-04-01
Image compression is an important component in modern imaging systems as the volume of the raw data collected is increasing. To reduce the volume of data while collecting imagery useful for analysis, choosing the appropriate image compression method is desired. Lossless compression is able to preserve all the information, but it has limited reduction power. On the other hand, lossy compression, which may result in very high compression ratios, suffers from information loss. We model the compression-induced information loss in terms of the National Imagery Interpretability Rating Scale or NIIRS. NIIRS is a user-based quantification of image interpretability widely adopted by the Geographic Information System community. Specifically, we present the Compression Degradation Image Function Index (CoDIFI) framework that predicts the NIIRS degradation (i.e., a decrease of NIIRS level) for a given compression setting. The CoDIFI-NIIRS framework enables a user to broker the maximum compression setting while maintaining a specified NIIRS rating.
NASA Astrophysics Data System (ADS)
2004-05-01
Successful "First Light" for the Mid-Infrared VISIR Instrument on the VLT Summary Close to midnight on April 30, 2004, intriguing thermal infrared images of dust and gas heated by invisible stars in a distant region of our Milky Way appeared on a computer screen in the control room of the ESO Very Large Telescope (VLT). These images mark the successful "First Light" of the VLT Imager and Spectrometer in the InfraRed (VISIR), the latest instrument to be installed on this powerful telescope facility at the ESO Paranal Observatory in Chile. The event was greeted with a mixture of delight, satisfaction and some relief by the team of astronomers and engineers from the consortium of French and Dutch Institutes and ESO who have worked on the development of VISIR for around 10 years [1]. Pierre-Olivier Lagage (CEA, France), the Principal Investigator, is content : "This is a wonderful day! A result of many years of dedication by a team of engineers and technicians, who can today be proud of their work. With VISIR, astronomers will have at their disposal a great instrument on a marvellous telescope. And the gain is enormous; 20 minutes of observing with VISIR is equivalent to a whole night of observing on a 3-4m class telescope." Dutch astronomer and co-PI Jan-Willem Pel (Groningen, The Netherlands) adds: "What's more, VISIR features a unique observing mode in the mid-infrared: spectroscopy at a very high spectral resolution. This will open up new possibilities such as the study of warm molecular hydrogen most likely to be an important component of our galaxy." PR Photo 16a/04: VISIR under the Cassegrain focus of the Melipal telescope PR Photo 16b/04: VISIR mounted behind the mirror of the Melipal telescope PR Photo 16c/04: Colour composite of the star forming region G333.6-0.2 PR Photo 16d/04: Colour composite of the Galactic Centre PR Photo 16e/04: The Ant Planetary Nebula at 12.8 μm PR Photo 16f/04: The starburst galaxy He2-10 at 11.3μm PR Photo 16g/04: High-resolution spectrum of G333.6-0.2 around 12.8μm PR Photo 16h/04: High-resolution spectrum of the Ant Planetary Nebula around 12.8μm From cometary tails to centres of galaxies The mid-infrared spectral region extends from a few to a few tens of microns in wavelength and provides a unique view of our Universe. Optical astronomy, that is astronomy at wavelengths to which our eyes are sensitive, is mostly directed towards light emitted by gas, be it in stars, nebulae or galaxies. Mid-Infrared astronomy, however, allows us to also detect solid dust particles at temperatures of -200 to +300 °C. Dust is very abundant in the universe in many different environments, ranging from cometary tails to the centres of galaxies. This dust also often totally absorbs and hence blocks the visible light reaching us from such objects. Red light, and especially infrared light, can propagate much better in dust clouds. Many important astrophysical processes occur in regions of high obscuration by dust, most notably star formation and the late stages of their evolution, when stars that have burnt nearly all their fuel shed much of their outer layers and dust grains form in their "stellar wind". Stars are born in so-called molecular clouds. The proto-stars feed from these clouds and are shielded from the outside by them. Infrared is a tool - very much as ultrasound is for medical inspections - for looking into those otherwise hidden regions to study the stellar "embryos". It is thus crucial to also observe the Universe in the infrared and mid-infrared. 
Unfortunately, there are also infrared-emitting molecules in the Earth's atmosphere, e.g. water vapour, Nitric Oxides, Ozone, Methane. Because of these gases, the atmosphere is completely opaque at certain wavelengths, except in a few "windows" where the Earth's atmosphere is transparent. Even in these windows, however, the sky and telescope emit radiation in the infrared to an extent that observing in the mid-infrared at night is comparable to trying to do optical astronomy in daytime. Ground-based infrared astronomers have thus become extremely adept at developing special techniques called "chopping' and "nodding" for detecting the extremely faint astronomical signals against this unwanted bright background [3]. VISIR: an extremely complex instrument VISIR - the VLT Imager and Spectrometer in the InfraRed - is a complex multi-mode instrument designed to operate in the 10 and 20 μm atmospheric windows, i.e. at wavelengths up to about 40 times longer than visible light and to provide images as well as spectra at a wide range of resolving power up to ~ 30.000. It can sample images down to the diffraction limit of the 8.2-m Melipal telescope (0.27 arcsec at 10 μm wavelength, i.e. corresponding to a resolution of 500 m on the Moon), which is expected to be reached routinely due to the excellent seeing conditions experienced for a large fraction of the time at the VLT [2]. Because at room temperature the metal and glass of VISIR would emit strongly at exactly the same wavelengths and would swamp any faint mid-infrared astronomical signals, the whole VISIR instrument is cooled to a temperature close to -250° C and its two panoramic 256x256 pixel array detectors to even lower temperatures, only a few degrees above absolute zero. It is also kept in a vacuum tank to avoid the unavoidable condensation of water and icing which would otherwise occur. The complete instrument is mounted on the telescope and must remain rigid to within a few thousandths of a millimetre as the telescope moves to acquire and then track objects anywhere in the sky. Needless to say, this makes for an extremely complex instrument and explains the many years needed to develop and bring it to the telescope on the top of Paranal. VISIR also includes a number of important technological innovations, most notably its unique cryogenic motor drive systems comprising integrated stepper motors, gears and clutches whose shape is similar to that of the box of the famous French Camembert cheese. VISIR is mounted on Melipal ESO PR Photo 16a/04 ESO PR Photo 16a/04 VISIR under the Cassegrain focus of the Melipal telescope [Preview - JPEG: 400 x 476 pix - 271k] [Normal - JPEG: 800 x 951 pix - 600k] ESO PR Photo 16b/04 ESO PR Photo 16b/04 VISIR mounted behind the mirror of the Melipal telescope [Preview - JPEG: 400 x 603 pix - 366k] [Normal - JPEG: 800 x 1206 pix - 945k] Caption: ESO PR Photo 16a/04 shows VISIR about to be attached at the Cassegrain focus of the Melipal telescope. On ESO PR Photo 16b/04, VISIR appears much smaller once mounted behind the enormous 8.2-m diameter mirror of the Melipal telescope. The fully integrated VISIR plus all the associated equipment (amounting to a total of around 8 tons) was air freighted from Paris to Santiago de Chile and arrived at the Paranal Observatory on 25th March after a subsequent 1500 km journey by road. Following tests to confirm that nothing had been damaged, VISIR was mounted on the third VLT telescope "Melipal" on April 27th. 
PR Photos 16a/04 and 16b/04 show the approximately 1.6 tons of VISIR being mounted at the Cassegrain focus, below the 8.2-m main mirror. First technical light on a star was achieved on April 29th, shortly after VISIR had been cooled down to its operating temperature. This allowed to proceed with the necessary first basic operations, including focusing the telescope, and tests. While telescope focusing was one of the difficult and frequent tasks faced by astronomers in the past, this is no longer so with the active optics feature of the VLT telescopes which, in principle, has to be focused only once after which it will forever be automatically kept in perfect focus. First images and spectra from VISIR ESO PR Photo 16c/04 ESO PR Photo 16c/04 Colour composite of the star forming region G333.6-0.2 [Preview - JPEG: 400 x 477 pix - 78k] [Normal - JPEG: 800 x 954 pix - 191k] ESO PR Photo 16d/04 ESO PR Photo 16d/04 Colour composite of the Galactic Centre [Preview - JPEG: 400 x 478 pix - 159k] [Normal - JPEG: 800 x 955 pix - 348k] Caption: ESO PR Photo 16c/04 is a colour composite image of the visually obscured G333.6-0.2 star-forming region at a distance of nearly 10,000 light-years in our Milky Way galaxy. This image was made by combining three digital images of the intensity of the infrared emission at wavelengths of 11.3μm (one of the Polycyclic Aromatic Hydrocarbon features, coded blue), 12.8 μm (an emission line of [NeII], coded green) and 19μm (warm dust emission, coded red). Each pixel subtends 0.127 arcsec and the total field is ~ 33 x 33 arcsec with North at the top and East to the left. The total integration times were 13 seconds at the shortest and 35 seconds at the longer wavelengths. The brighter spots locate regions where the dust, which obscures all the visible light, has been heated by recently formed stars. ESO PR Photo 16d/04 shows another colour composite, this time of the Galactic Centre at a distance of about 30,000 light-years. It was made by combining images in filters centred at 8.6μm (Polycyclic Aromatic Hydrocarbon molecular feature - coded blue), 12.8μm ([NeII] - coded green) and 19.5μm (coded red). Each pixel subtends 0.127 arcsec and the total field is ~ 33 x 33 arcsec with North at the top and East to the left. Total integration times were 300, 160 and 300 s for the 3 filters, respectively. This region is very rich, full of stars, dust, ionised and molecular gas. One of the scientific goals will be to detect and monitor the signal from the black hole at the centre of our galaxy. ESO PR Photo 16e/04 ESO PR Photo 16e/04 The Ant Planetary Nebula at 12.8 μm [Preview - JPEG: 400 x 477 pix - 77k] [Normal - JPEG: 800 x 954 pix - 182k] Caption: ESO PR Photo 16e/04 is an image of the "Ant" Planetary Nebula (Mz3) in the narrow-band filter centred at wavelength 12.8 μm. The scale is 0.127 arcsec/pixel and the total field-of-view is 33 x 33 arcsec, with North at the top and East to the left. The total integration time was 200 seconds. Note the diffraction rings around the central star which confirm that the maximum spatial resolution possible with the 8.2-m telescope is being achieved. ESO PR Photo 16f/04 ESO PR Photo 16f/04 The starburst galaxy He2-10 at 11.3μm [Preview - JPEG: 400 x 477 pix - 69k] [Normal - JPEG: 800 x 954 pix - 172k] Caption: ESO PR Photo 16f/04 is an image at wavelength 11.3 μm of the "nearby" (distance about 30 million light-years) blue compact galaxy He2-10, which is actively forming stars. 
The scale is 0.127 arcsec per pixel and the full field covers 15 x 15 arcsec with North at the top and East on the left. The total integration time for this observation is one hour. Several star forming regions are detected, as well as a diffuse emission, which was unknown until these VISIR observations. The star-forming regions on the left of the image are not visible in optical images. ESO PR Photo 16g/04 ESO PR Photo 16g/04 High-resolution spectrum of G333.6-0.2 around 12.8 μm [Preview - JPEG: 652 x 400 pix - 123k] [Normal - JPEG: 1303 x 800 pix - 277k] Caption: ESO PR Photo 16g/04 is a reproduction of a high-resolution spectrum of the Ne II line (ionised Neon) at 12.8135 μm of the star-forming region G333.6-0.2 shown in ESO PR Photo 16c/04. This spectrum reveals the complex motions of the ionized gas in this region. The images are 256 x 256 frames of 50 x 50 micron pixels. The "field" direction is horizontal, with total slit length of 32.5 arcsec; North is left and South is to the right. The dispersion direction is vertical, with the wavelength increasing downward. The total integration time was 80 sec. ESO PR Photo 16h/04 ESO PR Photo 16h/04 High-resolution spectrum of the Ant nebula around 12.8 μm [Preview - JPEG: 610 x 400 pix - 354k] [Normal - JPEG: 1219 x 800 pix - 901k] Caption: ESO PR Photo 16h/04 is a reproduction of a high-resolution spectrum of the Ne II line (ionised Neon) at 12.8135 microns of the Ant Planetary Nebula, also known as Mz-3, shown in ESO PR Photo 16d/04. The technical details are similar to ESO PR Photo 16g/04. The total integration time was 120 sec. The photos above resulted from some of the first observational tests with VISIR. PR Photo 16c/04 shows the scientific "First Light" image, obtained one day later on April 30th, of a visually obscured star forming region nearly 10,000 light-years away in our galaxy, the Milky Way. The picture shown here is a false-colour image made by combining three digital images of the intensity of the infrared emission from this region at wavelengths of 11.3 μm (one of the Polycyclic Aromatic Hydrocarbon - PAH - features), 12.8 μm (an emission line of ionised neon) and 19 μm (cool dust emission). Ten times sharper Until now, an elegant way to avoid the problems caused by the emission and absorption of the atmosphere was to fly infrared telescopes on satellites as was done in the highly successful IRAS and ISO missions and currently the Spitzer observatory. For both technical and cost reasons, however, such telescopes have so far been limited to only 60-85 cm in diameter. While very sensitive therefore, the spatial resolution (sharpness) delivered by these telescopes is 10 times worse than that of the 8.2-m diameter VLT telescopes. They have also not been equipped with the very high spectral resolution capability, a feature of the VISIR instrument, which is thus expected to remain the instrument of choice for a wide range of studies for many years to come despite the competition from space. More information A corresponding [1]: The consortium of institutes responsible for building the VISIR instrument under contract to ESO comprises the CEA/DSM/DAPNIA, Saclay, France - led by the Principal Investigator (PI), Pierre-Olivier Lagage and the Netherlands Foundation for Research in Astronomy/ASTRON - (Dwingeloo, The Netherlands) with Jan-Willem Pel from Groningen University as Co-PI for the spectrometer. [2]: Stellar radiation on its way to the observer is also affected by the turbulence of the Earth's atmosphere. 
This is the effect that makes the stars twinkle to the human eye. While the general public enjoys this phenomenon as something that makes the night sky interesting and entertaining, the twinkling is a major concern for amateur and professional astronomers, as it smears out the optical images. Infrared radiation is less affected by this effect. Therefore an instrument like VISIR can make full use of the extremely high optical quality of modern telescopes like the VLT. [3]: Observations from the ground at wavelengths of 10 to 20 μm are particularly difficult because this is the wavelength region in which both the telescope and the atmosphere emit most strongly. In order to minimize this effect, the images shown here were made by tilting the telescope secondary mirror every few seconds (chopping) and moving the whole telescope every minute (nodding) so that this unwanted telescope and sky background emission could be measured and subtracted from the science images faster than it varies.
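Note [3] describes the chopping and nodding scheme in words; the toy arithmetic below shows why the double difference cancels the bright, common background while the astronomical signal survives. The numbers are arbitrary, and this is not ESO pipeline code.

```python
# Toy chop-nod double difference with invented numbers: the sky/telescope
# background cancels, the faint source remains.
sky = 1.0e6                      # bright mid-infrared background (arbitrary units)
source = 50.0                    # faint astronomical signal

nod_a_on, nod_a_off = sky + source, sky          # chop pair at nod position A
nod_b_on, nod_b_off = sky, sky + source          # chop pair at nod position B (beams swapped)

signal = (nod_a_on - nod_a_off) - (nod_b_on - nod_b_off)
print(signal / 2.0)              # recovers the source flux (50.0)
```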
Compressed domain indexing of losslessly compressed images
NASA Astrophysics Data System (ADS)
Schaefer, Gerald
2001-12-01
Image retrieval and image compression have been pursued separately in the past. Little research has been done on synthesizing the two by allowing image retrieval to be performed directly in the compressed domain of images, without the need to uncompress them first. In this paper, methods for image retrieval in the compressed domain of losslessly compressed images are introduced. While most image compression techniques are lossy, i.e. discard visually less significant information, lossless techniques are still required in fields like medical imaging or in situations where images must not be changed for legal reasons. The algorithms in this paper are based on predictive coding methods, where a pixel is encoded based on the pixel values of its (already encoded) neighborhood. The first method exploits the observation that predictively coded data is itself indexable and represents a textural description of the image. The second method operates directly on the entropy-encoded data by comparing codebooks of images. Experiments show good image retrieval results for both approaches.
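A minimal sketch of the first idea, retrieval features taken directly from prediction residuals, is given below. The left-neighbour predictor and histogram-intersection similarity are generic choices assumed for illustration, not necessarily the paper's.

```python
# Hedged sketch: prediction residuals as a texture signature for retrieval.
import numpy as np

def residual_signature(img, bins=64):
    img = img.astype(np.int32)
    residual = img[:, 1:] - img[:, :-1]               # simple left-neighbour predictor
    hist, _ = np.histogram(residual, bins=bins, range=(-255, 255))
    return hist / hist.sum()                           # normalized residual histogram

def similarity(sig_a, sig_b):
    return np.minimum(sig_a, sig_b).sum()              # histogram intersection in [0, 1]

query = np.random.randint(0, 256, (64, 64))
candidate = np.random.randint(0, 256, (64, 64))
print(similarity(residual_signature(query), residual_signature(candidate)))
```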
A comparison of select image-compression algorithms for an electronic still camera
NASA Technical Reports Server (NTRS)
Nerheim, Rosalee
1989-01-01
This effort is a study of image-compression algorithms for an electronic still camera. An electronic still camera can record and transmit high-quality images without the use of film, because images are stored digitally in computer memory. However, high-resolution images contain an enormous amount of information, and will strain the camera's data-storage system. Image compression will allow more images to be stored in the camera's memory. For the electronic still camera, a compression algorithm that produces a reconstructed image of high fidelity is most important. Efficiency of the algorithm is the second priority. High fidelity and efficiency are more important than a high compression ratio. Several algorithms were chosen for this study and judged on fidelity, efficiency and compression ratio. The transform method appears to be the best choice. At present, the method is compressing images to a ratio of 5.3:1 and producing high-fidelity reconstructed images.
Lossless medical image compression with a hybrid coder
NASA Astrophysics Data System (ADS)
Way, Jing-Dar; Cheng, Po-Yuen
1998-10-01
The volume of medical image data is expected to increase dramatically in the next decade due to the widespread use of radiological images for medical diagnosis. The economics of distributing medical images dictate that data compression is essential. Although lossy image compression exists, medical images must be recorded and transmitted losslessly before they reach the users, to avoid misdiagnosis caused by lost image data. Therefore, a low-complexity, high-performance lossless compression scheme that can approach the theoretical bound and operate in near real time is needed. In this paper, we propose a hybrid image coder to compress digitized medical images without any data loss. The hybrid coder consists of two key components: an embedded wavelet coder and a lossless run-length coder. In this system, the medical image is first compressed with the lossy wavelet coder, and the residual image between the original and the compressed version is further compressed with the run-length coder. Several optimization schemes have been used in these coders to increase the coding performance. It is shown that the proposed algorithm achieves a higher compression ratio than entropy coders such as arithmetic, Huffman, and Lempel-Ziv coders.
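The two-stage structure (a lossy first pass plus a losslessly coded residual) can be illustrated compactly. In the sketch below a coarse bit truncation stands in for the embedded wavelet coder and zlib for the run-length/entropy coder; the point is only that the lossy part plus the residual reconstructs the original exactly.

```python
# Hedged sketch of a hybrid lossless coder: lossy approximation + lossless residual.
import numpy as np
import zlib

img = np.random.randint(0, 4096, (128, 128), dtype=np.uint16)   # hypothetical 12-bit image

lossy = (img >> 3) << 3                    # stage 1: coarse (lossy) approximation
residual = (img - lossy).astype(np.uint8)  # stage 2: small residual, coded losslessly

stream = zlib.compress(lossy.tobytes()) + zlib.compress(residual.tobytes())
print(len(stream), img.nbytes)

# Exact reconstruction: lossy part plus residual gives back the original pixels.
assert np.array_equal(lossy + residual, img)
```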
A database for assessment of effect of lossy compression on digital mammograms
NASA Astrophysics Data System (ADS)
Wang, Jiheng; Sahiner, Berkman; Petrick, Nicholas; Pezeshk, Aria
2018-03-01
With widespread use of screening digital mammography, efficient storage of the vast amounts of data has become a challenge. While lossless image compression causes no risk to the interpretation of the data, it does not allow for high compression rates. Lossy compression and the associated higher compression ratios are therefore more desirable. The U.S. Food and Drug Administration (FDA) currently interprets the Mammography Quality Standards Act as prohibiting lossy compression of digital mammograms for primary image interpretation, image retention, or transfer to the patient or her designated recipient. Previous work has used reader studies to determine proper usage criteria for evaluating lossy image compression in mammography, and utilized different measures and metrics to characterize medical image quality. The drawback of such studies is that they rely on a threshold on compression ratio as the fundamental criterion for preserving the quality of images. However, compression ratio is not a useful indicator of image quality. On the other hand, many objective image quality metrics (IQMs) have shown excellent performance for natural image content for consumer electronic applications. In this paper, we create a new synthetic mammogram database with several unique features. We compare and characterize the impact of image compression on several clinically relevant image attributes such as perceived contrast and mass appearance for different kinds of masses. We plan to use this database to develop a new objective IQM for measuring the quality of compressed mammographic images to help determine the allowed maximum compression for different kinds of breasts and masses in terms of visual and diagnostic quality.
Digital data registration and differencing compression system
NASA Technical Reports Server (NTRS)
Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)
1990-01-01
A process is disclosed for x ray registration and differencing which results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x ray digital images.
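The core differencing step of the disclosed process can be illustrated as follows. Registration and the three-dimensional modeling are omitted, the reference is assumed to be already aligned, and zlib stands in for the "conventional compression algorithms" mentioned; all data are invented.

```python
# Hedged sketch: difference a subject image against a registered reference,
# compress the (sparse) difference, and reconstitute by adding the reference back.
import numpy as np
import zlib

reference = np.random.randint(0, 256, (256, 256)).astype(np.int16)   # hypothetical reference
subject = reference + np.random.randint(-3, 4, reference.shape)      # subject differs slightly

diff = (subject - reference).astype(np.int16)
compressed = zlib.compress(diff.tobytes())

restored = reference + np.frombuffer(zlib.decompress(compressed),
                                     dtype=np.int16).reshape(reference.shape)
assert np.array_equal(restored, subject)
print(len(compressed), subject.nbytes)    # the small difference compresses far better
```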
Digital Data Registration and Differencing Compression System
NASA Technical Reports Server (NTRS)
Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)
1996-01-01
A process for X-ray registration and differencing results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic X-ray digital images.
Digital data registration and differencing compression system
NASA Technical Reports Server (NTRS)
Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)
1992-01-01
A process for X-ray registration and differencing that results in more efficient compression is discussed. Differencing of a registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic X-ray digital images.
Lim, Eugene Y; Lee, Chiang; Cai, Weidong; Feng, Dagan; Fulham, Michael
2007-01-01
Medical practice is characterized by a high degree of heterogeneity in collaborative and cooperative patient care. Fast and effective communication between medical practitioners can improve patient care. In medical imaging, the fast delivery of medical reports to referring medical practitioners is a major component of cooperative patient care. Recently, mobile phones have been actively deployed in telemedicine applications, and the mobile phone is an ideal medium for achieving faster delivery of reports to referring medical practitioners. In this study, we developed an electronic medical report delivery system from a medical imaging department to the mobile phones of the referring doctors. The system extracts a text summary of the medical report and a screen capture of the diagnostic medical image in JPEG format, which are then transmitted to 3G GSM mobile phones.
NASA Astrophysics Data System (ADS)
Gong, Lihua; Deng, Chengzhi; Pan, Shumin; Zhou, Nanrun
2018-07-01
Based on a hyper-chaotic system and the discrete fractional random transform (DFrRT), an image compression-encryption algorithm is designed. The original image is first transformed into a spectrum by the discrete cosine transform, and the resulting spectrum is compressed by spectrum cutting. The random matrix of the discrete fractional random transform is controlled by a chaotic sequence originating from the high-dimensional hyper-chaotic system, and the compressed spectrum is then encrypted by the discrete fractional random transform. The order of the DFrRT and the parameters of the hyper-chaotic system are the main keys of this image compression and encryption algorithm. The proposed algorithm can compress and encrypt image signals and, in particular, can encrypt multiple images at once. To compress multiple images, the images are transformed into spectra by the discrete cosine transform, and the spectra are then cut and spliced into a composite spectrum by zigzag scanning. Simulation results demonstrate that the proposed image compression and encryption algorithm offers high security and good compression performance.
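A compact sketch of the compress-then-encrypt flow is given below. The discrete fractional random transform itself is not implemented; a permutation keyed by a logistic-map chaotic sequence is used as a simplified stand-in, and the 32x32 "spectrum cutting", the key value, and the image are illustrative assumptions.

```python
# Hedged sketch: DCT spectrum cutting (compression) followed by chaos-keyed
# scrambling (a simplified stand-in for the discrete fractional random transform).
import numpy as np
from scipy.fft import dctn, idctn

def logistic_sequence(x0, n, mu=3.99):
    seq = np.empty(n)
    for i in range(n):
        x0 = mu * x0 * (1.0 - x0)
        seq[i] = x0
    return seq

img = np.random.rand(64, 64)                     # hypothetical normalized image
spectrum = dctn(img, norm='ortho')
cut = spectrum[:32, :32]                         # spectrum cutting: keep low frequencies (4:1)

key = 0.3141592                                  # secret initial condition (the key)
perm = np.argsort(logistic_sequence(key, cut.size))
cipher = cut.flatten()[perm]                     # chaos-keyed scrambling (encryption stand-in)

# Receiver side: invert the permutation with the same key, zero-pad, inverse DCT.
plain = np.empty(cut.size)
plain[perm] = cipher
padded = np.zeros_like(spectrum)
padded[:32, :32] = plain.reshape(32, 32)
approx = idctn(padded, norm='ortho')             # approximation of the original image
```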
The Mars Hand Lens Imager (MAHLI) aboard the Mars rover, Curiosity
NASA Astrophysics Data System (ADS)
Edgett, K. S.; Ravine, M. A.; Caplinger, M. A.; Ghaemi, F. T.; Schaffner, J. A.; Malin, M. C.; Baker, J. M.; Dibiase, D. R.; Laramee, J.; Maki, J. N.; Willson, R. G.; Bell, J. F., III; Cameron, J. F.; Dietrich, W. E.; Edwards, L. J.; Hallet, B.; Herkenhoff, K. E.; Heydari, E.; Kah, L. C.; Lemmon, M. T.; Minitti, M. E.; Olson, T. S.; Parker, T. J.; Rowland, S. K.; Schieber, J.; Sullivan, R. J.; Sumner, D. Y.; Thomas, P. C.; Yingst, R. A.
2009-08-01
The Mars Science Laboratory (MSL) rover, Curiosity, is expected to land on Mars in 2012. The Mars Hand Lens Imager (MAHLI) will be used to document martian rocks and regolith with a 2-megapixel RGB color CCD camera with a focusable macro lens mounted on an instrument-bearing turret on the end of Curiosity's robotic arm. The flight MAHLI can focus on targets at working distances of 20.4 mm to infinity. At 20.4 mm, images have a pixel scale of 13.9 μm/pixel. The pixel scale at 66 mm working distance is about the same (31 μm/pixel) as that of the Mars Exploration Rover (MER) Microscopic Imager (MI). MAHLI camera head placement is dependent on the capabilities of the MSL robotic arm, the design for which presently has a placement uncertainty of ~20 mm in 3 dimensions; hence, acquisition of images at the minimum working distance may be challenging. The MAHLI consists of 3 parts: a camera head, a Digital Electronics Assembly (DEA), and a calibration target. The camera head and DEA are connected by a JPL-provided cable which transmits data, commands, and power. JPL is also providing a contact sensor. The camera head will be mounted on the rover's robotic arm turret, the DEA will be inside the rover body, and the calibration target will be mounted on the robotic arm azimuth motor housing. Camera Head. MAHLI uses a Kodak KAI-2020CM interline transfer CCD (1600 x 1200 active 7.4 μm square pixels with RGB filtered microlenses arranged in a Bayer pattern). The optics consist of a group of 6 fixed lens elements, a movable group of 3 elements, and a fixed sapphire window front element. Undesired near-infrared radiation is blocked using a coating deposited on the inside surface of the sapphire window. The lens is protected by a dust cover with a Lexan window through which imaging can be accomplished if necessary, and targets can be illuminated by sunlight or two banks of two white light LEDs. Two 365 nm UV LEDs are included to search for fluorescent materials at night. DEA and Onboard Processing. The DEA incorporates the circuit elements required for data processing, compression, and buffering. It also includes all power conversion and regulation capabilities for both the DEA and the camera head. The DEA has an 8 GB non-volatile flash memory plus 128 MB volatile storage. Images can be commanded as full-frame or sub-frame and the camera has autofocus and autoexposure capabilities. MAHLI can also acquire 720p, ~7 Hz high definition video. Onboard processing includes options for Bayer pattern filter interpolation, JPEG-based compression, and focus stack merging (z-stacking). Malin Space Science Systems (MSSS) built and will operate the MAHLI. Alliance Spacesystems, LLC, designed and built the lens mechanical assembly. MAHLI shares common electronics, detector, and software designs with the MSL Mars Descent Imager (MARDI) and the 2 MSL Mast Cameras (Mastcam). Pre-launch images of geologic materials imaged by MAHLI are online at: http://www.msss.com/msl/mahli/prelaunch_images/.
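Of the onboard processing options listed, focus stack merging (z-stacking) lends itself to a short illustration: each output pixel is taken from whichever frame of a focus series is locally sharpest. The sharpness measure and window size below are generic assumptions and not MAHLI's flight algorithm.

```python
# Hedged sketch of focus-stack merging: pick, per pixel, the frame with the
# strongest local Laplacian response (a generic sharpness proxy).
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def zstack(frames):
    sharpness = np.stack([uniform_filter(np.abs(laplace(f.astype(float))), 5) for f in frames])
    best = np.argmax(sharpness, axis=0)               # index of sharpest frame per pixel
    stack = np.stack(frames)
    return np.take_along_axis(stack, best[None], axis=0)[0]

frames = [np.random.rand(64, 64) for _ in range(5)]   # hypothetical focus series
merged = zstack(frames)
```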
Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information
NASA Technical Reports Server (NTRS)
Pence, William D.; White, R. L.; Seaman, R.
2010-01-01
We describe a compression method for floating-point astronomical images that gives compression ratios of 6 - 10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
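The quantize-with-dither step is the heart of the method and is easy to sketch: pixels are scaled by a step size tied to the measured noise, a reproducible subtractive dither is applied before rounding, and the resulting integers compress well. In the sketch below zlib stands in for the Rice coder used by fpack/funpack, and the noise level, quantization parameter, and seeds are invented.

```python
# Hedged sketch: quantization with subtractive dithering, then lossless coding.
import numpy as np
import zlib

rng = np.random.default_rng(1)
img = 100.0 + rng.normal(0.0, 5.0, (256, 256)).astype(np.float32)   # synthetic sky + noise

q = 4.0                                     # quantization levels per noise sigma
scale = 5.0 / q                             # step size tied to the noise level
dither_rng = np.random.default_rng(42)      # the seed would be stored with the data
dither = dither_rng.random(img.shape)

quantized = np.round(img / scale - dither).astype(np.int32)
compressed = zlib.compress(quantized.tobytes())

restored = (quantized + dither) * scale     # decompression re-adds the same dither
print(len(compressed) / img.nbytes)         # compressed fraction of the original size
print(np.abs(restored - img).max())         # bounded by about half a quantization step
```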
Iurov, Iu B; Khazatskiĭ, I A; Akindinov, V A; Dovgilov, L V; Kobrinskiĭ, B A; Vorsanova, S G
2000-08-01
Original software FISHMet has been developed and tried for improving the efficiency of diagnosis of hereditary diseases caused by chromosome aberrations and for chromosome mapping by fluorescent in situ hybridization (FISH) method. The program allows creation and analysis of pseudocolor chromosome images and hybridization signals in the Windows 95 system, allows computer analysis and editing of the results of pseudocolor hybridization in situ, including successive imposition of initial black-and-white images created using fluorescent filters (blue, green, and red), and editing of each image individually or of a summary pseudocolor image in BMP, TIFF, and JPEG formats. Components of image computer analysis system (LOMO, Leitz Ortoplan, and Axioplan fluorescent microscopes, COHU 4910 and Sanyo VCB-3512P CCD cameras, Miro-Video, Scion LG-3 and VG-5 image capture maps, and Pentium 100 and Pentium 200 computers) and specialized software for image capture and visualization (Scion Image PC and Video-Cup) have been used with good results in the study.
A new hyperspectral image compression paradigm based on fusion
NASA Astrophysics Data System (ADS)
Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto
2016-10-01
The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed on the satellite that carries the hyperspectral sensor; hence, this process must be performed by space-qualified hardware with area, power, and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second is to spectrally degrade the remotely sensed hyperspectral image to obtain a high-resolution multispectral image. These two degraded images are then sent to the Earth's surface, where they must be fused using a fusion algorithm for hyperspectral and multispectral images in order to recover the remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on board, becomes very simple, while the fusion process used to reconstruct the image is the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image, and the results corroborate the benefits of the proposed methodology.
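The two on-board degradation steps are simple to sketch: spatial averaging yields a low-resolution hyperspectral cube and spectral averaging yields a high-resolution multispectral cube, and only these two products are downlinked. The cube dimensions, block sizes, and band grouping below are illustrative assumptions, and the ground-side fusion step is not shown.

```python
# Hedged sketch of the two degradation products proposed for on-board compression.
import numpy as np

rows, cols, bands = 128, 128, 200
cube = np.random.rand(rows, cols, bands).astype(np.float32)   # hypothetical HSI cube

# Spatial degradation: 4x4 block averaging -> low-resolution hyperspectral image.
lr_hsi = cube.reshape(rows // 4, 4, cols // 4, 4, bands).mean(axis=(1, 3))

# Spectral degradation: average groups of 20 bands -> high-resolution multispectral image.
hr_msi = cube.reshape(rows, cols, bands // 20, 20).mean(axis=3)

original = cube.size
downlinked = lr_hsi.size + hr_msi.size
print(original / downlinked)        # compression ratio fixed in advance (~8.9x here)
```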
Data Compression Techniques for Maps
1989-01-01
Lempel-Ziv compression is applied to the classified and unclassified images as well as to the output of the compression algorithms. The algorithms ... resulted in a compression of 7:1. The output of the quadtree coding algorithm was then compressed using Lempel-Ziv coding. The compression ratio achieved ... using Lempel-Ziv coding. The unclassified image gave a compression ratio of only 1.4:1. The K-means classified image ...
Fast Lossless Compression of Multispectral-Image Data
NASA Technical Reports Server (NTRS)
Klimesh, Matthew
2006-01-01
An algorithm that effects fast lossless compression of multispectral-image data is based on low-complexity, proven adaptive-filtering algorithms. This algorithm is intended for use in compressing multispectral-image data aboard spacecraft for transmission to Earth stations. Variants of this algorithm could be useful for lossless compression of three-dimensional medical imagery and, perhaps, for compressing image data in general.
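The abstract describes low-complexity adaptive filtering across bands; the sketch below illustrates that general idea with a single adaptive gain per band pair, updated sample by sample from the transmitted residual so a decoder could mirror it. It is not the specific NASA algorithm, and the test data, step size, and band count are assumptions.

```python
# Hedged sketch: adaptive (LMS-style) inter-band prediction with residual coding.
import numpy as np
import zlib

rng = np.random.default_rng(0)
bands = np.cumsum(rng.integers(-2, 3, (8, 64 * 64)), axis=1) + 100   # 8 correlated test bands

residual_stream = [bands[0].astype(np.int32)]          # first band stored directly
for k in range(1, bands.shape[0]):
    w, mu = 1.0, 1e-6                                  # adaptive gain and step size
    prev, cur = bands[k - 1], bands[k]
    res = np.empty(cur.size, dtype=np.int32)
    for i in range(cur.size):
        pred = int(round(w * prev[i]))                 # predict from the previous band
        res[i] = cur[i] - pred
        w += mu * res[i] * prev[i]                     # update from the coded residual, so
    residual_stream.append(res)                        # a decoder can repeat the same steps

payload = zlib.compress(np.concatenate(residual_stream).tobytes())
print(len(payload) / bands.astype(np.int32).nbytes)    # compressed fraction
```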
Optimal Compression Methods for Floating-point Format Images
NASA Technical Reports Server (NTRS)
Pence, W. D.; White, R. L.; Seaman, R.
2009-01-01
We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2-1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers which are compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced; however, all the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel values can significantly improve the photometric and astrometric precision in the stellar images in the compressed file without adding additional noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.
Outer planet Pioneer imaging communications system study. [data compression
NASA Technical Reports Server (NTRS)
1974-01-01
The effects of different types of imaging data compression on the elements of the Pioneer end-to-end data system were studied for three imaging transmission methods. These were: no data compression, moderate data compression, and the advanced imaging communications system. It is concluded that: (1) the value of data compression is inversely related to the downlink telemetry bit rate; (2) the rolling characteristics of the spacecraft limit the selection of data compression ratios; and (3) data compression might be used to perform an acceptable outer planet mission at reduced downlink telemetry bit rates.
Compressive sensing in medical imaging
Graff, Christian G.; Sidky, Emil Y.
2015-01-01
The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed. PMID:25968400
Black Hole in Search of a Home
NASA Astrophysics Data System (ADS)
2005-09-01
Astronomers Discover Bright Quasar Without Massive Host Galaxy An international team of astronomers [1] used two of the most powerful astronomical facilities available, the ESO Very Large Telescope (VLT) at Cerro Paranal and the Hubble Space Telescope (HST), to conduct a detailed study of 20 low redshift quasars. For 19 of them, they found, as expected, that these super massive black holes are surrounded by a host galaxy. But when they studied the bright quasar HE0450-2958, located some 5 billion light-years away, they couldn't find evidence for an encircling galaxy. This, the astronomers suggest, may indicate a rare case of collision between a seemingly normal spiral galaxy and a much more exotic object harbouring a very massive black hole. With masses up to hundreds of millions that of the Sun, "super massive" black holes are the most tantalizing objects known. Hiding in the centre of most large galaxies, including our own Milky Way (see ESO PR 26/03), they sometimes manifest themselves by devouring matter they engulf from their surroundings. Shining up to the largest distances, they are then called "quasars" or "QSOs" (for "quasi-stellar objects"), as they had initially been confused with stars. Decades of observations of quasars have suggested that they are always associated with massive host galaxies. However, observing the host galaxy of a quasar is a challenging work, because the quasar is radiating so energetically that its host galaxy is hard to detect in the flare. ESO PR Photo 28a/05 ESO PR Photo 28a/05 Two Quasars with their Host Galaxy [Preview - JPEG: 400 x 760 pix - 82k] [Normal - JPEG: 800 x 1520 pix - 395k] [Full Res - JPEG: 1722 x 3271 pix - 4.0M] Caption: ESO PR Photo 28a/05 shows two examples of quasars from the sample studied by the astronomers, where the host galaxy is obvious. In each case, the quasar is the bright central spot. The host of HE1239-2426 (left), a z=0.082 quasar, displays large spiral arms, while the host of HE1503+0228 (right), having a redshift of 0.135, is more fuzzy and shows only hints of spiral arms. Although these particular objects are rather close to us and constitute therefore easy targets, their host would still be perfectly visible at much higher redshift, including at distances as large as the one of HE0450-2958 (z=0.285). The observations were done with the ACS camera on the HST. ESO PR Photo 28b/05 ESO PR Photo 28b/05 The Quasar without a Home: HE0450-2958 [Preview - JPEG: 400 x 760 pix - 53k] [Normal - JPEG: 800 x 1520 pix - 197k] [Full Res - JPEG: 1718 x 3265 pix - 1.5M] Caption of ESO PR Photo 28b/05: (Left) HST image of the z=0.285 quasar HE0450-2958. No obvious host galaxy centred on the quasar is seen. Only a strongly disturbed and star forming companion galaxy is seen near the top of the image. (Right) Same image shown after applying an efficient image sharpening method known as MCS-deconvolution. In contrast to the usual cases, as the ones shown in ESO PR Photo 28a/05, the quasar is not situated at the centre of an extended host galaxy, but on the edge of a compact structure, whose spectra (see ESO PR Photo 28c/05) show it to be composed of gas ionised by the quasar radiation. This gas may have been captured through a collision with the star-forming galaxy. The star indicated on the figure is a nearby galactic star seen by chance in the field of view. To overcome this problem, the astronomers devised a new and highly efficient strategy. 
Using ESO's VLT for spectroscopy and HST for imagery, they observed their quasars at the same time as a reference star. Simultaneous observation of a star allowed them to measure at best the shape of the quasar point source on spectra and images, and further to separate the quasar light from the other contribution, i.e. from the underlying galaxy itself. This very powerful image and spectra sharpening method ("MCS deconvolution") was applied to these data in order to detect the finest details of the host galaxy (see e.g. ESO PR 19/03). Using this efficient technique, the astronomers could detect a host galaxy for all but one of the quasars they studied. No stellar environment was found for HE0450-2958, suggesting that if any host galaxy exists, it must either have a luminosity at least six times fainter than expected a priori from the quasar observed luminosity, or a radius smaller than about 300 light-years. Typical radii for quasar host galaxies range between 6,000 and 50,000 light-years, i.e. they are at least 20 to 170 times larger. "With the data we managed to secure with the VLT and the HST, we would have been able to detect a normal host galaxy", says Pierre Magain (Université de Liège, Belgium), lead author of the paper reporting the study. "We must therefore conclude that, contrary to our expectations, this bright quasar is not surrounded by a massive galaxy." Instead, the astronomers detected just besides the quasar a bright cloud of about 2,500 light-years in size, which they baptized "the blob". The VLT observations show this cloud to be composed only of gas ionised by the intense radiation coming from the quasar. It is probably the gas of this cloud which is feeding the supermassive black hole, allowing it to become a quasar. ESO PR Photo 28c/05 ESO PR Photo 28c/05 Spectrum of Quasar HE0450-2958, the Blob and the Companion Galaxy (FORS/VLT) [Preview - JPEG: 400 x 561 pix - 112k] [Normal - JPEG: 800 x 1121 pix - 257k] [HiRes - JPEG: 2332 x 3268 pix - 1.1M] Caption: ESO PR Photo 28c/05 presents the spectra of the three objects indicated in ESO PR Photo 28b/05 as obtained with FORS1 on ESO's Very Large Telescope. The spectrum of the companion galaxy shown on the top panel reveals strong star formation. Thanks to the image sharpening process, it has been possible to separate very well the spectra of the quasar (centre) from that of the blob (bottom). The spectrum of the blob shows exclusively strong narrow emission lines having properties indicative of ionisation by the quasar light. There is no trace of stellar light, down to very faint levels, in the surrounding of the quasar. A strongly perturbed galaxy, showing all signs of a recent collision, is also seen on the HST images 2 arcseconds away (corresponding to about 50,000 light-years), with the VLT spectra showing it to be presently in a state where it forms stars at a frantic rate. "The absence of a massive host galaxy, combined with the existence of the blob and the star-forming galaxy, lead us to believe that we have uncovered a really exotic quasar, says team member Frédéric Courbin (Ecole Polytechnique Fédérale de Lausanne, Switzerland). "There is little doubt that a burst in the formation of stars in the companion galaxy and the quasar itself have been ignited by a collision that must haven taken place about 100 million years ago. What happened to the putative quasar host remains unknown." HE0450-2958 constitutes a challenging case of interpretation. 
The astronomers propose several possible explanations, that will need to be further investigated and confronted. Has the host galaxy been completely disrupted as a result of the collision? It is hard to imagine how that could happen. Has an isolated black hole captured gas while crossing the disc of a spiral galaxy? This would require very special conditions and would probably not have caused such a tremendous perturbation as is observed in the neighbouring galaxy. Another intriguing hypothesis is that the galaxy harbouring the black hole was almost exclusively made of dark matter. "Whatever the solution of this riddle, the strong observable fact is that the quasar host galaxy, if any, is much too faint", says team member Knud Jahnke (Astrophysikalisches Institut Potsdam, Germany). The report on HE0450-2958 is published in the September 15, 2005 issue of the journal Nature ("Discovery of a bright quasar without a massive host galaxy" by Pierre Magain et al.).
Wienert, Stephan; Beil, Michael; Saeger, Kai; Hufnagl, Peter; Schrader, Thomas
2009-01-01
Background: Virtual microscopy is widely accepted in pathology for educational purposes and teleconsultation but is far from routine use in surgical pathology due to the technical requirements and some limitations. A technical problem is the limited bandwidth of a typical network and the resulting delays in transmission rate and presentation time on the screen. Methods: In this study the process of secondary diagnosis was evaluated using the "T.Konsult Pathologie" service of the Professional Association of German Pathologists within the German breast cancer screening program. The characteristics of access to the WSI (Whole Slide Images) were analyzed to explore the possibilities of prefetching and caching to reduce the presentation and transfer time, with the goal of increasing user acceptance. The log files of the web server were analyzed to reconstruct the movements of the pathologist on the WSI and to create the observation path. Using a specialized tool, the observation paths were extracted automatically from the log files. The attributes linearity, 3-point-linearity, changes per request, and number of consecutive requests were calculated to design, develop and evaluate different caching and prefetching strategies. Results: The analysis of the observation paths showed that a complete accordance of two image requests is a very rare event, but a partial overlap of two requested image areas is found more frequently. In total, 257 diagnostic paths from 131 WSI were extracted and analysed. On average a diagnostic path consists of 16 image requests and takes 189 seconds between the first and last image request. The mean linearity was 0.41 and the mean 3-point-linearity 0.85. Three different caching algorithms were compared with respect to hit rate and additional image requests on the WSI server. Tests demonstrated that 95% of the diagnostic paths could be loaded without any deletion of entries in the cache (cache size 12.2 megapixels). If the image parts are stored after JPEG compression, this corresponds to less than 2 MB. Discussion: WSI telepathology is a technology which offers the possibility of breaking the limitations of conventional static telepathology. The complete histological slide may be investigated instead of sets of images of lesions sampled by the presenting pathologist. The benefit is demonstrated by the high diagnostic reliability, with 95% accordance between first and second diagnoses. PMID:19134181
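A toy illustration of how a tile cache could be replayed against such observation paths to measure hit rate; the LRU policy, tile size and identifiers below are assumptions, not one of the three caching algorithms evaluated in the study.

```python
from collections import OrderedDict

class TileCache:
    """Least-recently-used cache for WSI tiles, sized in megapixels (illustrative)."""
    def __init__(self, capacity_mpx=12.2, tile_mpx=0.065):   # roughly 256 x 256 tiles
        self.capacity = int(capacity_mpx / tile_mpx)
        self.tiles = OrderedDict()
        self.hits = self.misses = 0

    def request(self, tile_id):
        if tile_id in self.tiles:
            self.tiles.move_to_end(tile_id)
            self.hits += 1
        else:
            self.misses += 1
            self.tiles[tile_id] = True
            if len(self.tiles) > self.capacity:
                self.tiles.popitem(last=False)    # evict the least recently used tile

# Replaying a recorded observation path (a list of requested tile ids) yields the hit rate.
cache = TileCache()
for tile_id in ["t_10_4", "t_10_5", "t_10_4", "t_11_5"]:    # hypothetical path
    cache.request(tile_id)
hit_rate = cache.hits / (cache.hits + cache.misses)
```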
The compression and storage method of the same kind of medical images: DPCM
NASA Astrophysics Data System (ADS)
Zhao, Xiuying; Wei, Jingyuan; Zhai, Linpei; Liu, Hong
2006-09-01
Medical imaging has started to take advantage of digital technology, opening the way for advanced medical imaging and teleradiology. Medical images, however, require large amounts of memory. At over 1 million bytes per image, a typical hospital needs a staggering amount of memory storage (over one trillion bytes per year), and transmitting an image over a network (even the promised superhighway) could take minutes--too slow for interactive teleradiology. This calls for image compression to reduce significantly the amount of data needed to represent an image. Several compression techniques with different compression ratios have been developed. However, the lossless techniques, which allow for perfect reconstruction of the original images, yield modest compression ratios, while the techniques that yield higher compression ratios are lossy, that is, the original image is reconstructed only approximately. Medical imaging poses the great challenge of having compression algorithms that are lossless (for diagnostic and legal reasons) and yet have high compression ratios for reduced storage and transmission time. To meet this challenge, we are developing and studying compression schemes which are either strictly lossless or diagnostically lossless, taking advantage of the peculiarities of medical images and of medical practice. In order to increase the signal-to-noise ratio (SNR) by exploiting correlations within the source signal, a method employing differential pulse code modulation (DPCM) is presented.
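A minimal sketch of the DPCM idea on one image row: only the first sample and the sample-to-sample differences are stored, so correlated pixels produce small residuals that entropy-code well. Function names are illustrative.

```python
import numpy as np

def dpcm_encode(row):
    """1-D DPCM: keep the first sample plus differences to the previous sample."""
    row = row.astype(np.int32)
    return np.concatenate(([row[0]], np.diff(row)))

def dpcm_decode(code):
    """Invert DPCM by cumulative summation."""
    return np.cumsum(code)

row = np.array([100, 101, 103, 103, 102], dtype=np.uint8)
code = dpcm_encode(row)          # [100, 1, 2, 0, -1]: small, easily entropy-coded residuals
assert np.array_equal(dpcm_decode(code), row.astype(np.int32))
```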
Subjective evaluation of compressed image quality
NASA Astrophysics Data System (ADS)
Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin
1992-05-01
Lossy data compression generates distortion or error in the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears differently depending on the compression method used. Because of the nonlinearity of the human visual system and of lossy data compression methods, we have subjectively evaluated the quality of medical images compressed with two different methods, an intraframe and an interframe coding algorithm. The raw evaluation data were analyzed statistically to measure interrater reliability and the reliability of an individual reader. Also, analysis of variance was used to identify which compression method is statistically better, and from what compression ratio the quality of a compressed image is evaluated as poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists participated in reading the 99 images (some were duplicates) compressed at four different compression ratios: original, 5:1, 10:1, and 15:1. The six readers agreed more than by chance alone and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement-estimated interframe coding algorithm is significantly better in quality than the 2-D block DCT at significance level 0.05. Also, 10:1 compressed images with the interframe coding algorithm do not show any significant differences from the original at level 0.05.
JPEG2000-coded image error concealment exploiting convex sets projections.
Atzori, Luigi; Ginesu, Giaime; Raccis, Alessio
2005-04-01
Transmission errors in JPEG2000 can be grouped into three main classes, depending on the affected area: the LL subband, high frequencies at the lower decomposition levels, and high frequencies at the higher decomposition levels. The first type of error is the most annoying but can be concealed by exploiting the spatial correlation of the signal, as in a number of techniques proposed in the past; the second is less annoying but more difficult to address; the latter is often imperceptible. In this paper, we address the problem of concealing the second class of errors when high bit-planes are damaged, by proposing a new approach based on the theory of projections onto convex sets. Accordingly, the error effects are masked by iteratively applying two procedures: low-pass (LP) filtering in the spatial domain and restoration of the uncorrupted wavelet coefficients in the transform domain. It has been observed that uniform LP filtering brought some undesired side effects that negatively compensated the advantages. This problem has been overcome by applying an adaptive solution, which exploits an edge map to choose the optimal filter mask size. Simulation results demonstrate the efficiency of the proposed approach.
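A simplified, non-adaptive sketch of the POCS iteration described above, using PyWavelets: spatial low-pass filtering alternates with restoration of the wavelet coefficients assumed uncorrupted (here, everything except the finest-level details). The uniform 3x3 filter, the db4 wavelet and the damage model are illustrative assumptions; the paper's edge-adaptive choice of filter mask size is not reproduced.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def pocs_conceal(damaged, wavelet="db4", level=3, iters=20):
    """Alternate a spatial smoothing projection with restoration of the wavelet
    coefficients assumed uncorrupted (all but the finest-level details).
    Assumes image dimensions divisible by 2**level."""
    ref = pywt.wavedec2(damaged, wavelet, level=level)   # coefficients known from the bitstream
    img = damaged.astype(float)
    for _ in range(iters):
        img = uniform_filter(img, size=3)                # projection 1: low-pass in space
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        coeffs = list(ref[:-1]) + [coeffs[-1]]           # projection 2: restore known coefficients
        img = pywt.waverec2(coeffs, wavelet)
    return img
```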
Image-adapted visually weighted quantization matrices for digital image compression
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1994-01-01
A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques and an error pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
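A sketch of quantizing an 8x8 block's DCT coefficients with a visually weighted matrix; the particular matrix below (steps growing with spatial frequency) is a hypothetical stand-in, since the patented method derives its matrix from luminance and contrast masking and error pooling for the specific image.

```python
import numpy as np
from scipy.fft import dctn, idctn

def quantize_block(block, qmatrix):
    """Quantize an 8x8 block's DCT coefficients with a visually weighted matrix."""
    coeffs = dctn(block.astype(float) - 128.0, norm="ortho")
    return np.rint(coeffs / qmatrix)

def dequantize_block(quantized, qmatrix):
    return idctn(quantized * qmatrix, norm="ortho") + 128.0

# Hypothetical weighting: larger steps at high spatial frequencies, to which the eye
# is less sensitive; the patented method instead adapts the matrix to the image itself.
u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
qmatrix = 8.0 + 4.0 * (u + v)
```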
Toward an image compression algorithm for the high-resolution electronic still camera
NASA Technical Reports Server (NTRS)
Nerheim, Rosalee
1989-01-01
Taking pictures with a camera that uses a digital recording medium instead of film has the advantage of recording and transmitting images without the use of a darkroom or a courier. However, high-resolution images contain an enormous amount of information and strain data-storage systems. Image compression will allow multiple images to be stored in the High-Resolution Electronic Still Camera. The camera is under development at Johnson Space Center. Fidelity of the reproduced image and compression speed are of paramount importance. Lossless compression algorithms are fast and faithfully reproduce the image, but their compression ratios will be unacceptably low due to noise in the front end of the camera. Future efforts will include exploring methods that will reduce the noise in the image and increase the compression ratio.
First-Ever Census of Variable Mira-Type Stars in Galaxy Outside the Local Group
NASA Astrophysics Data System (ADS)
2003-05-01
First-Ever Census of Variable Mira-Type Stars in Galaxy Outsidethe Local Group Summary An international team led by ESO astronomer Marina Rejkuba [1] has discovered more than 1000 luminous red variable stars in the nearby elliptical galaxy Centaurus A (NGC 5128) . Brightness changes and periods of these stars were measured accurately and reveal that they are mostly cool long-period variable stars of the so-called "Mira-type" . The observed variability is caused by stellar pulsation. This is the first time a detailed census of variable stars has been accomplished for a galaxy outside the Local Group of Galaxies (of which the Milky Way galaxy in which we live is a member). It also opens an entirely new window towards the detailed study of stellar content and evolution of giant elliptical galaxies . These massive objects are presumed to play a major role in the gravitational assembly of galaxy clusters in the Universe (especially during the early phases). This unprecedented research project is based on near-infrared observations obtained over more than three years with the ISAAC multi-mode instrument at the 8.2-m VLT ANTU telescope at the ESO Paranal Observatory . PR Photo 14a/03 : Colour image of the peculiar galaxy Centaurus A . PR Photo 14b/03 : Location of the fields in Centaurus A, now studied. PR Photo 14c/03 : "Field 1" in Centaurus A (visual light; FORS1). PR Photo 14d/03 : "Field 2" in Centaurus A (visual light; FORS1). PR Photo 14e/03 : "Field 1" in Centaurus A (near-infrared; ISAAC). PR Photo 14f/03 : "Field 2" in Centaurus A (near-infrared; ISAAC). PR Photo 14g/03 : Light variation of six variable stars in Centaurus A PR Photo 14h/03 : Light variation of stars in Centaurus A (Animated GIF) PR Photo 14i/03 : Light curves of four variable stars in Centaurus A. Mira-type variable stars Among the stars that are visible in the sky to the unaided eye, roughly one out of three hundred (0.3%) displays brightness variations and is referred to by astronomers as a "variable star". The percentage is much higher among large, cool stars ("red giants") - in fact, almost all luminous stars of that type are variable. Such stars are known as Mira-variables ; the name comes from the most prominent member of this class, Omicron Ceti in the constellation Cetus (The Whale), also known as "Stella Mira" (The Wonderful Star). Its brightness changes with a period of 332 days and it is about 1500 times brighter at maximum (visible magnitude 2 and one of the fifty brightest stars in the sky) than at minimum (magnitude 10 and only visible in small telescopes) [2]. Stars like Omicron Ceti are nearing the end of their life. They are very large and have sizes from a few hundred to about a thousand times that of the Sun. The brightness variation is due to pulsations during which the star's temperature and size change dramatically. In the following evolutionary phase, Mira-variables will shed their outer layers into surrounding space and become visible as planetary nebulae with a hot and compact star (a "white dwarf") at the middle of a nebula of gas and dust (cf. the "Dumbbell Nebula" - ESO PR Photo 38a-b/98 ). Several thousand Mira-type stars are currently known in the Milky Way galaxy and a few hundred have been found in other nearby galaxies, including the Magellanic Clouds. 
The peculiar galaxy Centaurus A ESO PR Photo 14a/03 ESO PR Photo 14a/03 [Preview - JPEG: 400 x 451 pix - 53k [Normal - JPEG: 800 x 903 pix - 528k] [Hi-Res - JPEG: 3612 x 4075 pix - 8.4M] ESO PR Photo 14b/03 ESO PR Photo 14b/03 [Preview - JPEG: 570 x 400 pix - 52k [Normal - JPEG: 1140 x 800 pix - 392k] ESO PR Photo 14c/03 ESO PR Photo 14c/03 [Preview - JPEG: 400 x 451 pix - 61k [Normal - JPEG: 800 x 903 pix - 768k] ESO PR Photo 14d/03 ESO PR Photo 14d/03 [Preview - JPEG: 400 x 451 pix - 56k [Normal - JPEG: 800 x 903 pix - 760k] Captions : PR Photo 14a/03 is a colour composite photo of the peculiar galaxy Centaurus A (NGC 5128) , obtained with the Wide-Field Imager (WFI) camera at the ESO/MPG 2.2-m telescope on La Silla. It is based on a total of nine 3-min exposures made on March 25, 1999, through different broad-band optical filters (B(lue) - total exposure time 9 min - central wavelength 456 nm - here rendered as blue; V(isual) - 540 nm - 9 min - green; I(nfrared) - 784 nm - 9 min - red); it was prepared from files in the ESO Science Data Archive by ESO-astronomer Benoît Vandame . The elliptical shape and the central dust band, the imprint of a galaxy collision, are well visible. PR Photo 14b/03 identifies the two regions of Centaurus A (the rectangles in the upper left and lower right inserts) in which a search for variable stars was made during the present research project: "Field 1" is located in an area north-east of the center in which many young stars are present. This is also the direction in which an outflow ("jet") is seen on deep optical and radio images. "Field 2" is positioned in the galaxy's halo, south of the centre. High-resolution, very deep colour photos of these two fields and their immediate surroundings are shown in PR Photos 14c-d/03 . They were produced by means of CCD-frames obtained in July 1999 through U- and V-band optical filters with the VLT FORS1 multi-mode instrument at the 8.2-m VLT ANTU telescope on Paranal. Note the great variety of object types and colours, including many background galaxies which are seen through these less dense regions of Centaurus A . The total exposure time was 30 min in each filter and the seeing was excellent, 0.5 arcsec. The original pixel size is 0.196 arcsec and the fields measure 6.7 x 6.7 arcmin 2 (2048 x 2048 pix 2 ). North is up and East is left on all photos. Centaurus A (NGC 5128) is the nearest giant galaxy, at a distance of about 13 million light-years. It is located outside the Local Group of Galaxies to which our own galaxy, the Milky Way, and its satellite galaxies, the Magellanic Clouds, belong. Centaurus A is seen in the direction of the southern constellation Centaurus. It is of elliptical shape and is currently merging with a companion galaxy, making it one of the most spectacular objects in the sky, cf. PR Photo 14a/03 . It possesses a very heavy black hole at its centre (see ESO PR 04/01 ) and is a source of strong radio and X-ray emission. During the present research programme, two regions in Centaurus A were searched for stars of variable brightness; they are located in the periphery of this peculiar galaxy, cf. PR Photos 14b-d/03 . An outer field ("Field 1") coincides with a stellar shell with many blue and luminous stars produced by the on-going galaxy merger; it lies at a distance of 57,000 light-years from the centre. The inner field ("Field 2") is more crowded and is situated at a projected distance of about 30,000 light-years from the centre.. 
Three years of VLT observations ESO PR Photo 14e/03 ESO PR Photo 14e/03 [Preview - JPEG: 400 x 447 pix - 120k [Normal - JPEG: 800 x 894 pix - 992k] ESO PR Photo 14f/03 ESO PR Photo 14f/03 [Preview - JPEG: 400 x 450 pix - 96k [Normal - JPEG: 800 x 899 pix - 912k] Caption : PR Photos 14e-f/03 are colour composites of two small fields ("Field 1" and "Field 2") in the peculiar galaxy Centaurus A (NGC 5128) , based on exposures through three near-infrared filters (the J-, H- and K-bands at wavelengths 1.2, 1.6 and 2.2 µm, respectively) with the ISAAC multi-mode instrument at the 8.2-m VLT ANTU telescope at the ESO Paranal observatory. The corresponding areas are outlined within the two inserts in PR Photo 14b/03 and may be compared with the visual images from FORS1 ( PR Photos 14c-d/03 ). These ISAAC photos are the deepest near-infrared images ever obtained in this galaxy and show thousands of its stars of different colours. In the present colour-coding, the redder an image, the cooler is the star. The original pixel size is 0.15 arcsec and both fields measure 2.5 x 2.5 arcmin 2. North is up and East is left. Under normal circumstances, any team of professional astronomers will have access to the largest telescopes in the world for only a very limited number of consecutive nights each year. However, extensive searches for variable stars like the present require repeated observations lasting minutes-to-hours over periods of months-to-years. It is thus not feasible to perform such observations in the classical way in which the astronomers travel to the telescope each time. Fortunately, the operational system of the VLT at the ESO Paranal Observatory (Chile) is also geared to encompass this kind of long-term programme. Between April 1999 and July 2002, the 8.2-m VLT ANTU telescope on Cerro Paranal in Chile) was operated in service mode on many occasions to obtain K-band images of the two fields in Centaurus A by means of the near-infrared ISAAC multi-mode instrument. Each field was observed over 20 times in the course of this three-year period ; some of the images were obtained during exceptional seeing conditions of 0.30 arcsec. One set of complementary optical images was obtained with the FORS1 multi-mode instrument (also on VLT ANTU) in July 1999. Each image from the ISAAC instrument covers a sky field measuring 2.5 x 2.5 arcmin 2. The combined images, encompassing a total exposure of 20 hours are indeed the deepest infrared images ever made of the halo of any galaxy as distant as Centaurus A , about 13 million light-years. Discovering one thousand Mira variables ESO PR Photo 14g/03 ESO PR Photo 14g/03 [Preview - JPEG: 400 x 480 pix - 61k [Normal - JPEG: 800 x 961 pix - 808k] ESO PR Photo 14h/03 ESO PR Photo 14h/03 [Animated GIF: 263 x 267 pix - 56k ESO PR Photo 14i/03 ESO PR Photo 14i/03 [Preview - JPEG: 480 x 400 pix - 33k [Normal - JPEG: 959 x 800 pix - 152k] Captions : PR Photo 14g/03 shows a zoomed-in area within "Field 2" in Centaurus A , from the ISAAC colour image shown in PR Photo 14e/03 . Nearly all red stars in this area are of the variable Mira-type. The brightness variation of some stars (labelled A-D) is demonstrated in the animated-GIF image PR Photo 14h/03 . The corresponding light curves (brightness over the pulsation period) are shown in PR Photo 14i/03 . Here the abscissa indicates the pulsation phase (one full period corresponds to the interval from 0 to 1) and the ordinate unit is near-infrared K s -magnitude. 
One magnitude corresponds to a difference in brightness of a factor 2.5. Once the lengthy observations were completed, two further steps were needed to identify the variable stars in Centaurus A . First, each ISAAC frame was individually processed to identify the thousands and thousands of faint point-like images (stars) visible in these fields. Next, all images were compared using a special software package ("DAOPHOT") to measure the brightness of all these stars in the different frames, i.e., as a function of time. While most stars in these fields as expected were found to have constant brightness, more than 1000 stars displayed variations in brightness with time; this is by far the largest number of variable stars ever discovered in a galaxy outside the Local Group of Galaxies. The detailed analysis of this enormous dataset took more than a year. Most of the variable stars were found to be of the Mira-type and their light curves (brightness over the pulsation period) were measured, cf. PR Photo 14i/03 . For each of them, values of the characterising parameters, the period (days) and brightness amplitude (magnitudes) were determined. A catalogue of the newly discovered variable stars in Centaurus A has now been made available to the astronomical community via the European research journal Astronomy & Astrophysics. Marina Rejkuba is pleased and thankful: "We are really very fortunate to have carried out this ambitious project so successfully. It all depended critically on different factors: the repeated granting of crucial observing time by the ESO Observing Programmes Committee over different observing periods in the face of rigorous international competition, the stability and reliability of the telescope and the ISAAC instrument over a period of more than three years and, not least, the excellent quality of the service mode observations, so efficiently performed by the staff at the Paranal Observatory." What have we learned about Centaurus A? The present study of variable stars in this giant elliptical galaxy is the first-ever of its kind. Although the evaluation of the very large observational data material is still not finished, it has already led to a number of very useful scientific results. Confirmation of the presence of an intermediate-age population Based on earlier research (optical and near-IR colour-magnitude diagrams of the stars in the fields), the present team of astronomers had previously detected the presence of intermediate-age and young stellar populations in the halo of this galaxy. The youngest stars appear to be aligned with the powerful jet produced by the massive black hole at the centre. Some of the very luminous red variable stars now discovered confirm the presence of a population of intermediate-age stars in the halo of this galaxy. It also contributes to our understanding of how giant elliptical galaxies form. New measurement of the distance to Centaurus A The pulsation of Mira-type variable stars obeys a period-luminosity relation. The longer its period, the more luminous is a Mira-type star. This fact makes it possible to use Mira-type stars as "standard candles" (objects of known intrinsic luminosity) for distance determinations. They have in fact often been used in this way to measure accurate distances to more nearby objects, e.g., to individual clusters of stars and to the center in our Milky Way galaxy, and also to galaxies in the Local Group, in particular the Magellanic Clouds. 
This method works particularly well with infrared measurements and the astronomers were now able to measure the distance to Centaurus A in this new way. They found 13.7 ± 1.9 million light-years, in general agreement with and thus confirming other methods. Study of stellar population gradients in the halo of a giant elliptical galaxy The two fields here studied contain different populations of stars. A clear dependence on the location (a "gradient") within the galaxy is observed, which can be due to differences in chemical composition or age, or to a combination of both. Understanding the cause of this gradient will provide additional clues to how Centaurus A - and indeed all giant elliptical galaxies - was formed and has since evolved. Comparison with other well-known nearby galaxies Past searches have discovered Mira-type variable stars throughout the Milky Way, our home galaxy, and in other nearby galaxies in the Local Group. However, there are no giant elliptical galaxies like Centaurus A in the Local Group and this is the first time it has been possible to identify this kind of star in that type of galaxy. The present investigation now opens a new window towards studies of the stellar constituents of such galaxies.
NASA Astrophysics Data System (ADS)
2000-01-01
VLT MELIPAL Achieves Successful "First Light" in Record Time This was a night to remember at the ESO Paranal Observatory! For the first time, three 8.2-m VLT telescopes were observing in parallel, with a combined mirror surface of nearly 160 m 2. In the evening of January 26, the third 8.2-m Unit Telescope, MELIPAL ("The Southern Cross" in the Mapuche language), was pointed to the sky for the first time and successfully achieved "First Light". During this night, a number of astronomical exposures were made that served to evaluate provisionally the performance of the new telescope. The ESO staff expressed great satisfaction with MELIPAL and there were broad smiles all over the mountain. The first images ESO PR Photo 04a/00 ESO PR Photo 04a/00 [Preview - JPEG: 400 x 352 pix - 95k] [Normal - JPEG: 800 x 688 pix - 110k] Caption : ESO PR Photo 04a/00 shows the "very first light" image for MELIPAL . It is that of a relatively bright star, as recorded by the Guide Probe at about 21:50 hrs local time on January 26, 2000. It is a 0.1 sec exposure, obtained after preliminary adjustment of the optics during a few iterations with the computer controlled "active optics" system. The image quality is measured as 0.46 arcsec FWHM (Full-Width at Half Maximum). ESO PR Photo 04b/00 ESO PR Photo 04b/00 [Preview - JPEG: 400 x 429 pix - 39k] [Normal - JPEG: 885 x 949 pix - 766k] Caption : ESO PR Photo 04b/00 shows the central region of the Crab Nebula, the famous supernova remnant in the constellation Taurus (The Bull). It was obtained early in the night of "First Light" with the third 8.2-m VLT Unit Telescope, MELIPAL . It is a composite of several 30-sec exposures with the VLT Test Camera in three broad-band filters, B (here rendered as blue; most synchrotron emission), V (green) and R (red; mostly emission from hydrogen atoms). The Crab Pulsar is visible to the left; it is the lower of the two brightest stars near each other. The image quality is about 0.9 arcsec, and is completely determined by the external seeing caused by the atmospheric turbulence above the telescope at the time of the observation. The coloured, vertical lines to the left are artifacts of a "bad column" of the CCD. The field measures about 1.3 x 1.3 arcmin 2. This image may be compared with that of the same area that was recently obtained with the FORS2 instrument at KUEYEN ( PR Photo 40g/99 ). Following two days of preliminary adjustments after the installation of the secondary mirror, cf. ESO PR Photos 03a-n/00 , MELIPAL was pointed to the sky above Paranal for the first time, soon after sunset in the evening of January 26. The light of a bright star was directed towards the Guide Probe camera, and the VLT Commissioning Team, headed by Dr. Jason Spyromilio , initiated the active optics procedure . This adjusts the 150 computer-controlled supports under the main 8.2-m Zerodur mirror as well as the position of the secondary 1.1-m Beryllium mirror. After just a few iterations, the optical quality of the recorded stellar image was measured as 0.46 arcsec ( PR Photo 04a/00 ), a truly excellent value, especially at this stage! Immediately thereafter, at 22:16 hrs local time (i.e., at 01:16 hrs UT on January 27), the shutter of the VLT Test Camera at the Cassegrain focus was opened. A 1-min exposure was made through a R(ed) optical filter of a distant star cluster in the constellation Eridanus (The River). The light from its faint stars was recorded by the CCD at the focal plane and the resulting frame was read into the computer. 
Despite the comparatively short exposure time, myriads of stars were seen when this "first frame" was displayed on the computer screen. Moreover, the sizes of these images were found to be virtually identical to the 0.6 arcsec seeing measured simultaneously with a monitor telescope, outside the telescope enclosure. This confirmed that MELIPAL was in very good shape. Nevertheless, these very first images were still slightly elongated and further optical adjustments and tests were therefore made to eliminate this unwanted effect. It is a tribute to the extensive experience and fine skills of the ESO staff that within only 1 hour, a 30 sec exposure of the central region of the Crab Nebula in Taurus with round images was obtained, cf. PR Photo 04b/00 . The ESO Director General, Dr. Catherine Cesarsky , who assumed her function in September 1999, was present in the Control Room during these operations. She expressed great satisfaction with the excellent result and warmly congratulated the ESO staff to this achievement. She was particularly impressed with the apparent ease with which a completely new telescope of this size could be adjusted in such a short time. A part of her statement on this occasion was recorded on ESO PR Video Clip 02/00 that accompanies this Press Release. Three telescopes now in operation at Paranal At 02:30 UT on January 27, 2000, three VLT Unit Telescopes were observing in parallel, with measured seeing values of 0.6 arcsec ( ANTU - "The Sun"), 0.7 arcsec ( KUEYEN -"The Moon") and 0.7 arcsec ( MELIPAL ). MELIPAL has now joined ANTU and KUEYEN that had "First Light" in May 1998 and March 1999, respectively. The fourth VLT Unit Telescope, YEPUN ("Sirius") will become operational later this year. While normal scientific observations continue with ANTU , the UVES and FORS2 astronomical instruments are now being commissioned at KUEYEN , before this telescope will be handed over to the astronomers on April 1, 2000. The telescope commissioning period will now start for MELIPAL , after which its first instrument, VIMOS will be installed later this year. Impressions from the MELIPAL "First Light" event First Light for MELIPAL ESO PR Video Clip 02/00 "First Light for MELIPAL" (3350 frames/2:14 min) [MPEG Video+Audio; 160x120 pix; 3.1Mb] [MPEG Video+Audio; 320x240 pix; 9.4 Mb] [RealMedia; streaming; 34kps] [RealMedia; streaming; 200kps] ESO Video Clip 02/00 shows sequences from the Control Room at the Paranal Observatory, recorded with a fixed TV-camera on January 27 at 03:00 UT, soon after the moment of "First Light" with the third 8.2-m VLT Unit Telescope ( MELIPAL ). The video sequences were transmitted via ESO's dedicated satellite communication link to the Headquarters in Garching for production of the Clip. It begins with a statement by the Manager of the VLT Project, Dr. Massimo Tarenghi , as exposures of the Crab Nebula are obtained with the telescope and the raw frames are successively displayed on the monitor screen. In a following sequence, ESO's Director General, Dr. Catherine Cesarsky , briefly relates the moment of "First Light" for MELIPAL , as she experienced it at the telescope controls. ESO Press Photo 04c/00 ESO Press Photo 04c/00 [Preview; JPEG: 400 x 300; 44k] [Full size; JPEG: 1600 x 1200; 241k] The computer screen with the image of a bright star, as recorded by the Guide Probe in the early evening of January 26; see also PR Photo 04a/00. This image was used for the initial adjustments by means of the active optics system. (Digital Photo). 
ESO Press Photo 04d/00 ESO Press Photo 04d/00 [Preview; JPEG: 400 x 314; 49k] [Full size; JPEG: 1528 x 1200; 189k] ESO staff at the moment of "First Light" for MELIPAL in the evening of January 26. The photo was made in the wooden hut on the telescope observing floor from where the telescope was controlled during the first hours. (Digital Photo). ESO PR Photos may be reproduced, if credit is given to the European Southern Observatory. The ESO PR Video Clips service to visitors to the ESO website provides "animated" illustrations of the ongoing work and events at the European Southern Observatory. The most recent clip was: ESO PR Video Clip 01/00 with aerial sequences from Paranal (12 January 2000). Information is also available on the web about other ESO videos.
Lossless compression of VLSI layout image data.
Dai, Vito; Zakhor, Avideh
2006-09-01
We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.
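Combinatorial (enumerative) coding can be sketched in a few lines: a binary block is represented by its number of ones plus its lexicographic rank among all blocks with that count, which needs only about log2 C(n, k) bits. This is a generic illustration of the idea, not the C4 coder itself.

```python
from math import comb

def combinatorial_code(bits):
    """Encode a binary block as (number of ones, lexicographic rank among all
    blocks of the same length with that many ones)."""
    ones_left = sum(bits)
    count, rank = ones_left, 0
    for i, b in enumerate(bits):
        if b:
            rank += comb(len(bits) - i - 1, ones_left)   # blocks that place a 0 here instead
            ones_left -= 1
    return count, rank

block = [0, 1, 1, 0, 0, 0, 1, 0]
count, rank = combinatorial_code(block)      # rank fits in about log2(C(8, 3)) = 5.8 bits
```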
Cloud solution for histopathological image analysis using region of interest based compression.
Kanakatte, Aparna; Subramanya, Rakshith; Delampady, Ashik; Nayak, Rajarama; Purushothaman, Balamuralidhar; Gubbi, Jayavardhana
2017-07-01
Recent technological gains have led to the adoption of innovative cloud based solutions in medical imaging field. Once the medical image is acquired, it can be viewed, modified, annotated and shared on many devices. This advancement is mainly due to the introduction of Cloud computing in medical domain. Tissue pathology images are complex and are normally collected at different focal lengths using a microscope. The single whole slide image contains many multi resolution images stored in a pyramidal structure with the highest resolution image at the base and the smallest thumbnail image at the top of the pyramid. Highest resolution image will be used for tissue pathology diagnosis and analysis. Transferring and storing such huge images is a big challenge. Compression is a very useful and effective technique to reduce the size of these images. As pathology images are used for diagnosis, no information can be lost during compression (lossless compression). A novel method of extracting the tissue region and applying lossless compression on this region and lossy compression on the empty regions has been proposed in this paper. The resulting compression ratio along with lossless compression on tissue region is in acceptable range allowing efficient storage and transmission to and from the Cloud.
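A rough sketch of the region-of-interest idea, assuming an 8-bit image and a precomputed tissue mask: tissue pixels are kept bit-exact (deflated with zlib here as a stand-in for a dedicated lossless codec), while the background is coarsely quantized before compression. All names and the quantization step are illustrative assumptions.

```python
import numpy as np
import zlib

def roi_compress(image, tissue_mask, background_step=16):
    """Keep tissue pixels bit-exact and coarsely quantize the background before
    deflating both streams (zlib stands in for a dedicated lossless codec)."""
    tissue = image[tissue_mask]                              # exact values inside the ROI
    background = (image // background_step) * background_step
    background[tissue_mask] = 0                              # ROI is carried by the lossless stream
    return {
        "mask": zlib.compress(np.packbits(tissue_mask).tobytes()),
        "tissue": zlib.compress(tissue.tobytes()),
        "background": zlib.compress(background.tobytes()),
    }
```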
Compression of regions in the global advanced very high resolution radiometer 1-km data set
NASA Technical Reports Server (NTRS)
Kess, Barbara L.; Steinwand, Daniel R.; Reichenbach, Stephen E.
1994-01-01
The global advanced very high resolution radiometer (AVHRR) 1-km data set is a 10-band image produced at USGS' EROS Data Center for the study of the world's land surfaces. The image contains masked regions for non-land areas which are identical in each band but vary between data sets. They comprise over 75 percent of this 9.7 gigabyte image. The mask is compressed once and stored separately from the land data which is compressed for each of the 10 bands. The mask is stored in a hierarchical format for multi-resolution decompression of geographic subwindows of the image. The land for each band is compressed by modifying a method that ignores fill values. This multi-spectral region compression efficiently compresses the region data and precludes fill values from interfering with land compression statistics. Results show that the masked regions in a one-byte test image (6.5 Gigabytes) compress to 0.2 percent of the 557,756,146 bytes they occupy in the original image, resulting in a compression ratio of 89.9 percent for the entire image.
NASA Technical Reports Server (NTRS)
Tilton, James C.; Ramapriyan, H. K.
1989-01-01
A case study is presented where an image segmentation based compression technique is applied to LANDSAT Thematic Mapper (TM) and Nimbus-7 Coastal Zone Color Scanner (CZCS) data. The compression technique, called Spatially Constrained Clustering (SCC), can be regarded as an adaptive vector quantization approach. SCC can be applied to either single or multiple spectral bands of image data. The segmented image resulting from SCC is encoded in small rectangular blocks, with the codebook varying from block to block. The lossless compression potential (LCP) of sample TM and CZCS images is evaluated. For the TM test image, the LCP is 2.79. For the CZCS test image the LCP is 1.89, although when only a cloud-free section of the image is considered the LCP increases to 3.48. Examples of compressed images are shown at several compression ratios ranging from 4 to 15. In the case of TM data, the compressed data are classified using the Bayes' classifier. The results show an improvement in the similarity between the classification results and ground truth when compressed data are used, thus showing that compression is, in fact, a useful first step in the analysis.
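The block-adaptive vector-quantization flavour of such a scheme can be sketched as follows (single band, square blocks, k-means clustering via SciPy); the block size and codebook size are arbitrary choices, and the spatial-constraint logic of SCC itself is not reproduced.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def block_vq(image, block=32, k=8):
    """Replace the pixels of each block by their nearest cluster centroid, so every
    block carries its own small codebook (single-band, adaptive-VQ sketch)."""
    out = image.astype(float).copy()
    for r in range(0, image.shape[0] - block + 1, block):
        for c in range(0, image.shape[1] - block + 1, block):
            patch = out[r:r + block, c:c + block]
            codebook, labels = kmeans2(patch.reshape(-1, 1), k, minit="points")
            out[r:r + block, c:c + block] = codebook[labels].reshape(block, block)
    return out
```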
Morgan, Karen L.M.; Westphal, Karen A.
2014-01-01
The U.S. Geological Survey (USGS) conducts baseline and storm response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms. On July 13, 2013, the USGS conducted an oblique aerial photographic survey from Breton Island, Louisiana, to the Alabama-Florida border, aboard a Cessna 172 flying at an altitude of 500 feet (ft) and approximately 1,000 ft offshore. This mission was flown to collect baseline data for assessing incremental changes since the last survey, and the data can be used in the assessment of future coastal change. The images provided here are Joint Photographic Experts Group (JPEG) images. ExifTool was used to add the following to the header of each photo: time of collection, Global Positioning System (GPS) latitude, GPS longitude, keywords, credit, artist (photographer), caption, copyright, and contact information. The photograph locations are an estimate of the position of the aircraft and do not indicate the location of any feature in the images (see the Navigation Data page). These photographs document the configuration of the barrier islands and other coastal features at the time of the survey. Pages containing thumbnail images of the photographs, referred to as contact sheets, were created in 5-minute segments of flight time. These segments can be found on the Photos and Maps page. Photographs can be opened directly with any JPEG-compatible image viewer by clicking on a thumbnail on the contact sheet. Table 1 provides detailed information about the GPS location, name, date, and time of each of the 1242 photographs taken, along with links to each photograph. The photography is organized into segments, also referred to as contact sheets, each representing approximately 5 minutes of flight time. (Also see the Photos and Maps page). In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML files were created using the photographic navigation files.
Morgan, Karen L.M.; Westphal, Karen A.
2014-01-01
The U.S. Geological Survey (USGS) conducts baseline and storm response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms. On August 8, 2012, the USGS conducted an oblique aerial photographic survey from Dauphin Island, Alabama, to Breton Island, Louisiana, aboard a Cessna 172 at an altitude of 500 feet (ft) and approximately 1,000 ft offshore. This mission was flown to collect baseline data for assessing incremental changes since the last survey, and the data can be used in the assessment of future coastal change. The images provided here are Joint Photographic Experts Group (JPEG) images. ExifTool was used to add the following to the header of each photo: time of collection, Global Positioning System (GPS) latitude, GPS longitude, keywords, credit, artist (photographer), caption, copyright, and contact information. The photograph locations are an estimate of the position of the aircraft and do not indicate the location of any feature in the images (see the Navigation Data page). These photographs document the configuration of the barrier islands and other coastal features at the time of the survey. Pages containing thumbnail images of the photographs, referred to as contact sheets, were created in 5-minute segments of flight time. These segments can be found on the Photos and Maps page. Photographs can be opened directly with any JPEG-compatible image viewer by clicking on a thumbnail on the contact sheet. Table 1 provides detailed information about the GPS location, name, date, and time of each of the 1241 photographs taken, along with links to each photograph. The photography is organized into segments, also referred to as contact sheets, each representing approximately 5 minutes of flight time. (Also see the Photos and Maps page). In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML files were created using the photographic navigation files.
Comparison of two SVD-based color image compression schemes.
Li, Ying; Wei, Musheng; Zhang, Fengxia; Zhao, Jianli
2017-01-01
Color image compression is a commonly used process to represent image data with as few bits as possible, removing redundancy in the data while maintaining an appropriate level of quality for the user. Color image compression algorithms based on quaternions have become very common in recent years. In this paper, we propose a color image compression scheme based on the real SVD, named the real compression scheme. First, we form a new real rectangular matrix C according to the red, green and blue components of the original color image and perform the real SVD on C. Then we select several of the largest singular values and the corresponding vectors in the left and right unitary matrices to compress the color image. We compare the real compression scheme with the quaternion compression scheme obtained by performing quaternion SVD using the real structure-preserving algorithm. We compare the two schemes in terms of operation amount, assignment number, operation speed, PSNR and CR. The experimental results show that with the same numbers of selected singular values, the real compression scheme offers higher CR and much less operation time, but a slightly smaller PSNR than the quaternion compression scheme. When the two schemes have the same CR, the real compression scheme shows more prominent advantages in both operation time and PSNR.
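A minimal NumPy sketch of the real-SVD scheme under the assumption that C is formed by stacking the R, G and B channels side by side (the abstract does not fix the arrangement); keeping k singular triplets costs roughly k(h + 3w + 1) values instead of 3hw.

```python
import numpy as np

def real_svd_compress(rgb, k=30):
    """Stack R, G and B side by side into a real matrix, keep k singular triplets,
    and rebuild the colour image (illustrative arrangement of C)."""
    h, w, _ = rgb.shape
    C = np.hstack([rgb[..., 0], rgb[..., 1], rgb[..., 2]]).astype(float)   # h x 3w real matrix
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    C_k = (U[:, :k] * s[:k]) @ Vt[:k, :]
    # storing U[:, :k], s[:k] and Vt[:k] costs k * (h + 3w + 1) values instead of 3*h*w
    return np.stack([C_k[:, :w], C_k[:, w:2 * w], C_k[:, 2 * w:]], axis=-1)
```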
Compression of the Global Land 1-km AVHRR dataset
Kess, B. L.; Steinwand, D.R.; Reichenbach, S.E.
1996-01-01
Large datasets, such as the Global Land 1-km Advanced Very High Resolution Radiometer (AVHRR) Data Set (Eidenshink and Faundeen 1994), require compression methods that provide efficient storage and quick access to portions of the data. A method of lossless compression is described that provides multiresolution decompression within geographic subwindows of multi-spectral, global, 1-km, AVHRR images. The compression algorithm segments each image into blocks and compresses each block in a hierarchical format. Users can access the data by specifying either a geographic subwindow or the whole image and a resolution (1, 2, 4, 8, or 16 km). The Global Land 1-km AVHRR data are presented in the Interrupted Goode's Homolosine map projection. These images contain masked regions for non-land areas which comprise 80 per cent of the image. A quadtree algorithm is used to compress the masked regions. The compressed region data are stored separately from the compressed land data. Results show that the masked regions compress to 0.143 per cent of the bytes they occupy in the test image and the land areas are compressed to 33.2 per cent of their original size. The entire image is compressed hierarchically to 6.72 per cent of the original image size, reducing the data from 9.05 gigabytes to 623 megabytes. These results are compared to the first order entropy of the residual image produced with lossless Joint Photographic Experts Group predictors. Compression results are also given for Lempel-Ziv-Welch (LZW) and LZ77, the algorithms used by UNIX compress and GZIP respectively. In addition to providing multiresolution decompression of geographic subwindows of the data, the hierarchical approach and the use of quadtrees for storing the masked regions gives a marked improvement over these popular methods.
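The quadtree coding of the land/non-land mask can be illustrated in a few lines. The sketch below is a generic recursive quadtree encoder and decoder for a binary mask, assuming a square image whose side is a power of two; it shows the idea only and is not the implementation used for the AVHRR dataset.

```python
import numpy as np

def quadtree_encode(mask):
    """Encode a binary mask (e.g., land/non-land) as a quadtree.

    A block that is all 0s or all 1s becomes a single leaf; otherwise it is
    split into four quadrants recursively. `mask` must be square with a
    power-of-two side length.
    """
    if mask.min() == mask.max():              # uniform block -> leaf value
        return int(mask.flat[0])
    h2, w2 = mask.shape[0] // 2, mask.shape[1] // 2
    return [quadtree_encode(mask[:h2, :w2]),  # NW
            quadtree_encode(mask[:h2, w2:]),  # NE
            quadtree_encode(mask[h2:, :w2]),  # SW
            quadtree_encode(mask[h2:, w2:])]  # SE

def quadtree_decode(node, size):
    """Inverse of quadtree_encode for a size x size block."""
    if isinstance(node, int):
        return np.full((size, size), node, dtype=np.uint8)
    h2 = size // 2
    top = np.hstack([quadtree_decode(node[0], h2), quadtree_decode(node[1], h2)])
    bot = np.hstack([quadtree_decode(node[2], h2), quadtree_decode(node[3], h2)])
    return np.vstack([top, bot])
```

Large uniform regions (the 80 per cent of masked pixels mentioned above) collapse into a handful of leaves, which is why the masked areas compress so much better than the land data.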
Building a Steganography Program Including How to Load, Process, and Save JPEG and PNG Files in Java
ERIC Educational Resources Information Center
Courtney, Mary F.; Stix, Allen
2006-01-01
Instructors teaching beginning programming classes are often interested in exercises that involve processing photographs (i.e., files stored as .jpeg). They may wish to offer activities such as color inversion, the color manipulation effects achieved with pixel thresholding, or steganography, all of which Stevenson et al. [4] assert are sought by…
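For readers who want a feel for the steganography exercise mentioned above, here is a compact sketch of least-significant-bit (LSB) embedding. The article works in Java; this illustrative, classroom-style version uses Python with Pillow and NumPy, and targets PNG because JPEG's lossy stage would destroy the embedded bits.

```python
from PIL import Image
import numpy as np

def lsb_embed(cover_png, message, out_png):
    """Hide a text message in the least-significant bits of a PNG image."""
    data = np.array(Image.open(cover_png).convert("RGB"), dtype=np.uint8)
    payload = message.encode("utf-8") + b"\x00"          # NUL terminator marks the end
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = data.reshape(-1)
    if bits.size > flat.size:
        raise ValueError("message too long for this cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite the LSBs
    Image.fromarray(flat.reshape(data.shape)).save(out_png)

def lsb_extract(stego_png):
    """Recover a NUL-terminated message embedded by lsb_embed."""
    flat = np.array(Image.open(stego_png).convert("RGB"), dtype=np.uint8).reshape(-1)
    raw = np.packbits(flat & 1).tobytes()
    return raw.split(b"\x00", 1)[0].decode("utf-8")
```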
NASA Astrophysics Data System (ADS)
Wan, Tat C.; Kabuka, Mansur R.
1994-05-01
With the tremendous growth in imaging applications and the development of filmless radiology, compression techniques that can achieve high compression ratios with user-specified distortion rates become necessary. Boundaries and edges in the tissue structures are vital for the detection of lesions and tumors, which in turn requires the preservation of edges in the image. The proposed edge preserving image compressor (EPIC) combines lossless compression of edges with neural network compression techniques based on dynamic associative neural networks (DANN), to provide high compression ratios with user-specified distortion rates in an adaptive compression system well-suited to parallel implementations. Improvements to DANN-based training through the use of a variance classifier for controlling a bank of neural networks speed convergence and allow the use of higher compression ratios for 'simple' patterns. The adaptation and generalization capabilities inherent in EPIC also facilitate progressive transmission of images through varying the number of quantization levels used to represent compressed patterns. Average compression ratios of 7.51:1 with an average mean squared error of 0.0147 were achieved.
Montironi, R; Thompson, D; Scarpelli, M; Bartels, H G; Hamilton, P W; Da Silva, V D; Sakr, W A; Weyn, B; Van Daele, A; Bartels, P H
2002-01-01
Objective: To describe practical experiences in the sharing of very large digital data bases of histopathological imagery via the Internet, by investigators working in Europe, North America, and South America. Materials: Experiences derived from medium power (sampling density 2.4 pixels/μm) and high power (6 pixels/μm) imagery of prostatic tissues, skin shave biopsies, breast lesions, endometrial sections, and colonic lesions. Most of the data included in this paper were from prostate. In particular, 1168 histological images of normal prostate, high grade prostatic intraepithelial neoplasia (PIN), and prostate cancer (PCa) were recorded, archived in an image format developed at the Optical Sciences Center (OSC), University of Arizona, and transmitted to Ancona, Italy, as JPEG (joint photographic experts group) files. Images were downloaded for review using the Internet application FTP (file transfer protocol). The images were then sent from Ancona to other laboratories for additional histopathological review and quantitative analyses. They were viewed using Adobe Photoshop, Paint Shop Pro, and Imaging for Windows. For karyometric analysis full resolution imagery was used, whereas histometric analyses were also carried out on JPEG imagery. Results: The three applications of the telecommunication system were remote histopathological assessment, remote data acquisition, and selection of material. Typical data volumes for each project ranged from 120 megabytes to one gigabyte, and transmission times were usually less than one hour. There were only negligible transmission errors and no problems with communication efficiency, although real-time communication was an exception because of the time zone differences. As far as the remote histopathological assessment of the prostate was concerned, agreement between the pathologist's electronic diagnosis and the diagnostic label applied to the images by the recording scientist was present in 96.6% of instances. When these images were forwarded to two pathologists, the level of concordance with the reviewing pathologist who originally downloaded the files from Tucson was as high as 97.2% and 98.0%. Initial results of studies made by researchers belonging to our group but located in other laboratories showed the feasibility of performing quantitative analyses on the same images. Conclusions: These experiences show that diagnostic teleconsultation and quantitative image analyses via the Internet are not only feasible, but practical, and allow close collaboration between researchers widely separated by geographical distance and analytical resources. PMID:12037030
VLT Images the Horsehead Nebula
NASA Astrophysics Data System (ADS)
2002-01-01
Summary A new, high-resolution colour image of one of the most photographed celestial objects, the famous "Horsehead Nebula" (IC 434) in Orion, has been produced from data stored in the VLT Science Archive. The original CCD frames were obtained in February 2000 with the FORS2 multi-mode instrument at the 8.2-m VLT KUEYEN telescope on Paranal (Chile). The comparatively large field-of-view of the FORS2 camera is optimally suited to show this extended object and its immediate surroundings in impressive detail. PR Photo 02a/02 : View of the full field around the Horsehead Nebula. PR Photo 02b/02 : Enlargement of a smaller area around the Horse's "mouth" A spectacular object Caption : PR Photo 02a/02 is a reproduction of a composite colour image of the Horsehead Nebula and its immediate surroundings. It is based on three exposures in the visual part of the spectrum with the FORS2 multi-mode instrument at the 8.2-m KUEYEN telescope at Paranal. PR Photo 02b/02 is an enlargement of a smaller area. Technical information about these photos is available below. PR Photo 02a/02 shows the famous "Horsehead Nebula" , which is situated in the Orion molecular cloud complex. Its official name is Barnard 33 and it is a dust protrusion in the southern region of the dense dust cloud Lynds 1630 , on the edge of the HII region IC 434 . The distance to the region is about 1400 light-years (430 pc). This beautiful colour image was produced from three images obtained with the multi-mode FORS2 instrument at the second VLT Unit Telescope ( KUEYEN ), some months after it had "First Light", cf. PR 17/99. The image files were extracted from the VLT Science Archive Facility and the photo constitutes a fine example of the subsequent use of such valuable data. Details about how the photo was made and some weblinks to other pictures are available below. The comparatively large field-of-view of the FORS2 camera (nearly 7 x 7 arcmin 2 ) and the detector resolution (0.2 arcsec/pixel) make this instrument optimally suited for imaging of this extended object and its immediate surroundings. There is obviously a wealth of detail, and scientific information can be derived from the colours shown in this photo. Three predominant colours are seen in the image: red from the hydrogen (H-alpha) emission from the HII region; brown for the foreground obscuring dust; and blue-green for scattered starlight. The blue-green regions of the Horsehead Nebula correspond to regions not shadowed from the light from the stars in the H II region to the top of the picture and scatter stellar radiation towards the observer; these are thus `mountains' of dust . The Horse's `mane' is an area in which there is less dust along the line-of-sight and the background (H-alpha) emission from ionized hydrogen atoms can be seen through the foreground dust. A chaotic area At the high resolution of this image the Horsehead appears very chaotic with many wisps and filaments and diffuse dust . At the top of the figure there is a bright rim separating the dust from the HII region. This is an `ionization front' where the ionizing photons from the HII region are moving into the cloud, destroying the dust and the molecules and heating and ionizing the gas.
Dust and molecules can exist in cold regions of interstellar space which are shielded from starlight by very large layers of gas and dust. Astronomers refer to elongated structures, such as the Horsehead, as `elephant trunks' (never mind the zoological confusion!) which are common on the boundaries of HII regions. They can also be seen elsewhere in Orion - another well-known example is the pillars of M16 (the "Eagle Nebula") made famous by the fine HST image - a new infrared view by VLT and ISAAC of this area was published last month, cf. PR 25/01. Such structures are only temporary as they are being constantly eroded by the expanding region of ionized gas and are destroyed on timescales of typically a few thousand years. The Horsehead as we see it today will therefore not last forever and minute changes will become observable as the time passes. The surroundings To the east of the Horsehead (at the bottom of this image) there is ample evidence for star formation in the Lynds 1630 dark cloud . Here, the reflection nebula NGC 2023 surrounds the hot B-type star HD 37903 and some Herbig Haro objects are found which represent high-speed gas outflows from very young stars with masses of around a solar mass. The HII region to the west (top of picture) is ionized by the strong radiation from the bright star Sigma Orionis , located just below the southernmost star in Orion's Belt. The chain of dust and molecular clouds are part of the Orion A and B regions (also known as Orion's `sword' ). Other images of the Horsehead Nebula The Horsehead Nebula is a favourite object for amateur astrophotographers and large numbers of images are available on the WWW. Due to its significant extension and the limited field-of-view of some professional telescopes, fewer photographs are available from today's front-line facilities, except from specialized wide-field instruments like Schmidt telescopes, etc. The links below point to a number of prominent photos obtained elsewhere and some contain further useful links to other sites with more information about this splendid sky area. "Astronomy Picture of the Day" : http://antwrp.gsfc.nasa.gov/apod/ap971025.html Hubble Heritage image : http://hubble.stsci.edu/news_.and._views/pr.cgi?2001%2B12 INT Wide-Field image : http://www.ing.iac.es/PR/science/horsehead.htm NOT image : http://www.not.iac.es/new/general/photos/astronomical/ NOAO Wide-Field image : http://www.noao.edu/outreach/press/pr01/ir0101.html Bill Arnett's site : http://www.seds.org/billa/twn/b33x.html Technical information about the photos PR Photo 02a/02 was produced from three images, obtained on February 1, 2000, with the FORS2 multi-mode instrument at the 8.2-m KUEYEN Unit Telescope and extracted from the VLT Science Archive Facility. The frames were obtained in the B-band (600 sec exposure; wavelength 429 nm; FWHM 88 nm; here rendered as blue), V-band (300 sec; 554 nm; 112 nm; green) and R-band (120 sec; 655 nm; 165 nm; red) The original pixel size is 0.2 arcsec. The photo shows the full field recorded in all three colours, approximately 6.5 x 6.7 arcmin 2. The seeing was about 0.75 arcsec. PR Photo 02b/02 is an enlargement of a smaller area, measuring 3.8 x 4.1 arcmin 2. North is to the left and east is down (the usual orientation for showing this object). The frames were recorded with a TK2048 SITe CCD and the ESO-FIERA Controller, built by the Optical Detector Team (ODT). The images were prepared by Cyril Cavadore (ESO-ODT) , by means of Prism software. 
ESO PR Photos 02a-b/02 may be reproduced, if credit is given to the European Southern Observatory (ESO).
NASA/IPAC Infrared Archive's General Image Cutouts Service
NASA Astrophysics Data System (ADS)
Alexov, A.; Good, J. C.
2006-07-01
The NASA/IPAC Infrared Archive (IRSA) "Cutouts" Service (http://irsa.ipac.caltech.edu/applications/Cutouts) is a general tool for creating small "cutout" FITS images and JPEGs from collections of data archived at IRSA. This service is a companion to IRSA's Atlas tool (http://irsa.ipac.caltech.edu/applications/Atlas/), which currently serves over 25 different data collections of various sizes and complexity and returns entire images for a user-defined region of the sky. The Cutouts Service sits on top of Atlas and extends the Atlas functionality by generating subimages at locations and sizes requested by the user from images already identified by Atlas. These results can be downloaded individually, in batch mode (using the program wget), or as a tar file. Cutouts re-uses IRSA's software architecture along with the publicly available Montage mosaicking tools. The advantages and disadvantages of this approach to generic cutout serving are discussed.
Improved photo response non-uniformity (PRNU) based source camera identification.
Cooper, Alan J
2013-03-10
The concept of using Photo Response Non-Uniformity (PRNU) as a reliable forensic tool to match an image to a source camera is now well established. Traditionally, the PRNU estimation methodologies have centred on a wavelet based de-noising approach. Resultant filtering artefacts in combination with image and JPEG contamination act to reduce the quality of PRNU estimation. In this paper, it is argued that the application calls for a simplified filtering strategy which at its base level may be realised using a combination of adaptive and median filtering applied in the spatial domain. The proposed filtering method is interlinked with a further two stage enhancement strategy where only pixels in the image having high probabilities of significant PRNU bias are retained. This methodology significantly improves the discrimination between matching and non-matching image data sets over that of the common wavelet filtering approach. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
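The spatial-domain filtering idea can be sketched briefly. The example below uses a plain median filter to obtain a noise residual, averages residuals from several images to form a camera fingerprint, and scores a query image by normalized correlation; it is a simplified illustration of PRNU matching in general, not the paper's adaptive two-stage enhancement pipeline.

```python
import numpy as np
from scipy.ndimage import median_filter

def prnu_residual(img, size=3):
    """Noise residual of one grayscale image: scene content is suppressed with a
    simple spatial median filter and what remains is treated as a PRNU estimate."""
    img = img.astype(np.float64)
    return img - median_filter(img, size=size)

def estimate_prnu(images):
    """Average the residuals of several images from the same camera."""
    return np.mean([prnu_residual(im) for im in images], axis=0)

def match_score(query_img, camera_prnu):
    """Normalized correlation between a query residual and a camera fingerprint;
    a high score suggests the query image came from that camera."""
    r = prnu_residual(query_img).ravel()
    f = camera_prnu.ravel()
    r = (r - r.mean()) / (r.std() + 1e-12)
    f = (f - f.mean()) / (f.std() + 1e-12)
    return float(np.mean(r * f))
```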
OXYGEN-RICH SUPERNOVA REMNANT IN THE LARGE MAGELLANIC CLOUD
NASA Technical Reports Server (NTRS)
2002-01-01
This is a NASA Hubble Space Telescope image of the tattered debris of a star that exploded 3,000 years ago as a supernova. This supernova remnant, called N132D, lies 169,000 light-years away in the satellite galaxy, the Large Magellanic Cloud. A Hubble Wide Field Planetary Camera 2 image of the inner regions of the supernova remnant shows the complex collisions that take place as fast moving ejecta slam into cool, dense interstellar clouds. This level of detail in the expanding filaments could only be seen previously in much closer supernova remnants. Now, Hubble's capabilities extend the detailed study of supernovae out to the distance of a neighboring galaxy. Material thrown out from the interior of the exploded star at velocities of more than four million miles per hour (2,000 kilometers per second) plows into neighboring clouds to create luminescent shock fronts. The blue-green filaments in the image correspond to oxygen-rich gas ejected from the core of the star. The oxygen-rich filaments glow as they pass through a network of shock fronts reflected off dense interstellar clouds that surrounded the exploded star. These dense clouds, which appear as reddish filaments, also glow as the shock wave from the supernova crushes and heats the clouds. Supernova remnants provide a rare opportunity to observe directly the interiors of stars far more massive than our Sun. The precursor star to this remnant, which was located slightly below and left of center in the image, is estimated to have been 25 times the mass of our Sun. These stars 'cook' heavier elements through nuclear fusion, including oxygen, nitrogen, carbon, iron etc., and the titanic supernova explosions scatter this material back into space where it is used to create new generations of stars. This is the mechanism by which the gas and dust that formed our solar system became enriched with the elements that sustain life on this planet. Hubble spectroscopic observations will be used to determine the exact chemical composition of this nuclear- processed material, and thereby test theories of stellar evolution. The image shows a region of the remnant 50 light-years across. The supernova explosion should have been visible from Earth's southern hemisphere around 1,000 B.C., but there are no known historical records that chronicle what would have appeared as a 'new star' in the heavens. This 'true color' picture was made by superposing images taken on 9-10 August 1994 in three of the strongest optical emission lines: singly ionized sulfur (red), doubly ionized oxygen (green), and singly ionized oxygen (blue). Photo credit: Jon A. Morse (STScI) and NASA Investigating team: William P. Blair (PI; JHU), Michael A. Dopita (MSSSO), Robert P. Kirshner (Harvard), Knox S. Long (STScI), Jon A. Morse (STScI), John C. Raymond (SAO), Ralph S. Sutherland (UC-Boulder), and P. Frank Winkler (Middlebury). Image files in GIF and JPEG format may be accessed via anonymous ftp from oposite.stsci.edu in /pubinfo: GIF: /pubinfo/GIF/N132D.GIF JPEG: /pubinfo/JPEG/N132D.jpg The same images are available via World Wide Web from links in URL http://www.stsci.edu/public.html.
NASA Astrophysics Data System (ADS)
Li, Xianye; Meng, Xiangfeng; Yang, Xiulun; Wang, Yurong; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi
2018-03-01
A multiple-image encryption method via lifting wavelet transform (LWT) and XOR operation is proposed, based on a row-scanning compressive ghost imaging scheme. In the encryption process, a scrambling operation is applied to the sparse images transformed by LWT, the XOR operation is then performed on the scrambled images, and the resulting XOR images are compressed by row-scanning compressive ghost imaging, through which the ciphertext images can be detected by bucket detector arrays. During decryption, a participant who possesses the correct key-group can successfully reconstruct the corresponding plaintext image by measurement key regeneration, compression algorithm reconstruction, XOR operation, sparse image recovery, and inverse LWT (iLWT). Theoretical analysis and numerical simulations validate the feasibility of the proposed method.
Image data compression having minimum perceptual error
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1995-01-01
A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques and an error-pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
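The quantization step the patent builds on is the standard block-DCT one. The sketch below shows generic 8x8 DCT quantization against a caller-supplied quantization matrix Q; deriving Q adaptively from luminance masking, contrast masking, and error pooling is the patented part and is not reproduced here.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    """2-D type-II DCT with orthonormal scaling, applied to an 8x8 block."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(coeffs):
    """Inverse 2-D DCT."""
    return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

def quantize_block(block, Q):
    """Quantize each DCT coefficient by the matching entry of a quantization
    matrix Q, as in baseline JPEG. A perceptually tuned codec would derive Q
    from visual-masking models; here Q is simply supplied by the caller."""
    return np.round(dct2(block.astype(np.float64) - 128.0) / Q)

def dequantize_block(q_coeffs, Q):
    """Rebuild a pixel block from quantized coefficients."""
    return np.clip(idct2(q_coeffs * Q) + 128.0, 0, 255)

# Usage: q = quantize_block(block, Q); rec = dequantize_block(q, Q)
```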
High efficient optical remote sensing images acquisition for nano-satellite-framework
NASA Astrophysics Data System (ADS)
Li, Feng; Xin, Lei; Liu, Yang; Fu, Jie; Liu, Yuhong; Guo, Yi
2017-09-01
It is more difficult and challenging to implement Nano-satellite (NanoSat) based optical Earth observation missions than conventional satellites because of the limitations on volume, weight, and power consumption. In general, an image compression unit is a necessary onboard module to save data transmission bandwidth and disk space. The image compression unit removes redundant information from the captured images. In this paper, a new image acquisition framework is proposed for NanoSat-based optical Earth observation applications. The entire image acquisition and compression process can be integrated in the photo detector array chip; that is, the output data of the chip are already compressed. An extra image compression unit is therefore no longer needed, and the power, volume, and weight consumed by conventional onboard image compression units can be largely saved. The advantages of the proposed framework are: image acquisition and image compression are combined into a single step; it can be easily built in a CMOS architecture; a quick view can be provided without reconstruction; and, for a given compression ratio, the reconstructed image quality is much better than that of CS-based methods. The framework holds promise to be widely used in the future.
Design and evaluation of web-based image transmission and display with different protocols
NASA Astrophysics Data System (ADS)
Tan, Bin; Chen, Kuangyi; Zheng, Xichuan; Zhang, Jianguo
2011-03-01
There are many Web-based image access technologies used in the medical imaging area, such as component-based (ActiveX control) thick-client Web display, zero-footprint thin-client Web viewers (also called server-side processing Web viewers), Flash Rich Internet Application (RIA), or HTML5-based Web display. Different Web display methods perform differently in different network environments. In this presentation, we give an evaluation of two Web-based image display systems we developed. The first one is used for thin-client Web display. It works between a PACS Web server with a WADO interface and a thin client. The PACS Web server provides JPEG format images to HTML pages. The second one is for thick-client Web display. It works between a PACS Web server with a WADO interface and a thick client running in browsers containing an ActiveX control, a Flash RIA program, or HTML5 scripts. The PACS Web server provides native DICOM format images or a JPIP stream for these clients.
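To make the thin-client data flow concrete, the sketch below requests a single image rendered as JPEG from a WADO-URI style PACS Web server. The endpoint URL and UIDs are placeholders, and real deployments typically require authentication and additional parameters.

```python
import requests

def fetch_wado_jpeg(base_url, study_uid, series_uid, object_uid, out_path):
    """Fetch one image as JPEG from a PACS Web (WADO-URI) endpoint.

    A minimal sketch of the kind of request a thin Web client might issue;
    the base URL and UIDs below are placeholders, not a real server.
    """
    params = {
        "requestType": "WADO",
        "studyUID": study_uid,
        "seriesUID": series_uid,
        "objectUID": object_uid,
        "contentType": "image/jpeg",
    }
    resp = requests.get(base_url, params=params, timeout=30)
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)

# fetch_wado_jpeg("https://pacs.example.org/wado", "1.2.3", "1.2.3.4", "1.2.3.4.5", "slice.jpg")
```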
Learning random networks for compression of still and moving images
NASA Technical Reports Server (NTRS)
Gelenbe, Erol; Sungur, Mert; Cramer, Christopher
1994-01-01
Image compression for both still and moving images is an extremely important area of investigation, with numerous applications to videoconferencing, interactive education, home entertainment, and potential applications to earth observations, medical imaging, digital libraries, and many other areas. We describe work on a neural network methodology to compress/decompress still and moving images. We use the 'point-process' type neural network model which is closer to biophysical reality than standard models, and yet is mathematically much more tractable. We currently achieve compression ratios of the order of 120:1 for moving grey-level images, based on a combination of motion detection and compression. The observed signal-to-noise ratio varies from values above 25 to more than 35. The method is computationally fast so that compression and decompression can be carried out in real-time. It uses the adaptive capabilities of a set of neural networks so as to select varying compression ratios in real-time as a function of quality achieved. It also uses a motion detector which will avoid retransmitting portions of the image which have varied little from the previous frame. Further improvements can be achieved by using on-line learning during compression, and by appropriate compensation of nonlinearities in the compression/decompression scheme. We expect to go well beyond the 250:1 compression level for color images with good quality levels.
An image assessment study of image acceptability of the Galileo low gain antenna mission
NASA Technical Reports Server (NTRS)
Chuang, S. L.; Haines, R. F.; Grant, T.; Gold, Yaron; Cheung, Kar-Ming
1994-01-01
This paper describes a study conducted by NASA Ames Research Center (ARC) in collaboration with the Jet Propulsion Laboratory (JPL), Pasadena, California, on the image acceptability of the Galileo Low Gain Antenna mission. The primary objective of the study is to determine the impact of the Integer Cosine Transform (ICT) compression algorithm on Galilean images of atmospheric bodies, moons, asteroids, and Jupiter's rings. The approach involved fifteen volunteer subjects representing twelve institutions involved with the Galileo Solid State Imaging (SSI) experiment. Four different experiment-specific quantization tables (q-tables) and various compression step sizes (q-factors) were used to achieve different compression ratios. The study then determined the acceptability of the compressed monochromatic astronomical images as evaluated by Galileo SSI mission scientists. Fourteen different images, in seven image groups, were evaluated. Each observer viewed two versions of the same image side by side on a high-resolution monitor, each compressed using a different quantization step size. Observers were asked to select which image had the highest overall quality for supporting their visual evaluations of image content, and then rated both images on a one-to-five scale of judged usefulness. Up to four pre-selected types of images were presented, with and without noise, to each subject based upon the results of a previously administered survey of their image preferences. The results showed that: (1) acceptable compression ratios vary widely with the type of image; (2) noisy images detract greatly from image acceptability and acceptable compression ratios; and (3) atmospheric images of Jupiter seem to allow compression ratios 4 to 5 times those of some clear surface satellite images.
Compressed/reconstructed test images for CRAF/Cassini
NASA Technical Reports Server (NTRS)
Dolinar, S.; Cheung, K.-M.; Onyszchuk, I.; Pollara, F.; Arnold, S.
1991-01-01
A set of compressed, then reconstructed, test images submitted to the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini project is presented as part of its evaluation of near lossless high compression algorithms for representing image data. A total of seven test image files were provided by the project. The seven test images were compressed, then reconstructed with high quality (root mean square error of approximately one or two gray levels on an 8 bit gray scale), using discrete cosine transforms or Hadamard transforms and efficient entropy coders. The resulting compression ratios varied from about 2:1 to about 10:1, depending on the activity or randomness in the source image. This was accomplished without any special effort to optimize the quantizer or to introduce special postprocessing to filter the reconstruction errors. A more complete set of measurements, showing the relative performance of the compression algorithms over a wide range of compression ratios and reconstruction errors, shows that additional compression is possible at a small sacrifice in fidelity.
High-performance compression of astronomical images
NASA Technical Reports Server (NTRS)
White, Richard L.
1993-01-01
Astronomical images have some rather unusual characteristics that make many existing image compression techniques either ineffective or inapplicable. A typical image consists of a nearly flat background sprinkled with point sources and occasional extended sources. The images are often noisy, so that lossless compression does not work very well; furthermore, the images are usually subjected to stringent quantitative analysis, so any lossy compression method must be proven not to discard useful information, but must instead discard only the noise. Finally, the images can be extremely large. For example, the Space Telescope Science Institute has digitized photographic plates covering the entire sky, generating 1500 images each having 14000 x 14000 16-bit pixels. Several astronomical groups are now constructing cameras with mosaics of large CCD's (each 2048 x 2048 or larger); these instruments will be used in projects that generate data at a rate exceeding 100 MBytes every 5 minutes for many years. An effective technique for image compression may be based on the H-transform (Fritze et al. 1977). The method that we have developed can be used for either lossless or lossy compression. The digitized sky survey images can be compressed by at least a factor of 10 with no noticeable losses in the astrometric and photometric properties of the compressed images. The method has been designed to be computationally efficient: compression or decompression of a 512 x 512 image requires only 4 seconds on a Sun SPARCstation 1. The algorithm uses only integer arithmetic, so it is completely reversible in its lossless mode, and it could easily be implemented in hardware for space applications.
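The H-transform at the heart of this compressor is a hierarchical, Haar-like transform on 2x2 blocks. The sketch below is a generic integer version of that idea, exactly invertible as written; it illustrates the transform family (in the spirit of Fritze et al. 1977) rather than the exact normalization, quantization, and coding used by the author.

```python
import numpy as np

def h_transform(img):
    """Hierarchical sum/difference decomposition of a square image whose side
    is a power of two. Each 2x2 block is replaced by its sum and three
    differences; the transform recurses on the image of block sums.
    Integer arithmetic keeps this sketch exactly reversible."""
    img = img.astype(np.int64)
    levels = []
    while img.shape[0] > 1:
        a = img[0::2, 0::2]; b = img[0::2, 1::2]
        c = img[1::2, 0::2]; d = img[1::2, 1::2]
        s  = a + b + c + d          # low-pass (sum) band, carried to next level
        dh = a + b - c - d          # horizontal difference
        dv = a - b + c - d          # vertical difference
        dd = a - b - c + d          # diagonal difference
        levels.append((dh, dv, dd))
        img = s
    return img, levels              # final sum plus difference bands per level

def h_inverse(top, levels):
    """Exact inverse of h_transform."""
    s = top.astype(np.int64)
    for dh, dv, dd in reversed(levels):
        a = (s + dh + dv + dd) // 4
        b = (s + dh - dv - dd) // 4
        c = (s - dh + dv - dd) // 4
        d = (s - dh - dv + dd) // 4
        out = np.empty((s.shape[0] * 2, s.shape[1] * 2), dtype=np.int64)
        out[0::2, 0::2] = a; out[0::2, 1::2] = b
        out[1::2, 0::2] = c; out[1::2, 1::2] = d
        s = out
    return s
```

Because the difference bands of a smooth, noisy sky background are small and compact, quantizing them coarsely (lossy mode) or entropy-coding them exactly (lossless mode) gives the behaviour described above.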
Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao
2016-12-01
To address the low compression efficiency of lossless compression and the low image quality of general near-lossless compression, this paper proposes a novel near-lossless compression algorithm based on adaptive spatial prediction for medical sequence images intended for diagnostic use. The proposed method employs adaptive block-size-based spatial prediction to predict blocks directly in the spatial domain, and a Lossless Hadamard Transform before quantization to improve the quality of reconstructed images. The block-based prediction breaks the pixel neighborhood constraint and takes full advantage of the local spatial correlations found in medical images. The adaptive block size guarantees a more rational division of images and improved use of the local structure. The results indicate that the proposed algorithm can efficiently compress medical images and produces a better peak signal-to-noise ratio (PSNR) under the same pre-defined distortion than other near-lossless methods.
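The defining property of near-lossless coding is a hard bound on the per-pixel reconstruction error. The sketch below shows that mechanism in its simplest form, using a left-neighbour predictor and a uniform residual quantizer of step 2*delta + 1; the paper's adaptive block-based prediction and Lossless Hadamard Transform are not reproduced here.

```python
import numpy as np

def near_lossless_encode(img, delta):
    """Row-major predictive coding with a bounded-error quantizer.

    A minimal sketch of generic near-lossless coding, not the paper's scheme:
    each pixel is predicted from its already reconstructed left neighbour, and
    the prediction residual is quantized so that every reconstructed pixel
    differs from the original by at most `delta` grey levels.
    """
    img = img.astype(np.int64)
    h, w = img.shape
    recon = np.zeros_like(img)
    symbols = np.zeros_like(img)
    step = 2 * delta + 1
    for y in range(h):
        for x in range(w):
            pred = recon[y, x - 1] if x > 0 else 0
            residual = img[y, x] - pred
            q = (residual + delta) // step     # quantizer index (entropy-code these)
            symbols[y, x] = q
            recon[y, x] = pred + q * step      # guarantees |img - recon| <= delta
    return symbols, recon
```

Setting delta = 0 makes the scheme lossless; increasing delta trades the guaranteed error bound for a lower entropy of the symbol stream.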
Lossless data embedding for all image formats
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav; Du, Rui
2002-04-01
Lossless data embedding has the property that the distortion due to embedding can be completely removed from the watermarked image without accessing any side channel. This can be a very important property whenever serious concerns over the image quality and artifacts visibility arise, such as for medical images, due to legal reasons, for military images or images used as evidence in court that may be viewed after enhancement and zooming. We formulate two general methodologies for lossless embedding that can be applied to images as well as any other digital objects, including video, audio, and other structures with redundancy. We use the general principles as guidelines for designing efficient, simple, and high-capacity lossless embedding methods for three most common image format paradigms - raw, uncompressed formats (BMP), lossy or transform formats (JPEG), and palette formats (GIF, PNG). We close the paper with examples of how the concept of lossless data embedding can be used as a powerful tool to achieve a variety of non-trivial tasks, including elegant lossless authentication using fragile watermarks. Note on terminology: some authors coined the terms erasable, removable, reversible, invertible, and distortion-free for the same concept.
NASA Astrophysics Data System (ADS)
Seeram, Euclid
2006-03-01
The large volumes of digital images produced by digital imaging modalities in Radiology have provided the motivation for the development of picture archiving and communication systems (PACS) in an effort to provide an organized mechanism for digital image management. The development of more sophisticated methods of digital image acquisition (Multislice CT and Digital Mammography, for example), as well as the implementation and performance of PACS and Teleradiology systems in a health care environment, have created challenges in the area of image compression with respect to storing and transmitting digital images. Image compression can be reversible (lossless) or irreversible (lossy). While in the former there is no loss of information, the latter presents concerns since there is a loss of information. This loss of information from diagnostic medical images is of primary concern not only to radiologists, but also to patients and their physicians. In 1997, Goldberg pointed out that "there is growing evidence that lossy compression can be applied without significantly affecting the diagnostic content of images... there is growing consensus in the radiologic community that some forms of lossy compression are acceptable". The purpose of this study was to explore the opinions of expert radiologists and related professional organizations on the use of irreversible compression in routine practice. The opinions of notable radiologists in the US and Canada are varied, indicating no consensus on the use of irreversible compression in primary diagnosis; however, they are generally positive about the image storage and transmission advantages. Almost all radiologists are concerned with the litigation potential of an incorrect diagnosis based on irreversibly compressed images. The survey of several radiology professional and related organizations reveals that no professional practice standards exist for the use of irreversible compression. Currently, the only standard for image compression is stated in the ACR's Technical Standards for Teleradiology and Digital Image Management.
Clinical utility of wavelet compression for resolution-enhanced chest radiography
NASA Astrophysics Data System (ADS)
Andriole, Katherine P.; Hovanes, Michael E.; Rowberg, Alan H.
2000-05-01
This study evaluates the usefulness of wavelet compression for resolution-enhanced storage phosphor chest radiographs in the detection of subtle interstitial disease, pneumothorax and other abnormalities. A wavelet compression technique, MrSID™ (LizardTech, Inc., Seattle, WA), is implemented which compresses the images from their original 2,000 by 2,000 (2K) matrix size, and then decompresses the image data for display at optimal resolution by matching the spatial frequency characteristics of image objects using a 4,000-square matrix. The 2K-matrix computed radiography (CR) chest images are magnified to a 4K-matrix using wavelet series expansion. The magnified images are compared with the original uncompressed 2K radiographs and with two-times magnification of the original images. Preliminary results show radiologist preference for MrSID™ wavelet-based magnification over magnification of original data, and suggest that the compressed/decompressed images may provide an enhancement to the original. Data collection for clinical trials of 100 chest radiographs, including subtle interstitial abnormalities and/or subtle pneumothoraces and normal cases, is in progress. Three experienced thoracic radiologists will view images side by side on calibrated softcopy workstations under controlled viewing conditions, and rank-order preference tests will be performed. This technique combines image compression with image enhancement, and suggests that compressed/decompressed images can actually improve the originals.
Pornographic image recognition and filtering using incremental learning in compressed domain
NASA Astrophysics Data System (ADS)
Zhang, Jing; Wang, Chao; Zhuo, Li; Geng, Wenhao
2015-11-01
With the rapid development and popularity of the network, the openness, anonymity, and interactivity of networks have led to the spread and proliferation of pornographic images on the Internet, which have done great harm to adolescents' physical and mental health. With the establishment of image compression standards, pornographic images are mainly stored with compressed formats. Therefore, how to efficiently filter pornographic images is one of the challenging issues for information security. A pornographic image recognition and filtering method in the compressed domain is proposed by using incremental learning, which includes the following steps: (1) low-resolution (LR) images are first reconstructed from the compressed stream of pornographic images, (2) visual words are created from the LR image to represent the pornographic image, and (3) incremental learning is adopted to continuously adjust the classification rules to recognize the new pornographic image samples after the covering algorithm is utilized to train and recognize the visual words in order to build the initial classification model of pornographic images. The experimental results show that the proposed pornographic image recognition method using incremental learning has a higher recognition rate as well as costing less recognition time in the compressed domain.
A Framework of Hyperspectral Image Compression using Neural Networks
Masalmah, Yahya M.; Martínez Nieves, Christian; Rivera Soto, Rafael; ...
2015-01-01
Hyperspectral image analysis has gained great attention due to its wide range of applications. Hyperspectral images provide a vast amount of information about underlying objects in an image by using a large range of the electromagnetic spectrum for each pixel. However, since the same image is taken multiple times using distinct electromagnetic bands, the size of such images tends to be significant, which leads to greater processing requirements. The aim of this paper is to present a proposed framework for image compression and to study the possible effects of spatial compression on the quality of unmixing results. Image compression allows us to reduce the dimensionality of an image while still preserving most of the original information, which could lead to faster image processing. Lastly, this paper presents preliminary results of different training techniques used in an Artificial Neural Network (ANN) based compression algorithm.
Morgan, Karen L. M.; Krohn, M. Dennis; Guy, Kristy K.
2016-04-28
The U.S. Geological Survey (USGS), as part of the National Assessment of Coastal Change Hazards project, conducts baseline and storm-response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms (Morgan, 2009). On September 14-15, 2008, the USGS conducted an oblique aerial photographic survey along the Alabama, Mississippi, and Louisiana barrier islands and the north Texas coast, aboard a Beechcraft Super King Air 200 (aircraft) at an altitude of 500 feet (ft) and approximately 1,200 ft offshore. This mission was flown to collect post-Hurricane Ike data for assessing incremental changes in the beach and nearshore area since the last survey, flown on September 9-10, 2008, and the data can be used in the assessment of future coastal change. The photographs provided in this report are Joint Photographic Experts Group (JPEG) images. ExifTool was used to add the following to the header of each photo: time of collection, Global Positioning System (GPS) latitude, GPS longitude, keywords, credit, artist (photographer), caption, copyright, and contact information. The photograph locations are an estimate of the position of the aircraft at the time the photograph was taken and do not indicate the location of any feature in the images (see the Navigation Data page). These photographs document the state of the barrier islands and other coastal features at the time of the survey. Pages containing thumbnail images of the photographs, referred to as contact sheets, were created in 5-minute segments of flight time. These segments can be found on the Photos and Maps page. Photographs can be opened directly with any JPEG-compatible image viewer by clicking on a thumbnail on the contact sheet. In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML file was created using the photographic navigation files. The KML file can be found in the kml folder.
Morgan, Karen L. M.; Westphal, Karen A.
2016-04-21
The U.S. Geological Survey (USGS), as part of the National Assessment of Coastal Change Hazards project, conducts baseline and storm-response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms (Morgan, 2009). On September 2-3, 2012, the USGS conducted an oblique aerial photographic survey along the Alabama, Mississippi, and Louisiana barrier islands aboard a Cessna 172 (aircraft) at an altitude of 500 feet (ft) and approximately 1,000 ft offshore. This mission was flown to collect post-Hurricane Isaac data for assessing incremental changes in the beach and nearshore area since the last survey, flown in September 2008 (central Louisiana barrier islands) and June 2011 (Dauphin Island, Alabama, to Breton Island, Louisiana), and the data can be used in the assessment of future coastal change. The photographs provided in this report are Joint Photographic Experts Group (JPEG) images. ExifTool was used to add the following to the header of each photo: time of collection, Global Positioning System (GPS) latitude, GPS longitude, keywords, credit, artist (photographer), caption, copyright, and contact information. The photograph locations are an estimate of the position of the aircraft at the time the photograph was taken and do not indicate the location of any feature in the images (see the Navigation Data page). These photographs document the state of the barrier islands and other coastal features at the time of the survey. Pages containing thumbnail images of the photographs, referred to as contact sheets, were created in 5-minute segments of flight time. These segments can be found on the Photos and Maps page. Photographs can be opened directly with any JPEG-compatible image viewer by clicking on a thumbnail on the contact sheet. In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML files were created using the photographic navigation files. These KML files can be found in the kml folder.
ESO and NSF Sign Agreement on ALMA
NASA Astrophysics Data System (ADS)
2003-02-01
Green Light for World's Most Powerful Radio Observatory On February 25, 2003, the European Southern Observatory (ESO) and the US National Science Foundation (NSF) are signing a historic agreement to construct and operate the world's largest and most powerful radio telescope, operating at millimeter and sub-millimeter wavelength. The Director General of ESO, Dr. Catherine Cesarsky, and the Director of the NSF, Dr. Rita Colwell, act for their respective organizations. Known as the Atacama Large Millimeter Array (ALMA), the future facility will encompass sixty-four interconnected 12-meter antennae at a unique, high-altitude site at Chajnantor in the Atacama region of northern Chile. ALMA is a joint project between Europe and North America. In Europe, ESO is leading on behalf of its ten member countries and Spain. In North America, the NSF also acts for the National Research Council of Canada and executes the project through the National Radio Astronomy Observatory (NRAO) operated by Associated Universities, Inc. (AUI). The conclusion of the ESO-NSF Agreement now gives the final green light for the ALMA project. The total cost of approximately 650 million Euro (or US Dollars) is shared equally between the two partners. Dr. Cesarsky is excited: "This agreement signifies the start of a great project of contemporary astronomy and astrophysics. Representing Europe, and in collaboration with many laboratories and institutes on this continent, we together look forward towards wonderful research projects. With ALMA we may learn how the earliest galaxies in the Universe really looked like, to mention but one of the many eagerly awaited opportunities with this marvellous facility". "With this agreement, we usher in a new age of research in astronomy" says Dr. Colwell. "By working together in this truly global partnership, the international astronomy community will be able to ensure the research capabilities needed to meet the long-term demands of our scientific enterprise, and that we will be able to study and understand our universe in ways that have previously been beyond our vision". The recent Presidential decree from Chile for AUI and the agreement signed in late 2002 between ESO and the Government of the Republic of Chile (cf. ESO PR 18/02) recognize the interest that the ALMA Project has for Chile, as it will deepen and strengthen the cooperation in scientific and technological matters between the parties. A joint ALMA Board has been established which oversees the realisation of the ALMA project via the management structure. This Board meets for the first time on February 24-25, 2003, at NSF in Washington and will witness this historic event. ALMA: Imaging the Light from Cosmic Dawn Captions: PR Photo 06a/03 shows an artist's view of the Atacama Large Millimeter Array (ALMA), with 64 12-m antennae.
PR Photo 06b/03 is another such view, with the array arranged in a compact configuration at the high-altitude Chajnantor site. The ALMA VertexRSI prototype antennae is shown in PR Photo 06c/03 on the Antenna Test Facility (ATF) site at the NRAO Very Large Array (VLA) site near Socorro (New Mexico, USA). The future ALMA site at Llano de Chajnantor at 5000 metre altitude, some 40 km East of the village of San Pedro de Atacama (Chile) is seen in PR Photo 06d/03 - this view was obtained at 11 hrs in the morning on a crisp and clear autumn day (more views of this site are available at the Chajnantor Photo Gallery). The Atacama Large Millimeter Array (ALMA) will be one of astronomy's most powerful telescopes - providing unprecedented imaging capabilities and sensitivity in the corresponding wavelength range, many orders of magnitude greater than anything of its kind today. ALMA will be an array of 64 antennae that will work together as one telescope to study millimeter and sub-millimeter wavelength radiation from space. This radiation crosses the critical boundary between infrared and microwave radiation and holds the key to understanding such processes as planet and star formation, the formation of early galaxies and galaxy clusters, and the formation of organic and other molecules in space. "ALMA will be one of astronomy's premier tools for studying the universe" says Nobel Laureate Riccardo Giacconi, President of AUI (and former ESO Director General (1993-1999)). "The entire astronomical community is anxious to have the unprecedented power and resolution that ALMA will provide". The President of the ESO Council, Professor Piet van der Kruit, agrees: "ALMA heralds a break-through in sub-millimeter and millimeter astronomy, allowing some of the most penetrating studies the Universe ever made. It is safe to predict that there will be exciting scientific surprises when ALMA enters into operation". What is millimeter and sub-millimeter wavelength astronomy? Astronomers learn about objects in space by studying the energy emitted by those objects. Our Sun and the other stars throughout the Universe emit visible light. But these objects also emit other kinds of light waves, such as X-rays, infrared radiation, and radio waves. Some objects emit very little or no visible light, yet are strong sources at other wavelengths in the electromagnetic spectrum. Much of the energy in the Universe is present in the sub-millimeter and millimeter portion of the spectrum. This energy comes from the cold dust mixed with gas in interstellar space. It also comes from distant galaxies that formed many billions of years ago at the edges of the known universe. With ALMA, astronomers will have a uniquely powerful facility with access to this remarkable portion of the spectrum and hence, new and wonderful opportunities to learn more about those objects. Current observatories simply do not have anywhere near the necessary sensitivity and resolution to unlock the secrets that abundant sub-millimeter and millimeter wavelength radiation can reveal. It will take the unparalleled power of ALMA to fully study the cosmic emission at this wavelength and better understand the nature of the universe. Scientists from all over the world will use ALMA. They will compete for observing time by submitting proposals, which will be judged by a group of their peers on the basis of scientific merit. 
ALMA's unique capabilities ALMA's ability to detect remarkably faint sub-millimeter and millimeter wavelength emission and to create high-resolution images of the source of that emission gives it capabilities not found in any other astronomical instruments. ALMA will therefore be able to study phenomena previously out of reach to astronomers and astrophysicists, such as: * Very young galaxies forming stars at the earliest times in cosmic history; * New planets forming around young stars in our galaxy, the Milky Way; * The birth of new stars in spinning clouds of gas and dust; and * Interstellar clouds of gas and dust that are the nurseries of complex molecules and even organic chemicals that form the building blocks of life. How will ALMA work? All of ALMA's 64 antennae will work in concert, taking quick "snapshots" or long-term exposures of astronomical objects. Cosmic radiation from these objects will be reflected from the surface of each antenna and focussed onto highly sensitive receivers cooled to just a few degrees above absolute zero in order to suppress undesired "noise" from the surroundings. There the signals will be amplified many times, digitized, and then sent along underground fiber-optic cables to a large signal processor in the central control building. This specialized computer, called a correlator - running at 16,000 million-million operations per second - will combine all of the data from the 64 antennae to make images of remarkable quality. The extraordinary ALMA site Since atmospheric water vapor absorbs millimeter and (especially) sub-millimeter waves, ALMA must be constructed at a very high altitude in a very dry region of the earth. Extensive tests showed that the sky above the Atacama Desert of Chile has the excellent clarity and stability essential for ALMA. That is why ALMA will be built there, on Llano de Chajnantor at an altitude of 5,000 metres in the Chilean Andes. A series of views of this site, also in high-resolution suitable for reproduction, is available at the Chajnantor Photo Gallery. Timeline for ALMA June 1998: Phase 1 (Research and Development) June 1999: European/American Memorandum of Understanding February 2003: Signature of the bilateral Agreement 2004: Tests of the Prototype System 2007: Initial scientific operation of a partially completed array 2011: End of construction of the array
CWICOM: A Highly Integrated & Innovative CCSDS Image Compression ASIC
NASA Astrophysics Data System (ADS)
Poupat, Jean-Luc; Vitulli, Raffaele
2013-08-01
The space market is more and more demanding in terms of on-board image compression performance. Earth observation satellite instrument resolution, agility, and swath are continuously increasing, multiplying by 10 the volume of imagery acquired in one orbit. In parallel, satellite size and mass are decreasing, requiring innovative electronic technologies that reduce size, mass, and power consumption. Astrium, a leader in the market of combined compression and memory solutions for space applications, has developed a new image compression ASIC which is presented in this paper. CWICOM is a high-performance and innovative image compression ASIC developed by Astrium in the frame of ESA contract n°22011/08/NLL/LvH. The objective of this ESA contract is to develop a radiation-hardened ASIC that implements the CCSDS 122.0-B-1 Standard for Image Data Compression, that has a SpaceWire interface for configuring and controlling the device, and that is compatible with the Sentinel-2 interface and with similar Earth observation missions. CWICOM stands for CCSDS Wavelet Image COMpression ASIC. It is a large-dynamic-range, large-image, very high speed image compression ASIC potentially relevant for compression of any 2D image with bi-dimensional data correlation, such as Earth observation, scientific data compression… The paper presents some of the main aspects of the CWICOM development, such as the algorithm and specification, the innovative memory organization, the validation approach, and the status of the project.
Image Data Compression Having Minimum Perceptual Error
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1997-01-01
A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
NASA Astrophysics Data System (ADS)
Leihong, Zhang; Zilan, Pan; Luying, Wu; Xiuhua, Ma
2016-11-01
To address the problem that large images can hardly be retrieved under stringent hardware restrictions and that the security level is low, a method based on compressive ghost imaging (CGI) with the Fast Fourier Transform (FFT), named FFT-CGI, is proposed. Initially, the information is encrypted by the sender with the FFT, and the FFT-coded image is encrypted by the CGI system with a secret key. The receiver then decrypts the image with the aid of compressive sensing (CS) and the FFT. Simulation results are given to verify the feasibility, security, and compression performance of the proposed encryption scheme. The experiments suggest that the method can improve the quality of large images compared with conventional ghost imaging and achieve imaging of large-sized images; furthermore, the amount of data transmitted is greatly reduced because of the combination of compressive sensing and the FFT, and the security level of ghost imaging is improved, as assessed under ciphertext-only attack (COA), chosen-plaintext attack (CPA), and noise attack. This technique can be immediately applied to encryption and data storage, with the advantages of high security, fast transmission, and high quality of reconstructed information.
Blind compressed sensing image reconstruction based on alternating direction method
NASA Astrophysics Data System (ADS)
Liu, Qinan; Guo, Shuxu
2018-04-01
In order to solve the problem of how to reconstruct the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on the existing blind compressed sensing theory, the optimal solution is found by an alternating minimization method. The proposed method addresses the difficulty of representing the sparse basis in compressed sensing, suppresses noise, and improves the quality of the reconstructed image. It ensures that the blind compressed sensing model has a unique solution and can recover the original image signal from a complex environment with strong adaptability. The experimental results show that the image reconstruction algorithm based on blind compressed sensing proposed in this paper can recover high quality image signals under under-sampling conditions.
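A minimal NumPy sketch of the alternating scheme the abstract outlines, not the authors' exact alternating-direction solver: with only compressed measurements Y = Phi D S available, it alternates a sparse-coefficient update (one ISTA step) with a least-squares dictionary update. All dimensions and parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k, T = 64, 32, 64, 200          # signal length, measurements, atoms, training signals
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

# Synthesize T signals that are 4-sparse in a basis unknown to the reconstructor.
D_true = rng.standard_normal((n, k)); D_true /= np.linalg.norm(D_true, axis=0)
S_true = np.zeros((k, T))
for t in range(T):
    S_true[rng.choice(k, 4, replace=False), t] = rng.standard_normal(4)
Y = Phi @ (D_true @ S_true)           # only these compressed measurements are observed

D = rng.standard_normal((n, k)); D /= np.linalg.norm(D, axis=0)
S = np.zeros((k, T))
lam = 0.02

for _ in range(100):
    # Sparse-coefficient step: one ISTA iteration on ||Y - Phi D S||^2 + lam * ||S||_1.
    A = Phi @ D
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    S = S - step * A.T @ (A @ S - Y)
    S = np.sign(S) * np.maximum(np.abs(S) - step * lam, 0.0)
    # Dictionary step: least-squares fit of D so that Phi D S best matches Y.
    D = np.linalg.lstsq(Phi, Y @ np.linalg.pinv(S), rcond=None)[0]
    norms = np.linalg.norm(D, axis=0); norms[norms == 0] = 1.0
    D = D / norms

X_hat = D @ S                          # reconstructed signals
```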
BILSAT-1: A low-cost, agile, earth observation microsatellite for Turkey
NASA Astrophysics Data System (ADS)
Bradford, Andy; Gomes, Luis M.; Sweeting, Martin; Yuksel, Gokhan; Ozkaptan, Cem; Orlu, Unsal
2003-08-01
TUBITAK-BILTEN has initiated a project to develop and propagate small satellite technologies in Turkey. As part of this initiative, TUBITAK-BILTEN is working with SSTL to develop a 100kg class enhanced microsatellite, BILSAT-1. With the successful completion of this project, TUBITAK-BILTEN will be capable of producing its own satellites, covering all phases from design to production and in-orbit operation. It is hoped that acquisition of these technologies will stimulate Turkish industry into greater involvement in space related activities. The project was started in August 2001 and will run through to February 2003, with launch scheduled for July 2003. BILSAT-1 will be one of the most capable satellites that SSTL have ever built and features several technologies normally only found on larger satellites. Specifically, the Attitude Determination and Control System of BILSAT-1 will be the most advanced that SSTL have ever flown: dual redundant star cameras, sun sensors and rate gyros provide accurate and precise attitude information, allowing a very high degree of attitude knowledge. Actuators on board will make the satellite extremely agile, for instance allowing fast slew manoeuvres about its roll and pitch axes. The agile control system also enables ground target revisit times to be reduced compared to nadir-pointing gravity gradient stabilized satellites, and will allow stereoscopic imaging, target tracking and multiple attitude imaging to be undertaken with the satellite's prime payloads: a 4-band multispectral 26-metre GSD imaging system and a 12-metre GSD panchromatic imager. Also on board the satellite are additional payloads, including a state-of-the-art Digital Signal Processing payload (GEZGIN) that will enable real time image compression in JPEG2000 format using a high performance floating point DSP, and a low resolution multispectral (9-band) camera (COBAN). BILSAT-1 will also co-operate in the international Disaster Monitoring Constellation (DMC) led by SSTL, providing the ability to enhance the imaging capabilities of the constellation. In parallel with the microsatellite design and build activities, all the infrastructure required to design, produce and operate a satellite is being constructed at BILTEN's premises in Turkey. This infrastructure includes assembly and integration rooms, a PCB prototyping workshop, research and development laboratories, and a satellite mission control ground station.
BILSAT-1: a Low-Cost Agile Earth Observation Microsatellite for Turkey
NASA Astrophysics Data System (ADS)
Bradford, Andy; Gomes, Luis M.; Sweeting, Martin, Sir
TUBITAK-BILTEN has initiated a project to develop and propagate small satellite technologies in Turkey. As part of this initiative, TUBITAK-BILTEN is working with SSTL (UK) to develop a 100kg class microsatellite, BILSAT-1. With the successful completion of this project, TUBITAK-BILTEN will be capable of producing its own satellites, covering all phases from design to production. It is hoped that acquisition of these technologies will stimulate Turkish industry into greater involvement in space related activities. The project was started in August 2001 and will run through to launch scheduled for February 2003. BILSAT-1 will be one of the most capable microsatellites built by SSTL and features several technologies normally only found on larger satellites. Specifically, the Attitude Determination and Control System of BILSAT-1 will include dual-redundant star cameras, sun sensors and rate gyros to provide precise attitude information allowing very accurate attitude knowledge. Actuators on board will make the satellite extremely agile, allowing fast slew manoeuvres about its roll and pitch axes enabling the satellite to reduce imaging revisit times compared to fixed nadir-pointing gravity gradient stabilized satellites, and will allow novel and complex operations scenarios to be undertaken with the satellites prime payloads; a 26-metre GSD 4-band multispectral and a 12-metre GSD panchromatic imaging system. Stereoscopic imaging, target tracking and multiple attitude imaging are all operational scenarios that feature in the mission plan. Also on board the satellite are additional payloads, including a state-of-the-art Digital Signal Processing Board payload that will enable real time image compression in JPEG2000 format using a high performance floating point DSP, and a low resolution 9-band multispectral camera. BILSAT-1 will join the other 5 microsatellites in the SSTL-led international Disaster Monitoring Constellation (DMC), providing the ability to enhance the imaging capabilities of the constellation whose objective is to provide EO with daily revisit worldwide. In parallel with the Satellite design and build activities at the Surrey Space Centre &SSTL in the UK, all the infrastructure required to design, produce and operate a satellite, is being constructed at BILTEN's premises in Turkey. This infrastructure includes assembly and integration rooms, a PCB prototyping workshop, research and development laboratories, and a satellite mission control ground station.
Next VLT Instrument Ready for the Astronomers
NASA Astrophysics Data System (ADS)
2000-02-01
FORS2 Commissioning Period Successfully Terminated The commissioning of the FORS2 multi-mode astronomical instrument at KUEYEN , the second FOcal Reducer/low dispersion Spectrograph at the ESO Very Large Telescope, was successfully finished today. This important work - that may be likened with the test driving of a new car model - took place during two periods, from October 22 to November 21, 1999, and January 22 to February 8, 2000. The overall goal was to thoroughly test the functioning of the new instrument, its conformity to specifications and to optimize its operation at the telescope. FORS2 is now ready to be handed over to the astronomers on April 1, 2000. Observing time for a six-month period until October 1 has already been allocated to a large number of research programmes. Two of the images that were obtained with FORS2 during the commissioning period are shown here. An early report about this instrument is available as ESO PR 17/99. The many modes of FORS2 The FORS Commissioning Team carried out a comprehensive test programme for all observing modes. These tests were done with "observation blocks (OBs)" that describe the set-up of the instrument and telescope for each exposure in all details, e.g., position in the sky of the object to be observed, filters, exposure time, etc.. Whenever an OB is "activated" from the control console, the corresponding observation is automatically performed. Additional information about the VLT Data Flow System is available in ESO PR 10/99. The FORS2 observing modes include direct imaging, long-slit and multi-object spectroscopy, exactly as in its twin, FORS1 at ANTU . In addition, FORS2 contains the "Mask Exchange Unit" , a motorized magazine that holds 10 masks made of thin metal plates into which the slits are cut by means of a laser. The advantage of this particular observing method is that more spectra (of more objects) can be taken with a single exposure (up to approximately 80) and that the shape of the slits can be adapted to the shape of the objects, thus increasing the scientific return. Results obtained so far look very promising. To increase further the scientific power of the FORS2 instrument in the spectroscopic mode, a number of new optical dispersion elements ("grisms", i.e., a combination of a grating and a glass prism) have been added. They give the scientists a greater choice of spectral resolution and wavelength range. Another mode that is new to FORS2 is the high time resolution mode. It was demonstrated with the Crab pulsar, cf. ESO PR 17/99 and promises very interesting scientific returns. Images from the FORS2 Commissioning Phase The two composite images shown below were obtained during the FORS2 commissioning work. They are based on three exposures through different optical broadband filtres (B: 429 nm central wavelength; 88 nm FWHM (Full Width at Half Maximum), V: 554/111 nm, R: 655/165 nm). All were taken with the 2048 x 2048 pixel 2 CCD detector with a field of view of 6.8 x 6.8 arcmin 2 ; each pixel measures 24 µm square. They were flatfield corrected and bias subtracted, scaled in intensity and some cosmetic cleaning was performed, e.g. removal of bad columns on the CCD. North is up and East is left. Tarantula Nebula in the Large Magellanic Cloud ESO Press Photo 05a/00 ESO Press Photo 05a/00 [Preview; JPEG: 400 x 452; 52k] [Normal; JPEG: 800 x 903; 142k] [Full-Res; JPEG: 2048 x 2311; 2.0Mb] The Tarantula Nebula in the Large Magellanic Cloud , as obtained with FORS2 at KUEYEN during the recent Commissioning period. 
It was taken during the night of January 31 - February 1, 2000. It is a composite of three exposures in B (30 sec exposure, image quality 0.75 arcsec; here rendered in blue colour), V (15 sec, 0.70 arcsec; green) and R (10 sec, 0.60 arcsec; red). The full-resolution version of this photo retains the original pixels. 30 Doradus, also known as the Tarantula Nebula, or NGC 2070, is located in the Large Magellanic Cloud (LMC), some 170,000 light-years away. It is one of the largest known star-forming regions in the Local Group of Galaxies. It was first catalogued as a star, but then recognized to be a nebula by the French astronomer A. Lacaille in 1751-52. The Tarantula Nebula is the only extra-galactic nebula which can be seen with the unaided eye. It contains in the centre the open stellar cluster R 136 with many of the largest, hottest, and most massive stars known. Radio Galaxy Centaurus A ESO Press Photo 05b/00 [Preview; JPEG: 400 x 448; 40k] [Normal; JPEG: 800 x 896; 110k] [Full-Res; JPEG: 2048 x 2293; 2.0Mb] The radio galaxy Centaurus A, as obtained with FORS2 at KUEYEN during the recent Commissioning period. It was taken during the night of January 31 - February 1, 2000. It is a composite of three exposures in B (300 sec exposure, image quality 0.60 arcsec; here rendered in blue colour), V (240 sec, 0.60 arcsec; green) and R (240 sec, 0.55 arcsec; red). The full-resolution version of this photo retains the original pixels. ESO Press Photo 05c/00 [Preview; JPEG: 400 x 446; 52k] [Normal; JPEG: 801 x 894; 112k] An area north-west of the centre of Centaurus A with a detailed view of the dust lane and clusters of luminous blue stars. The normal version of this photo retains the original pixels. The new FORS2 image of Centaurus A, also known as NGC 5128, is an example of how frontier science can be combined with aesthetic aspects. This galaxy is a most interesting object for the present attempts to understand active galaxies. It is being investigated by means of observations in all spectral regions, from radio via infrared and optical wavelengths to X- and gamma-rays. It is one of the most extensively studied objects in the southern sky. FORS2, with its large field-of-view and excellent optical resolution, makes it possible to study the global context of the active region in Centaurus A in great detail. Note for instance the great number of massive and luminous blue stars that are well resolved individually, in the upper right and lower left in PR Photo 05b/00. Centaurus A is one of the foremost examples of a radio-loud active galactic nucleus (AGN). On images obtained at optical wavelengths, thick dust layers almost completely obscure the galaxy's centre. This structure was first reported by Sir John Herschel in 1847. Until 1949, NGC 5128 was thought to be a strange object in the Milky Way, but it was then identified as a powerful radio galaxy and designated Centaurus A. The distance is about 10-13 million light-years (3-4 Mpc) and the apparent visual magnitude is about 8, or 5 times too faint to be seen with the unaided eye. There is strong evidence that Centaurus A is a merger of an elliptical with a spiral galaxy, since elliptical galaxies would not have had enough dust and gas to form the young, blue stars seen along the edges of the dust lane. The core of Centaurus A is the smallest known extragalactic radio source, only 10 light-days across. A jet of high energy particles from this centre is observed in radio and X-ray images.
The core probably contains a supermassive black hole with a mass of about 100 million solar masses. This is the caption to ESO PR Photos 05a-c/00. They may be reproduced, if credit is given to the European Southern Observatory.
Aldossari, M; Alfalou, A; Brosseau, C
2014-09-22
This study presents and validates an optimized method of simultaneous compression and encryption designed to process images with close spectra. This approach is well adapted to the compression and encryption of images of a time-varying scene, as well as to static polarimetric images. We use the recently developed spectral fusion method [Opt. Lett. 35, 1914-1916 (2010)] to deal with the close resemblance of the images. The spectral plane (containing the information to send and/or to store) is decomposed into several independent areas which are assigned in a specific way. In addition, each spectrum is shifted in order to minimize overlap. The dual purpose of these operations is to optimize the spectral plane, allowing us to keep the low- and high-frequency information (compression), and to introduce additional noise for reconstructing the images (encryption). Our results show that not only can the control of the spectral plane enhance the number of spectra to be merged, but also that a compromise between the compression rate and the quality of the reconstructed images can be tuned. We use a root-mean-square (RMS) optimization criterion to treat compression. Image encryption is realized at different security levels. Firstly, we add a specific encryption level which is related to the different areas of the spectral plane, and then we make use of several random phase keys. An in-depth analysis of the spectral fusion methodology is carried out in order to find a good trade-off between the compression rate and the quality of the reconstructed images. Our newly proposed spectral shift allows us to minimize the image overlap. We further analyze the influence of the spectral shift on the reconstructed image quality and compression rate. The performance of the multiple-image optical compression and encryption method is verified by analyzing several video sequences and polarimetric images.
A Lossless hybrid wavelet-fractal compression for welding radiographic images.
Mekhalfa, Faiza; Avanaki, Mohammad R N; Berkani, Daoud
2016-01-01
In this work a lossless wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using wavelet transformation and a fractal coding algorithm. The decompressed image is subtracted from the original one to obtain a residual image, which is coded using the Huffman algorithm. Simulation results show that with the proposed scheme we achieve an infinite peak signal to noise ratio (PSNR) with a higher compression ratio compared to a typical lossless method. Moreover, the use of the wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results of several welding radiographic images using the proposed scheme are evaluated quantitatively and compared with the results of the Huffman coding algorithm.
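The lossless-hybrid structure described above can be illustrated independently of the specific wavelet-fractal coder: any lossy round trip plus a losslessly coded residual reconstructs the image exactly. In the hedged sketch below, a crude pixel quantizer stands in for the wavelet-fractal stage and zlib stands in for the Huffman residual coder.

```python
import numpy as np
import zlib

def lossy_roundtrip(img, q=16):
    """Crude stand-in for the lossy wavelet-fractal stage: quantize pixel values."""
    return (np.round(img / q) * q).astype(np.int16)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(np.int16)       # stand-in radiograph

approx = lossy_roundtrip(img)                                # lossy reconstruction
residual = img - approx                                      # small-amplitude residual
packed = zlib.compress(residual.astype(np.int8).tobytes())   # lossless residual code

# Decoder: rebuild the lossy approximation, then add back the decoded residual.
res_back = np.frombuffer(zlib.decompress(packed), dtype=np.int8).reshape(img.shape)
recovered = approx + res_back
assert np.array_equal(recovered, img)    # exact (infinite-PSNR) reconstruction
```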
Digital mammography, cancer screening: Factors important for image compression
NASA Technical Reports Server (NTRS)
Clarke, Laurence P.; Blaine, G. James; Doi, Kunio; Yaffe, Martin J.; Shtern, Faina; Brown, G. Stephen; Winfield, Daniel L.; Kallergi, Maria
1993-01-01
The use of digital mammography for breast cancer screening poses several novel problems such as development of digital sensors, computer assisted diagnosis (CAD) methods for image noise suppression, enhancement, and pattern recognition, compression algorithms for image storage, transmission, and remote diagnosis. X-ray digital mammography using novel direct digital detection schemes or film digitizers results in large data sets and, therefore, image compression methods will play a significant role in the image processing and analysis by CAD techniques. In view of the extensive compression required, the relative merit of 'virtually lossless' versus lossy methods should be determined. A brief overview is presented here of the developments of digital sensors, CAD, and compression methods currently proposed and tested for mammography. The objective of the NCI/NASA Working Group on Digital Mammography is to stimulate the interest of the image processing and compression scientific community for this medical application and identify possible dual use technologies within the NASA centers.
Wavelet/scalar quantization compression standard for fingerprint images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, C.M.
1996-06-12
The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
Fu, Chi-Yung; Petrich, Loren I.
1997-01-01
An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.
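A toy version of the filling and differencing steps described in the patent abstract, assuming NumPy: edge-pixel values are held fixed and Laplace's equation is relaxed by plain Jacobi iteration (the patent uses a multi-grid solver), after which the difference array is formed. The edge detector and all parameters here are placeholders.

```python
import numpy as np

def fill_from_edges(values, edge_mask, iters=500):
    """Relax Laplace's equation by Jacobi iteration, keeping edge pixels fixed."""
    filled = np.where(edge_mask, values, values[edge_mask].mean())
    for _ in range(iters):
        avg = 0.25 * (np.roll(filled, 1, 0) + np.roll(filled, -1, 0) +
                      np.roll(filled, 1, 1) + np.roll(filled, -1, 1))
        filled = np.where(edge_mask, values, avg)   # edge pixels stay pinned
    return filled

rng = np.random.default_rng(0)
img = np.cumsum(rng.standard_normal((64, 64)), axis=1)     # smooth-ish test image

# Crude edge detector: large local gradient magnitude (placeholder for the real one).
gy, gx = np.gradient(img)
edge_mask = np.hypot(gx, gy) > np.percentile(np.hypot(gx, gy), 90)

filled = fill_from_edges(img, edge_mask)
difference = img - filled   # difference array, compressed separately from the edge file
# Decoder side: rebuild `filled` from the edge file alone, then add the difference array back.
```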
Fu, C.Y.; Petrich, L.I.
1997-03-25
An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace`s equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.
Hyperspectral data compression using a Wiener filter predictor
NASA Astrophysics Data System (ADS)
Villeneuve, Pierre V.; Beaven, Scott G.; Stocker, Alan D.
2013-09-01
The application of compression to hyperspectral image data is a significant technical challenge. A primary bottleneck in disseminating data products to the tactical user community is the limited communication bandwidth between the airborne sensor and the ground station receiver. This report summarizes the newly developed "Z-Chrome" algorithm for lossless compression of hyperspectral image data. A Wiener filter prediction framework is used as a basis for modeling new image bands from already-encoded bands. The resulting residual errors are then compressed using available state-of-the-art lossless image compression functions. Compression performance is demonstrated on a large amount of test data collected over a wide variety of scene content from six different airborne and spaceborne sensors.
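The Z-Chrome internals are not given in the abstract; the sketch below only illustrates the general idea of a Wiener-style predictor: each new band is predicted as a least-squares linear combination of already-encoded bands plus an offset, and only the small integer residual is left for a lossless coder (entropy coding omitted). The synthetic cube and band model are assumptions.

```python
import numpy as np

def wiener_predict(prev_bands, target):
    """Least-squares (Wiener-style) prediction of a band from already-coded bands."""
    X = np.stack([b.ravel() for b in prev_bands] + [np.ones(target.size)], axis=1)
    w, *_ = np.linalg.lstsq(X, target.ravel(), rcond=None)
    return (X @ w).reshape(target.shape)

rng = np.random.default_rng(0)
base = rng.integers(0, 1024, (32, 32)).astype(float)           # stand-in spatial structure
cube = np.stack([np.round(base * (1 + 0.1 * b) + rng.normal(0, 2, base.shape))
                 for b in range(5)]).astype(int)                # 5 correlated "spectral" bands

residuals = [cube[0]]                                           # first band coded directly
for b in range(1, cube.shape[0]):
    pred = np.round(wiener_predict(cube[:b].astype(float), cube[b].astype(float))).astype(int)
    residuals.append(cube[b] - pred)   # small integer residuals left for the lossless coder
```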
Impact of lossy compression on diagnostic accuracy of radiographs for periapical lesions
NASA Technical Reports Server (NTRS)
Eraso, Francisco E.; Analoui, Mostafa; Watson, Andrew B.; Rebeschini, Regina
2002-01-01
OBJECTIVES: The purpose of this study was to evaluate the lossy Joint Photographic Experts Group compression for endodontic pretreatment digital radiographs. STUDY DESIGN: Fifty clinical charge-coupled device-based, digital radiographs depicting periapical areas were selected. Each image was compressed at 2, 4, 8, 16, 32, 48, and 64 compression ratios. One root per image was marked for examination. Images were randomized and viewed by four clinical observers under standardized viewing conditions. Each observer read the image set three times, with at least two weeks between each reading. Three pre-selected sites per image (mesial, distal, apical) were scored on a five-scale score confidence scale. A panel of three examiners scored the uncompressed images, with a consensus score for each site. The consensus score was used as the baseline for assessing the impact of lossy compression on the diagnostic values of images. The mean absolute error between consensus and observer scores was computed for each observer, site, and reading session. RESULTS: Balanced one-way analysis of variance for all observers indicated that for compression ratios 48 and 64, there was significant difference between mean absolute error of uncompressed and compressed images (P <.05). After converting the five-scale score to two-level diagnostic values, the diagnostic accuracy was strongly correlated (R (2) = 0.91) with the compression ratio. CONCLUSION: The results of this study suggest that high compression ratios can have a severe impact on the diagnostic quality of the digital radiographs for detection of periapical lesions.
An analysis of absorbing image on the Indonesian text by using color matching
NASA Astrophysics Data System (ADS)
Hutagalung, G. A.; Tulus; Iryanto; Lubis, Y. F. A.; Khairani, M.; Suriati
2018-03-01
The insertion of a message into an image is performed by embedding the message character by character in some of the pixels. One way of inserting a message is to add the ASCII decimal value of a character to the decimal value of a primary color channel of the image. Messages are composed of letters, numbers, or symbols, and the number and frequency of the letters used differ from word to word and from language to language. In Indonesian, the letter A is the most widely used, and the use of the other letters strongly affects the clarity of a message or text written in the language. This study aims to determine how large a message in Indonesian an image can absorb and which factors account for the differences. The data used in this study consist of several images in JPG or JPEG format, obtained from image drawing software or from image-capture hardware, at different image sizes. Test results on four samples of a color image were obtained using an image size of 1200 x 1920.
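A hedged sketch of the kind of per-character insertion the abstract describes, embedding the ASCII codes of a message into the least-significant bits of an image's color values; the paper's exact pixel and channel selection rule is not reproduced, and the message and image sizes below are only illustrative.

```python
import numpy as np

def embed_lsb(pixels, message):
    """Hide the message bytes in the least-significant bit of successive color values."""
    bits = np.unpackbits(np.frombuffer(message.encode("ascii"), dtype=np.uint8))
    flat = pixels.flatten()
    assert bits.size <= flat.size, "image too small for this message"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract_lsb(pixels, n_chars):
    bits = pixels.flatten()[:n_chars * 8] & 1
    return np.packbits(bits).tobytes().decode("ascii")

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (1200, 1920, 3), dtype=np.uint8)   # RGB cover image
stego = embed_lsb(cover, "pesan rahasia")                        # "secret message" in Indonesian
print(extract_lsb(stego, len("pesan rahasia")))
```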
A Forceful Demonstration by FORS
NASA Astrophysics Data System (ADS)
1998-09-01
New VLT Instrument Provides Impressive Images Following a tight schedule, the ESO Very Large Telescope (VLT) project forges ahead - full operative readiness of the first of the four 8.2-m Unit Telescopes will be reached early next year. On September 15, 1998, another crucial milestone was successfully passed on-time and within budget. Just a few days after having been mounted for the first time at the first 8.2-m VLT Unit Telescope (UT1), the first of a powerful complement of complex scientific instruments, FORS1 ( FO cal R educer and S pectrograph), saw First Light . Right from the beginning, it obtained some excellent astronomical images. This major event now opens a wealth of new opportunities for European Astronomy. FORS - a technological marvel FORS1, with its future twin (FORS2), is the product of one of the most thorough and advanced technological studies ever made of a ground-based astronomical instrument. This unique facility is now mounted at the Cassegrain focus of the VLT UT1. Despite its significant dimensions, 3 x 1.5 metres and 2.3 tonnes, it appears rather small below the giant 53 m 2 Zerodur main mirror. Profiting from the large mirror area and the excellent optical properties of the UT1, FORS has been specifically designed to investigate the faintest and most remote objects in the universe. This complex VLT instrument will soon allow European astronomers to look beyond current observational horizons. The FORS instruments are "multi-mode instruments" that may be used in several different observation modes. It is, e.g., possible to take images with two different image scales (magnifications) and spectra at different resolutions may be obtained of individual or multiple objects. Thus, FORS may first detect the images of distant galaxies and immediately thereafter obtain recordings of their spectra. This allows for instance the determination of their stellar content and distances. As one of the most powerful astronomical instruments of its kind, FORS1 is a real workhorse for the study of the distant universe. How FORS was built The FORS project is being carried out under ESO contract by a consortium of three German astronomical institutes, namely the Heidelberg State Observatory and the University Observatories of Göttingen and Munich. When this project is concluded, the participating institutes will have invested about 180 man-years of work. The Heidelberg State Observatory was responsible for directing the project, for designing the entire optical system, for developing the components of the imaging, spectroscopic, and polarimetric optics, and for producing the special computer software needed for handling and analysing the measurements obtained with FORS. Moreover, a telescope simulator was built in the shop of the Heidelberg observatory that made it possible to test all major functions of FORS in Europe, before the instrument was shipped to Paranal. The University Observatory of Göttingen performed the design, the construction and the installation of the entire mechanics of FORS. Most of the high-precision parts, in particular the multislit unit, were manufactured in the observatory's fine-mechanical workshops. The procurement of the huge instrument housings and flanges, the computer analysis for mechanical and thermal stability of the sensitive spectrograph and the construction of the handling, maintenance and aligning equipment as well as testing the numerous opto- and electro-mechanical functions were also under the responsibility of this Observatory. 
The University of Munich had the responsibility for the management of the project, the integration and test in the laboratory of the complete instrument, for design and installation of all electronics and electro-mechanics, and for developing and testing the comprehensive software to control FORS in all its parts completely by computers (filter and grism wheels, shutters, multi-object slit units, masks, all optical components, electro motors, encoders etc.). In addition, detailed computer software was provided to prepare the complex astronomical observations with FORS in advance and to monitor the instrument performance by quality checks of the scientific data accumulated. In return for building FORS for the community of European astrophysicists, the scientists in the three institutions of the FORS Consortium have received a certain amount of Guaranteed Observing Time at the VLT. This time will be used for various research projects concerned, among others, with minor bodies in the outer solar system, stars at late stages of their evolution and the clouds of gas they eject, as well as galaxies and quasars at very large distances, thereby permitting a look-back towards the early epoch of the universe. First tests of FORS1 at the VLT UT1: a great success After careful preparation, the FORS consortium has now started the so-called commissioning of the instrument. This comprises the thorough verification of the specified instrument properties at the telescope, checking the correct functioning under software control from the Paranal control room and, at the end of this process, a demonstration that the instrument fulfills its scientific purpose as planned. While performing these tests, the commissioning team at Paranal were able to obtain images of various astronomical objects, some of which are shown here. Two of these were obtained on the night of "FORS First Light". The photos demonstrate some of the impressive posibilities with this new instrument. They are based on observations with the FORS standard resolution collimator (field size 6.8 x 6.8 armin = 2048 x 2048 pixels; 1 pixel = 0.20 arcsec). Spiral galaxy NGC 1288 ESO PR Photo 37a/98 ESO PR Photo 37a/98 [Preview - JPEG: 800 x 908 pix - 224k] [High-Res - JPEG: 3000 x 3406 pix - 1.5Mb] A colour image of spiral galaxy NGC 1288, obtained on the night of "FORS First Light". The first photo shows a reproduction of a colour composite image of the beautiful spiral galaxy NGC 1288 in the southern constellation Fornax. PR Photo 37a/98 covers the entire field that was imaged on the 2048 x 2048 pixel CCD camera. It is based on CCD frames in different colours that were taken under good seeing conditions during the night of First Light (15 September 1998). The distance to this galaxy is about 300 million light-years; it recedes with a velocity of 4500 km/sec. Its diameter is about 200,000 light-years. Technical information : Photo 37a/98 is based on a composite of three images taken behind three different filters: B (420 nm; 6 min), V (530 nm; 3 min) and I (800 nm; 3min) during a period of 0.7 arcsec seeing. The field shown measures 6.8 x 6.8 arcmin. North is left; East is down. Distant cluster of galaxies ESO PR Photo 37b/98 ESO PR Photo 37b/98 [Preview - JPEG: 657 x 800 pix - 248k] [High-Res - JPEG: 2465 x 3000 pix - 1.9Mb] A peculiar cluster of galaxies in a sky field near the quasar PB5763 . 
ESO PR Photo 37c/98 ESO PR Photo 37c/98 [Preview - JPEG: 670 x 800 pix - 272k] [High-Res - JPEG: 2512 x 3000 pix - 1.9Mb] Enlargement from PR Photo 37b/98, showing the peculiar cluster of galaxies in more detail. The next photos are reproduced from a 5-min near-infrared exposure, also obtained during the night of First Light of the FORS1 instrument (September 15, 1998). PR Photo 37b/98 shows a sky field near the quasar PB5763 in which is also seen a peculiar, quite distant cluster of galaxies. It consists of a large number of faint and distant galaxies that have not yet been thoroughly investigated. Many other fainter galaxies are seen in other areas, for instance in the right part of the field. This cluster is a good example of a type of object to which much observing time with FORS will be dedicated, once it enters into regular operation. An enlargement of the same field is reproduced in PR Photo 37c/98. It shows the individual members of this cluster of galaxies in more detail. Note in particular the interesting spindle-shaped galaxy that apparently possesses an equatorial ring. There is also a fine spiral galaxy and many fainter galaxies. They may be dwarf members of the cluster or be located in the background at even larger distances. Technical information : PR Photos 37b/98 (negative) and 37c/98 (positive) are based on a monochrome image taken in 0.8 arcsec seeing through a near-infrared (I; 800 nm) filtre. The exposure time was 5 minutes and the image was flat-fielded. The fields shown measure 6.8 x 6.8 arcmin and 2.5 x 2.3 arcmin, respectively. North is to the upper left; East is to the lower left. Spiral galaxy NGC 1232 ESO PR Photo 37d/98 ESO PR Photo 37d/98 [Preview - JPEG: 800 x 912 pix - 760k] [High-Res - JPEG: 3000 x 3420 pix - 5.7Mb] A colour image of spiral galaxy NGC 1232, obtained on September 21, 1998. ESO PR Photo 37e/98 ESO PR Photo 37e/98 [Preview - JPEG: 800 x 961 pix - 480k] [High-Res - JPEG: 3000 x 3602 pix - 3.5Mb] Enlargement of central area of PR Photo 37d/98. This spectacular image (Photo 37d/98) of the large spiral galaxy NGC 1232 was obtained on September 21, 1998, during a period of good observing conditions. It is based on three exposures in ultra-violet, blue and red light, respectively. The colours of the different regions are well visible: the central areas (Photo 37e/98) contain older stars of reddish colour, while the spiral arms are populated by young, blue stars and many star-forming regions. Note the distorted companion galaxy on the left side of Photo 37d/98, shaped like the greek letter "theta". NGC 1232 is located 20 o south of the celestial equator, in the constellation Eridanus (The River). The distance is about 100 million light-years, but the excellent optical quality of the VLT and FORS allows us to see an incredible wealth of details. At the indicated distance, the edge of the field shown in PR Photo 37d/98 corresponds to about 200,000 lightyears, or about twice the size of the Milky Way galaxy. Technical information : PR Photos 37d/98 and 37e/98 are based on a composite of three images taken behind three different filters: U (360 nm; 10 min), B (420 nm; 6 min) and R (600 nm; 2:30 min) during a period of 0.7 arcsec seeing. The fields shown measure 6.8 x 6.8 arcmin and 1.6 x 1.8 arcmin, respectively. North is up; East is to the left. Note: [1] This Press Release is published jointly (in English and German) by the European Southern Observatory, the Heidelberg State Observatory and the University Observatories of Goettingen and Munich. 
A German version of this press release is also available. How to obtain ESO Press Information: ESO Press Information is made available on the World-Wide Web (URL: http://www.eso.org ). ESO Press Photos may be reproduced, if credit is given to the European Southern Observatory.
Forensic steganalysis: determining the stego key in spatial domain steganography
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav; Soukal, David; Holotyak, Taras
2005-03-01
This paper is an extension of our work on stego key search for JPEG images published at EI SPIE in 2004. We provide a more general theoretical description of the methodology, apply our approach to the spatial domain, and add a method that determines the stego key from multiple images. We show that in the spatial domain the stego key search can be made significantly more efficient by working with the noise component of the image obtained using a denoising filter. The technique is tested on the LSB embedding paradigm and on a special case of embedding by noise adding (the +/-1 embedding). The stego key search can be performed for a wide class of steganographic techniques even for sizes of secret message well below those detectable using known methods. The proposed strategy may prove useful to forensic analysts and law enforcement.
An adaptive technique to maximize lossless image data compression of satellite images
NASA Technical Reports Server (NTRS)
Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe
1994-01-01
Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower entropy state. Several techniques were tested on satellite images, including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed, and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties and then applying a coding technique which maximizes compression for the region. Experimental results are presented showing the effect of different coding techniques for regions of different entropy. A rule base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost effectively and at acceptable performance rates with a combination of techniques which are selected based on image contextual information.
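The first stage described above, remapping the image into a lower-entropy representation, can be checked with a simple zeroth-order entropy measurement; a hedged sketch (synthetic data, horizontal DPCM as the remapping) is shown below.

```python
import numpy as np

def entropy(values):
    """Zeroth-order entropy in bits per sample."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
row = np.cumsum(rng.integers(-3, 4, 512))            # smooth scanline, stand-in for satellite data
img = np.tile(row, (256, 1)) + rng.integers(-2, 3, (256, 512))

dpcm = np.diff(img, axis=1, prepend=img[:, :1])      # horizontal DPCM remapping
print("original:", round(entropy(img), 2), "bits/pixel")
print("DPCM    :", round(entropy(dpcm), 2), "bits/pixel")   # lower entropy, cheaper to code losslessly
```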
NASA Astrophysics Data System (ADS)
Zhou, Nanrun; Zhang, Aidi; Zheng, Fen; Gong, Lihua
2014-10-01
The existing ways to encrypt images based on compressive sensing usually treat the whole measurement matrix as the key, which renders the key too large to distribute, memorize, or store. To solve this problem, a new image compression-encryption hybrid algorithm is proposed to realize compression and encryption simultaneously, where the key is easily distributed, stored, or memorized. The input image is divided into 4 blocks to compress and encrypt; the pixels of two adjacent blocks are then exchanged randomly by random matrices. The measurement matrices in compressive sensing are constructed by utilizing circulant matrices and controlling their original row vectors with a logistic map. The random matrices used in the random pixel exchange are bound to the measurement matrices. Simulation results verify the effectiveness and security of the proposed algorithm, as well as its acceptable compression performance.
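A minimal sketch of the key idea as stated in the abstract: the measurement matrix is circulant and its first row is driven by a logistic map, so only the map's initial value and parameter need to be shared as the key. The block splitting and random pixel exchange steps are omitted, and all sizes are illustrative.

```python
import numpy as np
from scipy.linalg import circulant

def logistic_sequence(x0, mu, n, burn=200):
    """Logistic-map iterates used as the key-driven source sequence."""
    x = x0
    for _ in range(burn):
        x = mu * x * (1 - x)
    seq = np.empty(n)
    for i in range(n):
        x = mu * x * (1 - x)
        seq[i] = x
    return seq

n, m = 256, 96                                       # block length, number of measurements
key = (0.3579, 3.99)                                 # (x0, mu) acts as the shared secret
row = 2 * (logistic_sequence(*key, n) > 0.5) - 1     # +/-1 first row derived from the key
Phi = circulant(row)[:m, :] / np.sqrt(m)             # m x n circulant measurement matrix

block = np.random.default_rng(0).integers(0, 256, n).astype(float)  # one flattened image block
y = Phi @ block                                      # compressed and encrypted measurements
```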
Mobile healthcare information management utilizing Cloud Computing and Android OS.
Doukas, Charalampos; Pliakas, Thomas; Maglogiannis, Ilias
2010-01-01
Cloud Computing provides functionality for managing information data in a distributed, ubiquitous and pervasive manner supporting several platforms, systems and applications. This work presents the implementation of a mobile system that enables electronic healthcare data storage, update and retrieval using Cloud Computing. The mobile application is developed using Google's Android operating system and provides management of patient health records and medical images (supporting DICOM format and JPEG2000 coding). The developed system has been evaluated using the Amazon's S3 cloud service. This article summarizes the implementation details and presents initial results of the system in practice.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Temple, Brian Allen; Armstrong, Jerawan Chudoung
This document is a mid-year report on a deliverable for the PYTHON Radiography Analysis Tool (PyRAT) for project LANL12-RS-107J in FY15. The deliverable is number 2 in the work package and is titled “Add the ability to read in more types of image file formats in PyRAT”. Currently, PyRAT can only read uncompressed TIFF files. It is planned to expand the file formats that PyRAT can read, making it easier to use in more situations. The file formats added include JPEG/JPG, PNG, and formatted ASCII files.
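PyRAT's internal reader API is not shown in the report; the following is only a generic sketch, using Pillow and NumPy, of loading TIFF, JPEG, PNG, and formatted ASCII data into arrays, the kind of front end the deliverable implies. The file names are hypothetical.

```python
import numpy as np
from PIL import Image            # Pillow handles TIFF, JPEG, and PNG

def load_radiograph(path):
    """Return the image as a 2-D float array, whatever the on-disk format."""
    if path.lower().endswith((".txt", ".asc", ".dat")):
        return np.loadtxt(path, dtype=float)           # formatted ASCII grid
    img = Image.open(path)
    if img.mode not in ("I", "I;16", "F", "L"):
        img = img.convert("L")                          # collapse color to grayscale
    return np.asarray(img, dtype=float)

# a = load_radiograph("shot_042.tif")    # hypothetical file names
# b = load_radiograph("shot_042.jpg")
# c = load_radiograph("shot_042.txt")
```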
COxSwAIN: Compressive Sensing for Advanced Imaging and Navigation
NASA Technical Reports Server (NTRS)
Kurwitz, Richard; Pulley, Marina; LaFerney, Nathan; Munoz, Carlos
2015-01-01
The COxSwAIN project focuses on building an image and video compression scheme that can be implemented in a small or low-power satellite. To do this, we used Compressive Sensing, where the compression is performed by matrix multiplications on the satellite and the reconstruction is performed on the ground. Our paper explains our methodology and demonstrates the results of the scheme, which achieves high-quality image compression that is robust to noise and corruption.
NASA Astrophysics Data System (ADS)
Li, Gongxin; Li, Peng; Wang, Yuechao; Wang, Wenxue; Xi, Ning; Liu, Lianqing
2014-07-01
Scanning Ion Conductance Microscopy (SICM) is one kind of Scanning Probe Microscopy (SPM), widely used for imaging soft samples because of its many distinctive advantages. However, the scanning speed of SICM is much slower than that of other SPMs. Compressive sensing (CS) can improve scanning speed tremendously by going beyond the limits of the Shannon sampling theorem, but it still requires too much time for image reconstruction. Block compressive sensing can be applied to SICM imaging to further reduce the reconstruction time of sparse signals, and it has the additional benefit of enabling real-time image display during SICM imaging. In this article, a new method of dividing blocks and a new matrix arithmetic operation are proposed to build the block compressive sensing model, and several experiments were carried out to verify the advantages of block compressive sensing in reducing imaging time and enabling real-time display in SICM imaging.
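A toy of the blocking idea, assuming NumPy: one shared measurement matrix is applied to every BxB block, so each block's measurements can be taken, transmitted, and reconstructed independently, which is what makes progressive, near-real-time display possible. The reconstruction solver itself is omitted.

```python
import numpy as np

def block_measurements(img, B=16, ratio=0.25, seed=42):
    """Apply one shared Gaussian matrix to every flattened BxB block."""
    rng = np.random.default_rng(seed)
    m = int(ratio * B * B)
    Phi = rng.standard_normal((m, B * B)) / np.sqrt(m)
    H, W = img.shape
    ys = {}
    for i in range(0, H, B):
        for j in range(0, W, B):
            ys[(i, j)] = Phi @ img[i:i + B, j:j + B].ravel()
    return Phi, ys   # each block's y can be reconstructed (and displayed) as soon as it arrives

img = np.random.default_rng(0).standard_normal((64, 64))
Phi, measurements = block_measurements(img)
```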
Compressed Sensing for Body MRI
Feng, Li; Benkert, Thomas; Block, Kai Tobias; Sodickson, Daniel K; Otazo, Ricardo; Chandarana, Hersh
2016-01-01
The introduction of compressed sensing for increasing imaging speed in MRI has raised significant interest among researchers and clinicians, and has initiated a large body of research across multiple clinical applications over the last decade. Compressed sensing aims to reconstruct unaliased images from fewer measurements than are traditionally required in MRI by exploiting image compressibility or sparsity. Moreover, appropriate combinations of compressed sensing with previously introduced fast imaging approaches, such as parallel imaging, have demonstrated further improved performance. The advent of compressed sensing marks the prelude to a new era of rapid MRI, where the focus of data acquisition has changed from sampling based on the nominal number of voxels and/or frames to sampling based on the desired information content. This paper presents a brief overview of the application of compressed sensing techniques in body MRI, where imaging speed is crucial due to the presence of respiratory motion along with stringent constraints on spatial and temporal resolution. The first section provides an overview of the basic compressed sensing methodology, including the notion of sparsity, incoherence, and non-linear reconstruction. The second section reviews state-of-the-art compressed sensing techniques that have been demonstrated for various clinical body MRI applications. In the final section, the paper discusses current challenges and future opportunities. PMID:27981664
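A one-dimensional toy of the recipe the review describes (sparsity, incoherent undersampling, nonlinear reconstruction): a sparse signal is recovered from a random subset of its Fourier samples by iterative soft thresholding. Real body-MRI reconstructions use 2D/3D data, wavelet or total-variation sparsity, and coil sensitivities; everything below is illustrative.

```python
import numpy as np

def soft(z, t):
    """Complex soft-thresholding (the l1 proximal operator)."""
    mag = np.maximum(np.abs(z) - t, 0.0)
    return mag * np.exp(1j * np.angle(z))

rng = np.random.default_rng(0)
n, m = 256, 64
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)   # sparse "image"

rows = np.sort(rng.choice(n, m, replace=False))     # randomly chosen k-space samples
F = np.fft.fft(np.eye(n), norm="ortho")[rows]       # undersampled Fourier operator
y = F @ x_true                                      # measured data

x, lam = np.zeros(n, dtype=complex), 0.02
for _ in range(300):
    x = x - F.conj().T @ (F @ x - y)                # gradient step (unit step: rows are orthonormal)
    x = soft(x, lam)                                # enforce sparsity
print("max reconstruction error:", float(np.max(np.abs(x - x_true))))
```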
Digital Image Compression Using Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Serra-Ricart, M.; Garrido, L.; Gaitan, V.; Aloy, A.
1993-01-01
The problem of storing, transmitting, and manipulating digital images is considered. Because of the file sizes involved, large amounts of digitized image information are becoming common in modern projects. Our goal is to describe an image compression transform coder based on artificial neural network techniques (NNCTC). A comparison of the compression results obtained from digital astronomical images by the NNCTC and the method used in the compression of the digitized sky survey from the Space Telescope Science Institute based on the H-transform is performed in order to assess the reliability of the NNCTC.
2013-05-01
Measurement of Full Field Strains in Filament Wound Composite Tubes Under Axial Compressive Loading by the Digital Image Correlation (DIC) Technique. Todd C...
Digital compression algorithms for HDTV transmission
NASA Technical Reports Server (NTRS)
Adkins, Kenneth C.; Shalkhauser, Mary JO; Bibyk, Steven B.
1990-01-01
Digital compression of video images is a possible avenue for high definition television (HDTV) transmission. Compression needs to be optimized while picture quality remains high. Two techniques for compressing the digital images are explained, and comparisons are drawn between the human vision system and artificial compression techniques. Suggestions for improving compression algorithms through the use of neural and analog circuitry are given.
Real-Time Aggressive Image Data Compression
1990-03-31
...implemented with higher degrees of modularity, concurrency, and higher levels of machine intelligence, thereby providing higher data-throughput rates... Project Title: Real-Time Aggressive Image Data Compression. Principal Investigators: Dr. Yih-Fang Huang and Dr. Ruey-wen Liu. Summary: The objective of the proposed research is to develop reliable algorithms that can achieve aggressive image data compression (with a compression...
Context Modeler for Wavelet Compression of Spectral Hyperspectral Images
NASA Technical Reports Server (NTRS)
Kiely, Aaron; Xie, Hua; Klimesh, Matthew; Aranki, Nazeeh
2010-01-01
A context-modeling sub-algorithm has been developed as part of an algorithm that effects three-dimensional (3D) wavelet-based compression of hyperspectral image data. The context-modeling subalgorithm, hereafter denoted the context modeler, provides estimates of probability distributions of wavelet-transformed data being encoded. These estimates are utilized by an entropy coding subalgorithm that is another major component of the compression algorithm. The estimates make it possible to compress the image data more effectively than would otherwise be possible. The following background discussion is prerequisite to a meaningful summary of the context modeler. This discussion is presented relative to ICER-3D, which is the name attached to a particular compression algorithm and the software that implements it. The ICER-3D software is summarized briefly in the preceding article, ICER-3D Hyperspectral Image Compression Software (NPO-43238). Some aspects of this algorithm were previously described, in a slightly more general context than the ICER-3D software, in "Improving 3D Wavelet-Based Compression of Hyperspectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. In turn, ICER-3D is a product of generalization of ICER, another previously reported algorithm and computer program that can perform both lossless and lossy wavelet-based compression and decompression of gray-scale-image data. In ICER-3D, hyperspectral image data are decomposed using a 3D discrete wavelet transform (DWT). Following wavelet decomposition, mean values are subtracted from spatial planes of spatially low-pass subbands prior to encoding. The resulting data are converted to sign-magnitude form and compressed. In ICER-3D, compression is progressive, in that compressed information is ordered so that as more of the compressed data stream is received, successive reconstructions of the hyperspectral image data are of successively higher overall fidelity.
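The role of a context modeler can be shown in miniature: bits are grouped by a context (here, simply how many causal neighbors are already significant) and per-context adaptive counts supply the probability estimates an entropy coder would consume. ICER-3D's actual contexts and coder are far more elaborate; the sketch below is only illustrative.

```python
import numpy as np
from collections import defaultdict

class ContextModel:
    """Per-context adaptive estimate of P(bit = 1), with Laplace-smoothed counts."""
    def __init__(self):
        self.counts = defaultdict(lambda: [1, 1])    # context -> [zeros seen, ones seen]
    def prob_one(self, ctx):
        z, o = self.counts[ctx]
        return o / (z + o)
    def update(self, ctx, bit):
        self.counts[ctx][bit] += 1

rng = np.random.default_rng(0)
coeffs = rng.laplace(scale=2.0, size=(64, 64))       # stand-in wavelet subband
plane = (np.abs(coeffs) > 4).astype(int)             # one significance bit plane

model = ContextModel()
sig = np.zeros_like(plane)
for i in range(1, plane.shape[0]):
    for j in range(1, plane.shape[1]):
        ctx = sig[i - 1, j] + sig[i, j - 1] + sig[i - 1, j - 1]   # causal-neighbor context
        p1 = model.prob_one(ctx)      # probability estimate handed to the entropy coder
        model.update(ctx, plane[i, j])
        sig[i, j] = plane[i, j]
```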
Image-Data Compression Using Edge-Optimizing Algorithm for WFA Inference.
ERIC Educational Resources Information Center
Culik, Karel II; Kari, Jarkko
1994-01-01
Presents an inference algorithm that produces a weighted finite automaton (WFA) representing, in particular, the grayness functions of graytone images. Image-data compression based on the new inference algorithm produces a WFA with a relatively small number of edges. Image-data compression results, alone and in combination with wavelets, are discussed.…
NASA Astrophysics Data System (ADS)
Aldossari, M.; Alfalou, A.; Brosseau, C.
2017-08-01
In an earlier study [Opt. Express 22, 22349-22368 (2014)], a compression and encryption method that simultaneously compresses and encrypts closely resembling images was proposed and validated. This multiple-image optical compression and encryption (MIOCE) method is based on a special fusion of the different target images' spectra in the spectral domain. Here, to assess the capacity of the MIOCE method, we evaluate and determine the influence of the number of target images. This analysis allows us to evaluate the performance limitation of the method. To achieve this goal, we use a criterion based on the root-mean-square (RMS) [Opt. Lett. 35, 1914-1916 (2010)] and the compression ratio to determine the spectral plane area. Then, the different spectral areas are merged into a single spectral plane. By choosing specific areas, we can compress together 38 images instead of 26 with the classical MIOCE method. The quality of the reconstructed image is evaluated by making use of the mean-square-error (MSE) criterion.
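A purely digital toy of the spectral-fusion idea underlying MIOCE: keep a small low-frequency area of each image's FFT, place the areas at disjoint positions in one shared spectral plane, and recover each image by cutting its area back out and inverse-transforming. The real method's area shaping, shifting rules, and phase-key encryption are not reproduced; all sizes below are illustrative.

```python
import numpy as np

N, patch = 256, 32                     # shared spectral plane size, per-image spectrum area
rng = np.random.default_rng(0)
images = [np.cumsum(np.cumsum(rng.standard_normal((N, N)), 0), 1) for _ in range(4)]

plane = np.zeros((N, N), dtype=complex)
slots = [(0, 0), (0, patch), (patch, 0), (patch, patch)]   # disjoint areas, one per image
lo, hi = N // 2 - patch // 2, N // 2 + patch // 2
for img, (r, c) in zip(images, slots):
    spec = np.fft.fftshift(np.fft.fft2(img))
    plane[r:r + patch, c:c + patch] = spec[lo:hi, lo:hi]   # keep only the low-frequency area

# Receiver: cut each area back out of the shared plane and inverse-transform (low-pass reconstruction).
recovered = []
for (r, c) in slots:
    spec = np.zeros((N, N), dtype=complex)
    spec[lo:hi, lo:hi] = plane[r:r + patch, c:c + patch]
    recovered.append(np.real(np.fft.ifft2(np.fft.ifftshift(spec))))
```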
Rubble-Pile Minor Planet Sylvia and Her Twins
NASA Astrophysics Data System (ADS)
2005-08-01
VLT NACO Instrument Helps Discover First Triple Asteroid One of the thousands of minor planets orbiting the Sun has been found to have its own mini planetary system. Astronomer Franck Marchis (University of California, Berkeley, USA) and his colleagues at the Observatoire de Paris (France) [1] have discovered the first triple asteroid system - two small asteroids orbiting a larger one known since 1866 as 87 Sylvia [2]. "Since double asteroids seem to be common, people have been looking for multiple asteroid systems for a long time," said Marchis. "I couldn't believe we found one." The discovery was made with Yepun, one of ESO's 8.2-m telescopes of the Very Large Telescope Array at Cerro Paranal (Chile), using the outstanding image' sharpness provided by the adaptive optics NACO instrument. Via the observatory's proven "Service Observing Mode", Marchis and his colleagues were able to obtain sky images of many asteroids over a six-month period without actually having to travel to Chile. ESO PR Photo 25a/05 ESO PR Photo 25a/05 Orbits of Twin Moonlets around 87 Sylvia [Preview - JPEG: 400 x 516 pix - 145k] [Normal - JPEG: 800 x 1032 pix - 350k] ESO PR Photo 25b/05 ESO PR Photo 25b/05 Artist's impression of the triple asteroid system [Preview - JPEG: 420 x 400 pix - 98k] [Normal - JPEG: 849 x 800 pix - 238k] [Full Res - JPEG: 4000 x 3407 pix - 3.7M] [Full Res - TIFF: 4000 x 3000 pix - 36.0M] Caption: ESO PR Photo 25a/05 is a composite image showing the positions of Remus and Romulus around 87 Sylvia on 9 different nights as seen on NACO images. It clearly reveals the orbits of the two moonlets. The inset shows the potato shape of 87 Sylvia. The field of view is 2 arcsec. North is up and East is left. ESO PR Photo 25b/05 is an artist rendering of the triple system: Romulus, Sylvia, and Remus. ESO Video Clip 03/05 ESO Video Clip 03/05 Asteroid Sylvia and Her Twins [Quicktime Movie - 50 sec - 384 x 288 pix - 12.6M] Caption: ESO PR Video Clip 03/05 is an artist rendering of the triple asteroid system showing the large asteroid 87 Sylvia spinning at a rapid rate and surrounded by two smaller asteroids (Remus and Romulus) in orbit around it. This computer animation is also available in broadcast quality to the media (please contact Herbert Zodet). One of these asteroids was 87 Sylvia, which was known to be double since 2001, from observations made by Mike Brown and Jean-Luc Margot with the Keck telescope. The astronomers used NACO to observe Sylvia on 27 occasions, over a two-month period. On each of the images, the known small companion was seen, allowing Marchis and his colleagues to precisely compute its orbit. But on 12 of the images, the astronomers also found a closer and smaller companion. 87 Sylvia is thus not double but triple! Because 87 Sylvia was named after Rhea Sylvia, the mythical mother of the founders of Rome [3], Marchis proposed naming the twin moons after those founders: Romulus and Remus. The International Astronomical Union approved the names. Sylvia's moons are considerably smaller, orbiting in nearly circular orbits and in the same plane and direction. The closest and newly discovered moonlet, orbiting about 710 km from Sylvia, is Remus, a body only 7 km across and circling Sylvia every 33 hours. The second, Romulus, orbits at about 1360 km in 87.6 hours and measures about 18 km across. The asteroid 87 Sylvia is one of the largest known from the asteroid main belt, and is located about 3.5 times further away from the Sun than the Earth, between the orbits of Mars and Jupiter. 
The wealth of details provided by the NACO images show that 87 Sylvia is shaped like a lumpy potato, measuring 380 x 260 x 230 km (see ESO PR Photo 25a/05). It is spinning at a rapid rate, once every 5 hours and 11 minutes. The observations of the moonlets' orbits allow the astronomers to precisely calculate the mass and density of Sylvia. With a density only 20% higher than the density of water, it is likely composed of water ice and rubble from a primordial asteroid. "It could be up to 60 percent empty space," said co-discoverer Daniel Hestroffer (Observatoire de Paris, France). "It is most probably a "rubble-pile" asteroid", Marchis added. These asteroids are loose aggregations of rock, presumably the result of a collision. Two asteroids smacked into each other and got disrupted. The new rubble-pile asteroid formed later by accumulation of large fragments while the moonlets are probably debris left over from the collision that were captured by the newly formed asteroid and eventually settled into orbits around it. "Because of the way they form, we expect to see more multiple asteroid systems like this." Marchis and his colleagues will report their discovery in the August 11 issue of the journal Nature, simultaneously with an announcement that day at the Asteroid Comet Meteor conference in Armação dos Búzios, Rio de Janeiro state, Brazil.
Multispectral Image Compression Based on DSC Combined with CCSDS-IDC
Li, Jin; Xing, Fei; Sun, Ting; You, Zheng
2014-01-01
Remote sensing multispectral image compression encoders require low complexity, high robustness, and high performance because they usually run on satellites, where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (such as the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm for multispectral images based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS, which offers low complexity, high robustness, and high performance. First, each band is sparsely represented by a DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is tightly coupled with a Slepian-Wolf (SW) DSC strategy based on QC-LDPC codes to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS) algorithm has better compression performance than traditional compression approaches. PMID:25110741
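As a rough illustration of the front end described above (per-band wavelet decomposition followed by bit-plane coding), the following Python sketch applies a one-level Haar transform to each band of a toy multispectral cube and splits one subband into bit planes. The Haar filter, the integer rounding, and the toy data are assumptions made for brevity; the Slepian-Wolf/QC-LDPC stage of the paper is not reproduced here.

import numpy as np

def haar_dwt2(band):
    """One level of a 2-D Haar transform (returns LL, LH, HL, HH subbands)."""
    a = band[0::2, 0::2].astype(np.float64)
    b = band[0::2, 1::2].astype(np.float64)
    c = band[1::2, 0::2].astype(np.float64)
    d = band[1::2, 1::2].astype(np.float64)
    ll = (a + b + c + d) / 2.0
    lh = (a - b + c - d) / 2.0
    hl = (a + b - c - d) / 2.0
    hh = (a - b - c + d) / 2.0
    return ll, lh, hl, hh

def bitplanes(coeffs, n_planes=8):
    """Split integer coefficient magnitudes into bit planes (most significant first)."""
    mags = np.abs(np.rint(coeffs)).astype(np.int64)
    return [((mags >> p) & 1).astype(np.uint8) for p in range(n_planes - 1, -1, -1)]

# Toy multispectral cube: 4 bands of 64x64 pixels.
rng = np.random.default_rng(0)
cube = rng.integers(0, 256, size=(4, 64, 64))

for i, band in enumerate(cube):
    ll, lh, hl, hh = haar_dwt2(band)
    planes = bitplanes(hh)     # each subband would be coded plane by plane
    print(f"band {i}: ones per HH bit plane (MSB first):", [int(p.sum()) for p in planes])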
Two-level image authentication by two-step phase-shifting interferometry and compressive sensing
NASA Astrophysics Data System (ADS)
Zhang, Xue; Meng, Xiangfeng; Yin, Yongkai; Yang, Xiulun; Wang, Yurong; Li, Xianye; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi
2018-01-01
A two-level image authentication method is proposed; the method is based on two-step phase-shifting interferometry, double random phase encoding, and compressive sensing (CS) theory, by which the certification image can be encoded into two interferograms. Through discrete wavelet transform (DWT), sparseness processing, Arnold transform, and data compression, two compressed signals can be generated and delivered to two different participants of the authentication system. The participant who possesses only the first compressed signal can attempt just the low-level authentication. Applying Orthogonal Matching Pursuit CS reconstruction, the inverse Arnold transform, the inverse DWT, two-step phase-shifting wavefront reconstruction, and the inverse Fresnel transform yields a remarkable peak at the central location of the nonlinear correlation coefficient distribution between the recovered image and the standard certification image. The other participant, who possesses the second compressed signal, is authorized to carry out the high-level authentication: both compressed signals are collected to reconstruct the original meaningful certification image with a high correlation coefficient. Theoretical analysis and numerical simulations verify the feasibility of the proposed method.
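The authentication decision described above rests on the appearance of a sharp central peak in a nonlinear correlation map. The Python sketch below computes a kth-law nonlinear correlation between a reference image and a noisy stand-in for a CS-recovered image; the exponent k = 0.3 and the synthetic test images are illustrative assumptions, not values taken from the paper.

import numpy as np

def nonlinear_correlation(recovered, reference, k=0.3):
    """kth-law nonlinear correlation map between two images."""
    F1 = np.fft.fft2(recovered)
    F2 = np.fft.fft2(reference)
    prod = F1 * np.conj(F2)
    nc = np.fft.ifft2(np.abs(prod) ** k * np.exp(1j * np.angle(prod)))
    return np.fft.fftshift(np.abs(nc) ** 2)

rng = np.random.default_rng(1)
reference = rng.random((64, 64))
noisy_copy = reference + 0.3 * rng.random((64, 64))   # stand-in for a CS-recovered image

nc_map = nonlinear_correlation(noisy_copy, reference)
peak_to_mean = nc_map.max() / nc_map.mean()
print("peak-to-mean ratio:", round(float(peak_to_mean), 1))  # a pronounced peak suggests a match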
Planning/scheduling techniques for VQ-based image compression
NASA Technical Reports Server (NTRS)
Short, Nicholas M., Jr.; Manohar, Mareboyana; Tilton, James C.
1994-01-01
The enormous size of the data holdings and the complexity of the information system resulting from the EOS system pose several challenges to computer scientists, one of which is data archival and dissemination. More than ninety percent of NASA's data holdings are in the form of images that will be accessed by users across computer networks. Accessing the image data at full resolution creates data traffic problems. Image browsing using lossy compression reduces this data traffic, as well as storage, by a factor of 30-40. Of the several image compression techniques, VQ is the most appropriate for this application since decompression of VQ-compressed images is a table-lookup process, which makes minimal additional demands on the user's computational resources. Lossy compression of image data generally requires expert-level knowledge and is not straightforward to use; this is especially true of VQ, which involves selecting appropriate codebooks for a given data set, vector dimensions for each compression ratio, and so on. A planning and scheduling system is described for using the VQ compression technique in the data access and ingest of raw satellite data.
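The point that VQ decoding is only a table lookup can be made concrete with a short sketch: the compressed image is one codebook index per block, and reconstruction simply gathers the corresponding code vectors. The codebook size, block size, and random contents below are illustrative assumptions, not parameters from the system described above.

import numpy as np

BLOCK = 4                                   # 4x4 pixel blocks -> 16-dimensional vectors
rng = np.random.default_rng(2)
codebook = rng.integers(0, 256, size=(256, BLOCK * BLOCK)).astype(np.uint8)

# The "compressed image" is just one codebook index per block.
blocks_h, blocks_w = 16, 16
indices = rng.integers(0, len(codebook), size=(blocks_h, blocks_w))

# Decoding: look up each index and tile the code vectors back into an image.
decoded = (codebook[indices]                      # (16, 16, 16) array of code vectors
           .reshape(blocks_h, blocks_w, BLOCK, BLOCK)
           .transpose(0, 2, 1, 3)
           .reshape(blocks_h * BLOCK, blocks_w * BLOCK))
print(decoded.shape)                              # (64, 64) reconstructed image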
Effect of data compression on diagnostic accuracy in digital hand and chest radiography
NASA Astrophysics Data System (ADS)
Sayre, James W.; Aberle, Denise R.; Boechat, Maria I.; Hall, Theodore R.; Huang, H. K.; Ho, Bruce K. T.; Kashfian, Payam; Rahbar, Guita
1992-05-01
Image compression is essential to handle a large volume of digital images, including CT, MR, CR, and digitized films, in a digital radiology operation. The full-frame bit-allocation technique using the cosine transform, developed during the last few years, has proven to be an excellent irreversible image compression method. This paper describes the effect of using the hardware compression module on diagnostic accuracy in hand radiographs with subperiosteal resorption and chest radiographs with interstitial disease. Receiver operating characteristic analysis using 71 hand radiographs and 52 chest radiographs, with five observers each, demonstrates that there is no statistically significant difference in diagnostic accuracy between the original films and the compressed images at compression ratios as high as 20:1.
Architecture for one-shot compressive imaging using computer-generated holograms.
Macfaden, Alexander J; Kindness, Stephen J; Wilkinson, Timothy D
2016-09-10
We propose a synchronous implementation of compressive imaging. This method is mathematically equivalent to prevailing sequential methods, but uses a static holographic optical element to create a spatially distributed spot array from which the image can be reconstructed with an instantaneous measurement. We present the holographic design requirements and demonstrate experimentally that the linear algebra of compressed imaging can be implemented with this technique. We believe this technique can be integrated with optical metasurfaces, which will allow the development of new compressive sensing methods.
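The "linear algebra of compressed imaging" referred to above amounts to recovering a sparse signal x from a single measurement vector y = A x. The sketch below uses a random Gaussian matrix as a stand-in for the measurement operator that the paper realizes optically, and a basic orthogonal matching pursuit solver; all dimensions and the solver choice are assumptions for illustration only.

import numpy as np

def omp(A, y, sparsity):
    """Greedy orthogonal matching pursuit recovery of a sparse x from y = A @ x."""
    residual, support = y.copy(), []
    x_hat = np.zeros(A.shape[1])
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))   # most correlated column
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x_hat[support] = coeffs
    return x_hat

rng = np.random.default_rng(3)
n, m, k = 256, 80, 5                       # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)   # stand-in for the spot-array measurement operator

y = A @ x                                  # the "one-shot" measurement
x_rec = omp(A, y, k)
print("max reconstruction error:", float(np.max(np.abs(x_rec - x))))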
Intelligent bandwidth compression
NASA Astrophysics Data System (ADS)
Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.
1980-02-01
The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system, together with the adaptive priority controller, are described. Results of simulated 1000:1 bandwidth-compressed images are presented. A video tape simulation of the Intelligent Bandwidth Compression system has been produced using a sequence of video input from the database.
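A toy version of the reconstructed product described above (an edge map of the background with a full-resolution target window pasted back in) can be sketched as follows; the Sobel edge detector, the threshold, the synthetic scene, and the window coordinates are illustrative assumptions rather than the algorithms of the original system.

import numpy as np

def sobel_edges(img, thresh=150.0):
    """Binary edge map from the Sobel gradient magnitude (threshold is arbitrary)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros(img.shape)
    gy = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return (np.hypot(gx, gy) > thresh).astype(np.uint8) * 255

rng = np.random.default_rng(4)
scene = np.full((128, 128), 60.0)                 # flat background
scene[40:72, 60:92] = 200.0                       # a bright "target"
scene = np.clip(scene + rng.normal(0, 5, scene.shape), 0, 255).astype(np.uint8)

edges = sobel_edges(scene)                        # cheap background representation
r0, c0, size = 36, 56, 40                         # high-priority target window
composite = edges.copy()
composite[r0:r0 + size, c0:c0 + size] = scene[r0:r0 + size, c0:c0 + size]
print("edge pixels:", int((edges > 0).sum()), "| composite shape:", composite.shape)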
Onboard Image Processing System for Hyperspectral Sensor
Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun
2015-01-01
Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large volume and high speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed, and is implemented in the onboard correction circuitry of sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS’s performance of image decorrelation and entropy coding, we apply a two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance measured as speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which lead to reducing onboard hardware by multiplexing sensor signals into a reduced number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, and fabrication cost. PMID:26404281
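The entropy-coding building block mentioned above, Golomb-Rice coding of prediction residuals, can be sketched in a few lines of Python. The previous-pixel predictor, the residual-to-nonnegative mapping, and the rule for choosing the Rice parameter k are common textbook choices assumed here for brevity; they are not the exact FELICS variant used in the paper.

import numpy as np

def rice_encode(value, k):
    """Encode a non-negative integer as a unary quotient plus a k-bit remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    remainder = format(r, f"0{k}b") if k > 0 else ""
    return "1" * q + "0" + remainder

def zigzag(residual):
    """Map signed residuals to non-negative integers: 0,-1,1,-2,2,... -> 0,1,2,3,4,..."""
    return 2 * residual if residual >= 0 else -2 * residual - 1

samples = np.array([103, 101, 104, 104, 99, 102, 100, 103])     # one scan line of pixels
residuals = np.diff(samples, prepend=samples[0])                # previous-pixel prediction

mapped = [zigzag(int(r)) for r in residuals]
k = max(int(np.ceil(np.log2(np.mean(mapped) + 1))), 0)          # crude adaptive choice of k
bitstream = "".join(rice_encode(v, k) for v in mapped)
print("k =", k, "| bits used:", len(bitstream), "| raw bits:", 8 * len(samples))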
NASA Astrophysics Data System (ADS)
Ouyang, Bing; Hou, Weilin; Caimi, Frank M.; Dalgleish, Fraser R.; Vuorenkoski, Anni K.; Gong, Cuiling
2017-07-01
The compressive line sensing imaging system adopts distributed compressive sensing (CS) to acquire data and reconstruct images. Dynamic CS uses Bayesian inference to capture the correlated nature of the adjacent lines. An image reconstruction technique that incorporates dynamic CS in the distributed CS framework was developed to improve the quality of reconstructed images. The effectiveness of the technique was validated using experimental data acquired in an underwater imaging test facility. Results that demonstrate contrast and resolution improvements will be presented. The improved efficiency is desirable for unmanned aerial vehicles conducting long-duration missions.
Psychophysical Comparisons in Image Compression Algorithms.
1999-03-01
Master's thesis by Christopher J. Bodine, Naval Postgraduate School, Monterey, California, March 1999; abstract not recovered for this record.
Casella, Ivan Benaduce; Fukushima, Rodrigo Bono; Marques, Anita Battistini de Azevedo; Cury, Marcus Vinícius Martins; Presti, Calógero
2015-03-01
To compare a new dedicated software program (IMTPC) and Adobe Photoshop for gray-scale median (GSM) analysis of B-mode images of carotid plaques. A series of 42 carotid plaques generating ≥50% diameter stenosis was evaluated by a single observer. The best segment for visualization of the internal carotid artery plaque was identified on a single longitudinal view and images were recorded in JPEG format. Plaque analysis was performed with both programs. After normalization of image intensity (blood = 0, adventitial layer = 190), histograms were obtained after manual delineation of the plaque. Results were compared with the nonparametric Wilcoxon signed rank test and Kendall tau-b correlation analysis. GSM ranged from 0 to 100 with Adobe Photoshop and from 0 to 96 with IMTPC, with a high degree of similarity between image pairs and a highly significant correlation (R = 0.94, p < .0001). The IMTPC software appears suitable for GSM analysis of carotid plaques. © 2014 Wiley Periodicals, Inc.
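For readers unfamiliar with the GSM measurement, the sketch below normalizes a B-mode image linearly so that blood maps to 0 and the adventitia to 190, then takes the median intensity inside the delineated plaque. The reference intensities and the rectangular "plaque mask" are placeholders, not data from the study above.

import numpy as np

def normalise(img, blood_value, adventitia_value):
    """Linear intensity rescaling so blood -> 0 and the adventitial layer -> 190."""
    scaled = (img.astype(float) - blood_value) * 190.0 / (adventitia_value - blood_value)
    return np.clip(scaled, 0, 255)

rng = np.random.default_rng(5)
bmode = rng.integers(0, 256, size=(100, 100))        # stand-in for a B-mode image
blood_roi_mean, adventitia_roi_mean = 12.0, 170.0    # would be measured from operator-chosen ROIs
plaque_mask = np.zeros(bmode.shape, dtype=bool)
plaque_mask[40:60, 30:70] = True                     # stand-in for the manual delineation

gsm = np.median(normalise(bmode, blood_roi_mean, adventitia_roi_mean)[plaque_mask])
print("GSM =", round(float(gsm), 1))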
Progressive transmission of images over fading channels using rate-compatible LDPC codes.
Pan, Xiang; Banihashemi, Amir H; Cuhadar, Aysegul
2006-12-01
In this paper, we propose a combined source/channel coding scheme for transmission of images over fading channels. The proposed scheme employs rate-compatible low-density parity-check codes along with embedded image coders such as JPEG2000 and set partitioning in hierarchical trees (SPIHT). The assignment of channel coding rates to source packets is performed by a fast trellis-based algorithm. We examine the performance of the proposed scheme over correlated and uncorrelated Rayleigh flat-fading channels with and without side information. Simulation results for the expected peak signal-to-noise ratio of reconstructed images, which are within 1 dB of the capacity upper bound over a wide range of channel signal-to-noise ratios, show considerable improvement compared to existing results under similar conditions. We also study the sensitivity of the proposed scheme in the presence of channel estimation error at the transmitter and demonstrate that under most conditions our scheme is more robust compared to existing schemes.
2-Step scalar deadzone quantization for bitplane image coding.
Auli-Llinas, Francesc
2013-12-01
Modern lossy image coding systems generate a quality-progressive codestream that, truncated at increasing rates, produces an image with decreasing distortion. Quality progressivity is commonly provided by an embedded quantizer that employs uniform scalar deadzone quantization (USDQ) together with a bitplane coding strategy. This paper introduces a 2-step scalar deadzone quantization (2SDQ) scheme that achieves the same coding performance as USDQ while reducing the number of coding passes and the symbols emitted by the bitplane coding engine. This serves to reduce the computational costs of the codec and/or to code high dynamic range images. The main insights behind 2SDQ are the use of two quantization step sizes that approximate wavelet coefficients with more or less precision depending on their density, and a rate-distortion optimization technique that adjusts the distortion decreases produced when coding 2SDQ indexes. The integration of 2SDQ in current codecs is straightforward. The applicability and efficiency of 2SDQ are demonstrated within the framework of JPEG2000.
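As background for 2SDQ, the baseline USDQ quantizer it builds on can be sketched directly: indices are sign(c) * floor(|c| / delta), giving a deadzone of width 2*delta around zero, with a reconstruction offset gamma applied on dequantization. The step size, offset, and test coefficients below are illustrative assumptions; the 2SDQ scheme itself is not reproduced.

import numpy as np

def usdq_quantize(coeffs, delta):
    """Uniform scalar deadzone quantization indices."""
    return np.sign(coeffs) * np.floor(np.abs(coeffs) / delta)

def usdq_dequantize(indices, delta, gamma=0.5):
    """Reconstruct at (|q| + gamma) * delta inside each nonzero bin; zero stays zero."""
    return np.where(indices == 0, 0.0,
                    np.sign(indices) * (np.abs(indices) + gamma) * delta)

coeffs = np.array([-7.3, -0.4, 0.0, 0.9, 2.6, 14.8])   # toy wavelet coefficients
delta = 2.0
q = usdq_quantize(coeffs, delta)
print("indices:      ", q)
print("reconstructed:", usdq_dequantize(q, delta))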
Practical steganalysis of digital images: state of the art
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav
2002-04-01
Steganography is the art of hiding the very presence of communication by embedding secret messages into innocuous looking cover documents, such as digital images. Detection of steganography, estimation of message length, and its extraction belong to the field of steganalysis. Steganalysis has recently received a great deal of attention both from law enforcement and the media. In our paper, we classify and review current stego-detection algorithms that can be used to trace popular steganographic products. We recognize several qualitatively different approaches to practical steganalysis - visual detection, detection based on first order statistics (histogram analysis), dual statistics methods that use spatial correlations in images and higher-order statistics (RS steganalysis), universal blind detection schemes, and special cases, such as JPEG compatibility steganalysis. We also present some new results regarding our previously proposed detection of LSB embedding using sensitive dual statistics. The recent steganalytic methods indicate that the most common paradigm in image steganography - the bit-replacement or bit substitution - is inherently insecure with safe capacities far smaller than previously thought.
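The claim that bit replacement is inherently insecure can be illustrated with first-order statistics alone: full-rate LSB embedding pushes the counts of histogram pairs (2i, 2i+1) toward equality. In the Python sketch below, the cover is a synthetic image passed through a contrast rescale (so that adjacent intensity values have noticeably unequal counts, as processed covers often do), and the simple pair-imbalance statistic is a simplified stand-in for the chi-square and RS tests mentioned above; none of it reproduces the paper's detectors.

import numpy as np

def lsb_embed(pixels, bits):
    """Replace the least significant bit of each pixel with a message bit."""
    return (pixels & 0xFE) | bits

def pair_imbalance(pixels):
    """Mean absolute count difference within histogram pairs (2i, 2i+1)."""
    hist = np.bincount(pixels.ravel(), minlength=256)
    return float(np.mean(np.abs(hist[0::2] - hist[1::2])))

rng = np.random.default_rng(6)
raw = rng.integers(0, 256, size=(256, 256))
cover = (raw * 0.7).astype(np.uint8)              # contrast rescale -> uneven adjacent counts
message = rng.integers(0, 2, size=cover.shape).astype(np.uint8)

stego = lsb_embed(cover, message)                 # full-rate LSB bit replacement
print("pair imbalance, cover:", round(pair_imbalance(cover), 1))
print("pair imbalance, stego:", round(pair_imbalance(stego), 1))  # typically far smaller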